The artificial intelligence (AI) boom has brought with it a cornucopia of jargon — from “generative AI” to “synthetic data” — that can be hard to parse. And as hard as it is to really understand what AI is (see our explainer for that), having a working knowledge of AI terms can help you make sense of this technology.

As part of our series explaining the basics of AI, here is a short glossary of terms that will hopefully help you navigate the rapidly developing field.

Artificial Intelligence: Technology that aims to replicate human-like thinking within machines. Some examples of abilities that fall into this category include identifying people in pictures, working in factories and even doing taxes.

Generative AI: AI that can create content like text, images, sound and video. Traditional applications of AI largely classify content, while generative AI models create it. For instance, a voice recognition model can identify your voice, while a generative voice model can use your voice to create audiobooks. Almost all models that have recently captured the public’s attention have been generative, including chatbots like OpenAI’s ChatGPT, image creators like Stable Diffusion and MidJourney, and voice-cloning programs like Resemble.
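
A quick way to see that classify-versus-create split in code, assuming Python with the open-source Hugging Face transformers library and its stock sentiment and GPT-2 models (an illustrative sketch, not the exact models behind any product named above):

```python
# Traditional vs. generative AI via Hugging Face pipelines
# (assumes: pip install transformers torch).
from transformers import pipeline

# Traditional AI: classifies existing content into categories.
classifier = pipeline("sentiment-analysis")
print(classifier("I loved this audiobook!"))  # e.g. [{'label': 'POSITIVE', ...}]

# Generative AI: creates new content from a prompt.
generator = pipeline("text-generation", model="gpt2")
print(generator("Once upon a time", max_new_tokens=30)[0]["generated_text"])
```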

Training Data: A collection of information — text, image, sound — curated to help AI models accomplish tasks. In language models, training datasets focus on text-based materials like books, comments from social media, and even code. Because AI models learn from training data, ethical questions have been raised around its sourcing and curation. Low-quality training data can introduce bias, leading to unfair models that make racist or sexist decisions.
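
To make curation concrete, here is a toy sketch: collect raw text, then filter out low-quality entries before training. The examples and the quality check below are made up for illustration:

```python
# Toy curation step: keep clean text, drop junk before training.
raw_examples = [
    "The cat sat on the mat.",
    "BUY NOW!!! click here >>>",  # spammy, low-quality
    "Photosynthesis converts light into chemical energy.",
]

def looks_clean(text: str) -> bool:
    # Hypothetical heuristic standing in for real curation pipelines.
    return ">>>" not in text and "!!!" not in text

training_data = [t for t in raw_examples if looks_clean(t)]
print(training_data)  # only the two clean sentences survive
```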

Algorithmic Bias: An error resulting from bad training data and poor programming that causes models to make prejudiced decisions. Such models may make unfounded assumptions based on gender, ability or race. In practice, these errors can cause serious harm by affecting decision-making — from mortgage applications to organ-transplant approvals. Many critics of the speedy rollout of AI have invoked the potential for algorithmic bias.
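
A common first check for bias is a simple audit of outcomes across groups. A minimal sketch, using made-up decision data:

```python
# Compare a model's approval rates across groups; large gaps
# can be a red flag for algorithmic bias. Data here is invented.
from collections import defaultdict

decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
]

totals, approved = defaultdict(int), defaultdict(int)
for group, ok in decisions:
    totals[group] += 1
    approved[group] += ok  # True counts as 1, False as 0

for group in totals:
    print(f"{group}: {approved[group] / totals[group]:.0%} approved")
```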

Artificial General Intelligence (AGI): A description of programs that are as capable — or even more capable — than a human. While full general intelligence is still off in the future, models are growing in sophistication. Some have demonstrated skills across multiple domains ranging from chemistry to psychology, with task performance paralleling human benchmarks.

Autonomous Agents: AI models that have both an objective and enough tools to achieve it. For instance, self-driving cars are autonomous agents that use sensory input, GPS data and driving algorithms to make independent decisions about how to navigate and reach their destinations. A group of autonomous agents can even develop cultures, traditions and a shared language, as researchers from Stanford have demonstrated.
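
Stripped to its core, an autonomous agent is a loop of sense, decide, act. A toy sketch of that loop (purely illustrative, nothing like a real driving system):

```python
# A minimal agent: an objective (reach target) plus one "tool" (moving),
# looping sense -> decide -> act until the objective is met.
def navigate(start: int, target: int) -> list[str]:
    position, log = start, []
    while position != target:                  # sense: where am I?
        step = 1 if target > position else -1  # decide: which way?
        position += step                       # act: move one unit
        log.append(f"moved to {position}")
    return log

print(navigate(0, 3))  # ['moved to 1', 'moved to 2', 'moved to 3']
```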

Prompt Chaining: The process of using previous interactions with an AI model to create new, more finely tuned responses, specifically in prompt-driven language modeling. For example, when you ask ChatGPT to send your friend a text, you expect it to remember things like the tone you use with her, inside jokes and other content from previous conversations. Prompt chaining helps incorporate this context.
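
A rough sketch of the idea: keep a running history of the conversation and pass the whole thing along with each new prompt. The send() function below is a hypothetical stand-in for a real chat-model API call:

```python
# Prompt chaining: each new prompt is answered with prior turns in view.
def send(messages: list[dict]) -> str:
    # Hypothetical placeholder for a chat-model API call.
    return f"(reply informed by {len(messages)} prior messages)"

history = []

def chat(user_message: str) -> str:
    history.append({"role": "user", "content": user_message})
    reply = send(history)  # the model sees ALL earlier turns
    history.append({"role": "assistant", "content": reply})
    return reply

chat("My friend Dana loves bad puns.")
print(chat("Draft a text to her about Friday."))  # context carries over
```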

Large Language Models (LLMs): An application of AI — usually generative — that aims to understand, engage and communicate with language in a human-like way. These models are distinguished by their large size: The biggest version of GPT-3, a direct predecessor to ChatGPT, contained 175 billion variables called parameters and was trained on 570 gigabytes of data. Google’s PaLM model is even larger, with 540 billion parameters. As hardware and software continue to advance, this scale is expected to increase.
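
To make “parameters” concrete, here is a sketch that counts the trainable parameters of a toy network, assuming the PyTorch library; the same counting applies to models billions of times larger:

```python
# Count trainable parameters in a small network
# (assumes: pip install torch).
import torch.nn as nn

model = nn.Sequential(nn.Linear(512, 2048), nn.ReLU(), nn.Linear(2048, 512))
n_params = sum(p.numel() for p in model.parameters())
print(f"{n_params:,} parameters")  # ~2.1 million; GPT-3 has 175 billion
```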

Hallucination: An unexpected, incorrect response from an AI program. A language model might suddenly bring up fruit salad recipes when you were asking about planting fruit trees. It might also make up scholarly citations, misstate data you ask it to analyze, or invent facts about events that aren’t in its training data. It’s not fully understood why this happens, but hallucinations can arise from sparse data, information gaps and misclassification.

Emergent Behavior: Skills that an AI demonstrates that it was not explicitly built for. Some examples include interpreting emoji, detecting sarcasm and using gender-inclusive language. A research team at Google Brain identified over 100 of these behaviors, noting that more are likely to emerge as models continue to scale.

Alignment: Efforts to ensure AI systems share the same values and goals as their human operators. To bring motives into agreement, alignment research seeks to train and calibrate models, often using reward functions that reward or penalize the model’s outputs: if the model does a good job, it gets positive feedback; if not, negative.
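
In miniature, reward-based feedback can look like the toy loop below. The reward function is a hypothetical stand-in for human feedback, and real alignment methods such as RLHF are far more involved:

```python
# Reinforce outputs that earn positive feedback, discourage the rest.
import random

def reward(response: str) -> float:
    # Stand-in for human feedback: prefer polite responses.
    return 1.0 if "please" in response.lower() else -1.0

weights = {"Do it now.": 0.5, "Could you please do it?": 0.5}
for _ in range(100):
    choice = random.choices(list(weights), weights=weights.values())[0]
    weights[choice] = max(0.01, weights[choice] + 0.1 * reward(choice))

print(max(weights, key=weights.get))  # the polite response wins out
```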

Multimodal AI: A form of AI that can understand and work with multiple types of information, including text, images, speech and more. This is powerful because it allows AI to understand and express itself in multiple dimensions, giving it both a broader and more nuanced understanding of tasks. One application of multimodal AI is a translator that can convert Japanese comics into English.
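
One well-known multimodal model is OpenAI’s CLIP, which scores how well text captions match an image. A brief sketch using the Hugging Face transformers library and a placeholder image:

```python
# Match captions to an image with CLIP
# (assumes: pip install transformers torch pillow).
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.new("RGB", (224, 224), "red")  # stand-in picture
inputs = processor(text=["a red square", "a photo of a cat"],
                   images=image, return_tensors="pt", padding=True)
probs = model(**inputs).logits_per_image.softmax(dim=1)
print(probs)  # probability that each caption describes the image
```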

Prompt Engineering: This is the act of giving AI an instruction so it has the context it needs to achieve your goal. Prompt engineering is best associated with OpenAI’s ChatGPT, describing the tasks users feed into the algorithm (e.g. “Give me five popular baby names”).
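
To see why wording matters, compare a vague prompt with an engineered one. The ask() function below is a hypothetical placeholder for any chat-model call:

```python
# Same goal, two prompts: the engineered one adds a role,
# constraints and an output format.
def ask(prompt: str) -> str:
    return f"[model response to: {prompt!r}]"  # placeholder

vague = ask("Give me five baby names")
engineered = ask(
    "You are a baby-name consultant. Give me five popular baby names "
    "from the 1990s, one per line, with a one-sentence origin for each."
)
print(engineered)
```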

Training: Training is the process of refining an AI with data so it’s better suited to a task. An AI can be trained by feeding it data chosen for what you want it to learn — like feeding Shakespearean sonnets to a poetry bot. You can do this multiple times in iterations called “epochs,” until your model’s performance is consistent and reliable.
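
Here is what epochs look like in practice: a minimal training loop on toy data, assuming the PyTorch library, where each pass over the dataset counts as one epoch:

```python
# A bare-bones training loop (assumes: pip install torch).
import torch
import torch.nn as nn

x = torch.randn(64, 4)   # toy inputs
y = torch.randn(64, 1)   # toy targets
model = nn.Linear(4, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()

for epoch in range(5):   # 5 epochs = 5 passes over the data
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()      # compute gradients
    optimizer.step()     # refine the model a little
    print(f"epoch {epoch}: loss {loss.item():.4f}")
```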

Neural Networks: Computer systems built to loosely mimic the structure of the human brain. They’re built this way because the layered design lets a model work up from the abstract to the concrete: in an image model, early layers might form concepts like color or position, building up to firmer, more familiar forms like fruit or animals.
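
A sketch of that layered structure, assuming the PyTorch library; each layer transforms the previous one’s output, moving from raw pixels toward class scores:

```python
# A small layered network: simple features feed into complex ones
# (assumes: pip install torch).
import torch.nn as nn

image_classifier = nn.Sequential(
    nn.Flatten(),             # raw pixels in
    nn.Linear(28 * 28, 128),  # early layer: low-level features
    nn.ReLU(),
    nn.Linear(128, 64),       # middle layer: combinations of features
    nn.ReLU(),
    nn.Linear(64, 10),        # final layer: one score per class
)
print(image_classifier)
```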

Narrow AI: Some AI algorithms have a one-track mind. Literally. They’re designed to do one thing and nothing more. If a narrow AI algorithm can play checkers, it can’t play chess. Examples include algorithms that only detect NSFW images and recommendation engines designed to tell you what Amazon or Etsy product to buy next.
