What’s the first thing that comes to mind when you hear “artificial intelligence (AI)”?
Maybe you think of a generative AI tool like OpenAI’s ChatGPT or something more specific like a disease mapping tool. Or maybe your mind jumps to early sci-fi films, where humans live alongside sentient AI robots. Perhaps you don’t think about any of that, and instead find yourself chewing on issues like AI ethics or privacy.
The truth is, there’s no right answer to this question.
Although we tend to throw around “AI” like it’s a one-size-fits-all term, in reality, AI is a complex and nuanced field packed with countless ideas, solutions, and tools. And, while many people’s awareness of AI has grown in the past year, there are still a lot of terms that go over people’s heads.
That’s why we’ve put together this comprehensive AI glossary for marketers. Below, you’ll find all of the most commonly referenced AI marketing terms to help you dive deeper into the world of AI.
So, without further ado, let’s get into the AI glossary 🤖
AI analytics: AI analytics is an application of AI that uses machine learning to analyze large datasets and identify patterns, trends, and relationships from them. This enables businesses to gain valuable insights quickly without putting the burden on humans, ultimately allowing them to make data-driven decisions more easily.
AI bias: AI bias, also known as machine learning bias or algorithm bias, is a phenomenon in which the algorithm produces biased results due to erroneous or prejudiced assumptions made in the machine learning process. Not only does AI bias perpetuate human biases, but it can also lead humans to make inaccurate decisions.
AI chatbot: AI chatbots are chatbots that are trained through natural language processing (NLP) to have humanlike conversations. With NLP, AI chatbots can interpret human language as it is written, which enables them to operate more or less on their own.
AI assistant: An AI assistant or virtual assistant is a software program that leverages natural language processing and machine learning to perform tasks based on user commands, be they written or voiced. Examples include smart assistants like Apple’s Siri and website chatbots.
AI ethics: AI ethics are a set of moral principles and practices intended to guide companies through the responsible use and development of artificial intelligence.
Algorithm: An algorithm is a set of instructions that acts as a procedure for computers to follow to make calculations or solve a problem. An AI runs on a set of algorithms and can modify or create new algorithms based on learned inputs and data.
Artificial general intelligence (AGI): Artificial general intelligence is a theoretical type of AI that has cognitive capabilities equal to a human, including the ability to self-teach.
Artificial intelligence (AI): Artificial intelligence is a field of computer science that focuses on enabling machines to think, understand, and perform tasks like human beings.
Artificial narrow intelligence (ANI): Artificial narrow intelligence (also known as narrow AI and weak AI) is a type of AI that is designed to perform a single specific task. Most of the AI we use today — such as recommendation engines and spam filters — falls under ANI.
Artificial superintelligence (ASI): Artificial superintelligence is a theoretical type of AI that has cognitive capabilities that exceed human intelligence across all fields.
Association rule learning: Association rule learning is a machine learning technique that is used to discover patterns, trends, and relationships within large datasets. The goal of association rule learning is to find strong and interesting correlations between variables in a dataset.
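To make the idea concrete, here is a minimal Python sketch using a made-up set of shopping transactions (both the dataset and the rule are hypothetical). Real association rule mining relies on algorithms like Apriori; this only computes the two core metrics, support and confidence, for one candidate rule.

```python
# Toy "market basket" data: each set is one customer's transaction.
transactions = [
    {"bread", "butter", "milk"},
    {"bread", "butter"},
    {"bread", "milk"},
    {"butter", "milk"},
    {"bread", "butter", "jam"},
]

def support(itemset):
    """Fraction of transactions that contain every item in the itemset."""
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(antecedent, consequent):
    """How often the rule holds among transactions where it applies."""
    return support(antecedent | consequent) / support(antecedent)

# Candidate rule: "if a basket contains bread, it also contains butter"
print(support({"bread", "butter"}))       # 0.6 (3 of 5 baskets)
print(confidence({"bread"}, {"butter"}))  # ≈ 0.75 (holds in 3 of 4 bread baskets)
```

A rule with high support and high confidence — like “bread implies butter” here — is the kind of “strong and interesting correlation” the technique is after.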
Augmented reality (AR): Augmented reality is a technology that overlays computer-generated content onto the user’s view of the real world. Think TikTok filters, the now discontinued Google Glass, or Pokémon GO.
Automatic speech recognition (ASR): Automatic speech recognition is a technology that can recognize spoken language by converting it into written text. ASR is how AI assistants like Siri and Alexa can understand you.
Backpropagation: Backpropagation is a neural network training process where, after an output has been generated, an algorithm will feed the error rate back through the network to adjust the parameters of specific inputs. Generally, backpropagation is classified as supervised learning, as the algorithm checks the resulting output against a desired output to improve the neural network’s accuracy.
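The mechanics are easier to see in miniature. Below is a toy sketch (not a real network) of the core idea, assuming a single linear “neuron” y = w * x with one training example: compute the output, measure the error against the desired output, feed the error’s gradient back to the weight, and repeat.

```python
# One training example: an input of 2.0 should map to a target of 10.0,
# so the "correct" weight is 5.0. We start far away from it on purpose.
x, target = 2.0, 10.0
w = 0.5       # initial weight
lr = 0.1      # learning rate

for _ in range(50):
    y = w * x               # forward pass: produce an output
    error = y - target      # compare against the desired output
    grad = error * x        # chain rule: d(error**2 / 2) / dw
    w -= lr * grad          # adjust the parameter against the gradient

print(round(w, 3))  # converges toward 5.0
```

A real network does exactly this, just with the chain rule applied backward through many layers of parameters at once.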
BERT: BERT (Bidirectional Encoder Representations from Transformers) is a natural language processing training technique introduced by Google in 2018. Unlike its predecessors, BERT’s models were built to process words in relation to all the other words in a sentence, rather than simply processing words from left to right or right to left. Thus, BERT represents a massive advancement in natural language processing.
Bots: Bots are software programs that perform automated, repetitive, rule-based tasks, usually with the aim of imitating or replacing human activity.
Central processing unit (CPU): A central processing unit is a hardware component that allows a computer to execute instructions to run the operating system and all software programs. The CPU is commonly referred to as the “brain” of the computer as it is integral for computation.
Chatbot: Chatbots are software programs that simulate human conversations. While some chatbots follow a set of pre-designed rules to mimic real-life interactions, other chatbots use AI to analyze interactions at an almost human level.
ChatGPT: ChatGPT is a generative AI chatbot developed by OpenAI. Following the GPT language model, ChatGPT uses natural language processing to understand user prompts and generate a response — all through dialogue.
Computer vision: Computer vision is a field of AI that is focused on empowering machines to analyze, understand, and react to visual inputs such as images and video. One example of this is self-driving cars, which use object detection to navigate their surroundings.
Composite AI: Composite AI is an approach that combines multiple AI techniques to create an enhanced system that is more efficient, more knowledgeable, or better suited to solving a specific problem.
Conversational AI: Conversational AI is a type of AI that is able to understand, process, and respond to human language in a natural and relevant way. Using natural language processing, conversational AI can process a written or spoken input, understand its context and intent, and then generate an appropriate response.
Data mining: Data mining is the process of using machine learning techniques to analyze large datasets in order to uncover patterns, trends, and relationships.
Deepfake: A deepfake is a form of synthetic media that has been digitally manipulated so as to imitate a person’s likeness. Deepfakes use deep learning methods to generate a convincing hoax of an image, video, or audio clip — hence the name “deepfake.”
Deep learning: Deep learning is a subset of machine learning methods that uses artificial neural networks to emulate the human ability to make complex decisions. The “deep” in deep learning refers to the multiple layers of nodes within a neural network, which are what enable the network to break down complex data.
DeepMind: DeepMind is an AI research laboratory that focuses on the production of an artificial general intelligence through interdisciplinary approaches such as deep reinforcement learning. It is most well-known for developing AI systems that can play games like Go and StarCraft II. After being acquired by Google in 2014 and becoming Google DeepMind in 2023, DeepMind has also been responsible for the development of AI systems like Gemini.
Decision tree: A decision tree is a tree-like hierarchical model that maps the decision-making process through a series of conditions (or nodes) that continue to branch off until it reaches an outcome. With AI, decision trees are used as a form of supervised machine learning, specifically for classification and regression tasks.
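Written out by hand, a tiny decision tree is just a chain of conditions. Here is a sketch using hypothetical lead-scoring rules (in practice, libraries like scikit-learn learn these splits from labeled data rather than having a human hard-code them):

```python
# Each `if` is a node in the tree; each `return` is a leaf (an outcome).
# The rules and thresholds below are invented for illustration only.
def score_lead(visits: int, opened_email: bool) -> str:
    if visits >= 5:            # root node: split on number of site visits
        if opened_email:       # inner node: split on email engagement
            return "hot"
        return "warm"
    return "cold"

print(score_lead(7, True))    # hot
print(score_lead(7, False))   # warm
print(score_lead(1, True))    # cold
```

Training a decision tree means finding the conditions and thresholds that best separate the labeled examples — the branching structure itself is the model.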
DALL-E: DALL-E is a series of text-to-image AI models developed by OpenAI, which uses deep learning methods to generate images and art from natural language prompts.
Emergent abilities: Emergent abilities are skills and functions that an AI model spontaneously develops during the training process without explicitly being built or trained for that purpose.
Entity annotation: Entity annotation is a method of identifying, extracting, and labeling parts of a text, such as names, keywords, or parts of speech (e.g. nouns, adjectives, or verbs). This is what enables an AI model to correctly interpret natural language.
Expert system: An expert system is a type of AI system that is designed to make decisions and solve problems the same way a human expert in that specific field would. For example, CaDet is an expert system that is used to detect cancer in its early stages.
Explainable AI: Explainable AI is a set of tools and methods that help users understand how an AI system reached its decisions with the goal of delivering more transparency and building trust with its users. The term is also used to refer to AI technologies that have these tools or methods built-in.
Feature engineering: Feature engineering is the process of transforming raw data into features (i.e. a unique attribute or variable) so that the AI model can more easily understand and use that data for training. The more accurate and relevant those features are, the better the AI will be.
Foundation model: A foundation model is a type of AI model that is trained on a broad range of data at scale so that it can accomplish a wide variety of tasks. As its name implies, a foundation model can be used as a base to develop many different applications of AI.
Gemini: Gemini (formerly known as Bard) is a generative AI chatbot offered by Google. Powered by Google’s AI model of the same name, Gemini was built to be multimodal so that it can work seamlessly across text, images, audio, video, and code.
Generative AI: Generative AI is a type of AI that is able to generate new content — whether it’s text, images, or code — based on the data it has been trained on. Generative AI can be applied to a variety of business cases, including product development, customer service, and marketing.
Generative adversarial network (GAN): A generative adversarial network is an approach to generative modeling in which two neural networks — the generator and the discriminator — work against each other to create new data that is more authentic to the original dataset. In a GAN, the generator produces data and the discriminator judges whether that data is real or fake. Then, through a feedback loop, the generator iterates on its output to get closer to the real data (i.e. until the discriminator can no longer tell that it is fake).
General intelligence: See artificial general intelligence (AGI).
GPT: GPT (or generative pre-trained transformer) is a type of large language model (LLM) that is built on a transformer architecture (which converts input sequences into output sequences) and is trained on a large corpus of text data in order to generate new content. While GPT can refer to any LLM that meets these criteria, it usually refers to OpenAI’s series of language models.
Hallucination: A hallucination or artificial hallucination is a phenomenon where an AI generates false or misleading information that it presents as fact.
Human-in-the-loop: Human-in-the-loop is an iterative process where humans provide feedback to an AI model during training and testing in order to improve its performance. With AI tools that have human-in-the-loop, humans act as the ultimate guardrails to ensure the AI’s output is both accurate and trustworthy.
Image recognition: Image recognition is a machine’s ability to identify features such as objects, people, and places in digital images and video. The facial recognition software on smartphones is one example of this.
Large language model (LLM): A large language model is a type of AI model that is trained on massive datasets using deep learning techniques to understand natural language and generate responses that are coherent and relevant.
Language model for dialogue applications (LaMDA): LaMDA is a series of large language models developed by Google that has specifically been trained on dialogue, so it can engage in open-ended conversations.
Limited memory AI: Limited memory AI is a type of AI that learns from historical information, data, or predictions it has previously acquired and improves based on those experiences — just like a human does. It is considered the second of the four main types of AI.
Machine learning: Machine learning is a field of AI focused on the development of computer systems that learn from data to perform a task or tasks and improve over time without explicit programming.
Machine translation (MT): Machine translation is an application of machine learning that allows a system to automatically translate text or speech from one language to another, without human involvement.
Midjourney: Midjourney is a generative AI software program that generates new images from natural language prompts. It is similar to DALL-E.
Model: An AI model is a software program that has been trained on a certain dataset so that it can identify patterns and make decisions based on new, unseen data.
Model drift: Model drift (or model decay) is a phenomenon where the performance of a model degrades over time due to external real-world changes, such as the evolution of a concept or changes in variables.
Narrow AI: See artificial narrow intelligence (ANI).
Natural language generation (NLG): Natural language generation is an application of AI that produces humanlike written or spoken language from a dataset. It is a subset of natural language processing.
Natural language processing (NLP): Natural language processing is a field of AI that focuses on enabling software programs to understand, manipulate, and generate human language as it is actually spoken and written. AI applications like chatbots and automatic transcriptions fall under NLP.
Natural language query (NLQ): A natural language query is a written input that uses language as it would be spoken — without technical language or syntax.
Natural language understanding (NLU): Natural language understanding is an application of AI that enables machines to understand the context and intent behind human language inputs. It is a subset of natural language processing.
Neural network: A neural network is a machine learning model that is made to mimic the way a human brain makes decisions — passing information through multiple layers of artificial neurons (also known as nodes). Neural networks are made up of an input layer, one or more hidden layers, and an output layer, with each node in these layers having its own parameters that determine whether the information is sent on to the next layer.
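To see how information flows from layer to layer, here is a minimal forward pass through one hidden layer with made-up weights (a real network learns its weights during training; this sketch only shows the layered structure):

```python
def relu(x):
    """A common activation function: negative signals are zeroed out."""
    return max(0.0, x)

def layer(inputs, weights):
    # One output per node: weighted sum of all inputs, then activation.
    return [relu(sum(w * i for w, i in zip(node_w, inputs)))
            for node_w in weights]

inputs = [1.0, 2.0]                   # the input layer
hidden_w = [[0.5, -0.2], [0.3, 0.8]]  # 2 hidden nodes, 2 weights each
output_w = [[1.0, -1.0]]              # 1 output node

hidden = layer(inputs, hidden_w)   # ≈ [0.1, 1.9]
output = layer(hidden, output_w)   # [0.0] — the activation zeroed a negative sum
print(hidden, output)
```

Each node’s weights are the “parameters” in the definition above: they decide how strongly a signal passes on to the next layer, and training adjusts them.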
Objective function: An objective function is a mathematical formula used to measure how well an AI model is performing. During the model’s training, the objective function drives the iterative learning process, acting as a guidepost for the model to work towards optimal performance.
OpenAI: OpenAI is an AI research company that was founded in 2015 with the aim of building an artificial general intelligence that benefits all of humanity. It is most famous for initiating the AI boom with its release of ChatGPT in November 2022.
Parameter: A parameter in an AI model refers to a variable that is configured during the training phase that enables the model to transform inputs into the desired outputs with greater accuracy.
Pattern recognition: Pattern recognition is a machine’s ability to automatically identify patterns in datasets using machine learning techniques. It is used in cases such as speech recognition, medical diagnoses, and image recognition.
Predictive analytics: Predictive analytics is a branch of data analytics that uses data, statistics, and machine learning techniques to predict future outcomes based on historical data. Often, it is used for forecasting, such as in AI for sales.
Prompts: Prompts in AI are natural language inputs that a human gives to an AI model to generate a desired output.
Prompt engineering: Prompt engineering is the process of refining natural language inputs (or prompts) to guide a generative AI system so that it generates the best possible output.
Reactive machines: Reactive machines are a type of AI that are specific to a task and have no memory, which means they will react to a given input in the same way every time. As one of the most basic forms of AI, reactive machines are considered the first of the four main types of AI.
Reinforcement learning: Reinforcement learning is a machine learning method used to train an AI model by reinforcing actions that work towards the optimal result and ignoring those that do not. Reinforcement learning mimics the human process of trial and error.
Responsible AI: Responsible AI is an approach to the design, development, and use of AI that prioritizes safety, ethics, and trust. With responsible AI, the overall goal is to deploy AI in a way that benefits humans and society rather than harming them.
Self-aware AI: Self-aware AI is a theoretical type of AI that has a human level of consciousness with its own thoughts, desires, and emotions. It is considered the fourth of the four main types of AI and the pinnacle of what can be achieved with AI (as we know it today).
Self-supervised learning: Self-supervised learning is a machine learning method where the AI model trains on unstructured data which it labels on its own based on the inherent patterns and structures found in the data. These self-generated labels are then used to train and validate the model in its next iterations.
Semantic analysis: Semantic analysis is a natural language process that enables a software program to draw meaning from a text by analyzing the relationships between text elements in a specific context.
Semi-supervised learning: Semi-supervised learning is a machine learning method where the AI model is first trained on a small amount of labeled data — after which the model is improved and refined on a larger unlabeled dataset. By combining supervised and unsupervised learning methods, semi-supervised learning allows the AI model to understand the broader structure of the data, which leads to more accurate predictions. A popular example of this is text classification, like training a conversational AI to understand natural language.
Sentiment analysis: Sentiment analysis is a natural language process that analyzes patterns and trends in a text to determine its overall emotional tone.
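At its simplest, this can be done by counting emotionally charged words. The sketch below uses a tiny hand-made word list (production systems use trained models instead, but the goal is the same: reduce a text to an overall tone):

```python
# A hypothetical, hand-picked lexicon — real systems use far larger
# lexicons or trained classifiers.
POSITIVE = {"great", "love", "excellent", "happy"}
NEGATIVE = {"bad", "hate", "terrible", "slow"}

def sentiment(text: str) -> str:
    words = text.lower().split()
    score = (sum(w in POSITIVE for w in words)
             - sum(w in NEGATIVE for w in words))
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment("I love this great product"))      # positive
print(sentiment("The checkout was slow and bad"))  # negative
```

For marketers, the same idea scaled up is what powers social listening tools that flag whether mentions of a brand skew positive or negative.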
Sentient AI: See self-aware AI.
Supervised learning: Supervised learning is a machine learning method where an AI model is trained on labeled datasets — where the data is marked with the correct answer — which act as a guide so the model can learn the relationship between the inputs and outputs. It is the opposite of unsupervised learning.
Theory of mind AI: Theory of mind AI is a still-theoretical type of AI that can understand, remember, and react to the thoughts, motivations, and emotions of other intelligent beings, just as humans do in social interactions. Theory of mind AI is the third of the four main types of AI.
Token: A token in AI is the most fundamental unit of data that an AI language model uses to understand or generate written or spoken language.
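Here is a minimal sketch of what that looks like, assuming a naive word-level scheme. Real language models use subword tokenizers (such as byte-pair encoding), so one word may become several tokens, but the principle is the same: text is split into small units and mapped to integer IDs.

```python
def tokenize(text: str) -> list[str]:
    # Naive word-level tokenization; real tokenizers split into subwords.
    return text.lower().split()

vocab = {}  # each new token gets the next available integer ID

def encode(tokens: list[str]) -> list[int]:
    return [vocab.setdefault(t, len(vocab)) for t in tokens]

tokens = tokenize("AI models read tokens not words")
print(tokens)          # ['ai', 'models', 'read', 'tokens', 'not', 'words']
print(encode(tokens))  # [0, 1, 2, 3, 4, 5]
```

Those integer IDs, not the raw text, are what the model actually processes — which is also why AI services commonly meter usage and pricing in tokens.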
Turing test: The Turing test (originally known as the imitation game) is a thought experiment devised by mathematician Alan Turing that is designed to test a machine’s ability to imitate human communication and intelligence. In the experiment, a human interrogator is given the task of identifying which of their two conversation partners is a machine based on text-based responses. A machine will “pass” the Turing test if the interrogator is unable to identify it as the machine.
Uncanny valley: The uncanny valley is a phenomenon first described by roboticist Masahiro Mori, who theorized that an artificial entity that closely (but not fully) resembles a human being will evoke a sense of eeriness rather than familiarity.
Unsupervised learning: Unsupervised learning is a machine learning method in which an AI model is trained exclusively on unlabeled data, allowing it to discover patterns and insights in the data without human guidance. It is the opposite of supervised learning.
Virtual reality (VR): Virtual reality is a three-dimensional, computer-generated environment which a user can explore, interact with, and immerse themselves in. It’s commonly experienced using a device like a VR headset.