The field of artificial intelligence (AI) comprises many disciplines, technologies, and subfields.
There are dozens of terms that are used to describe AI technologies, and the definitions can be complex and confusing.
A large part of our focus with the Marketing AI Institute is to make AI more approachable and actionable.
To do that, we’ve created this AI terms cheat sheet, which features easy, accessible definitions of core AI terminology.
We encourage you to skip to the term you're curious about. But the terms are also arranged in a specific order to help you build on each piece of knowledge.
Algorithm
An algorithm is a series of steps used to solve a problem or perform an action.
Human programmers write algorithms. Then, machines follow them to produce an outcome.
Almost every piece of software you use consists of a machine following instructions written by a human.
That includes AI. Except that AI uses different algorithms in different ways to do things your typical software can’t do.
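For example, the steps of a very simple algorithm can be written out in code. This is a hypothetical averaging example for illustration, not from any particular product:

```python
def average(numbers):
    """A simple algorithm: a fixed series of steps that turns input into an outcome."""
    if not numbers:
        return 0.0                   # Step 1: handle the empty case
    total = 0
    for n in numbers:                # Step 2: add up every number
        total += n
    return total / len(numbers)      # Step 3: divide by the count

print(average([2, 4, 6]))  # → 4.0
```

The machine follows these steps exactly as written, every time. That predictability is the strength of ordinary software, and the limitation AI moves beyond.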
Artificial Intelligence (AI)
AI is the science of making machines smart.
That definition comes from AI expert and CEO of DeepMind Demis Hassabis.
Here’s what we mean by making machines smart…
Your typical software can only follow the instructions it’s given.
That is helpful for automation. Even basic software makes our lives easier by doing things for us better and faster.
But typical software is static. It only does what it is told to do, over and over. And it only becomes better when a human programmer upgrades it.
In short, typical software cannot adapt to real-time or changing conditions in data or environment.
That makes it unsuitable for fast-changing situations or markets.
AI is different. It allows us to teach machines to become more human-like.
We give them the ability to see, hear, speak, move, and write. We even give them the ability to understand and make predictions.
In some cases, these smart machines can teach themselves to get better at the tasks listed above.
That gives AI abilities that typical software doesn’t have. It can respond, react, and recommend in real-time—without a human explicitly telling it what to do.
This makes AI suitable for a wide range of intelligent tasks that were once reserved for humans.
Today, AI can:
- Write sentences and paragraphs.
- Understand human speech and respond coherently.
- Drive vehicles.
- Identify faces and objects.
- Navigate roads, cities, and warehouses.
- Predict what you want to buy or watch next.
- Predict how different actions will impact a business.
- Create new solutions to problems.
And much, much more.
Now, “artificial intelligence” isn’t one technology that does all of these smart tasks.
It’s actually an umbrella term for a collection of technologies.
Some of these technologies include natural language generation (NLG), natural language processing (NLP), machine learning, deep learning, and neural networks.
Machine Learning
Machine learning is the core subset of AI that makes its most advanced capabilities possible.
Machine learning is how AI technology learns and gets smarter on its own.
In machine learning, a human trains a machine to achieve an outcome, using data prepared by the human.
Using what it learned from the human, the machine then goes and tries to achieve the outcome using data it’s never seen before.
Every time the machine tries to achieve the outcome, it learns from the results—even if they’re bad. And it applies these learnings to its next attempt.
In this way, the machine uses machine learning to rapidly improve at a task without direct human involvement.
In the process, it might discover new, creative, or counterintuitive ways to achieve the outcome that humans never thought to try.
This is why AI enabled by machine learning is so powerful. Once trained, it can very quickly outpace humans, as well as find solutions and patterns we’re unable to see.
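The training loop described above can be sketched in a few lines of Python. This is an illustrative toy, not a production system (real-world machine learning uses libraries such as scikit-learn or PyTorch), but it shows the cycle of guess, measure error, adjust, repeat:

```python
# Human-prepared training data: each pair follows the pattern y = 2x.
data = [(1, 2), (2, 4), (3, 6)]

w = 0.0      # the machine's single "learned" parameter, starting from nothing
lr = 0.05    # learning rate: how much to adjust after each attempt

for attempt in range(200):
    for x, y in data:
        prediction = w * x
        error = prediction - y     # even a bad result is useful feedback
        w -= lr * error * x        # apply the learning to the next attempt

print(round(w, 2))  # → 2.0: the machine discovered the pattern y = 2x
```

Notice that no one ever told the program the answer was "multiply by 2". It converged on that rule purely by learning from its own errors.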
As an example, Demis Hassabis (mentioned above) taught a machine learning program how to beat video games from the 1980s.
The system is “programmed to find a score rewarding, but is given no instruction in how to obtain that reward,” according to The New Yorker.
To start, the system makes random moves, sometimes scoring and sometimes not.
However, the program’s machine learning algorithms assess its past moves and determine which ones work best, putting that information into practice in its future games.
In this way, it improves. DeepMind’s system went from knowing nothing about a given game to mastering it in a matter of hours using this methodology.
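A toy version of this score-driven learning can be sketched as follows. This is an illustrative simplification, not DeepMind's actual system: the "game" secretly pays move 2 the most, and the learner is told nothing except the score each move earns.

```python
import random
random.seed(0)

payout = {0: 0.1, 1: 0.3, 2: 0.9}    # hidden game rules the learner never sees
totals = {0: 0.0, 1: 0.0, 2: 0.0}    # score earned per move so far
counts = {0: 0, 1: 0, 2: 0}          # times each move was tried

def value(move):
    """Average score this move has earned in past games."""
    return totals[move] / counts[move] if counts[move] else 0.0

for play in range(1000):
    if play < 30 or random.random() < 0.1:
        move = random.choice([0, 1, 2])   # start with random moves
    else:
        move = max(payout, key=value)     # exploit what has worked best
    totals[move] += payout[move]          # assess the result of the move...
    counts[move] += 1                     # ...and remember it for next time

print(max(payout, key=value))  # the learner discovers move 2 scores best
```

Like DeepMind's system, the program begins by moving randomly, then shifts toward the moves its own history shows to be rewarding.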
Pattern Recognition
Pattern recognition is when machines detect patterns in data.
These patterns help machines better optimize towards outcomes, which makes pattern recognition a key function in machine learning.
Pattern recognition is also what powers AI’s predictive capabilities. A machine uses patterns in historical data to predict which future outcomes are most likely.
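A minimal sketch of this idea, using hypothetical shopping data: count which purchase most often follows each product in the history, then use that pattern to predict a shopper's likely next purchase.

```python
from collections import Counter

# Hypothetical purchase history (illustrative data, not a real product's method).
history = ["coffee", "filters", "coffee", "filters", "coffee", "mug"]

# Build the pattern: for each product, count what was bought next.
follows = {}
for prev, nxt in zip(history, history[1:]):
    follows.setdefault(prev, Counter())[nxt] += 1

def predict_next(item):
    """Predict the most likely next purchase after `item`."""
    return follows[item].most_common(1)[0][0]

print(predict_next("coffee"))  # → "filters" (the pattern in 2 of 3 past cases)
```

Real recommendation systems learn far richer patterns from far more data, but the principle is the same: historical patterns drive predictions about the future.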
Natural Language Generation (NLG)
Natural language generation is when AI writes or speaks human-sounding language.
Natural language generation powers everything from writing tools to smart home assistants to chatbots. It makes it possible to converse with machines.
Natural Language Processing (NLP)
Natural language processing is when AI interprets what human language means.
To do NLG, a machine must use natural language processing to first understand written or spoken language.
For instance, Google Translate uses NLP to understand the text you type, then generate its translation in whatever language you select.
Sentiment Analysis
Sentiment analysis is when AI understands the tone and emotion of human language.
Sentiment analysis takes NLP one step further. It understands not only what language means, but also the tone and emotion behind it.
Sentiment analysis makes it possible for a machine to adjust its NLG output based on the mood of the person it’s talking to.
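A toy word-list scorer illustrates the idea. Real sentiment analysis uses trained models rather than hand-written lists, so treat this purely as a teaching sketch:

```python
# Tiny hand-made lexicons (illustrative only; production systems learn these).
POSITIVE = {"love", "great", "happy", "excellent"}
NEGATIVE = {"hate", "terrible", "sad", "awful"}

def sentiment(text):
    """Classify text as positive, negative, or neutral by counting emotional words."""
    words = [w.strip(".,!?") for w in text.lower().split()]
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment("I love this great product"))  # → positive
print(sentiment("What a terrible, sad day"))   # → negative
```

A chatbot could use a result like this to soften its reply when the customer sounds frustrated.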
Image Recognition
Image recognition is when AI accurately identifies objects in photos.
Using machine learning, AI systems can identify objects in images with a high degree of accuracy.
The AI system is trained on millions or billions of images to detect certain objects. Trained well, it can then go and recognize those objects in images it hasn’t seen before.
One example of image recognition in use today is radiology, where AI can identify certain tumors better, faster, and at lower cost than humans.
Facial Recognition
Facial recognition is when AI accurately identifies human faces in photos and videos.
You use facial recognition any time you use the Face ID function on your iPhone. AI is able to recognize your face, then use that information to confirm your identity.
Facial recognition is also used by social media platforms to tag your friends in photos or to map video filters to your face.
Computer Vision
Computer vision is when AI accurately identifies objects in videos or real-time visual feeds.
Computer vision takes image and facial recognition further. It’s when AI can actually recognize moving objects, either in a video or out in the world.
In fact, self-driving cars rely on computer vision to drive without crashing. They use sensors to “see” the world around them, then computer vision to steer around objects.
Robotics
Robots and robotics are not AI—they're powered by AI.
Robotics combines image recognition, facial recognition, and computer vision to power a physical body. (If a robot talks, it may also use NLG and NLP.)
A robot itself is not AI, but it relies heavily on AI software to function.
Robots can use these core AI functions to do everything from walk and talk to unload goods in a warehouse.
Deep Learning
Deep learning is a subset of machine learning that unlocks superhuman AI performance.
While machine learning makes AI’s advanced capabilities possible, deep learning pushes the very boundaries of what’s possible with AI.
Deep learning seeks to mimic how the human brain works. It does that by using "neural nets": collections of interconnected artificial neurons arranged in layers.
Each layer's neurons are "weighted" to prioritize some criteria over others, depending on the goal of the neural network.
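A single layer of weighted neurons can be sketched in plain Python. This is an illustrative toy with made-up weights, not a real framework; real networks have many layers and learn their weights from data:

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: weight the inputs, sum them, squash the result."""
    total = bias + sum(i * w for i, w in zip(inputs, weights))
    return 1 / (1 + math.exp(-total))        # sigmoid activation

# One layer of two neurons; the weights decide which input each neuron prioritizes.
inputs = [0.5, 0.8]
layer = [
    {"weights": [2.0, -1.0], "bias": 0.0},   # this neuron favors the first input
    {"weights": [-1.0, 2.0], "bias": 0.0},   # this neuron favors the second input
]

outputs = [neuron(inputs, n["weights"], n["bias"]) for n in layer]
print([round(o, 2) for o in outputs])  # → [0.55, 0.75]
```

Stack many such layers and let training adjust the weights automatically, and you have the basic shape of a deep learning system.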
Machine learning systems can learn how to improve at tasks they’ve been trained to do—on data they’ve never seen.
Deep learning takes this further.
The most complex and cutting-edge deep learning systems can actually learn to do things they were never trained to do.
This unlocks the possibility that we will create machines that can learn to do many different things at levels far beyond human beings.
Artificial Narrow Intelligence (ANI)
All AI systems are artificial narrow intelligence, which means they only perform narrowly defined tasks.
They may perform these narrowly defined tasks at superhuman levels.
For example, a system that recognizes faces better and faster than humans can’t turn around and learn how to drive a car.
No matter how powerful, AI is limited by the scope of what it can do. But that could change.
Artificial General Intelligence (AGI)
Artificial general intelligence describes a hypothetical AI system that can learn and understand any intelligent task.
Right now, AGI doesn’t exist and isn’t close to existing.
But it’s what many people think of first when they think of AI: A superintelligent computer or robot that can do anything a human can do.
This is impossible today. And experts are divided on whether it's even possible to build at all.
But as techniques like deep learning advance, we may inch closer to general-purpose intelligent machines.
The potential creation of AGI raises fundamental questions about the benefits and dangers of technology, as well as what it means to be human.
Thankfully, today these aren’t questions we have to answer, since AGI is currently science fiction.