Any sufficiently advanced technology is indistinguishable from magic – Arthur C. Clarke
As a marketer, it can sometimes seem like the work my engineering colleagues are doing is magic — pure computer science sorcery.
I mean, they’re building a chatbot that can communicate using human language and learn from the conversations it’s having.
(Meanwhile, I’m busy contemplating Oxford commas and racking my brain over whether or not “racking” should actually be spelled “wracking.”)
I’m not going to sugarcoat it. AI can be a tough topic to wrap your head around. And with all of the various branches — machine learning, deep learning, natural language processing — it’s not a topic that you can hope to master by reading a single blog post. Or even a single book.
So the purpose of this post isn’t to provide you with an exhaustive, engineering-degree-level understanding of AI. Instead, it’s to “translate” some of the most commonly used AI terms into everyday language so you can understand them at a basic level.
Moving forward, it’s likely that artificial intelligence will be playing a more significant role in the work marketers do. Becoming fluent in AI-speak now can help prepare us for the road ahead.
Let’s dive in.
Artificial Intelligence (AI)
When a computer can do things that humans need intelligence to do
Or if you’re talking about AI as a discipline, it’s figuring out how to make computers do things that humans need intelligence to do.
Using logic, forming hypotheses, solving problems — those are a few examples of activities that we typically think of as requiring a human-level of intelligence. When a computer or computer program is able to do those types of things, it’s considered artificially intelligent.
Or at least, some people would consider it artificially intelligent.
As computer science professor Toshinori Munakata wrote in Fundamentals of the New Artificial Intelligence, "There is no standard definition of exactly what artificial intelligence is. If you ask five computing professionals to define 'AI', you are likely to get five different answers."
Not really helping my cause here, professor.
Historically, the Turing test has been the gold standard for determining whether or not a computer is truly intelligent.
First described by computing pioneer Alan Turing in a 1950 paper, the Turing test invites a participant to exchange messages, in real time, with an unseen party. In some cases that unseen party is another human; in other cases it's a computer. If the participant is unable to distinguish the computer from the human, the computer is said to have passed the Turing test and can be considered intelligent.
So that settles it, right? When a computer’s behavior becomes indistinguishable from the behavior of a human, we can say that AI has been achieved.
Not so fast.
A separate camp of AI researchers argues that framing AI as a quest to understand and imitate human intelligence is the wrong approach.
After all, they argue, humans didn’t achieve “artificial flight” through building machines that flap their wings like birds, bats, or bugs. Instead of imitating nature, humans relied on other engineering principles in order to create the planes we fly around in today.
Operating with this approach, the goal of AI isn’t to build computers that can behave like humans, but to build highly flexible, rational computers that can perceive their environments and take actions that maximize their chances of success toward some goal.
This, as it turns out, is in line with how Amazon's AI-powered program Alexa "thinks" about AI.
When I asked Alexa to define AI, she replied that it’s “the branch of computer science that deals with writing computer programs that can solve problems creatively.”
On a related note, when I asked Alexa if she could pass the Turing test, she replied, “I don’t need to pass that — I’m not pretending to be human.”
Machine Learning
When a computer program can automatically improve with experience
Or if you’re talking about machine learning as a discipline, it’s the branch of AI that explores how to create programs that can automatically improve with experience.
In the early days of AI research, computer scientists thought they could achieve intelligence through feeding a computer program a huge list of rules that it had to follow.
When x happens, do y. When y happens, do z. And so on.
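Here's a hedged caricature of that rules-based approach in code: intelligence as a hand-written lookup table. (The rules and events are made up for illustration.)

```python
# The old-school approach: a giant list of hand-written "when x, do y" rules.
rules = {
    "x": "do y",
    "y": "do z",
}

def respond(event):
    # Anything the programmers didn't anticipate has no answer.
    return rules.get(event, "no rule for this")

print(respond("x"))        # prints "do y"
print(respond("sunrise"))  # prints "no rule for this"
```

The limitation is obvious: the program never gets smarter, because every behavior has to be anticipated and typed in by hand.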
With machine learning, on the other hand, a computer becomes intelligent the same way a human becomes intelligent: through learning from experience.
For example, if you wanted to teach a computer to be able to identify animals, you could show it tons of labeled photographs (e.g. this is a shark, this is an ostrich, and so on) and a machine learning algorithm could figure out what features differentiate certain animals from others.
This is machine learning’s sweet spot: recognizing patterns in vast amounts of data.
In our example, the labeled photographs serve as training data. The machine learning algorithm uses this data to develop a model that can match an input (e.g. photograph of shark) with the correct output (e.g. “This is a shark”).
From there, you can feed the algorithm photographs it has never seen before (test data) and ask it to predict the best outputs based on what it’s already learned. The more experience your animal-identifying algorithm gets, the better it will become at correctly identifying animals and returning the right outputs.
That's machine learning in a nutshell: instead of programming a computer to perform some specific task, you're programming a computer to learn how to perform that task so it can get better at performing it over time.
Another way of thinking about it: Instead of solving a problem by putting inputs into a program and getting outputs (like you do with traditional programming), with machine learning you’re taking inputs and outputs and using them to develop a program that can solve the problem.
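The train-then-predict loop described above can be sketched in a few lines. This is a toy 1-nearest-neighbor classifier, with made-up numeric "features" standing in for real photographs; the feature names are hypothetical, not from any real system.

```python
def train(examples):
    """Training data: a list of (features, label) pairs.
    A 1-nearest-neighbor "model" simply memorizes its training data."""
    return list(examples)

def predict(model, features):
    """Return the label of the training example closest to the input."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(model, key=lambda example: distance(example[0], features))[1]

# Hypothetical features: (has_fins, number_of_legs)
training_data = [
    ((1, 0), "shark"),
    ((0, 2), "ostrich"),
]
model = train(training_data)

print(predict(model, (1, 0)))  # a finned, legless animal -> "shark"
```

Note the shape of the workflow: labeled inputs and outputs go in, and what comes out is a program (the model) that can map new inputs to outputs.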
If you’re looking for a formal way of determining whether or not something qualifies as “machine learning,” I’ll leave you with computer science professor Tom Mitchell’s formulaic definition below:
A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P, if its performance at tasks in T, as measured by P, improves with experience E.
Deep Learning
A type of machine learning that uses a layered, brain-like network of “neurons”
Deep learning and artificial neural networks (ANNs) are the two AI concepts that really had me scratching my head when I started learning about this stuff. But then my coworker Luke explained them to me in terms any marketer can understand: Instagram.
With basic machine learning, you’re applying one filter. Adding a single effect.
With deep learning, you’re stacking effects. You’re increasing the brightness, saving your photo, then taking that brighter version of your photo and increasing the contrast, saving it again, then taking that brighter, higher contrast version of your photo and increasing the saturation, and so on.
Of course, this isn’t a perfect analogy. In reality, those filters or effects are actually layers of interconnected processing nodes, or “neurons,” which form a neural network.
And that’s where the “deep” in deep learning comes from: Those neural networks don’t use single layers of neurons for processing information, they use multiple, stacked layers.
At each layer of a neural network, data is transformed before being handed off to the next layer. And then the next. And so on. It’s an iterative process, with each layer working off of the output of the layer before it.
From a mathematical perspective, this allows you to move from the realm of linear to non-linear equations. Models can become “wavier” and more abstract — closer to the way human brains function.
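One way to picture those stacked layers is a minimal forward pass, sketched below with made-up weights (a real network would learn its weights from data). Each layer computes weighted sums and applies a non-linear activation, and each layer's output feeds the next — the "filter on top of a filter" idea.

```python
def relu(x):
    """A common non-linear activation: pass positives through, zero out negatives."""
    return max(0.0, x)

def layer(inputs, weights):
    """One layer: every neuron takes a weighted sum of all the inputs,
    then applies the non-linear activation."""
    return [relu(sum(w * x for w, x in zip(neuron, inputs)))
            for neuron in weights]

# Two stacked layers with made-up weights: the output of layer 1
# becomes the input to layer 2.
hidden = layer([1.0, 2.0], [[0.5, -0.2], [0.3, 0.8]])
output = layer(hidden, [[1.0, -1.0]])
```

Without the non-linear activation, stacking layers would collapse into a single linear equation; the activation is what lets the model get "wavier."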
To understand what this means in terms of real-world applications, let’s turn to Google DeepMind’s AlphaGo program, which has been making headlines recently for beating the world’s top-ranked players at the ancient board game Go.
Unlike a game of chess, where you have around 40 options to choose from for each move, with Go you have up to 200 options to choose from for each move. As Google DeepMind researcher David Silver once explained: “There are more configurations on the board than there are atoms in the universe.”
Thus, beating people who are experts at this game requires more than brute force computing: You need pattern recognition, and intuition. And those aren’t things that a computer can be explicitly programmed to have — they’re things a computer has to learn.
The IBM chess computer Deep Blue, which famously beat grandmaster Garry Kasparov in 1997, was explicitly programmed to win at the game. But AlphaGo was not preprogrammed to play Go: rather, it learned using a general-purpose algorithm that allowed it to interpret the game’s patterns…
Natural Language Processing (NLP)
When a computer program uses machine learning to understand and communicate in human language (e.g. English)
Think of NLP as applying machine learning to the “problem” of human-produced text and language. Here’s what I mean:
Since their inception, computers have functioned using specialized computer languages. In the ’50s there was COBOL, in the ’60s there was BASIC, in the ’70s there was C, in the ’80s there was C++, in the ’90s there was Python and Ruby and Java and so on.
The goal of NLP is to teach computers to understand natural languages, i.e. languages that we humans use to communicate with each other.
And while computer scientists have tried to simply program computers to speak and understand natural language, it has become more and more apparent that learning a language is the only way to truly understand it and capture all of the nuances. That’s where the machine learning comes into play. (Although to be fair, you don’t have to use machine learning to do NLP. NLP is more about a class of problems than a particular method. That being said, nearly all modern NLP solutions use machine learning.)
NLP has a wide variety of applications, some of which you’ve almost certainly been exposed to. For example, spelling and grammar checking tools, language translation tools, and search engines can all use NLP.
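To make the "problem of human-produced text" concrete, here's a sketch of one of NLP's most basic preprocessing steps: turning free-form text into word counts (a "bag of words") that a machine learning algorithm can actually work with. The example sentence is invented for illustration.

```python
from collections import Counter

def bag_of_words(text):
    """Strip punctuation, lowercase everything, and count word
    frequencies, ignoring word order entirely."""
    cleaned = "".join(c if c.isalnum() or c.isspace() else " "
                      for c in text.lower())
    return Counter(cleaned.split())

bow = bag_of_words("This is a shark. This shark is fast!")
print(bow["shark"])  # prints 2
```

Real NLP systems go far beyond counting words, but most of them start from some numeric representation like this one.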
Now, some people might argue that the underlying purpose of NLP is to beat the Turing test — to create a chatbot or program that can communicate so convincingly in natural language that it’s impossible to distinguish it from a human.
And while that outcome might someday become a reality, we think about chatbots and NLP a little differently here at Drift.
Ultimately, we want our chatbot, Driftbot, to learn natural language not so it can trick you into thinking it’s human, but so it can understand exactly what you’re asking — that way it can be sure it’s putting you in touch with the best-qualified person for answering your questions.
Supervised Learning vs. Unsupervised Learning
When you tell a machine learning algorithm what patterns to look for vs. when you let a machine learning algorithm discover patterns on its own
With supervised learning, you’re dealing with labeled data, and are trying to optimize your machine learning algorithm to be able to produce the single, correct output for each input.
Our animal-identifying algorithm was an example of supervised learning. All the data was labeled, and our goal was to teach our algorithm to get better at producing the correct outputs for specific inputs.
With unsupervised learning, on the other hand, you’re dealing with data that is unlabeled, so you don’t know what the output is going to be.
Instead of training your algorithm to return a correct output based on a specific input, you’re training it to look for structure and patterns in the entire dataset.
For example, an unsupervised learning algorithm could take a bunch of unlabeled photographs of animals and then organize them into clusters based on the shared characteristics it identifies.
Using this approach, unsupervised learning algorithms can identify patterns that humans might not otherwise have thought to look for.
At Drift, a lot of our natural language processing (NLP) work is going to be unsupervised. That way, instead of having to go through every Drift conversation ever and label everything (e.g. this conversation was about our Slack integration, this conversation was about pricing), we can have our Driftbot figure out how to cluster different conversations into relevant categories.