AI vs ML: What's the Difference?
Neha Kulshreshtha

For innovators and entrepreneurs alike, AI vs ML is a key question. Which is better for curing cancer? Which is needed to design a safe self-driving car? Which can detect misinformation online? These questions will be answered with either AI or machine learning or, likely, a combination of both, but to understand why requires understanding the individual technologies.

A reasonable person could ask:

What is the difference between machine learning and artificial intelligence?

Machine learning is a part of AI, but AI itself requires more than just machine learning.

Machine learning is, according to Tom M. Mitchell,

“The study of computer algorithms that allow computer programs to automatically improve through experience”

whereas AI is

"The science and engineering of making computers behave in ways that, until recently, we thought required human intelligence”

in the words of Andrew Moore, Dean of the School of Computer Science at Carnegie Mellon University.

AI and machine learning, while interrelated, are not the same, and their applications vary. While machine learning is maturing as a field, coming into widespread use in science, academia, business, and consumer products, AI--or “true AI”--remains some distance in the future, despite many businesses claiming to use AI today.

It’s important, then, to be able to distinguish between AI and ML--to know what is machine learning, what is AI, and what is just marketing hype.

What Is Machine Learning?

Understanding machine learning is relatively simple. At its core, machine learning is pattern recognition. A machine learning program is a computer program that can not only be ‘taught’ patterns but can then ‘learn’ new patterns on its own--in the words of computer scientist Tom M. Mitchell, “to automatically improve through experience.”

To do this requires large amounts of data, and as a result, machine learning and data mining are closely related in concept. To oversimplify, a machine “learns” by being fed a set of data from which patterns can be observed or extracted. Once a machine has learned the pattern or patterns from an initial, known set of data, the machine can then be given a set of unknown data and spot similar patterns, in theory.

So, if a programmer feeds an image recognition program thousands and thousands of pictures known to be of cats, telling the machine “this is a cat”--and, for contrast, pictures labeled “this is a dog”--the machine can “learn” what a cat looks like by detecting patterns. It can begin to distinguish between a cat and a dog by picking up on patterns in nose shape, ear shape, tail shape, fur color, and so on.

If given an unknown image that has a nose and ear shape that matches that of a cat, the machine can identify the picture as being that of a cat. At least, in theory.
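To make that concrete, here is a minimal, runnable sketch of the “learn a pattern, then recognize it” loop in Python, using scikit-learn’s bundled handwritten-digits dataset as a stand-in for labeled cat and dog photos:

```python
# A runnable sketch of supervised pattern learning with scikit-learn,
# using its bundled handwritten-digits dataset in place of cat photos.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

digits = load_digits()  # labeled images: pixel arrays plus their labels

# Hold back 20% of the labeled images to test how well the patterns generalize
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.2, random_state=0
)

model = LogisticRegression(max_iter=5000)
model.fit(X_train, y_train)  # "learn" patterns from the known data

# Shown images it has never seen, the model spots the patterns it learned
print(model.predict(X_test[:5]))    # predicted labels for 5 unseen images
print(model.score(X_test, y_test))  # accuracy on unseen images, ~0.96
```

The loop is the same for cats and dogs: fit on labeled examples, then predict on unseen ones.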

However, machine learning has limitations. Continuing the previous example: the program has not only learned the pattern of cats (so that, shown a cat photo it has never seen before, it can correctly identify it as a cat), it has also learned the pattern of dogs. It can now take a random assortment of dog and cat photos and sort them into “pictures of cats” and “pictures of dogs.” But what if a picture of a monkey inadvertently slips into that pile?

The machine learning program would fail; it would be unable to identify the picture of the monkey because the machine only knows dogs and cats. If you then wanted to change the task of this program from “sort pictures of dogs and cats” to “identify pictures of animals,” it could not do this, whereas a human could.
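This failure mode is easy to demonstrate: a trained classifier can only ever answer with one of the labels it was trained on. A toy sketch, where the two-number feature vectors are made-up stand-ins for real image features:

```python
# A toy demonstration of the failure mode: a two-class model must answer
# "cat" or "dog" no matter what it is shown.
from sklearn.linear_model import LogisticRegression

X = [[0.9, 0.1], [0.8, 0.2], [0.1, 0.9], [0.2, 0.8]]  # toy "image features"
y = ["cat", "cat", "dog", "dog"]
model = LogisticRegression().fit(X, y)

monkey = [[0.5, 0.5]]  # an out-of-scope input, standing in for a monkey photo
print(model.predict(monkey))  # answers "cat" or "dog" regardless
print(model.classes_)         # ['cat' 'dog'] -- "monkey" is not an option
```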

Even a human who has never seen a monkey before could likely tell from a single picture that the monkey is a mammal, similar to dogs and cats, because it has two eyes, two ears, a tail, and so on. A machine cannot do this. A machine learning program would have to start from scratch: be given many thousands or millions of pictures of monkeys to “learn” monkeys.

Put simply, machine learning can be a useful tool to solve a specific problem, but it can only solve one problem at a time; a machine learning program is not a general problem-solving tool.

So, outside a specific context, machine learning quickly falters when faced with problems it hasn’t previously learned to solve, and getting a machine learning program to solve multiple problems requires ‘teaching’ it each problem individually--it can’t take the “lessons” it “learned” solving one problem and apply them to another autonomously.

From this, we can begin to see the difference between machine learning and artificial intelligence.

For example, if one developed a machine learning program for Texas Hold ‘Em poker, the computer could be taught the rules of the game: which hands beat which, the ranking of the cards, the suits, and so on.

It could then become very good at playing Texas Hold ‘Em because it can, much better than any human, calculate the odds of drawing certain cards, recognize patterns across many previous games, and learn the play style of a human opponent from those games.
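To give a flavor of what “calculating the odds” means in code, here is a small, self-contained Monte Carlo sketch--a simplified illustration, not a full poker engine--estimating the chance that the next card dealt completes a flush when four hearts are already showing:

```python
# Monte Carlo estimate of a classic Hold 'Em spot: with four hearts already
# visible, 9 of the 47 unseen cards are hearts.
import random

def flush_completion_odds(trials: int = 100_000) -> float:
    unseen = ["heart"] * 9 + ["other"] * 38  # the 47 cards we have not seen
    hits = sum(random.choice(unseen) == "heart" for _ in range(trials))
    return hits / trials

print(f"~{flush_completion_odds():.1%} chance the next card is a heart")  # ~19.1%
```

A machine can run millions of such simulations between hands; a human can only approximate.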

It could “solve” Texas Hold ‘Em, but if you then tried to have this same program play five-card draw--another form of poker, with many of the same rules and probabilities but a different style of play--the machine learning program would be completely unable to play this slightly different variation. This reveals a key difference in AI vs ML.

By contrast, a human could switch from Texas Hold ‘Em to five-card draw and, after learning the basic rules and maybe watching a game or two, play quite competently, because the human is intelligent.

The human can take lessons learned from previous problems and apply those lessons to new problems, or use the mental tools previously used for other problems to solve a new problem. And of course, this being poker, there is one thing a human can do which a machine could not: bluff.

A human being can play like they have a winning hand even when they hold no winning cards at all. A machine can, of course, calculate with great precision the odds of its opponent having a winning hand and conclude that it is very, very likely the opponent does not have one--but not a 0% chance.

A machine learning program could be programmed to recognize players who bluff, but it would struggle to decide autonomously whether a human opponent was bluffing or whether they did, in fact, hold a highly unlikely winning hand.

This has significant real-world implications when it comes to AI vs ML. For example, if a security company wants to develop facial recognition software to identify known criminals, it can teach a machine to recognize human faces and then match them against a database of criminals’ photos.
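In code, that matching step might look like the sketch below, which uses the open-source face_recognition Python package; the file paths are hypothetical placeholders:

```python
# A sketch of the matching step using the open-source face_recognition
# package (pip install face_recognition). File paths are hypothetical.
import face_recognition

known_image = face_recognition.load_image_file("database/known_criminal.jpg")
camera_frame = face_recognition.load_image_file("camera/frame_0412.jpg")

known_encoding = face_recognition.face_encodings(known_image)[0]
frame_encodings = face_recognition.face_encodings(camera_frame)

if not frame_encodings:
    # No face found in the frame -- the program only knows faces,
    # and it has no fallback when a face can't be detected.
    print("No face detected")
else:
    match = face_recognition.compare_faces([known_encoding], frame_encodings[0])
    print("Match" if match[0] else "No match")
```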

But what if, in response to a global pandemic, everyone starts wearing masks which cover most of their face?

As humans, our own “facial recognition software” is hampered by masks, but we can still identify people we know by looking for other clues: their hair, their clothing, the way they walk, how tall they are, visible tattoos, and so on.

A machine learning program which only knows how to recognize faces could not do this. It requires intelligence. And that is the key difference between machine learning and artificial intelligence.

What is Artificial Intelligence?

Artificial intelligence, or AI, is a complex subject, and there is some disagreement about what exactly AI is and is not. Indeed, some experts say AI can’t exist at all. At root, AI is the idea of a computer being “intelligent”--that is, being able to autonomously recognize new data and solve new problems.

Or, even more simply, AI is a computer that can “think” on its own, like a human being, and thus can be a general problem-solving tool. A funny way to think about it is C-3PO from Star Wars. When the Star Wars movies were being made, C-3PO was just a human being in a robot costume portraying an AI program. The goal of realizing AI is to take the person out of the robot costume and put a real machine into it instead--to replace the human with a computer program.

And that is what makes AI vs ML hard to define: AI is not really one technology but a cluster of several technologies, machine learning among them, that together enable an AI program to solve problems.

This cluster of technologies then forms an “artificial neural network” loosely analogous to the human brain, where one technology or capability solves one aspect of a problem and another technology, or cluster of technologies, solves another, related problem.

With enough clusters forming a large enough neural network, in theory, one machine could become the ultimate problem-solving tool--a machine that can solve any problem given to it, much like the human brain can. This has enormous potential for businesses and entrepreneurs, and for precisely that reason, industry leaders like Google and IBM are pumping tens of billions of dollars into developing the first true AI program. That is the ultimate dream of AI development--so-called “broad AI”--though it remains some distance in the future.

By contrast, “narrow AI” is already in widespread use, in such applications as speech recognition programs. These are programs that use machine learning so they can carry out specific tasks--that is, narrow tasks--without being explicitly programmed how to do so.
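A few lines of Python show how accessible this kind of narrow AI has become--a sketch using the third-party SpeechRecognition package, with a hypothetical placeholder audio file:

```python
# A sketch of narrow AI in practice: transcribing one audio file with the
# third-party SpeechRecognition package (pip install SpeechRecognition).
# "meeting.wav" is a hypothetical placeholder.
import speech_recognition as sr

recognizer = sr.Recognizer()
with sr.AudioFile("meeting.wav") as source:
    audio = recognizer.record(source)  # read the entire file into memory

# One narrow task -- speech to text -- and nothing else
try:
    print(recognizer.recognize_google(audio))
except sr.UnknownValueError:
    print("Could not understand the audio")
```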

Broad AI, however, is the still-unrealized goal of combining all these narrow AI tools into one comprehensive package, making computers more human-like. Hence “artificial intelligence”: these computers would act, or “think,” like a human.

This is seen in countless movies and TV shows, from the eponymous cyborg in The Terminator to HAL 9000 in 2001: A Space Odyssey; people see these works of fiction and expect real machines of the future to be able to act like they do in movies, recognizing new, unknown problems and solving them. Yet that future still seems a long way off.

Real AI programs are not yet able even to drive a car, let alone ride a motorcycle and fire a shotgun after traveling backwards through time (sorry, Terminator fans).

While AI-lite programs are already in common commercial use--autopilot programs on commercial airliners, for example--there’s a minority opinion among experts that these “smart machines” aren’t AI at all. Take, for example, IBM’s Watson, which defeated two human champions on the game show Jeopardy! in 2011 and provides a useful demonstration of AI vs ML in practice.

Watson was able to “play” Jeopardy!, which required learning to recognize speech and decipher natural-language questions; impressively, Watson used machine learning to develop this ability entirely on its own.

However, Watson had to have other capabilities as well in order to win Jeopardy!, such as a vast cache of knowledge, properly indexed, to be able to come up with the correct answer after it had listened to and understood the question asked by the host Alex Trebek. Machine learning, on its own, was not enough to win the game, but it played a vital part in helping Watson ‘learn’ how to play and how to win.

Watson was able to listen to host Alex Trebek read a question, piece together contextual clues, and find the answer to a question (or, being Jeopardy!, the question to an answer).

But this wasn’t real AI--or so say some experts, among them cognitive scientist Douglas Hofstadter, who maintains Watson is “just a text search algorithm connected to a database, just like Google search. It doesn’t understand what it’s reading.”

Yet other experts profoundly disagree, saying that a machine which can, in real time, listen to new input, develop a response, and assess its confidence that the response is correct (Watson would not ‘buzz in’ to answer unless it was confident it had the right answer) must, of course, be the product of artificial intelligence. Regardless of what the experts say, results in the real world have been mixed.
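That buzz-in behavior is, at bottom, a confidence threshold. Here is a concept-only sketch--an illustration of the idea, not IBM’s actual implementation:

```python
# Concept-only sketch of confidence gating (an illustration of the idea,
# not Watson's code): "buzz in" only when the top-ranked candidate answer
# clears a confidence threshold.
def should_buzz(candidates, threshold=0.85):
    """candidates: (answer, confidence) pairs from upstream scoring models."""
    answer, confidence = max(candidates, key=lambda pair: pair[1])
    return answer if confidence >= threshold else None

print(should_buzz([("Toronto", 0.32), ("Chicago", 0.91)]))  # Chicago
print(should_buzz([("Toronto", 0.32), ("Chicago", 0.55)]))  # None -- stay silent
```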

After winning Jeopardy! in 2011, IBM announced Watson would be used in the healthcare industry as an artificially intelligent doctor, able to diagnose patients.

Despite changing the game of Jeopardy!, Watson has not proven to be the game-changing AI program healthcare workers were promised. Nearly a decade later, human doctors have yet to see Watson live up to the hype, frequently lamenting that it’s not “real” AI.

So just what is “real” AI? What is the difference between AI and ML? Watson, despite its shortcomings, provides an illustration of the difference between machine learning and artificial intelligence: machine learning on its own is not AI, but machine learning will be a vital component of AI.

The difference between AI and ML can be hard to grasp because machine learning is just one piece of artificial intelligence--one piece that has to combine with many other pieces to make AI work. Machine learning on its own is just machine learning; combined with other technologies, it can become AI.

This need to combine multiple forms of machine intelligence appears throughout AI development. The classic example for our current moment is the dream of a self-driving car.

For a human, driving a car requires several different skills: mechanically operating the pedals and steering wheel, seeing road conditions and reacting rapidly to unexpected developments, navigating roads so the car reaches its destination, and, of course, understanding and following the rules of the road.

A computer which can successfully and safely drive a car would probably have to be true “AI,” because it has to combine so many different problem-solving functions or learned “skills,” and it will have to operate in an environment where every situation is a little different from the last. That requires decision-making ability, a key component of “true” AI.

This, once achieved, will have enormous implications for human life. A machine that can diagnose cancer more accurately than the best doctor, or drive a car more safely than the best human driver, would offer dramatic improvements to daily quality of life. And, potentially, AI will enable many jobs to be automated--to be performed by machines.

This, of course, is a cause of deep anxiety among many people, but it is worth remembering that whenever humans have previously invented machines that “took jobs,” the result has in fact been a net creation of jobs, by freeing up more human beings to perform other tasks. AI will likely be no different.

Start Learning More About AI and ML

The difference between AI and machine learning is that machine learning will be a part of AI, but AI is more than just machine learning. Machine learning is pattern recognition: the ability to learn new patterns from sets of data autonomously. AI is this ability combined with multiple other abilities, giving a machine the capacity to solve problems it has not encountered before.

As for the question ‘what is the difference between machine learning and artificial intelligence’, we can already find answers by looking to the real world, where machine learning has seen tremendous breakthroughs in recent years. Many products now on the market were built with machine learning or use it to perform important functions.

In business, tools with machine learning capabilities are becoming an essential asset to entrepreneurs; indeed, exploiting machine learning is the basis of entire business models for some companies. Machine learning is here; the future is now.

AI, however, remains distant, a dream yet to come true. A machine that can combine multiple aspects of machine learning, deep learning, artificial neural networks, and more with still other, related technologies (like visual recognition programs), even to perform a single task (like driving a car), has yet to be fully realized.

The idea of a general problem-solving tool mimicking a human seems as far from reality now as it ever has been, despite the enormous leaps forward that have been made. It speaks volumes, however, that fifty years ago, many computer scientists believed that machine learning itself was what AI would be--that a computer which could learn on its own would, of course, be intelligent. And now, fifty years later, as machines become ever better at machine learning, humans are starting to learn how much further there is to go.

