The usual and recurring view of new developments in artificial intelligence research is that sentient, intelligent machines are just on the horizon. Machines understand verbal commands, play games, distinguish images, and drive cars better than we do. How much longer can it be before they walk among us?
The new White House report on artificial intelligence treats this dream with skepticism, saying that in the next 20 years, machines likely will not have “intelligence comparable to or surpassing that of humans,” though it goes on to say that in the coming years, “machines will achieve and surpass human performance in more and more tasks.” But its assumptions about how these capabilities will develop have overlooked some essential points.
As an AI researcher, I admit it was nice to have my own field highlighted at the highest levels of American government, but the report focused almost exclusively on what I call “the boring kind of AI.” In half a sentence, it dismissed my branch of AI research, which examines how evolution can help develop ever-better AI systems, and how computational models can help us understand how our own human intelligence evolved.
The report concentrates on what might be called mainstream AI tools: machine learning and deep learning. These are the kinds of technologies that have been able to play “Jeopardy!” well and beat human Go masters at the most complicated game ever invented. These current intelligent systems can process huge amounts of data and perform complex calculations very quickly.
We need to do more than teach machines to learn. We need to overcome the boundaries that define the four different types of AI, the barriers that separate machines from us, and us from them.
HOW MANY TYPES OF ARTIFICIAL INTELLIGENCE ARE THERE?
There are four types of artificial intelligence: limited memory, reactive machines, theory of mind, and self-awareness.
1. LIMITED MEMORY
Self-driving cars already do some of this. For example, they observe the speed and direction of other cars. That can’t be done in a single moment, but requires identifying specific objects and monitoring them over time.
These observations are added to self-driving cars’ preprogrammed representations of the world, which include lane markings, traffic lights, and other important elements such as curves in the road. They are incorporated when the car decides when to change lanes, so as to avoid cutting off another driver or being hit by a nearby car.
But these simple pieces of information about the past are only transient. They aren’t saved as part of the car’s library of experience from which it can learn, the way human drivers accumulate experience over years behind the wheel.
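To make the idea of transient observations concrete, here is a minimal sketch in Python. All names are hypothetical, invented for illustration and not taken from any real autonomous-driving stack: a tracker keeps only a short rolling window of observations of one nearby car, from which it can estimate speed, and anything older than the window is simply gone.

```python
from collections import deque

class TrackedVehicle:
    """Keeps a short, transient window of recent observations of one nearby car."""

    def __init__(self, window_size=10):
        # Only the last `window_size` observations are retained; older ones
        # fall off automatically -- mirroring how this information is temporary
        # and never becomes part of a longer-term library of experience.
        self.observations = deque(maxlen=window_size)

    def observe(self, timestamp, position):
        """Record one (time, position) sighting of the other car."""
        self.observations.append((timestamp, position))

    def estimated_speed(self):
        """Estimate speed from the oldest and newest retained observations."""
        if len(self.observations) < 2:
            return None
        (t0, p0) = self.observations[0]
        (t1, p1) = self.observations[-1]
        return (p1 - p0) / (t1 - t0)

# Toy usage: four sightings, but a window of three, so the first is discarded.
tracker = TrackedVehicle(window_size=3)
for t, p in [(0.0, 0.0), (0.5, 7.0), (1.0, 14.0), (1.5, 21.0)]:
    tracker.observe(t, p)

print(tracker.estimated_speed())  # 14.0 (meters per second over the window)
```

The `deque` with a `maxlen` silently discards the oldest entry on each new observation: the system can reason about the recent past, but has no mechanism for learning from it later.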
So how can we build AI systems that construct full representations, remember their experiences, and know how to handle new situations? AI researcher Rodney Brooks was right that this is very difficult to do. My own research into methods inspired by Darwinian evolution can begin to compensate for these human shortcomings by letting machines build their own representations.
2. REACTIVE MACHINES
The most basic types of AI systems are purely reactive and have neither the ability to form memories nor the ability to use past experience to influence current decisions. Deep Blue, IBM’s chess-playing supercomputer that beat international grandmaster Garry Kasparov in the late 1990s, is an ideal example of this type of machine.
Deep Blue can identify the pieces on a chessboard and knows how each one moves. It can predict what moves might come next for it and its opponent, and it can choose the most optimal moves from among the possibilities.
Except for a rarely relevant, chess-specific rule against repeating the same position three times, Deep Blue ignores everything that happened before the present moment. All it does is look at the pieces on the chessboard as they currently stand and choose from possible next moves.
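The statelessness described above can be sketched in a few lines of Python. This is an illustrative toy, not Deep Blue’s actual architecture: the policy ranks moves using only the current position, consults no history, and stores nothing afterwards.

```python
def reactive_move(board, legal_moves, evaluate):
    """A purely reactive policy: pick the move that scores best right now.

    No memory is consulted and nothing is saved between calls -- each call
    sees the world fresh, like the simplest reactive machines.
    """
    return max(legal_moves, key=lambda move: evaluate(board, move))

# Toy game: the "board" is just a number, a move adds its value to it,
# and the evaluation simply prefers the largest resulting number.
board = 10
moves = [-2, 3, 1]
best = reactive_move(board, moves, lambda b, m: b + m)
print(best)  # 3
```

Calling `reactive_move` twice with the same board always yields the same answer, which is exactly the reliability, and the rigidity, of this class of system.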
This type of intelligence involves the computer perceiving the world directly and acting on what it sees and is not based on an internal concept of the world. In a groundbreaking paper, AI researcher Rodney Brooks argued that we should build only such machines, mainly because humans are not particularly good at programming precisely simulated worlds for computers, which AI research calls the “representation” of the world.
Today’s intelligent machines that we admire either have no such worldview or a very limited and specialized one for their particular tasks. The innovation in Deep Blue’s design was not to broaden the range of possible moves the computer considered. Rather, the developers found a way to narrow its view, to stop pursuing some potential future moves based on how it rated their outcomes. Without this ability, Deep Blue would have needed to be an even more powerful computer to actually beat Kasparov.
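That idea of giving up on certain future moves based on their evaluation is what game-tree search calls pruning. The sketch below is textbook alpha-beta pruning, a much-simplified stand-in for Deep Blue’s far more elaborate search; the function names and the toy game at the end are invented for illustration.

```python
def alphabeta(state, depth, alpha, beta, maximizing, moves, apply_move, evaluate):
    """Textbook alpha-beta pruning: skip branches that cannot change the result."""
    if depth == 0 or not moves(state):
        return evaluate(state)
    if maximizing:
        value = float("-inf")
        for m in moves(state):
            child = apply_move(state, m)
            value = max(value, alphabeta(child, depth - 1, alpha, beta,
                                         False, moves, apply_move, evaluate))
            alpha = max(alpha, value)
            if alpha >= beta:
                break  # the opponent will never allow this line: stop exploring
        return value
    else:
        value = float("inf")
        for m in moves(state):
            child = apply_move(state, m)
            value = min(value, alphabeta(child, depth - 1, alpha, beta,
                                         True, moves, apply_move, evaluate))
            beta = min(beta, value)
            if alpha >= beta:
                break  # symmetric cutoff for the minimizing player
        return value

# Toy game: the state is a number, each player adds 1 or 2 on their turn,
# and the maximizer wants the final number as large as possible.
result = alphabeta(0, 2, float("-inf"), float("inf"), True,
                   moves=lambda s: [1, 2],
                   apply_move=lambda s, m: s + m,
                   evaluate=lambda s: s)
print(result)  # 3
```

The `break` statements are the pruning: whole subtrees of future moves are abandoned once it is clear they cannot affect the final choice, which is how a search can go deep without examining everything.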
Even Google’s AlphaGo, which has beaten the best human Go masters, cannot evaluate all potential future moves. Its analysis method is more sophisticated than Deep Blue’s, using a neural network to evaluate how the game is developing.
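As a rough illustration of what “using a neural network to evaluate a position” means, here is a miniature value network in pure Python. It is nothing like AlphaGo’s actual architecture, which used large deep networks trained on millions of positions; the features, weights, and names below are all made up for the sketch.

```python
import math

def tiny_value_net(board_features, w1, b1, w2, b2):
    """Miniature 'value network': map board features to one score in (0, 1).

    One hidden tanh layer followed by a sigmoid output -- the score can be
    read as a rough estimate of how favorable the position is.
    """
    # Hidden layer: weighted sums of the features, squashed by tanh.
    hidden = [math.tanh(sum(w * x for w, x in zip(row, board_features)) + b)
              for row, b in zip(w1, b1)]
    # Output layer: a single weighted sum, squashed to (0, 1) by a sigmoid.
    logit = sum(w * h for w, h in zip(w2, hidden)) + b2
    return 1.0 / (1.0 + math.exp(-logit))

# Hypothetical encoded board features and hand-picked weights, for illustration.
features = [0.5, -1.0, 0.25]
w1 = [[0.2, -0.1, 0.4], [0.3, 0.3, -0.2]]
b1 = [0.0, 0.1]
w2 = [0.5, -0.4]
b2 = 0.05
score = tiny_value_net(features, w1, b1, w2, b2)
print(0.0 < score < 1.0)  # True: the network outputs a position score in (0, 1)
```

A search algorithm can call such an evaluator on candidate positions instead of playing every line out to the end, which is what makes a game as vast as Go tractable at all.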
While these methods improve the ability of AI systems to play specific games better, they can’t easily be changed or applied to other situations. These computerized intelligences have no concept of the wider world: they cannot function beyond the specific tasks they are assigned, and they are easily fooled.
They can’t interactively participate in the world, the way we imagine AI systems might one day. Instead, these machines behave exactly the same way every time they encounter the same situation. This can be very good for making an AI system trustworthy: you want your autonomous car to be a reliable driver. But it’s bad if we really want machines to engage with the world and respond to it. These simplest AI systems will never be bored, or interested, or sad.
3. THEORY OF MIND
We could stop here and call this point the important dividing line between the machines we have and the machines we will build in the future. However, it is better to discuss in more detail the kinds of representations machines need to form, and what those representations need to be about.
Machines in the next, more advanced class form representations not only of the world but also of other agents or entities in the world. In psychology, this is called “theory of mind” – the understanding that people, creatures, and objects in the world can have thoughts and emotions that influence their behavior.
This is crucial to the way we humans formed societies, because it enabled us to have social interactions. Without understanding each other’s motives and intentions, and without taking into account what someone else knows about me or the environment, cooperation is difficult at best, and impossible at worst.
If AI systems are ever to walk among us, they must be able to understand that each of us has thoughts, feelings, and expectations of how we are treated, and they must adjust their behavior accordingly.
4. SELF-AWARENESS

The final step in AI development is to build systems that can form representations about themselves. Ultimately, we AI researchers will have to not only understand consciousness, but build machines that have it.
This is, in a sense, an extension of the “theory of mind” possessed by Type III artificial intelligences. Consciousness is not called “self-awareness” for nothing. “I want this object” is a very different statement from “I know I want this object.” Conscious beings are aware of themselves, know their inner states, and can predict the feelings of others. We assume that someone honking their horn behind us in traffic is angry or impatient, because that’s how we feel when we honk at others. Without a theory of mind, we could not draw these kinds of conclusions.
Although we are probably far from creating self-aware machines, we should focus our efforts on understanding memory, learning, and the ability to base decisions on past experience. This is an important step in understanding human intelligence on its own, and it is crucial if we want to design or evolve machines that are more than exceptional at classifying what they see in front of them.