How Does AI Compare with Human Intelligence? A Critical Look


In the previous post I argued that AI can be of tremendous help in our theological attempt to better understand what makes humans distinctive and in the image of God. But before jumping to theological conclusions, it is worth spending some time trying to understand what kind of intelligence machines are currently developing, and how much similarity there is between human and artificial intelligence.

The short answer is, not much. The current game in AI seems to be the following: try to replicate human capabilities as well as possible, regardless of how you do it. As long as an AI program produces the desired output, it does not matter how humanlike its methods are. The end result is much more important than what goes on ‘on the inside,’ even more so in an industry driven by enormous financial stakes.

Good Old Fashioned AI

This approach was already at work in the first wave of AI, also known as symbolic AI or GOFAI (good old-fashioned AI). Starting in the 1950s, the AI pioneers struggled to replicate our ability to do math and play chess, then considered the epitome of human intelligence, without any real concern for how such results were achieved. They simply assumed that this must be how the human mind operates at the most fundamental level: through the logical manipulation of a finite number of symbols.

GOFAI ultimately managed to reach human level in chess. In 1997, IBM's Deep Blue program defeated the reigning world champion, Garry Kasparov, but it did so via brute force, by simply calculating millions of variations in advance. That is obviously not how humans play chess.

Although GOFAI worked well for 'high' cognitive tasks, it was completely incompetent at more 'mundane' ones, such as vision or kinesthetic coordination. As roboticist Hans Moravec famously observed, it is paradoxically easier to replicate the higher functions of human cognition than to endow a machine with the perceptive and mobility skills of a one-year-old. This suggests that symbolic manipulation is not how human intelligence really works.

The Advent of Machine Learning


Since roughly the turn of the millennium, symbolic AI has been supplanted by the approach known as machine learning (ML). One subset of ML that has proved wildly successful is deep learning, which uses layers of artificial neural networks. Loosely inspired by the brain's anatomy, this approach aims to be a better approximation of human cognition. Unlike previous AI systems, these programs are not instructed on how to think. Instead, they are fed huge sets of curated data and develop their own rules for how the data should be interpreted.

For example, instead of teaching an ML algorithm that a cat is a furry mammal with four paws, pointed ears, and so forth, the program is trained on hundreds of thousands of pictures of cats and non-cats, by being ‘rewarded’ or ‘punished’ every time it makes a guess about what’s in the picture. After extensive training, some neural pathways become strengthened, while others are weakened or discarded. The end result is that the algorithm does learn to recognize cats. The flip side, however, is that its human programmers no longer necessarily understand how the conclusions are reached. It is a sort of mathematical magic.
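To make the reward-and-punishment idea concrete, here is a minimal sketch of such a training loop. Everything in it is invented for illustration: real vision systems learn from raw pixels through many layers of artificial neurons, not from three hand-picked features, but the principle of nudging connection weights after every guess is the same.

```python
import random

random.seed(0)

# Hypothetical training set: each "image" is reduced to three invented
# features, say [furriness, ear_pointiness, body_size]; label 1 = cat.
examples = [
    ([0.9, 0.8, 0.2], 1),
    ([0.8, 0.9, 0.3], 1),
    ([0.1, 0.2, 0.9], 0),
    ([0.2, 0.1, 0.8], 0),
]

weights = [random.uniform(-1, 1) for _ in range(3)]
bias = 0.0
lr = 0.5  # how strongly each 'reward'/'punishment' adjusts the weights

for epoch in range(20):                  # repeated exposure to the data
    for features, label in examples:
        guess = 1 if sum(w * x for w, x in zip(weights, features)) + bias > 0 else 0
        error = label - guess            # +1 or -1 acts as reward/punishment
        # Connections that pushed toward a wrong guess are weakened,
        # those that pushed the right way are strengthened.
        weights = [w + lr * error * x for w, x in zip(weights, features)]
        bias += lr * error

predictions = [
    (1 if sum(w * x for w, x in zip(weights, f)) + bias > 0 else 0) == label
    for f, label in examples
]
print(predictions)  # after training, every toy example is classified correctly
```

Note that nobody ever told the program what a cat is; the rule lives implicitly in the final numeric weights, which is precisely why such systems can be hard to interrogate.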

ML algorithms of this kind are behind the impressive successes of contemporary AI. They can recognize objects and faces, spot cancer better than human pathologists, translate text instantly from one language to another, produce coherent prose, or simply converse with us as smart assistants. Does this mean that AI is finally starting to think like us? Not really.


Even when machines manage to achieve human or superhuman level in certain cognitive tasks, they do it in a very different fashion. Humans don't need millions of examples to learn something; they sometimes do fine with as little as one example. Humans can also usually provide explanations for their conclusions, whereas ML programs are often 'black boxes' too complex to interrogate.

More importantly, the notion of common sense is completely lacking in AI algorithms. Even when their average performance is better than that of human experts, the few mistakes they do make reveal a very disturbing lack of understanding on their part. Images that are intentionally perturbed so slightly that the adjustment is imperceptible to humans can still cause algorithms to misclassify them completely. It has been shown, for example, that sticking minuscule white stickers on a Stop sign causes the algorithms used in self-driving vehicles to misclassify it as a Speed Limit 45 sign. When machines fail, they fail badly, and for different reasons than us.
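The stop-sign failure can be illustrated, in drastically simplified form, with a linear classifier. All the numbers below are invented; the point is only the idea behind 'fast-gradient' adversarial attacks: a tiny, targeted nudge to the input, aimed along the directions the model is most sensitive to, flips the verdict even though the input has barely changed.

```python
# Hypothetical weights of an already-trained linear 'cat detector'
# (all numbers invented for illustration).
weights = [1.0, 1.0, -1.0]
bias = -1.4

def classify(x):
    score = sum(w * xi for w, xi in zip(weights, x)) + bias
    return "cat" if score > 0 else "not cat"

image = [0.9, 0.8, 0.2]   # classified as 'cat' (score = +0.1)

# Adversarial nudge: shift every feature by a barely-noticeable 0.05,
# each in the direction that most decreases the score (the sign of its weight).
epsilon = 0.05
attacked = [xi - epsilon * (1 if w > 0 else -1) for xi, w in zip(image, weights)]

print(classify(image), "->", classify(attacked))  # prints: cat -> not cat
```

A human looking at the two inputs would see no meaningful difference, yet the model's answer reverses, because its 'knowledge' is a set of numeric sensitivities rather than an understanding of what a cat or a stop sign is.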

Machine Learning vs Human Intelligence

Perhaps the most important difference between artificial and human intelligence is the former's complete lack of any form of consciousness. As philosophers Thomas Nagel and David Chalmers have argued, there is 'something it is like' to be a human or a bat, although it is very difficult to pinpoint exactly what that feeling is and how it arises. Intuitively, however, we can say that it very likely doesn't feel like anything to be a computer program or a robot, or at least not yet. So far, AI has made significant progress in problem-solving, but it has made zero progress in developing any form of consciousness or 'inside-out-ness.'

Current AI is therefore very different from human intelligence. Although we might notice a growing functional overlap between the two, they differ strikingly in terms of structure, methodology, and some might even say ontology. Artificial and human intelligence might be capable of similar things, but that does not make them similar phenomena. Machines have in many respects already reached human level, but in a very non-humanlike fashion.

For Christian anthropology, such observations are particularly important, because they can inform how we think of the developments in AI and how we understand our own distinctiveness as intelligent beings, created in the image of God. In the next post, we look into the future, imagining what kind of creature an intelligent robot might be, and how humanlike we might expect human-level AI to become.
