Human-level, but not Humanlike: The Strangeness of Strong AI

The emergence of AI opens up exciting new avenues of thought, promising to add some clarity to our understanding of intelligence and of the relation between intelligence and consciousness. For Christian anthropology, observing which aspects of human cognition are easily replicated in machines can be of particular help in refining the theological definition of human distinctiveness and the image of God.

However, by far the most theologically exciting scenario is the possibility of human-level AI, or artificial general intelligence (AGI), the Holy Grail of AI research. AGI would be capable of convincingly replicating human behavior. It could, in principle, pass as human, if it chose to. This is precisely how the Turing Test is designed to work. But how humanlike would a human-level AI really be?

Computer programs have already become capable of impressive things, which, when done by humans, require some of our ‘highest’ forms of intelligence. However, the way AI approaches such tasks is very non-humanlike, as explained in the previous post. If the current paradigm continues its march towards human-level intelligence, what could we expect AGI to be like? What kind of creature might such an intelligent robot be? How humanlike would it be? The short answer is, not much, or even not at all.

The Problem of Consciousness

Philosophically, there is a huge difference between what John Searle calls ‘strong’ and ‘weak’ AI. While strong AI would be a genuine emulation of intelligence, weak AI would be a mere simulation. The two would be virtually indistinguishable on the ‘outside,’ but very different ‘on the inside.’ Strong AI would be someone, a thinking entity endowed with consciousness, while weak AI would be a something, a clockwork machine completely empty on the inside.

It is still too early to know whether AGI will be strong or weak, because we currently lack a good theory of how consciousness arises from inert matter. In philosophy, this is known as “the hard problem of consciousness.” But if current AI applications are any indication, weak AGI is a much more likely scenario than strong AGI. So far, AI has made significant progress in problem-solving, but it has made zero progress in developing any form of consciousness or ‘inside-out-ness.’

Even if AGI does somehow become strong AI (how could we even tell?), there are good reasons to believe that it would be a very alien type of intelligence.

The Mystery of Being Human

As John McCarthy – one of the founding fathers of AI – argues, an AGI would have complete access to its internal states and algorithms. Just think about how weird that is! Humans have a very limited knowledge of what happens ‘on their inside.’ We only see the tip of the iceberg, because only a tiny fraction of our internal processes enter our stream of consciousness. Most information remains unconscious, and that is crucial for how we perceive, feel, and act.

Most of the time we have no idea why we do the things we do, even though we might fabricate compelling, post-factum rationalizations of our behavior. But would we really want to know such things and always act in a perfectly rational manner? Or, even better, would we be friends or lovers with such a hyper-rational person? I doubt it.

Part of what makes us what we are and what makes human life enjoyable is arguably related to the chunk of our mind that is not completely rational. Our whole lives are journeys of self-discovery, and with each experience and relationship we get a better understanding of who we are. That is largely what motivates us to reach beyond our own self and do stuff. Just think of how much of human art is driven precisely by a longing to touch deeper truths about oneself, which are not easily accessible otherwise.

Strong AI could be the opposite of that. Robots might understand their own algorithms much better than we do, with no need for any further self-discovery. They might be able to communicate such information directly as raw data, without needing the approximation/encryption of metaphors. As Ian McEwan’s fictional robot character emphatically declares, most human literature would be completely redundant for such creatures.

The Uncanny Worldview of an Intelligent Robot

Intelligent robots would likely have a very different perception of the world. With access to Bluetooth and WiFi, they would be able to ‘smell’ other connected devices and develop a sort of ‘sixth sense’ of knowing when a particular person is approaching merely from their Bluetooth signature. As roboticist Rodney Brooks shows, robots will soon be able to measure one’s breathing and heart rate without any biometric sensor, simply by analyzing how a person’s physical presence slightly changes the behavior of WiFi signals.

The technology for this already exists, and it could give a robot access to a totally different kind of information about the humans around it, such as their emotional state or health. Similar technologies for detecting changes in the radio field could allow robots to do something akin to echolocation and know whether they are surrounded by wood, stone, or metal. Just imagine how alien a creature endowed with such senses would be!

AGI might also perceive time very differently from us because they would think much faster. The ‘wetware’ of our biological brains constrains the speed at which electrical signals can travel. Electronic brains, however, could enable speeds closer to the ultimate physical limit, the speed of light. Minds running on such faster hardware would also think proportionally faster, making their experience of the passage of time proportionally slower.

If AGI thought ‘only’ ten thousand times faster than humans, a conservative estimate, it would inhabit a completely different world. It is difficult to imagine how such creatures might regard humans, but futurist James Lovelock chillingly estimates that “the experience of watching your garden grow gives you some idea of how future AI systems will feel when observing human life.”

The way AGI is depicted in sci-fi (e.g. Terminator, Ex Machina, or Westworld) might rightly give us existential shivers. But if the predictions above are anywhere near right, then AGI might turn out to be weirder than our wildest sci-fi dreams. AI might reach human-level, but it would most likely be radically non-humanlike.

Is this good or bad news for theological anthropology? How would the emergence of such an alien type of intelligence affect our understanding of humanity and its imago Dei status? The next post, the last one in this four-part series, wrestles head-on with this question.
