Netflix “Eden”: Human Frailty in a Technological Paradise

Recently, my 11-year-old daughter told me she wanted to watch anime. I have watched a few series myself and was a bit concerned about her request. While I have come to really appreciate this art form, I feared that some thematic elements would not be appropriate for her 11-year-old mind. Yet, after watching the first episode of Netflix's Eden, my concerns were allayed and I invited my two oldest (11 and 9) to watch it with me. With only 4 episodes of 25 minutes each, the series makes for a great way to spend a lazy Saturday afternoon. Thankfully, beyond being suitable, there was enough for me to reflect on. In fact, captivating characters and an enchanting soundtrack moved me to tears, making Netflix's Eden a delightful exploration of human frailty.

Here is my review of this heart-warming, beautifully written story.

Official Trailer

A Simple but Compelling Plot

Without giving away much, the story revolves around a simple structure. From the outset we learn that no humans have lived on Earth for 1,000 years. Self-sufficient robots have successfully turned a polluted wasteland into a lush oasis. The first scenes show groups of robots tending and harvesting an apple orchard.

Two of these robots stumble upon an unlikely find: a human child. Preserved in a cryogenic capsule, the toddler tumbles out and wails. The robots are confused and helpless as to how to respond. They quickly identify her as a human bio-form but cannot comprehend what her crying means.

After the initial shock, the toddler turns to the robots and calls them “papa” and “mama,” kicking off the story. The plot develops around the idea of two robots raising a human child on a humanless planet Earth. We also learn that humans are considered a threat and must be surrendered to the authorities. In spite of their programming, the robots choose to hide and protect the girl.

Photo by Bruno Melo on Unsplash

Are Humans Essential for Life to Flourish on Earth?

Even with only 4 episodes, the anime packs quite a philosophical punch. From a theological perspective, the careful observer quickly sees why the show is named after the Biblical garden. It is an allusion to the Genesis story where life begins on earth, yet with a twist. Now Eden is lush and thriving without human interference. It is as if God were recreating earth through technological means. This echoes Francis Bacon’s vision of technology as a way to mitigate the destructive effects of the fall.

Later we learn the planet had become uninhabitable. The robots' creators envisioned a solution that entailed cryogenically freezing a number of humans while the robots worked to restore Earth to its former glory. The plan apparently works, except for the wrinkle of this girl waking up before her assigned time. Just like in the original story of Eden, humans come along to mess it up.

Embedded in this narrative is the provocative question of humanity's ultimate utility for life on the planet. After all, if machines are able to manage the earth flawlessly, why introduce human error? Of course, the flip side of the question is the assumption that machines themselves are free of error. Putting that aside, the question is still valid.

Photo by Alesia Kazantceva on Unsplash

Human Frailty and Renewal

Watching the story unfold, I could not help but reflect on Dr. Dorabantu’s past post on how AI could help us see the image of God in our vulnerability. That is, learning that robots could surpass us in rationality, we would have to attribute our uniqueness not to a narrow view of intelligence but to our ability to love. The anime seems to get at the heart of this question, and it gets there by using AI. It is in the robots’ journey to understand humanity's essence that we learn about what makes us unique in creation. In this way, the robots become mirrors that reflect our image back to us.

Another parallel here is with the biblical story of Noah. In a world destroyed by pollution and revived through technological ingenuity, the ark is no longer a boat but a capsule. Humans are preserved by pausing the aging process in their bodies, a clear nod to Transhumanism. The combination of cryogenics and advanced AI can preserve human life on earth, albeit for a limited number of humans.

I left the story feeling grateful for our imperfect humanity. It is unfortunate that Christian theology, in an effort to paint a perfect God, has in turn made human vulnerability seem undesirable. Without denying our potential for harm and destruction, namely our sinfulness, it is time Christian theology embraced and celebrated human vulnerability as part of our Imago Dei. In this way, Netflix's Eden helps put human frailty back in the conversation.

How AI and Faith Communities Can Empower Climate Resilience in Cities

AI technologies continue to empower humanity for good. In a previous blog, we explored how AI was empowering government agencies to fight deforestation in the Amazon. In this blog, we discuss the role AI is playing in building climate resilience in cities. We will also look at how faith communities can use AI-enabled microgrids to serve communities hit by climate disasters.

A Changing Climate Puts Cities in Harm’s Way

I recently listened to an insightful Technopolis podcast on how cities are preparing for an increased incidence of natural disasters. The episode discussed the manifold ways city leaders are using technology to prepare for, predict, and mitigate the impact of climate events. This is a complex challenge that requires a combination of good governance, technological tools, and planning to tackle.

Climate resilience is not just about decreasing our carbon footprint; it is also about preparing for the increased incidence of extreme weather. Whether it is fires in California, typhoons in East Asia, or severe droughts in Northern Africa, the planet is in for a bumpy ride in the coming decades. These events will also exacerbate existing problems such as air pollution, water scarcity, and heat-related illness in urban areas. Governments and civil society groups need to start bracing for this reality by taking bold preventive steps in the present.

Cities illustrate the costs of delaying action on climate change by enshrining resource-intensive infrastructure and behaviors. The choices cities make today will determine their ability to handle climate change and reap the benefits of resource-efficient growth. Currently, 51% of the world’s population lives in cities, and within a generation an estimated two-thirds of the world’s population will live in cities. Hence, addressing cities’ vulnerabilities will be crucial for human life on the planet.

Photo by Karim MANJRA on Unsplash

AI and Climate Resilience

AI is a powerful tool for building climate resilience. We can use it to understand our current reality better, predict future weather events, create new products and services, and minimize human impact. By doing so, we can not only save and improve lives but also create a healthier world and a more efficient economy.

Deep learning, for example, enables better predictions and estimates of climate change than ever before. This information can be used to identify major vulnerabilities and risk zones. In the case of fires, for example, better prediction can not only identify risk areas but also help us understand how a fire will spread in those areas. As you can imagine, predicting the trajectory of a fire is a complex task involving a plethora of variables related to wind, vegetation, humidity, and other factors.
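To make the idea concrete, here is a deliberately simplified sketch of how such variables might be combined into a spread-risk score. The function name, weights, and zone data are all invented for illustration; real wildfire models are trained on large datasets and are far more complex.

```python
import math

def spread_risk(wind_kmh, dryness, humidity_pct):
    """Toy logistic score (between 0 and 1) for how likely a fire is to
    spread into a zone. The weights are made up for illustration only."""
    z = 0.05 * wind_kmh + 2.0 * dryness - 0.04 * humidity_pct
    return 1 / (1 + math.exp(-z))

# Rank two hypothetical zones so responders could prioritize the riskier one.
zones = {
    "ridge":  spread_risk(wind_kmh=40, dryness=0.9, humidity_pct=15),  # windy, dry brush
    "valley": spread_risk(wind_kmh=10, dryness=0.3, humidity_pct=60),  # calm, humid
}
riskiest = max(zones, key=zones.get)
```

The point is not the particular formula but the workflow: turn measurable conditions into a comparable score, then rank areas by it so limited resources go where the danger is greatest.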

The Gifts of Satellite Imagery

Another area in which AI is becoming essential is satellite imagery. Research led by Google, the Mila Institute, and the German Aerospace Center harnesses AI to develop and make sense of extensive datasets on Earth. This in turn empowers us to better understand climate change from a global perspective and to act accordingly.

Combining integrated global imagery with sophisticated modeling capabilities gives communities at risk precious advance warning to prepare. Governments can work with citizens living in these areas to strengthen their ability to mitigate extreme climate impacts. This will become particularly salient for coastal communities that are expected to see their shores recede in the coming decades.

This is just one example of how AI can play a prominent role in climate resilience. A recent paper titled “Tackling Climate Change with Machine Learning” identified 13 areas where ML can be applied. They include, but are not limited to, energy consumption, CO2 removal, education, solar energy, engineering, and finance. Opportunities in these areas include the creation of new low-carbon materials, better monitoring of deforestation, and cleaner transport.

Photo by Biel Morro on Unsplash

Microgrids and Faith Communities

If climate change is the defining test of our generation, then technology alone will not be enough. As much as AI can help find solutions, the threat calls for collective action at unprecedented levels. This is both a challenge and an opportunity for faith communities seeking to re-imagine a future where their relevance surpasses the confines of their pews.

Thankfully, faith communities already play a crucial role in disaster relief. Their buildings often double as shelters and service centers when calamity strikes. Yet, as climate-related events become more frequent, these institutions must expand the range of services they offer to affected populations.

One example is the creation of AI-managed microgrids. These are small, easily controllable electricity systems consisting of one or more generating units connected to nearby users and operated locally. Microgrids contain all the elements of a complex energy system, but because they maintain a balance between production and consumption, they can operate independently of the grid. These systems work well with renewable energy sources, further decreasing our reliance on fossil fuels.
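The balance between production and consumption described above can be sketched as a toy dispatch rule. Everything here is hypothetical, the function, its parameters, and the numbers; real microgrid controllers handle forecasting, pricing, and safety constraints that this sketch ignores.

```python
def dispatch(solar_kw, load_kw, battery_kwh, capacity_kwh=50.0):
    """Toy hourly microgrid rule: charge the battery with any solar surplus,
    discharge it to cover any deficit. Returns (new_battery_kwh, net_kw),
    where positive net is exportable excess and negative net is unmet load.
    All figures are illustrative, not from a real controller."""
    surplus = solar_kw - load_kw
    if surplus >= 0:
        charged = min(surplus, capacity_kwh - battery_kwh)
        return battery_kwh + charged, surplus - charged
    drawn = min(-surplus, battery_kwh)
    return battery_kwh - drawn, surplus + drawn

# A sunny midday hour tops off a nearly full battery and leaves excess to sell;
# an evening hour drains the stored energy to keep the lights on.
sunny = dispatch(solar_kw=30, load_kw=10, battery_kwh=45)    # -> (50.0 kWh, 15 kW excess)
evening = dispatch(solar_kw=0, load_kw=10, battery_kwh=8)    # -> (0 kWh, 2 kW unmet)
```

Even this crude rule shows why microgrids pair naturally with houses of worship: the same logic that keeps the building powered during an outage also identifies surplus energy that could be sold back in normal times.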

When climate disaster strikes, one of the first things to go is electricity. What if houses of worship, equipped with microgrids, became the places to go for those without power? When the grid fails, houses of worship could become a lifeline for the neighborhood, helping impacted populations communicate with family, charge their phones, and find shelter from cold nights. Furthermore, they could sell their excess energy on the market, finding new sources of funding for their spiritual mission.

Microgrids in churches, synagogues, and mosques – that’s an idea the world can believe in. It is also a great step towards climate resilience.

Klara and the Sun: Robotic Faith for an Unbelieving Humanity

In his first novel since winning the Nobel Prize in Literature, Kazuo Ishiguro explores the world through the lens of an AI droid. The novel retains many of the features that made his previous bestsellers famous, such as concentrating on confined spaces, building up emotional tension, and fragmented storytelling. All of this gains fresh relevance when applied to the sci-fi genre, and more specifically to the relationship between humanity and sentient technology. I’ll do my best to keep you from any spoilers, as I suspect this will become a motion picture in the near future. Suffice it to say that Klara and the Sun is a compelling statement for robotic faith. How? Read on.

Introducing the Artificial Friend

Structured into six long sections, the book starts by introducing us to Klara. She is an AF (Artificial Friend), a humanoid equipped with artificial intelligence and designed commercially to be a human companion. At least, this is what we can deduce from the first pages, as no clear explanation is given. In fact, this is a key trait of the story: we learn about the world along with Klara. She is the one and only narrator throughout the novel.

Klara is shaped like a short woman with brown hair. The story starts in the store where she is on display for sale. There she interacts with customers, other AFs, and “Manager”, the human responsible for the store. All humans are referred to by their capitalized job or function. Otherwise, they are classified by their appearance or something peculiar to them.

The first 70 pages take place inside the store where she is on display. We learn about her personality, the fact that she is very observant, and what her peer AFs think of her. At times, she is placed near the front window of the store. That is when we get a glimpse of the outside world. This is probably where Ishiguro’s brilliance shines through, as he carefully creates a worldview that is unique, compelling, and humane, yet in many ways true to a robotic personality. The reader slowly grows fond of her as she immerses us in her whimsical perspective of the outside world. To her, a busy city street is a rich mixture of sights with details we most often take for granted.

We also learn how Klara processes emotions and even has a will of her own. At times she mentions feeling fear. She is also very sensitive to conflict, trying to avoid it at all costs. With that said, she is no pushover. At one point she sabotages a customer's attempt to buy her because she had committed herself to another prospect. She also seems to stand out compared to the other AFs, drawing both contempt and admiration from them.

Book cover from Amazon.com

The World Through Klara’s Eyes

She is sensitive, captivating, and always curious. Her observations are unique and honest. She brings together the innocence of a child with the mathematical ability of a scientist. This often leads to some quirky observations as she watches the world unfold in front of her. In one instance, she describes a woman as “food-blender-shaped.”

Klara also has an acute ability to detect complex emotions in faces. In this way, she is able to peer through the crevices of the body and see the soul. In one instance, she spots how a child is smiling at her AF while her eyes portray anger. When observing a fight, she can see the intensity of anger in the men’s faces, describing them as horrid shapes, as if they were no longer human. When seeing a couple embrace, she captures both the joy and the pain of that moment and struggles to understand how it could be so.

This uncanny ability to read human emotion becomes crucial when Klara settles into her permanent home. Being a quiet observer, she is able to understand the subtle, unspoken dynamics of family relationships. In some instances, she could see the love between mother and daughter even as they argued vehemently. She could see through the housekeeper’s hostility towards her, reading it not as a threat but as concern. In this way, her view of humans tended to err on the side of charity rather than malice.

Though a keen observer of humans, it is her relationship with the sun that drives the narrative forward. From the first pages, Klara notices the presence of sun rays in most situations. She will usually start her description of a scene by emphasizing how the sun's rays were entering a room. We quickly learn that the sun is her main source of energy and nourishment. Hence it is not surprising that it looms so large in her worldview.

Yet, Ishiguro takes this relationship further. Similar to ancient humans, Klara develops a religious-like devotion to the sun. The star is not only her source of nourishment but becomes a source of meaning and a god-like figure that she looks to when in fear or in doubt.

That is when the novel gets theologically interesting.

Robotic Faith and Hope

As the title suggests, the sun plays a central role in Klara’s universe. Its role is not only physiological, as she runs on solar energy, but also spiritual. This nods towards a religious relationship that starts through observation. Already understanding the power of the sun to give her energy, she witnesses how the sun restores a beggar and his dog back to health. Witnessing this event becomes Klara’s epiphany of the healing powers of the sun. She holds that memory dear, and it becomes an anchor of hope for her later in the book when she realizes that her owner is seriously ill.

Klara develops a deep devotion toward the sun and like the ancients, she starts praying to it. The narrative moves forward when Klara experiences theophanies worthy of human awe. Her pure faith is so compelling that the reader cannot help but hope along with her that what she believes is true. In this way, Klara points us back to the numinous.

Her innocent and captivating faith has an impact on the human characters of the novel. For some reason, they start hoping for the best even when there is no reason to do so. In spite of overwhelming odds, they start seeing a light at the end of the tunnel. Some of them, in this case the men in the novel, willingly participate in her religious rites without understanding the rationale behind her actions. Yet, unlike human believers, who often like to proselytize, she keeps her faith secret from all. In fact, secrecy is part of her religious relationship with the sun. In this way, she invites humans to transcend their reason and step into a child-like faith.

This reminds me of a previous blog where I explore this idea of pure faith and robots. But I digress.

Conclusion

I hope this first part of the review sparks your interest in reading the novel. It beautifully explores how AI can help us find faith again. Certainly, we are still decades away from the kind of AI that Ishiguro portrays in this book. Yet, like most works of science fiction, it helps us extrapolate present directions so we can reflect on future possibilities.

In contrast to the dominant narrative of “robots trying to kill us,” the author opts for one that highlights the possibility that they can reflect the best in us. As they do so, they can change us into better human beings rather than allowing us to devolve into our worst vices. Consequently, Ishiguro gives us a vivid picture of how technology can work towards human flourishing.

In the next blog, I will explore the human world in which Klara lives. There are some interesting warnings and rich reflections in the dystopian situation described in the book. While our exposure to it is limited (this is perhaps the one part I wish the author had expanded a bit more), we do get enough to ponder the impact of emerging technologies on our society. This is especially salient for a digital-native generation that is learning to send tweets before its first kiss.

Vulnerable like God: Perfect Machines and Imperfect Humans

This four-part series started with the suggestion that AI can be of real help to theologians, in their attempt to better understand what makes humans distinctive and in the image of God. We have since noted how different machine intelligence is from human intelligence, and how alien-like an intelligent robot could be ‘on the inside’, in spite of its humanlike outward behavior.

For theological anthropology, the main takeaway is that intelligence – understood as rationality and problem-solving – is not the defining feature of human nature. We’ve long been the most intelligent and capable creature in town, but that might soon change, with the emergence of AI. What makes us special and in the image of God is thus not some intellectual capacity (in theology, this is known as the substantive interpretation), nor something that we can do on God’s behalf (the functional interpretation), because AI could soon surpass us in both respects.

The interpretation of the imago Dei that seems to account best for the scenario of human-level AI is the relational one. According to it, the image of God is our special I-Thou relationship with God, the fact that we can be an authentic Thou, capable of receiving God’s love and responding to it. We exist only because God calls us into existence. Our relationship with God is therefore the deepest foundation of our ontology. Furthermore, we are deeply relational beings. Our growth and fulfillment can only be realized in authentic personal relationships with other human beings and with God.

AI and Authentic Relationality

It is not surprising that the key to human distinctiveness is profoundly relational. Alan Turing tapped right into this intuition when he designed his eponymous test for AI. Turing’s test is, in fact, a measurement of AI’s ability to relate like us. Unsurprisingly, the most advanced AIs still struggle when it comes to simulating relationships, and none has yet passed the Turing test.

But even if a program someday convincingly relates to humans, will that be an authentic relationship? We’ve already seen that human-level AI will be anything but humanlike ‘on the inside.’ Intelligent robots might become capable of speaking and acting like us, but they will be completely different from us in terms of their internal motivation or meaning systems. What kind of relationship could there be between us and them, when we’d have so little in common?

We long for other humans precisely because we are not self-sufficient. Hence, we seek others precisely because we want to discover them and our own selves through relationships. We fall in love because we are not completely rational. Human-level AI will be the opposite of that: self-sufficient, perfectly rational, and with a quasi-complete knowledge of itself.

The Quirkiness of Human Intelligence

Our limitations are instrumental for the kind of relationships that we have with each other. An argument can thus be made that a significant degree of cognitive and physical vulnerability is required for authentic relationality to be possible. There can be no authentic relationship without the two parties intentionally making themselves vulnerable to each other, opening to one another outside any transactional logic.

Photo by Duy Pham on Unsplash

A hyper-rational being would likely have serious difficulty engaging fully in relationships and making itself totally vulnerable to the loved other. It surely does not sound very smart.

Nevertheless, we humans do this tirelessly, often at high cost, perhaps precisely because we are not as intelligent and goal-oriented as AI. Although it appears illogical, it is such experiences that give meaning and fulfillment to our lives.

From an evolutionary perspective, it is puzzling that our species evolved to be this way. Evolution promotes organisms that are better at adapting to the challenges of their environment, thus at solving practical survival and reproduction problems. It is therefore unsurprising that intelligence-as-problem-solving is a common feature of evolved organisms, and this is precisely the direction in which AI seems to develop.

What is strange in the vast space of possible intelligences is our quirky type of intelligence, one heavily optimized for relationship, marked by a bizarre thirst for meaning, and plagued by a surprising degree of irrationality. In the previous post I called out the strangeness of strong AI, but it is we who seem to be the strange ones. However, it is precisely this kind of intellectual imperfection, or vulnerability, that enables us to dwell in the sort of Goldilocks zone of intelligence where personal relationships and the image of God are possible.

Vulnerability, God, Humans and Robots

Photo by Jordan Whitt on Unsplash

If vulnerability is so important for the image of God as relationship, does this imply that God too is vulnerable? Indeed, that seems to be the conclusion, and it is not surprising at all, especially when we think of Christ. Through God’s incarnation, suffering, and voluntary death, a deeply vulnerable side of the divine has been revealed to us. God is not an indifferent creator of the world, nor a dispassionate, almighty, all-intelligent ruler. God cares deeply for creation, to the extent of committing the supreme self-sacrifice to redeem it (Jn. 3:16).

This means that we are most like God not when we are at our smartest or strongest, but when we engage in this kind of hyper-empathetic, though not entirely logical, behavior.

Compared to AI, we might look stupid, irrational, and outdated, but it is paradoxically due to these limitations that we are able to cultivate our divine likeness through loving, authentic, personal relationships. If looking at AI teaches theologians anything, it is that our limitations are just as important as our capabilities. We are vulnerable, just as our God has revealed himself to be vulnerable. Being like God does not necessarily mean being more intelligent, especially when intelligence is seen as rationality or problem-solving.

Christ – whether considered historically or symbolically – shows that what we value most about human nature are traits like empathy, meekness, and forgiveness, which are eminently relational qualities. Behind such qualities are ways of thinking rooted more in the irrational than in the rational parts of our minds. We should then wholeheartedly join the apostle Paul in “boast[ing] all the more gladly about [our] weaknesses […] for when [we] are weak, then [we] are strong” (2 Cor. 12: 9-10).

Human-level, but not Humanlike: The Strangeness of Strong AI

The emergence of AI opens up exciting new avenues of thought, promising to add some clarity to our understanding of intelligence and of the relation between intelligence and consciousness. For Christian anthropology, observing which aspects of human cognition are easily replicated in machines can be of particular help in refining the theological definition of human distinctiveness and the image of God.

However, by far the most theologically exciting scenario is the possibility of human-level AI, or artificial general intelligence (AGI), the Holy Grail of AI research. AGI would be capable of convincingly replicating human behavior. It could, in principle, pass as human if it chose to. This is precisely how the Turing Test is designed to work. But how humanlike would a human-level AI really be?

Computer programs have already become capable of impressive things, which, when done by humans, require some of our ‘highest’ forms of intelligence. However, the way AI approaches such tasks is very non-humanlike, as explained in the previous post. If the current paradigm continues its march towards human-level intelligence, what could we expect AGI to be like? What kind of creature might such an intelligent robot be? How humanlike would it be? The short answer is, not much, or even not at all.

The Problem of Consciousness

Philosophically, there is a huge difference between what John Searle calls ‘strong’ and ‘weak’ AI. While strong AI would be an emulation of intelligence, weak AI would be a mere simulation. The two would be virtually indistinguishable on the ‘outside,’ but very different ‘on the inside.’ Strong AI would be someone, a thinking entity endowed with consciousness, while weak AI would be a something, a clockwork machine completely empty on the inside.

It is still too early to know whether AGI will be strong or weak, because we currently lack a good theory of how consciousness arises from inert matter. In philosophy, this is known as “the hard problem of consciousness.” But if current AI applications are any indication, weak AGI is a much more likely scenario than strong AGI. So far, AI has made significant progress in problem-solving, but it has made zero progress in developing any form of consciousness or ‘inside-out-ness.’

Even if AGI does somehow become strong AI (how could we even tell?), there are good reasons to believe that it would be a very alien type of intelligence.

Photo by Maximalfocus on Unsplash

The Mystery of Being Human

As John McCarthy – one of the founding fathers of AI – argues, an AGI would have complete access to its internal states and algorithms. Just think about how weird that is! Humans have a very limited knowledge of what happens ‘on their inside.’ We only see the tip of the iceberg, because only a tiny fraction of our internal processes enter our stream of consciousness. Most information remains unconscious, and that is crucial for how we perceive, feel, and act.

Most of the time we have no idea why we do the things we do, even though we might fabricate compelling post-factum rationalizations of our behavior. But would we really want to know such things and always act in a perfectly rational manner? Or, even better, would we be friends or lovers with such a hyper-rational person? I doubt it.

Part of what makes us what we are and what makes human life enjoyable is arguably related to the chunk of our mind that is not completely rational. Our whole lives are journeys of self-discovery, and with each experience and relationship we get a better understanding of who we are. That is largely what motivates us to reach beyond our own self and do stuff. Just think of how much of human art is driven precisely by a longing to touch deeper truths about oneself, which are not easily accessible otherwise.

Strong AI could be the opposite of that. Robots might understand their own algorithms much better than we do, without any need to discover anything further. They might be able to communicate such information directly as raw data, without needing the approximation/encryption of metaphors. As Ian McEwan’s fictitious robot character emphatically declares, most human literature would be completely redundant for such creatures.

The Uncanny Worldview of an Intelligent Robot

Intelligent robots would likely have a very different perception of the world. With access to Bluetooth and WiFi, they would be able to ‘smell’ other connected devices and develop a sort of ‘sixth sense’ of knowing when a particular person is approaching merely from their Bluetooth signature. As roboticist Rodney Brooks shows, robots will soon be able to measure one’s breathing and heart rate without any biometric sensor, simply by analyzing how a person’s physical presence slightly changes the behavior of WiFi signals.

The technology for this already exists, and it could enable the robot to have access to a totally different kind of information about the humans around, such as their emotional state or health. Similar technologies of detecting changes in the radio field could allow the robots to do something akin to echolocation and know if they are surrounded by wood, stone, or metal. Just imagine how alien a creature endowed with such senses would be!

AGI might also perceive time very differently from us, because it would think much faster. The ‘wetware’ of our biological brains constrains the speed at which electrical signals can travel. Electronic brains, however, could enable speeds closer to the ultimate physical limit, the speed of light. Minds running on such faster hardware would also think proportionally faster, making their experience of the passage of time proportionally slower.

If AGI thought ‘only’ ten thousand times faster than humans, a conservative estimate, it would inhabit a completely different world. It is difficult to imagine how such creatures might regard humans, but futurist James Lovelock chillingly estimates that “the experience of watching your garden grow gives you some idea of how future AI systems will feel when observing human life.”

The way AGI is depicted in sci-fi (e.g. Terminator, Ex Machina, or Westworld) might rightly give us existential shivers. But if the predictions above are anywhere near right, then AGI might turn out to be weirder than our wildest sci-fi dreams. AI might reach human-level, but it would most likely be radically non-humanlike.

Is this good or bad news for theological anthropology? How would the emergence of such an alien type of intelligence affect our understanding of humanity and its imago Dei status? The next post, the last one in this four-part series, wrestles head-on with this question.

How Does AI Compare with Human Intelligence? A Critical Look

In the previous post I argued that AI can be of tremendous help in our theological attempt to better understand what makes humans distinctive and in the image of God. But before jumping to theological conclusions, it is worth spending some time trying to understand what kind of intelligence machines are currently developing, and how much similarity there is between human and artificial intelligence.

Image by Gordon Johnson from Pixabay

The short answer is, not much. The current game in AI seems to be the following: try to replicate human capabilities as well as possible, regardless of how you do it. As long as an AI program produces the desired output, it does not matter how humanlike its methods are. The end result is much more important than what goes on ‘on the inside,’ even more so in an industry driven by enormous financial stakes.

Good Old Fashioned AI

This approach was already at work in the first wave of AI, also known as symbolic AI or GOFAI (good old-fashioned AI). Starting in the 1950s, the AI pioneers struggled to replicate our ability to do math and play chess, then considered the epitome of human intelligence, without any real concern for how such results were achieved. They simply assumed that this must be how the human mind operates at the most fundamental level: through the logical manipulation of a finite number of symbols.

GOFAI ultimately managed to reach human level in chess. In 1997, IBM’s Deep Blue program defeated the world champion, Garry Kasparov, but it did so via brute force, by simply calculating millions of variations in advance. That is obviously not how humans play chess.

Although GOFAI worked well for ‘high’ cognitive tasks, it was completely incompetent at more ‘mundane’ ones, such as vision or kinesthetic coordination. As roboticist Hans Moravec famously observed, it is paradoxically easier to replicate the higher functions of human cognition than to endow a machine with the perceptual and mobility skills of a one-year-old. What this means is that symbolic thinking is not how human intelligence really works.

The Advent of Machine Learning

Photo by Kevin Ku on Unsplash

What has replaced symbolic AI since roughly the turn of the millennium is the approach known as machine learning (ML). One subset of ML that has proved wildly successful is deep learning, which uses layers of artificial neural networks. Loosely inspired by the brain’s anatomy, this approach aims to be a better approximation of human cognition. Unlike previous AI versions, these programs are not instructed on how to think. Instead, they are fed huge sets of selected data and left to develop their own rules for how the data should be interpreted.

For example, instead of teaching an ML algorithm that a cat is a furry mammal with four paws, pointed ears, and so forth, the program is trained on hundreds of thousands of pictures of cats and non-cats, by being ‘rewarded’ or ‘punished’ every time it makes a guess about what’s in the picture. After extensive training, some neural pathways become strengthened, while others are weakened or discarded. The end result is that the algorithm does learn to recognize cats. The flip side, however, is that its human programmers no longer necessarily understand how the conclusions are reached. It is a sort of mathematical magic.
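The ‘reward and punish’ training described above can be sketched with a single artificial neuron. The code below is a deliberately tiny illustration, not a real deep network, and the ‘cat features’ are invented for the example: the point is only that the program is never told what a cat is; it adjusts its own weights after every wrong guess.

```python
# Toy supervised learning: one artificial neuron learns to separate
# made-up 'cat' feature vectors from 'non-cat' ones. Real deep networks
# stack millions of such units, but the principle is the same:
# nudge the weights after every wrong guess.

# Hypothetical features: (furriness, ear pointiness), scaled 0..1
training_data = [
    ((0.9, 0.8), 1),  # cat
    ((0.8, 0.9), 1),  # cat
    ((0.2, 0.1), 0),  # non-cat
    ((0.1, 0.3), 0),  # non-cat
]

weights = [0.0, 0.0]
bias = 0.0

def predict(x):
    score = weights[0] * x[0] + weights[1] * x[1] + bias
    return 1 if score > 0 else 0

# Training loop: each wrong guess 'punishes' the weights with a correction,
# strengthening some pathways and weakening others
for _ in range(20):
    for x, label in training_data:
        error = label - predict(x)        # 0 if right, +/-1 if wrong
        weights[0] += 0.1 * error * x[0]
        weights[1] += 0.1 * error * x[1]
        bias += 0.1 * error

print(predict((0.85, 0.9)))  # -> 1 (classified as cat)
print(predict((0.15, 0.2)))  # -> 0 (classified as non-cat)
```

Notice that after training, the ‘knowledge’ lives entirely in a few numbers that nobody wrote by hand; in a network with millions of weights, this is precisely why the programmers can no longer explain each conclusion.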

ML algorithms of this kind are behind the impressive successes of contemporary AI. They can recognize objects and faces, spot cancer better than human pathologists, translate text instantly from one language to another, produce coherent prose, or simply converse with us as smart assistants. Does this mean that AI is finally starting to think like us? Not really.

When machines fail, they fail badly, and for different reasons than us.

Even when machines manage to achieve human or super-human level in certain cognitive tasks, they do it in a very different fashion. Humans don’t need millions of examples to learn something; they sometimes do very well with as little as one example. Humans can also usually provide explanations for their conclusions, whereas ML programs are often ‘black boxes’ that are too complex to interrogate.

More importantly, the notion of common sense is completely lacking in AI algorithms. Even when their average performance is better than that of human experts, the few mistakes they do make reveal a very disturbing lack of understanding on their part. Images that are intentionally perturbed so slightly that the adjustment is imperceptible to humans can still cause algorithms to misclassify them completely. It has been shown, for example, that sticking small stickers on a Stop sign causes the AI algorithms used in self-driving vehicles to misclassify it as a Speed Limit 45 sign. When machines fail, they fail badly, and for different reasons than us.
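This fragility can be seen even in a toy linear classifier. The sketch below uses invented weights and inputs, so it is an illustration of the principle rather than a real attack on a vision system: a nudge far too small for a human to care about is aimed precisely where the model is most sensitive, and the verdict flips.

```python
# Toy 'adversarial example': perturb each input feature slightly in the
# direction that most lowers the classifier's score, flipping its decision
# while leaving the input almost unchanged. All numbers are invented for
# illustration; real attacks do the same on images with millions of pixels.

weights = [0.6, -0.4, 0.5]   # a fixed, already-'trained' linear model
bias = -0.25

def classify(x):
    score = sum(w * xi for w, xi in zip(weights, x)) + bias
    return "stop_sign" if score > 0 else "speed_limit"

x = [0.5, 0.4, 0.3]          # correctly classified as a stop sign...
epsilon = 0.06               # ...then perturbed by a tiny amount

# Move each feature a small step *against* the sign of its weight
x_adv = [xi - epsilon * (1 if w > 0 else -1) for w, xi in zip(weights, x)]

print(classify(x))      # -> stop_sign
print(classify(x_adv))  # -> speed_limit
```

A human comparing `x` and `x_adv` would call them practically identical; the machine, lacking any understanding of what a stop sign *is*, sees two different things.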

Machine Learning vs Human Intelligence

Perhaps the most important difference between artificial and human intelligence is the former’s complete lack of any form of consciousness. In the words of philosophers Thomas Nagel and David Chalmers, “it feels like something” to be a human or a bat, although it is very difficult to pinpoint exactly what that feeling is and how it arises. However, we can intuitively say that very likely it doesn’t feel like anything to be a computer program or a robot, or at least not yet. So far, AI has made significant progress in problem-solving, but it has made zero progress in developing any form of consciousness or ‘inside-out-ness.’

Current AI is therefore very different from human intelligence. Although we might notice a growing functional overlap between the two, they differ strikingly in terms of structure, methodology, and some might even say ontology. Artificial and human intelligence might be capable of similar things, but that does not make them similar phenomena. Machines have in many respects already reached human level, but in a very non-humanlike fashion.

For Christian anthropology, such observations are particularly important, because they can inform how we think of the developments in AI and how we understand our own distinctiveness as intelligent beings, created in the image of God. In the next post, we look into the future, imagining what kind of creature an intelligent robot might be, and how humanlike we might expect human-level AI to become.

Artificial Intelligence: The Disguised Friend of Christian Anthropology

AI is making one significant leap after another. Computer programs can nowadays convincingly converse with us, generate plausible prose, diagnose disease better than human experts, and totally trash us in strategy games like Go and chess, once considered the epitome of human intelligence. Could they one day reach human-level intelligence? It would be extremely unwise to discount such a possibility without very good reasons. Time, after all, is on AI’s side, and the kind of things that machines are capable of today used to be seen as quasi-impossible just a generation ago.

How could we possibly speak of human distinctiveness when robots become indistinguishable from us?

The scenario of human-level AI, also known as artificial general intelligence (AGI), would be a game-changer for every aspect of human life and society, but it would raise particularly difficult questions for theological anthropology. Since the dawn of the Judeo-Christian tradition, humans have perceived themselves as creatures unlike any other. The very first chapter of the Bible tells us that we are special, because only we of all creatures are created in the image of God (imago Dei). However, the Copernican revolution showed us that we are not the center of the universe (not literally, at least), and the Darwinian revolution revealed that we are not ontologically different from non-human animals. AGI is set to deliver the final blow, by conquering the last bastion of our distinctiveness: our intelligence.

By definition, AGI would be capable of doing anything that a standard human can do, at a similar or superior level. How could we possibly speak of human distinctiveness when robots become indistinguishable from us? Christian anthropology would surely be doomed, right? Well, not really; quite the contrary, actually. Instead of rendering us irrelevant and ordinary, AI could in fact represent an unexpected opportunity to better understand ourselves and what makes us in God’s image.

Science’s Contribution to the Imago Dei

To explain why, it is useful to step back a little and acknowledge how much the imago Dei theology has benefitted historically from an honest engagement with the science of its time. Based solely on the biblical text, it is impossible to decide what the image of God is supposed to mean exactly. The creation story in Genesis 1 tells us that only humans are created in the image and likeness of God, but little else about what the image means. The New Testament does not add much, except for affirming that Jesus Christ is the perfect image. Ever since Patristic times, Christian anthropology has constantly wrestled with how to define the imago Dei, without much success or consensus.

The obvious way to tackle the question of our distinctiveness is to examine what differentiates us from the animals, the only others with which we can meaningfully compare ourselves. For most of Christian history, this difference has been located in our intellectual capacities, an approach heavily influenced by Aristotelian philosophy, which defined the human as the rational animal. But then came Darwin and showed us that we are not as different from the animals as we thought we were.

Theologian Aubrey Moore famously said that Darwin “under the guise of a foe, did the work of a friend” for Christianity.

Furthermore, the following century and a half of ethology and evolutionary science revealed that our cognitive capacities are not bestowed upon us from above. Instead, they are rooted deep within our evolutionary history, and most of them are shared with at least some of the animals. If there is no such thing as a uniquely human capacity, then surely we were wrong all along to regard ourselves as distinctive, right?

Not quite. Theologian Aubrey Moore famously said that Darwin “under the guise of a foe, did the work of a friend” for Christianity. Confronted with the findings of evolutionary science, theologians were forced to abandon the outdated Aristotelian model of human distinctiveness and look for more creative ways to define the image of God. Instead of equating the image with a capacity that humans have, post-Darwinian theology regards the imago Dei in terms of something we are called to do or to be.

Defining the Imago Dei

Some theologians interpret the image functionally, as our election to represent God in the universe and exercise stewardship over creation. Others go for a relational interpretation, defining the image through the prism of the covenantal ‘I-Thou’ relationship that we are called to have with God, which is the foundation of human existence. To be in the image of God is to be in a personal, authentic relationship with God and with other human beings. Finally, there are others who interpret the imago Dei eschatologically, as a special destiny for human beings, a sort of gravitational pull that directs us toward existential fulfilment in the fellowship with God, in the eschaton. Which of these interpretations is the best? Hard to say. Without going into detail, let’s just say that there are good theological arguments for each of them.

If purely theological debate does not produce clear answers, we might then try to compare ourselves with the animals. This, though, does not lead us very far either. Although ‘technically’ we are not very different from the animals, and we share with them similar bodily and cognitive structures, in practical terms the difference is huge. Our mental lives, our societies, and our achievements are so radically different from theirs that it is actually impossible to pinpoint just one dimension that represents the decisive difference. Animals are simply no match for us. This is good news for human distinctiveness, but it also means that we might be stuck in a never-ending theological debate on how to interpret the image of God, with so many options on our hands.

How Can AI help Define Who We Are?

This is where the emergence of human-level AI can be a game-changer. For the first time, we would be faced with the possibility of an equal or superior other, one that could potentially (out)match us in everything, from our intellectual capacities, to what we can do in the world, our relational abilities, or the complexity of our mental lives. Instead of agonizing about AI replacing us or rendering us irrelevant, we could relish the opportunity to better understand our distinctiveness through the insights brought about by the emergence of this new other.

The hypothesis of AGI might present theologians with an extraordinary opportunity to narrow down their definitions of human distinctiveness and the image of God. Looking at what would differentiate us from human-level AI, if indeed anything at all, may provide just the right amount of conceptual constraint needed for a better definition of the imago Dei. In this respect, our encounter with AI might prove to be our best shot at comparing ourselves with a different type of intelligence, apart from maybe the possibility of ever finding extra-terrestrial intelligence in the universe.


Dr. Marius Dorobantu is a research associate in science & theology at VU Univ. Amsterdam (NL). His PhD thesis (2020, Univ. of Strasbourg, FR) analysed the potential challenges of human-level AI for theological anthropology. The project he’s currently working on, funded by the Templeton WCF within the “Diverse Intelligences” initiative, is entitled ‘Understanding spiritual intelligence: psychological, theological, and computational approaches.’

How to Integrate the Sacred with the Technical: an AI worldview

At first glance, AI and theology may sound like strange bedfellows. After all, what does technology have to do with spirituality? In our compartmentalization-prone Western view, these disciplines are dealt with separately. Hence the first step on this journey is to reject this separation, aiming instead to hold these different perspectives in view simultaneously. Doing so fosters a new avenue for knowledge creation. Let’s begin by examining an AI worldview.

What is AI?

AI is not simply a technology defined by algorithms that create outcomes out of binary code. Instead, AI brings with it a unique perspective on reality. For AI, in its present form, to exist there must first be algorithms, data, and adequate hardware. The first came on the scene in the 1950s, while the other two became a reality mostly in the last two decades. This may partially explain why we have been hearing about AI for a long time, yet only now is it actually impacting our lives on a large scale.

The algorithm in its basic form consists of a set of instructions to perform, transforming input into output. This can be as simple as taking the inputs (2, 3), passing them through an instruction (add them), and getting an output (5). If you have ever made that calculation in your head, congratulations: you have used an algorithm. It is logical, linear, and repeatable. This is what gives it its “machine” quality. It is an automated process for creating outputs.
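Written as code, the example above is almost embarrassingly short, which is exactly the point: an algorithm is just a fixed instruction that turns inputs into an output, the same way every time.

```python
# An algorithm in its most basic form: inputs in, instruction applied,
# output out. Logical, linear, repeatable.
def add(a, b):
    return a + b

print(add(2, 3))  # -> 5
```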

Data + Algorithms + Hardware = AI

Data is the very fuel of AI in its dominant form today. Without it, nothing would be accomplished. This is what differentiates traditional programming from AI (machine learning). The former depends on a human to imagine, direct, and define the outcomes of an input. Machine learning is an automated process that takes data and transforms it into the desired outcome. It is “learning” because, although the algorithm is repeatable, the variability in the data makes the outcome unique and at times hard to predict. It involves risk, but it also yields new knowledge. The best that human operators can do is monitor the inputs and outputs while the machine “learns” from new data.

Data is information digitized so that it can be processed by algorithms. Human brains operate in an analog fashion, taking in information from the world and processing it through neural pulses. Digital computers need information to be first translated into binary code before they can “understand” it. The more of our reality is digitized, the more the machines can learn.
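To make the translation concrete, here is what a single human-readable word looks like once digitized into the binary code a computer actually processes:

```python
# 'Digitizing' information: each character is mapped to a number,
# and that number is expressed in binary, the only form a digital
# computer can process.
text = "cat"
binary = " ".join(format(ord(ch), "08b") for ch in text)
print(binary)  # -> 01100011 01100001 01110100
```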

All of this takes energy and physical form. If data is like the soul and algorithms like the mind, then hardware is like the body. Only in the last few decades did rapid advancement make it possible to apply AI algorithms to the commensurate amounts of data needed for them to work properly. The growth in computing power is one of the most underrated wonders of our time. This revolution is the engine that allowed algorithms to process and make sense of the staggering amount of data we now produce. The combination of the three made possible the emergence of an AI ecosystem of knowledge creation, and not only that, but also the beginning of an AI worldview.

Photo by Franki Chamaki on Unsplash

Seeing the World Through Robotic Eyes

How can AI be a worldview? How does it differ from existing human-created perspectives? It qualifies because its peculiar way of processing information in effect crafts a new vision of the world. This complex machine-created perspective has some unique traits worth mentioning. It is backward-looking, but only to recent history: while we have a wealth of data nowadays, our records still do not go back more than 20-30 years. This is important because it means AI will be biased toward the recent past and the present as it looks into the future.

Furthermore, an AI worldview, while grounded in the recent past, is quite sensitive to emerging shifts. In fact, algorithms can detect variations much faster than humans. That is an important trait for providing decision-makers with timely warnings of trouble or opportunities ahead. In that sense, it foresees a world that is about to arrive, a reality that is here but not yet. Let the theologians understand.

It is also inherently evidence-based. That is, it approaches data with no presuppositions. Instead, especially at the beginning of a training process, it looks at the world through the equivalent of a child’s eyes. This is both an asset and a liability. An asset, because this open view of the world enables it to discover new insights that would otherwise pass unnoticed by human brains that rely on assumptions to process information. A liability, because it can mistake an ordinary event for an extraordinary one simply because it is encountering it for the first time. In short, it is primed for forging innovation as long as it is accompanied by human wisdom.

Rationality Devoid of Ethics  

Finally, and this is its most controversial trait, it approaches the world with no moral compass. It applies logic devoid of emotion and makes decisions without the benefit of higher-level thinking. This makes it superior to human capacity in narrow tasks. However, it is utterly inadequate for making value judgments.

It is true that with the development of AGI (artificial general intelligence), AI may acquire capabilities closer to human wisdom than it has today. However, since machine learning (narrow AI) is the type of technology mostly present in our current world, it is fair to say that AI is intelligent but not wise, fast but not discerning, and accurate but not righteous.

This concludes the first part of this series of blogs. In the next blog, I’ll define the other side of this integration: theology. Just like AI, theology requires some preliminary definitions before we can pursue integration.

Does God Hear Robot Prayers? A Modern Day Parable

The short video above portrays Juanelo Turriano’s (1500-1585) invention, an automated monk that recites prayers while moving in a circle. It was commissioned by King Philip II to honor a friar whom he believed had healed his son. The engineer delivered a work of art, creepy but surprisingly lifelike, in a time when Artificial Intelligence was but a distant dream. This Renaissance marvel now sits at the Smithsonian museum in Washington, DC.

Take a pause to watch the 2-minute video before reading on.

What can this marvelous work of religious art teach us today, nearly 5 centuries later, about our relationship with machines?

In a beautifully written piece for Aeon, Ed Simon asks whether robots can pray. In discussing the automated monk, he argues that the Renaissance invention was not simply simulating prayer. It was actually praying! Its creation was an offering of thanksgiving to the Christian God, and to this day it continues to recite its petitions.

Such reflection opens the door for profound theological questions. For if the machine is indeed communicating with the divine, would God listen?

Can an inanimate object commune with the Creator?

We now turn to a short parable portraying different responses to the Renaissance droid.

A Modern Day Parable

Photo by Drew Willson on Unsplash

In an effort to raise publicity for its exhibit, the Smithsonian takes Turriano’s invention on a road show. Aiming to create a buzz, they place the automated monk in a crowded square in New York City along with a sign that asks:

When this monk prays, does God listen?

They place hidden cameras to record people’s reactions.

A few minutes go by and a scientist approaches to inspect the scene. Upon reading the sign, he quickly dismisses it as an artifact from a bygone era. “Of course machines cannot pray,” he mulls. Because they are not alive, one cannot ascribe human properties to them. That would be anthropomorphising: projecting human traits onto non-human entities. “Why even bother asking whether God would listen if prayer itself is a human construct?” Annoyed by the whole matter, he walks away hurriedly as he realizes he is late for work.

Moments later, a priest walks by and stops to examine the exhibit. The religious man is taken aback by such a question. “Of course machines cannot pray; they are mere human artifacts,” he mulls. “They are devoid of God’s image, which is the exclusive property of humans,” he continues. “Where in Scripture can one find an example of an object that prays? Machines are works of the flesh, worldly pursuits not worthy of an eternal God’s attention,” he concludes. Offended by the blasphemous display, the priest walks away from the moving monk on to holier things.

Finally, a child approaches the center of the square. She sees the walking monk and runs to the droid, filled with wonder. “Look at the cool moving monk, mom!” she yells. Soon, she gives it a name: monk Charlie. She sits down and watches, mesmerized by the intricate movements of his arms and mouth, and notices the etched sandals on his feet.

After a while, she answers: “Yes, God listens to Charlie.” She joins him, imitating his movements with sheer delight. In that moment, the droid becomes her new playmate.

How would you respond?

Green Tech: How Scientists are Using AI to Fight Deforestation

In the previous blog, I talked about upcoming changes to US AI policy with a new administration. Part of that change is a renewed focus on harnessing this technology for sustainability. Here I will showcase an example of green tech – how machine learning models are helping researchers detect illegal logging and burning in the vast Amazon rainforest. This is an exciting development and one more example of how AI can work for good.

The problem

Imagine trying to patrol an area of dense rainforest nearly the size of the lower 48 states! It is, as the proverbial saying goes, like finding a needle in a haystack. The only way to catch illegal activity is to find ways to narrow the surveillance area. Doing so gives you the best chance to use your limited law-enforcement resources wisely. Yet how can that be done?

How do illegal logging and burning happen in the Amazon? Are there any patterns that could help narrow the search? Fortunately, there are. A common trait is proximity to a road: in fact, 95% of these activities occur within 6 miles of a road or a river. They require equipment that must be transported through dense jungle, and for logging, the lumber must then be hauled out so it can be traded. The only way to do that is either through waterways or dirt roads. Hence, tracking and locating illegal roads goes a long way toward homing in on areas of possible illegal activity.

While authorities had records of the government-built roads, no one knew the extent of the illegal road network in the Amazon. To attack the problem, enforcement agencies needed richer maps that could spot this unofficial web. Only then could they start to focus resources around these roads. Voilà: green tech working to preserve rather than destroy the environment.

An Ingenious solution

To solve this problem, scientists from Imazon (the Amazon Institute of People and the Environment) went to work searching for ways to detect these roads. Fortunately, by carefully studying satellite imagery they could manually trace the additional roads. In 2016 they completed this heroic but rather tedious initial work. The mapped road network was now estimated at 13 times the size of the official one! Now they had something to work with.

Once the initial tracing was complete, it became clear that updating it manually would be an impossible task. These roads could spring up overnight as loggers and ranchers worked to evade monitoring. That is when the team turned to computer vision to see if it could detect new roads. The initial manual work became the training dataset that taught the algorithm how to detect these roads from satellite images. In supervised learning, one must first show the algorithm a collection of data containing the actual targets (labels); an algorithm meant to recognize cats, for example, must first be fed many thousands of labeled cat images before it works.
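The supervised-learning setup just described can be sketched in a few lines. This is a deliberately simplified stand-in for Imazon’s actual model: the ‘tiles’ and their two features are invented for illustration, and a nearest-neighbour rule replaces the real computer-vision network, but the structure is the same: hand-labeled examples in, automatic labels out.

```python
# Minimal sketch of supervised learning: hand-traced examples become
# (features, label) pairs, and new satellite tiles are labeled by
# comparison with them. All tiles and features here are invented.

# Hypothetical tiles summarized as (brightness, straightness) in 0..1
labeled_tiles = [
    ((0.8, 0.9), "road"),    # bright, linear strip through canopy
    ((0.7, 0.8), "road"),
    ((0.2, 0.1), "forest"),  # dark, irregular canopy texture
    ((0.3, 0.2), "forest"),
]

def classify(tile):
    # Nearest neighbour: copy the label of the most similar known tile
    def dist(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    return min(labeled_tiles, key=lambda ex: dist(ex[0], tile))[1]

print(classify((0.75, 0.85)))  # -> road
print(classify((0.25, 0.15)))  # -> forest
```

The hard, human part is exactly what Imazon did in 2016: producing the labeled examples. Once those exist, classifying the next overnight road becomes automatic.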

The results were impressive. At first, the model achieved 70% accuracy, and with some additional processing on top, that increased to 90%. The research team presented their results at a recent meeting of the American Geophysical Union. They also plan to share their model with neighboring countries so they can use it to police the parts of the Amazon outside Brazil.

Reflection

Algorithms can be effective allies in the fight to preserve the environment. As the example of Imazon shows, it takes ingenuity, hard work, and planning to make that happen. While many discussions about AI quickly devolve into clichés of “machines replacing humans,” this example shows how AI can augment human problem-solving. It took a person to connect the dots between the potential of AI and a particular problem. Indeed, the real future of AI may be in green tech.

In this blog and in our FB community we seek to challenge, question, and re-imagine how technologies like AI can empower human flourishing. Yet this is not limited to humans but extends to the whole ecosystem we inhabit. If algorithms are to fulfill their promise, they must prove relevant to sustainability.

How is your work making life more sustainable on this planet?