Klara and the Sun: Robotic Redemption for a Dystopian World

In the previous blog, we discussed how Klara, the AI protagonist of Kazuo Ishiguro’s latest novel, develops a religious devotion to the Sun. In this second and final installment of the book review, I explore how Klara impacts the people around her. Klara and the Sun shows how they become better humans for interacting with her in a dystopian world.

Photo by Randy Jacob on Unsplash

Gene Inequality

Because humans are only supporting characters in this novel, we learn about their world only later in the book. The author does not give a year but places the story in the near future. Society is sharply divided along class and racial lines. Gene editing has become a reality, and parents can now opt to have children born with the traits that will help them succeed in life.

This stark choice not only affects the family’s fate but also reorients the way society allocates opportunities. Colleges no longer accept average kids, meaning that a natural birth puts a child at a disadvantage. Yet this choice comes at a cost: experimenting with genes also means a higher mortality rate for children and adolescents. That is the case for the family that purchases Klara; they have lost their first daughter, and now their second one is sick.

These gene-edited children receive a special education at home, remotely, from specialized tutors. This turned out to be an ironic detail in a pandemic year when most children in the world learned through Zoom. They socialize through prearranged gatherings in homes. The well-to-do live in gated communities, supposedly because the world has become unsafe. This is just one of the many aspects of the dystopian world of Klara and the Sun.

Photo by Andy Kelly on Unsplash

AI Companionship and Remembrance

A secondary plot line in the novel is the relationship between the teenage Josie, Klara’s owner, and her friend Rick, who is not gene-edited. The teens are coming of age in this tumultuous period, and the viability of their relationship is in question. The adults discuss whether they should even be together in a society that delineates separate paths assigned at birth: one has safe passage into college and stable jobs while the other is shut out from opportunity by the sheer fact that his parents did not interfere with nature.

In this world, droids are common companions to wealthy children. Since many no longer go to school, the droid plays the role of nanny, friend, guardian, and at times tutor. Even so, there is resistance to them in the public square, where resentful segments of society view their presence with contempt. They represent a status symbol for the affluent and a menace to the working class. Their owners, meanwhile, often treat them as merchandise: at best they are seen as servants and at worst as disposable toys that can be tossed around for amusement.

The novel also hints at the use of AI to extend the life of loved ones. AI remembrance, shall we say. That is, programming AI droids to take the place of a deceased human. This seems like a natural complement in a world where parents have no guarantee that their gene-edited children will live to adulthood. For some, the AI companion could live out the years their children were denied.

Klara The Therapist

In the world described above, the AF (Artificial Friend) plays a pivotal role in family life, not just for the children they accompany but also for the parents. In effect, because of her robotic impartiality, Klara serves as a safe confidant to Josie, Rick, her mother, and her father. The novel includes intimate one-on-one conversations where Klara offers a fresh take on their troubles. Her gentle and unpretentious perspective prods them to do what is right even when it is hard. In this way, she also plays a moral role, reminding humans of their best instincts.

Yet humans are not the only ones impacted. Klara also grows and matures through her interactions with them. Navigating the tensions, joys, and sorrows of human relationships, she uncovers the many layers of human emotion. Though lacking tear ducts and a beating heart, she is not a prisoner of detached rationality. She suffers with the pain of the humans around her, cares deeply about their well-being, and is willing to sacrifice her own future to ensure they have one. In short, she is programmed to serve them not as a dutiful pet but as a caring friend. In doing so, she embodies the best of human empathy.

The reader joins Klara on her path to maturity, and it is a delightful ride. As she observes and learns about the people around her, human readers get a mirror held up to themselves. We see our struggles, our pettiness, our hopes and expectations reflected in this rich story. For those who read with an open heart, the book also offers an opportunity for transformation and growth.

Final Reflections

In an insightful series of four blog posts, Dr. Dorabantu argues that future general AI will be hyper-rational, forcing us to re-imagine the essence of who we are. Yet Ishiguro presents an alternative hypothesis. What if, instead, AI technology led to the development of empathetic servant companions? Could a machine express both rational and emotional intelligence?

Emotionally intelligent AI would help us redefine the image of God not by contrast but by reinforcement. That is, instead of simply demonstrating our limitations in rationality, it could expand our potential for empathy. The novel shows how AI can act as a therapist or spiritual guide. Through empathetic dialogue, it can help us find the best of our moral sense. In short, it can help us love better.

Finally, the book raises important ethical questions about gene editing’s promises and dangers. What would it look like to live in a world where “designer babies” are commonplace? Could gene editing combined with AI lead to the harrowing scenario where droids serve as complete replacements for humans? While Ishiguro’s future is fictitious, he speculates on technologies that already exist. Gene editing and narrow AI are a reality, while general AI is plausibly within reach.

We would do well to consider their impact seriously before a small group in Silicon Valley decides how to maximize profit from them. This may be the greatest lesson we can take from Klara and the Sun and its dystopian world.

Klara and the Sun: Robotic Faith for an Unbelieving Humanity

In his first novel since winning the Nobel Prize in Literature, Kazuo Ishiguro explores the world through the lens of an AI droid. The novel retains many of the features that made his previous bestsellers famous, such as concentrating on confined spaces, building up emotional tension, and fragmented storytelling. All of this gains fresh relevance when applied to the sci-fi genre, and more specifically to the relationship between humanity and sentient technology. I’ll do my best to keep you from any spoilers, as I suspect this will become a motion picture in the near future. Suffice it to say that Klara and the Sun is a compelling statement for robotic faith. How? Read on.

Introducing the Artificial Friend

Structured into six long sections, the book starts by introducing us to Klara. She is an AF (Artificial Friend), a humanoid equipped with artificial intelligence and designed commercially to be a human companion. At least, this is what we can deduce from the first pages, as no clear explanation is given. In fact, this is a key trait of the story: we learn about the world along with Klara. She is the one and only narrator throughout the novel.

Klara is shaped like a short woman with brown hair. The story starts in the store where she is on display for sale. There she interacts with customers, other AFs, and “Manager,” the human responsible for the store. All humans are referred to by their capitalized job or function; otherwise, they are identified by their appearance or something peculiar to them.

The first 70 pages take place inside the store where she is on display. We learn about her personality, the fact that she is very observant, and what her peer AFs think of her. At times, she is placed near the front window of the store. That is when we get a glimpse of the outside world. This is probably where Ishiguro’s brilliance shines through, as he carefully creates a worldview so unique, compelling, and humane, yet in many ways true to a robotic personality. The reader slowly grows fond of her as she immerses us in her whimsical perspective on the outside world. To her, a busy city street is a rich mixture of sights with details we most often take for granted.

We also learn how Klara processes emotions and even has a will of her own. At times she mentions feeling fear. She is also very sensitive to conflict, trying to avoid it at all costs. With that said, she is no pushover. Once, she sabotages a customer’s attempt to buy her because she has committed herself to another prospect. She also seems to stand out compared to the other AFs, drawing both contempt and admiration from them.

Book cover from Amazon.com

The World Through Klara’s Eyes

She is sensitive, captivating, and always curious. Her observations are unique and honest. She brings together the innocence of a child and the mathematical ability of a scientist. This often leads to some quirky observations as she watches the world unfold in front of her. In one instance, she describes a woman as “food-blender-shaped.”

Klara also has an acute ability to detect complex emotions in faces. In this way, she is able to peer through the crevices of the body and see the soul. In one instance, she spots a child smiling at her AF while her eyes portray anger. When observing a fight, she can see the intensity of anger in the men’s faces, describing them as horrid shapes, as if they were no longer human. When seeing a couple embrace, she captures both the joy and the pain of the moment and struggles to understand how it could be so.

This uncanny ability to read human emotion becomes crucial when Klara settles into her permanent home. Being a quiet observer, she is able to understand the subtle, unspoken dynamics of family relationships. In some instances, she can see the love between mother and daughter even as they argue vehemently. She can see the housekeeper’s hostility towards her not as a threat but as concern. In this way, her view of humans tends to err on the side of charity rather than malice.

Though Klara is a keen observer of humans, it is her relationship with the sun that drives the narrative forward. From the first pages, Klara notices the presence of sun rays in most situations. She usually starts her description of a scene by emphasizing how the sun’s rays enter a room. We quickly learn that the sun is her main source of energy and nourishment. Hence it is not surprising that it looms so large in her worldview.

Yet Ishiguro takes this relationship further. Like ancient humans, Klara develops a religious devotion to the sun. The star is not only her source of nourishment but also a source of meaning and a god-like figure she looks to when in fear or in doubt.

That is when the novel gets theologically interesting.

Robotic Faith and Hope

As the title suggests, the sun plays a central role in Klara’s universe. This is true not only physiologically, as she runs on solar energy, but also spiritually. The religious relationship starts through observation. Already understanding the power of the sun to give her energy, she witnesses the sun restore a beggar and his dog to health. This event becomes Klara’s epiphany of the sun’s healing powers. She holds that memory dear, and it becomes an anchor of hope later in the book when she realizes that her owner is seriously ill.

Klara develops a deep devotion toward the sun and, like the ancients, she starts praying to it. The narrative moves forward as Klara experiences theophanies worthy of human awe. Her pure faith is so compelling that the reader cannot help but hope along with her that what she believes is true. In this way, Klara points us back to the numinous.

Her innocent and captivating faith has an impact on the human characters of the novel. For some reason, they start hoping for the best even when there is no reason to do so. In spite of overwhelming odds, they begin to see a light at the end of the tunnel. Some of them, the men in the novel in this case, willingly participate in her religious rites without understanding the rationale behind her actions. Yet, unlike human believers, who often like to proselytize, she keeps her faith secret from all. In fact, secrecy is part of her religious relationship with the sun. In this way, she invites humans to transcend their reason and step into a child-like faith.

This reminds me of a previous blog where I explored this idea of pure faith and robots. But I digress.

Conclusion

I hope this first part of the review sparks your interest in reading the novel. It beautifully explores how AI can help us find faith again. Certainly, we are still decades away from the kind of AI that Ishiguro portrays in this book. Yet, like most works of science fiction, it helps us extrapolate present directions so we can reflect on future possibilities.

In contrast to the dominant narrative of “robots trying to kill us,” the author opts for one that highlights the possibility that they can reflect the best in us. As they do so, they can change us into better human beings rather than allowing us to devolve into our worst vices. Consequently, Ishiguro gives us a vivid picture of how technology can work towards human flourishing.

In the next blog, I will explore the human world in which Klara lives. There are interesting warnings and rich reflections in the dystopian situation the book describes. While our exposure to it is limited (this is one part I wish the author had expanded a bit more), we get enough to ponder the impact of emerging technologies on our society. This is especially salient for a digital-native generation learning to send tweets before its first kiss.

Vulnerable like God: Perfect Machines and Imperfect Humans

This four-part series started with the suggestion that AI can be of real help to theologians, in their attempt to better understand what makes humans distinctive and in the image of God. We have since noted how different machine intelligence is from human intelligence, and how alien-like an intelligent robot could be ‘on the inside’, in spite of its humanlike outward behavior.

For theological anthropology, the main takeaway is that intelligence – understood as rationality and problem-solving – is not the defining feature of human nature. We’ve long been the most intelligent and capable creature in town, but that might soon change, with the emergence of AI. What makes us special and in the image of God is thus not some intellectual capacity (in theology, this is known as the substantive interpretation), nor something that we can do on God’s behalf (the functional interpretation), because AI could soon surpass us in both respects.

The interpretation of the imago Dei that best accounts for the scenario of human-level AI is the relational one. According to it, the image of God is our special I-Thou relationship with God, the fact that we can be an authentic Thou, capable of receiving God’s love and responding to it. We exist only because God calls us into existence. Our relationship with God is therefore the deepest foundation of our ontology. Furthermore, we are deeply relational beings. Our growth and fulfillment can only be realized in authentic personal relationships with other human beings and with God.

AI and Authentic Relationality

It is not surprising that the key to human distinctiveness is profoundly relational. Alan Turing tapped right into this intuition when he designed his eponymous test for AI. Turing’s test is, in fact, a measurement of AI’s ability to relate like us. Unsurprisingly, the most advanced AIs still struggle when it comes to simulating relationships, and none has yet passed the Turing test.

But even if a program someday convincingly relates to humans, will that be an authentic relationship? We’ve already seen that human-level AI will be anything but humanlike ‘on the inside.’ Intelligent robots might become capable of speaking and acting like us, but they will be completely different from us in terms of their internal motivation and meaning systems. What kind of relationship could there be between us and them, when we’d have so little in common?

We long for other humans precisely because we are not self-sufficient. Hence, we seek others precisely because we want to discover them and our own selves through relationships. We fall in love because we are not completely rational. Human-level AI will be the opposite of that: self-sufficient, perfectly rational, and with a quasi-complete knowledge of itself.

The Quirkiness of Human Intelligence

Our limitations are instrumental to the kind of relationships we have with each other. An argument can thus be made that a significant degree of cognitive and physical vulnerability is required for authentic relationality to be possible. There can be no authentic relationship without the two parties intentionally making themselves vulnerable to each other, opening to one another outside any transactional logic.

Photo by Duy Pham on Unsplash

A hyper-rational being would likely have serious difficulty engaging fully in relationships and making itself totally vulnerable to the loved other. It surely does not sound very smart.

Nevertheless, we humans do this tirelessly, and often at high cost, perhaps precisely because we are not as intelligent and goal-oriented as AI. Although that appears illogical, it is such experiences that give meaning and fulfillment to our lives.

From an evolutionary perspective, it is puzzling that our species evolved to be this way. Evolution promotes organisms that are better at adapting to the challenges of their environment, thus at solving practical survival and reproduction problems. It is therefore unsurprising that intelligence-as-problem-solving is a common feature of evolved organisms, and this is precisely the direction in which AI seems to develop.

What is strange in the vast space of possible intelligences is our quirky type of intelligence, one heavily optimized for relationship, marked by a bizarre thirst for meaning, and plagued by a surprising degree of irrationality. In the previous post I called out the strangeness of strong AI, but it is we who seem to be the strange ones. Yet it is precisely this kind of intellectual imperfection, or vulnerability, that enables us to dwell in the Goldilocks zone of intelligence where personal relationships and the image of God are possible.

Vulnerability, God, Humans and Robots

Photo by Jordan Whitt on Unsplash

If vulnerability is so important for the image of God as relationship, does this imply that God too is vulnerable? Indeed, that seems to be the conclusion, and it is not surprising at all, especially when we think of Christ. Through God’s incarnation, suffering, and voluntary death, a deeply vulnerable side of the divine has been revealed to us. God is not an indifferent creator of the world, nor a dispassionate, almighty, all-intelligent ruler. God cares deeply for creation, to the extent of committing the supreme self-sacrifice to redeem it (Jn. 3:16).

This means that we are most like God not when we are at our smartest or strongest, but when we engage in this kind of hyper-empathetic, though not entirely logical, behavior.

Compared to AI, we might look stupid, irrational, and outdated, but it is paradoxically due to these limitations that we are able to cultivate our divine likeness through loving, authentic, personal relationships. If looking at AI teaches theologians anything, it is that our limitations are just as important as our capabilities. We are vulnerable, just as our God has been revealed to be vulnerable. Being like God does not necessarily mean being more intelligent, especially when intelligence is seen as rationality or problem-solving.

Christ – whether considered historically or symbolically – shows that what we value most about human nature are traits like empathy, meekness, and forgiveness, which are eminently relational qualities. Behind such qualities are ways of thinking rooted more in the irrational than in the rational parts of our minds. We should then wholeheartedly join the apostle Paul in “boast[ing] all the more gladly about [our] weaknesses […] for when [we] are weak, then [we] are strong” (2 Cor. 12:9-10).

Human-level, but not Humanlike: The Strangeness of Strong AI

The emergence of AI opens up exciting new avenues of thought, promising to add some clarity to our understanding of intelligence and of the relation between intelligence and consciousness. For Christian anthropology, observing which aspects of human cognition are easily replicated in machines can be of particular help in refining the theological definition of human distinctiveness and the image of God.

However, by far the most theologically exciting scenario is the possibility of human-level AI, or artificial general intelligence (AGI), the Holy Grail of AI research. AGI would be capable of convincingly replicating human behavior. It could, in principle, pass as human if it chose to. This is precisely how the Turing Test is designed to work. But how humanlike would a human-level AI really be?

Computer programs have already become capable of impressive things, which, when done by humans, require some of our ‘highest’ forms of intelligence. However, the way AI approaches such tasks is very non-humanlike, as explained in the previous post. If the current paradigm continues its march towards human-level intelligence, what could we expect AGI to be like? What kind of creature might such an intelligent robot be? How humanlike would it be? The short answer is, not much, or even not at all.

The Problem of Consciousness

Philosophically, there is a huge difference between what John Searle calls ‘strong’ and ‘weak’ AI. While strong AI would be an emulation of intelligence, weak AI would be a mere simulation. The two would be virtually indistinguishable on the ‘outside,’ but very different ‘on the inside.’ Strong AI would be someone, a thinking entity endowed with consciousness, while weak AI would be a something, a clockwork machine completely empty on the inside.

It is still too early to know whether AGI will be strong or weak, because we currently lack a good theory of how consciousness arises from inert matter. In philosophy, this is known as “the hard problem of consciousness.” But if current AI applications are any indication, weak AGI is a much more likely scenario than strong AGI. So far, AI has made significant progress in problem-solving, but it has made zero progress in developing any form of consciousness or ‘inside-out-ness.’

Even if AGI does somehow become strong AI (how could we even tell?), there are good reasons to believe that it would be a very alien type of intelligence.

Photo by Maximalfocus on Unsplash

The Mystery of Being Human

As John McCarthy – one of the founding fathers of AI – argues, an AGI would have complete access to its internal states and algorithms. Just think about how weird that is! Humans have a very limited knowledge of what happens ‘on their inside.’ We only see the tip of the iceberg, because only a tiny fraction of our internal processes enter our stream of consciousness. Most information remains unconscious, and that is crucial for how we perceive, feel, and act.

Most of the time we have no idea why we do the things we do, even though we might fabricate compelling post-factum rationalizations of our behavior. But would we really want to know such things and always act in a perfectly rational manner? Or, even better, would we be friends or lovers with such a hyper-rational person? I doubt it.

Part of what makes us what we are and what makes human life enjoyable is arguably related to the chunk of our mind that is not completely rational. Our whole lives are journeys of self-discovery, and with each experience and relationship we get a better understanding of who we are. That is largely what motivates us to reach beyond our own self and do stuff. Just think of how much of human art is driven precisely by a longing to touch deeper truths about oneself, which are not easily accessible otherwise.

Strong AI could be the opposite of that. Robots might understand their own algorithms much better than we do, without any need to discover anything further. They might be able to communicate such information directly as raw data, without needing the approximation/encryption of metaphors. As Ian McEwan’s fictitious robot character emphatically declares, most human literature would be completely redundant for such creatures.

The Uncanny Worldview of an Intelligent Robot

Intelligent robots would likely have a very different perception of the world. With access to Bluetooth and WiFi, they would be able to ‘smell’ other connected devices and develop a sort of ‘sixth sense’ of knowing when a particular person is approaching merely from their Bluetooth signature. As roboticist Rodney Brooks shows, robots will soon be able to measure one’s breathing and heart rate without any biometric sensor, simply by analyzing how a person’s physical presence slightly changes the behavior of WiFi signals.

The technology for this already exists, and it could enable the robot to have access to a totally different kind of information about the humans around, such as their emotional state or health. Similar technologies of detecting changes in the radio field could allow the robots to do something akin to echolocation and know if they are surrounded by wood, stone, or metal. Just imagine how alien a creature endowed with such senses would be!

AGI might also perceive time very differently from us because it would think much faster. The ‘wetware’ of our biological brains constrains the speed at which electrical signals can travel. Electronic brains, however, could enable speeds closer to the ultimate physical limit, the speed of light. Minds running on such faster hardware would also think proportionally faster, making their experience of the passage of time proportionally slower.

If AGI thought ‘only’ ten thousand times faster than humans, a conservative estimate, it would inhabit a completely different world. It is difficult to imagine how such creatures might regard humans, but futurist James Lovelock chillingly estimates that “the experience of watching your garden grow gives you some idea of how future AI systems will feel when observing human life.”

The way AGI is depicted in sci-fi (e.g. Terminator, Ex Machina, or Westworld) might rightly give us existential shivers. But if the predictions above are anywhere near right, then AGI might turn out to be weirder than our wildest sci-fi dreams. AI might reach human-level, but it would most likely be radically non-humanlike.

Is this good or bad news for theological anthropology? How would the emergence of such an alien type of intelligence affect our understanding of humanity and its imago Dei status? The next post, the last one in this four-part series, wrestles head-on with this question.

ERLC Statement on AI: An Annotated Christian Response

Recently, the ERLC (Ethics and Religious Liberty Commission) released a statement on AI. This was a laudable step, as the Southern Baptists became the first large Christian denomination to address this issue directly. While this is a start, the document fell short on many fronts. From the start, the list of signers had very few technologists and scientists.

In this blog, I present the original statement followed by my comments on each article. Judge for yourself, but my first impression is that we have a lot of work ahead of us.

Article 1: Image of God

We affirm that God created each human being in His image with intrinsic and equal worth, dignity, and moral agency, distinct from all creation, and that humanity’s creativity is intended to reflect God’s creative pattern.

Okay, that’s a good start, locating creativity as God’s gift and affirming the dignity of all humanity. Yet the statement exalts human dignity at the expense of creation. Because AI, and technology in general, is about humanity’s relationship to creation, setting this foundation right is important. It is not enough to highlight human primacy; one must clearly state our relationship with the rest of creation.

We deny that any part of creation, including any form of technology, should ever be used to usurp or subvert the dominion and stewardship which has been entrusted solely to humanity by God; nor should technology be assigned a level of human identity, worth, dignity, or moral agency.

Are we afraid of a robot takeover of humanity? Here it would have been helpful to start distinguishing between general and narrow AI. The former is still decades away, while the latter is already here and poised to change every facet of our lives. The challenge of narrow AI is not one of usurping our dominion and stewardship but of possibly leading us to forget our humanity. The authors seem to be addressing general AI. Maybe including more technologists in the mix would have helped.

Genesis 1:26-28; 5:1-2; Isaiah 43:6-7; Jeremiah 1:5; John 13:34; Colossians 1:16; 3:10; Ephesians 4:24

Article 2: AI as Technology

We affirm that the development of AI is a demonstration of the unique creative abilities of human beings. When AI is employed in accordance with God’s moral will, it is an example of man’s obedience to the divine command to steward creation and to honor Him. We believe in innovation for the glory of God, the sake of human flourishing, and the love of neighbor. While we acknowledge the reality of the Fall and its consequences on human nature and human innovation, technology can be used in society to uphold human dignity. As a part of our God-given creative nature, human beings should develop and harness technology in ways that lead to greater flourishing and the alleviation of human suffering. 

Yes, well done! This affirmation is where Christianity needs to be. We are for human flourishing and the alleviation of suffering. We celebrate and support technology’s role in these God-given missions.

We deny that the use of AI is morally neutral. It is not worthy of man’s hope, worship, or love. Since the Lord Jesus alone can atone for sin and reconcile humanity to its Creator, technology such as AI cannot fulfill humanity’s ultimate needs. We further deny the goodness and benefit of any application of AI that devalues or degrades the dignity and worth of another human being.

I take it that what they mean here is that technology is a limited means and cannot ultimately be our salvation. I see here a veiled critique of transhumanism. Fair enough: the Christian message should celebrate AI’s potential but also warn of its limitations, lest we start giving it undue worth.

Genesis 2:25; Exodus 20:3; 31:1-11; Proverbs 16:4; Matthew 22:37-40; Romans 3:23

Article 3: Relationship of AI & Humanity

We affirm the use of AI to inform and aid human reasoning and moral decision-making because it is a tool that excels at processing data and making determinations, which often mimics or exceeds human ability. While AI excels in data-based computation, technology is incapable of possessing the capacity for moral agency or responsibility.

This statement suggests a positive role for AI in augmenting rather than replacing human reasoning. I am just not sure that was ever in question.

We deny that humans can or should cede our moral accountability or responsibilities to any form of AI that will ever be created. Only humanity will be judged by God on the basis of our actions and that of the tools we create. While technology can be created with a moral use in view, it is not a moral agent. Humans alone bear the responsibility for moral decision making.

While it is hard to argue with this statement at face value, it overlooks the complexities of a world that is becoming increasingly reliant on algorithms. The issue is not that we are offloading moral decisions to algorithms but that algorithms are capturing the moral decisions of many humans at once. This reality is not addressed by simply restating human moral responsibility. This needs improvement.

Romans 2:6-8; Galatians 5:19-21; 2 Peter 1:5-8; 1 John 2:1

Article 4: Medicine

We affirm that AI-related advances in medical technologies are expressions of God’s common grace through and for people created in His image and that these advances will increase our capacity to provide enhanced medical diagnostics and therapeutic interventions as we seek to care for all people. These advances should be guided by basic principles of medical ethics, including beneficence, nonmaleficence, autonomy, and justice, which are all consistent with the biblical principle of loving our neighbor.

Yes, tying AI-related medical advances to the great commandment is a great start.

We deny that death and disease—effects of the Fall—can ultimately be eradicated apart from Jesus Christ. Utilitarian applications regarding healthcare distribution should not override the dignity of human life. Furthermore, we reject the materialist and consequentialist worldview that understands medical applications of AI as a means of improving, changing, or completing human beings.

Similar to my comment on Article 3, this one misses the complexity of the issue. How do you draw the line between enhancement and cure? And isn’t the effort to extend life an effective form of alleviating suffering? These issues do not lend themselves to simple propositions; they require more nuanced analysis and prayerful consideration.

Matthew 5:45; John 11:25-26; 1 Corinthians 15:55-57; Galatians 6:2; Philippians 2:4​

Article 5: Bias

We affirm that, as a tool created by humans, AI will be inherently subject to bias and that these biases must be accounted for, minimized, or removed through continual human oversight and discretion. AI should be designed and used in such ways that treat all human beings as having equal worth and dignity. AI should be utilized as a tool to identify and eliminate bias inherent in human decision-making.

Bias is inherent in the data fed into machine learning models. By curating the data, monitoring the outputs, and evaluating results, we can diminish bias. Directing AI to promote equal worth is a good first step.
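To make the “monitor the outputs” step concrete, here is a minimal, hypothetical sketch of one common fairness check: comparing a model’s positive-outcome rates across demographic groups (the so-called demographic parity gap). The group labels, predictions, and loan-approval scenario below are illustrative inventions, not real data or any specific library’s API.

```python
# Hypothetical sketch: auditing a model's binary decisions for
# demographic parity. Predictions and group labels are toy data.

def demographic_parity_gap(predictions, groups):
    """Return the largest difference in positive-outcome rates
    between any two groups (0.0 means perfectly equal rates)."""
    counts = {}  # group -> (positives, total)
    for pred, group in zip(predictions, groups):
        hits, total = counts.get(group, (0, 0))
        counts[group] = (hits + pred, total + 1)
    rates = [hits / total for hits, total in counts.values()]
    return max(rates) - min(rates)

# Example: a toy loan-approval model favors group "a" over group "b".
preds  = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_gap(preds, groups))  # 0.5 (75% vs. 25%)
```

A gap this large would flag the model for the kind of data curation and re-evaluation described above; demographic parity is only one of several fairness metrics, and which one applies is itself a moral judgment left to humans.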

We deny that AI should be designed or used in ways that violate the fundamental principle of human dignity for all people. Neither should AI be used in ways that reinforce or further any ideology or agenda, seeking to subjugate human autonomy under the power of the state.

What about use by large corporations? That is a glaring omission here.

Micah 6:8; John 13:34; Galatians 3:28-29; 5:13-14; Philippians 2:3-4; Romans 12:10

Article 6: Sexuality

We affirm the goodness of God’s design for human sexuality which prescribes the sexual union to be an exclusive relationship between a man and a woman in the lifelong covenant of marriage.

This seems like a roundabout way to use the topic of AI to fight culture wars. Why include this here? Or why not talk about how AI can help people find their mates and even strengthen marriages? Please revise or remove!

We deny that the pursuit of sexual pleasure is a justification for the development or use of AI, and we condemn the objectification of humans that results from employing AI for sexual purposes. AI should not intrude upon or substitute for the biblical expression of sexuality between a husband and wife according to God’s design for human marriage. 

OK, I take this to be a condemnation of AI porn. Again, it seems misplaced on this list and could have been treated in other ways. Yes, AI can further increase the objectification of humans, and that is a problem. I am just not sure it is such a key issue that it belongs in a statement on AI. Again, more nuance and technical insight would have helped.

Genesis 1:26-29; 2:18-25; Matthew 5:27-30; 1 Thess 4:3-4

Article 7: Work

We affirm that work is part of God’s plan for human beings participating in the cultivation and stewardship of creation. The divine pattern is one of labor and rest in healthy proportion to each other. Our view of work should not be confined to commercial activity; it must also include the many ways that human beings serve each other through their efforts. AI can be used in ways that aid our work or allow us to make fuller use of our gifts. The church has a Spirit-empowered responsibility to help care for those who lose jobs and to encourage individuals, communities, employers, and governments to find ways to invest in the development of human beings and continue making vocational contributions to our lives together.

This is a long, confusing, and unhelpful statement. It seems to address the challenge of job loss that AI can bring without doing so directly. It gives a vague description of the church’s role in helping individuals find work but does not address the economic structures that create job loss. It simply misses the point and adds little to the conversation. Please revise!

We deny that human worth and dignity is reducible to an individual’s economic contributions to society alone. Humanity should not use AI and other technological innovations as a reason to move toward lives of pure leisure even if greater social wealth creates such possibilities.

Another confusing and unhelpful statement. Are we making work holy? What does “lives of pure leisure” mean? Is this a veiled attack on Universal Basic Income? I am confused. Throw it out and start over!

Genesis 1:27; 2:5; 2:15; Isaiah 65:21-24; Romans 12:6-8; Ephesians 4:11-16

Article 8: Data & Privacy

We affirm that privacy and personal property are intertwined individual rights and choices that should not be violated by governments, corporations, nation-states, and other groups, even in the pursuit of the common good. While God knows all things, it is neither wise nor obligatory to have every detail of one’s life open to society.

Another statement that needs clarification. Treating personal data as private property is a start. However, people are giving their data away willingly. What does privacy mean in a digital world? This statement suggests the drafters’ unfamiliarity with the issues at hand. Again, technical support is needed.

We deny the manipulative and coercive uses of data and AI in ways that are inconsistent with the love of God and love of neighbor. Data collection practices should conform to ethical guidelines that uphold the dignity of all people. We further deny that consent, even informed consent, although requisite, is the only necessary ethical standard for the collection, manipulation, or exploitation of personal data—individually or in the aggregate. AI should not be employed in ways that distort truth through the use of generative applications. Data should not be mishandled, misused, or abused for sinful purposes to reinforce bias, strengthen the powerful, or demean the weak.

The intention here is good and points in the right direction. It is progress to note that consent is not the only necessary ethical standard and to condemn abusive uses of data. I would like it to be more specific in its call to corporations, governments, and even the church.

Exodus 20:15, Psalm 147:5; Isaiah 40:13-14; Matthew 10:16 Galatians 6:2; Hebrews 4:12-13; 1 John 1:7

Article 9: Security

We affirm that AI has legitimate applications in policing, intelligence, surveillance, investigation, and other uses supporting the government’s responsibility to respect human rights, to protect and preserve human life, and to pursue justice in a flourishing society.

We deny that AI should be employed for safety and security applications in ways that seek to dehumanize, depersonalize, or harm our fellow human beings. We condemn the use of AI to suppress free expression or other basic human rights granted by God to all human beings.

Good intentions with poor execution. The affirmation and denial are contradictory: if you affirm that AI can be used for policing, you have to concede that it will be used to harm some. Is using AI to suppress hate speech acceptable? I am not sure how this adds any insight to the conversation. Please revise!

Romans 13:1-7; 1 Peter 2:13-14

Article 10: War

We affirm that the use of AI in warfare should be governed by love of neighbor and the principles of just war. The use of AI may mitigate the loss of human life, provide greater protection of non-combatants, and inform better policymaking. Any lethal action conducted or substantially enabled by AI must employ human oversight or review. All defense-related AI applications, such as underlying data and decision-making processes, must be subject to continual review by legitimate authorities. When these systems are deployed, human agents bear full moral responsibility for any actions taken by the system.

Surprisingly, this statement is better than the one above. It upholds human responsibility while recognizing that AI, even in war, can have life-preserving aims. I would have liked a better definition of defense uses, though that is somewhat implied in the principles of just war. This is an area that needs more discussion and further consideration, but it is a good start.

We deny that human agency or moral culpability in war can be delegated to AI. No nation or group has the right to use AI to carry out genocide, terrorism, torture, or other war crimes.

I am glad to see the condemnation of torture here. Lately, I am not sure where evangelicals stand on this issue.

Genesis 4:10; Isaiah 1:16-17; Psalm 37:28; Matthew 5:44; 22:37-39; Romans 13:4​

Article 11: Public Policy

We affirm that the fundamental purposes of government are to protect human beings from harm, punish those who do evil, uphold civil liberties, and to commend those who do good. The public has a role in shaping and crafting policies concerning the use of AI in society, and these decisions should not be left to those who develop these technologies or to governments to set norms.

The statement points in the right direction of public oversight. I would have liked it to be bolder and clearer about the role of the church. It should also have addressed corporations more directly; that seems to be a blind spot in several articles.

We deny that AI should be used by governments, corporations, or any entity to infringe upon God-given human rights. AI, even in a highly advanced state, should never be delegated the governing authority that has been granted by an all-sovereign God to human beings alone.

Glad to see corporations finally mentioned in this document, making this a good start.

Romans 13:1-7; Acts 10:35; 1 Peter 2:13-14

Article 12: The Future of AI

We affirm that AI will continue to be developed in ways that we cannot currently imagine or understand, including AI that will far surpass many human abilities. God alone has the power to create life, and no future advancements in AI will usurp Him as the Creator of life. The church has a unique role in proclaiming human dignity for all and calling for the humane use of AI in all aspects of society.

Again, the distinction between narrow and general AI would have been helpful here; the statement seems to be addressing general AI. It also gives the impression that AI threatens God. Where is that coming from? A more nuanced view of biology and technology would have been helpful here too; the two seem to be jumbled together. Please revise!

We deny that AI will make us more or less human, or that AI will ever obtain a coequal level of worth, dignity, or value to image-bearers. Future advancements in AI will not ultimately fulfill our longings for a perfect world. While we are not able to comprehend or know the future, we do not fear what is to come because we know that God is omniscient and that nothing we create will be able to thwart His redemptive plan for creation or to supplant humanity as His image-bearers.

I disagree with the first sentence: there are ways in which AI can affirm and/or diminish our humanity. The issue here seems to be a perceived threat that AI will replace humans or be considered their equal. I like the hopeful confidence in God for the future, but the preceding sentences suggest that there is already fear about this. The ambiguity in the statements is unsettling; it implies that AI is a dangerous unknown. Yes, we cannot know what it will become, but why not call Christians to seize this opportunity for the kingdom? Why not proclaim that AI can help us co-create with God? Let me reiterate one of the verses cited below:

For God has not given us a spirit of fear and timidity, but of power, love, and self-discipline. (2 Timothy 1:7)

Genesis 1; Isaiah 42:8; Romans 1:20-21; 5:2; Ephesians 1:4-6; 2 Timothy 1:7-9; Revelation 5:9-10

For an alternative but still evolving Christian position on this matter please check out the Christian Transhumanist Association affirmation.

Are You Ready for Sexbots? How AI is Changing Intimacy

Very rarely do the words sex and theology appear in the same blog title. Yet, here we are.

Sexbots: The Final Step in Human Machine Relationships

In a previous blog I discussed how Intelligent Agents could eventually develop romantic relationships with humans. Yet those relationships were mostly platonic imaginings. Sexbots are the next level, where robots can actually relate to humans in physical ways, including intimacy. How close are we to this reality? An expert surveyed by Pew Research predicted that robot relationships would be common by 2025. David Levy predicts that by 2050 marriage to robots will be legal. Mind-blowing, indeed! Note that these predictions do not just point to an outlier market of men seeking pleasure with robots because they are unable to do so with women. Instead, they foresee a world in which these relationships become commonplace.

What are these sexbots? They started as hyper-realistic dolls fabricated primarily for men. Now, as AI technologies advance, makers are adding an interface in the head that can speak and learn its human companion’s desires. They are also starting to develop the outlines of a personality intended to create an emotional bond with the human companion. Yet all this integration is in its very early stages. At this point, some of these dolls send me right into the uncanny valley, that point at which robots are human enough to catch your attention but still robotic enough to be creepy.

As these issues still need to be worked out, it is not too early to start considering a future in which some of us engage in romantic relationships with robots. In this scenario, we have left the world of Her to enter the badlands of Westworld.

Is This For Real?

My first reaction to this trend was to dismiss it as an aberration. Surely, only a small group of lonely men would even consider such a possibility. Who would exchange a real human for a heartless machine? As discussed before, the level of AI available is nowhere near human (or live) intelligence but only a highly mimicked form of it. Yet the more I thought about it, the more I understood its appeal.

The state of human marriage is in disarray, to say the least. Moreover, in most Western societies, livelihood is secure and procreation is no longer a necessity. Individuals are free to pursue personal goals and meet every other need without another human being. Fantasy is available at the click of a mouse, the swipe of a screen, or even through immersive glasses. In these societies, so many industries have emerged to meet individual needs that relationships have become more of an option than a necessity.

We have been on this road for a while, the road toward total independence, where sexbots are not even a destination but only another milestone on the journey toward hyper-isolation.

How Do We Respond?

I started this blog series as a follow-up to a call for Christian leaders to enter the AI conversation. In this context, maybe sexbots are less an absurdity and more a cry for help. If this is where we are going, maybe it is time to stop this ship and rethink our trajectory. The point here is not to decry the immorality of sexbots when many Christian men already struggle with two-dimensional pornography. Yes, this is a sin issue, but maybe there is a deeper wound to be addressed: a cry for true relationship sorely missing in our families and churches. Maybe the biggest gift technology can give at this juncture is to expose the height from which we have fallen.

It is time to offer alternatives through healthy, long-lasting relationships. This applies not only to marriages but also to friendships and, above all, Christian fellowship. May we never have to resort to any sort of artificial relationship. My hope is that the human relationships in our lives will always be enough. God did not create men and women to be alone. As Christians, we believe the very being of God is a community in the Trinity. We are called to love each other and to invite all into redemptive community.

We can never afford to outsource this job to a machine.

I pray we never will.


Can Companion Robots Heal Our Loneliness?

Can companion robots improve the social life of the elderly? That is what Intuition Robotics wants you to believe with its new product, ElliQ. This sociable robot interacts with its users, reminding them to take their meds, call friends, and even play games. The rationale is compelling: with an aging population and longer life spans, using AI to prevent social isolation is a clever idea. The question, of course, is how much of the initiative still rests on the user to seek out these interactions.

Not in ElliQ’s demographic? No worries: there are plenty of other sociable robot options for you. Meet Buddy, the companion robot for everyone. He will remind you of Rosie from The Jetsons. He can protect your home, play your music, display facial expressions, and more. The project also has a social component in that it proposes to democratize robotics by using an open-source platform. That point caught my attention, since making robotics technology accessible could be a game-changer for developing countries. Using technology for human advancement is always an attractive proposition.

Now, for the future of companion robots, going from cute to human-like, check out Nadine. This human-looking bot goes right past the uncanny valley; that is, she looks human enough not to give us the creeps. She also stands out for her advanced emotional intelligence, able to detect emotions through our facial expressions and recall past conversations. Her creators also believe she could be a good companion for people with dementia or autism.

These are glimpses of a coming future in which robots will increasingly become part of our lives. Given the acute social isolation many suffer from in our time, social robots offer a promising solution. Yet can they really provide the relational warmth mostly found in human relationships? That remains to be seen. If, as in the first example, the robot is a conduit that strengthens existing relationships, then this could be a form of enhancement rather than replacement. However, judging by the last example, the line gets blurry. My hope is that we start reflecting on these issues now rather than once these technologies come to commercial fruition. The best interaction with technology is one shaped by human wisdom.

What are your thoughts? Would you consider acquiring a social robot? If so, why?