The Glaring Omission of Religious Voices in AI Ethics

Pew Research released a report predicting the state of AI ethics in 10 years. The primary question was: will AI systems have robust ethical principles focused on the common good by 2030? Of the over 600 experts who responded, two-thirds did not believe this would happen. Yet this was not the most surprising thing about the report. Looking over the selection of respondents, I found no clergy or academics of religion included. In the burgeoning field of AI ethics research, we are missing the millennia-old wisdom of religious voices.

Reasons to Worry and Hope

In a follow-up webinar, the research group presented the 7 main findings from the survey. They are the following:

Concerning Findings

1. There is no consensus on how to define AI ethics. Context, along with the nature and power of the actors involved, matters.

2. Leading actors are large companies and governments that may not have the public interest at the center of their considerations.

3. AI is already deployed through opaque systems that are impossible to dissect. This is the infamous “black box” problem pervasive in most machine learning algorithms.

4. The AI race between China and the United States will shape the direction of development more than anything else. Furthermore, there are rogue actors that could also cause a lot of trouble.

Hopeful Findings

5. AI systems design will be enhanced by AI itself, which should speed up the mitigation of harmful effects.

6. Humanity has made acceptable adjustments to similar new technologies in the past. Users have the power to bend AI uses towards their benefit.

7. There is widespread recognition of the challenges of AI. In the last decade, awareness has increased significantly, resulting in efforts to regulate and curb AI abuses. The EU has led the way on this front.

Photo by Wisnu Widjojo on Unsplash

Widening the Table of AI Ethics with Faith

This eye-opening report confirms many trends we have addressed in this blog. In fact, the very existence of AI theology is proof of #7, showing that awareness is growing. Yet I would add another concerning trend to the list above: the narrowness of the group seated at the AI ethics dialogue table. The field remains dominated by academic and industry leaders. However, the impact of AI is so ubiquitous that we cannot afford this lack of diversity.

Hopefully, this is starting to change. A recent New York Times piece outlines the efforts of the AI and Faith network. The group consists of an impressive list of clergy, ethicists, and technologists who want to bring their faith values to the table. They seek to introduce the diverse universe of religious faith into AI ethics, providing new questions and insights for this important task.

If we are to face the challenge of AI, why not start by consulting thousands of years of human wisdom? It is time we add religious voices to the AI ethics table, as a purely secular approach will alienate the majority of the human population.

We ignore them to our peril.

Can AI Empower the Poor or Will it Increase Inequality?

Faster, better, stronger, smarter. These are, with no exaggeration, the revolutionary goals of AI. Faster trading is revolutionizing capitalism.[1] Better diagnostics are revolutionizing health care.[2] Stronger defense systems are revolutionizing warfare.[3] And smarter everything will revolutionize all aspects of our lives, from transportation,[4] to criminal justice,[5] to manufacturing,[6] to science,[7] and so forth. But can AI also revolutionize our relationship to the poor?

According to the International Data Corporation, AI is a $157 billion industry and is expected to surpass $300 billion by 2024.[8] What’s behind this figure, however, is that “AI” is being developed by companies for specifically targeted goals. While some organizations, like Google’s DeepMind, have Artificial General Intelligence as their goal, nearly every current breakthrough and application of AI is targeted toward specific industries. The money spent on AI is, therefore, seen primarily as an investment—the technology will yield much greater profit than human-based approaches.

This shouldn’t surprise us. As they say, money makes the world go around. But it does create a moral problem for Christians. Is it really a good thing for AI to be developed around the primary goal of increasing wealth? According to Latin American Liberation Theology, the answer is no.

Photo by Roberto Huczek on Unsplash

Liberation Theology

Latin American Liberation Theology, distinct from, say, Black Liberation Theology or Minjung Theology, is a theological tradition rooted in Roman Catholic communities in Latin America. The tradition, as explained by Gustavo Gutierrez, is rooted in a Marxian approach to society that develops theology through “praxis.” Praxis, for Gutierrez, is a cyclical process of letting one’s theology and activity in the world mutually influence each other.[9] Theology should not be removed from the experiences of the campesinos. A theology stuck in the “ivory tower” is, in the view of liberation theology, a dead theology.

Liberation theology has had a large impact on Catholic Social Teaching from the late 60s on. One of the most popular contributions is the so-called “option for the poor,” an idea taken from the 1968 Latin American Episcopal Conference in Medellin, Colombia. The basic idea of this, which Pope John Paul II validated in his 1987 encyclical Sollicitudo Rei Socialis, is that our social perspective should prioritize the needs and experiences of the poor above all else. The idea has undergone some modifications in more recent theologians’ use of it, but the core remains that those most underprivileged by society should get the greatest attention from Christians.

But what does this have to do with AI?

The Civilization of Wealth and the Civilization of Poverty

The Jesuit martyr Ignacio Ellacuría proposed the concepts of a “Civilization of Wealth” and a “Civilization of Poverty.” Like Luther’s Two Kingdoms or St Augustine’s Two Cities, these antagonistic civilizations sit as opposing poles for Christians. The Civilization of Wealth, for Ellacuría, is modeled by so-called “first world” countries like the United States and Western Europe. It is defined by the goals of growth, efficiency, progress, and wealth. In this model, it is “the possessive accumulation, by individuals or families, of the maximum possible wealth [that is] the fundamental basis of one’s own security and the possibility of an ever-increasing consumerism as the basis of one’s own happiness.”[10] The problem with this model, Ellacuría’s student Jon Sobrino notes, is that it “does not meet the basic needs of everyone, and…that it does not build spirit or values that can humanize people and societies.”[11] In short, the goal of technological progress and “faster, better, stronger, smarter” that the Civilization of Wealth pursues is a goal that lets some people starve while others are rich (cf. Thomas Malthus), but it also reduces human beings and the world around us to mere objects of use. Max Weber called this phenomenon “instrumental rationality”—the world becomes an assemblage of numerical values, which, for capitalists, can be converted to money while, for data scientists, can be converted to data.

I don’t think it is too much to suggest that nearly all AI projects currently underway operate under these goals of the Civilization of Wealth. The Civilization of Poverty, in contrast, “rejects the accumulation of capital as the engine of history, and the possession-enjoyment of wealth as the principle of humanization; rather, it makes the universal satisfaction of basic needs the principle of development, and the growth of shared solidarity the basis of humanization.”[12] This model may not be the “wealth of nations” Adam Smith promised nearly 250 years ago, but it is a civilization where the poor and hungry are not reduced to poverty statistics. The dedication to human rights and the virtue of solidarity over progress leads to collective flourishing, even if it does not lead to leaps and bounds in science and technology. There may be no AGI in the Civilization of Poverty, but there will also be no discarded human beings.

A New Role for AI: The Voice of the Poor?

The place of AI in the liberation theology I have presented is quite unfavorable, but I believe it is not the last word. The “option for the poor” is a privileged but poorly developed notion in Catholic thought. As both an undergraduate and a grad student, I often heard this phrase tied to the call to be “voices for the voiceless.” The sentiment is noble, but how can we really have an “option” for the poor if we don’t actually hear from the poor? Why not give the “voiceless” their own voice? Therein lies my biggest problem with liberation theology as well: while Ellacuría and Sobrino are prophetic voices, they were also middle-class Spanish Jesuits, not formed within the third-world poverty they encountered.

Since AI develops its “understanding” based on the data and rules programmed into it, the problem of AI serving the Civilization of Wealth extends only as far as the programmers themselves pursue those goals. AI trained on data sets created by the poor, or AI programmed by the poor, could theoretically serve as an actual voice for the poor. An AI that can help shape policies directed toward the Civilization of Poverty, because its references are taken from the voices of the poor, would not have the same limitations or blind spots that current AI projects suffer from.

Ultimately, it remains to be seen whether AI can or will be an instrument to promote the flourishing of the poor or if its uses will remain tethered to the Civilization of Wealth. As Christians, our task must be toward building the Kingdom of God, a place where, Isaiah reminds us, all eat and drink without money and without cost (Isaiah 55:1).


Levi Checketts. Photo by Jiyoung Ko

Levi Checketts is an incoming Assistant Professor of Religion and Philosophy at Hong Kong Baptist University and an assistant pastor at Jesus Love Korean United Methodist Church in Cupertino, California. His research focuses on ethical issues related to new technologies, with a special interest in the transhumanist movement and Artificial Intelligence. He has been published in Religions, Theology and Science, and Techne: Research in Philosophy and Technology, and is currently working on a book related to the challenge of our obligations to the poor and AI. When not teaching or preaching, Levi likes to play RPGs and point-and-click adventure games and go sightseeing with his wife and daughter.


[1] https://builtin.com/artificial-intelligence/ai-trading-stock-market-tech

[2] https://www.healtheuropa.eu/technological-innovations-of-ai-in-medical-diagnostics/103457/

[3] https://fas.org/sgp/crs/natsec/IF11150.pdf

[4] https://indatalabs.com/blog/ai-in-logistics-and-transportation

[5] https://www.ojp.gov/pdffiles1/nij/252038.pdf

[6] https://www.plantautomation-technology.com/articles/the-future-of-artificial-intelligence-in-manufacturing-industries

[7] https://royalsociety.org/-/media/policy/projects/ai-and-society/AI-revolution-in-science.pdf?la=en-GB&hash=5240F21B56364A00053538A0BC29FF5F

[8] https://www.idc.com/getdoc.jsp?containerId=prUS46757920

[9] Gustavo Gutierrez, A Theology of Liberation: History, Politics, Salvation, trans. Sr. Caridad Inda and John Eagleson (Maryknoll, NY: Orbis Books, 1986), 10-13.

[10] Ignacio Ellacuría, “Utopía y Profetismo,” Revista Latinoamericana de Teología 17 (1989): 170.

[11] Jon Sobrino, “The Crucified People and the Civilization of Poverty: Ignacio Ellacuría’s ‘Taking Hold of Reality,’” in No Salvation Outside the Poor: Prophetic-Utopian Essays, trans. Margaret Wilde (Maryknoll, NY: Orbis, 2008), 9.

[12] Ellacuría, “Utopía y Profetismo,” 170.

AI Artistic Parrots and the Hope of the Resurrection

Guest contributor Dr. Scott Hawley discusses the implications of generative models for resurrection. As this technology improves, new works attributed to the dead multiply. How does that square with the Christian hope for resurrection?

“It is the business of the future to be dangerous.”

(Fake) Ivan Illich

“The first thing that technology gave us was greater strength. Then it gave us greater speed. Now it promises us greater intelligence. But always at the cost of meaninglessness.”

(Fake) Ivan Illich

Playing with Generative Models

The previous two quotes are just a sample of the 365 fake quotes in the style of philosopher/theologian Ivan Illich that I generated by feeding a page’s worth of real Illich quotes from GoodReads.com into OpenAI’s massive language model, GPT-3, and having it continue “writing” from there. The wonder of GPT-3 is that it exhibits what its authors describe as “few-shot learning.” That is, rather than requiring 100+ pages of Illich, as older models did, it works from just a few Illich quotes. Give it two or three original sayings and GPT-3 can generate new quotes that are highly believable.
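For the curious, here is a minimal sketch of how such a few-shot prompt can be assembled, using the (pre-1.0) completion interface of the openai Python library. The model name, sampling parameters, and prompt wording are illustrative assumptions, not my exact setup; the two seed quotes, however, are real Illich.

```python
# Minimal few-shot sketch: seed GPT-3 with a couple of real Illich quotes
# and let it continue the numbered list. All parameters are illustrative.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

prompt = (
    "Quotes by Ivan Illich:\n"
    '1. "In a consumer society there are inevitably two kinds of slaves: '
    'the prisoners of addiction and the prisoners of envy."\n'
    '2. "We must rediscover the distinction between hope and expectation."\n'
    '3. "'
)

response = openai.Completion.create(
    engine="davinci",   # GPT-3 base model
    prompt=prompt,
    max_tokens=60,
    temperature=0.9,    # higher temperature -> more inventive "quotes"
    stop=["\n"],        # stop at the end of the generated quote
)

print(response.choices[0].text)
```

Raising the temperature makes the fake quotes more inventive (and less faithful), while the stop sequence keeps the model from rambling past a single saying.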

Have I resurrected Illich? Am I putting words into the mouth of Illich, now dead for nearly 20 years? Would he (or the guardians of his estate) approve? The answers to these questions are: No, Explicitly not (via my use of the word “Fake”), and Almost certainly not. Even generating them started to feel “icky” after a bit. Perhaps someone with as flamboyant a public persona as Marshall McLuhan would have been pleased to be ― what shall we say, “re-animated“? ― in such a fashion, but Illich likely would have recoiled. At least, such is the intuition of myself and noted Illich commentator L.M. Sacasas, who inspired my initial foray into creating an “IllichBot”:

…and while I haven’t abandoned the IllichBot project entirely, Sacasas and I both feel that it would be better if it posted real Illich quotes rather than fake rehashes via GPT-3 or some other model.

Re-creating Dead Artists’ Work

For the AI Theology blog, I was not asked to write about “IllichBot,” but rather on the story of AI creating Nirvana music in a project called “Lost Tapes of the 27 Club.” This story was originally mis-reported (and is still in the Rolling Stone headline metadata) as “Hear ‘New’ Nirvana Song Written, Performed by Artificial Intelligence,” but really the song was “composed” by the AI system and then performed by a (human) cover band. One might ask, how is this different from humans deciding to imitate another artist?

For example, the artist known as The Weeknd sounds almost exactly like the late Michael Jackson. Greta Van Fleet makes songs that sound like Led Zeppelin anew. Songwriters, musicians, producers, and promoters routinely refer to prior work as signifiers when trying to communicate musical ideas. When AI generates a song idea, is that just a “tool” for the human artists? Are games for music composition or songwriting the same as “AI”? These are deep questions regarding “what is art?” and I will refer the reader to Marcus du Sautoy’s bestselling survey The Creativity Code: Art and Innovation in the Age of AI. (See my review here.)

Since that book was published, newer, more sophisticated models have emerged that generate not just ideas and tools but “performance.” OpenAI’s Jukebox effort and the artist-researchers Dadabots generate completely new audio, such as “Country, in the style of Alan Jackson.” Dadabots have even partnered with a heavy metal band and beatbox artist Reeps One to generate entirely new music. When Dadabots used Jukebox to produce the “impossible cover song” of Frank Sinatra singing a Britney Spears song, they received a copyright takedown notice on YouTube…although it’s still unclear who requested the takedown or why.

Photo by Michal Matlon on Unsplash

Theology of Generative Models?

Where’s the theology angle on this? Well, relatedly, mistyping “Dadabots” as “dadbots” in a Google search will get you stories such as “A Son’s Race to Give His Dying Father Artificial Immortality” in which, like our Fake Ivan Illich, a man has trained a generative language model on his father’s statements to produce a chatbot to emulate his dad after he’s gone. Now we’re not merely talking about fake quotes by a theologian, or “AI cover songs,” or even John Dyer’s Worship Song Generator, but “AI cover Dad.” In this case there’s no distraction of pondering interesting legal/copyright issues, and no side-stepping the “uncomfortable” feeling that I personally experience.

One might try to couch the “uncomfortable” feeling in theological terms, as some sort of abhorrence of “digital” divination. It echoes the Biblical story of the witch of Endor temporarily bringing the spirit of Samuel back from the dead at Saul’s request. It can also relate to age-old taboos about defiling the (memory of) the dead. One could try to introduce a distinction between taboo “re-animation” that is the stuff of multiple horror tropes vs. the Christian hope of the resurrection through the power of God in Christ.

However I would stop short of this, because the source of my “icky” feeling stems not from theology but from a simpler objection to anthropomorphism, the “ontological” confusion that results when people try to cast a generative (probabilistic) algorithm as a person. I identify with the scientist-boss in the digital-Frosty-the-Snowman movie Short Circuit:

“It’s a machine, Schroeder. It doesn’t get pissed off. It doesn’t get happy, it doesn’t get sad, it doesn’t laugh at your jokes. It just runs programs.”

Short Circuit

Materialists, given their requirement that the human mind is purely physical, can perhaps anthropomorphize with impunity. I submit that our present round of language and music models, however impressively they may perform, are only a “reflection, as in a glass darkly” of true human intelligence. The error of anthropomorphism goes back millennia. The Christian hope for resurrection, however, addresses being truly reunited with lost loved ones. That means being able to hear new compositions of Haydn, by Haydn himself!

Acknowledgement: The title is an homage to the “Stochastic Parrots” paper of the (former) Google AI ethics team.


Scott H. Hawley is Professor of Physics at Belmont University and a Founding Member of AI and Faith. His writings include the winning entry of FaithTech Institute’s 2020 Writing Contest and the most popular Acoustics Today article of 2020, and have appeared in Perspectives on Science and Christian Faith and The Transhumanism Handbook.

3 Ways to Discover Strong Spiritual Connection Through Zoom

When our first online groups for Integral Christian Network started meeting in 2019, we spent a decent amount of time getting people up to speed on using a somewhat unfamiliar video conferencing technology called “Zoom.”

Obviously, that is no longer quite so necessary. Now we find the opposite problem, which has been deemed “Zoom fatigue.” To which I say: not all Zoom meetings are created equal.

When we participate through technological systems of connection, what is the relationship between what we bring as active partakers and the limitations and offerings of the system itself? We might recognize that the platform is not neutral, but do we see that we are not neutral either? We are co-creating with technology to create new forms of connection and engagement.

Photo by Robert Lukeman on Unsplash

Deepening our Connections Online

At ICN, we gather together in groups of 5-10 for what we call “WeSpace.” These communities of practice connect those from around the world to share together in a meditative prayer practice of “Whole-Body Mystical Awakening.” As you might suppose, these are not meetings of passive, detached online “conferencing.”

Rather, we are seeking to actively engage with one another in spiritual and energetic ways that involve our whole bodies and our spiritual faculties—and a felt-sense of the interconnected space among us, not just our own separate, interior experiences in proximity to others. To do this, we must be present and engaged with one another with a fuller sort of attention, with openness and genuine care.

Sound a little scary? It can be. But don’t we all both fear and crave intimacy?

A surprising bit of feedback that we’ve received often is that it may actually be easier to be present in this way online. Coming from the safety of our own home, we are in a comfortable space. Women talk about not having to be on alert for any threats of unwanted advances or physical danger. The exit door is always just a click away—not that we want to be halfway out the door of course, but it’s some comfort to know you can always bail if things get dicey.

We are also face-to-face with one another. Or as we say it, heart-to-heart. This has a different felt sense than the circular or horizontal shoulder-to-shoulder dynamics of shared physical space in churches or otherwise.

In our groups, we engage the body in our meditative practice, bringing awareness and presence to our physical embodiment in the time and space we are sharing. We do this for many reasons, but it also serves to counter the sometimes “disembodied” presence many bring to digital spaces. This allows us to be more present to the fullness of ourselves—but also to one another in the WeSpace “field” of interconnection.

Photo by Jr Korpa on Unsplash

Creating a Field

If you’ve ever been part of a Zoom meeting where all participants have their videos off except for the presenter or teacher, you know the opposite of what I’m talking about. We might as well be watching a YouTube video.

And yet, can you feel a difference? Even those black boxes with names or pictures reflect a presence that you not just know is there, but perhaps even feel a little. You have the awareness of some kind of collective, shared space. It isn’t the same as watching a YouTube video, is it?

What does it look like to lean into the opposite movement, to press into rather than pull away from the interconnected space together? Of course, you need the right type of group and setting—though you can do it yourself in any meeting, just as you can be more or less present to others when sharing a physical space. Online space, though, has some differences.

Here are a few things we’ve found that help.

First, overcome skepticism.

One of the things we hear over and over is the surprise people express about just how much they can actually feel and sense. Many come in skeptical that they can feel as connected to one another and God in an online space. “I didn’t think this would work over Zoom” is a regular refrain.

Much more is possible than you might think.

Research from the HeartMath Institute has shown that our hearts create an electromagnetic field that can be detected up to three feet away from our bodies. In our meetings, we have seen over and over again that the spiritual energetics between us are not bound by space at all—perhaps not even by time.

Of course, we don’t have the research here yet, nor do I know quite how it would be measured. But repeated anecdotal evidence continues to mount in our and other group experiences.

Second, enter the space.

In our meetings, we ask people to keep their heart facing the group and to have their videos on the majority of the time. To create a shared field, we must be present to one another with attention and engagement. It’s not only distracting when someone is checking their phone or looking at something else; it can literally be felt as a diminishment of their presence and therefore of the energy of the collective field.

Being fully present is a challenge both online and off—and we’re not always aware how our movement of attention through digital portals affects our presence. We need to become more conscious of this effect and seek to cultivate spaces with fewer distractions and more compelling engagement. This doesn’t mean everyone must speak, but that we keep attention and give ourselves to one another energetically.

We’ve found this comes not through putting on a better show to capture attention, but engaging more than just the mind in our shared space. When we’re present with our hearts, grounded in our bodies, and centered in our guts, we find that we’re less easily taken away by the wanderings of our mind.

Third, discover WeSpace.

We are not separate from one another. Many are beginning to see this in the way our systems and technologies work. A growing recognition of collective values and cultural conditioning shows that our inner lives and decisions are not nearly as independent as we once thought. And spiritually, the age of individualism is fading. “The next Buddha will be a Sangha,” Thich Nhat Hanh has declared, meaning that community is the great spiritual teacher.

Technology is often viewed as a consumer good to serve individuals and systems. But what if we begin more and more to utilize it not just for profits and efficiency, but for enhancing our ability to craft and cultivate authentic community of depth and presence with one another?

In so doing, we just might discover the next great spiritual teacher.

Us.  


Luke Healy is the co-founder of Integral Christian Network, an endeavor to help further the loving evolution of Christianity and the world. He is passionate about pioneering innovation in forms of spiritual community, in gathering like-minded and like-hearted pilgrims on the spiritual journey, and making mystical experience of God accessible in individual and collective practice.

Netflix “Eden”: Human Frailty in a Technological Paradise

Recently, my 11-year-old daughter told me she wanted to watch anime. I have watched a few and was a bit concerned about her request. While I have come to really appreciate this art form, I feared that some thematic elements would not be appropriate for her 11-year-old mind. Yet, after watching the first episode of Netflix’s Eden, my concerns were allayed and I invited my two oldest (11 and 9) to watch it with me. With only 4 episodes of 25 minutes each, the series makes for a great way to spend a lazy Saturday afternoon. Thankfully, beyond being suitable, there was enough for me to reflect on. In fact, captivating characters and an enchanting soundtrack moved me to tears, making Netflix’s Eden a delightful exploration of human frailty.

Here is my review of this heart-warming, beautifully written story.

Official Trailer

A Simple but Compelling Plot

Without giving away much, the story revolves around a simple structure. From the onset we learn that no humans have lived on earth for 1,000 years. Self-sufficient robots have successfully turned a polluted wasteland into a lush oasis. The first scenes show groups of robots tending and harvesting an apple orchard.

Two of these robots stumble upon an unlikely find: a human child. Preserved in a cryogenic capsule, the toddler stumbles out and wails. The robots are confused and helpless as to how to respond. They quickly identify her as a human bio-form but cannot comprehend what her crying means.

After the initial shock, the toddler turns to the robots and calls them “papa” and “mama,” kicking off the story. The plot develops around the idea of two robots raising a human child on a humanless planet earth. We also learn that humans are perceived as a threat and must be surrendered to the authorities. In spite of their programming, the robots choose to hide and protect the girl.

Photo by Bruno Melo on Unsplash

Are Humans Essential for Life to Flourish on Earth?

Even with only 4 episodes, the anime packs quite a philosophical punch. From a theological perspective, the careful observer quickly sees why the show is named after the Biblical garden. It is an allusion to the Genesis story where life on earth begins, yet with a twist. Now Eden is lush and thriving without human interference. It is as if God is recreating earth through technological means. This echoes Francis Bacon’s vision of technology as a way to mitigate the destructive effects of the fall.

Later we learn the planet had become uninhabitable. The robots’ creators envisioned a solution that entailed cryogenically freezing a number of humans while the robots worked to restore earth back to its previous glory. The plan apparently works except for the wrinkle of this girl waking up before her assigned time. Just like in the original story of Eden, humans come to mess it up.

Embedded in this narrative is the provocative question of humanity’s ultimate utility for life on the planet. After all, if machines are able to manage the earth flawlessly, why introduce human error? Of course, the flip side of the question is the belief that machines in themselves are free of error. Putting that aside, the question is still valid.

Photo by Alesia Kazantceva on Unsplash

Human Frailty and Renewal

Watching the story unfold, I could not help but reflect on Dr. Dorabantu’s past post on how AI could help us see the image of God in our vulnerability. That is, learning that robots could surpass us in rationality, we would have to attribute our uniqueness not to a narrow view of intelligence but to our ability to love. The anime seems to be getting at the heart of this question, and it gets there by using AI. It is in the robots’ journey to understand humanity’s essence that we learn about what makes us unique in creation. In this way, the robots become the mirrors that reflect our image back to us.

Another parallel here is with the biblical story of Noah. In a world destroyed by pollution and revived through technological ingenuity, the ark is no longer a boat but a capsule. Humans are preserved by pausing the aging process in their bodies, a clear nod to Transhumanism. The combination of cryogenics and advanced AI can preserve human life on earth, albeit for a limited number of humans.

I left the story feeling grateful for our imperfect humanity. It is unfortunate that Christian theology, in an effort to paint a perfect God, has in turn made human vulnerability seem undesirable. Without denying our potential for harm and destruction, namely our sinfulness, it is time Christian theology embraced and celebrated human vulnerability as part of our Imago Dei. In this way, Netflix’s Eden helps put human frailty back in the conversation.

How AI and Faith Communities Can Empower Climate Resilience in Cities

AI technologies continue to empower humanity for good. In a previous blog, we explored how AI was empowering government agencies to fight deforestation in the Amazon. In this blog, we discuss the role AI is playing in building climate resilience in cities. We will also look at how faith communities can use AI-enabled microgrids to serve communities hit by climate disasters.

A Changing Climate Puts Cities in Harm’s Way

I recently listened to an insightful Technopolis podcast on how cities are preparing for an increased incidence of natural disasters. The episode discussed the manifold ways city leaders are using technology to prepare for, predict, and mitigate the impact of climate events. This is a complex challenge that requires a combination of good governance, technological tools, and planning to tackle.

Climate resilience is not just about decreasing carbon footprints; it is also about preparing for the increased incidence of extreme weather. Whether it is fires in California, typhoons in East Asia, or severe droughts in Northern Africa, the planet is in for a bumpy ride in the coming decades. These events will also exacerbate existing problems such as air pollution, water scarcity, and heat-related illness in urban areas. Governments and civil society groups need to start bracing for this reality by taking bold preventive steps in the present.

Cities illustrate the costs of delaying action on climate change by enshrining resource-intensive infrastructure and behaviors. The choices cities make today will determine their ability to handle climate change and reap the benefits of resource-efficient growth. Currently, 51% of the world’s population lives in cities, and within a generation an estimated two-thirds of the world’s population will live in cities. Hence, addressing cities’ vulnerabilities will be crucial for human life on the planet.

Photo by Karim MANJRA on Unsplash

AI and Climate Resilience

AI is a powerful tool to build climate resilience. We can use it to better understand our current reality, predict future weather events, create new products and services, and minimize human impact. By doing so, we can not only save and improve lives but also create a healthier world and a more efficient economy.

Deep learning, for example, enables better predictions and estimates of climate change than ever before. This information can be used to identify major vulnerabilities and risk zones. In the case of fires, for example, better prediction can not only identify risk areas but also help us understand how a fire will spread through those areas. As you can imagine, predicting the trajectory of a fire is a complex task that involves a plethora of variables related to wind, vegetation, humidity, and other factors.

The Gifts of Satellite Imagery

Another crucial area in which AI is becoming essential is satellite imagery. Research led by Google, the Mila Institute, and the German Aerospace Center harnesses AI to develop and make sense of extensive datasets on Earth. This in turn empowers us to better understand climate change from a global perspective and to act accordingly.

Combining integrated global imagery with sophisticated modeling capabilities gives communities at risk precious advance warning to prepare. Governments can work with citizens living in these areas to strengthen their ability to mitigate extreme climate impacts. This will become particularly salient in coastal communities that should see their shores recede in the coming decades.

This is just one example of how AI can play a prominent role in climate resilience. A recent paper titled “Tackling Climate Change with Machine Learning” identified 13 areas where ML can be applied. They include but are not limited to energy consumption, CO2 removal, education, solar energy, engineering, and finance. Opportunities in these areas include the creation of new low-carbon materials, better monitoring of deforestation, and cleaner transport.

Photo by Biel Morro on Unsplash

Microgrids and Faith Communities

If climate change is the defining test of our generation, then technology alone will not be enough. As much as AI can help find solutions, the threat calls for collective action at unprecedented levels. This is both a challenge and an opportunity for faith communities seeking to re-imagine a future where their relevance surpasses the confines of their pews.

Thankfully, faith communities already play a crucial role in disaster relief. Their buildings often double as shelters and service centers when calamity strikes. Yet if climate-related events become more frequent, these institutions must expand the range of services they offer to affected populations.

One example is the creation of AI-managed microgrids. These are small, easily controllable electricity systems consisting of one or more generating units connected to nearby users and operated locally. Microgrids contain all the elements of a complex energy system, but because they maintain a balance between production and consumption, they can operate independently of the grid. These systems work well with renewable energy sources, further decreasing our reliance on fossil fuels.
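As a toy illustration of that balancing act, here is a sketch of the hour-by-hour dispatch decision a microgrid controller must make; all numbers and rules are illustrative assumptions, not a real design.

```python
def dispatch(solar_kw, load_kw, battery_kwh, capacity_kwh=50.0):
    """Balance one hour of production and consumption; return the new
    battery level and the net grid exchange (negative = export)."""
    surplus = solar_kw - load_kw
    if surplus >= 0:
        charge = min(surplus, capacity_kwh - battery_kwh)
        battery_kwh += charge
        grid = -(surplus - charge)      # export any leftover power
    else:
        discharge = min(-surplus, battery_kwh)
        battery_kwh -= discharge
        grid = -surplus - discharge     # import only the remaining shortfall
    return battery_kwh, grid

battery = 10.0
for solar, load in [(8, 5), (6, 7), (0, 4)]:   # sunny, cloudy, night hours
    battery, grid = dispatch(solar, load, battery)
    print(f"battery={battery:.1f} kWh, grid exchange={grid:.1f} kW")
```

An AI layer would sit on top of such a loop, forecasting the solar and load figures hours ahead so the battery is full before a storm hits; the point is that the system can keep serving its neighbors even when the wider grid is down.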

When climate disaster strikes, one of the first things to go is electricity. What if houses of worship, equipped with microgrids, became the places to go for those without power? When the grid fails, houses of worship could become the lifeline for a neighborhood, helping impacted populations communicate with family, charge their phones, and find shelter from cold nights. Furthermore, they could sell their excess energy units in the market, finding new sources of funding for their spiritual mission.

Microgrids in churches, synagogues, and mosques – that’s an idea the world can believe in. It is also a great step towards climate resilience.

Klara and the Sun: Robotic Redemption for a Dystopian World

In the previous blog, we discussed how Klara, the AI and main character of Kazuo Ishiguro’s latest novel, develops a religious devotion to the Sun. In this second and final installment of the book review, I explore how Klara impacts the people around her. Klara and the Sun shows how they become better humans for interacting with her in a dystopian world.

Photo by Randy Jacob on Unsplash

Gene Inequality

Because humans are only supporting characters in this novel, we only learn about their world later in the book. The author does not give a year but sets the story in the near future. Society is sharply divided along class and racial lines. Gene editing has become a reality, and parents can now opt to have children born with the traits that will help them succeed in life.

This stark choice not only affects a family’s fate but also re-orients the way society allocates opportunities. Colleges no longer accept non-edited kids, meaning that the natural birth path puts a child at a disadvantage. Yet this choice comes at a cost. Experimenting with genes also means a higher mortality rate for children and adolescents. That is the case for the family that purchases Klara: they have lost their first daughter, and now their second one is sick.

These gene-edited children are educated at home, remotely, by specialized tutors. This turned out to be an ironic detail in a pandemic year when most children in the world learned through Zoom. They socialize through prearranged gatherings in homes. The well-to-do live in gated communities, supposedly because the world has become unsafe. This is just one of the many aspects of the dystopian world of Klara and the Sun.

Photo by Andy Kelly on Unsplash

AI Companionship and Remembrance

A secondary plot line in the novel is the relationship between the teenage Josie, Klara’s owner, and her friend Rick, who is not gene-edited. The teens are coming of age in this tumultuous period, where the viability of their relationship is in question. The adults discuss whether they should even be together in a society that delineates separate paths assigned at birth. One has safe passage into college and stable jobs, while the other is shut out from opportunity by the sheer fact that his parents did not interfere with nature.

In this world, droids are common companions for wealthy children. Since many no longer go to school, the droid plays the role of nanny, friend, guardian, and at times tutor. Even so, there is resistance to them in the public square, where resentful segments of society view their presence with contempt. They represent a symbol of status for the affluent and a menace to the working class. Yet their owners often treat them as merchandise. At best they are seen as servants, and at worst as disposable toys that can be tossed around for amusement.

The novel also hints at the use of AI to extend the life of loved ones. AI remembrance, shall we say. That is, programming AI droids to take the place of a deceased human. This seems like a natural complement in a world where parents have no guarantee that their gene-edited children will live to adulthood. For some, the AI companion could live out the years their children were denied.

Klara The Therapist

In the world described above, the AF (artificial friend) plays a pivotal role in family life, not just for the children they accompany but also for the parents. In effect, because of her robotic impartiality, Klara serves as a safe confidant to Josie, Rick, her mother, and her dad. The novel includes intimate one-on-one conversations where Klara offers a fresh take on their troubles. Her gentle and unpretentious perspective prods them to do what is right even when it is hard. In this way, she also plays a moral role, reminding humans of their best instincts.

Yet humans are not the only ones impacted. Klara also grows and matures through her interaction with them. Navigating the tensions, joys, and sorrows of human relationships, she uncovers the many layers of human emotion. Though lacking tear ducts and a beating heart, she is not a prisoner of detached rationality. She suffers with the pain of the humans around her, cares deeply about their well-being, and is willing to sacrifice her own future to ensure they have one. In short, she is programmed to serve them not as a dutiful pet but as a caring friend. In doing so, she embodies the best of human empathy.

The reader joins Klara on her path to maturity, and it is a delightful ride. As she observes and learns about the people around her, the human readers get a mirror of themselves. We see our struggles, our pettiness, our hopes and expectations reflected in this rich story. For those who read with an open heart, the book also offers an opportunity for transformation and growth.

Final Reflections

In an insightful series of four blogs, Dr. Dorabantu argues that future general AI will be hyper-rational, forcing us to re-imagine the essence of who we are. Yet Ishiguro presents an alternative hypothesis. What if, instead, AI technology led to the development of empathetic servant companions? Could a machine express both rational and emotional intelligence?

Emotionally intelligent AI would help us redefine the image of God not by contrast but by reinforcement. That is, instead of simply demonstrating our limitations in rationality, it could expand our potential for empathy. The novel shows how AI can act as a therapist or spiritual guide. Through empathetic dialogue, it can help us find the best of our moral senses. In short, it can help us love better.

Finally, the book raises important ethical questions about gene editing’s promises and dangers. What would it look like to live in a world where “designer babies” are commonplace? Could gene editing combined with AI lead to the harrowing scenario where droids serve as complete replacements for humans? While Ishiguro’s future is fictitious, he speculates on technologies that already exist. Gene editing and narrow AI are a reality, while general AI is plausibly within reach.

We do well to seriously consider their impact before a small group in Silicon Valley decides how to maximize profit from them. This may be the greatest lesson we can take from Klara and the Sun and its dystopian world.

Vulnerable like God: Perfect Machines and Imperfect Humans

This four-part series started with the suggestion that AI can be of real help to theologians, in their attempt to better understand what makes humans distinctive and in the image of God. We have since noted how different machine intelligence is from human intelligence, and how alien-like an intelligent robot could be ‘on the inside’, in spite of its humanlike outward behavior.

For theological anthropology, the main takeaway is that intelligence – understood as rationality and problem-solving – is not the defining feature of human nature. We’ve long been the most intelligent and capable creature in town, but that might soon change, with the emergence of AI. What makes us special and in the image of God is thus not some intellectual capacity (in theology, this is known as the substantive interpretation), nor something that we can do on God’s behalf (the functional interpretation), because AI could soon surpass us in both respects.

The interpretation of the imago Dei that seems to account best for the scenario of human-level AI is the relational one. According to it, the image of God is our special I-Thou relationship with God, the fact that we can be an authentic Thou, capable of receiving God’s love and responding back. We exist only because God calls us into existence. Our relationship with God is therefore the deepest foundation of our ontology. Furthermore, we are deeply relational beings. Our growth and fulfillment can only be realized in authentic personal relationships with other human beings and with God.

AI and Authentic Relationality

It is not surprising that the key to human distinctiveness is profoundly relational. Alan Turing tapped right into this intuition when he designed his eponymous test for AI. Turing’s test is, in fact, a measurement of AI’s ability to relate like us. Unsurprisingly, the most advanced AIs still struggle when it comes to simulating relationships, and none has yet passed the Turing test.

But even if a program someday convincingly relates to humans, will that be an authentic relationship? We’ve already seen that human-level AI will be anything but humanlike ‘on the inside.’ Intelligent robots might become capable of speaking and acting like us, but they will be completely different from us in terms of their internal motivation or meaning systems. What kind of relationship could there be between us and them, when we’d have so little in common?

We long for other humans precisely because we are not self-sufficient. We seek others because we want to discover them, and our own selves, through relationships. We fall in love because we are not completely rational. Human-level AI will be the opposite of that: self-sufficient, perfectly rational, and with a quasi-complete knowledge of itself.

The Quirkiness of Human Intelligence

Our limitations are instrumental to the kind of relationships that we have with each other. An argument can thus be made that a significant degree of cognitive and physical vulnerability is required for authentic relationality to be possible. There can be no authentic relationship without the two parties intentionally making themselves vulnerable to each other, opening to one another outside any transactional logic.

Photo by Duy Pham on Unsplash

A hyper-rational being would likely have serious difficulty engaging fully in relationships and making itself totally vulnerable to the loved other. It surely does not sound very smart.

Nevertheless, we humans do this tirelessly and often at high cost, perhaps exactly because we are not as intelligent and goal-oriented as AI. Although that appears illogical, it is such experiences that give meaning and fulfillment to our lives.

From an evolutionary perspective, it is puzzling that our species evolved to be this way. Evolution promotes organisms that are better at adapting to the challenges of their environment, thus at solving practical survival and reproduction problems. It is therefore unsurprising that intelligence-as-problem-solving is a common feature of evolved organisms, and this is precisely the direction in which AI seems to develop.

What is strange in the vast space of possible intelligences is our quirky type of intelligence, one heavily optimized for relationship, marked by a bizarre thirst for meaning, and plagued by a surprising degree of irrationality. In the previous post I called out the strangeness of strong AI, but it is we who seem to be the strange ones. However, it is specifically this kind of intellectual imperfection, or vulnerability, that enables us to dwell in the sort of Goldilocks zone of intelligence where personal relationships and the image of God are possible.

Vulnerability, God, Humans and Robots

Photo by Jordan Whitt on Unsplash

If vulnerability is so important for the image of God as relationship, does this imply that God too is vulnerable? Indeed, that seems to be the conclusion, and it’s not surprising at all, especially when we think of Christ. Through God’s incarnation, suffering, and voluntary death, we have been revealed a deeply vulnerable side of the divine. God is not an indifferent creator of the world, nor a dispassionate almighty, all-intelligent ruler. God cares deeply for creation, to the extent of committing to the supreme self-sacrifice to redeem it (Jn. 3: 16).

This means that we are most like God not when we are at our smartest or strongest, but when we engage in this kind of hyper-empathetic, though not entirely logical, behavior.

Compared to AI, we might look stupid, irrational, and outdated, but it is paradoxically due to these limitations that we are able to cultivate our divine likeness through loving, authentic, personal relationships. If looking at AI teaches theologians anything, it is that our limitations are just as important as our capabilities. We are vulnerable, just as our God has revealed to be vulnerable. Being like God does not necessarily mean being more intelligent, especially when intelligence is seen as rationality or problem-solving.

Christ – whether considered historically or symbolically – shows that what we value most about human nature are traits like empathy, meekness, and forgiveness, which are eminently relational qualities. Behind such qualities are ways of thinking rooted more in the irrational than in the rational parts of our minds. We should then wholeheartedly join the apostle Paul in “boast[ing] all the more gladly about [our] weaknesses […] for when [we] are weak, then [we] are strong” (2 Cor. 12: 9-10).

Human-level, but not Humanlike: The Strangeness of Strong AI

The emergence of AI opens up exciting new avenues of thought, promising to add some clarity to our understanding of intelligence and of the relation between intelligence and consciousness. For Christian anthropology, observing which aspects of human cognition are easily replicated in machines can be of particular help in refining the theological definition of human distinctiveness and the image of God.

However, by far the most theologically exciting scenario is the possibility of human-level AI, or artificial general intelligence (AGI), the Holy Grail of AI research. AGI would be capable of convincingly replicating human behavior. It could, in principle, pass as human, if it chose to. This is precisely how the Turing Test is designed to work. But how humanlike would a human-level AI really be?

Computer programs have already become capable of impressive things, which, when done by humans, require some of our ‘highest’ forms of intelligence. However, the way AI approaches such tasks is very non-humanlike, as explained in the previous post. If the current paradigm continues its march towards human-level intelligence, what could we expect AGI to be like? What kind of creature might such an intelligent robot be? How humanlike would it be? The short answer is, not much, or even not at all.

The Problem of Consciousness

Philosophically, there is a huge difference between what John Searle calls ‘strong’ and ‘weak’ AI. While strong AI would be an emulation of intelligence, weak AI would be a mere simulation. The two would be virtually indistinguishable on the ‘outside,’ but very different ‘on the inside.’ Strong AI would be someone, a thinking entity endowed with consciousness, while weak AI would be a something, a clockwork machine completely empty on the inside.

It is still too early to know whether AGI will be strong or weak, because we currently lack a good theory of how consciousness arises from inert matter. In philosophy, this is known as “the hard problem of consciousness.” But if current AI applications are any indication, weak AGI is a much more likely scenario than strong AGI. So far, AI has made significant progress in problem-solving, but it has made zero progress in developing any form of consciousness or ‘inside-out-ness.’

Even if AGI does somehow become strong AI (how could we even tell?), there are good reasons to believe that it would be a very alien type of intelligence.

Photo by Maximalfocus on Unsplash

The Mystery of Being Human

As John McCarthy – one of the founding fathers of AI – argues, an AGI would have complete access to its internal states and algorithms. Just think about how weird that is! Humans have a very limited knowledge of what happens ‘on their inside.’ We only see the tip of the iceberg, because only a tiny fraction of our internal processes enter our stream of consciousness. Most information remains unconscious, and that is crucial for how we perceive, feel, and act.

Most of the time we have no idea why we do the things we do, even though we might fabricate compelling, post-factum rationalizations of our behavior. But would we really want to know such things and always act in a perfectly rational manner? Or, even better, would we be friends or lovers with such a hyper-rational person? I doubt it.

Part of what makes us what we are and what makes human life enjoyable is arguably related to the chunk of our mind that is not completely rational. Our whole lives are journeys of self-discovery, and with each experience and relationship we get a better understanding of who we are. That is largely what motivates us to reach beyond our own self and do stuff. Just think of how much of human art is driven precisely by a longing to touch deeper truths about oneself, which are not easily accessible otherwise.

Strong AI could be the opposite of that. Robots might understand their own algorithms much better than we do, without any need to discover anything further. They might be able to communicate such information directly as raw data, without needing the approximation/encryption of metaphors. As Ian McEwan’s fictitious robot character emphatically declares, most human literature would be completely redundant for such creatures.

The Uncanny Worldview of an Intelligent Robot

Intelligent robots would likely have a very different perception of the world. With access to Bluetooth and WiFi, they would be able to ‘smell’ other connected devices and develop a sort of ‘sixth sense’ of knowing when a particular person is approaching merely from their Bluetooth signature. As roboticist Rodney Brooks shows, robots will soon be able to measure one’s breathing and heart rate without any biometric sensor, simply by analyzing how a person’s physical presence slightly changes the behavior of WiFi signals.

The technology for this already exists, and it could enable the robot to have access to a totally different kind of information about the humans around, such as their emotional state or health. Similar technologies of detecting changes in the radio field could allow the robots to do something akin to echolocation and know if they are surrounded by wood, stone, or metal. Just imagine how alien a creature endowed with such senses would be!

AGI might also perceive time very differently from us, because it would think much faster. The ‘wetware’ of our biological brains constrains the speed at which electrical signals can travel. Electronic brains, however, could enable speeds closer to the ultimate physical limit, the speed of light. Minds running on such faster hardware would also think proportionally faster, making their experience of the passage of time proportionally slower.
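A back-of-the-envelope comparison shows why; both speeds below are rough order-of-magnitude assumptions (fast myelinated nerve fibers versus electrical signals in copper or fiber, at about two-thirds the speed of light).

```python
neural_speed = 1e2       # m/s, fast myelinated axons
electronic_speed = 2e8   # m/s, signals in copper/fiber (~2/3 of c)
print(electronic_speed / neural_speed)   # ~2,000,000x
```

Even granting biology every other advantage, the raw signal-speed gap dwarfs the ‘ten thousand times’ figure used below.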

If AGI thought ‘only’ ten thousand times faster than humans, a conservative estimate, it would inhabit a completely different world. It is difficult to imagine how such creatures might regard humans, but futurist James Lovelock chillingly estimates that “the experience of watching your garden grow gives you some idea of how future AI systems will feel when observing human life.”

The way AGI is depicted in sci-fi (e.g. Terminator, Ex Machina, or Westworld) might rightly give us existential shivers. But if the predictions above are anywhere near right, then AGI might turn out to be weirder than our wildest sci-fi dreams. AI might reach human-level, but it would most likely be radically non-humanlike.

Is this good or bad news for theological anthropology? How would the emergence of such an alien type of intelligence affect our understanding of humanity and its imago Dei status? The next post, the last one in this four-part series, wrestles head-on with this question.

How Does AI Compare with Human Intelligence? A Critical Look

In the previous post I argued that AI can be of tremendous help in our theological attempt to better understand what makes humans distinctive and in the image of God. But before jumping to theological conclusions, it is worth spending some time trying to understand what kind of intelligence machines are currently developing, and how much similarity there is between human and artificial intelligence.

Image by Gordon Johnson from Pixabay

The short answer is, not much. The current game in AI seems to be the following: try to replicate human capabilities as well as possible, regardless of how you do it. As long as an AI program produces the desired output, it does not matter how humanlike its methods are. The end result is much more important than what goes on ‘on the inside,’ even more so in an industry driven by enormous financial stakes.

Good Old Fashioned AI

This approach was already at work in the first wave of AI, also known as symbolic AI or GOFAI (good old-fashioned AI). Starting in the 1950s, the AI pioneers struggled to replicate our ability to do math and play chess, considered the epitome of human intelligence, without any real concern for how such results were achieved. They simply assumed that this must be how the human mind operates at the most fundamental level: through the logical manipulation of a finite number of symbols.

GOFAI ultimately managed to reach human level in chess. In 1997, IBM’s Deep Blue defeated the human world champion, Garry Kasparov, but it did so via brute force, by simply calculating millions of variations in advance. That is obviously not how humans play chess.

Although GOFAI worked well for ‘high’ cognitive tasks, it was completely incompetent in more ‘mundane’ tasks, such as vision or kinesthetic coordination. As roboticist Hans Moravec famously observed, it is paradoxically easier to replicate the higher functions of human cognition than to endow a machine with the perceptive and mobility skills of a one-year-old. What this means is that symbolic thinking is not how human intelligence really works.

The Advent of Machine Learning

Photo by Kevin Ku on Unsplash

What has replaced symbolic AI since roughly the turn of the millennium is the approach known as machine learning (ML). One subset of ML that has proved wildly successful is deep learning, which uses layers of artificial neural networks. Loosely inspired by the brain’s anatomy, this approach aims to be a better approximation of human cognition. Unlike previous AI versions, it is not instructed on how to think. Instead, these programs are fed huge sets of selected data in order to develop their own rules for how the data should be interpreted.

For example, instead of teaching an ML algorithm that a cat is a furry mammal with four paws, pointed ears, and so forth, the program is trained on hundreds of thousands of pictures of cats and non-cats, by being ‘rewarded’ or ‘punished’ every time it makes a guess about what’s in the picture. After extensive training, some neural pathways become strengthened, while others are weakened or discarded. The end result is that the algorithm does learn to recognize cats. The flip side, however, is that its human programmers no longer necessarily understand how the conclusions are reached. It is a sort of mathematical magic.
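To make the description above concrete, here is a minimal PyTorch sketch of such a cat/non-cat training loop. The architecture, hyperparameters, and stand-in random data are illustrative assumptions, not any particular production system.

```python
import torch
import torch.nn as nn

# A small convolutional network: layers of artificial neurons,
# loosely inspired by the brain's anatomy.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 2),                   # two outputs: cat / non-cat
)

loss_fn = nn.CrossEntropyLoss()         # the "reward/punishment" signal
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

def train_step(images, labels):
    """One guess-and-correct cycle over a batch of labeled pictures."""
    logits = model(images)              # the network's guesses
    loss = loss_fn(logits, labels)      # how wrong the guesses were
    optimizer.zero_grad()
    loss.backward()                     # trace the error back through the layers
    optimizer.step()                    # strengthen/weaken neural pathways
    return loss.item()

# Stand-in batch: eight random 64x64 "photos" with cat (1) / non-cat (0) labels.
images = torch.randn(8, 3, 64, 64)
labels = torch.randint(0, 2, (8,))
print(train_step(images, labels))
```

After many thousands of such steps over real photos, the network does learn to recognize cats, yet what it has learned is spread across thousands of weights, which is precisely why its conclusions are so hard to interrogate.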

ML algorithms of this kind are behind the impressive successes of contemporary AI. They can recognize objects and faces, spot cancer better than human pathologists, translate text instantly from one language to another, produce coherent prose, or simply converse with us as smart assistants. Does this mean that AI is finally starting to think like us? Not really.

Even when machines manage to achieve human or super-human level in certain cognitive tasks, they do it in a very different fashion. Humans don’t need millions of examples to learn something; they sometimes do very well with as little as one example. Humans can also usually provide explanations for their conclusions, whereas ML programs are often ‘black boxes’ too complex to interrogate.

More importantly, the notion of common sense is completely lacking in AI algorithms. Even when their average performance is better than that of human experts, the few mistakes that they do make reveal a very disturbing lack of understanding on their part. Images that are intentionally perturbed so slightly that the adjustment is imperceptible to humans can still cause algorithms to misclassify them completely. It has been shown, for example, that sticking minuscule white stickers, almost imperceptible to the human eye, on a Stop sign on the road causes the AI algorithms used in self-driving vehicles to misclassify it as a Speed Limit 45 sign. When machines fail, they fail badly, and for different reasons than us.
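For readers curious what such an attack looks like in code, below is a minimal sketch of the canonical digital version of this trick, the fast gradient sign method (FGSM) of Goodfellow et al.; the stop-sign attack cited above used physical stickers, but the principle is the same. The model and loss function are assumed to come from a classifier setup like the one sketched earlier.

```python
import torch

def fgsm_perturb(model, loss_fn, images, labels, epsilon=0.007):
    """Nudge every pixel slightly in the direction that increases the loss."""
    images = images.clone().requires_grad_(True)
    loss = loss_fn(model(images), labels)
    loss.backward()                     # how does the loss change per pixel?
    # A change of epsilon per pixel is invisible to a human eye...
    adversarial = images + epsilon * images.grad.sign()
    # ...yet is often enough to flip the model's classification outright.
    return adversarial.detach()
```

The unsettling part is not that the image changes, but that a change imperceptible to us completely changes the machine’s ‘mind’: there is no underlying concept of a stop sign to fall back on.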

Machine Learning vs Human Intelligence

Perhaps the most important difference between artificial and human intelligence is the former’s complete lack of any form of consciousness. In the words of philosophers Thomas Nagel and David Chalmers, “it feels like something” to be a human or a bat, although it is very difficult to pinpoint exactly what that feeling is and how it arises. However, we can intuitively say that very likely it doesn’t feel like anything to be a computer program or a robot, or at least not yet. So far, AI has made significant progress in problem-solving, but it has made zero progress in developing any form of consciousness or ‘inside-out-ness.’

Current AI is therefore very different from human intelligence. Although we might notice a growing functional overlap between the two, they differ strikingly in terms of structure, methodology, and some might even say ontology. Artificial and human intelligence might be capable of similar things, but that does not make them similar phenomena. Machines have in many respects already reached human level, but in a very non-humanlike fashion.

For Christian anthropology, such observations are particularly important, because they can inform how we think of the developments in AI and how we understand our own distinctiveness as intelligent beings, created in the image of God. In the next post, we look into the future, imagining what kind of creature an intelligent robot might be, and how humanlike we might expect human-level AI to become.