Blog

How Coded Bias Makes a Powerful Case for Algorithmic Justice

What do you do when your computer can’t recognize your face? In a previous blog, we explored the potential applications of emotional AI. At the heart of this technology is the ability to recognize faces. Facial recognition (FR) is gaining widespread attention for its hidden dangers. This short review of Coded Bias summarizes the story of female researchers who opened the black box of major applications that use FR. What they found is a warning to all of us, making Coded Bias a bold call for algorithmic justice.


Official Trailer

Coded Bias Short Review: Exposing the Inaccuracies of Facial Recognition

The secret is out: FR algorithms are far better at recognizing white male faces than those of any other group. The difference is not trivial. Joy Buolamwini, MIT researcher and the film’s main character, found that dark-skinned women were misclassified up to 35% of the time, compared to less than 1% for white male faces! Error rates of this magnitude can have life-altering consequences when used in policing, judicial decisions, or surveillance applications.

Screen Capture

It all started when Joy was looking for facial recognition software to recognize her face for an art project. She had to put on a white mask in order to be detected by the camera. This initial experience led her down a new path of research. If she was experiencing this problem, who else was? And how was it impacting others who looked like her? Eventually, she stumbled upon Cathy O’Neil’s Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy, discovering a world of algorithmic activism already underway.

The documentary weaves in multiple cases where FR misclassification is having a devastating impact on people’s lives. Unfortunately, the burden falls mostly on the poor and people of color. From an apartment complex in Brooklyn to the streets of London to a school district in Houston, local activists are mobilizing political energy to expose the downsides of FR. In doing so, Netflix’s Coded Bias not only shows the problem but also sheds light on the growing movement that arose to correct it. In that, we can find hope.

If this wasn’t clear before, here it is: watch the documentary Coded Bias multiple times. This one is worth your time.

The Call for Algorithmic Justice

The fight for equality in the 21st century will be centered on algorithmic justice. What does that mean? Algorithms are fast becoming embedded in growing areas of decision-making. From movie recommendations to hiring, cute apps to judicial decisions, self-driving cars to who gets to rent a house, algorithms are influencing and dictating decisions.

Yet, they are only as good as the data used to train them. If that data contains present inequities or is biased toward ruling majorities, algorithms will inevitably and disproportionately impact minorities. Hence, the fight for algorithmic justice starts with the regulation and monitoring of their results. The current lack of transparency in the process is no longer acceptable. While these corporations may not have intended to discriminate, their neglect of oversight makes them culpable.

Because of its ubiquitous impact, the struggle for algorithmic justice is not just the domain of data scientists and lawmakers. Instead, this is a fight that belongs to all of us. In the next blog, I’ll go over recent efforts to regulate facial recognition. This marks the next step in Coded Bias’s call for algorithmic justice.

Stay tuned.

3 Ways to Discover Strong Spiritual Connection Through Zoom

When our first online groups for Integral Christian Network started meeting in 2019, we spent a decent amount of time getting people up to speed on using a somewhat unfamiliar video conferencing technology called “Zoom.”

Obviously, that is no longer quite so necessary. Now we find the opposite problem, which has been dubbed “Zoom fatigue.” To which I say: not all Zoom meetings are created equal.

When we participate through technological systems of connection, what is the relationship between what we bring as active partakers and the limitations and offerings of the system itself? We might recognize that the platform is not neutral, but do we see that we are not neutral either? We are co-creating new forms of connection and engagement with technology.

Photo by Robert Lukeman on Unsplash

Deepening our Connections Online

At ICN, we gather together in groups of 5-10 for what we call “WeSpace.” These communities of practice connect those from around the world to share together in a meditative prayer practice of “Whole-Body Mystical Awakening.” As you might suppose, these are not meetings of passive, detached online “conferencing.”

Rather, we are seeking to actively engage with one another in spiritual and energetic ways that involve our whole bodies and our spiritual faculties—and a felt-sense of the interconnected space among us, not just our own separate, interior experiences in proximity to others. To do this, we must be present and engaged with one another with a fuller sort of attention, with openness and genuine care.

Sound a little scary? It can be. But don’t we all both fear and crave intimacy?

A surprising bit of feedback we’ve often received is that it may actually be easier to be present in this way online. Coming from the safety of our own home, we are in a comfortable space. Women talk about not having to be on alert for any threats of unwanted advances or physical danger. The exit door is always just a click away—not that we want to be halfway out the door, of course, but it’s some comfort to know you can always bail if things get dicey.

We are also face-to-face with one another. Or as we say it, heart-to-heart. This has a different felt sense than the circular or horizontal shoulder-to-shoulder dynamics of shared physical space in churches or otherwise.

In our groups, we engage the body in our meditative practice, bringing awareness and presence to our physical embodiment in the time and space we are sharing. We do this for many reasons, but it also serves to counter the sometimes “disembodied” presence many bring to digital spaces. This allows us to be more present to the fullness of ourselves—but also to one another in the WeSpace “field” of interconnection.

Photo by Jr Korpa on Unsplash

Creating a Field

If you’ve ever been part of a Zoom meeting where all participants have their videos off except for the presenter or teacher, you know the opposite of what I’m talking about. We might as well be watching a YouTube video.

And yet, can you feel a difference? Even those black boxes with names or pictures reflect a presence that you not just know is there, but perhaps even feel a little. You have the awareness of some kind of collective, shared space. It isn’t the same as watching a YouTube video, is it?

What does it look like to lean into the opposite movement, to press into rather than pull away from the interconnected space together? Of course, it helps to have the right type of group and setting, though you can practice this yourself in any meeting, just as you can be more or less present to others when sharing a physical space. Online space, however, has some differences.

Here are a few things we’ve found that help.

First, overcome skepticism.

One of the things we hear over and over is the surprise people express about just how much they can actually feel and sense. Many come in skeptical that they can feel as connected to one another and God in an online space. “I didn’t think this would work over Zoom” is a regular refrain.

Much more is possible than you might think.

Research from the HeartMath Institute has shown that our hearts create an electromagnetic field that can be detected up to three feet away from our bodies. In our meetings, we have seen over and over again that the spiritual energetics between us are not bound by space at all—perhaps not even by time.


Of course, we don’t have the research here yet, nor do I know quite how it would be measured. But repeated anecdotal evidence continues to mount in our and other group experiences.

Second, enter the space.

In our meetings, we ask people to keep their heart facing the group and have their videos on the majority of the time. To create a shared field, we must be present to one another with attention and engagement. It’s not only distracting when someone is checking their phone or looking at something else, it can literally be felt as a diminishment of their presence and therefore the energy of the collective field.

Being fully present is a challenge both online and off—and we’re not always aware how our movement of attention through digital portals affects our presence. We need to become more conscious of this effect and seek to cultivate spaces with fewer distractions and more compelling engagement. This doesn’t mean everyone must speak, but that we keep attention and give ourselves to one another energetically.

We’ve found this comes not through putting on a better show to capture attention, but engaging more than just the mind in our shared space. When we’re present with our hearts, grounded in our bodies, and centered in our guts, we find that we’re less easily taken away by the wanderings of our mind.

Third, discover WeSpace.

We are not separate from one another. Many are beginning to see this in the way our systems and technologies work. Further recognition of collective values and cultural conditioning shows that our inner lives and decisions are not nearly as independent as we once thought. And spiritually, the age of individualism is fading. “The next Buddha will be a Sangha,” Thich Nhat Hanh has declared, meaning that community is the great spiritual teacher.

Technology is often viewed as a consumer good to serve individuals and systems. But what if we begin more and more to utilize it not just for profits and efficiency, but for enhancing our ability to craft and cultivate authentic community of depth and presence with one another?

In so doing, we just might discover the next great spiritual teacher.

Us.  


Luke Healy is the co-founder of Integral Christian Network, an endeavor to help further the loving evolution of Christianity and the world. He is passionate about pioneering innovation in forms of spiritual community, in gathering like-minded and like-hearted pilgrims on the spiritual journey, and making mystical experience of God accessible in individual and collective practice.

4 Surprising Ways Emotional AI is Making Life Better

It’s been a long night and you have driven for over 12 hours. The exhaustion is such that you are starting to black out. As your eyes close and your head drops, the car slows down, moves to the shoulder, and stops. You wake up and realize your car saved your life. This is just one of many examples of how emotional AI can do good.

It doesn’t take much to see the ethical challenges of computer emotion recognition. Worst-case scenarios of control and abuse quickly come to mind. We need to examine these technologies with a holistic view that weighs their benefits against their risks. Hence, here are 4 examples of how affective computing could make life better.

1. Alert distracted drivers

Detecting signs of fatigue or alcohol intoxication early enough can be the difference between life and death. This applies not only to the driver but also to passengers and occupants of nearby vehicles. Emotional AI can detect bleary eyes, excessive blinking, and other facial signs that the driver is losing focus. Once this mental state is detected, the system can intervene in several ways.

For example, it could alert the driver that they are too tired to drive. It could lower the windows or turn on loud music to jolt the driver into focus. More extreme interventions could include shocking the driver’s hands through the steering wheel, or slowing and stopping the car in a safe area.

As an additional benefit, this technology could also detect other volatile mental states such as anger, mania, and euphoria. This could lead to interventions like changing temperature, music, or even locking the car to keep the driver inside. In effect, this would not only reduce car accidents but could also diminish episodes of road rage.
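The escalating interventions described above can be sketched as a simple rule-based function. To be clear, the signal names, thresholds, and intervention labels below are purely illustrative assumptions; a production system would infer the driver's state from camera data with a trained model rather than hand-set thresholds.

```python
# Hypothetical escalation logic for a drowsiness monitor.
# All thresholds and names are illustrative, not from any real system.

def choose_intervention(eyes_closed_seconds: float, blinks_per_minute: float) -> str:
    """Map simple fatigue signals to an escalating intervention."""
    if eyes_closed_seconds >= 2.0:
        return "pull_over"      # sustained eye closure: stop the car safely
    if blinks_per_minute > 30:
        return "loud_alert"     # excessive blinking: sound or music jolt
    if blinks_per_minute > 20:
        return "voice_warning"  # early fatigue: suggest the driver rest
    return "none"

print(choose_intervention(0.3, 35))  # prints "loud_alert"
```

The point of the sketch is the ordering: the most severe signal is checked first, so a dangerous state is never masked by a milder one.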

2. Identify Depression in Patients

As those who suffer from depression would attest, the symptoms are not always clear to patients themselves. In fact, some of us can go years suffering the debilitating impacts of mental illness and think it is just part of life. This is especially true for those who live alone and therefore do not have the feedback of another close person to rely on.

Emotional AI trained to detect signs of depression in the face could therefore play an important role in moving unaware patients toward a diagnosis. While protecting privacy is paramount in this case, adding this capability to smartphones or AI companions could greatly help improve mental health.

Our faces let out a lot more than we realize. In this case, they may be alerting those around us that we are suffering in silence.

3. Detect emotional stress in workplaces

Workplaces can be toxic environments. In such cases, the fear of retaliation may keep workers from being honest with their peers or supervisors. A narrow focus on production and performance can easily make employees feel like machines. Emotional AI systems embedded through cameras and computer screens could detect a generalized increase in stress by collecting facial data from multiple employees. This in turn could be sent over to responsible leaders or regulators for appropriate intervention.

Is this too invasive? Well, it depends on how it is implemented. Many tracking systems are already present in workplaces where employee activity on computers and phones is monitored 24/7. Certainly, this could only work in places where there is trust, transparency, and consent. It also depends on who has access to the data. An employee may not be comfortable with their bosses having this data but may agree to ceding it to an independent group of peers.

4. Help autistic children socialize in schools

The last example shows how emotional AI can play a role in education. Autistic children process and respond to social cues differently. In this case, emotional AI in devices or a robot could gently teach the child to both interpret and respond to interactions with less anxiety.

This is not an attempt to put therapists or special-needs workers out of a job. It is instead an important enhancement to their essential work. The systems can be there to augment, expand, and inform their work with each individual child. They can also provide a consistency that humans cannot always deliver. This is especially important for kids who tend to thrive in structured environments. As in the cases above, privacy and consent must be at the forefront.

These are just a few examples of the promise of emotional AI. As industries start discovering and perfecting emotional AI technology, more use cases will emerge.

How does reading these examples make you feel? Do they sound promising or threatening? What other examples can you think of?

A Beginner’s Guide to Emotional AI, Its Challenges and Opportunities

You walk into your living room, Alexa dims the lights, lowers the temperature, and says: “You look really sad today. Would you like me to play Adele for you?” This could be a reality in a few years. Are we prepared? This beginner’s guide to emotional AI will introduce the technology, its applications and ethical challenges.

We will explore both the opportunities and dangers of this emerging AI application. It is part of the broader discipline of affective computing, which uses different inputs from the human body (e.g., heartbeat, sweat, facial expression, speech, eye movement) to interpret, emulate, and predict emotion. For this piece, we’ll focus on the use of facial expressions to infer emotion.

According to Gartner, by 2022, 10% of smartphones will have affective computing capabilities. The latest Apple phones can already verify your identity through your face. The next step is detecting your mental state through that front camera. Estimates put the emotional AI market at around $36 billion within 5 years. Human emotion detection is no longer a sci-fi pipe dream but a reality poised to transform societies. Are we ready for it?

How does Emotional AI work?

Our beginner’s guide to emotional AI must start by explaining how it works. While this technology is relatively new, its foundations date back to the mid-nineteenth century, resting primarily on the idea that humans display universal facial cues for their emotions. Charles Darwin was one of the first to put forth this idea. A century later, American psychologist Paul Ekman further elaborated on it through extensive field studies. Recently, scholars have challenged this universality, and there is now no consensus on its validity. AI entrepreneurs bet that we can find universal patterns. Their endeavors are testing this theory in real time with machine learning.

The first step is “training” a computer to read emotions through a process of supervised learning. This entails feeding it pictures of people’s faces along with labels that define each person’s emotion. For example, one could feed it the picture of someone smiling with the label “happy.” For the learning process to be effective, thousands if not millions of these examples are needed.

The computer then uses machine learning algorithms to detect common patterns in the many examples of each emotion. This enables it to establish a general idea of what each emotion looks like in every face. Therefore, it is able to classify new cases, including faces it has never encountered before, with these emotions.
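To make this training-and-classification loop concrete, here is a deliberately tiny sketch. Real systems learn from millions of labeled face images with deep neural networks; this toy version reduces each "face" to a made-up two-number feature vector (mouth curvature, eye openness) and uses a nearest-centroid rule. The supervised-learning shape is the same, though: labeled examples go in, and out comes a classifier that can label faces it has never seen.

```python
# Toy supervised emotion classifier. The two-number "face" features and
# the nearest-centroid rule are illustrative stand-ins for real pipelines.
import math
from collections import defaultdict

def train(examples):
    """Average the feature vectors per label: one centroid per emotion."""
    sums = defaultdict(lambda: [0.0, 0.0])  # assumes 2 features per face
    counts = defaultdict(int)
    for features, label in examples:
        for i, value in enumerate(features):
            sums[label][i] += value
        counts[label] += 1
    return {label: [s / counts[label] for s in vec] for label, vec in sums.items()}

def classify(centroids, features):
    """Assign the emotion whose centroid is closest (Euclidean distance)."""
    return min(centroids, key=lambda label: math.dist(centroids[label], features))

# Labeled training data: ([mouth_curvature, eye_openness], emotion)
training = [
    ([0.9, 0.8], "happy"), ([0.8, 0.7], "happy"),
    ([-0.7, 0.3], "sad"), ([-0.9, 0.2], "sad"),
]
model = train(training)
print(classify(model, [0.85, 0.75]))  # a new, unseen "face" -> "happy"
```

Swapping the centroid rule for a neural network changes the math, not the workflow: the system still generalizes from labeled examples to new faces, which is why biased or staged training labels propagate directly into its predictions.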

By Prawny from Pixabay

Commercial and Public Applications

As you can imagine, there are manifold applications to this type of technology. For example, one of the greatest challenges in marketing is collecting accurate feedback from customers. Satisfaction surveys are few and far between and often inaccurate. Hence, companies could use emotional AI to capture instantaneous human reactions to an ad or experience not from a survey but from their facial reactions.

Affectiva, a leading company using this technology, already claims it can detect emotions from any face. It collected 10 million expressions from 87 countries, hand-labeled by crowd workers in Cairo. With its recent merger with Smart Eye, the company is poised to become the leader in in-cabin driver mental-state recognition for the automotive industry. This could be a valuable safety feature, detecting when a driver is under the influence, sleepy, or in emotional distress.

More controversial applications include surveillance, as in the case of China’s treatment of the Uighur population. Police departments could use it as a lie-detection device in interrogations. Governments could use it to track the general mood of the population by scanning faces in the public square. Finally, employers could use it as part of the interview process to gauge the mental state of an applicant.

Ethical Challenges for Emotional AI

No beginner’s guide to emotional AI would be complete without considering the ethics of its impact. Kate Crawford, a USC research professor, has sounded the alarm on emotional AI. In her recently published book and through a number of articles, she makes the case for regulating this technology. Her primary argument is that using facial recognition to detect emotions is based on shaky science. That is, the overriding premise that human emotion can be universally categorized through a set of facial expressions is faulty. It minimizes a plethora of cultural factors, lending itself to dangerous bias.

This is not just conjecture: a recent University of Maryland study detected an inherent bias that tends to assign more negative emotions to Black faces than to white faces. Crawford also notes that the machine learning process is questionable because it is based on pictures of humans emulating emotions. The examples come from people who were told to make a type of facial expression, as opposed to capturing a real reaction. This can lead to an artificial standard of what a facial representation of emotion should look like, rather than real emotional displays.

This is not limited to emotion detection. Instead, it is part of a broader pattern of error in facial recognition. In a 2018 paper, MIT researcher Joy Buolamwini analyzed disparities in the effectiveness of commercial facial recognition applications. She found that misclassification rates for dark-skinned women were up to 34%, compared to 0.8% for white males.

Photo by Azamat Zhanisov on Unsplash

The Sanctity of the Human Face

The face is the window to the human soul. It is the part of the body most identified with an individual’s unique identity. When we remember someone, it is their countenance that shows up in our mind. It is indeed the best indicator of our internal emotional state, which may often betray the very things we speak.

Until recently, interpreting the mystery of our faces was the job of humans and animals. What does it mean to now have machines that can intelligently decipher our inner states? This is certainly a new frontier in the human-AI interface, one we must tread carefully, if for no other reason than to respect the sanctity of the human soul. If left for commerce to decide, the process will most likely move faster than we as a society are comfortable with. That is where the calls for regulation are spot on.

Like every technology, the devil is in the details. It would be premature to outlaw the practice altogether, as the city of Seattle has done recently. We should, however, limit and monitor its uses, especially in areas where the risk of bias can have a severe adverse impact on the individual. We must also ponder whether we want to live in a society where even our facial expressions are subject to monitoring.

Netflix “Oxygen”: Life, Technology and Hope French Style


I am hesitant to watch French movies because the protagonist often dies at the end. Would this be another case of learning to love the main character only to see her die? Given the movie’s premise, it was worth the risk. Similar to Eden, Netflix’s Oxygen is a powerful exploration of the intersection of hope and technology.

It is uncommon to see a French movie reach the top charts with American audiences. Given our royal laziness, we tend to stay away from anything that has subtitles, preferring the glorified theatrics and simplistic plots of Hollywood. The French are too sophisticated for that. For them, movies are not entertainment but an art form.


Realizing I had never watched a French sci-fi thriller, I decided it was time to walk down that road. I am glad I did. The next day, I reflected on the movie’s plot after re-telling the whole story to my wife and daughter. Following the stimulating conversation that ensued, I realized there was enough material for an AI theology review.

Simple Plot of Human AI Partnership


You wake up and find yourself trapped in a capsule. You knock on the walls eventually activating an AI that informs you that you are in a cryogenic chamber. There is no way of knowing how you got there and how you can get out. You have 90 minutes before the oxygen runs out. The clock is ticking and you need to find a way to survive or simply accept your untimely death.


Slowly, the main character, Elizabeth, played by Mélanie Laurent, discovers puzzle pieces about who she is, why she is in the chamber, and ultimately what options she has. This journey is punctuated by painful discoveries and a few close calls, building the suspense throughout the feature.


Her only companion throughout this ordeal is the chamber’s AI voice assistant, Milo. She converses, argues, and pleads with him throughout as she struggles to find a way to survive. The movie revolves around their unexpected partnership, as the AI is her only way to learn about her past and communicate with the outside world. The contrast between his calm monotone voice and her desperate cries further energizes the movie’s dramatic effect.


In my view, the plot’s simple premise, along with Mélanie Laurent’s superb performance, makes the movie work even as it stays centered on one life in one location the whole time.


Spoiler Alert: The next sections give away key parts of the plot.

AI Ethics, Cryogenics and Space Travel


Oxygen is the type of film you wake up the next day still thinking about. That is, the impact is not fully felt until later. There is so much to process that its meaning does not become clear right away. The viewer is so involved in the main character’s ordeal that there is no time to reflect on the major ethical, philosophical, and theological issues that emerge in the story.


For example, once Elizabeth wakes up, one of the first things Milo offers her is sedatives. She refuses, preferring to be alert in her struggle for survival rather than calmly accepting a slow death. In one of the most dramatic scenes of the movie, Milo follows protocol to euthanize her as she reaches the end of her oxygen supply. In an ironic twist that Elizabeth picks up on, the AI asks her permission to administer sedatives but does not consult her about the ultimate decision to end her life. While a work of fiction, this may very well be a sign of things to come, as assisted suicide becomes legal in many parts of the world. Is it assisted suicide or humane end-of-life care?


In an interesting combination, Oxygen portrays cryogenics, cloning, and space travel as the ultimate solution for human survival. As humanity faced a growing host of incurable diseases, it sent a spaceship with thousands of clones in cryogenic chambers to find a cure on another planet. Elizabeth, as she learns midway, is a clone of a famous cryogenics scientist, carrying her memories and DNA. This certainly raises interesting questions about the singularity of the human soul. Can it really transfer to clones, or are they altogether different beings? Are memory and DNA the totality of our being, or are there transcendent parts impossible to replicate in a lab?

Photo by Joshua Earle on Unsplash

Co-Creating Hopeful Futures


In the end, human ingenuity prevails. Through a series of discoveries, Liz finds a way to survive: asking Milo to transfer the oxygen from other cryogenic chambers into hers. Her untimely awakening was the result of an asteroid collision that damaged a number of other chambers. After ensuring there were no other survivors in those chambers, she asks for the oxygen transfer.


To my surprise, the movie turns out to be a colossal affirmation of life. Where the flourishing of life is, there is also theology. While it has no religious content, the story shows how love for self and others can lead us to fight for life. Liz learns that her husband’s clone is on the spaceship, which gives her a reason to go on. This stays true even after she learns that she herself is a clone and in effect had never met or lived with him. The memory of their life together is enough to propel her forward, overcoming insurmountable odds to stay alive.


The story also illustrates the power of augmentation, how humans enabled through technology can find innovative solutions that extend life. In that sense, the movie aligns with a Christian Transhumanist view – one that sees humans co-creating hopeful futures with the divine.


Even if God is not present explicitly, the divine seems to whisper through Milo’s reassuring voice.

Netflix “Eden”: Human Frailty in a Technological Paradise

Recently, my 11-year-old daughter told me she wanted to watch anime. I have watched a few and was a bit concerned about her request. While I have come to really appreciate this art form, I feared that some thematic elements would not be appropriate for her 11-year-old mind. Yet, after watching the first episode of Netflix’s Eden, my concerns were eased, and I invited my two oldest (11 and 9) to watch it with me. With only 4 episodes of 25 minutes each, the series makes for a great way to spend a lazy Saturday afternoon. Thankfully, beyond being suitable, there was plenty for me to reflect on. In fact, captivating characters and an enchanting soundtrack moved me to tears, making Netflix’s Eden a delightful exploration of human frailty.

Here is my review of this heart-warming, beautifully written story.

Official Trailer

A Simple but Compelling Plot

Without giving away much, the story revolves around a simple structure. From the onset, we learn that no humans have lived on earth for 1,000 years. Self-sufficient robots have successfully turned a polluted wasteland into a lush oasis. The first scenes show groups of robots tending and harvesting an apple orchard.

Two of these robots stumble into an unlikely finding: a human child. Preserved in a cryogenic capsule, the toddler stumbles out and wails. The robots are confused and helpless as to how to respond. They quickly identify her as a human bio-form but cannot comprehend what her crying means.

After the initial shock, the toddler turns to the robots and calls them “papa” and “mama,” kicking off the story. The plot develops around the idea of two robots raising a human child on a human-less planet earth. We also learn that humans are perceived as a threat and must be surrendered to the authorities. In spite of their programming, the robots choose to hide and protect the girl.

Photo by Bruno Melo on Unsplash

Are Humans Essential for Life to Flourish on Earth?

Even with only 4 episodes, the anime packs quite a philosophical punch. From a theological perspective, the careful observer quickly sees why the show is named after the biblical garden. It is an allusion to the Genesis story where life begins on earth, yet with a twist. Now Eden is lush and thriving without human interference. It is as if God is recreating earth through technological means. This echoes Francis Bacon’s vision of technology as a way to mitigate the destructive effects of the fall.

Later we learn the planet had become uninhabitable. The robots’ creators envisioned a solution that entailed cryogenically freezing a number of humans while the robots worked to restore earth to its previous glory. The plan apparently works, except for the wrinkle of this girl waking up before her assigned time. Just like in the original story of Eden, humans come in to mess it up.

Embedded in this narrative is the provocative question of humanity’s ultimate utility for life on the planet. After all, if machines are able to manage the earth flawlessly, why introduce human error? Of course, the flip side of the question is the belief that machines themselves are free of error. Putting that aside, the question is still valid.

Photo by Alesia Kazantceva on Unsplash

Human Frailty and Renewal

Watching the story unfold, I could not help but reflect on Dr. Dorabantu’s past post on how AI could help us see the image of God in our vulnerability. That is, upon learning that robots could surpass us in rationality, we would have to attribute our uniqueness not to a narrow view of intelligence but to our ability to love. The anime gets at the heart of this question by using AI. It is in the robots’ journey to understand humanity’s essence that we learn what makes us unique in creation. In this way, the robots become mirrors that reflect our image back to us.

Another parallel here is with the biblical story of Noah. In a world destroyed by pollution and revived through technological ingenuity, the ark is no longer a boat but a capsule. Humans are preserved by pausing the aging process in their bodies, a clear nod to transhumanism. The combination of cryogenics and advanced AI can preserve human life on earth, albeit for a limited number of humans.

I left the story feeling grateful for our imperfect humanity. It is unfortunate that Christian theology, in an effort to paint a perfect God, has in turn made human vulnerability seem undesirable. Without denying our potential for harm and destruction, namely our sinfulness, it is time Christian theology embraced and celebrated human vulnerability as part of our Imago Dei. In this way, Netflix’s Eden helps put human frailty back in the conversation.

How AI and Faith Communities Can Empower Climate Resilience in Cities

AI technologies continue to empower humanity for good. In a previous blog, we explored how AI was empowering government agencies to fight deforestation in the Amazon. In this blog, we discuss the role AI is playing in building climate resilience in cities. We will also look at how faith communities can use AI-enabled microgrids to serve communities hit by climate disasters.

A Changing Climate Puts Cities in Harm’s Way

I recently listened to an insightful Technopolis podcast on how cities are preparing for an increased incidence of natural disasters. The episode discussed manifold ways city leaders are using technology to prepare, predict and mitigate the impact of climate events. This is a complex challenge that requires a combination of good governance, technological tools, and planning to tackle.

Climate resilience is not just about decreasing our carbon footprint; it is also about preparing for the increased incidence of extreme weather. Whether it is fires in California, typhoons in East Asia, or severe droughts in Northern Africa, the planet is in for a bumpy ride in the coming decades. These events will also exacerbate existing problems such as air pollution, water scarcity, and heat-related illnesses in urban areas. Governments and civil society groups need to start bracing for this reality by taking bold preventive steps in the present.

Cities illustrate the costs of delaying action on climate change by enshrining resource-intensive infrastructure and behaviors. The choices cities make today will determine their ability to handle climate change and reap the benefits of resource-efficient growth. Currently, 51% of the world’s population lives in cities and within a generation, an estimated two-thirds of the world’s population will live in cities. Hence, addressing cities’ vulnerabilities will be crucial for human life on the planet.

Photo by Karim MANJRA on Unsplash

AI and Climate Resilience

AI is a powerful tool to build climate resilience. We can use it to understand our current reality better, predict future weather events, create new products and services, and minimize human impact. By doing so, we can not only save and improve lives but also create a healthier world while also making the economy more efficient.

Deep learning, for example, enables better predictions and estimates of climate change effects than ever before. This information can be used to identify major vulnerabilities and risk zones. In the case of fires, for example, better prediction can not only identify risk areas but also help us understand how a fire will spread through them. As you can imagine, predicting the trajectory of a fire is a complex task that involves a plethora of variables related to wind, vegetation, humidity, and other factors.
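To make the idea concrete, here is a minimal, purely illustrative Python sketch of how a risk estimate might combine a few of these variables. The weights and the logistic form are assumptions for the sake of the example, not taken from any real fire model; an actual system would learn these relationships from historical fire and weather data.

```python
import math

# Hypothetical sketch: scoring fire-spread risk for one grid cell from
# three of the variables mentioned above. The weights are illustrative
# assumptions, not values from any real model.

def fire_risk(wind_kmh, dryness, humidity):
    """Return a risk score in (0, 1) via a logistic combination.

    wind_kmh: wind speed in km/h
    dryness:  vegetation dryness, 0 (wet) to 1 (tinder-dry)
    humidity: relative humidity, 0 to 1
    """
    # Illustrative weights: wind and dryness raise risk, humidity lowers it.
    z = 0.08 * wind_kmh + 3.0 * dryness - 4.0 * humidity - 2.0
    return 1 / (1 + math.exp(-z))

# A hot, windy, dry cell vs. a calm, humid one.
high = fire_risk(wind_kmh=50, dryness=0.9, humidity=0.1)
low = fire_risk(wind_kmh=5, dryness=0.2, humidity=0.9)
```

A real model would replace this hand-tuned formula with one trained on thousands of observed fires, but the core task is the same: mapping environmental variables to a probability of spread.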

The Gifts of Satellite Imagery

Another area in which AI is becoming essential is satellite imagery. Research led by Google, the Mila Institute, and the German Aerospace Center harnesses AI to develop and make sense of extensive datasets on Earth. This in turn empowers us to better understand climate change from a global perspective and to act accordingly.

Combining integrated global imagery with sophisticated modeling capabilities gives communities at risk precious advance warning to prepare. Governments can work with citizens living in these areas to strengthen their ability to mitigate extreme climate impacts. This will become particularly salient for coastal communities that are expected to see their shores recede in the coming decades.

This is just one example of how AI can play a prominent role in climate resilience. A recent paper titled “Tackling Climate Change with Machine Learning” revealed 13 areas where ML can be applied. They include but are not limited to energy consumption, CO2 removal, education, solar energy, engineering, and finance. Opportunities in these areas include the creation of new low-carbon materials, better monitoring of deforestation, and cleaner transport.

Photo by Biel Morro on Unsplash

Microgrids and Faith Communities

If climate change is the defining test of our generation, then technology alone will not be enough. As much as AI can help find solutions, the threat calls for collective action at unprecedented levels. This is both a challenge and an opportunity for faith communities seeking to re-imagine a future where their relevance surpasses the confines of their pews.

Thankfully, faith communities already play a crucial role in disaster relief. Their buildings often double as shelters and service centers when calamity strikes. Yet, as climate-related events become more frequent, these institutions must expand the range of services they offer to affected populations.

One example is the creation of AI-managed microgrids. These are small, easily controllable electricity systems consisting of one or more generating units connected to nearby users and operated locally. Microgrids contain all the elements of a complex energy system, but because they maintain a balance between production and consumption, they can operate independently of the main grid. These systems also work well with renewable energy sources, further decreasing our reliance on fossil fuels.
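For readers who like to see the mechanics, here is a minimal sketch of the balancing logic such a controller performs at each time step. The numbers and the helper function are hypothetical, meant only to illustrate how a battery lets a microgrid absorb the gap between production and consumption before touching the main grid.

```python
# Hypothetical sketch of one time step of microgrid balancing: match
# local solar production to consumption with a battery, and touch the
# main grid only when the battery is full or empty. Illustrative
# numbers, not from any real installation.

def step(production_kwh, consumption_kwh, battery_kwh, capacity_kwh):
    """Advance one time step; return (new_battery, grid_import, grid_export)."""
    surplus = production_kwh - consumption_kwh
    battery = battery_kwh + surplus
    grid_import = grid_export = 0.0
    if battery > capacity_kwh:      # battery full: export the excess
        grid_export = battery - capacity_kwh
        battery = capacity_kwh
    elif battery < 0:               # battery empty: import the shortfall
        grid_import = -battery
        battery = 0.0
    return battery, grid_import, grid_export

# Sunny afternoon: production exceeds demand, the battery fills,
# and the surplus can be sold back to the grid.
battery, imp, exp = step(production_kwh=12, consumption_kwh=4,
                         battery_kwh=8, capacity_kwh=10)
```

An AI-managed controller would wrap this kind of balancing in forecasts of weather and demand, deciding in advance when to store, sell, or conserve energy.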

When climate disaster strikes, one of the first things to go is electricity. What if houses of worship, equipped with microgrids, became the places to go for those without power? When the grid fails, houses of worship could become the lifeline for a neighborhood, helping impacted populations communicate with family, charge their phones, and find shelter from cold nights. Furthermore, they could sell their excess energy units on the market, creating new sources of funding for their spiritual mission.

Microgrids in churches, synagogues, and mosques – that’s an idea the world can believe in. It is also a great step towards climate resilience.

Klara and the Sun: Robotic Redemption for a Dystopian World

In the previous blog, we discussed how Klara, the AI main character of Kazuo Ishiguro’s latest novel, develops a religious devotion to the Sun. In this second and final installment of the book review, I explore how Klara impacts the people around her. Klara and the Sun shows how they become better humans through interacting with her in a dystopian world.

Photo by Randy Jacob on Unsplash

Gene Inequality

Because humans are only supporting characters in this novel, we only learn about their world later in the book. The author does not give a year but places the story in the near future. Society is sharply divided along class and racial lines. Gene editing has become a reality, and parents can now opt to have children born with the traits that will help them succeed in life.

This stark choice does not only affect the family’s fate but also re-orients the way society allocates opportunities. Colleges no longer accept average kids, meaning that a natural birth puts a child at a disadvantage. Yet this choice comes at a cost. Experimenting with genes also means a higher mortality rate for children and adolescents. That is the case for the family that purchases Klara; they have lost their first daughter and now their second one is sick.

These gene-edited children are educated at home, remotely, by specialized tutors. This turned out to be an ironic detail in a pandemic year when most children in the world learned through Zoom. They socialize through prearranged gatherings in homes. The well-to-do live in gated communities, supposedly because the world has become unsafe. This is just one of the many aspects of the dystopian world of Klara and the Sun.

Photo by Andy Kelly on Unsplash

AI Companionship and Remembrance

A secondary plot-line in the novel is the relationship between the teenage Josie, Klara’s owner, and her friend Rick, who is not gene-edited. The teens are coming of age in this tumultuous period where the viability of their relationship is in question. The adults discuss whether they should even be together in a society that delineates separate paths assigned at birth. One has safe passage into college and stable jobs while the other is shut out from opportunity by the sheer fact that their parents did not interfere with nature.

In this world, droids are common companions to wealthy children. Since many don’t go to school anymore, the droid plays the role of nanny, friend, guardian, and at times tutor. Even so, there is resistance to them in the public square, where resentful segments of society view their presence with contempt. They represent a symbol of status for the affluent and a menace to the working class. Even so, their owners often treat them as merchandise. At best they are seen as servants and at worst as disposable toys that can be tossed around for amusement.

The novel also hints at the use of AI to extend the life of loved ones. AI remembrance, shall we say. That is, programming AI droids to take the place of a deceased human. This seems like a natural complement in a world where parents have no guarantee that their gene-edited children will live to adulthood. For some, the AI companion could live out the years their children were denied.

Klara The Therapist

In the world described above, the AF (artificial friend) plays a pivotal role in family life not just for the children that they accompany but also for the parents. In effect, because of her robotic impartiality, Klara serves as a safe confidant to Josie, Rick, her mother, and her dad. The novel includes intimate one-on-one conversations where Klara offers a fresh take on their troubles. Her gentle and unpretentious perspective prods them to do what is right even when it is hard. In this way, she also plays a moral role, reminding humans of their best instincts.

Yet, humans are not the only ones impacted. Klara also grows and matures through her interaction with them. Navigating the tensions, joys, and sorrows of human relationships, she uncovers the many layers of human emotion. Though lacking tear ducts and a beating heart, she is not a prisoner to detached rationality. She suffers with the pain of the humans around her, cares deeply about their well-being, and is willing to sacrifice her own future to ensure they have one. In short, she is programmed to serve them not as a dutiful pet but as a caring friend. In doing so, she embodies the best of human empathy.

The reader joins Klara in her path to maturity and it is a delightful ride. As she observes and learns about the people around her, the human readers get a mirror to themselves. We see our struggles, our pettiness, our hopes and expectations reflected in this rich story. For the ones that read with an open heart, the book also offers an opportunity for transformation and growth.

Final Reflections

In an insightful series of 4 blogs, Dr. Dorabantu argues that future general AI will be hyper-rational, forcing us to re-imagine the essence of who we are. Yet Ishiguro presents an alternative hypothesis. What if, instead, AI technology led to the development of empathetic servant companions? Could a machine express both rational and emotional intelligence?

Emotionally intelligent AI would help us redefine the image of God not by contrast but by reinforcement. That is, instead of simply demonstrating our limitations in rationality, it could expand our potential for empathy. The novel shows how AI can act as a therapist or spiritual guide. Through empathetic dialogue, it can help us find the best of our moral senses. In short, it can help us love better.

Finally, the book raises important ethical questions about gene editing’s promises and dangers. What would it look like to live in a world where “designer babies” are commonplace? Could gene editing combined with AI lead to the harrowing scenario where droids serve as complete replacements for humans? While Ishiguro’s future is fictitious, he speculates on technologies that already exist. Gene editing and narrow AI are a reality, while general AI is plausibly within reach.

We do well to seriously consider their impact before a small group in Silicon Valley decides how to maximize profit from them. This may be the greatest lesson we can take from Klara and the Sun and its dystopian world.

Klara and the Sun: Robotic Faith for an Unbelieving Humanity

In his first novel since winning the Nobel Prize in Literature, Kazuo Ishiguro explores the world through the lens of an AI droid. The novel retains many of the features that made his previous bestsellers famous, such as concentrating on confined spaces, building up emotional tension, and fragmented storytelling. All of this gains a fresh relevance when applied to the sci-fi genre, and more specifically to the relationship between humanity and sentient technology. I’ll do my best to keep you from any spoilers, as I suspect this will become a motion picture in the near future. Suffice it to say that Klara and the Sun is a compelling statement for robotic faith. How? Read on.

Introducing the Artificial Friend

Structured into 6 long sections, the book starts by introducing us to Klara. She is an AF (Artificial Friend), a humanoid equipped with artificial intelligence and designed commercially to be a human companion. At least, this is what we can deduce from the first pages, as no clear explanation is given. In fact, this is a key trait of the story: we learn about the world along with Klara. She is the one and only narrator throughout the novel.

Klara is shaped like a short woman with brown hair. The story starts in the store where she is on display for sale. There she interacts with customers, other AFs, and “Manager”, the human responsible for the store. All humans are referred to by their capitalized job or function. Otherwise, they are classified by their appearance or something peculiar to them.

The first 70 pages occur inside the store where she is on display. We learn about her personality, the fact that she is very observant, and what her peer AFs think of her. At times, she is placed near the front window of the store. That is when we get a glimpse of the outside world. This is probably where Ishiguro’s brilliance shines through, as he carefully creates a worldview that is unique, compelling, and humane, yet in many ways true to a robotic personality. The reader slowly grows fond of her as she immerses us in her whimsical perspective on the outside world. To her, a busy city street is a rich mixture of sights, with details we most often take for granted.

We also learn how Klara processes emotions and even has a will of her own. At times she mentions feeling fear. She is also very sensitive to conflict, trying to avoid it at all costs. With that said, she is no pushover. Once, she sabotages a customer’s attempt to buy her because she had committed herself to another prospect. She also seems to stand out compared to the other AFs, drawing both contempt and admiration from them.

Book cover from Amazon.com

The World Through Klara’s Eyes

She is sensitive, captivating, and always curious. Her observations are unique and honest. She brings together the innocence of a child with the mathematical ability of a scientist. This often leads to some quirky observations as she watches the world unfold in front of her. In one instance, she describes a woman as “food-blender-shaped.”

Klara also has an acute ability to detect complex emotions in faces. In this way, she is able to peer through the crevices of the body and see the soul. In one instance, she spots how a child smiles at her AF even as her eyes portray anger. When observing a fight, she can see the intensity of anger in the men’s faces, describing them as horrid shapes, as if they were no longer human. When seeing a couple embrace, she captures both the joy and the pain of that moment and struggles to understand how it could be so.

This uncanny ability to read human emotion becomes crucial when Klara settles into her permanent home. Being a quiet observer, she is able to understand the subtle unspoken dynamics of family relationships. In some instances, she can see the love between mother and daughter even as they argue vehemently. She can see through the housekeeper’s hostility towards her, reading it not as a threat but as concern. In this way, her view of humans tends to err on the side of charity rather than malice.

Though she is a keen observer of humans, it is her relationship with the sun that drives the narrative forward. From the first pages, Klara notices the presence of sun rays in most situations. She will usually start her description of a scene by emphasizing how the sun rays were entering a room. We quickly learn that the sun is her main source of energy and nourishment. Hence, it is not surprising that it looms so large in her worldview.

Yet Ishiguro takes this relationship further. Like ancient humans, Klara develops a religious-like devotion to the sun. The star is not only her source of nourishment but becomes a source of meaning and a god-like figure that she looks to when in fear or in doubt.

That is when the novel gets theologically interesting.

Robotic Faith and Hope

As the title suggests, the sun plays a central role in Klara’s universe. This is true not only physiologically, as she runs on solar energy, but also spiritually. Her religious relationship with the sun starts through observation. Already understanding the power of the sun to give her energy, she witnesses how the sun restores a beggar and his dog back to health. Witnessing this event becomes Klara’s epiphany of the healing powers of the sun. She holds that memory dear, and it becomes an anchor of hope for her later in the book when she realizes that her owner is seriously ill.

Klara develops a deep devotion toward the sun and like the ancients, she starts praying to it. The narrative moves forward when Klara experiences theophanies worthy of human awe. Her pure faith is so compelling that the reader cannot help but hope along with her that what she believes is true. In this way, Klara points us back to the numinous.

Her innocent and captivating faith has an impact on the human characters of the novel. For some reason, they start hoping for the best even when there is no reason to do so. In spite of overwhelming odds, they start seeing a light at the end of the tunnel. Some of them, in this case the men in the novel, willingly participate in her religious rites without understanding the rationale behind her actions. Yet, unlike human believers who often like to proselytize, she keeps her faith secret from all. In fact, secrecy is part of her religious relationship with the sun. In this way, she invites humans to transcend their reason and step into a child-like faith.

This reminds me of a previous blog where I explore this idea of pure faith and robots. But I digress.

Conclusion

I hope this first part of the review sparks your interest in reading the novel. It beautifully explores how AI can help us find faith again. Certainly, we are still decades away from the kind of AI that Ishiguro portrays in this book. Yet, like most works of science fiction, it helps us extrapolate present directions so we can reflect on our future possibilities.

In contrast to the dominant narrative of “robots trying to kill us,” the author opts for one that highlights the possibility that they can reflect the best in us. As they do so, they can change us into better human beings rather than allowing us to devolve into our worst vices. Consequently, Ishiguro gives us a vivid picture of how technology can work towards human flourishing.

In the next blog, I will explore the human world in which Klara lives. There are some interesting warnings and rich reflections in the dystopian situation described in the book. While our exposure to it is limited (this is perhaps the one part I wish the author had expanded a bit more), we do get enough to ponder the impact of emerging technologies on our society. This is especially salient for a digital-native generation that is learning to send tweets before their first kiss.

Vulnerable like God: Perfect Machines and Imperfect Humans

This four-part series started with the suggestion that AI can be of real help to theologians, in their attempt to better understand what makes humans distinctive and in the image of God. We have since noted how different machine intelligence is from human intelligence, and how alien-like an intelligent robot could be ‘on the inside’, in spite of its humanlike outward behavior.

For theological anthropology, the main takeaway is that intelligence – understood as rationality and problem-solving – is not the defining feature of human nature. We’ve long been the most intelligent and capable creature in town, but that might soon change, with the emergence of AI. What makes us special and in the image of God is thus not some intellectual capacity (in theology, this is known as the substantive interpretation), nor something that we can do on God’s behalf (the functional interpretation), because AI could soon surpass us in both respects.

The interpretation of the imago Dei that seems to account best for the scenario of human-level AI is the relational one. According to it, the image of God is our special I-Thou relationship with God, the fact that we can be an authentic Thou, capable of receiving God’s love and responding back. We exist only because God calls us into existence. Our relationship with God is therefore the deepest foundation of our ontology. Furthermore, we are deeply relational beings. Our growth and fulfillment can only be realized in authentic personal relationships with other human beings and with God.

AI and Authentic Relationality

It is not surprising that the key to human distinctiveness is profoundly relational. Alan Turing tapped right into this intuition when he designed his eponymous test for AI. Turing’s test is, in fact, a measurement of AI’s ability to relate like us. Unsurprisingly, the most advanced AIs still struggle when it comes to simulating relationships, and none has yet passed the Turing test.

But even if a program someday convincingly relates to humans, will that be an authentic relationship? We’ve already seen that human-level AI will be anything but humanlike ‘on the inside.’ Intelligent robots might become capable of speaking and acting like us, but they will be completely different from us in terms of their internal motivation or meaning systems. What kind of relationship could there be between us and them, when we’d have so little in common?

We long for other humans precisely because we are not self-sufficient. Hence, we seek others because we want to discover them, and our own selves, through relationships. We fall in love because we are not completely rational. Human-level AI will be the opposite of that: self-sufficient, perfectly rational, and with a quasi-complete knowledge of itself.

The Quirkiness of Human Intelligence

Our limitations are instrumental for the kind of relationships that we have with each other. An argument can thus be made that a significant degree of cognitive and physical vulnerability is required for authentic relationality to be possible. There can be no authentic relationship without the two parties intentionally making themselves vulnerable to each other, opening to one another outside any transactional logic.

Photo by Duy Pham on Unsplash

A hyper-rational being would likely have very serious difficulty engaging fully in relationships and making itself totally vulnerable to the loved other. It surely does not sound very smart.

Nevertheless, we humans do this tirelessly, often at high cost, perhaps precisely because we are not as intelligent and goal-oriented as AI. Although this appears illogical, it is such experiences that give meaning and fulfillment to our lives.

From an evolutionary perspective, it is puzzling that our species evolved to be this way. Evolution promotes organisms that are better at adapting to the challenges of their environment, thus at solving practical survival and reproduction problems. It is therefore unsurprising that intelligence-as-problem-solving is a common feature of evolved organisms, and this is precisely the direction in which AI seems to develop.

What is strange in the vast space of possible intelligences is our quirky type of intelligence, one heavily optimized for relationship, marked by a bizarre thirst for meaning, and plagued by a surprising degree of irrationality. In the previous post I called out the strangeness of strong AI, but it is we who seem to be the strange ones. However, it is specifically this kind of intellectual imperfection, or vulnerability, that enables us to dwell in the sort of Goldilocks zone of intelligence where personal relationships and the image of God are possible.

Vulnerability, God, Humans and Robots

Photo by Jordan Whitt on Unsplash

If vulnerability is so important for the image of God as relationship, does this imply that God too is vulnerable? Indeed, that seems to be the conclusion, and it’s not surprising at all, especially when we think of Christ. Through God’s incarnation, suffering, and voluntary death, a deeply vulnerable side of the divine has been revealed to us. God is not an indifferent creator of the world, nor a dispassionate, almighty, all-intelligent ruler. God cares deeply for creation, to the extent of committing to the supreme self-sacrifice to redeem it (Jn. 3:16).

This means that we are most like God not when we are at our smartest or strongest, but when we engage in this kind of hyper-empathetic, though not entirely logical, behavior.

Compared to AI, we might look stupid, irrational, and outdated, but it is paradoxically due to these limitations that we are able to cultivate our divine likeness through loving, authentic, personal relationships. If looking at AI teaches theologians anything, it is that our limitations are just as important as our capabilities. We are vulnerable, just as our God has been revealed to be vulnerable. Being like God does not necessarily mean being more intelligent, especially when intelligence is seen as rationality or problem solving.

Christ – whether considered historically or symbolically – shows that what we value most about human nature are traits like empathy, meekness, and forgiveness, which are eminently relational qualities. Behind such qualities are ways of thinking rooted more in the irrational than in the rational parts of our minds. We should then wholeheartedly join the apostle Paul in “boast[ing] all the more gladly about [our] weaknesses […] for when [we] are weak, then [we] are strong” (2 Cor. 12:9-10).