AI for Good in the Majority World: Data Science Nigeria

Data Science Nigeria has an ambitious goal: to train one million Nigerian data scientists by the end of the decade. Yet it does not end there: the non-profit also aims to make Africa's most populous nation a leading player in the growing global AI industry. DSN is a shining example of the growing trend of AI for good in the majority world.

AI holds great potential to solve intractable socioeconomic problems. It is not a silver-bullet solution, but a great enabler that can speed up, optimize, and greatly improve decision making. Hence, it is not surprising to see an AI for good trend burgeoning across the majority world. Yet what makes DSN stand apart is that it goes a step further: it seeks not only to solve social problems but also to create economic opportunity that would not otherwise exist.

It is this abundance mentality that will best align AI with the flourishing of life.

Reframing Who AI’s Customers Are

I learned about DSN while attending a recent Pew Research webinar on AI ethics. One of its panelists was Dr. Uyi Stewart, DSN board member and IBM distinguished engineer, whose perspective stood out. While others discussed AI ethics in abstract terms, he proposed that AI should be about solving problems for 75% of the world’s population. That is, AI is not limited to solving complex business problems for the world’s largest corporations. Instead, it can and should be part of daily life for those living in remote villages and cramped urban centers in the Global South.

Photo by Nqobile Vundla on Unsplash

How so? He went on to provide an example. The world’s poor today face life-and-death choices around the scarcity of resources. Farmers must contend with the fluctuations of a warming climate. Urban dwellers must make key decisions with very limited financial resources. Most of them already own a phone. Hence, he believes industry should deliver decision-support solutions through these devices so they can make better choices. These are not ways to optimize profit; for some, they can represent the difference between life and death.

Where most see a social problem, Dr. Stewart envisions a potent market opportunity.

From Scarcity to Abundance

Our economic system is mostly based on the concept of scarcity: the idea that resources are finite and therefore must be allocated efficiently. It is a scarcity mentality that drives the market to increase prices for commodities even when they are abundant. Moreover, companies and governments may limit production of a product simply to simulate scarcity and thereby achieve higher profit margins.

The digital economy has turned the concept of scarcity on its head. When knowledge is digitized and storage is cheap, we move from finite resources to limitless solutions. Even so, these solutions must first be optimized, which is why AI becomes crucial in the digital economy. The promise of AI for good in the majority world is unleashing this wealth of opportunity in places where physical resources are scarce. DSN is leading the way by empowering young Nigerians to become data scientists. With this knowledge, they can unlock hidden opportunities in the communities where they live.

By investing in the Nigerian youth, this organization is tapping into the majority world’s greatest resource. This is what AI for good is all about: technology for the flourishing of humanity in places of scarcity.

3 Concerning Ways Israel Used AI Warfare in Gaza

We now have evidence that Israel used AI warfare in the conflict with Hamas. Shifting from our facial recognition posts, we turn to this controversial topic: in this post, we’ll briefly cover the three main ways Israel used warfare AI and the ethical concerns they raise.

Semi-Autonomous Machine Gun Robot

Photos surfaced on the Internet showing a machine-gun-mounted robot patrolling the Gaza wall. The intent is to deter Hamas militants from crossing the border and to deal with civil unrest. To be clear, these robots are not fully autonomous: a remote human controller must still make the ultimate striking decision.

Yet they are more autonomous than drones and other remote-controlled weapons. They can respond to fire or use non-lethal force if challenged by enemy forces. They are also able to maneuver independently around obstacles.

The Israel Defense Forces (IDF) seeks to replace soldier patrols with semi-autonomous technologies like this. From its side, the benefits are clear: less risk to soldiers, cheaper patrolling power, and more efficient control of the border. It is part of a broader strategy of creating a smart border wall.

Less clear is how these Jaguars (as they are called) will do a better job of distinguishing enemy combatants from civilians.

US military Robot similar to the one used in Gaza (from Wikipedia)

Anti-Artillery Systems

Hamas’s primary tactic for attacking Israel is short-range rockets – lots of them. In the last conflict alone, Hamas launched 4,000 rockets against Israeli targets. Destroying them mid-air was a crucial but extremely difficult and costly task.

For decades, the IDF has improved its anti-ballistic capability to neutralize this threat. The most recent addition to this defensive arsenal is using AI to better predict incoming rockets’ trajectories. By collecting a wealth of data from actual launches, the IDF can train better models. This allows it to use anti-missile projectiles sparingly, leaving those headed for uninhabited areas alone.

This strategy not only improves accuracy, which now stands at 90%, but also saves the IDF money. At $50K a pop, anti-missile projectiles must be used wisely. AI warfare is helping Israel save resources and, hopefully, some lives as well.

Target Intelligence

The wealth of data was useful in other areas as well. The IDF used intelligence to improve its targeted strikes in Gaza. Using 3-D models of the Gaza territory, it could focus on hitting weapons depots and missile bases. Furthermore, AI technology was employed for other facets of warfare. For example, military strategists used AI to ascertain the best routes for ground-force invasions.

This was only possible because of the rich data coming from satellite imaging, surveillance cameras, and intercepted communications. As this plethora of information flowed in, pattern-finding algorithms were essential in translating it into actionable intelligence.

Employing AI technology clearly gave Israel a considerable military advantage. The casualty numbers on each side speak for themselves: 253 on the Palestinian side versus 12 on the Israeli side. AI warfare was a winner for Israel.

With that said, wars are no longer won on the battlefield alone. Can AI do more than give one side an advantage? Could it actually diminish the human cost of war?

Photo by Clay Banks on Unsplash

Ethical Concerns with AI Warfare

As I was writing this blog, I reached out to Dr. Josh Smith, Christian ethicist and author of Robotic Persons, for his reaction. He sent me the following (edited for length) statement:

“The greatest concern I have is that these defensive systems, which most AI weapons systems are, is that they become a necessary evil. Because every nation is a ‘threat’ we must have weapon systems to defend our liberty, and so on. The economic cost is high. Many children in the world die from a lack of clean water and pneumonia. Yet, we invest billions into AI for the sake of security.”

Dr. Josh Smith

I could not agree more. As the IDF’s case illustrates, AI can do a lot to confer military advantage in a conflict. AI warfare is not just about killer robots; it encompasses an ecosystem of applications that can improve effectiveness and efficiency. Yet is that really the best use of AI?

Finally, as we have stated before, AI is best when employed for the flourishing of life. Can that happen in warfare? The jury is still out, but it is hard to reconcile the flourishing of life with an activity focused on death and destruction.

Making a Difference: Facial Recognition Regulation Efforts in the US

As our review of Coded Bias demonstrates, concerns over facial recognition are mounting. In this blog, I’ll outline current facial recognition regulation efforts while also pointing you to resources. While this post focuses on the United States, the topic has global relevance. If you know of efforts in your region, drop us a note. Informed and engaged citizens are the best weapon to curb FR abuse.

National Level Efforts to Curb Public Use

Bipartisan consensus is emerging on the need to curb big tech power. However, there are many differences in how to address it. The most relevant piece of legislation moving through Congress is the Facial Recognition and Biometric Technology Moratorium Act. If approved, this bill would:

  • Ban the use of facial recognition technology by federal entities, which can only be lifted with an act of Congress
  • Condition federal grant funding to state and local entities on moratoria on the use of facial recognition and biometric technology
  • Stop the use of federal dollars for biometric surveillance systems
  • Bar biometric data collected in violation of the Act from use in any judicial proceedings
  • Empower individuals whose biometric data is used in violation of the Act to sue, and allow for enforcement by state attorneys general

Beyond the issues outlined above, it would allow states and localities to enact their own laws regarding the use of facial recognition and biometric technologies.

Photo by Tingey Injury Law Firm on Unsplash

What about Private Use?

A glaring omission from the bill above is that it does nothing to curb private companies’ use of facial recognition. While stopping police and judicial use of FR is a step in the right direction, the biggest users of this technology are not in government.

On that front, other bills have emerged but have not gone far. One of them is the National Biometric Information Privacy Act of 2020, cosponsored by Senators Jeff Merkley (D-Ore.) and Bernie Sanders (I-Vt.). This bill would make it illegal for corporations to use facial recognition to identify people without their consent. Moreover, they would have to prove they are using it for a “valid business purpose.” It is modeled after a recent Illinois law that spurred lawsuits against companies like Facebook and Clearview.

Another promising effort is Republican Senator Jerry Moran’s Consumer Data Privacy and Security Act of 2021. This bill seeks to establish comprehensive data privacy regulation that would include facial recognition. In short, it would create a federal standard for how companies use personal data and give consumers more control over what is done with their data. On the other side of the aisle, Senator Gillibrand introduced a bill that would create a federal agency to regulate data use in the nation.

Cities have also entered the battle to regulate facial recognition. In 2020, the city of Portland passed a sweeping ban on FR that covers not only public use but also business use in places of “public accommodation.” The state of Washington, on the other hand, passed a landmark law that curbs but still allows the use of the technology. Not surprisingly, that effort gained support from Seattle’s corporate giants Amazon and Microsoft. Whether that’s a good or bad sign, I’ll let you decide.

What can you do?

What is the best approach? There is no consensus on how to tackle this problem, but leaving it for the market to decide is certainly not a viable option. While consent is key, there are differences over which uses of FR are legitimate. For some, an outright ban is the best option. Others believe it should be highly regulated but still applied in areas like policing. In fact, a majority of Americans favor law enforcement’s use of FR.

The first step is informed engagement. I encourage you to reach out to your Senator and Representative and express your concerns over facial recognition. These days, even getting FR regulation on legislators’ radar is a step in the right direction.

Look out for local efforts in your area that are addressing this issue. If none are present, maybe it is your opportunity to be a catalyst for action. Though they receive the least media coverage, local laws are often the ones that most impact our lives. Is your police department using FR? If so, what safeguards does it have to avoid racial and gender bias?

Finally, discuss the issue with friends, family, and your social network. One good step is sharing this blog (shameless plug 🙂) with others on social media. You can do that using the buttons below. Regardless of where you stand on this issue, it is imperative that we widen the conversation on facial recognition regulation.

How Coded Bias Makes a Powerful Case for Algorithmic Justice

What do you do when your computer can’t recognize your face? In a previous blog, we explored the potential applications of emotional AI. At the heart of that technology is the ability to recognize faces. Facial recognition is gaining widespread attention for its hidden dangers. This short review of Coded Bias summarizes the story of female researchers who opened the black box of major applications that use FR. What they found is a warning to all of us, making Coded Bias a bold call for algorithmic justice.

Official Trailer

Coded Bias Short Review: Exposing the Inaccuracies of Facial Recognition

The secret is out: FR algorithms are a lot better at recognizing white male faces than those of any other group. The difference is not trivial. Joy Buolamwini, MIT researcher and the film’s main character, found that dark-skinned women were misclassified up to 35% of the time, compared to less than 1% for white male faces! Error rates of this magnitude can have life-altering consequences when used in policing, judicial decisions, or surveillance applications.

Screen Capture

It all started when Joy was looking for facial recognition software to recognize her face for an art project. She had to put on a white mask in order to be detected by the camera. This initial experience led her down a new path of research: if she was experiencing this problem, who else was, and how might it be impacting others who looked like her? Eventually, she stumbled upon Cathy O’Neil’s Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy, discovering the world of algorithmic activism already underway.

The documentary weaves in multiple cases where FR misclassification is having a devastating impact on people’s lives. Unfortunately, the burden falls mostly on the poor and people of color. From an apartment complex in Brooklyn to the streets of London to a school district in Houston, local activists are mobilizing political energy to expose the downsides of FR. In doing so, Netflix’s Coded Bias not only shows the problem but also sheds light on the growing movement that arose to correct it. In that, we can find hope.

If this wasn’t clear before, here it is: watch the documentary Coded Bias multiple times. This one is worth your time.

The Call for Algorithmic Justice

The fight for equality in the 21st century will be centered on algorithmic justice. What does that mean? Algorithms are fast becoming embedded in growing areas of decision-making. From movie recommendations to hiring, cute apps to judicial decisions, self-driving cars to who gets to rent a house, algorithms are influencing and dictating decisions.

Yet they are only as good as the data used to train them. If that data contains present inequities or is biased toward ruling majorities, the resulting algorithms will inevitably and disproportionately impact minorities. Hence, the fight for algorithmic justice starts with regulating and monitoring their results. The current lack of transparency in the process is no longer acceptable. While corporations may not intend to discriminate, their neglect of oversight makes them culpable.
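The point that algorithms are only as good as their training data can be shown with a toy sketch. Everything here is invented for illustration: the "model" is just a single decision threshold fit mostly to one group's examples, and the numbers are made up. Even in this tiny setting, the under-represented group gets a worse error rate.

```python
# Toy demonstration: a model trained mostly on one group performs
# worse on the under-represented group. Each example is a single
# hypothetical feature value paired with a yes/no label.

MAJORITY = [(0.2, "no"), (0.3, "no"), (0.7, "yes"), (0.8, "yes")]
MINORITY = [(0.4, "no"), (0.5, "yes")]  # same labels, shifted feature values

def fit_threshold(examples):
    """Fit a threshold halfway between the mean 'no' and mean 'yes' values."""
    nos = [x for x, label in examples if label == "no"]
    yes = [x for x, label in examples if label == "yes"]
    return (sum(nos) / len(nos) + sum(yes) / len(yes)) / 2

def accuracy(threshold, examples):
    """Fraction of examples the threshold classifies correctly."""
    preds = ["yes" if x > threshold else "no" for x, _ in examples]
    return sum(p == label for p, (_, label) in zip(preds, examples)) / len(examples)

# Train almost entirely on the majority group:
t = fit_threshold(MAJORITY + MINORITY[:1])
print(accuracy(t, MAJORITY), accuracy(t, MINORITY))  # majority scores higher
```

The skew is not malicious; it falls out of the data mix, which is exactly why monitoring results by group matters.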

Because of its ubiquitous impact, the struggle for algorithmic justice is not just the domain of data scientists and lawmakers. Instead, this is a fight that belongs to all of us. In the next blog, I’ll go over recent efforts to regulate facial recognition. This marks the next step in Coded Bias’s call for algorithmic justice.

Stay tuned.

4 Surprising Ways Emotional AI is Making Life Better

It’s been a long night, and you have driven for over 12 hours. The exhaustion is such that you are starting to black out. As your eyes close and your head drops, the car slows down, moves to the shoulder, and stops. You wake up and realize your car saved your life. This is just one of many examples of how emotional AI can do good.

It doesn’t take much to see the ethical challenges of computer emotion recognition. Worst-case scenarios of control and abuse quickly come to mind. In this blog, I will explore the potential of emotional AI for human flourishing through four examples. We need to examine these technologies with a holistic view that weighs their benefits against their risks. Hence, here are four ways affective computing could make life better.

1. Alert distracted drivers

Detecting signs of fatigue or alcohol intoxication early enough can be the difference between life and death. This applies not only to the driver but also to passengers and occupants of nearby vehicles. Emotional AI can detect bleary eyes, excessive blinking, and other facial signs that the driver is losing focus. Once this mental state is detected early, the system can intervene through many means.

For example, it could alert the driver that they are too tired to drive. It could lower the windows or turn on loud music to jolt the driver into focus. More extreme interventions could include shocking the driver’s hands through the steering wheel, or slowing and stopping the car in a safe area.
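The escalation logic described above can be sketched as a simple policy. This is a hypothetical sketch, not any automaker's actual system: the thresholds, the 0-to-1 drowsiness score, and the intervention names are all invented for illustration. A real system would fuse camera signals (blink rate, eye closure, head pose) into such a score before choosing a response.

```python
# Hypothetical escalation policy for a driver-monitoring system.
# All thresholds and intervention names are invented for this example.

def choose_intervention(drowsiness: float) -> str:
    """Map a 0.0-1.0 drowsiness score to an escalating response."""
    if drowsiness < 0.3:
        return "none"            # driver appears alert
    if drowsiness < 0.6:
        return "audio_alert"     # chime plus spoken warning
    if drowsiness < 0.85:
        return "physical_alert"  # lower windows, vibrate the wheel
    return "safe_stop"           # slow down and pull over

print(choose_intervention(0.9))  # prints "safe_stop"
```

The design choice here is graduated response: mild signals get mild nudges, and only a sustained, high-confidence reading triggers taking control of the vehicle.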

As an additional benefit, this technology could also detect other volatile mental states such as anger, mania, and euphoria. This could lead to interventions like changing temperature, music, or even locking the car to keep the driver inside. In effect, this would not only reduce car accidents but could also diminish episodes of road rage.

2. Identify Depression in Patients

As those who suffer from depression would attest, the symptoms are not always clear to patients themselves. In fact, some of us can go years suffering the debilitating impacts of mental illness and think it is just part of life. This is especially true for those who live alone and therefore do not have the feedback of another close person to rely on.

Emotional AI trained to detect signs of depression in the face could therefore play an important role in moving clueless patients into awareness. While protecting privacy, in this case, is paramount, adding this to smartphones or AI companions could greatly help improve mental health.

Our faces let out a lot more than we realize. In this case, they may be alerting those around us that we are suffering in silence.

3. Detect emotional stress in workplaces

Workplaces can be toxic environments. In such cases, the fear of retaliation may keep workers from being honest with their peers or supervisors. A narrow focus on production and performance can easily make employees feel like machines. Emotional AI systems embedded through cameras and computer screens could detect a generalized increase in stress by collecting facial data from multiple employees. This in turn could be sent over to responsible leaders or regulators for appropriate intervention.

Is this too invasive? Well, it depends on how it is implemented. Many tracking systems are already present in workplaces, where employee activity on computers and phones is monitored 24/7. Certainly, this could only work in places where there is trust, transparency, and consent. It also depends on who has access to the data. An employee may not be comfortable with their boss having this data but may agree to cede it to an independent group of peers.

4. Help autistic children socialize in schools

The last example shows how emotional AI can play a role in education. Autistic children process and respond to social cues differently. In this case, emotional AI in devices or a robot could gently teach a child to both interpret and respond to interactions with less anxiety.

This is not an attempt to put therapists or special-needs workers out of a job. It is instead an important enhancement to their essential work. These systems can augment, expand, and inform their work with each individual child. They can also provide a consistency that humans sometimes fail to provide. This is especially important for kids who tend to thrive in structured environments. As in the cases above, privacy and consent must be at the forefront.

These are just a few examples of the promise of emotional AI. As industries start discovering and perfecting emotional AI technology, more use cases will emerge.

How does reading these examples make you feel? Do they sound promising or threatening? What other examples can you think of?

A Beginner’s Guide to Emotional AI, Its Challenges and Opportunities

You walk into your living room, Alexa dims the lights, lowers the temperature, and says: “You look really sad today. Would you like me to play Adele for you?” This could be a reality in a few years. Are we prepared? This beginner’s guide to emotional AI will introduce the technology, its applications and ethical challenges.

We will explore both the opportunities and dangers of this emerging AI application. Emotional AI is part of the broader discipline of affective computing, which uses different inputs from the human body (e.g., heartbeat, sweat, facial expression, speech, eye movement) to interpret, emulate, and predict emotion. For this piece, we’ll focus on the use of facial expressions to infer emotion.

According to Gartner, by 2022, 10% of smartphones will have affective computing capabilities. The latest Apple phones can already detect your identity through your face. The next step is detecting your mental state through that front camera. Estimates put the emotional AI market at around $36 billion within five years. Human emotion detection is no longer a sci-fi pipe dream but a reality poised to transform societies. Are we ready for it?

How does Emotional AI work?

Our beginner’s guide to emotional AI must start by explaining how it works. While this technology is relatively new, its foundation dates back to the mid-nineteenth century and rests primarily on the idea that humans display universal facial cues for their emotions. Charles Darwin was one of the first to put forth this idea. A century later, American psychologist Paul Ekman further elaborated on it through extensive field studies. Recently, scholars have challenged this universality, and there is now no consensus on its validity. AI entrepreneurs bet that we can find universal patterns. Their endeavors are testing this theory in real time with machine learning.

The first step is “training” a computer to read emotions through a process of supervised learning. This entails feeding it pictures of people’s faces along with labels that define each person’s emotion. For example, one could feed in the picture of someone smiling with the label “happy.” For the learning process to be effective, thousands if not millions of these examples are needed.

The computer then uses machine learning algorithms to detect common patterns across the many examples of each emotion. This enables it to establish a general idea of what each emotion looks like on any face. It is then able to classify new cases, including faces it has never encountered before, with these emotions.
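The train-then-classify loop just described can be sketched in a few lines. This is a toy illustration, not any vendor's pipeline: real systems learn from millions of labeled images, while here each "face" is reduced to two hypothetical numeric features (mouth curvature and eye openness), with all values invented, and the "learning" is a simple nearest-centroid rule.

```python
# Toy sketch of supervised emotion classification.
# Each face is a made-up feature pair: (mouth_curvature, eye_openness).

from math import dist

TRAINING_DATA = [
    ((0.9, 0.8), "happy"),
    ((0.8, 0.7), "happy"),
    ((0.1, 0.3), "sad"),
    ((0.2, 0.2), "sad"),
]

def train(examples):
    """Average the feature vectors for each label (a nearest-centroid model)."""
    sums, counts = {}, {}
    for features, label in examples:
        s = sums.setdefault(label, [0.0] * len(features))
        for i, value in enumerate(features):
            s[i] += value
        counts[label] = counts.get(label, 0) + 1
    return {label: tuple(v / counts[label] for v in s) for label, s in sums.items()}

def classify(model, features):
    """Label a new, unseen face by its closest learned centroid."""
    return min(model, key=lambda label: dist(model[label], features))

model = train(TRAINING_DATA)
print(classify(model, (0.85, 0.75)))  # a face the model never saw; prints "happy"
```

The structure mirrors the paragraph above: patterns are extracted from labeled examples, and new faces are matched against the generalization, which is also why biased or staged training examples distort every later prediction.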

By Prawny from Pixabay

Commercial and Public Applications

As you can imagine, there are manifold applications for this type of technology. For example, one of the greatest challenges in marketing is collecting accurate feedback from customers. Satisfaction surveys are few and far between and often inaccurate. Hence, companies could use emotional AI to capture instantaneous human reactions to an ad or experience, not from a survey but from customers’ facial expressions.

Affectiva, a leading company in this space, already claims it can detect emotions from any face. It has collected 10 million expressions from 87 countries, hand-labeled by crowd workers in Cairo. With its recent merger with Smart Eye, the company is poised to become the leader in in-cabin driver mental-state recognition for the automotive industry. This could be a valuable safety feature, detecting when a driver is under the influence, sleepy, or in emotional distress.

More controversial applications include using it for surveillance, as in the case of China’s treatment of the Uighur population. Police departments could use it as a lie-detection device in interrogations. Governments could use it to track the general mood of the population by scanning faces in the public square. Finally, employers could use it in the interview process to measure the mental state of an applicant.

Ethical Challenges for Emotional AI

No beginner’s guide to emotional AI would be complete without considering the ethics of its impact. Kate Crawford, a USC research professor, has sounded the alarm on emotional AI. In her recently published book and in a number of articles, she makes the case for regulating this technology. Her primary argument is that using facial recognition to detect emotions rests on shaky science. That is, the overriding premise that human emotion can be universally categorized through a set of facial expressions is faulty. It minimizes a plethora of cultural factors, lending itself to dangerous bias.

This is not just conjecture: a recent University of Maryland study detected an inherent bias that tends to assign black faces more negative emotions than white faces. Crawford also notes that the machine learning process is questionable because it is based on pictures of humans emulating emotions. The examples come from people who were told to make a particular facial expression, as opposed to captures of real reactions. This can lead to an artificial standard of what a facial representation of emotion should look like, rather than real emotional displays.

This is not limited to emotion detection; it is part of a broader pattern of error in facial recognition. In a 2018 paper, MIT researcher Joy Buolamwini analyzed disparities in the effectiveness of commercial facial recognition applications. She found that misclassification rates for dark-skinned women were up to 34%, compared to 0.8% for white males.

Photo by Azamat Zhanisov on Unsplash

The Sanctity of the Human Face

The face is the window to the human soul. It is the part of the body most identified with an individual’s unique identity. When we remember someone, it is their countenance that shows up in our mind. It is indeed the best indicator of our internal emotional state, which may often betray the very words we speak.

Until recently, interpreting the mystery of our faces was the job of humans and animals. What does it mean to now have machines that can intelligently decipher our inner states? This is certainly a new frontier in the human-AI interface, one we must tread carefully if for no other reason than to respect the sanctity of the human soul. If left for commerce to decide, the process will most likely move faster than we as a society are comfortable with. That is where the calls for regulation are spot on.

Like every technology, the devil is in the details. It would be premature to outlaw the practice altogether, as the city of Portland has done recently. We should, however, limit and monitor its uses, especially in areas where the risk of bias can have a severe adverse impact on individuals. We must also ponder whether we want to live in a society where even our facial expressions are subject to monitoring.

Netflix “Oxygen”: Life, Technology and Hope French Style

I am hesitant to watch French movies, as the protagonist often dies at the end. Would this be another case of learning to love the main character only to see her die? Given the movie’s premise, it was worth the risk. Similar to Eden, Netflix’s Oxygen is a powerful exploration of the intersection of hope and technology.

It is uncommon to see a French movie top the charts with American audiences. Given our royal laziness, we tend to stay away from anything with subtitles, preferring the glorified, theatrical, simplistic plots of Hollywood. The French are too sophisticated for that. For them, movies are not entertainment but an art form.

Realizing I had never watched a French sci-fi thriller, I decided it was time to walk down that road. I am glad I did. The next day, I reflected on the movie’s plot after re-telling the whole story to my wife and daughter. Following the thought-provoking conversation that ensued, I realized there was enough material for an AI theology review.

Simple Plot of Human AI Partnership

You wake up and find yourself trapped in a capsule. You knock on the walls, eventually activating an AI that informs you that you are in a cryogenic chamber. There is no way of knowing how you got there or how you can get out. You have 90 minutes before the oxygen runs out. The clock is ticking, and you need to find a way to survive or simply accept your untimely death.

Slowly the main character, Elizabeth (played by Mélanie Laurent), discovers puzzle pieces about who she is, why she is in the chamber, and ultimately what options she has. This journey is punctuated by painful discoveries and a few close calls, building suspense throughout the feature.

Her only companion throughout this ordeal is the chamber’s AI voice assistant, Milo. She converses, argues, and pleads with him throughout as she struggles to find a way to survive. The movie revolves around their unexpected partnership, as the AI is her only way to learn about her past and communicate with the outside world. The contrast between his calm monotone voice and her desperate cries further energizes the movie’s dramatic effect.

In my view, the plot’s simple premise, along with Mélanie Laurent’s superb performance, makes the movie work even as it stays centered on one life and one location the whole time.

Spoiler Alert: The next sections give away key parts of the plot.

AI Ethics, Cryogenics and Space Travel

Oxygen is the type of film you wake up the next day still thinking about. That is, the impact is not clearly felt until later. There is so much to process that its meaning does not become clear right away. The viewer is so involved in the main character’s ordeal that there is no time to reflect on the major ethical, philosophical, and theological issues that emerge in the story.

For example, once Elizabeth wakes up, one of the first things Milo offers her is sedatives. She refuses, preferring to be alert in her struggle for survival rather than calmly accepting a slow death. In one of the most dramatic scenes of the movie, Milo follows protocol to euthanize her as she reaches the end of her oxygen supply. In an ironic twist that Elizabeth picks up on, the AI asks her permission for sedatives but does not consult her about the ultimate decision to end her life. While a work of fiction, this may very well be a sign of things to come as assisted suicide becomes legal in many parts of the world. Is it assisted suicide or humane end-of-life care?

In an interesting combination, Oxygen portrays cryogenics, cloning, and space travel as the ultimate solution for human survival. As humanity faces a growing host of incurable diseases, it sends a spaceship with thousands of clones in cryogenic chambers to find a cure on another planet. Elizabeth, as she learns midway, is a clone of a famous cryogenics scientist, carrying her memories and DNA. This certainly raises interesting questions about the singularity of the human soul. Can it really transfer to clones, or are they altogether different beings? Are memory and DNA the totality of our being, or are there transcendent parts impossible to replicate in a lab?

Photo by Joshua Earle on Unsplash

Co-Creating Hopeful Futures


In the end, human ingenuity prevails. Through a series of discoveries, Liz finds a way to survive. It entails asking Milo to transfer the oxygen from other cryogenic chambers into hers. Her untimely awakening was the result of an asteroid collision that affected a number of other chambers. After ensuring there were no other survivors in these damaged chambers, she asks for the oxygen transfer.


To my surprise, the movie turns out to be a colossal affirmation of life. Where the flourishing of life is, there is also theology. While having no religious content, the story shows how love for self and others can lead us to fight for life. Liz learns that her husband’s clone is in the spaceship, which gives her a reason to go on. This stays true even after she learns that she herself is a clone and in effect has never met or lived with him. The memory of their life together is enough to propel her forward, overcoming insurmountable odds to stay alive.


The story also illustrates the power of augmentation, how humans enabled through technology can find innovative solutions that extend life. In that sense, the movie aligns with a Christian Transhumanist view – one that sees humans co-creating hopeful futures with the divine.


Even if God is not present explicitly, the divine seems to whisper through Milo’s reassuring voice.

Netflix “Eden”: Human Frailty in a Technological Paradise

Recently, my 11-year-old daughter told me she wanted to watch anime. I have watched a few and was a bit concerned about her request. While I have come to really appreciate this art form, I feared that some thematic elements would not be appropriate for her 11-year-old mind. Yet, after watching the first episode of Netflix’s Eden, my concerns were allayed and I invited my two oldest (11 and 9) to watch it with me. With only four episodes of 25 minutes each, the series makes for a great way to spend a lazy Saturday afternoon. Thankfully, beyond being age-appropriate, there was plenty for me to reflect on. In fact, captivating characters and an enchanting soundtrack moved me to tears, making Netflix’s Eden a delightful exploration of human frailty.

Here is my review of this heart-warming, beautifully written story.

Official Trailer

A Simple but Compelling Plot

Without giving away much, the story revolves around a simple structure. From the onset, we learn that no human has lived on Earth for 1,000 years. Self-sufficient robots have successfully turned a polluted wasteland into a lush oasis. The first scenes show groups of robots tending and harvesting an apple orchard.

Two of these robots stumble upon an unlikely finding: a human child. Preserved in a cryogenic capsule, the toddler tumbles out and wails. The robots are confused and helpless as to how to respond. They quickly identify her as a human bio-form but cannot comprehend what her crying means.

After the initial shock, the toddler turns to the robots and calls them “papa” and “mama,” kicking off the story. The plot develops around the idea of two robots raising a human child on a humanless planet Earth. We also learn that humans are perceived as a threat and are to be surrendered to the authorities. In spite of their programming, the robots choose to hide and protect the girl.

Photo by Bruno Melo on Unsplash

Are Humans Essential for Life to Flourish on Earth?

Even with only four episodes, the anime packs quite a philosophical punch. From a theological perspective, the careful observer quickly sees why the show is named after the biblical garden. It is an allusion to the Genesis story where life begins on earth, yet it comes with a twist. Now Eden is lush and thriving without human interference. It is as if God is recreating earth through technological means. This echoes Francis Bacon’s vision of technology as a way to mitigate the destructive effects of the fall.

Later we learn the planet had become uninhabitable. The robots’ creators envisioned a solution that entailed cryogenically freezing a number of humans while the robots worked to restore earth to its previous glory. The plan apparently works, except for the wrinkle of this girl waking up before her assigned time. Just as in the original story of Eden, humans come to mess it up.

Embedded in this narrative is the provocative question of humanity’s ultimate utility for life on the planet. After all, if machines are able to manage the earth flawlessly, why introduce human error? Of course, the flip side of the question is the belief that machines themselves are free of error. Putting that aside, the question is still valid.

Photo by Alesia Kazantceva on Unsplash

Human Frailty and Renewal

Watching the story unfold, I could not help but reflect on Dr. Dorabantu’s past post on how AI could help us see the image of God in our vulnerability. That is, upon learning that robots could surpass us in rationality, we would have to attribute our uniqueness not to a narrow view of intelligence but to our ability to love. The anime seems to get at the heart of this question, and it gets there by using AI. It is in the robots’ journey to understand humanity’s essence that we learn what makes us unique in creation. In this way, the robots become mirrors that reflect our image back to us.

Another parallel here is with the biblical story of Noah. In a world destroyed by pollution and revived through technological ingenuity, the ark is no longer a boat but a capsule. Humans are preserved by pausing the aging process in their bodies, a clear nod to Transhumanism. The combination of cryogenics and advanced AI can preserve human life on earth, albeit for a limited number of humans.

I left the story feeling grateful for our imperfect humanity. It is unfortunate that Christian theology, in an effort to paint a perfect God, has in turn made human vulnerability seem undesirable. Without denying our potential for harm and destruction, namely our sinfulness, it is time Christian theology embraced and celebrated human vulnerability as part of our Imago Dei. In this way, Netflix’s Eden helps put human frailty back in the conversation.

How AI and Faith Communities Can Empower Climate Resilience in Cities

AI technologies continue to empower humanity for good. In a previous blog, we explored how AI was empowering government agencies to fight deforestation in the Amazon. In this blog, we discuss the role AI is playing in building climate resilience in cities. We will also look at how faith communities can use AI-enabled microgrids to serve communities hit by climate disasters.

A Changing Climate Puts Cities in Harm’s Way

I recently listened to an insightful Technopolis podcast on how cities are preparing for an increased incidence of natural disasters. The episode discussed manifold ways city leaders are using technology to prepare, predict and mitigate the impact of climate events. This is a complex challenge that requires a combination of good governance, technological tools, and planning to tackle.

Climate resilience is not just about decreasing carbon footprints; it is also about preparing for the increased incidence of extreme weather. Whether it is fires in California, typhoons in East Asia, or severe droughts in Northern Africa, the planet is in for a bumpy ride in the coming decades. These events will also exacerbate existing problems such as air pollution, water scarcity, and heat-related illness in urban areas. Governments and civil society groups need to start bracing for this reality by taking bold preventive steps in the present.

Cities illustrate the costs of delaying action on climate change by enshrining resource-intensive infrastructure and behaviors. The choices cities make today will determine their ability to handle climate change and reap the benefits of resource-efficient growth. Currently, 51% of the world’s population lives in cities and within a generation, an estimated two-thirds of the world’s population will live in cities. Hence, addressing cities’ vulnerabilities will be crucial for human life on the planet.

Photo by Karim MANJRA on Unsplash

AI and Climate Resilience

AI is a powerful tool for building climate resilience. We can use it to understand our current reality better, predict future weather events, create new products and services, and minimize human impact. By doing so, we can not only save and improve lives but also create a healthier world while making the economy more efficient.

Deep learning, for example, enables better predictions and estimates of climate change than ever before. This information can be used to identify major vulnerabilities and risk zones. In the case of fires, for instance, better prediction can not only identify risk areas but also help us understand how a fire will spread through those areas. As you can imagine, predicting the trajectory of a fire is a complex task that involves a plethora of variables related to wind, vegetation, humidity, and other factors.
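To make the idea concrete, here is a minimal sketch of how a model could learn a fire-risk score from variables like wind, vegetation dryness, and humidity. Everything here is hypothetical: the features, the synthetic labels, and the tiny hand-rolled logistic regression stand in for the far richer data and deep learning models real systems use.

```python
import math
import random

random.seed(0)

def synth_sample():
    # Hypothetical features, each scaled 0-1: wind speed, vegetation dryness, humidity.
    wind, dryness, humidity = (random.random() for _ in range(3))
    # Toy ground truth: risk rises with wind and dryness, falls with humidity.
    label = 1 if (wind + dryness - humidity) > 0.5 else 0
    return [wind, dryness, humidity], label

data = [synth_sample() for _ in range(2000)]

# Minimal logistic regression trained by batch gradient descent.
w, b, lr = [0.0, 0.0, 0.0], 0.0, 0.5
for _ in range(300):
    gw, gb = [0.0, 0.0, 0.0], 0.0
    for x, y in data:
        p = 1 / (1 + math.exp(-(sum(wi * xi for wi, xi in zip(w, x)) + b)))
        for i in range(3):
            gw[i] += (p - y) * x[i]
        gb += p - y
    for i in range(3):
        w[i] -= lr * gw[i] / len(data)
    b -= lr * gb / len(data)

def risk(wind, dryness, humidity):
    """Learned probability-like fire-risk score in (0, 1)."""
    z = w[0] * wind + w[1] * dryness + w[2] * humidity + b
    return 1 / (1 + math.exp(-z))

# Dry, windy conditions should score higher than humid, calm ones.
print(risk(0.9, 0.9, 0.1), risk(0.1, 0.1, 0.9))
```

The point of the sketch is only the shape of the problem: observable conditions go in, a learned risk score comes out, and that score can then be mapped over a region to flag vulnerable zones.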

The Gifts of Satellite Imagery

Another area in which AI is becoming essential is satellite imagery. Research led by Google, the Mila Institute, and the German Aerospace Center harnesses AI to develop and make sense of extensive datasets on Earth. This in turn empowers us to better understand climate change from a global perspective and to act accordingly.

Combining integrated global imagery with sophisticated modeling capabilities gives communities at risk precious advance warning to prepare. Governments can work with citizens living in these areas to strengthen their ability to mitigate extreme climate impacts. This will become particularly salient in coastal communities, which are expected to see their shores recede in the coming decades.

This is just one example of how AI can play a prominent role in climate resilience. A recent paper titled “Tackling Climate Change with Machine Learning” identified 13 areas where ML can be applied. They include but are not limited to energy consumption, CO2 removal, education, solar energy, engineering, and finance. Opportunities in these areas include the creation of new low-carbon materials, better monitoring of deforestation, and cleaner transport.

Photo by Biel Morro on Unsplash

Microgrids and Faith Communities

If climate change is the defining test of our generation, then technology alone will not be enough. As much as AI can help find solutions, the threat calls for collective action at unprecedented levels. This is both a challenge and an opportunity for faith communities seeking to re-imagine a future where their relevance surpasses the confines of their pews.

Thankfully, faith communities already play a crucial role in disaster relief. Their buildings often double as shelters and service centers when calamity strikes. Yet, as climate-related events become more frequent, these institutions must expand the range of services they offer to affected populations.

One example is the creation of AI-managed microgrids. These are small, easily controllable electricity systems consisting of one or more generating units connected to nearby users and operated locally. Microgrids contain all the elements of a complex energy system, but because they maintain a balance between production and consumption, they can operate independently of the grid. These systems work well with renewable energy sources, further decreasing our reliance on fossil fuels.
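That balancing act can be sketched in a few lines. The numbers below are made up for illustration (a real controller would use demand forecasts and much richer models): solar generation charges a battery by day, the battery covers demand at night, and the main grid is touched only when storage runs dry.

```python
# Toy 24-hour microgrid energy balance. Solar curve, demand profile, and
# battery size are hypothetical round numbers chosen only to show the idea.
BATTERY_CAPACITY_KWH = 10.0

def simulate_day(battery_kwh=5.0):
    """Return (end-of-day battery charge, total kWh drawn from the main grid)."""
    grid_draw = 0.0
    for hour in range(24):
        solar = max(0.0, 4.0 - abs(hour - 12) * 0.7)  # generation peaks at noon
        demand = 1.5 if 8 <= hour < 22 else 0.5       # daytime load is higher
        battery_kwh += solar - demand                 # battery absorbs the imbalance
        if battery_kwh < 0:
            grid_draw += -battery_kwh                 # storage empty: fall back on the grid
            battery_kwh = 0.0
        elif battery_kwh > BATTERY_CAPACITY_KWH:
            battery_kwh = BATTERY_CAPACITY_KWH        # surplus could be sold back instead
    return battery_kwh, grid_draw

charge, from_grid = simulate_day()
print(f"end-of-day charge: {charge:.1f} kWh, grid draw: {from_grid:.1f} kWh")
```

With these toy numbers the battery alone carries the site through the day; start it empty (`simulate_day(0.0)`) and the simulation shows the grid making up the shortfall, which is exactly the dependency a well-sized microgrid is meant to remove.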

When climate disaster strikes, one of the first things to go is electricity. What if houses of worship, equipped with microgrids, became the places to go for those without power? When the grid fails, houses of worship could become the lifeline of a neighborhood, helping impacted populations communicate with family, charge their phones, and find shelter from cold nights. Furthermore, they could sell their excess energy on the market, finding new sources of funding for their spiritual mission.

Microgrids in churches, synagogues, and mosques – that’s an idea the world can believe in. It is also a great step towards climate resilience.

Klara and the Sun: Robotic Redemption for a Dystopian World

In the previous blog, we discussed how Klara, the AI main character of Kazuo Ishiguro’s latest novel, develops a religious devotion to the Sun. In this second and final installment of the book review, I explore how Klara impacts the people around her. Klara and the Sun shows how they become better humans for interacting with her in a dystopian world.

Photo by Randy Jacob on Unsplash

Gene Inequality

Because humans are only supporting characters in this novel, we learn about their world only later in the book. The author does not specify a year but sets the story in a near future. Society is sharply divided along class and racial lines. Gene editing has become a reality, and parents can now opt to have children born with the traits that will help them succeed in life.

This stark choice does not only affect a family’s fate but re-orients the way society allocates opportunities. Colleges no longer accept average kids, meaning that a natural birth path puts a child at a disadvantage. Yet, this choice comes at a cost. Experimenting with genes also means a higher mortality rate for children and adolescents. That is the case for the family that purchases Klara: they have lost their first daughter, and now their second one is sick.

These gene-edited children are educated remotely at home by specialized tutors, an ironic detail to read in a pandemic year when most children in the world learned through Zoom. They socialize through prearranged gatherings in homes. Those who are well-to-do live in gated communities, supposedly because the world has become unsafe. This is just one of the many aspects of the dystopian world of Klara and the Sun.

Photo by Andy Kelly on Unsplash

AI Companionship and Remembrance

A secondary plot line in the novel is the relationship between the teenage Josie, Klara’s owner, and her friend Rick, who is not gene-edited. The teens are coming of age in this tumultuous period, and the viability of their relationship is in question. The adults discuss whether they should even be together in a society that delineates separate paths assigned at birth. One has safe passage into college and stable jobs while the other is shut out from opportunity by the sheer fact that their parents did not interfere with nature.

In this world, droids are common companions for wealthy children. Since many no longer go to school, the droid plays the role of nanny, friend, guardian, and at times tutor. Even so, there is resistance to them in the public square, where resentful segments of society view their presence with contempt. They represent a status symbol for the affluent and a menace to the working class. Their owners, too, often treat them as merchandise: at best they are seen as servants and at worst as disposable toys that can be tossed around for amusement.

The novel also hints at the use of AI to extend the life of loved ones. AI remembrance, shall we say. That is, programming AI droids to take the place of a deceased human. This seems like a natural complement in a world where parents have no guarantee that their gene-edited children will live to adulthood. For some, the AI companion could live out the years their children were denied.

Klara The Therapist

In the world described above, the AF (artificial friend) plays a pivotal role in family life, not just for the children they accompany but also for the parents. In effect, because of her robotic impartiality, Klara serves as a safe confidant to Josie, Rick, her mother, and her dad. The novel includes intimate one-on-one conversations where Klara offers a fresh take on their troubles. Her gentle and unpretentious perspective prods them to do what is right even when it is hard. In this way, she also plays a moral role, reminding humans of their best instincts.

Yet, humans are not the only ones impacted. Klara also grows and matures through her interactions with them. Navigating the tensions, joys, and sorrows of human relationships, she uncovers the many layers of human emotion. Though lacking tear ducts and a beating heart, she is not a prisoner of detached rationality. She suffers with the pain of the humans around her, cares deeply about their well-being, and is willing to sacrifice her own future to ensure they have one. In short, she is programmed to serve them not as a dutiful pet but as a caring friend. In doing so, she embodies the best of human empathy.

The reader joins Klara on her path to maturity, and it is a delightful ride. As she observes and learns about the people around her, human readers get a mirror held up to themselves. We see our struggles, our pettiness, our hopes, and our expectations reflected in this rich story. For those who read with an open heart, the book also offers an opportunity for transformation and growth.

Final Reflections

In an insightful series of four blog posts, Dr. Dorabantu argues that future general AI will be hyper-rational, forcing us to re-imagine the essence of who we are. Yet, Ishiguro presents an alternative hypothesis. What if, instead, AI technology led to the development of empathetic servant companions? Could a machine express both rational and emotional intelligence?

Emotionally intelligent AI would help us redefine the image of God not by contrast but by reinforcement. That is, instead of simply exposing our limitations in rationality, it could expand our potential for empathy. The novel shows how AI can act as a therapist or spiritual guide. Through empathetic dialogue, it can help us find the best of our moral senses. In short, it can help us love better.

Finally, the book raises important ethical questions about gene editing’s promises and dangers. What would it look like to live in a world where “designer babies” are commonplace? Could gene editing combined with AI lead to the harrowing scenario where droids serve as complete replacements for humans? While Ishiguro’s future is fictitious, he speculates on technologies that already exist. Gene editing and narrow AI are a reality, while general AI is plausibly within reach.

We would do well to seriously consider their impact before a small group in Silicon Valley decides how to maximize profit from them. This may be the greatest lesson we can take from Klara and the Sun and its dystopian world.