Future Scenario: A Divided World With Delayed Climate Change

In the last few months, we have been busy working on a book project describing plausible futures at the intersection of AI and faith. After some extensive brainstorming, the scenarios are finally starting to come alive (need a refresher on the project? Click here). After selecting our macro drivers, we have settled on the foundations for the 4 scenarios that form the backdrop for the stories to be written. Here is what they look like:

Each quadrant represents the combination of drivers that undergirds that scenario. For example, in the Q1 scenario, we have National (divided geopolitical system) and Green (lower climate change impact). In short, this represents a future where the effects of climate change are delayed or lower than expected but where cooperation among nations is worse than it is today. How can such a combination even be possible?

Now that the parameters are set, the fun part of describing the scenarios can start. In this exercise, we try to imagine a future that fits within these parameters. For Q1, we imagine the global order deteriorating as nations turn inward. On the climate side, we see a better or delayed outcome, even if that seems counter-intuitive. How can a divided world somehow escape the worst of climate change? These difficult questions create the tensions from which creativity can flow.

What does that look like? Before a full description of the National Green scenario, let’s kick it off with a poem that evokes the feeling of this world.

Repent Before it’s Too Late

A world that hesitates
like a wave in the acidifying sea
Tossed by unharnessed winds
Shifting from action to inaction

Division cuts deep
Why can’t we come together?
The arguing continues
Polar caps whiter

Build up, tear down
Hot summers linger
“Each to its own” rules the day
Parochial thinking 
Global shrinking

AI advances by competition
Slowed by economic stagnation
Focusing on security and independence
It scarcely brings real transformation

National colors of allegiance
Taint Green Xianity 
into a shade of brown
of scattered complacency

Wedded to their turfs
the church keeps Christ divided
Petty speculations
Keep clergy from coordination

Humanity stands at the valley of decision
Will it choose life
Or deadly, slow oblivion?


Gradual change can come too little, too late. This scenario is based mostly on a continuation of the present. The 2020s witnessed gradual climate decay with growing local and regional challenges. The geopolitical order drags along as the US and China become the major poles of influence, followed by the EU. Polarization within countries increases as political regimes oscillate between democracy and authoritarianism. This vacillation stifles international coordination on climate, leading to increased regionalization. In 2028, the Paris Agreement collapses and yearly climate conferences stop as the US, China, India, and Russia pull out of the conversations.

By 2030, climate change is undeniable, but the lack of international cooperation on how to address it leads to scattered and uncoordinated efforts. Powerful nations think in terms of “energy independence,” which ensures that fossil fuels remain an option for many, even if they do not play the same role as in the past century. Mother Nature seems patient with humanity, giving gentle reminders to mend its ways in the form of increased floods, droughts, and melting ice caps. Yet the gradual impact is scarcely enough to jolt humanity out of its enchanted oblivion. Affected areas in the developing world lack the clout and the resources to catch the world’s attention. The overall sense is that if we could just figure out how to work together, maybe we could avoid the worst.

As the 2040s begin, a growing portion of the population no longer believes in stopping climate change. The hope now is simply to stem and adapt to the gradual but decisively transforming effects of a warming planet. In 2045, as the temperature rises 2 degrees Celsius, well beyond UN goals, humanity hits a decision point. It must repent before it is too late. Yet can it come together as a unified front? Can humanity heed nature’s call to repentance, or will it be betrayed by half-measures that can no longer prevent the worst? Will it turn a corner or slowly descend into a Malthusian trap?

Nationalism leads to competition rather than cooperation. Tech development accelerates due to a tech “arms race” as nations strive for energy independence and superiority in AI, supercomputers, weapons, and communications. While generalized war is absent in this period, there is a growing build-up of arms. This overall climate of mistrust guides and hamstrings national investments in tech. Tech development and adoption are characterized more by competition and parallel acceleration than by shared research or resources. Cybersecurity receives more emphasis here than in other scenarios.

AI adoption and development are uneven as international cooperation wanes. For example, AI justice slows down as interest in this area is overshadowed by security concerns. Digital assistants take hold but increasingly become an artifact of developed nations, with little use in the global south. Deepfakes and text generation develop more toward political propaganda within regions. The Metaverse mirrors the trend toward nationalism, becoming regionalized rather than the global commons it promised to be. AI/VR advances take hold in Western versions of the metaverse and make some progress in China. The rest of the world is mostly cut off. Green AI advances within the confines of research institutions and government-funded labs in Western nations. The benefits do not trickle down to the global south.


Christianity mirrors many realities of this divided world. The Catholic church becomes more traditionalist and more distributed, and therefore less tied to Rome. Even so, the Vatican emerges as a haven for cooperation in a regionalizing world. A string of progressive popes speaks up for the environment, following Pope Francis’s lead. Yet strong conservative factions, more in line with Pope Benedict, hold increasing power both in the West and in the global south. Green consciousness is present but not a forefront preoccupation for traditionalists, who remain caught up in theological and liturgical debates.

Mainline Protestants double down on the green aspects of Christianity, but without the evangelistic component. The focus is more on education than on pushing Christians to action. Their influence wanes as their decline in the West continues. They are also unable to gain a foothold in the global south, being no match for evangelicals, who by now are well established even as their growth slows down.

Evangelical Christianity in the US takes up the green consciousness, wedded to a national push for energy independence. Good eco-theology comes in through the back door, so to speak, marshaled to support US national interests. The broader culture’s green consciousness is embraced, and evangelicals attempt to use it as an evangelism tool: “look how Christianity does such a good job of advocating for a green, sustainable world.” An emphasis on favorable comparison with other religions (“Christians are greener than Muslims, Hindus, etc.”) captures a bit of the mindset. While greener, they remain militant and disinterested in interfaith dialogue. Missionary networks endure even in a more divided world, but the focus continues to be personal salvation, with a bit of green consciousness on the side.

Christian roots of green consciousness find independent expression, less tied to mainline churches or institutional Christianity. Organizations like CTA, Biologos, EACH, and others grow, but become more secularly focused and theologically diffuse as a result. They fail to coalesce around common causes, and weakened global cooperation ensures their impact remains limited, a shadow of its potential. While emerging as a viable alternative to organized Christianity, their lack of cohesion translates into manifold affinity groups that gather around narrow missions rather than a movement with a broad vision for transformation.

The Value of Play and the Telos of Technology

We want to create things, and we want to create them with other people. And we want to connect over that. That’s the value of play.

Micah Redding

At our January Advisory Board meeting, we explored the question of whether we live in a technological age. In Part 1 we addressed the idea of a technological age, and in Part 2 we discussed the telos of technology and the value of work. In Part 3 below, we continue the conversation by exploring the telos of technology and the value of play.

Micah: I’ve been thinking about this in terms of our earlier discussion about the nature of technology. I kind of go with Andy Clark and David Chalmers, with the extended mind, extended cognition thesis. Technology, everything in our environment, we make it part of ourselves. An analogy is in the way birds use their environment to make nests. We all wrap our environment around us in some ways. Humans do this in a way that’s incredibly fluid and open-ended and flexible. And what are we doing? What is our telos for that? I think what we ultimately want is that we want to play. We want to create things, and we want to create them with other people. And we want to connect over that. That’s the value of play.

The Impulse to Play

You can look at all the negative impulses and drives in our society as sublimated versions of that impulse to play. We’re all trying to play some kind of game, and maybe we don’t allow ourselves to do that. So we twist it in some way to convince ourselves it’s serious. I think you see this particularly in edge technological communities like those around web3 and NFTs. These kinds of spaces are heavily reviled in the larger culture right now, yet the people in them feel like they are essentially playing with friends. They’re creating something with friends, and they’re trying to connect with people.

We see the value of play across human history. Early humans were trying to survive, trying to overcome starvation, and so forth. But we didn’t just do that. We also made cave paintings. We also told stories and we put ourselves into those stories. And that’s increasingly what we’ve done through history. As soon as we create virtual worlds, we want to put ourselves in those worlds, because this is what it is to play. We keep putting ourselves into stories and pulling in people and our environment into them. 

So I think that’s what we’re doing, ultimately. We play. That can be a good, healthy, and productive thing. From a Christian perspective, I would say we’re children of God, and children are made to play. That’s what we’re supposed to be doing. The value of play is central to the human telos. So one step toward a telos of technology is to just be more aware of the way play makes up the human telos. 

Rock paintings from the Cave of Beasts (Gilf Kebir, Libyan Desert)
By Clemens Schmillen – Own work, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=31399425

Free Play versus Structured Play

František: Is there a difference between game and play? Because for me, it appears that a game has some rules. Play itself doesn’t have to have rules. I’m just playing with something. But a game, if you want to take part in the game, you have to follow the rules. Like with traffic. The rules of traffic are basically the rules of a game. And if you want to play the game–be part of the traffic–you have to follow the rules. If you don’t follow them, you aren’t allowed to drive. You must leave the game. And we can extend this example to anything else. 

Elias: I think we can talk about this as free play versus structured play. Free play or unstructured play is like a toddler just imagining his or her world. You try to make them play a game and they’re like, no, no, I’m going to change the rules. Gaming is a little more structured. It has rules. I think there’s room for both. 

Wen: We can see a spectrum or a continuum of how much rigidity and structure and rules there are. But even when there are certain rules and constraints, they can still enhance the joy and flourishing of play. One example of that is when you let little kids play in a park. You don’t want them to run into the street, so you set boundaries. Putting rules or boundaries in place can enhance safety and creativity and the joy within play. I’ve done a lot of movement and improv games with adults in very rigid corporate organizations, trying to get them to play. You create boundaries, but then you say, within those boundaries, you can do or explore whatever you want, and express yourself however you feel. 

The Infinite Game


Micah: James Carse describes the concept of finite versus infinite games. In finite games, you play to win. Infinite games, you play to keep playing. And finite games are the kind we think of as rule-based. Infinite games are like what children play where now they’re playing house, now they’re pretending to be dogs, now they’re magicians. The play is constantly mutating and fluid.

The infinite game doesn’t have a rule set in the same way that the finite game does. But it does have a condition, which is that you don’t destroy the ability to keep playing. The value of play forms the basis of it. So when people get kicked out of the game, you find a way to bring them back in. You continually wrap people back in, you continually ensure that the basis of gameplay, the basis of play itself, remains. So there is no strict rule. But there is this premise, that we are all trying to keep playing, we’re going to make sure we don’t destroy the ability to play as we go.

Placing Human Dignity at the Center of AI Ethics

In late August we had our kick-off Zoom meeting of the Advisory Board. This is the first of our monthly meetings where we will be exploring the intersection of AI and spirituality. The idea is to gather scholars, professionals, and clergy to discuss this topic from a multi-faceted view. In this blog, we publish a short summary of our first conversation. The key theme that emerged was a concern for safeguarding and upholding human dignity as AI becomes embedded in growing spheres of our lives. This preoccupation must inhabit the center of all AI discussions and be the guiding principle for laws, business practices, and policies.

Question for Discussion: What, in your perspective, is the most pressing issue in AI ethics in the next three to five years? What keeps you up at night?

Brian Sigmon: The values from which AI is being developed, and their end goals. What is AI oriented for? Usually in the US, it’s oriented towards profit, not oriented to the common good or toward human flourishing. Until you change the fundamental orientation of AI’s development, you’re going to have problems.

AI is so pervasive in our lives that we cannot escape it. We don’t always understand the logic behind it. It is often beneath the surface, intertwined with many other issues. For example, when I go on social media, AI controls what I see on my feed. It does not optimize for making me a better person but instead maximizes clicks and revenue. That, to me, is the key issue.

Elias Kruger: Thank you, Brian. To add some color to that, since the pandemic, companies have increased their investment in AI. This in turn is creating a corporate AI race that will further ensure the encroachment of AI across multiple industries. How companies execute this AI strategy will deeply shape our lives, not just here in the US but globally.


Frantisek Stech: Coming from Eastern Europe, one of the greatest issues is the abuse of AI by authoritarian, non-democratic regimes for human control. In other words, it is the relationship between AI control and human freedom. Another practical problem is that people are afraid of losing their jobs to AI-driven machines.

Elias Kruger: Thanks Frantisek, as you know we are aware of what is happening in China with the merging of AI and authoritarian governments. Can you tell us a little bit about your area of the world? Is AI more government-driven or more corporate?

Frantisek Stech: In the Czech Republic, we belong to the EU, and therefore to the West. So it is very much corporate-driven. Yet we are very close to our Eastern neighbors, and we are watching closely how things develop in Belarus and especially China, as they will inevitably impact our region of the world.

However, this does not mean we are free from danger here. There is the issue of election manipulation, which started with the Cambridge Analytica scandal and the issues with the presidential elections in the US. Now we are approaching elections in the EU, so there is a lot of discussion about how AI will be used for manipulation. When people hear “AI,” they often associate it with politics. They think they are already being manipulated if they buy a phone with facial recognition. We have to be cautious but not completely afraid.

Ben Day: I often ponder this question of how AI, or technology in general, relates to dignity and individual human flourishing. When we aggregate and manipulate data, we strip out individual human dignity, which is a Christian virtue, and begin to see people as compilations of data to be manipulated. It is really a threat to ontology, to our very sense of being. In effect, it is an assault on human dignity through AI.

Going further, I am interested in this question of how AI encroaches on our sense of identity. That is, how algorithms govern my entire exposure to media and news. Not just that, but AI impacts our whole social eco-verse, online and offline. What does that have to do with the nature of my being?

I often say that I have a very low view of humanity. I don’t think human beings are that great. And so, I fear that AI can manipulate the worst parts of human nature. That is an encroachment on human dignity.

In the Episcopal church, we believe that serving Christ is intimately connected with upholding the dignity of human beings. So, if we are turning a blind eye to human dignity being manipulated, then my Christian praxis compels me by moral obligation to do something about it.


Elias Kruger: Can you give us a specific example of how this plays out?

Ben Day: Let me give you one example of how this affected my ministry. I removed myself from most social media in October of 2016 because of what I was witnessing. I saw members of my church sparring on the internet, attacking each other’s dignity and intellect over politicized issues. The vitriol was so pervasive that I encountered a moral dilemma. As a priest, it is my duty to deny the sacrament to those who are in unrepentant sin.

So I would face parishioners only hours after these spars online and wonder whether I should offer them the sacrament. I was facing this conundrum as a result of algorithms manipulating feeds to foster angry engagement because it leads to profit. It virtually puts the church at odds with how these companies pursue profit.

Levi Checketts: I lived in Berkeley for many years, and the cost of living there was really high. This was because many people who worked in Silicon Valley or in San Francisco were moving there. The influx of well-to-do professionals raised home prices in the area, forcing less fortunate existing residents to move out.

So there is all this money going into AI. Of the five biggest companies by market cap, three are in Silicon Valley and two are in the Seattle area. Tech professionals often are not fully aware of the impact their work has on the rest of the world. For example, a few years back, a tech employee wrote an op-ed complaining about having to see disgusting homeless people on his way to work when he was paying so much for rent.

What I realized is that there is a massive disconnect between humanity and the people making decisions for companies that are larger than many countries’ economies. My biggest concern is that the people in charge of AI have many blind spots. They are unable to empathize with those who are suffering, or even to notice the realities of systems that breed oppression and poverty. To them, there is always a technical fix. Many lack the humility to listen to other perspectives, coming mainly from male Asian and White backgrounds. They are often opposed to perspectives that challenge their work.

There have been high-profile cases recently like Google firing a black female researcher because she spoke up about problems in the company. The question that Ben mentioned about human dignity in AI is very pressing. If we want to address that, we need people from different backgrounds making decisions and working to develop these technologies.

Furthermore, if we define AI as a being that makes strictly rational decisions, what about people who do not fit that mold?

The key questions are where do we locate this dignity and how do we make sure AI doesn’t run roughshod over humanity?

Davi Leitão: These were all great points that I was not thinking about before. Thank you for sharing this with us.

All of these are important questions, and they drive the need for regulation and laws that will steer profit-driven corporations onto the right path. All of the privacy and data security laws stand on a set of principles written by the OECD in 1980. These laws look to those principles and put them into practice. They are there to inform and safeguard people from bias.

My question is: what are the blind spots in the FIPs (fair information principles) that fail to account for the new issues technology has brought in? This casts a wide net, but it can help guide many of the new laws to come. This is the only way to make companies care about human dignity.

Right now, there is a proliferation of state laws. But this brings another problem: customers in states that have regulation can suffer discrimination by companies from other states. Therefore, there is a need for a uniform federal set of principles and laws about privacy in the US. The inconsistency between state laws keeps lawyers in business but ultimately harms the average citizen.

Elias Kruger: Thanks for this perspective. I think it would be a good takeaway for the group to look for blind spots in these principles. AI is about algorithms and data. Data is fundamental. If we don’t handle it correctly, we can’t fix it with algorithms.

My 2 cents is that when it comes to AI applications, the one that concerns me most is facial recognition for surveillance and law enforcement. I don’t think there is any other application where a mistake can have such a devastating impact on the victim. When AI wrongly incriminates someone of a crime because an algorithm confused their face with the actual perpetrator’s, the individual loses their freedom. There is no way to recover from that.

This application calls for immediate regulation that puts human dignity at the center of AI so we can prevent serious problems in the future.

Thanks everybody for your time.

Klara and the Sun: Robotic Redemption for a Dystopian World

In the previous blog, we discussed how Klara, the AI and main character of Kazuo Ishiguro’s latest novel, develops a religious devotion to the Sun. In this second and final installment of the book review, I explore how Klara impacts the people around her. Klara and the Sun shows how they become better humans through interacting with her in a dystopian world.


Gene Inequality

Because humans are only supporting characters in this novel, we learn about their world only later in the book. The author does not give out a year but places the story in a near future. Society is sharply divided along class and racial lines. Gene editing has become a reality, and parents can now opt to have children born with the traits that will help them succeed in life.

This stark choice not only affects the family’s fate but re-orients the way society allocates opportunities. Colleges no longer accept average kids, meaning that a natural birth puts a child at a disadvantage. Yet this choice comes at a cost: experimenting with genes also means a higher mortality rate for children and adolescents. That is the case for the family that purchases Klara; they have lost their first daughter, and now their second is sick.

These gene-edited children receive special education remotely at home from specialized tutors. This turned out to be ironic in a pandemic year when most children in the world learned through Zoom. They socialize through prearranged gatherings in homes. The well-to-do live in gated communities, supposedly because the world has become unsafe. This is just one of the many aspects of the dystopian world of Klara and the Sun.


AI Companionship and Remembrance

A secondary plot-line in the novel is the relationship between the teenage Josie, Klara’s owner, and her friend Rick, who is not gene-edited. The teens are coming of age in this tumultuous period, and the viability of their relationship is in question. The adults discuss whether they should even be together in a society that delineates separate paths assigned at birth. One has safe passage into college and stable jobs, while the other is shut out from opportunity by the sheer fact that their parents did not interfere with nature.

In this world, droids are common companions to wealthy children. Since many no longer go to school, the droid plays the role of nanny, friend, guardian, and at times tutor. Even so, there is resistance to them in the public square, where resentful segments of society view their presence with contempt. They represent a symbol of status for the affluent and a menace to the working class. Their owners, meanwhile, often treat them as merchandise. At best they are seen as servants and at worst as disposable toys that can be tossed around for amusement.

The novel also hints at the use of AI to extend the life of loved ones. AI remembrance, shall we say. That is, programming AI droids to take the place of a deceased human. This seems like a natural complement in a world where parents have no guarantee that their gene-edited children will live to adulthood. For some, the AI companion could live out the years their children were denied.

Klara The Therapist

In the world described above, the AF (artificial friend) plays a pivotal role in family life not just for the children that they accompany but also for the parents. In effect, because of her robotic impartiality, Klara serves as a safe confidant to Josie, Rick, her mother, and her dad. The novel includes intimate one-on-one conversations where Klara offers a fresh take on their troubles. Her gentle and unpretentious perspective prods them to do what is right even when it is hard. In this way, she also plays a moral role, reminding humans of their best instincts.

Yet humans are not the only ones impacted. Klara also grows and matures through her interaction with them. Navigating the tensions, joys, and sorrows of human relationships, she uncovers the many layers of human emotion. Though lacking tear ducts and a beating heart, she is not a prisoner of detached rationality. She suffers with the pain of the humans around her, cares deeply about their well-being, and is willing to sacrifice her own future to ensure they have one. In short, she is programmed to serve them not as a dutiful pet but as a caring friend. In doing so, she embodies the best of human empathy.

The reader joins Klara in her path to maturity and it is a delightful ride. As she observes and learns about the people around her, the human readers get a mirror to themselves. We see our struggles, our pettiness, our hopes and expectations reflected in this rich story. For the ones that read with an open heart, the book also offers an opportunity for transformation and growth.

Final Reflections

In an insightful series of four blogs, Dr. Dorabantu argues that future general AI will be hyper-rational, forcing us to re-imagine the essence of who we are. Yet Ishiguro presents an alternative hypothesis. What if, instead, AI technology led to the development of empathetic servant companions? Could a machine express both rational and emotional intelligence?

Emotionally intelligent AI would help us redefine the image of God not by contrast but by reinforcement. That is, instead of simply demonstrating our limitations in rationality it could expand our potential for empathy. The novel shows how AI can act as a therapist or spiritual guide. Through empathetic dialogue, they can help us find the best of our moral senses. In short, it can help us love better.

Finally, the book raises important ethical questions about gene editing’s promises and dangers. What would it look like to live in a world where “designer babies” are commonplace? Could gene editing combined with AI lead to the harrowing scenario where droids serve as complete replacements for humans? While Ishiguro’s future is fictitious, he speculates on technologies that already exist today. Gene editing and narrow AI are a reality, while general AI is plausibly within reach.

We do well to seriously consider their impact before a small group in Silicon Valley decides how to maximize profit from them. This may be the greatest lesson we can take from Klara and the Sun and its dystopian world.

Vulnerable like God: Perfect Machines and Imperfect Humans

This four-part series started with the suggestion that AI can be of real help to theologians, in their attempt to better understand what makes humans distinctive and in the image of God. We have since noted how different machine intelligence is from human intelligence, and how alien-like an intelligent robot could be ‘on the inside’, in spite of its humanlike outward behavior.

For theological anthropology, the main takeaway is that intelligence – understood as rationality and problem-solving – is not the defining feature of human nature. We’ve long been the most intelligent and capable creature in town, but that might soon change, with the emergence of AI. What makes us special and in the image of God is thus not some intellectual capacity (in theology, this is known as the substantive interpretation), nor something that we can do on God’s behalf (the functional interpretation), because AI could soon surpass us in both respects.

The interpretation of the imago Dei that seems to account best for the scenario of human-level AI is the relational one. According to it, the image of God is our special I-Thou relationship with God, the fact that we can be an authentic Thou, capable of receiving God’s love and responding back. We exist only because God calls us into existence. Our relationship with God is therefore the deepest foundation of our ontology. Furthermore, we are deeply relational beings. Our growth and fulfillment can only be realized in authentic personal relationships with other human beings and with God.

AI and Authentic Relationality

It is not surprising that the key to human distinctiveness is profoundly relational. Alan Turing tapped right into this intuition when he designed his eponymous test for AI. Turing’s test is, in fact, a measurement of AI’s ability to relate like us. Unsurprisingly, the most advanced AIs still struggle when it comes to simulating relationships, and none has yet passed the Turing test.

But even if a program someday convincingly relates to humans, will that be an authentic relationship? We’ve already seen that human-level AI will be anything but humanlike ‘on the inside.’ Intelligent robots might become capable of speaking and acting like us, but they will be completely different from us in terms of their internal motivation and meaning systems. What kind of relationship could there be between us and them, when we’d have so little in common?

We long for other humans precisely because we are not self-sufficient; we seek others in order to discover them, and our own selves, through relationships. We fall in love because we are not completely rational. Human-level AI will be the opposite of that: self-sufficient, perfectly rational, and with quasi-complete knowledge of itself.

The Quirkiness of Human Intelligence

Our limitations are instrumental for the kind of relationships that we have with each other. An argument can thus be made that a significant degree of cognitive and physical vulnerability is required for authentic relationality to be possible. There can be no authentic relationship without the two parties intentionally making themselves vulnerable to each other, opening to one another outside any transactional logic.


A hyper-rational being would likely have serious difficulties engaging fully in relationships and making itself totally vulnerable to a loved other. It surely does not sound very smart.

Nevertheless, we humans do this tirelessly and often at great cost, perhaps precisely because we are not as intelligent and goal-oriented as AI. Although that appears illogical, it is such experiences that give meaning and fulfillment to our lives.

From an evolutionary perspective, it is puzzling that our species evolved to be this way. Evolution promotes organisms that are better at adapting to the challenges of their environment, thus at solving practical survival and reproduction problems. It is therefore unsurprising that intelligence-as-problem-solving is a common feature of evolved organisms, and this is precisely the direction in which AI seems to develop.


What is strange in the vast space of possible intelligences is our quirky type of intelligence, one heavily optimized for relationship, marked by a bizarre thirst for meaning, and plagued by a surprising degree of irrationality. In the previous post I called out the strangeness of strong AI, but it is we who seem to be the strange ones. However, it is specifically this kind of intellectual imperfection, or vulnerability, that enables us to dwell in a sort of Goldilocks zone of intelligence where personal relationships and the image of God are possible.

Vulnerability, God, Humans and Robots


If vulnerability is so important for the image of God as relationship, does this imply that God too is vulnerable? Indeed, that seems to be the conclusion, and it is not surprising at all, especially when we think of Christ. Through God’s incarnation, suffering, and voluntary death, a deeply vulnerable side of the divine has been revealed to us. God is not an indifferent creator of the world, nor a dispassionate, almighty, all-intelligent ruler. God cares deeply for creation, to the extent of committing to the supreme self-sacrifice to redeem it (Jn. 3:16).

This means that we are most like God not when we are at our smartest or strongest, but when we engage in this kind of hyper-empathetic, though not entirely logical, behavior.

Compared to AI, we might look stupid, irrational, and outdated, but it is paradoxically due to these limitations that we are able to cultivate our divine likeness through loving, authentic, personal relationships. If looking at AI teaches theologians anything, it is that our limitations are just as important as our capabilities. We are vulnerable, just as our God has been revealed to be vulnerable. Being like God does not necessarily mean being more intelligent, especially when intelligence is seen as rationality or problem-solving.

Christ – whether considered historically or symbolically – shows that what we value most about human nature are traits like empathy, meekness, and forgiveness, which are eminently relational qualities. Behind such qualities are ways of thinking rooted more in the irrational than in the rational parts of our minds. We should then wholeheartedly join the apostle Paul in “boast[ing] all the more gladly about [our] weaknesses […] for when [we] are weak, then [we] are strong” (2 Cor. 12:9-10).

Human-level, but not Humanlike: The Strangeness of Strong AI

The emergence of AI opens up exciting new avenues of thought, promising to add some clarity to our understanding of intelligence and of the relation between intelligence and consciousness. For Christian anthropology, observing which aspects of human cognition are easily replicated in machines can be of particular help in refining the theological definition of human distinctiveness and the image of God.

However, by far the most theologically exciting scenario is the possibility of human-level AI, or artificial general intelligence (AGI), the Holy Grail of AI research. AGI would be capable of convincingly replicating human behavior. It could, in principle, pass as human, if it chose to. This is precisely how the Turing Test is designed to work. But how humanlike would a human-level AI really be?

Computer programs have already become capable of impressive things, which, when done by humans, require some of our ‘highest’ forms of intelligence. However, the way AI approaches such tasks is very non-humanlike, as explained in the previous post. If the current paradigm continues its march towards human-level intelligence, what could we expect AGI to be like? What kind of creature might such an intelligent robot be? How humanlike would it be? The short answer is, not much, or even not at all.

The Problem of Consciousness

Philosophically, there is a huge difference between what John Searle calls ‘strong’ and ‘weak’ AI. While strong AI would be an emulation of intelligence, weak AI would be a mere simulation. The two would be virtually indistinguishable on the ‘outside,’ but very different ‘on the inside.’ Strong AI would be someone, a thinking entity endowed with consciousness, while weak AI would be a something, a clockwork machine completely empty on the inside.

It is still too early to know whether AGI will be strong or weak, because we currently lack a good theory of how consciousness arises from inert matter. In philosophy, this is known as “the hard problem of consciousness.” But if current AI applications are any indication, weak AGI is a much more likely scenario than strong AGI. So far, AI has made significant progress in problem-solving, but it has made zero progress in developing any form of consciousness or ‘inside-out-ness.’

Even if AGI does somehow become strong AI (how could we even tell?), there are good reasons to believe that it would be a very alien type of intelligence.


The Mystery of Being Human

As John McCarthy – one of the founding fathers of AI – argues, an AGI would have complete access to its internal states and algorithms. Just think about how weird that is! Humans have a very limited knowledge of what happens ‘on their inside.’ We only see the tip of the iceberg, because only a tiny fraction of our internal processes enter our stream of consciousness. Most information remains unconscious, and that is crucial for how we perceive, feel, and act.

Most of the time we have no idea why we do the things we do, even though we might fabricate compelling, post-factum rationalizations of our behavior. But would we really want to know such things and always act in a perfectly rational manner? Or, even better, would we be friends or lovers with such a hyper-rational person? I doubt it.

Part of what makes us what we are and what makes human life enjoyable is arguably related to the chunk of our mind that is not completely rational. Our whole lives are journeys of self-discovery, and with each experience and relationship we get a better understanding of who we are. That is largely what motivates us to reach beyond our own self and do stuff. Just think of how much of human art is driven precisely by a longing to touch deeper truths about oneself, which are not easily accessible otherwise.

Strong AI could be the opposite of that. Robots might understand their own algorithms much better than we do, without any need to discover anything further. They might be able to communicate such information directly as raw data, without needing the approximation/encryption of metaphors. As Ian McEwan’s fictitious robot character emphatically declares, most human literature would be completely redundant for such creatures.

The Uncanny Worldview of an Intelligent Robot

Intelligent robots would likely have a very different perception of the world. With access to Bluetooth and WiFi, they would be able to ‘smell’ other connected devices and develop a sort of ‘sixth sense’ of knowing when a particular person is approaching merely from their Bluetooth signature. As roboticist Rodney Brooks shows, robots will soon be able to measure one’s breathing and heart rate without any biometric sensor, simply by analyzing how a person’s physical presence slightly changes the behavior of WiFi signals.

The technology for this already exists, and it could enable the robot to have access to a totally different kind of information about the humans around it, such as their emotional state or health. Similar technologies for detecting changes in the radio field could allow robots to do something akin to echolocation and know whether they are surrounded by wood, stone, or metal. Just imagine how alien a creature endowed with such senses would be!

AGI might also perceive time very differently from us, because it would think much faster. The ‘wetware’ of our biological brains constrains the speed at which electrical signals can travel. Electronic brains, however, could enable speeds closer to the ultimate physical limit, the speed of light. Minds running on such faster hardware would also think proportionally faster, making their experience of the passage of time proportionally slower.

If AGI thought ‘only’ ten thousand times faster than humans, a conservative estimate, it would inhabit a completely different world. It is difficult to imagine how such creatures might regard humans, but futurist James Lovelock chillingly estimates that “the experience of watching your garden grow gives you some idea of how future AI systems will feel when observing human life.”

The way AGI is depicted in sci-fi (e.g. Terminator, Ex Machina, or Westworld) might rightly give us existential shivers. But if the predictions above are anywhere near right, then AGI might turn out to be weirder than our wildest sci-fi dreams. AI might reach human-level, but it would most likely be radically non-humanlike.

Is this good or bad news for theological anthropology? How would the emergence of such an alien type of intelligence affect our understanding of humanity and its imago Dei status? The next post, the last one in this four-part series, wrestles head-on with this question.

Artificial Intelligence: The Disguised Friend of Christian Anthropology

AI is making one significant leap after the other. Computer programs can nowadays convincingly converse with us, generate plausible prose, diagnose disease better than human experts, and totally trash us in strategy games like Go and chess, once considered the epitome of human intelligence. Could they one day reach human-level intelligence? It would be extremely unwise to discount such a possibility without very good reasons. Time, after all, is on AI’s side, and the kind of things that machines are capable of today used to be seen as quasi-impossible just a generation ago.


The scenario of human-level AI, also known as artificial general intelligence (AGI), would be a game-changer for every aspect of human life and society, but it would raise particularly difficult questions for theological anthropology. Since the dawn of the Judeo-Christian tradition, humans have perceived themselves as a creature unlike any other. The very first chapter of the Bible tells us that we are special, because only we of all creatures are created in the image of God (imago Dei). However, the Copernican revolution showed us that we are not the center of the universe (not literally, at least), and the Darwinian revolution revealed that we are not ontologically different from non-human animals. AGI is set to deliver the final blow, by conquering the last bastion of our distinctiveness: our intelligence.

By definition, AGI would be capable of doing anything that a standard human can do, at a similar or superior level. How could we possibly speak of human distinctiveness when robots become indistinguishable from us? Christian anthropology would surely be doomed, right? Well, not really; quite the contrary, actually. Instead of rendering us irrelevant and ordinary, AI could in fact represent an unexpected opportunity to better understand ourselves and what makes us in God’s image.

Science’s Contribution to the Imago Dei

To explain why, it is useful to step back a little and acknowledge how much the imago Dei theology has benefitted historically from an honest engagement with the science of its time. Based solely on the biblical text, it is impossible to decide what the image of God is supposed to mean exactly. The creation story in Genesis 1 tells us that only humans are created in the image and likeness of God, but little else about what the image means. The New Testament does not add much, except for affirming that Jesus Christ is the perfect image. Ever since Patristic times, Christian anthropology has constantly wrestled with how to define the imago Dei, without much success or consensus.

The obvious way to tackle the question of our distinctiveness is to examine what differentiates us from the animals, the only others with which we can meaningfully compare ourselves. For most of Christian history, this difference has been located in our intellectual capacities, an approach heavily influenced by Aristotelian philosophy, which defined the human as the rational animal. But then came Darwin, who showed us that we are not as different from the animals as we thought we were.


Furthermore, the following century and a half of ethology and evolutionary science revealed that our cognitive capacities are not bestowed upon us from above. Instead, they are rooted deep within our evolutionary history, and most of them are shared with at least some of the animals. If there is no such thing as a uniquely human capacity, then surely we were wrong all along to regard ourselves as distinctive, right?

Not quite. Theologian Aubrey Moore famously said that Darwin “under the guise of a foe, did the work of a friend” for Christianity. Confronted with the findings of evolutionary science, theologians were forced to abandon the outdated Aristotelian model of human distinctiveness and look for more creative ways to define the image of God. Instead of equating the image with a capacity that humans have, post-Darwinian theology regards the imago Dei in terms of something we are called to do or to be.

Defining the Imago Dei

Some theologians interpret the image functionally, as our election to represent God in the universe and exercise stewardship over creation. Others go for a relational interpretation, defining the image through the prism of the covenantal ‘I-Thou’ relationship that we are called to have with God, which is the foundation of human existence. To be in the image of God is to be in a personal, authentic relationship with God and with other human beings. Finally, there are others who interpret the imago Dei eschatologically, as a special destiny for human beings, a sort of gravitational pull that directs us toward existential fulfilment in fellowship with God, in the eschaton. Which of these interpretations is the best? Hard to say. Without going into detail, let’s just say that there are good theological arguments for each of them.

If purely theological debate does not produce clear answers, we might then try to compare ourselves with the animals. This, though, does not take us very far either. Although ‘technically’ we are not very different from the animals, and we share with them similar bodily and cognitive structures, in practical terms the difference is huge. Our mental lives, our societies, and our achievements are so radically different from theirs that it is actually impossible to pinpoint just one dimension that represents the decisive difference. Animals are simply no match for us. This is good news for human distinctiveness, but it also means that we might be stuck in a never-ending theological debate on how to interpret the image of God, with so many options on our hands.

How Can AI Help Define Who We Are?

This is where the emergence of human-level AI can be a game-changer. For the first time, we would be faced with the possibility of an equal or superior other, one that could potentially (out)match us in everything, from our intellectual capacities to what we can do in the world, our relational abilities, and the complexity of our mental lives. Instead of agonizing about AI replacing us or rendering us irrelevant, we could relish the opportunity to better understand our distinctiveness through the insights brought about by the emergence of this new other.

The hypothesis of AGI might present theologians with an extraordinary opportunity to narrow down their definitions of human distinctiveness and the image of God. Looking at what would differentiate us from human-level AI, if indeed anything at all, may provide just the right amount of conceptual constraint needed for a better definition of the imago Dei. In this respect, our encounter with AI might prove to be our best shot at comparing ourselves with a different type of intelligence, apart from maybe the possibility of ever finding extra-terrestrial intelligence in the universe.


Dr. Marius Dorobantu is a research associate in science & theology at VU Univ. Amsterdam (NL). His PhD thesis (2020, Univ. of Strasbourg, FR) analysed the potential challenges of human-level AI for theological anthropology. The project he’s currently working on, funded by the Templeton WCF within the “Diverse Intelligences” initiative, is entitled ‘Understanding spiritual intelligence: psychological, theological, and computational approaches.’

Is Transhumanism a Challenge or an Opportunity for the Christian Faith?

This week, Ravi Zacharias’ institute here in Atlanta hosted an event entitled “Should we fear Artificial Intelligence?” In it, British mathematician and Christian apologist John Lennox gave a lecture on the challenge of AI and Transhumanism to the Christian faith. Dr. Lennox’s talk covered a wide range of topics, including the difference between general and narrow AI, emerging Transhumanism, relevant literature, and theological responses.

Coming from an apologist’s (defender of the faith) approach, the professor focused on how the emergence of AI diverges from classical Christianity. While affirming some of the possibilities this technology brings, Lennox emphasized how it runs contrary to a Judeo-Christian understanding of the world. Citing many examples, he sees the rise of AI and Transhumanism as another Tower of Babel project. In Transhumanism, more specifically, he sees a direct counterfeit of Christian eschatology: while the New Testament speaks of a final human transformation through the Second Coming of Christ, Transhumanism speaks of a similar transformation through technology. Furthermore, Dr. Lennox saw echoes of Revelation in the rise of superintelligence as a possible tool for global social control. To drive this point home, he drew a parallel between Max Tegmark’s image of Prometheus and the biblical figure of the beast.

In his view, there is a clear difference between AI and humanity: the first is an invention of humans, while the latter is God’s creation. In doing so, he reinforced a separation between technology and biology as opposing endeavors with little connection. His main concern was that by focusing too much on AI, which he rightly defined as algorithms, we could lose sight of humanity’s Imago Dei. In short, while not explicitly telling us to fear AI, Dr. Lennox’s driving narrative was one of caution and concern. In his view, Transhumanism is a reformulation of the second-century heresy of Gnosticism. With that said, he affirmed that Christ would rise victorious in the end, even if AI could bring havoc to the world.

From Confrontation to Dialogue

Dr. Lennox’s presentation rightly uncovered and explored the idolatrous tendencies in the Transhumanist movement. Pushing the boundaries of immortality can be an exercise in human-centrism in direct defiance of God’s sovereignty. The optimism that intoxicates the movement can well be tempered by a healthy dose of Judeo-Christian skepticism. For Christians and Jews, humanity is steeped in sin, which makes any human endeavor to achieve ultimate good suspect.

Yet, by painting Transhumanism as an offshoot of atheistic naturalism, he misses an opportunity to see how it can enter into a fruitful dialogue with Christianity. What do I mean by that? If Christianity and Transhumanism both preach the transformation of humans into an elevated ideal state, could there be parallels between them worth exploring? For centuries, Christianity has preached spiritual transformation as humans are shaped into the God-human Christ. Can technology be part of this transformation? Can the transformation of individuals and communities include technology, to enact here a picture of the coming kingdom of God?

I suspect that to enter into this dialogue, two prior movements are necessary. The first is re-framing the relationship between Christianity and science. While he did not say so explicitly, Dr. Lennox seemed to espouse the view that science (and more specifically evolutionary biology) contradicts the claims of Genesis and therefore cannot be reconciled with them. In this binary view, there is only atheistic naturalism or theistic supernaturalism, where God’s action is confined to a strictly literal reading of the first books of the Pentateuch. If that is the case, then the past evolution proposed by Darwin and the future evolution proposed by Transhumanism are a direct threat to the Christian faith. If, however, science can be harmonized with the biblical view of creation, then evolution (whether past or future) is no longer a challenge to Christianity. Instead, it can find parallels with the Christian idea of deification (East) or sanctification (West).

The second movement is re-defining the separation between nature and technology. Dr. Lennox spent portions of his talk differentiating AI from human intelligence. His main point was that the biological one was divinely made while the artificial one was humanly created. By framing the two as opposing ideas, the connection between them is lost, and technology will always look like an inferior pursuit compared to the biological reality around us. What if they were not opposing phenomena but two sides of a continuum? What if technology were God’s way to further perpetuate Creation?

A New Strategy

I recognize that asking these questions poses tremendous challenges to a classical (Modernist) understanding of Christianity. The avenues explored above are not new, nor am I the first to suggest them. They are well fleshed out in the writings of Teilhard de Chardin. The Jesuit paleontologist initiated this dialogue in the middle of the 20th century, well before digital technology transformed our lives. We would do well to revisit his ideas.

Yet, a traditional view of apologetics that simply files AI and Transhumanism under past heresies will not suffice. It overlooks the breadth and depth of how these developments are re-defining humanity. It also pegs them to past ideological challenges that, while similar on the surface, belong to very different historical contexts.

To establish boundaries and define what is right and wrong is a good first step. However, in a time of fast-paced change, these boundaries will have to be revisited often, making the whole enterprise inadequate. Moreover, such a strategy may help keep some in the faith but will certainly do little to attract newcomers to it. For the latter purpose, there is no alternative but to engage more deeply with the challenge that AI and Transhumanism pose to our time.

There is much to be said on that. For now, I propose the outlines of an alternative apologetics through a few provocative questions. What if, instead of challenging competing ideologies directly, we tried to subvert them? What if, instead of exposing fault lines between Christianity and a competing ideology to defend orthodoxy, we appropriated Transhumanist aspirations and directed them towards Christian aims?

Shifting Towards Education: A New Direction for 2018

Happy new year, everybody!

After a hiatus for the holiday season, I am now back to blogging with a renewed focus. For those of you who follow this blog or know me personally, last year was an encouraging beginning as I posted here my musings on the intersection between Theology and Artificial Intelligence. Above all, I’ve been encouraged by the conversation some of the posts have started.

After some reflection over the hiatus, I decided to shift the focus of the blog. As you may know, there are not a lot of voices speaking in this field, so the opportunities for making a contribution are vast. Moreover, I don’t see the topic of AI becoming less important in the coming years. The question I asked myself was how I could best contribute, considering my skills, passion, and knowledge. Promoting discussion on the topic was a good start, but I was not satisfied with being just a thoughtful observer. The best insights often come from those who are immersed in practicing the field they are discussing.

Even as I type there are hundreds of AI startups starting to shape the future we’ll live in. There is a growing group of academics, consultants and enthusiasts speculating about what that would look like. Moreover, there are thousands of Data Scientists currently shaping the future of existing organizations building AI applications that will transform these enterprises for years to come. Eventually, politicians will catch up and start discussing policy and laws to regulate how AI is used.

While all this is happening, I think about my children. Will they have the tools they need to navigate this AI future? Will they be ready not only to survive but also thrive in this uncertain future?

When I look at the educational system they are in, it is clearly not up to the task. While I appreciate the wonderful work teachers do daily all over the world, the problem is systemic. The Western educational system was built in the last century to raise industrial workers. The economy required workers to learn a fixed trade that would last them through their lifetime. Moreover, the academic system is always preparing students for the next level of education. Regardless of whether they pursue a job or continue their studies, a high school degree prepares the student for college, which prepares them for Masters’ work, which, except for professional degrees, prepares them for pursuing PhDs. Hence, students are conditioned to excel within the academic “bubble” and have little interaction with the real world of jobs, leadership, and service. Aside from a few exceptions, students are expected to figure out on their own how to apply the knowledge they learn to real workplace scenarios. And while the system forces students to study separate disciplines, life is lived in multi-disciplinary spaces.

Staying out of the politically-charged discussion of “how to save our schools”, I would rather work on offering something that builds on what schools already provide. In my view, STEM (Science, Technology, Engineering and Math) education continues to be a challenge even as we have made progress in recent years. My concern with the current focus is that it separates these disciplines from the humanities. In this way, students are taught only the “how” but rarely the “why” of STEM. This approach only perpetuates an uncritical, consumerist relationship with technology, where we never stop to ask why these things are being created in the first place and how they benefit humanity. Therefore, the challenge is to engage young minds critically with STEM early on, empowering them to become creators with, rather than consumers of, technology.

While I can write about this frequently on the blog, being a detached analyst is not enough. That is why I am planning to develop actual learning experiences that address this gap. I am currently connecting with partners “glocally” to make that a reality. It will have both a classroom component as well as an online component. Stay tuned for more details.

What will that look like on aitheology.com?

The blog will flow from this journey of becoming an education entrepreneur. In this way, it serves as a platform for reflection, discussion, idea exchange and hopefully challenging some of you to join in this new endeavor. While I will continue to explore the themes of AI and theology, there will be an educational focus both in the topics discussed as well as in the way they are conveyed.

I also recognize that in our age, writing is not the most effective way to spread ideas and engage in conversation. Towards that end, I plan to add a podcast in the near future so you can interact with AI Theology in new ways. Finally, there will be plenty of opportunities for you to get involved in emerging projects.

I am excited for what this New Year will bring to us. I pray for wisdom and guidance in this new phase and I ask you to pray with me as well (if you are not religious, sending good thoughts would do).

Good News: The World is Getting Better

This week I want to discuss how our perception of the world shapes and guides our decisions about the future. In a previous blog post, I discussed the power of narrative and how important it is in constructing reality. In this post, I want to challenge the prevailing perspective of doom that dominates the narrative in airwaves, broadcasts, and most of social media. The dominant message is that our present world order is falling into chaos with no hope for redemption. This is not just a problem for large media corporations that need to prey on fear in order to sell news; it has become the de facto perspective in any conversation about national and global affairs.

I want to start by making a simple statement: the world is getting better.

Let me take a step back and propose a new paradigm. What if we look at the globe not from the anecdotal evidence highlighted by media stories but actually more like a CEO looks at his/her company? What would that look like? Working for a large corporation for many years, I spent countless hours preparing presentations for executive leaders so they could understand the state of their business. The story told in these meetings is built on numbers and data. The narrative flows from pre-determined agreed-upon measures of success that allow the leader to see whether their unit is on track or not to meet their goals.

One could say that such a view does not tell the whole story. That is indeed true. A company may be doing really well, but that may not safeguard all employees from the threat of being laid off. For the employee affected, a profitable quarter means nothing. However, if the numbers are showing times of distress ahead, the story of many employees will be impacted. If the business goes bankrupt, everybody loses their job. Therefore, regardless of how dry numbers may be, they point to imminent signs of trouble that we must attend to. We ignore them at our own peril.

So, if you were the CEO of the globe, what would be some important performance metrics to look for? Thankfully, I found this great blog by BJ Murphy that does exactly that, highlighting the trends around important issues like extreme poverty, wars, life expectancy, child mortality and others. The numbers show an undeniable improvement in all these key measures over the last 50 years. Believe it or not, these measures disprove the adage that “things were better in the olden days.” This is not to say that everything is getting better, but such overwhelming data should make us pause to celebrate. Things are getting better on many fronts if we just have the eyes to see them.

Are we happier, then? Well, if data is any indication, the answer is “no.” In fact, quite the opposite: rates of depression are increasing worldwide. There could be many reasons for that. It is not clear, for example, whether people are simply more depressed or whether we are now better able to diagnose depression and hence see an increase. Even allowing for that, this data stands in sharp contrast to the data in the paragraph above. From these two pieces of data alone, we can conclude that a better world may not necessarily be a happier world.

Re-imagining the Present to Create a Better Future

An unsung hero of the advances touted in BJ’s post is the rise of technology and science in the last century. If there has been a positive story, it is how science and technology have improved the quality of life. Yet one can never forget that technology also brought the atomic bomb to our planet. Science and technology by themselves could never be the answer for a better world, but they have certainly enabled dreamers to make one a reality. This seems to be not only reality but also perception. In a recent Pew Research survey, 42% of Americans indicated that technology has made their lives better, by far the biggest factor in a list that included medicine, civil rights and the economy. Technological advancement is one of the few narratives of hope in a sea of depressing storylines.

Here it is important to highlight that perception is relative to where we stand in relation to the past. Recently, older white men in the United States as a cohort have experienced rising rates of depression and anxiety. One explanation is the sense that their life conditions have deteriorated compared to those of their parents. The question is not whether the world is getting better but whether “my” world is getting better. This is not particular to that cohort; it applies to all of us. The question is always asked with a point of reference in mind. Yet is it possible to celebrate positive change even when our personal universe has deteriorated?

The first step towards imagining a new future is assessing the present from the perspective of the most vulnerable. If the world has indeed improved for them, then there is reason to celebrate. The data above supports this perspective. While recent change has produced losers and much work remains to be done, the good news is undeniable.

From Tech Consumers to Tech Creators

If technology has made life better, it has also made it more complicated. Any PC user who has had to endure Windows for a while knows that all the convenience brought by technology comes at a cost in complexity and troubleshooting. I believe part of the problem is that most of us approach technology as demanding consumers. That is, we expect technology to provide a pain-free solution to our problems. This is precisely the message large tech companies want us to believe: technology will solve all our problems and make life easier for everybody. That is often not the case.

To fully harness the benefits of technology, we must move from being consumers to being creators of technology. Last week I was inspired by this story of an 11-year-old girl who invented a water tester to detect water contamination. When interviewed, the girl said she was moved by the story of water contamination in Flint, MI and wanted to do something about it. She exemplifies a true technology creator who took it upon herself to solve a problem she cared about. Technology creators do not just use tech for convenience; they leverage it to solve problems. They use their God-given creativity to make the world a better place.

What if we could educate children and young adults to do more of that? What type of world could we build?