Klara and the Sun: Robotic Redemption for a Dystopian World

In the previous blog, we discussed how Klara, the AI protagonist of Kazuo Ishiguro’s latest novel, develops a religious devotion to the Sun. In this second and final installment of the book review, I explore how Klara impacts the people around her. Klara and the Sun shows how they become better humans for interacting with her in a dystopian world.

Photo by Randy Jacob on Unsplash

Gene Inequality

Because humans are only supporting characters in this novel, we learn about their world only later in the book. The author never gives a year but places the story in the near future. Society is sharply divided along class and racial lines. Gene editing has become a reality, and parents can now opt to have children born with the traits that will help them succeed in life.

This stark choice not only affects the family’s fate but also reorients the way society allocates opportunities. Colleges no longer accept average kids, meaning that a natural birth puts a child at a disadvantage. Yet this choice comes at a cost: experimenting with genes also means a higher mortality rate for children and adolescents. That is the case for the family that purchases Klara: they have lost their first daughter, and now their second one is sick.

These gene-edited children receive specialized education at home, remotely, from dedicated tutors. This detail proved ironic in a pandemic year when most children in the world were learning through Zoom. They socialize through prearranged gatherings in homes. The well-to-do live in gated communities, supposedly because the world has become unsafe. This is just one of the many aspects of the dystopian world of Klara and the Sun.

Photo by Andy Kelly on Unsplash

AI Companionship and Remembrance

A secondary plot line in the novel is the relationship between the teenage Josie, Klara’s owner, and her friend Rick, who is not gene-edited. The teens are coming of age in this tumultuous period, and the viability of their relationship is in question. The adults discuss whether they should even be together in a society that delineates separate paths assigned at birth: one has safe passage into college and stable jobs, while the other is shut out from opportunity by the sheer fact that his parents did not interfere with nature.

In this world, droids are common companions to wealthy children. Since many no longer go to school, the droid plays the role of nanny, friend, guardian, and at times tutor. Even so, there is resistance to them in the public square, where resentful segments of society view their presence with contempt. They represent a status symbol for the affluent and a menace to the working class. Yet their owners often treat them as merchandise: at best they are seen as servants, at worst as disposable toys to be tossed around for amusement.

The novel also hints at the use of AI to extend the life of loved ones. AI remembrance, shall we say. That is, programming AI droids to take the place of a deceased human. This seems like a natural complement in a world where parents have no guarantee that their gene-edited children will live to adulthood. For some, the AI companion could live out the years their children were denied.

Klara the Therapist

In the world described above, the AF (artificial friend) plays a pivotal role in family life, not just for the children they accompany but also for the parents. In effect, because of her robotic impartiality, Klara serves as a safe confidant to Josie, Rick, her mother, and her father. The novel includes intimate one-on-one conversations in which Klara offers a fresh take on their troubles. Her gentle and unpretentious perspective prods them to do what is right even when it is hard. In this way, she also plays a moral role, reminding humans of their best instincts.

Yet humans are not the only ones impacted. Klara also grows and matures through her interaction with them. Navigating the tensions, joys, and sorrows of human relationships, she uncovers the many layers of human emotion. Though lacking tear ducts and a beating heart, she is no prisoner of detached rationality. She suffers the pain of the humans around her, cares deeply about their well-being, and is willing to sacrifice her own future to ensure they have one. In short, she is programmed to serve them not as a dutiful pet but as a caring friend. In doing so, she embodies the best of human empathy.

The reader joins Klara on her path to maturity, and it is a delightful ride. As she observes and learns about the people around her, human readers get a mirror held up to themselves. We see our struggles, our pettiness, our hopes and expectations reflected in this rich story. For those who read with an open heart, the book also offers an opportunity for transformation and growth.

Final Reflections

In an insightful series of four blog posts, Dr. Dorabantu argues that future general AI will be hyper-rational, forcing us to re-imagine the essence of who we are. Yet Ishiguro presents an alternative hypothesis: what if, instead, AI technology led to the development of empathetic servant companions? Could a machine express both rational and emotional intelligence?

Emotionally intelligent AI would help us redefine the image of God not by contrast but by reinforcement. That is, instead of simply demonstrating our limitations in rationality, it could expand our potential for empathy. The novel shows how AI can act as a therapist or spiritual guide. Through empathetic dialogue, it can help us find the best of our moral senses. In short, it can help us love better.

Finally, the book raises important ethical questions about gene editing’s promises and dangers. What would it look like to live in a world where “designer babies” are commonplace? Could gene editing combined with AI lead to the harrowing scenario where droids serve as complete replacements for humans? While Ishiguro’s future is fictitious, he speculates on technologies that already exist. Gene editing and narrow AI are a reality, while general AI is plausibly within reach.

We would do well to consider their impact seriously before a small group in Silicon Valley decides how to maximize profit from them. This may be the greatest lesson we can take from Klara and the Sun and its dystopian world.

Klara and the Sun: Robotic Faith for an Unbelieving Humanity

In his first novel since winning the Nobel Prize in Literature, Kazuo Ishiguro explores the world through the lens of an AI droid. The novel retains many of the features that made his previous bestsellers famous: confined settings, slowly building emotional tension, and fragmented storytelling. All of this gains fresh relevance when applied to the sci-fi genre, and more specifically to the relationship between humanity and sentient technology. I’ll do my best to keep you from any spoilers, as I suspect this will become a motion picture in the near future. Suffice it to say that Klara and the Sun is a compelling statement for robotic faith. How? Read on.

Introducing the Artificial Friend

Structured into six long sections, the book starts by introducing us to Klara. She is an AF (Artificial Friend), a humanoid equipped with artificial intelligence and designed commercially to be a human companion. At least, this is what we can deduce from the first pages, as no clear explanation is given. In fact, this is a key device of the story: we learn about the world along with Klara. She is the one and only narrator throughout the novel.

Klara is shaped like a short woman with brown hair. The story starts in the store where she is on display for sale. There she interacts with customers, other AFs, and “Manager”, the human responsible for the store. All humans are referred to by their capitalized job or function. Otherwise, they are classified by their appearance or something peculiar to them.

The first seventy pages take place inside the store where she is on display. We learn about her personality, her keen powers of observation, and what peer AFs think of her. At times, she is placed near the front window of the store; that is when we get a glimpse of the outside world. This is probably where Ishiguro’s brilliance shines through, as he carefully creates a worldview that is unique, compelling, and humane, yet in many ways true to a robotic personality. The reader slowly grows fond of her as she immerses us in her whimsical perspective on the outside world. To her, a busy city street is a rich mixture of sights, with details we most often take for granted.

We also learn how Klara processes emotions and even has a will of her own. At times she mentions feeling fear. She is also very sensitive to conflict, trying to avoid it at all costs. With that said, she is no pushover: at one point she sabotages a customer’s attempt to buy her because she has committed herself to another prospect. She also seems to stand out from the other AFs, drawing both contempt and admiration from them.

Book cover from Amazon.com

The World Through Klara’s Eyes

She is sensitive, captivating, and always curious. Her observations are unique and honest. She brings together the innocence of a child and the mathematical ability of a scientist. This often leads to quirky observations as she watches the world unfold in front of her. In one instance, she describes a woman as “food-blender-shaped.”

Klara also has an acute ability to detect complex emotions in faces. In this way, she is able to peer through the crevices of the body and see the soul. In one instance, she spots a child smiling at her AF while her eyes portray anger. When observing a fight, she sees the intensity of anger in the men’s faces, describing them as horrid shapes, as if they were no longer human. When seeing a couple embrace, she captures both the joy and the pain of that moment and struggles to understand how it could be so.

This uncanny ability to read human emotion becomes crucial when Klara settles in her permanent home. Being a quiet observer, she is able to understand the subtle unspoken dynamics of family relationships. In some instances, she can see the love between mother and daughter even as they argue vehemently. She sees through the housekeeper’s hostility, recognizing it not as a threat but as concern. In this way, her view of humans tends to err on the side of charity rather than malice.

Keen as she is as an observer of humans, it is her relationship with the sun that drives the narrative forward. From the first pages, Klara notices the presence of sun rays in most situations. She will usually start her description of a scene by emphasizing how the sun’s rays were entering a room. We quickly learn that the sun is her main source of energy and nourishment, so it is not surprising that it looms so large in her worldview.

Yet Ishiguro takes this relationship further. Like ancient humans, Klara develops a religious devotion to the sun. The star is not only her source of nourishment but also a source of meaning, a god-like figure she looks to when in fear or in doubt.

That is when the novel gets theologically interesting.

Robotic Faith and Hope

As the title suggests, the sun plays a central role in Klara’s universe. This is true not only physiologically, as she runs on solar energy, but also spiritually. Her religious relationship with the sun begins through observation: already understanding its power to give her energy, she witnesses the sun restore a beggar and his dog to health. This event becomes Klara’s epiphany of the sun’s healing powers. She holds that memory dear, and it becomes an anchor of hope later in the book, when she realizes that her owner is seriously ill.

Klara develops a deep devotion toward the sun and, like the ancients, starts praying to it. The narrative moves forward as Klara experiences theophanies worthy of human awe. Her pure faith is so compelling that the reader cannot help but hope along with her that what she believes is true. In this way, Klara points us back to the numinous.

Her innocent and captivating faith has an impact on the human characters of the novel. For some reason, they start hoping for the best even when there is no reason to do so. In spite of overwhelming odds, they begin to see a light at the end of the tunnel. Some of them, in this case the men of the novel, willingly participate in her religious rites without understanding the rationale behind her actions. Yet, unlike human believers, who often like to proselytize, she keeps her faith secret from all. In fact, secrecy is part of her religious relationship with the sun. In this way, she invites humans to transcend their reason and step into a child-like faith.

This reminds me of a previous blog where I explore this idea of pure faith and robots. But I digress.

Conclusion

I hope this first part of the review sparks your interest in reading the novel. It beautifully explores how AI can help us find faith again. Certainly, we are still decades away from the kind of AI that Ishiguro portrays in this book. Yet, like most works of science fiction, it helps us extrapolate present trends so we can reflect on future possibilities.

In contrast to the dominant narrative of “robots trying to kill us,” the author opts for one that highlights the possibility that they can reflect the best in us. As they do so, they can change us into better human beings rather than letting us devolve into our worst vices. Consequently, Ishiguro gives us a vivid picture of how technology can work toward human flourishing.

In the next blog, I will explore the human world in which Klara lives. There are interesting warnings and rich reflection in the dystopian situation the book describes. While our exposure to it is limited (perhaps the one part I wish the author had expanded a bit more), we do get enough to ponder the impact of emerging technologies on our society. This is especially salient for a generation of digital natives who learn to send tweets before they give their first kiss.

Vulnerable like God: Perfect Machines and Imperfect Humans

This four-part series started with the suggestion that AI can be of real help to theologians, in their attempt to better understand what makes humans distinctive and in the image of God. We have since noted how different machine intelligence is from human intelligence, and how alien-like an intelligent robot could be ‘on the inside’, in spite of its humanlike outward behavior.

For theological anthropology, the main takeaway is that intelligence – understood as rationality and problem-solving – is not the defining feature of human nature. We’ve long been the most intelligent and capable creature in town, but that might soon change, with the emergence of AI. What makes us special and in the image of God is thus not some intellectual capacity (in theology, this is known as the substantive interpretation), nor something that we can do on God’s behalf (the functional interpretation), because AI could soon surpass us in both respects.

The interpretation of the imago Dei that seems to account best for the scenario of human-level AI is the relational one. According to it, the image of God is our special I-Thou relationship with God, the fact that we can be an authentic Thou, capable of receiving God’s love and responding to it. We exist only because God calls us into existence. Our relationship with God is therefore the deepest foundation of our ontology. Furthermore, we are deeply relational beings: our growth and fulfillment can only be realized in authentic personal relationships with other human beings and with God.

AI and Authentic Relationality

It is not surprising that the key to human distinctiveness is profoundly relational. Alan Turing tapped right into this intuition when he designed his eponymous test for AI. Turing’s test is, in fact, a measurement of AI’s ability to relate like us. Unsurprisingly, the most advanced AIs still struggle when it comes to simulating relationships, and none has yet passed the Turing test.

But even if a program someday convincingly relates to humans, will that be an authentic relationship? We have already seen that human-level AI will be anything but humanlike ‘on the inside.’ Intelligent robots might become capable of speaking and acting like us, but they will be completely different from us in terms of their internal motivation and meaning systems. What kind of relationship could there be between us and them, when we would have so little in common?

We long for other humans precisely because we are not self-sufficient. Hence, we seek others precisely because we want to discover them and our own selves through relationships. We fall in love because we are not completely rational. Human-level AI will be the opposite of that: self-sufficient, perfectly rational, and with a quasi-complete knowledge of itself.

The Quirkiness of Human Intelligence

Our limitations are instrumental for the kind of relationships we have with each other. An argument can thus be made that a significant degree of cognitive and physical vulnerability is required for authentic relationality to be possible. There can be no authentic relationship without the two parties intentionally making themselves vulnerable to each other, opening up to one another outside any transactional logic.

Photo by Duy Pham on Unsplash

A hyper-rational being would likely find it very difficult to engage fully in relationships and make itself totally vulnerable to a loved other. It surely does not sound very smart.

Nevertheless, we humans do this tirelessly and often at great cost, perhaps precisely because we are not as intelligent and goal-oriented as AI. Although that appears illogical, it is such experiences that give meaning and fulfillment to our lives.

From an evolutionary perspective, it is puzzling that our species evolved to be this way. Evolution promotes organisms that are better at adapting to the challenges of their environment, thus at solving practical survival and reproduction problems. It is therefore unsurprising that intelligence-as-problem-solving is a common feature of evolved organisms, and this is precisely the direction in which AI seems to develop.

If vulnerability is so important for the image of God as relationship, does this imply that God too is vulnerable?

What is strange in the vast space of possible intelligences is our quirky type of intelligence: heavily optimized for relationship, marked by a bizarre thirst for meaning, and plagued by a surprising degree of irrationality. In the previous post I called out the strangeness of strong AI, but it is we who seem to be the strange ones. However, it is precisely this kind of intellectual imperfection, or vulnerability, that enables us to dwell in the sort of Goldilocks zone of intelligence where personal relationships and the image of God are possible.

Vulnerability, God, Humans and Robots

Photo by Jordan Whitt on Unsplash

If vulnerability is so important for the image of God as relationship, does this imply that God too is vulnerable? Indeed, that seems to be the conclusion, and it is not surprising at all, especially when we think of Christ. Through God’s incarnation, suffering, and voluntary death, a deeply vulnerable side of the divine has been revealed to us. God is not an indifferent creator of the world, nor a dispassionate, almighty, all-intelligent ruler. God cares deeply for creation, to the extent of committing the supreme self-sacrifice to redeem it (Jn. 3:16).

This means that we are most like God not when we are at our smartest or strongest, but when we engage in this kind of hyper-empathetic, though not entirely logical, behavior.

Compared to AI, we might look stupid, irrational, and outdated, but it is paradoxically due to these limitations that we are able to cultivate our divine likeness through loving, authentic, personal relationships. If looking at AI teaches theologians anything, it is that our limitations are just as important as our capabilities. We are vulnerable, just as our God has been revealed to be vulnerable. Being like God does not necessarily mean being more intelligent, especially when intelligence is seen as rationality or problem-solving.

Christ – whether considered historically or symbolically – shows that what we value most about human nature are traits like empathy, meekness, and forgiveness, which are eminently relational qualities. Behind such qualities are ways of thinking rooted more in the irrational than in the rational parts of our minds. We should then wholeheartedly join the apostle Paul in “boast[ing] all the more gladly about [our] weaknesses […] for when [we] are weak, then [we] are strong” (2 Cor. 12:9-10).

Human-level, but not Humanlike: The Strangeness of Strong AI

The emergence of AI opens up exciting new avenues of thought, promising to add some clarity to our understanding of intelligence and of the relation between intelligence and consciousness. For Christian anthropology, observing which aspects of human cognition are easily replicated in machines can be of particular help in refining the theological definition of human distinctiveness and the image of God.

However, by far the most theologically exciting scenario is the possibility of human-level AI, or artificial general intelligence (AGI), the Holy Grail of AI research. AGI would be capable of convincingly replicating human behavior. It could, in principle, pass as human if it chose to. This is precisely how the Turing test is designed to work. But how humanlike would a human-level AI really be?

Computer programs have already become capable of impressive things, which, when done by humans, require some of our ‘highest’ forms of intelligence. However, the way AI approaches such tasks is very non-humanlike, as explained in the previous post. If the current paradigm continues its march towards human-level intelligence, what could we expect AGI to be like? What kind of creature might such an intelligent robot be? How humanlike would it be? The short answer is, not much, or even not at all.

The Problem of Consciousness

Philosophically, there is a huge difference between what John Searle calls ‘strong’ and ‘weak’ AI. While strong AI would be an emulation of intelligence, weak AI would be a mere simulation. The two would be virtually indistinguishable on the ‘outside,’ but very different ‘on the inside.’ Strong AI would be someone, a thinking entity endowed with consciousness, while weak AI would be a something, a clockwork machine completely empty on the inside.

It is still too early to know whether AGI will be strong or weak, because we currently lack a good theory of how consciousness arises from inert matter. In philosophy, this is known as “the hard problem of consciousness.” But if current AI applications are any indication, weak AGI is a much more likely scenario than strong AGI. So far, AI has made significant progress in problem-solving, but it has made zero progress in developing any form of consciousness or ‘inside-out-ness.’

Even if AGI does somehow become strong AI (how could we even tell?), there are good reasons to believe that it would be a very alien type of intelligence.

What makes human life enjoyable is arguably related to the chunk of our mind that is not completely rational.

Photo by Maximalfocus on Unsplash

The Mystery of Being Human

As John McCarthy – one of the founding fathers of AI – argues, an AGI would have complete access to its internal states and algorithms. Just think about how weird that is! Humans have a very limited knowledge of what happens ‘on their inside.’ We only see the tip of the iceberg, because only a tiny fraction of our internal processes enter our stream of consciousness. Most information remains unconscious, and that is crucial for how we perceive, feel, and act.

Most of the time we have no idea why we do the things we do, even though we might fabricate compelling post-factum rationalizations of our behavior. But would we really want to know such things and always act in a perfectly rational manner? Or, even better, would we be friends or lovers with such a hyper-rational person? I doubt it.

Part of what makes us what we are and what makes human life enjoyable is arguably related to the chunk of our mind that is not completely rational. Our whole lives are journeys of self-discovery, and with each experience and relationship we get a better understanding of who we are. That is largely what motivates us to reach beyond our own self and do stuff. Just think of how much of human art is driven precisely by a longing to touch deeper truths about oneself, which are not easily accessible otherwise.

Strong AI could be the opposite of that. Robots might understand their own algorithms much better than we do, without any need to discover anything further. They might be able to communicate such information directly as raw data, without needing the approximation/encryption of metaphors. As Ian McEwan’s fictitious robot character emphatically declares, most human literature would be completely redundant for such creatures.

The Uncanny Worldview of an Intelligent Robot

Intelligent robots would likely have a very different perception of the world. With access to Bluetooth and WiFi, they would be able to ‘smell’ other connected devices and develop a sort of ‘sixth sense’ of knowing when a particular person is approaching merely from their Bluetooth signature. As roboticist Rodney Brooks shows, robots will soon be able to measure one’s breathing and heart rate without any biometric sensor, simply by analyzing how a person’s physical presence slightly changes the behavior of WiFi signals.

The technology for this already exists, and it could enable the robot to have access to a totally different kind of information about the humans around, such as their emotional state or health. Similar technologies of detecting changes in the radio field could allow the robots to do something akin to echolocation and know if they are surrounded by wood, stone, or metal. Just imagine how alien a creature endowed with such senses would be!

AGI might also perceive time very differently from us, because it would think much faster. The ‘wetware’ of our biological brains constrains the speed at which electrical signals can travel. Electronic brains, however, could enable speeds closer to the ultimate physical limit, the speed of light. Minds running on such faster hardware would also think proportionally faster, making their experience of the passage of time proportionally slower.

If AGI thought ‘only’ ten thousand times faster than humans, a conservative estimate, it would inhabit a completely different world. It is difficult to imagine how such creatures might regard humans, but futurist James Lovelock chillingly estimates that “the experience of watching your garden grow gives you some idea of how future AI systems will feel when observing human life.”
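The scale of that gap is easy to check with back-of-the-envelope arithmetic. The sketch below is illustrative only; the 10,000x figure is simply the conservative estimate quoted above:

```python
# Illustrative arithmetic only: how a 10,000x speed-up in thinking
# stretches subjective time for the faster mind.
SPEEDUP = 10_000  # the "conservative estimate" discussed above

def subjective_seconds(real_seconds: float, speedup: float = SPEEDUP) -> float:
    """Return a duration as experienced by a mind thinking `speedup` times faster."""
    return real_seconds * speedup

# A two-second human reply, from the AGI's point of view:
felt = subjective_seconds(2)
print(f"{felt:,.0f} subjective seconds (~{felt / 3600:.1f} hours)")
# -> 20,000 subjective seconds (~5.6 hours)
```

On that accounting, a short conversational pause really would feel like watching a garden grow.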

The way AGI is depicted in sci-fi (e.g. Terminator, Ex Machina, or Westworld) might rightly give us existential shivers. But if the predictions above are anywhere near right, then AGI might turn out to be weirder than our wildest sci-fi dreams. AI might reach human-level, but it would most likely be radically non-humanlike.

Is this good or bad news for theological anthropology? How would the emergence of such an alien type of intelligence affect our understanding of humanity and its imago Dei status? The next post, the last in this four-part series, wrestles head-on with this question.

How Does AI Compare with Human Intelligence? A Critical Look

In the previous post I argued that AI can be of tremendous help in our theological attempt to better understand what makes humans distinctive and in the image of God. But before jumping to theological conclusions, it is worth spending some time trying to understand what kind of intelligence machines are currently developing, and how much similarity there is between human and artificial intelligence.

Image by Gordon Johnson from Pixabay

The short answer is, not much. The current game in AI seems to be the following: try to replicate human capabilities as well as possible, regardless of how you do it. As long as an AI program produces the desired output, it does not matter how humanlike its methods are. The end result is much more important than what goes on ‘on the inside,’ even more so in an industry driven by enormous financial stakes.

Good Old Fashioned AI

This approach was already at work in the first wave of AI, also known as symbolic AI or GOFAI (good old-fashioned AI). Starting in the 1950s, the AI pioneers struggled to replicate our ability to do math and play chess, considered the epitome of human intelligence, without any real concern for how such results were achieved. They simply assumed that this must be how the human mind operates at the most fundamental level: through the logical manipulation of a finite number of symbols.

GOFAI ultimately managed to reach human level in chess. In 1997, IBM’s Deep Blue defeated the world champion, Garry Kasparov, but it did so via brute force, by simply calculating millions of variations in advance. That is obviously not how humans play chess.
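To make the brute-force contrast concrete, here is a minimal sketch of the kind of exhaustive search such programs perform. The game tree is invented, and real chess engines add many refinements, but the principle, evaluating every variation and picking the line that holds up against the opponent’s best reply, is the same:

```python
# A toy sketch of brute-force game search (minimax): exhaustively evaluate
# every variation of a made-up game tree. Leaf numbers are final scores
# from the maximizing player's point of view.
def minimax(node, maximizing):
    if isinstance(node, int):       # a leaf: a final position's score
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# An invented two-ply tree: each of our three moves allows three replies.
tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
print(minimax(tree, True))  # 3: the best outcome we can guarantee
```

Deep Blue’s insight was less conceptual than computational: scale this search to hundreds of millions of positions per second.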

Although GOFAI worked well for ‘high’ cognitive tasks, it was completely incompetent in more ‘mundane’ tasks, such as vision or kinesthetic coordination. As roboticist Hans Moravec famously observed, it is paradoxically easier to replicate the higher functions of human cognition than to endow a machine with the perceptive and mobility skills of a one-year-old. What this means is that symbolic thinking is not how human intelligence really works.

The Advent of Machine Learning

Photo by Kevin Ku on Unsplash

What has replaced symbolic AI since roughly the turn of the millennium is the approach known as machine learning (ML). One subset of ML that has proved wildly successful is deep learning, which uses layers of artificial neural networks. Loosely inspired by the brain’s anatomy, this approach aims to be a better approximation of human cognition. Unlike previous AI versions, these programs are not instructed on how to think. Instead, they are fed huge sets of selected data in order to develop their own rules for how the data should be interpreted.

For example, instead of teaching an ML algorithm that a cat is a furry mammal with four paws, pointed ears, and so forth, the program is trained on hundreds of thousands of pictures of cats and non-cats, by being ‘rewarded’ or ‘punished’ every time it makes a guess about what’s in the picture. After extensive training, some neural pathways become strengthened, while others are weakened or discarded. The end result is that the algorithm does learn to recognize cats. The flip side, however, is that its human programmers no longer necessarily understand how the conclusions are reached. It is a sort of mathematical magic.
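As a rough illustration of this reward-and-punish training loop, here is a toy sketch: not a real deep network, just a single artificial neuron (a perceptron) with invented features, whose connections are strengthened or weakened after each guess:

```python
# A toy sketch of the "reward/punish" training idea described above.
# The features and examples are invented for illustration.

# Each example: ([furriness, ear_pointiness], label) with 1 = cat, 0 = not cat
data = [([0.9, 0.8], 1), ([0.8, 0.9], 1), ([0.1, 0.2], 0), ([0.2, 0.1], 0)]

w = [0.0, 0.0]  # connection strengths, adjusted during training
b = 0.0
lr = 0.1        # learning rate: the size of each corrective nudge

def predict(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

for _ in range(20):                 # repeated exposure to the examples
    for x, label in data:
        error = label - predict(x)  # a wrong guess yields a nonzero "punishment"
        w[0] += lr * error * x[0]   # strengthen or weaken each connection
        w[1] += lr * error * x[1]
        b += lr * error

print([predict(x) for x, _ in data])  # after training: [1, 1, 0, 0]
```

Deep learning scales this same nudging principle to millions of connections arranged in layers, which is part of why nobody can point to the line of code where the network “knows” what a cat is.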

ML algorithms of this kind are behind the impressive successes of contemporary AI. They can recognize objects and faces, spot cancer better than human pathologists, translate text instantly from one language to another, produce coherent prose, or simply converse with us as smart assistants. Does this mean that AI is finally starting to think like us? Not really.

When machines fail, they fail badly, and for different reasons than us.

Even when machines manage to achieve human or super-human level in certain cognitive tasks, they do it in a very different fashion. Humans don’t need millions of examples to learn something; we sometimes do fine with as little as one example. Humans can also usually provide explanations for their conclusions, whereas ML programs are often ‘black boxes’ too complex to interrogate.

More importantly, the notion of common sense is completely lacking in AI algorithms. Even when their average performance is better than that of human experts, the few mistakes they do make reveal a disturbing lack of understanding on their part. Images that are intentionally perturbed so slightly that the adjustment is imperceptible to humans can still cause algorithms to misclassify them completely. It has been shown, for example, that sticking small white stickers on a Stop sign causes the AI algorithms used in self-driving vehicles to misclassify it as a Speed Limit 45 sign. When machines fail, they fail badly, and for different reasons than us.
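The stop-sign result comes from research on "adversarial examples." The sketch below illustrates the underlying mechanism on a deliberately simple linear classifier: the weights and the "image" are random stand-ins, not a real model (real attacks target deep networks, but the arithmetic of many tiny nudges adding up is analogous). Changing every pixel by at most 1% flips the model's decision:

```python
import numpy as np

# Toy sketch of an adversarial perturbation. The "model" is a plain
# linear classifier with synthetic weights, and the "image" is random
# noise; real attacks like the stop-sign one target deep networks,
# but the mechanism (many tiny nudges that add up) is the same.
rng = np.random.default_rng(0)
d = 100 * 100                        # a hypothetical 100x100 grayscale image
w = rng.normal(size=d)               # stand-in for trained weights
x = rng.uniform(0.0, 1.0, size=d)    # stand-in input, pixels in [0, 1]
b = 50.0 - w @ x                     # bias chosen so the model is confident

score = w @ x + b                    # +50: confidently one class
eps = 0.01                           # change each pixel by at most 1%
x_adv = np.clip(x - eps * np.sign(w), 0.0, 1.0)
score_adv = w @ x_adv + b            # the per-pixel nudges accumulate

print(score, score_adv)              # the sign flips: the decision changes
```

No single pixel changes noticeably, yet because every nudge pushes the score in the same direction, their combined effect overwhelms the model's confidence.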

Machine Learning vs Human Intelligence

Perhaps the most important difference between artificial and human intelligence is the former’s complete lack of any form of consciousness. In the words of philosophers Thomas Nagel and David Chalmers, “it feels like something” to be a human or a bat, although it is very difficult to pinpoint exactly what that feeling is and how it arises. However, we can intuitively say that very likely it doesn’t feel like anything to be a computer program or a robot, or at least not yet. So far, AI has made significant progress in problem-solving, but it has made zero progress in developing any form of consciousness or ‘inside-out-ness.’

Current AI is therefore very different from human intelligence. Although we might notice a growing functional overlap between the two, they differ strikingly in terms of structure, methodology, and some might even say ontology. Artificial and human intelligence might be capable of similar things, but that does not make them similar phenomena. Machines have in many respects already reached human level, but in a very non-humanlike fashion.

For Christian anthropology, such observations are particularly important, because they can inform how we think of the developments in AI and how we understand our own distinctiveness as intelligent beings, created in the image of God. In the next post, we look into the future, imagining what kind of creature an intelligent robot might be, and how humanlike we might expect human-level AI to become.

Citizens Unite: Global Efforts to Stand Up to Digital Monopolies

Politicians lack the knowledge to regulate technology. This was comically demonstrated in 2018 when Senator Orrin Hatch asked Mark Zuckerberg how Facebook could remain free for its users. Zuckerberg's response, "Senator, we run ads," became a viral meme:

Taken from Tenor.com

Zuckerberg’s creepy smile aside, the meme drives home the point that politicians know little about emerging technologies. 

What can be done about this? Lawmakers cannot be experts on everything; they need good counsel. Imagine how challenging it would have been for governments to contain COVID without help from microbiologists and medical researchers. The way we get to good policy is by having expert regulators who act as referees, weighing the pros and cons of different strategies so that lawmakers can deliberate with at least some knowledge.

A Global Push to Fight Digital Monopolies

When we look at monopolies around the world, it's clear that digital monopolies are everywhere, alongside the finance companies and banks. We live in a capitalist world: technology walks hand in hand with the urge to profit. That is why it is so hard to go against these monopolies.

But not all hope is lost. Looking across the globe, we can find countries already regulating big tech companies. Australia has been working for more than a year on legislation that would force tech platforms like Google and Facebook to pay news publishers for content. The tension grew so high that Facebook took the extreme measure of blocking all news in Australia. The government believes that Facebook's news ban was too aggressive and will only push users further away from the platform.

Australian Prime Minister Scott Morrison shared his concerns on his Facebook page, saying that this behavior only shows how Big Tech companies think they are bigger than governments and that rules should not apply to them. He also acknowledged that big tech companies are changing the world, but added that this does not mean they run it.

Discussions on how to stop big companies from using content for free are also happening in other countries, such as France, Canada, and the United States. Governments around the world are considering new laws to keep these companies in check. The question is how far they can go against the biggest digital monopolies in the world.

Fortunately, there are also examples of governments working with tech companies to help consumers. Early this year, the French government approved the New Tech Repairability Index. The index shows how repairable an electronic device is, covering smartphones, laptops, TVs, and even lawnmowers. It will help consumers buy more durable goods and push companies to make repairs possible. It is not only a consumer-friendly measure but also an environmentally friendly one, as it helps reduce electronic waste.

Another example of government holding big technology companies accountable comes from Brazil. On February 16, a Brazilian congressman was arrested for making and sharing videos that glorified a dark moment in Brazilian history, the military dictatorship of the 1960s, in violation of the law. A few days later, Facebook, Twitter, and Instagram banned his accounts under a court order, since he was still updating them from inside prison.

Brazil does not yet know how this congressman's story will end, but we can at least hope that cooperation between big companies and governments will keep increasing. These laws and actions are long overdue. We have to fight for our rights and always remember that no one is above the law.

From Consumers to Citizens

Technological monopolies can make us feel like they rule the world. But the truth is that we are the consumers, and we need to have our voices heard and our rights respected.

I believe the most efficient way to deal with tech monopolies is to create committees that assist the government in crafting antitrust laws. These committees should include experts and ordinary citizens with no ties to big tech companies. Antitrust laws are statutes developed by governments to protect consumers from predatory business practices and ensure fair competition. They essentially bar companies from questionable activities like market allocation, bid rigging, price-fixing, and the creation of monopolies.

Protecting consumer privacy and deterring artificially high prices should be a priority. But can these committees really be impartial? Can we trust the government to make these laws?

The only way is for consumers to act as citizens. That is, we need to vote for representatives that are not tied to Big Tech lobbies. We need to make smarter choices with our purchases. We need to organize interest groups that put humanity back at the center of technology. 

How are you standing up to digital monopolies today? 

Surveillance Capitalism: Exposing the Power of Digital Monopolies

On January 28, I attended the online forum Medium in Conversation: How to Destroy Surveillance Capitalism. In this blog, I summarize the main points from the discussion along with some reflections on how we can respond.

Maybe at first glance, we can’t really see what surveillance capitalism has to do with AI. But the two topics walk side by side. Surveillance capitalism is sustained by digital monopolies that rely on massive amounts of personal data (hence the surveillance part). This deluge of data is fed into powerful AI algorithms which drive content curation. One depends on the other to thrive.

The Current State of Affairs

It’s a new era for Big Tech. Weeks after the de-platforming of Donald Trump—and with a new administration in the White House—the time is ripe to reexamine the power wielded by the giants of surveillance capitalism. How did corporations like Facebook, Google, and Amazon amass such power? How do we build a more open Web?

According to Cory Doctorow, if we're going to break big tech's dominance over our digital lives, we will have to fight monopolies. That may sound pretty mundane and old-fashioned, something out of the New Deal era. Yet breaking up monopolies is something we have forgotten how to do. The trust-busting era cannot begin until we find the political will, and that will come only when politicians prove they have the average citizen's back against the richest, most powerful men in the world.

For politicians to take notice, citizens must first speak up.  

What Is the Problem with Monopolies?

In case we need a refresher, a monopoly is a bad deal for consumers. It means the market has only one seller, with the power to set prices and dictate what a service costs. People line up to buy the product even if it costs too much, simply because they have no choice.

Facebook is a monopoly if you consider the prices it sets for its ad platform. The ad buyer has very little choice, allowing Zuckerberg's empire to dictate the terms. In addition, the platform behemoth retains its monopoly by impeding the growth of other apps.

Anticompetitive conduct in big tech has been rampant. Facebook bought competing apps (Instagram and WhatsApp, for example), leaving little room for competitors. Apple pursued a similar strategy on the hardware side by lobbying against "right to repair" bills so that people are forced to buy new phones. In effect, Apple dictates when your phone can be repaired and when it has to be thrown away.

These actions led to an unprecedented concentration of power where a small group of people can make decisions of global consequence.

People of the World, Unite

Is creating an open web a realistic undertaking, or are we too far gone? Although these forces seem impenetrable and timeless, they are actually relatively new, and they have weaknesses. If the task were merely to change our individual relationship with technology, it would be a hard lift.

Yet, according to Cory Doctorow, a wave of anger about monopolies in every domain is sweeping the world. This discontent seeks to return power to communities so they can decide their own future.

It has been done before. At the beginning of the 20th century, popular discontent drove politicians to rein in powerful monopolies such as Andrew Carnegie's control of the steel industry and Rockefeller's Standard Oil. Their efforts culminated in the passage of sweeping antitrust legislation.

Are we reaching a tipping point with big tech in the beginning of the 21st century? 

Conclusion

Surveillance capitalism affects the entire world, and it can be scary. We need to seek freedom from the dominion of digital monopolies. Once again, it is necessary to find the political will to fight for change. While legislation will not solve this problem completely, it is an important first step.

This is certainly not just a North American problem. Some countries are already pressing these big companies to answer for their actions, paving the way for a future where power is more evenly distributed.

In the next blog, I’ll provide an overview of anti-trust efforts around the world.

Green Tech: How Scientists are Using AI to Fight Deforestation

In the previous blog, I talked about upcoming changes to US AI policy with a new administration. Part of that change is a renewed focus on harnessing this technology for sustainability. Here I will showcase an example of green tech – how machine learning models are helping researchers detect illegal logging and burning in the vast Amazon rainforest. This is an exciting development and one more example of how AI can work for good.

The problem

Imagine trying to patrol a dense rainforest nearly the size of the lower 48 states! It is, as the proverbial saying goes, finding a needle in a haystack. The only way to catch illegal activity is to narrow the area under surveillance, giving law enforcement the best chance of using its limited resources wisely. Yet how can that be done?

How do illegal logging and burning happen in the Amazon? Are there any patterns that could help narrow the search? Fortunately, there are. A common trait is proximity to a road: in fact, 95% of these activities occur within 6 miles of a road or river. They require equipment that must be transported through dense jungle, and for logging, the lumber must be hauled out to be traded. The only way to do that is through waterways or dirt roads. Hence, tracking and locating illegal roads goes a long way toward homing in on areas of possible illegal activity.

While authorities had records of the government-built roads, no one knew the extent of the illegal road network in the Amazon. To attack the problem, enforcement agencies needed richer maps that could reveal this unofficial web. Only then could they start to focus resources around these roads. Voilà: green tech working to preserve rather than destroy the environment.
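As a toy illustration of how the 95%-within-6-miles pattern narrows the search, the Python sketch below filters hypothetical deforestation alerts down to those near a mapped road point. The coordinates and helper names are invented for the example; real systems work with full road geometries rather than a handful of sampled points:

```python
import math

# Hypothetical sketch: keep only the deforestation alerts that fall
# within 6 miles of a known road, the zone where ~95% of illegal
# activity occurs. All coordinates below are made up.

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in miles."""
    r = 3958.8  # Earth's radius in miles
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlat = math.radians(lat2 - lat1)
    dlon = math.radians(lon2 - lon1)
    a = math.sin(dlat / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlon / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def near_road(alert, roads, radius_miles=6.0):
    """True if the alert is within radius_miles of any mapped road point."""
    return any(haversine_miles(alert[0], alert[1], r[0], r[1]) <= radius_miles
               for r in roads)

roads = [(-3.10, -60.02), (-3.15, -60.10)]    # sampled points along mapped roads
alerts = [(-3.11, -60.03), (-5.00, -63.00)]   # detected forest clearings
priority = [a for a in alerts if near_road(a, roads)]
print(priority)  # only the clearing close to a road survives the filter
```

This is the essence of the strategy: a cheap geometric filter turns an impossibly large patrol area into a short priority list.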

An Ingenious Solution

To solve this problem, scientists from Imazon (the Amazon Institute of People and the Environment) went to work searching for ways to detect these roads. By carefully studying satellite imagery, they could manually trace the additional roads. In 2016, they completed this heroic but rather tedious initial work. The newly mapped road network was 13 times the size of the official one! Now they had something to work with.

Once the initial tracing was complete, it became clear that updating it manually would be an impossible task. These roads can spring up overnight as loggers and ranchers work to evade monitoring. That is when the team turned to computer vision to see if it could detect new roads automatically. The initial manual work became the training dataset that taught the algorithm how to detect roads in satellite images. In supervised learning, one must first have a collection of data that shows the actual targets (labels) to the algorithm (e.g., an algorithm that recognizes cats must first be fed millions of labeled cat images).
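The sketch below shows this supervised setup in miniature: synthetic "satellite pixels" with three spectral bands, labels standing in for the hand-traced roads, and a nearest-centroid classifier, one of the simplest supervised learners. Imazon's actual model is far more sophisticated; every number and feature here is invented for illustration:

```python
import numpy as np

# Simplified sketch of supervised learning for road detection: the
# manual tracing supplies the labels, and each pixel's spectral
# bands supply the features. All values here are synthetic; roads
# are simulated as brighter and less "green" than forest.
rng = np.random.default_rng(0)
n = 1000
forest = rng.normal([0.2, 0.6, 0.3], 0.05, size=(n, 3))  # fake forest pixels
road = rng.normal([0.5, 0.3, 0.4], 0.05, size=(n, 3))    # fake road pixels
X = np.vstack([forest, road])
y = np.array([0] * n + [1] * n)    # labels, as if from the manual tracing

# Nearest-centroid classifier: learn one "average pixel" per class.
centroids = np.array([X[y == c].mean(axis=0) for c in (0, 1)])

def predict(pixels):
    """Assign each pixel to the class whose centroid is closest."""
    d = np.linalg.norm(pixels[:, None, :] - centroids[None, :, :], axis=2)
    return d.argmin(axis=1)

acc = (predict(X) == y).mean()
print(f"accuracy on training pixels: {acc:.2f}")
```

The key point is the division of labor: humans supply a carefully labeled dataset once, and the algorithm then generalizes the pattern to fresh imagery at a scale no manual effort could match.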

The result was impressive. At first, the model achieved 70% accuracy, and with some additional processing on top, that increased to 90%. The research team presented their results at a recent meeting of the American Geophysical Union. They also plan to share the model with neighboring countries so they can use it to monitor parts of the Amazon outside Brazil.

Reflection

Algorithms can be effective allies in the fight to preserve the environment. As the example of Imazon shows, it takes ingenuity, hard work, and planning to make that happen. While many discussions around AI quickly devolve into clichés of "machines replacing humans," this example shows how AI can augment human problem-solving abilities. It took people to connect the dots between the potential of AI and a particular problem worth solving. Indeed, the real future of AI may be in green tech.

In this blog and in our FB community, we seek to challenge, question, and re-imagine how technologies like AI can empower human flourishing. Yet this concern is not limited to humans but extends to the whole ecosystem we inhabit. If algorithms are to fulfill their promise, they must also serve sustainability.

How is your work making life more sustainable on this planet?

5 Changes the Biden-Harris Administration will Bring to AI Policy

As a new administration takes the reins of the federal government, there is a lot of speculation as to how it will steer policy on technology and innovation. The issue is all the more relevant as social media giants grapple with free speech on their platforms, Google struggles with AI ethics, and concerns over video surveillance grow. On the global stage, China moves forward with its ambitions of AI dominance while Europe continues to grapple with data governance and privacy.

In this scenario, what will a Biden-Harris administration mean for AI in the US and on the global stage? In a previous blog, I described the decentralized US AI strategy, driven mainly by large corporations in Silicon Valley. Will a Biden administration continue this trend or change direction? While it is too early to say for sure, we should expect the 5 shifts outlined below:

(1) Increased investment in non-military AI applications: In contrast to the $2 billion promised by the Trump White House, Biden plans to ramp up public investment in R&D for AI and other emerging technologies. Official campaign statements promise a whopping $300 billion of investment. This is a significant change, since public research funds tend to aim at socially conscious applications rather than the profit-seeking ventures preferred by private investment. These investments should steer innovation toward social goals such as fighting climate change, revitalizing the economy, and expanding opportunity. On the education front, $5 billion is earmarked for graduate programs in STEM teaching. These are important steps as nations across the globe seek the upper hand in this crucial technology.

(2) Stricter bans on facial recognition: While this is mostly speculation at this point, industry observers cite Kamala Harris's recent statements and actions as an indication of stricter rules to come. In her plan to reform the justice system, she cites concerns with law enforcement's use of facial recognition and surveillance. In 2018, she sent letters to federal agencies urging them to take a closer look at the use of facial recognition in their own practices as well as in the industries they oversee. This keen interest could eventually translate into strong legislation to regulate, curtail, or even ban the technology. US policy will probably fall somewhere between Europe's proposed five-year moratorium and China's pervasive use of facial recognition to keep the population in check.

Photo by ThisisEngineering RAEng on Unsplash

(3) Renewed anti-trust push on Big Tech: The recent move by the Trump administration to challenge the big tech oligopoly should intensify under the new administration. Considering that the "FAMG" group (Facebook, Amazon, Microsoft, and Google) is at the avant-garde of AI innovation, any disruption to their business structures could affect advances in this area. Yet a more competitive tech industry could also mean more innovation. It is hard to determine how this will ultimately impact AI development in the US, but it is a trend to watch in the next few years.

(4) Increased regulation: This is likely but not certain at this point. Every time a Democratic administration takes power, Wall Street assumes that regulation will increase. Compared to the previous administration's appetite for dismantling regulation, the Biden presidency will certainly be a change. Yet it remains to be seen how it will proceed in the area of technology. Will it listen to experts and put science ahead of politics? AI will definitely be a test of that. The administration will certainly see government as a strong partner to private industry. It will also likely walk back Trump's business tax cuts, which could hamper innovation for some players.

(5) Greater involvement on the global stage: The Biden administration is likely to work more closely with allies, especially in Europe. The Obama administration's AI principles became a starting point for the vigorous regulatory efforts that arose in Europe over the last 5 years. It would be great to see increased collaboration that helps the US establish strong privacy safeguards like those outlined in the GDPR. With regard to China, Biden will probably be more assertive than Obama but less belligerent than Trump. This could translate into restricting access to key technologies and holding China's feet to the fire on surveillance abuses.

The challenges in this area are immense, requiring careful analysis and deliberation. Brash decisions based on ideological shortcuts can both hamper innovation and fail to safeguard privacy. It is also important to build a nimble regulatory apparatus that can respond to the evolving nature of this technology. While not as urgent as COVID and the economy, the federal government cannot afford to delay reforming AI regulation. Ethical concerns and privacy protection should be at the forefront, seconded by incentives for social innovation.

Union Tech: How AI is Empowering Workers


Is technology empowering or hindering human flourishing?

This week, I found a promising illustration of empowerment. While driving back from South Carolina, I listened to an episode of the Technopolis podcast, which explores how technology is altering urban landscapes. Just like in a previous post, the podcast did not disappoint. In this episode, the hosts talk to Palak Shah from the National Domestic Workers Alliance's digital lab. The advocacy group seeks innovative ways to empower the 2.5 million nannies, house cleaners, and care workers in the United States. Because the workforce is highly distributed (most domestic workers serve one or a few households, making it difficult to organize the way auto workers could), the lab quickly saw that technology was the best way to reach and engage these workers.

The lab developed two main products: the Alia platform and the La Alianza chatbot. The platform aggregates small contributions from clients to fund benefits for workers. One of the biggest challenges domestic workers face is the lack of a safety net: most get paid only when they work and do not have health insurance. By pooling workers and collecting a small additional contribution from clients with little overhead, the platform can give workers some of these benefits. The chatbot offers news and resources to over 200K domestic worker subscribers.

When the pandemic hit, the lab team, with some help from Google, was able to pivot quickly to address newly emerging problems. The Alia platform became a cash-transfer tool to help workers who were no longer earning an income; note that most of them did not receive unemployment benefits or government stimulus checks. Furthermore, the chatbot surveyed domestic workers to better understand the pandemic's impact on their livelihoods so the alliance could respond adequately to their needs.

The NDWA lab story illustrates well the power of harnessing technology for human flourishing.

As a technology worker myself, I wonder how my own work is expanding or hindering human flourishing. Some of us may not be doing work directly aligned with a noble cause. Yet there are many small steps we can take to redirect technology toward a more human future.

Last week, in a history-making move, a group of Google employees formed the first union at a major technology company. Even before that, tech employees had played crucial roles as whistleblowers, exposing abuses and excesses at their companies. Beyond that, numerous tech workers have contributed their valuable skills to non-profit efforts in what is often known as the "tech for good" movement. These efforts range from hackathons to long-term projects organized by foundations embedded within large multinational companies.

These are just a few examples of how technology workers are taking steps to keep large corporations accountable and contribute to their communities. There are many other ways in which one can work towards human flourishing.

How is your work contributing to human flourishing today?