
Human-level, but not Humanlike: The Strangeness of Strong AI

The emergence of AI opens up exciting new avenues of thought, promising to add some clarity to our understanding of intelligence and of the relation between intelligence and consciousness. For Christian anthropology, observing which aspects of human cognition are easily replicated in machines can be of particular help in refining the theological definition of human distinctiveness and the image of God.

However, by far the most theologically exciting scenario is the possibility of human-level AI, or artificial general intelligence (AGI), the Holy Grail of AI research. AGI would be capable of convincingly replicating human behavior. It could, in principle, pass as human, if it chose to. This is precisely how the Turing Test is designed to work. But how humanlike would a human-level AI really be?

Computer programs have already become capable of impressive things, which, when done by humans, require some of our ‘highest’ forms of intelligence. However, the way AI approaches such tasks is very non-humanlike, as explained in the previous post. If the current paradigm continues its march towards human-level intelligence, what could we expect AGI to be like? What kind of creature might such an intelligent robot be? How humanlike would it be? The short answer is, not much, or even not at all.

The Problem of Consciousness

Philosophically, there is a huge difference between what John Searle calls ‘strong’ and ‘weak’ AI. While strong AI would be an emulation of intelligence, weak AI would be a mere simulation. The two would be virtually indistinguishable on the ‘outside,’ but very different ‘on the inside.’ Strong AI would be someone, a thinking entity endowed with consciousness, while weak AI would be a something, a clockwork machine completely empty on the inside.

It is still too early to know whether AGI will be strong or weak, because we currently lack a good theory of how consciousness arises from inert matter. In philosophy, this is known as “the hard problem of consciousness.” But if current AI applications are any indication, weak AGI is a much more likely scenario than strong AGI. So far, AI has made significant progress in problem-solving, but it has made zero progress in developing any form of consciousness or ‘inside-out-ness.’

Even if AGI does somehow become strong AI (how could we even tell?), there are good reasons to believe that it would be a very alien type of intelligence.

What makes human life enjoyable is arguably related to the chunk of our mind that is not completely rational.

Photo by Maximalfocus on Unsplash

The Mystery of Being Human

As John McCarthy – one of the founding fathers of AI – argues, an AGI would have complete access to its internal states and algorithms. Just think about how weird that is! Humans have a very limited knowledge of what happens ‘on their inside.’ We only see the tip of the iceberg, because only a tiny fraction of our internal processes enter our stream of consciousness. Most information remains unconscious, and that is crucial for how we perceive, feel, and act.

Most of the time we have no idea why we do the things we do, even though we might fabricate compelling, post-factum rationalizations of our behavior. But would we really want to know such things and always act in a perfectly rational manner? Or, even better, would we want to be friends or lovers with such a hyper-rational person? I doubt it.

Part of what makes us what we are and what makes human life enjoyable is arguably related to the chunk of our mind that is not completely rational. Our whole lives are journeys of self-discovery, and with each experience and relationship we get a better understanding of who we are. That is largely what motivates us to reach beyond our own self and do stuff. Just think of how much of human art is driven precisely by a longing to touch deeper truths about oneself, which are not easily accessible otherwise.

Strong AI could be the opposite of that. Robots might understand their own algorithms much better than we do, without any need to discover anything further. They might be able to communicate such information directly as raw data, without needing the approximation/encryption of metaphors. As Ian McEwan’s fictitious robot character emphatically declares, most human literature would be completely redundant for such creatures.

The Uncanny Worldview of an Intelligent Robot

Intelligent robots would likely have a very different perception of the world. With access to Bluetooth and WiFi, they would be able to ‘smell’ other connected devices and develop a sort of ‘sixth sense’ of knowing when a particular person is approaching merely from their Bluetooth signature. As roboticist Rodney Brooks shows, robots will soon be able to measure one’s breathing and heart rate without any biometric sensor, simply by analyzing how a person’s physical presence slightly changes the behavior of WiFi signals.

The technology for this already exists, and it could give a robot access to a totally different kind of information about the humans around it, such as their emotional state or health. Similar technologies for detecting changes in the radio field could allow robots to do something akin to echolocation and know whether they are surrounded by wood, stone, or metal. Just imagine how alien a creature endowed with such senses would be!

AGI might also perceive time very differently from us, because it would think much faster. The ‘wetware’ of our biological brains constrains the speed at which electrical signals can travel. Electronic brains, however, could enable speeds closer to the ultimate physical limit, the speed of light. Minds running on such faster hardware would also think proportionally faster, making their experience of the passage of time proportionally slower.

If AGI thought ‘only’ ten thousand times faster than humans, a conservative estimate, it would inhabit a completely different world. It is difficult to imagine how such creatures might regard humans, but futurist James Lovelock chillingly estimates that “the experience of watching your garden grow gives you some idea of how future AI systems will feel when observing human life.”

The way AGI is depicted in sci-fi (e.g. Terminator, Ex Machina, or Westworld) might rightly give us existential shivers. But if the predictions above are anywhere near right, then AGI might turn out to be weirder than our wildest sci-fi dreams. AI might reach human-level, but it would most likely be radically non-humanlike.

Is this good or bad news for theological anthropology? How would the emergence of such an alien type of intelligence affect our understanding of humanity and its imago Dei status? The next post, the last one in this four-part series, wrestles head-on with this question.

How Does AI Compare with Human Intelligence? A Critical Look

In the previous post I argued that AI can be of tremendous help in our theological attempt to better understand what makes humans distinctive and in the image of God. But before jumping to theological conclusions, it is worth spending some time trying to understand what kind of intelligence machines are currently developing, and how much similarity there is between human and artificial intelligence.

Image by Gordon Johnson from Pixabay

The short answer is, not much. The current game in AI seems to be the following: try to replicate human capabilities as well as possible, regardless of how you do it. As long as an AI program produces the desired output, it does not matter how humanlike its methods are. The end result is much more important than what goes on ‘on the inside,’ even more so in an industry driven by enormous financial stakes.

Good Old Fashioned AI

This approach was already at work in the first wave of AI, also known as symbolic AI or GOFAI (good old-fashioned AI). Starting in the 1950s, the AI pioneers struggled to replicate our ability to do math and play chess, considered the epitome of human intelligence, without any real concern for how such results were achieved. They simply assumed that this must be how the human mind operates at the most fundamental level: through the logical manipulation of a finite number of symbols.

GOFAI ultimately managed to reach human level in chess. In 1997, IBM’s Deep Blue program defeated the world champion, Garry Kasparov, but it did so via brute force, by simply calculating millions of variations in advance. That is obviously not how humans play chess.
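To make the contrast concrete, here is a minimal sketch of brute-force game-tree search (minimax). This is not Deep Blue’s actual code, and the `legal_moves`, `apply`, and `evaluate` methods are hypothetical stand-ins for a real engine’s move generator and evaluation function, but it captures the core idea: enumerate future positions and pick the line with the best score.

```python
def minimax(position, depth, maximizing):
    """Score `position` by exhaustively looking `depth` moves ahead."""
    moves = position.legal_moves()
    if depth == 0 or not moves:
        return position.evaluate()  # static score of the leaf position
    child_scores = (
        minimax(position.apply(move), depth - 1, not maximizing)
        for move in moves
    )
    # Assume each side plays its best move at every turn.
    return max(child_scores) if maximizing else min(child_scores)
```

With roughly thirty legal moves per chess position, searching even eight plies ahead means examining on the order of hundreds of billions of positions, which is why this approach rewards raw computing power rather than humanlike intuition.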

Although GOFAI worked well for ‘high’ cognitive tasks, it was completely incompetent in more ‘mundane’ tasks, such as vision or kinesthetic coordination. As roboticist Hans Moravec famously observed, it is paradoxically easier to replicate the higher functions of human cognition than to endow a machine with the perceptive and mobility skills of a one-year-old. What this means is that symbolic thinking is not how human intelligence really works.

The Advent of Machine Learning

Photo by Kevin Ku on Unsplash

What has replaced symbolic AI since roughly the turn of the millennium is the approach known as machine learning (ML). One subset of ML that has proved wildly successful is deep learning, which uses layers of artificial neural networks. Loosely inspired by the brain’s anatomy, this approach aims to be a better approximation of human cognition. Unlike previous generations of AI, these programs are not instructed on how to think. Instead, they are fed huge sets of selected data, from which they develop their own rules for how the data should be interpreted.

For example, instead of teaching an ML algorithm that a cat is a furry mammal with four paws, pointed ears, and so forth, the program is trained on hundreds of thousands of pictures of cats and non-cats, by being ‘rewarded’ or ‘punished’ every time it makes a guess about what’s in the picture. After extensive training, some neural pathways become strengthened, while others are weakened or discarded. The end result is that the algorithm does learn to recognize cats. The flip side, however, is that its human programmers no longer necessarily understand how the conclusions are reached. It is a sort of mathematical magic.
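For readers who want to see the principle in code, here is a toy sketch of that reward-and-punish training loop: a single-layer classifier fit on made-up data with NumPy. Real systems use deep, multi-layer networks and vast labeled datasets, but the mechanism of nudging weights up or down after each guess is the same.

```python
import numpy as np

# Toy sketch of 'reward and punish' training on made-up image features.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 64))            # 1000 fake images, 64 features each
y = (X[:, 0] + X[:, 1] > 0).astype(float)  # pretend label: 1 = cat, 0 = not

w = np.zeros(64)                      # connection weights, initially blank
for _ in range(500):                  # repeated exposure to the data
    p = 1 / (1 + np.exp(-X @ w))      # current guesses, between 0 and 1
    grad = X.T @ (p - y) / len(y)     # how wrong each weight currently is
    w -= 0.5 * grad                   # strengthen/weaken weights accordingly

preds = (1 / (1 + np.exp(-X @ w))) > 0.5
print(f"training accuracy: {(preds == y).mean():.2%}")
```

Note that nothing in the final weight vector says “furry” or “four paws”; the learned rule is just numbers, which is part of why such models are hard to interrogate.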

ML algorithms of this kind are behind the impressive successes of contemporary AI. They can recognize objects and faces, spot cancer better than human pathologists, translate text instantly from one language to another, produce coherent prose, or simply converse with us as smart assistants. Does this mean that AI is finally starting to think like us? Not really.

When machines fail, they fail badly, and for different reasons than us.

Even when machines manage to achieve human or super-human level in certain cognitive tasks, they do it in a very different fashion. Humans don’t need millions of examples to learn something; they sometimes do fine with as little as one example. Humans can also usually provide explanations for their conclusions, whereas ML programs are often ‘black boxes’ that are too complex to interrogate.

More importantly, the notion of common sense is completely lacking in AI algorithms. Even when their average performance is better than that of human experts, the few mistakes they do make reveal a very disturbing lack of understanding on their part. Images that are intentionally perturbed so slightly that the adjustment is imperceptible to humans can still cause algorithms to misclassify them completely. It has been shown, for example, that sticking a few small stickers, easily discounted by the human eye, on a Stop sign on the road causes the AI algorithms used in self-driving vehicles to misclassify it as a Speed Limit 45 sign. When machines fail, they fail badly, and for different reasons than us.
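As an illustration of how fragile these models can be, here is a hedged sketch of an adversarial perturbation against a toy linear classifier, in the spirit of the ‘fast gradient sign method.’ The stop-sign attack worked differently (physical stickers against a deep network), but the principle is similar: spread a small, targeted nudge across many input features so the total effect crosses the decision boundary.

```python
import numpy as np

# Toy linear classifier and one made-up 'image' as a feature vector.
rng = np.random.default_rng(1)
w = rng.normal(size=64)   # classifier weights
x = rng.normal(size=64)   # input features

logit = x @ w
# Spread just enough change across all 64 features to cross the decision
# boundary; each individual feature moves only a tiny amount.
epsilon = (abs(logit) + 0.1) / np.abs(w).sum()
x_adv = x - np.sign(logit) * epsilon * np.sign(w)

print("original class:", int(logit > 0))
print("adversarial class:", int(x_adv @ w > 0))   # flipped
print(f"per-feature change: {epsilon:.3f}")
```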

Machine Learning vs Human Intelligence

Perhaps the most important difference between artificial and human intelligence is the former’s complete lack of any form of consciousness. In the words of philosophers Thomas Nagel and David Chalmers, “it feels like something” to be a human or a bat, although it is very difficult to pinpoint exactly what that feeling is and how it arises. However, we can intuitively say that very likely it doesn’t feel like anything to be a computer program or a robot, or at least not yet. So far, AI has made significant progress in problem-solving, but it has made zero progress in developing any form of consciousness or ‘inside-out-ness.’

Current AI is therefore very different from human intelligence. Although we might notice a growing functional overlap between the two, they differ strikingly in terms of structure, methodology, and some might even say ontology. Artificial and human intelligence might be capable of similar things, but that does not make them similar phenomena. Machines have in many respects already reached human level, but in a very non-humanlike fashion.

For Christian anthropology, such observations are particularly important, because they can inform how we think of the developments in AI and how we understand our own distinctiveness as intelligent beings, created in the image of God. In the next post, we look into the future, imagining what kind of creature an intelligent robot might be, and how humanlike we might expect human-level AI to become.

Artificial Intelligence: The Disguised Friend of Christian Anthropology

AI is making one significant leap after the other. Computer programs can nowadays convincingly converse with us, generate plausible prose, diagnose disease better than human experts, and totally trash us in strategy games like Go and chess, once considered the epitome of human intelligence. Could they one day reach human-level intelligence? It would be extremely unwise to discount such a possibility without very good reasons. Time, after all, is on AI’s side, and the kind of things that machines are capable of today used to be seen as quasi-impossible just a generation ago.

How could we possibly speak of human distinctiveness when robots become indistinguishable from us?

The scenario of human-level AI, also known as artificial general intelligence (AGI), would be a game-changer for every aspect of human life and society, but it would raise particularly difficult questions for theological anthropology. Since the dawn of the Judeo-Christian tradition, humans have perceived themselves as a creature unlike any other. The very first chapter of the Bible tells us that we are special, because only we of all creatures are created in the image of God (imago Dei). However, the Copernican revolution showed us that we are not the center of the universe (not literally, at least), and the Darwinian revolution revealed that we are not ontologically different from non-human animals. AGI is set to deliver the final blow, by conquering the last bastion of our distinctiveness: our intelligence.

By definition, AGI would be capable of doing anything that a standard human can do, at a similar or superior level. How could we possibly speak of human distinctiveness when robots become indistinguishable from us? Christian anthropology would surely be doomed, right? Well, not really, actually quite the contrary. Instead of rendering us irrelevant and ordinary, AI could in fact represent an unexpected opportunity to better understand ourselves and what makes us in God’s image.

Science’s Contribution to the Imago Dei

To explain why, it is useful to step back a little and acknowledge how much the imago Dei theology has benefitted historically from an honest engagement with the science of its time. Based solely on the biblical text, it is impossible to decide what the image of God is supposed to mean exactly. The creation story in Genesis 1 tells us that only humans are created in the image and likeness of God, but little else about what the image means. The New Testament does not add much, except for affirming that Jesus Christ is the perfect image. Ever since Patristic times, Christian anthropology has constantly wrestled with how to define the imago Dei, without much success or consensus.

The obvious way to tackle the question of our distinctiveness is to examine what differentiates us from the animals, the only others with which we can meaningfully compare ourselves. For most of Christian history, this difference has been located in our intellectual capacities, an approach heavily influenced by Aristotelian philosophy, which defined the human as the rational animal. But then came Darwin, who showed us that we are not as different from the animals as we thought we were.

Theologian Aubrey Moore famously said that Darwin “under the guise of a foe, did the work of a friend” for Christianity.

Furthermore, the following century and a half of ethology and evolutionary science revealed that our cognitive capacities are not bestowed upon us from above. Instead, they are rooted deep within our evolutionary history, and most of them are shared with at least some of the animals. If there is no such thing as a uniquely human capacity, then surely we were wrong all along to regard ourselves as distinctive, right?

Not quite. Theologian Aubrey Moore famously said that Darwin “under the guise of a foe, did the work of a friend” for Christianity. Confronted with the findings of evolutionary science, theologians were forced to abandon the outdated Aristotelian model of human distinctiveness and look for more creative ways to define the image of God. Instead of equating the image with a capacity that humans have, post-Darwinian theology regards the imago Dei in terms of something we are called to do or to be.

Defining the Imago Dei

Some theologians interpret the image functionally, as our election to represent God in the universe and exercise stewardship over creation. Others go for a relational interpretation, defining the image through the prism of the covenantal ‘I-Thou’ relationship that we are called to have with God, which is the foundation of human existence. To be in the image of God is to be in a personal, authentic relationship with God and with other human beings. Finally, there are others who interpret the imago Dei eschatologically, as a special destiny for human beings, a sort of gravitational pull that directs us toward existential fulfilment in fellowship with God, in the eschaton. Which of these interpretations is the best? Hard to say. Without going into detail, let’s just say that there are good theological arguments for each of them.

If purely theological debate does not produce clear answers, we might then try to compare ourselves with the animals. This, though, does not lead us very far either. Although ‘technically’ we are not very different from the animals, and we share with them similar bodily and cognitive structures, in practical terms the difference is huge. Our mental lives, our societies, and our achievements are so radically different from theirs that it is actually impossible to pinpoint just one dimension that represents the decisive difference. Animals are simply no match for us. This is good news for human distinctiveness, but it also means that we might be stuck in a never-ending theological debate on how to interpret the image of God, with so many options on our hands.

How Can AI Help Define Who We Are?

This is where the emergence of human-level AI can be a game-changer. For the first time, we would be faced with the possibility of an equal or superior other, one that could potentially (out)match us in everything, from our intellectual capacities, to what we can do in the world, our relational abilities, or the complexity of our mental lives. Instead of agonizing about AI replacing us or rendering us irrelevant, we could relish the opportunity to better understand our distinctiveness through the insights brought about by the emergence of this new other.

The hypothesis of AGI might present theologians with an extraordinary opportunity to narrow down their definitions of human distinctiveness and the image of God. Looking at what would differentiate us from human-level AI, if indeed anything at all, may provide just the right amount of conceptual constraint needed for a better definition of the imago Dei. In this respect, our encounter with AI might prove to be our best shot at comparing ourselves with a different type of intelligence, apart from maybe the possibility of ever finding extra-terrestrial intelligence in the universe.


Dr. Marius Dorobantu is a research associate in science & theology at VU Univ. Amsterdam (NL). His PhD thesis (2020, Univ. of Strasbourg, FR) analysed the potential challenges of human-level AI for theological anthropology. The project he’s currently working on, funded by the Templeton WCF within the “Diverse Intelligences” initiative, is entitled ‘Understanding spiritual intelligence: psychological, theological, and computational approaches.’

How to Integrate the Sacred with the Technical: An AI Worldview

At first glance, AI and theology may sound like strange bedfellows. After all, what does technology have to do with spirituality? In our compartmentalization-prone Western view, these disciplines are dealt with separately. Hence the first step on this journey is to reject this separation, aiming instead to hold these different perspectives in view simultaneously. Doing so fosters a new avenue for knowledge creation. Let’s begin by examining an AI worldview.

What is AI?

AI is not simply a technology defined by algorithms that create outcomes out of binary code. Instead, AI brings with it a unique perspective on reality. For AI, in its present form, to exist, there must first be algorithms, data, and adequate hardware. The first came on the scene in the 1950s, while the other two became a reality mostly in the last two decades. This may partially explain why we have been hearing about AI for a long time while only now is it actually impacting our lives on a large scale.

The algorithm in its basic form consists of a set of instructions that transform input into output. This can be as simple as taking the inputs (2,3), passing them through an instruction (add them), and getting an output (5). If you have ever made that calculation in your head, congratulations: you have used an algorithm. It is logical, linear, and repeatable. This is what gives it “machine” quality. It is an automated process for creating outputs.
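In code, that toy algorithm is a one-liner:

```python
def add(a, b):
    """An algorithm in miniature: instructions that turn inputs into an output."""
    return a + b

print(add(2, 3))  # -> 5
```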

Data + Algorithms + Hardware = AI

Data is the very fuel of AI in its dominant form today. Without it, nothing would be accomplished. This is what differentiates traditional programming from AI (machine learning). The first depends on a human to imagine, direct, and define the outcomes of an input. Machine learning is an automated process that takes data and transforms it into the desired outcome. It is “learning” because, although the algorithm is repeatable, the variability in the data makes the outcome unique and at times hard to predict. It involves risk, but it also yields new knowledge. The best that human operators can do is to monitor the inputs and outputs while the machine “learns” from new data.
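A toy sketch can make the contrast concrete. In classical programming, a human writes the rule; in machine learning, the rule (here just a slope and an intercept) is estimated from example data:

```python
# Classical programming: a human states the rule explicitly.
def c_to_f(celsius):
    return celsius * 9 / 5 + 32

# Machine learning (in miniature): the rule is estimated from data.
# A toy least-squares fit; real ML uses far richer models and more data.
celsius = [0, 10, 20, 30, 40]
fahrenheit = [32, 50, 68, 86, 104]

n = len(celsius)
mean_c = sum(celsius) / n
mean_f = sum(fahrenheit) / n
slope = sum((c - mean_c) * (f - mean_f) for c, f in zip(celsius, fahrenheit))
slope /= sum((c - mean_c) ** 2 for c in celsius)
intercept = mean_f - slope * mean_c

print(slope, intercept)  # ~1.8 and ~32.0: the rule was 'learned', not written
```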

Data is information digitized so that it can be processed by algorithms. Human brains operate in an analog fashion, taking information from the world and processing it through neural pulses. Digital computers need information to be translated into binary code before they can “understand” it. The more of our reality that gets digitized, the more the machines can learn.
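A two-line example shows what digitization means at the lowest level: the same information as humans read it and as a digital computer stores it.

```python
text = "Hi"  # information as humans see it
print(" ".join(format(ord(ch), "08b") for ch in text))  # 01001000 01101001
```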

All of this takes energy to take shape. If data is like the soul and algorithms like the mind, then hardware is like the body. Only in the last few decades did advances in hardware make it possible to apply AI algorithms to the commensurate amount of data needed for them to work properly. The growth in computing power is one of the most underrated wonders of our time. This revolution is the engine that allowed algorithms to process and make sense of the staggering amount of data we now produce. The combination of the three made possible the emergence of an AI ecosystem of knowledge creation, and not only that, but also the beginning of an AI worldview.

Photo by Franki Chamaki on Unsplash

Seeing the World Through Robotic Eyes

How can AI be a worldview? How does it differ from existing human-created perspectives? It is a worldview because its peculiar way of processing information in effect crafts a new vision of the world. This complex machine-created perspective has some unique traits worth mentioning. It is backward-looking, but only to recent history. While we have a wealth of data nowadays, our record still does not go back more than 20-30 years. This is important because it means the AI worldview will be biased toward the recent past and the present as it looks into the future.

Furthermore, an AI worldview, while based on the recent past, is quite sensitive to emerging shifts. In fact, algorithms can detect variations much faster than humans. That is an important trait in providing decision-makers with timely warnings of trouble or opportunities ahead. In that sense, it foresees a world that is about to arrive: a reality that is here but not yet. Let the theologians understand.

It is also inherently evidence-based. That is, it approaches data with no presuppositions. Instead, especially at the beginning of a training process, it looks at the world through the equivalent of a child’s eyes. This is both an asset and a liability. This open view of the world enables it to discover new insights that would otherwise pass unnoticed by human brains that rely on assumptions to process information. It is a liability because it can mistake an ordinary event for an extraordinary one simply because it is encountering it for the first time. In short, it is primed for forging innovation, as long as it is accompanied by human wisdom.

Rationality Devoid of Ethics  

Finally, and this is its most controversial trait, it approaches the world with no moral compass. It applies logic devoid of emotion and makes decisions without the benefit of high-level thinking. This makes it superior to human capacity in narrow tasks. However, it is utterly inadequate for making value judgments.

It is true that with the development of AGI (artificial general intelligence), AI may acquire capabilities closer to human wisdom than it has today. However, since machine learning (narrow AI) is the type of technology mostly present in our current world, it is fair to say that AI is intelligent but not wise, fast but not discerning, and accurate but not righteous.

This concludes the first part of this series of blogs. In the next blog, I’ll define the other side of this integration: theology. Just like AI, theology requires some preliminary definitions before we can pursue integration.

Citizens Unite: Global Efforts to Stand Up to Digital Monopolies

Politicians lack the knowledge to regulate technology. This was comically demonstrated in 2018 when Senator Hatch asked how Zuckerberg could keep Facebook free. Zuckerberg’s response became a viral meme:

Taken from Tenor.com

Zuckerberg’s creepy smile aside, the meme drives home the point that politicians know little about emerging technologies. 

What can be done about this? Lawmakers cannot be experts on everything; they need good counsel. Consider, by analogy, how challenging it would have been for governments to contain COVID with no help from microbiologists or researchers. The way we get to good policy is by having expert regulators who act as referees, weighing the pros and cons of different strategies to help lawmakers deliberate with at least some knowledge.

A Global Push to Fight Digital Monopolies

When we take a look at monopolies around the world, it’s clear that digital monopolies are everywhere, and alongside them are the finance companies and banks. We live in a capitalist world, where technology walks hand in hand with the urge to profit. That is why it is so hard to go against these monopolies.

But not all hope is lost. If we look across the globe, we can find different countries regulating big tech companies. Australia has been working for more than a year now on legislation that would force tech platforms like Google and Facebook to pay news publishers for content. The tension grew so high that Facebook took the extreme measure of blocking all news in Australia. The government thinks that Facebook’s news ban was too aggressive and will only push the community even further away from Facebook.

The Australian Prime Minister, Scott Morrison, shared his concerns on his Facebook page, saying that this behavior from Facebook only shows how these big tech companies think they are bigger than the government itself and that rules should not apply to them. He also said that he recognizes how big tech companies are changing the world, but that does not mean they run it.

Discussions on how to stop big companies from using content for free are also happening in other countries, such as France, Canada, and even the United States. Governments around the world are considering new laws to keep these companies in check. The question is how far they can go against the biggest digital monopolies in the world.

Fortunately, there are many examples of governments working with tech companies to help consumers. Earlier this year, the French government approved the new Repairability Index. This index shows how repairable an electronic device is, covering smartphones, laptops, TVs, and even lawnmowers. It will help consumers buy more durable goods and force companies to make repairs possible. It is not only a consumer-friendly measure but also an environmentally friendly one, as it helps reduce electronic waste.

Another example of big tech companies having to answer to a government comes from Brazil. On February 16, a Brazilian congressman was arrested for making and sharing videos that break the law by glorifying a very dark moment in Brazilian history: the military dictatorship the country went through starting in the 1960s. A few days later, Facebook, Twitter, and Instagram had to ban his accounts because of a court order, since he was still updating them from inside prison.

Brazil still doesn’t know how this congressman’s story will end, but we can at least hope that cooperation between big companies and governments will continue to increase. These laws and actions have been too long in coming. We have to fight for our rights and always remember that no one is above the law.

From Consumers to Citizens

Technological monopolies can make us feel like they rule the world. But the truth is that we are the consumers, so we need to have our voices heard and rights respected. 

I believe that the most efficient way to deal with tech monopolies is by creating committees to assist the government in making antitrust laws. These committees should include experts and ordinary citizens who have no ties to big tech companies. Antitrust laws are statutes developed by governments to protect consumers from predatory business practices and ensure fair competition. They basically ensure companies don’t engage in questionable activities like market allocation, bid rigging, price-fixing, and the creation of monopolies.

Protecting consumer privacy and deterring artificially high prices should be a priority. But can these committees really be impartial? Can we trust the government to make these laws?

The only way is for consumers to act as citizens. That is, we need to vote for representatives that are not tied to Big Tech lobbies. We need to make smarter choices with our purchases. We need to organize interest groups that put humanity back at the center of technology. 

How are you standing up to digital monopolies today? 

Surveillance Capitalism: Exposing the Power of Digital Monopolies

On January 28, I attended the online forum Medium in Conversation: How to Destroy Surveillance Capitalism. In this blog, I summarize the main points from the discussion along with some reflections on how we can respond.

Maybe at first glance, we can’t really see what surveillance capitalism has to do with AI. But the two topics walk side by side. Surveillance capitalism is sustained by digital monopolies that rely on massive amounts of personal data (hence the surveillance part). This deluge of data is fed into powerful AI algorithms which drive content curation. One depends on the other to thrive.

The Current State of Affairs

It’s a new era for Big Tech. Weeks after the de-platforming of Donald Trump—and with a new administration in the White House—the time is ripe to reexamine the power wielded by the giants of surveillance capitalism. How did corporations like Facebook, Google, and Amazon amass such power? How do we build a more open Web?

According to Cory Doctorow, if we’re going to break Big Tech’s dominance over our digital lives, we will have to fight monopolies. That may sound pretty mundane and old-fashioned, something out of the New Deal era. Yet breaking up monopolies is something we have forgotten how to do. The trust-busting era cannot begin until we find the political will, until politicians prove that they have the average citizen’s back against the richest, most powerful men in the world.

For politicians to take notice, citizens must first speak up.  

What is the problem with Monopolies?

In case we need a refresher, a monopoly is a bad deal for consumers. It means that the market has only one seller, with the ability to set prices and tell people what a service costs. People line up to buy the product even if it costs too much, simply because they have no choice.

Facebook is a monopoly if you think of the prices it sets for its ad platform. The ad buyer has very little choice, allowing Zuckerberg’s empire to dictate the terms. In addition, the platform behemoth retains its monopoly by impeding other apps’ growth.

Anticompetitive conduct in big tech has been rampant. Mark Zuckerberg bought competing apps (Instagram and WhatsApp, for example), leaving little room for competitors. Apple pursued it on the hardware side by shutting down “right to repair” bills, so that people are forced to buy new phones. In effect, they dictated when your phone can be repaired and when it has to be thrown away.

These actions led to an unprecedented concentration of power where a small group of people can make decisions of global consequence.

People of the World, Unite

Is creating an open web a realistic undertaking, or are we too far gone? Although these forces seem impenetrable and timeless, they are actually relatively new, and they have weaknesses. If it were just about changing our individual relationship with technology, it would be a hard lift.

Yet, according to Cory Doctorow, there is a wave of anger about monopolies sweeping the world, in every domain. This discontent seeks to return power to communities so they can decide their own future.

It has been done before. At the beginning of the 20th century, popular discontent drove politicians to rein in powerful monopolies such as Andrew Carnegie’s control of the steel industry and Rockefeller’s Standard Oil. Their efforts culminated in the passage of sweeping antitrust legislation.

Are we reaching a tipping point with big tech at the beginning of the 21st century?

Conclusion

Surveillance Capitalism affects the entire world and can be scary at times. There is a need to seek freedom from the dominance of digital monopolies. Once again, it is necessary to find the political will to fight for change. While legislation will not solve this problem completely, it is an important first step.

Certainly this is not just a North American problem. Some countries are already pressing these big companies to answer for their actions, paving the way for a future where power is more evenly distributed.

In the next blog, I’ll provide an overview of anti-trust efforts around the world.

Integrating Technology and Religion in a Post-Secular World

This blog discusses how the post-secular can be a fitting stage for the promising dialogue between religion, science and technology.

Last Friday I “zoomed into” a stimulating academic dialogue entitled “Theology, Technology and the Post-Secular.” In it, a world-class team of scholars explored how the intersection of theology, science, and technology has evolved in the last 50 years and where it is going in the future. In this blog, I’ll provide a short overview of the conversation while also offering reflections on how the discussion enriches our dialogue in the AI theology community.

If the post-secular is our reality, it is time we learn how to build bridges there.

An Overview of the Field

The talk started with Dr. Tirosh-Samuelson asking Dr. Burdett to provide a short overview of the burgeoning field of religion and science. In the United States, the establishment of Zygon, the journal of religion and science, inaugurated the dialogue in 1966. In essence, the challenge was to find a place where the two could interact. Science tends to bracket the question of metaphysics (why things are the way they are), while religion lives in that space. This can often lead to misunderstanding, with members of each side talking past each other.

Rejecting the notion of incompatibility, Dr. Burdett prefers to define the relationship as complex. For example, on the one hand, theology paved the way for scientific inquiry by first positing a belief in an orderly world. On the other hand, Christian Geocentrism clashed directly with Galileo’s accurate Heliocentric view. Therefore, the theologian believes in forging integrative models where conflict is not glossed over but carefully sorted out through respectful dialogue.

According to Dr. Burdett, the field is currently undergoing a shift from the natural to the human sciences. While the conversation started with topics like the implications of the Big Bang and evolution, the focus now is on neuroscience, questions of personhood, and the cognitive science of religion. The field has zoomed in from the macro view of cosmology to the micro view of anthropology.

Furthermore, the field is shying away from theoretical discussions, opting instead to work on concrete questions. This new focus highlights where science and religion meet on the social-political stage. For example, how do religion and science interact when someone is considering in vitro fertilization? How do religion and science meet in people’s decision to take the vaccine? How does one comprehend the motivation of climate change deniers? These are just a few of the questions fueling research in this nascent field.

Image by Michael Schwarzenberger from Pixabay

A Theologian in a Tech-saturated World

In the next segment, Dr. Gaymon Bennett asked Dr. Burdett to speak about the role of the theologian in a technology-saturated world. How can a theologian tell a compelling story in the public square to those who do not align with his religious beliefs? Do religious perspectives still have a place in a secular world?

In his answer, Dr. Burdett pointed to Vatican II’s formula of Ressourcement and Aggiornamento. The first word has to do with a return to the sources, namely the traditions and writings of the faith. It means examining carefully what we have received through the tradition and practices of past generations. The second points to updating that knowledge for the current context. How can these sources speak fresh insight into newly evolving questions? This dual movement of reaching for the past while engaging with the present becomes a vital framework for how to do public theology in our times.

To illustrate the point, Dr. Burdett shared a personal anecdote about his journey to scholarship. Growing up in Northern California in the 1990s, he asked “what are the main driving forces shaping culture?” To him, it was clear that the rise of PCs, the Internet, and smartphones would categorically transform society. What would theology have to say about that? He wanted to know it from a technical perspective so he could see it from the inside. This is what moved him to focus his studies on the intersection of theology and technology after a stint in the industry.

Photo by Natalya Letunova on Unsplash

Grappling with the Post-Secular

Closer to the end, the conversation shifted towards grappling with the term “post-secular.” For decades, western society divided the world between the secular and the religious, with little intersection between the two. Science and technology have in effect been the major driving forces of secularism. Yet, we now find Silicon Valley, arguably the global center of this marriage, teeming with religious aspirations.

Even so, Dr. Burdett suggested that we still live in a God-haunted world. The removal of religion from public life left a jarring vacuum yet to be filled. Along with religion went any notion of the supernatural, all sacrificed on the altar of Modernity. The Victorian poet Matthew Arnold expressed this sentiment well in the following verses from “Dover Beach”:

  The Sea of Faith
  Was once, too, at the full, and round earth’s shore
  Lay like the folds of a bright girdle furled.
  But now I only hear
  Its melancholy, long, withdrawing roar,
  Retreating, to the breath
  Of the night-wind, down the vast edges drear
  And naked shingles of the world.

This vacuum generated a thirst for new avenues of meaning. This in turn dethroned science as the sole arbiter of truth as it proved inadequate to fill humanity’s soul. The post-secular dashes the illusion that science and technology are sufficient to explain the world and therefore cannot be elevated above other views. Instead, it is a space where religious, mystical, and secular (scientific and technological) views are on the same footing again. The task, therefore, is to bring all these disparate perspectives into respectful dialogue while recognizing their common goals.

Reflections and Implications

Here I offer a few reflections. The first one relates to an important clarification. Throughout the dialogue, the unspoken assumption was that the relationship between religion and science was equivalent to that of religion and technology. However, it is worth noting that while science and technology are deeply intertwined today, that was not always the case. Hence, I would love to see an interdisciplinary branch that focuses on questions of religion and technology independent of science.

It was also illuminating to see scholars name a phenomenon we have been experiencing for a while now. While I had not heard the term “post-secular” before, the reality it names resonates well. Nowhere is this more true than in the cyber global space of social media. Given the pervasive nature of these platforms, this reality is also spilling over into other spheres of human connection. Universities, churches, companies, and non-profits are also becoming post-secular spaces. This is a fascinating, harrowing, and alarming development all at once.

Finally, I would add that it is not just about connecting with ultimate meaning but also about a return to nature. Whether it is the climate crisis or the blatant confession of how disconnected we are from creation, the post-secular is about digging down to our roots.

Maybe the sea of faith is not just calling us to ultimate meaning but also to encounter the oceans again.

Does God Hear Robot Prayers? A Modern Day Parable

The short video above portrays Juanelo Turriano’s (1500-1585) invention, an automated monk that recites prayers while moving in a circle. It was commissioned by King Philip II to honor a friar whom he believed had healed his son. The engineer delivered a work of art, creepy but surprisingly lifelike, in a time when artificial intelligence was but a distant dream. This sixteenth-century marvel now sits at the Smithsonian museum in Washington, DC.

Take a pause to watch the two-minute video before reading on.

What can this marvelous work of religious art teach us today, nearly 5 centuries later, about our relationship with machines?

In a beautifully written piece for Aeon, Ed Simon asks whether robots can pray. In discussing the automated monk, he argues that Turriano’s invention was not simply simulating prayer. It was actually praying! Its creation was an offering of thanksgiving to the Christian God, and to this day it continues to recite its petitions.

Such reflection opens the door for profound theological questions. For if the machine is indeed communicating with the divine, would God listen?

Can an inanimate object commune with the Creator?

We now turn to a short parable portraying different responses to the medieval droid.

A Modern Day Parable

Photo by Drew Willson on Unsplash

In an effort to raise publicity for its exhibit, the Smithsonian takes Turriano’s invention on a road show. Aiming to create a buzz, they place the automated monk in a crowded square in New York City along with a sign that asks:

When this monk prays, does God listen?

They place hidden cameras to record people’s reactions.

A few minutes go by, and a scientist approaches to inspect the scene. Upon reading the sign, he quickly dismisses it as an artifact from a bygone era. “Of course, machines cannot pray” – he mulls. He posits that because they are not alive, one cannot ascribe human properties to them. That would be anthropomorphizing: projecting human traits onto non-human entities. “Why even bother asking whether God would listen, if prayer itself is a human construct?” Annoyed by the whole matter, he walks away hurriedly as he realizes he is late for work.

Moments later, a priest walks by and stops to examine the exhibit. The religious man is taken aback by such a question. “Of course, machines cannot pray; they are mere human artifacts” – he mulls. “They are devoid of God’s image, which is the exclusive property of humans,” he continues. “Where in Scripture can one find an example of an object that prays? Machines are works of the flesh, worldly pursuits not worthy of an eternal God’s attention,” he concludes. Offended by the blasphemous display, the priest walks away from the moving monk on to holier things.

Finally, a child approaches the center of the square. She sees the walking monk and runs to the droid filled with wonder. “Look at the cool moving monk, mom!” she yells. Soon, she gives it a name: monk Charlie. She sits down, mesmerized by the intricate movements of his mouth. She notices the etched sandals on his feet and pays attention to the movement of his arms.

After a while, she answers: “Yes, God listens to Charlie.” She joins with him, imitating his movement with sheer delight. In that moment, the droid becomes her new playmate.

How would you respond?

Green Tech: How Scientists are Using AI to Fight Deforestation

In the previous blog, I talked about upcoming changes to US AI policy with a new administration. Part of that change is a renewed focus on harnessing this technology for sustainability. Here I will showcase an example of green tech – how machine learning models are helping researchers detect illegal logging and burning in the vast Amazon rainforest. This is an exciting development and one more example of how AI can work for good.

The problem

Imagine trying to patrol an area of dense rainforest nearly the size of the lower 48 states! It is, as the proverbial saying goes, finding a needle in a haystack. The only way to catch illegal activity is to find ways to narrow the surveillance area. Doing so gives you the best chance of using law enforcement’s limited resources wisely. Yet how can that be done?

How do illegal logging and burning happen in the Amazon? Are there any patterns that could help narrow the search? Fortunately, there are. A common trait is that they happen near a road: in fact, 95% of them occur within 6 miles of a road or a river. These activities require equipment that must be transported through dense jungle, and for logging, the lumber must be hauled out so it can be traded. The only way to do that is either through waterways or dirt roads. Hence, tracking and locating illegal roads goes a long way toward homing in on areas of possible illegal activity.

While authorities had records of the government-built roads, no one knew the extent of the illegal network of roads in the Amazon. To attack the problem, enforcement agencies needed richer maps that could spot this unofficial web. Only then could they start to focus resources around these roads. Voilà: green tech working to preserve rather than destroy the environment.

An Ingenious solution

To solve this problem, scientists from Imazon (the Amazon Institute of People and the Environment) went to work searching for ways to detect these roads. Fortunately, by carefully studying satellite imagery, they could manually trace these additional roads. In 2016 they completed this heroic but rather tedious initial work. The new estimate of the road network was 13 times the size of the official one! Now they had something to work with.

Once the initial tracing was complete, it became clear that updating it manually would be an impossible task. These roads can spring up overnight as loggers and ranchers work to evade monitoring. That is when they turned to computer vision to see if it could detect new roads. The initial manual work became the training dataset that taught the algorithm how to detect these roads in the satellite images. In supervised learning, one must first have a collection of data that shows the actual targets (labels) to the algorithm (e.g., an algorithm meant to recognize cats must first be fed many labeled images of cats).
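Imazon has not published its code here, so the following is only a hedged sketch of the general supervised-learning workflow the article describes, using scikit-learn on made-up data. The fake “tiles” and labels stand in for the satellite images and the hand-traced roads.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

# Each row stands in for a small satellite-image tile (flattened pixels);
# each label marks whether analysts traced a road there.
rng = np.random.default_rng(42)
tiles = rng.random((2000, 16 * 16))              # 2000 fake 16x16 tiles
labels = (tiles.mean(axis=1) > 0.5).astype(int)  # stand-in for hand labels

X_train, X_test, y_train, y_test = train_test_split(
    tiles, labels, test_size=0.25, random_state=0
)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)        # learn from the manual tracing

preds = model.predict(X_test)      # flag likely roads in unseen imagery
print(f"accuracy: {accuracy_score(y_test, preds):.2%}")
```

The train/test split matters: accuracy is only meaningful on imagery the model has never seen, which is how figures like those reported in the next paragraph are obtained.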

The result was impressive. At first, the model achieved 70% accuracy, and with some additional processing on top, that increased to 90%. The research team presented their results at the latest meeting of the American Geophysical Union. They also plan to share their model with neighboring countries so they can use it for enforcement in the parts of the Amazon outside Brazil.

Reflection

Algorithms can be effective allies in the fight to preserve the environment. As the example of Imazon shows, it takes some ingenuity, hard work, and planning to make that happen. While a lot of discussions around AI quickly devolve into clichés of “machines replacing humans,” this example shows how AI can augment human problem-solving abilities. It took a person to connect the dots between the potential of AI and a particular problem worth solving. Indeed, the real future of AI may be in green tech.

In this blog and in our FB community, we seek to challenge, question, and re-imagine how technologies like AI can empower human flourishing. Yet this flourishing is not limited to humans but extends to the whole ecosystem we inhabit. If algorithms are to fulfill their promise, then they must be relevant to sustainability.

How is your work making life more sustainable on this planet?

5 Changes the Biden-Harris Administration will Bring to AI Policy

As a new administration takes the reins of the federal government, there is a lot of speculation as to how it will steer policy in the area of technology and innovation. This issue is even more relevant as social media giants grapple with free speech on their platforms, Google struggles with AI ethics, and concerns over video surveillance grow. On the global stage, China moves forward with its ambitions of AI dominance, and Europe continues to grapple with issues of data governance and privacy.

In this scenario, what will a Biden-Harris administration mean for AI on the US and global stage? In a previous blog, I described the decentralized US AI strategy, mainly driven by large corporations in Silicon Valley. Will a Biden administration bring continuity to this trend, or will it change direction? While it is too early to say for sure, we should expect the 5 shifts outlined below:

(1) Increased investment in non-military AI applications: In contrast to the $2 billion promised by the Trump White House, Biden plans to ramp up public investment in R&D for AI and other emerging technologies. Official campaign statements promise a whopping $300 billion of investment. This is a significant change, since public research funds tend to aim at socially conscious applications rather than the profit-seeking ventures preferred by private investment. These investments should steer innovation toward social goals such as fighting climate change, revitalizing the economy, and expanding opportunity. On the education front, $5 billion is earmarked for graduate programs in STEM teaching. These are important steps as nations across the globe seek to gain the upper hand on this crucial technology.

(2) Stricter bans on facial recognition: While this is mostly speculation at this point, industry observers cite Kamala Harris’s recent statements and actions as an indication of forthcoming stricter rules. In her plan to reform the justice system, she cites concerns with law enforcement’s use of facial recognition and surveillance. In 2018, she sent letters to federal agencies urging them to take a closer look at the use of facial recognition in their practices as well as in the industries they oversee. This keen interest could eventually translate into strong legislation to regulate, curtail, or even ban the use of facial recognition. It will probably fall somewhere between Europe’s proposed 5-year ban and China’s pervasive use of the technology to keep its population in check.

Photo by ThisisEngineering RAEng on Unsplash

(3) Renewed antitrust push on Big Tech: The recent move, started by the Trump administration, to challenge the big tech oligarchy should intensify under the new administration. Considering that the “FAMG” group (Facebook, Amazon, Microsoft, and Google) is at the avant-garde of AI innovation, any disruption to their business structures could impact advances in this area. Yet a more competitive tech industry could also mean an increase in innovation. It is hard to determine how this will ultimately impact AI development in the US, but it is a trend to watch in the next few years.

(4) Increased regulation: This is likely but not certain at this point. Every time a Democratic administration takes power, the underlying assumption on Wall Street is that regulation will increase. Compared to the previous administration’s appetite for dismantling regulation, the Biden presidency will certainly be a change. Yet it remains to be seen how they will go about it in the area of technology. Will they listen to experts and put science ahead of politics? AI will definitely be a test of that. They will certainly see government as a strong partner of private industry. They will also likely walk back Trump’s tax cuts on business, which could hamper innovation for some players.

(5) Greater involvement on the global stage: The Biden administration is likely to work more closely with allies, especially in Europe. Obama’s AI principles, released in 2012, became a starting point for the vigorous regulatory efforts that arose in Europe in the last 5 years. It would be great to see increased collaboration that would help the US establish strong privacy safeguards like the ones outlined in the GDPR. With regard to China, Biden will probably be more assertive than Obama but less belligerent than Trump. This could translate into restricting access to key technologies and holding China’s feet to the fire on surveillance abuses.

The challenges in this area are immense, requiring careful analysis and deliberation. Brash decisions based on ideological shortcuts can both hamper innovation and fail to safeguard privacy. It is also important to build a nimble apparatus that can respond to the evolving nature of this technology. While not as urgent as COVID and the economy, the federal government cannot afford to delay reforming AI regulation. Ethical concerns and privacy protection should be at the forefront, supported by incentives for social innovation.