How to Integrate the Sacred with the Technical: an AI worldview

At first glance, AI and theology may sound like strange bedfellows. After all, what does technology have to do with spirituality? In our compartmentalization-prone Western view, these disciplines are dealt with separately. Hence the first step on this journey is to reject this separation, aiming instead to hold these different perspectives in view simultaneously. Doing so fosters a new avenue for knowledge creation. Let’s begin by examining an AI worldview.

What is AI?

AI is not simply a technology defined by algorithms that create outcomes out of binary code. Instead, AI brings with it a unique perspective on reality. For AI, in its present form, to exist there must first be algorithms, data, and adequate hardware. The first came on the scene in the 1950s, while the other two became a reality mostly in the last two decades. This may partially explain why we have been hearing about AI for a long time, yet only now is it actually impacting our lives on a large scale. 

The algorithm in its basic form consists of a set of instructions to perform, such as transforming input into output. This can be as simple as taking the inputs (2, 3), passing them through an instruction (add them), and getting an output (5). If you have ever made that calculation in your head, congratulations: you have used an algorithm. It is logical, linear, and repeatable. This is what gives it its “machine” quality. It is an automated process to create outputs.
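To make that concrete, here is a minimal sketch in Python (purely illustrative, not from any particular AI system) of an algorithm as a fixed instruction that turns inputs into an output:

```python
# A minimal illustration of an algorithm: a fixed, repeatable
# instruction that turns inputs into an output.
def add(a, b):
    """Take two inputs, apply one instruction (addition), return the output."""
    return a + b

print(add(2, 3))  # -> 5, the same result every time for the same inputs
```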

Data + Algorithms + Hardware = AI

Data is the very fuel of AI in its dominant form today. Without it, nothing would be accomplished. This is what differentiates traditional programming from AI (machine learning). The former depends on a human to imagine, direct, and define the outcomes of an input. Machine learning is an automated process that takes data and transforms it into the desired outcome. It is “learning” because, although the algorithm is repeatable, the variability in the data makes the outcome unique and at times hard to predict. It involves risk, but it also yields new knowledge. The best that human operators can do is to monitor the inputs and outputs while the machine “learns” from new data. 
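As a rough illustration of that difference (a toy sketch, not how production systems are built), compare a rule a human writes by hand with a rule a machine estimates from example data:

```python
# Classical programming: a human writes the rule explicitly.
def fahrenheit_from_celsius(c):
    return c * 9 / 5 + 32                      # the relationship is hard-coded by a person

# Machine learning (toy sketch): the machine estimates the rule from example data.
examples = [(-20, -4.0), (-10, 14.0), (0, 32.0), (10, 50.0), (20, 68.0)]

w, b = 0.0, 0.0                                # the model starts out knowing nothing
learning_rate = 0.0001
for _ in range(50_000):                        # repeat: predict, measure the error, adjust
    for c, f in examples:
        error = (w * c + b) - f
        w -= learning_rate * error * c
        b -= learning_rate * error

print(f"learned rule: f = {w:.2f} * c + {b:.2f}")  # approaches f = 1.80 * c + 32.00
```

The second half of the sketch never sees the formula; it arrives at roughly the same rule only because the data contained it, which is why variability in the data shapes what is learned.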

Data is information digitized so that it can be processed by algorithms. Human brains operate in an analog way, taking information from the world and processing it through neural pulses. Digital computers need information to be translated into binary code before they can “understand” it. The more of our reality that is digitized, the more the machines can learn.  
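As a small illustrative sketch, this is roughly what digitization looks like: a piece of human-readable text translated into the binary code an algorithm can work with.

```python
# A minimal sketch of digitization: human-readable text becomes
# binary code that an algorithm can process.
text = "data"
binary = " ".join(format(byte, "08b") for byte in text.encode("utf-8"))
print(binary)  # 01100100 01100001 01110100 01100001
```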

All of this takes energy to take shape. If data is like the soul and algorithms like the mind, then hardware is like the body. Only in the last few decades did computing power advance enough to apply AI algorithms to the commensurate amount of data they need to work properly. The growth in computing power is one of the most underrated wonders of our time. This revolution is the engine that allows algorithms to process and make sense of the staggering amount of data we now produce. The combination of the three made possible the emergence of an AI ecosystem of knowledge creation, and with it the beginning of an AI worldview.


Seeing the World Through Robotic Eyes

How can AI be a worldview? How does it differ from existing human-created perspectives? It can, because its peculiar way of processing information in effect crafts a new vision of the world. This complex machine-created perspective has some unique traits worth mentioning. It is backward-looking, but only toward recent history. While we have a wealth of data nowadays, our record still does not go back more than 20-30 years. This matters because it means AI is biased toward the recent past and the present as it looks into the future.

Furthermore, an AI worldview, while grounded in the recent past, is quite sensitive to emerging shifts. In fact, algorithms can detect variations much faster than humans can. That is an important trait for providing decision-makers with timely warnings of trouble or opportunities ahead. In that sense, it foresees a world that is about to arrive. A reality that is here but not yet. Let the theologians understand. 
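To give a feel for what “detecting variations faster than humans” can mean, here is a toy sketch; the function name, window, and threshold are illustrative assumptions, not drawn from any real monitoring system. It flags readings that suddenly depart from the recent average.

```python
# A toy sketch of detecting an emerging shift in a stream of data:
# compare each new value to the average of a recent window and flag
# values that deviate by more than a fixed threshold.
from collections import deque

def detect_shifts(stream, window=5, threshold=3.0):
    recent = deque(maxlen=window)
    alerts = []
    for i, value in enumerate(stream):
        if len(recent) == window:
            avg = sum(recent) / window
            if abs(value - avg) > threshold:
                alerts.append((i, value))   # an early warning, not a verdict
        recent.append(value)
    return alerts

# Steady readings, then a sudden change the algorithm flags immediately.
readings = [10, 10.2, 9.9, 10.1, 10.0, 10.1, 9.8, 14.5, 14.7, 14.6]
print(detect_shifts(readings))  # [(7, 14.5), (8, 14.7)]
```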

It is inherently evidence-based. That is, it approaches data with no presuppositions. Instead, especially at the beginning of a training process, it looks at the world through the equivalent of a child’s eyes. This is both an asset and a liability. This open view of the world enables it to discover new insights that would otherwise pass unnoticed by human brains that rely on assumptions to process information. It is also a liability because it can mistake an ordinary event for an extraordinary one simply because it is encountering it for the first time. In short, it is primed for forging innovation as long as it is accompanied by human wisdom. 

Rationality Devoid of Ethics  

Finally, and this is its most controversial trait, it approaches the world with no moral compass. It applies logic devoid of emotion and makes decisions without the benefit of higher-level thinking. This makes it superior to human capacity in narrow tasks. However, it is utterly inadequate for making value judgments.

It is true that with the development of AGI (artificial general intelligence), AI may acquire capabilities closer to human wisdom than it has today. However, since machine learning (narrow AI) is the type of technology mostly present in our current world, it is fair to say that AI is intelligent but not wise, fast but not discerning, and accurate but not righteous.

This concludes the first part of this series of blogs. In the next blog, I’ll define the other side of this integration: theology. Just like AI, theology requires some preliminary definitions before we can pursue integration.

Citizens Unite: Global Efforts to Stand Up to Digital Monopolies

Politicians lack the knowledge to regulate technology. This was comically demonstrated in 2018 when Senator Hatch asked how Zuckerberg could keep Facebook free. Zuckerberg’s response became a viral meme:

Taken from Tenor.com

Zuckerberg’s creepy smile aside, the meme drives home the point that politicians know little about emerging technologies. 

What can be done about this? Lawmakers cannot be experts on everything – they need good counsel. Consider how much harder it would have been for governments to contain COVID without the help of microbiologists and researchers. The way we get to good policy is by having expert regulators who act as referees, weighing the pros and cons of different strategies so that lawmakers can deliberate with at least some knowledge. 

A Global Push to Fight Digital Monopolies

When we look at monopolies around the world, it is clear that digital monopolies are everywhere, and alongside them stand the finance companies and banks. We live in a capitalist world. Technology walks hand in hand with the urge to profit. That is why it is so hard to go up against these monopolies.

But not all hope is lost. If we look across the globe, we can find countries already regulating big tech companies. Australia has been working for more than a year on legislation that would force tech platforms like Google and Facebook to pay news publishers for content. The tension grew so great that Facebook took the extreme measure of blocking all news in Australia. The government believes Facebook’s news ban was too aggressive and will only push the community even further from the platform. 

The Australian Prime Minister, Scott Morrison, shared his concerns on his Facebook page, saying that this behavior shows how Big Tech companies think they are bigger than the government itself and that rules should not apply to them. He also said that while he recognizes how big tech companies are changing the world, that does not mean they run it.

Discussions on how to stop big companies from using content for free are also happening in other countries such as France, Canada, and even the United States. Governments around the world are considering new laws to keep these companies in check. The question is how far they can go against the biggest digital monopolies in the world. 

Fortunately, there are many examples of governments working with tech companies to help consumers. Early this year, the French government approved a new tech repairability index. The index shows how repairable an electronic device is, covering smartphones, laptops, TVs, and even lawnmowers. This will help consumers buy more durable goods and push companies to make repairs possible. It is not only a consumer-friendly measure but also an environmentally friendly one, as it helps reduce electronic waste.   

Another example of big tech companies having to answer to governments comes from Brazil. On February 16, a Brazilian congressman was arrested for making and sharing videos that broke the law by glorifying a very dark moment in Brazilian history: the military dictatorship the country went through in the 1960s. A few days later, Facebook, Twitter, and Instagram had to ban his accounts because of a court order, since he was still updating them from inside prison. 

Brazil still does not know how this congressman’s story will end, but we can at least hope that cooperation between big companies and governments will keep growing. These laws and actions by those in charge have been too long in coming. We have to fight for our rights and always remember that no one is above the law. 

From Consumers to Citizens

Technological monopolies can make us feel like they rule the world. But the truth is that we are the consumers, so we need to have our voices heard and our rights respected. 

I believe the most effective way to deal with tech monopolies is by creating committees that assist the government in making antitrust laws. These committees should include experts and ordinary citizens with no ties to big tech companies. Antitrust laws are statutes developed by governments to protect consumers from predatory business practices and ensure fair competition. They essentially prevent companies from engaging in questionable activities such as market allocation, bid rigging, price-fixing, and the creation of monopolies. 

Protecting consumer privacy and deterring artificially high prices should be a priority. But can these committees really be impartial? Can we trust the government to make these laws?

The only way is for consumers to act as citizens. That is, we need to vote for representatives who are not tied to Big Tech lobbies. We need to make smarter choices with our purchases. We need to organize interest groups that put humanity back at the center of technology. 

How are you standing up to digital monopolies today?