Human Mercy is the Antidote to AI-driven Bureaucracy

If bureaucracies are full of human cogs, what difference does it make if we replace them with AI?

(This entry continues the main topic of classifications by machines vs. humans; here we consider classifications, and their union with judgments, in the context of life-altering decisions. It was inspired by a sermon given today by pastor Jim Thomas of The Village Chapel, Nashville, TN.)

Esau’s Fateful Choice

In Genesis 25:29–34 we see Esau, the firstborn son of Isaac, coming in from the fields famished. Finding his younger brother Jacob in possession of hearty stew, Esau pleads for some. My paraphrase follows:   Jacob replies, “First you have to give me your birthright.”   “Whatever,” says Esau, “you can have it, just gimme some STEWWW!” …And thus Esau sold his birthright for a mess of pottage.

Simply put, it is a bad idea to make major, life-altering decisions while in a stressed state. Examples of such states are often drawn from the acronym HALT:

  • Hungry
  • Angry
  • Lonely
  • Tired

Sometimes HALT becomes SHALT by adding “Sad”.

When we’re in these (S)HALT states, our brains operate by relying on quick inferences “burned” into them either via instinct or training. The Dual Process Theory of psychology calls this “System 1” or “Type 1” reasoning (cf. Kahneman, 2003; Strack & Deutsch, 2004). System 1 includes the fight-or-flight response. While System 1 is fast, it is also prone to errors and oversimplification, and it operates on biases such as stereotypes and prejudices.

System 1 relies on only a tiny subset of the brain’s overall capacity, the part usually associated with involuntary and regulatory systems of the body governed by the cerebellum and medulla, rather than the cerebrum with its higher-order reasoning capabilities and creativity. Thus trying to make important decisions (if they’re not immediate and life-threatening) while in a System 1 state is inadvisable if waiting is possible.

At a later time we may be more relaxed, content, and able to engage in so-called System 2 reasoning, which is able to consider alternatives, question assumptions, perform planning and goal-alignment, display generosity, seek creative solutions, etc.

Hangry Computers Making Hasty Decisions

Machine Learning systems, other statistics-based models, and even rule-based symbolic AI systems, as sophisticated as they may currently be, are at best operating in a System 1 capacity — to the extent that the analogy to the human brain holds (See, e.g., Turing Award winner Yoshua Bengio’s invited lecture at NeurIPS 2019: video, slides.)

This analogy between human System 1 and AI systems is the reason for this post. AI systems are increasingly serving as proxies for human reasoning, even for important, life-altering decisions. And as such, news stories appear daily with instances of AI systems displaying bias and unjustly employing stereotypes.

So if humans are discouraged from making important decisions while in a System 1 state, and machines are currently capable of only System 1, then why are machines entrusted with important decision-making responsibilities? This is not simply a matter of companies choosing to offer AI systems for speed and scale; governments do this too.

Government is a great place to look to further this discussion, because government bodies are chock full of humans making life-altering decisions (for others) based on System 1 reasoning: tired people implementing decisions based on procedures and rules, i.e., bureaucracy.[1] In this way, whether it is a human being following a procedure or a machine following its instruction set, the result is quite similar.

Human Costs and Human Goods

Photo by Harald Groven, from Flickr.com

The building of a large bureaucratic system provides a way to scale and enforce a kind of (to borrow from AI Safety lingo) “value alignment,” whether for governments, companies, or non-profits. The movies of Terry Gilliam (e.g., Brazil) illustrate well the excesses of this through vast office complexes of desk after desk of office drones. The socio-political theorist Max Weber, who advanced many of our conceptions of bureaucracy as a positive means to maximize efficiency and eliminate favoritism, was aware of the danger of excess:

“It is horrible to think that the world could one day be filled with nothing but those little cogs, little men clinging to little jobs and striving towards bigger ones… That the world should know no men but these: it is in such an evolution that we are already caught up, and the great question is, therefore, not how we can promote and hasten it, but what can we oppose to this machinery in order to keep a portion of mankind free from this parcelling-out of the soul, from this supreme mastery of the bureaucratic way of life.”

Max Weber, Gesammelte Aufsätze zur Soziologie und Sozialpolitik, p. 412 (1909).

Thus by outsourcing some of this drudgery to machines, we can “free” some workers from having to serve as “cogs.” This bears some similarity to the practice of replacing human assembly-line workers with robots in hazardous conditions (e.g., welding, toxic environments), whereas in the bureaucratic sense we are removing people from mentally or emotionally taxing situations. Yet one may ask what the other costs of such an enterprise may be, if any: If the system is already “soulless,” then what do we lose by having the human “cogs” in the bureaucratic machine replaced by machines?

The Heart of the Matter

So, what is different about machines doing things, specifically performing classifications (judgments, grading, etc.) as opposed to humans?

One difference between the automated and human forms of bureaucracy is the possibility of discretionary action on the part of humans, such as the demonstration of mercy in certain circumstances. God exhorts believers in Micah 6:8 “to love mercy.” In contrast, human bureaucrats merely going through the motions of following their organization’s rules can produce what Hannah Arendt termed “the banality of evil,” typified in her portrayal of Nazi war criminal Adolf Eichmann, whom she described as “neither perverted nor sadistic,” but rather “terrifyingly normal.”

“The sad truth of the matter is that most evil is done by people who never make up their minds to be or do evil or good.”

Hannah Arendt, The Life of the Mind, Volume 1: Thinking, p. 180 (1977).

Here again we see the potential for AI systems, as the ultimate “neutral” rule-followers, to facilitate evil on massive scales. So if machines could somehow deviate from the rules and show mercy on occasion, how would that even work? Which AI researchers are working on the “machine ethics” issue of determining when and how to show mercy? (At the time of writing, this author is unaware of such efforts.) Given that human judges tend to show favoritism and bias by selectively granting mercy to some ethnicities more than others, and that automated systems have shown bias even in rule-following, would the matter of “mercy” simply be a new opportunity for automated unfairness? It is a difficult issue with no clear answers.

Photo by Clay Banks on Unsplash

The Human Factor

One other key, if pedantic, difference between human and machine “cogs” is the simple fact that with a human being “on the line,” you can try to break out of the limited options presented by menus and if-then decision trees. Even the latest chatbot helper interfaces currently deployed are little more than natural language front ends to menus, as the sketch below illustrates. With a human being, by contrast, you can explain your situation, and they can (hopefully) work with you or connect you to someone with the authority to do so.
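To make this concrete, here is a minimal, purely illustrative sketch of the menu-plus-keyword pattern such interfaces boil down to. The menu entries, keywords, and canned answers are all invented for illustration; no actual product works exactly this way:

```python
# Hypothetical sketch of a "chatbot" that is really a natural language
# front end to a fixed menu: keyword matching selects a branch of an
# if-then tree, and anything outside the tree falls through to a human.

MENU = {
    "billing": "Your balance and payment options are at example.com/billing.",
    "password": "Use the 'Forgot password' link on the login page.",
    "hours": "We are open 9am to 5pm, Monday through Friday.",
}

def chatbot_reply(message: str) -> str:
    text = message.lower()
    for keyword, canned_answer in MENU.items():
        if keyword in text:  # the crude "NL front end": keyword matching
            return canned_answer
    # The rigid tree is exhausted; only a human can grant an exception.
    return "Let me connect you to a human agent."

print(chatbot_reply("I forgot my password"))              # canned branch
print(chatbot_reply("This rule shouldn't apply to me!"))  # falls to a human
```

However sophisticated the language layer gets, the decision structure underneath remains a finite tree: the space of outcomes is fixed in advance, which is precisely what a human agent can transcend.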

I suspect that in the next ten years we will see machine systems making increasing forays into System 2 reasoning categories (e.g., causality, planning, self-examination). I’m not sure how I feel about the prospect of pleading with a next-gen chatbot to offer me an exception because the rule shouldn’t apply in this case, or some such. 😉 But it might happen — or more likely such a system will decide whether to kick the matter up to a real human.

Summary

We began by talking about Jacob and Esau: Jacob, the creative, swindling deal-broker, and Esau, who quite literally “goes with his gut.” Then we talked about reasoning according to the two systems described by Dual Process Theory, noting that machines currently do System 1 quite well. The main question was: if humans make numerous erroneous and unjust decisions in a System 1 state, how do we justify the use of machines? The easy answers available seem to be a cop-out: the incentives of scale, speed, and lower cost. And this is not just “capitalism”; these incentives would remain drivers in a variety of socio-economic situations.

Another answer came in the form of bureaucracy, where System 1 processing already exists, albeit with humans as operators. We explored “what’s different” between a bureaucracy implemented via humans vs. machines. We realized that “what is lost” is the humans’ ability to transcend, if not their authority in the organization, at least the rigid and deficient set of software designs imposed by vendors of bureaucratic IT systems. Predicting how the best of these systems will improve in the coming years is hard. Yet, given the prevalence of shoddy software in widespread use, I expect to prefer talking to a human in Mumbai rather than “Erica” the Bank of America chatbot for quite some time.


[1]    Literally “government by the desk,” a term coined originally by the 18th-century French economist Jacques Claude Marie Vincent de Gournay as a pejorative, though it has since entered common usage.

Scott H. Hawley, Ph.D., Professor of Physics, Belmont University. Webpage: https://hedges.belmont.edu/~shawley/

Acknowledgment: The author thanks L.M. Sacasas for the helpful conversation while preparing this post.

Warfare AI in Ukraine: How Algorithms are Changing Combat

There is a war in Europe, again. Two weeks in, and the world is watching in disbelief as Russian forces invade Ukraine. While the conflict is still confined to the two nations, the proximity to NATO nations and the unpredictability of the Russian autocrat have given the world the jitters. It is too soon to speak of WWIII, but the prospect is now closer than it has ever been.

No doubt this is the biggest story of the moment, with implications that span multiple levels. In this piece, I want to focus on how it is impacting the conversation on AI ethics. This encompasses not only the potential for AI weapons but also the involvement of algorithms in cyber warfare and in addressing the refugee crisis that results from it. In a previous blog, I outlined the first documented uses of AI in an armed conflict. This instance requires a more extensive treatment.

Andrew Ng Rethinks AI Warfare

In the AI field, few command as much respect as Andrew Ng. Former Chief Scientist of Baidu and co-founder of Google Brain, he has recently shifted his focus to education and helping startups lead innovation in AI. He prefaces his most recent newsletter this way:

I’ve often thought about the role of AI in military applications, but I haven’t spoken much about it because I don’t want to contribute to the proliferation of AI arms. Many people in AI believe that we shouldn’t have anything to do with military use cases, and I sympathize with that idea. War is horrific, and perhaps the AI community should just avoid it. Nonetheless, I believe it’s time to wrestle with hard, ugly questions about the role of AI in warfare, recognizing that sometimes there are no good options.

Andrew Ng

He goes on to explain how, in a globally connected world where a lot of code is open-source, there is no way to ensure these technologies will not fall into the wrong hands. Andrew Ng still defends recent UN guidance affirming that a human decision-maker should be involved in any warfare system. The thought leader likens it to the treatment of atomic weapons, where a global body audits and verifies national commitments. In doing so, he opens the door for the legitimate development of such weapons as long as there are appropriate controls.

Photo by Katie Godowski from Pexels

Andrew’s most salient point is that this is no longer a conversation we can avoid. It needs to happen now. It needs to include military experts, political leaders, and scientists. Moreover, it should include a diverse group of members from civil society as civilians are still the ones who suffer the most in these armed conflicts.

Are we ready to open this Pandora’s box? This war may prove that it has already been opened.

AI Uses in Ukraine’s War

While much is still unclear, reports are starting to surface on some AI uses on both sides of the conflict. Ukraine is using semi-autonomous Turkish-made drones that can drop laser-guided bombs. A human operator is still required to pull the trigger, but the drone can take off, fly, and land on its own. Russia is opting for kamikaze drones that literally crash into their targets after finding and circling them for a bit. This is certainly a terrifying sight, straight out of sci-fi movies: a predator machine that will hunt down and strike its enemies with cold precision.

Yet AI uses are not limited to the battlefield. 21st-century wars are no longer fought with guns and ammunition only but now extend to bits and bytes. Russian troll farms are creating fake faces for propagandist profiles. They understand that any military conflict in our age is accompanied by an information war to control the narrative. Hence, bots and other automated posting mechanisms come in handy in a situation like this.

Photo by Tima Miroshnichenko from Pexels

Furthermore, there is a parallel and very destructive cyber war happening alongside the war in the streets. From the very beginning of the invasion, reports surfaced of Russian cyberattacks on Ukraine’s infrastructure. There are also multi-national cyber defense teams formed to counteract and stop such attempts. While cyber-attacks do not always entail AI techniques, the pursuit to stop them or scale them most often does. This ensures AI will be a vital part of the current conflict.

Conclusion

While I would hope we could guarantee a war-free world for my children, this is not a reality. The prospect of war will continue and therefore it must be part of our discussions on AI and ethics. This becomes even more relevant as contemporary wars are extending into the digital sphere in unprecedented ways. This is uncharted territory in some ways. In others, it is not, as technology has always been at the center of armed conflict.

As I write this, I pray and hope for a swift resolution to the conflict in Ukraine. Standing with the Ukrainian people and against unilateral aggression, I hope that a mobilized global community will be enough to stop a dictator. I suspect it will not. In the words of the wise prophet Sting, we all hope the Russians love their children too.

Teilhard’s Hope: Technology as an Enabler of Cosmic Evolution

In a previous piece, we explored faithfulness in a technological age through Jacques Ellul’s critical view. In his view, technology, with its fixation on perfection, was stifling to the human spirit and an antithesis to nature. While providing an important contribution to the debate, Ellul’s perspective falls short by failing to recognize that technology is, in its essence, a human phenomenon. In doing so, he highlights the dangers and pitfalls but fails to see the potentialities. What if technology is not opposed to nature but a result of it through cosmic evolution?

To complement the previous view, we must turn to another 20th-century Frenchman, Pierre Teilhard de Chardin. This Jesuit paleontologist offered a paradigm-changing perspective by fusing evolution with Christianity. Because of his faith, Teilhard saw evolution not as a heresy to be disproven but as the mechanism through which God created the cosmos. It is this integrative vision that sheds a very different light on what technology is and how it can lead to a flourishing future.

Humanity as a Cosmic Phenomenon

To understand Teilhard’s view of technology, one must first turn to his view of the universe. Published in 1955, Le Phénomène humain is probably Teilhard’s most complete vision of a purposeful human evolution. To the French Jesuit, evolution is not just a mechanism to explain the diversity of beings on earth. It is instead the process by which the cosmos came to be. It is God’s way of bringing us out of stardust, slowly creating order and harmony from the primordial chaos of the Big Bang.

From Pexels.com

In his perspective, cosmic evolution was leading to both diversity and complexity. At the pinnacle of this complexity was human consciousness. That is, evolution brought humanity to earth, but as humans became self-aware, this marked a new stage of cosmic evolution. In this new phase, evolution was pointing towards a future that transcended humanity. This future is what he called the omega point. The future of evolution would lead humanity to a convergence of consciousness, what some now call the singularity.

To fully unpack Teilhard’s teleological view of the cosmic future, one must first understand his concept of the Noosphere. While difficult to explain in a few sentences, the Noosphere is an expanded view of human intelligence that encompasses not just the material reality of human brains but also the more abstract notion of human knowledge. It is not contained in any one person but is present among all humanity, like the air we breathe. The closest analogy we have is the Internet itself, where most of human knowledge is distributed and easily accessible.

The Technological Age as part of Cosmic Evolution

How does technology fit into this rich perspective of cosmic evolution? It is part and parcel of the Noosphere. Teilhard’s expansive concept contained three main parts: 1) heredity; 2) apparatus; and 3) thoughts. The first one has to do with genetic and cultural transfer. Every person receives a set of information both from their parents and their surrounding culture that enables them to function in this world.

From Pexels.com

The second concept encompasses the vast area of human-created tools which we often associate with technology. In addition to our genetic and cultural material, humans now rely on a complex network of artifacts that extend their reach and impact in the natural world. From clothes to fast computers, this apparatus, in Teilhard’s view, is another part of the noosphere.

The reason this is important is that by framing technology as an extension of human evolution, the French Jesuit connects it back to nature. Unlike in Ellul’s perspective, where technology is a force opposing nature, Teilhard sees continuity: technology is a vital part of the human ecosystem and therefore teleological. By placing technology in the noosphere, Teilhard gives it a purpose and direction. It is not a force of destruction but a result and an enabler of cosmic evolution toward the omega point.

In other words, nature and cosmic history converge to create this technological age. In that, Teilhard adds a sense of inevitability around technology. This is not to say that he was a blind enthusiast. A European who lived through two world wars, Teilhard was inoculated against the illusion of perpetual progress. Instead, he takes the long view and sees technology as an essential part of the long arc of history towards the cosmic redemption promised in Christian eschatology.

Hope, Caution and Courage

I can say with no reservation that my Christian faith would not exist today if it wasn’t for Teilhard’s integrative theology. In the despair of a false choice between pre-critical Biblical faith and materialistic humanism, I found the third way of the Teilhardian synthesis. This is a story for another time, but suffice it to say that the power of Teilhard’s Christian vision is its integration of science and religion. This integration then allows us to have a different conversation about the technological age.

For one, it destroys the illusion of separation between nature and technology. Without negating the dangers and disorientation that technological progress has brought, we can rightfully see it as an extension of cosmic history. Yet, how do we account for the uneasiness we instinctively feel towards it? Why does it not feel natural? One explanation is that, as with any evolutionary process, it takes time to fully form. Therefore, it is not an issue of substance but of time.

With that said, I cannot shake off reservations about Teilhard’s view of technology as part of cosmic evolution. There is a quasi-naivete in his optimistic belief in the inevitable evolution of humanity. We all hope he is right but are too afraid to bet our lives on it. The disappointment would be too grave and devastating. Perhaps that is the greatest asset of his view: it requires courage and faith. One does not need faith to prepare for a dystopian future of technology overlords running the world. It does, however, require a courageous and terrifying faith to believe that technology can fulfill its full potential as another step in human evolution.

Social Unrest, AI Chefs and Keeping Big Tech in Line

This year we are starting something new. Some of you may be aware that we keep a repository of articles relevant to the topics we discuss in the portal, such as AI ethics, AI for good, culture & entertainment, and imagination (theology). We would like to take a step further and publish a monthly recap of the most important news in these areas. We are constantly tracking developments and would like to curate a summary for your edification. This is our first one, so I ask you to be patient with us as we figure out formatting.

Can you believe it?

Harvesting data from prayers: Data is everywhere, and companies are finding more and more uses for it. As the practice of data harvesting becomes commonplace, nothing is sacred anymore. Not even religion is safe, as pray.com is now collecting user data and sometimes sharing it with other companies like Meta. As the market for religious apps heats up, investors are flocking to back new ventures. This probably means a future of answered prayers, not by God but by Amazon.

Read more here: BuzzFeed.

Predicting another Jan 6th: What if we could predict and effectively address social unrest before it becomes destructive? This is the promise of new algorithms that help predict the next social unrest event. Coupcast, developed by the University of Central Florida, uses AI and machine learning to predict civil unrest and electoral violence. Regardless of its accuracy, using ML in this arena raises many ethical questions. Is predicting social unrest a cover for suppressing it? Who gets to decide whether social unrest is legitimate or not? Hence we are left with many more questions but little guidance at this moment.

Read more here: WashingtonPost

IRS is looking for your selfies: The IRS using facial recognition to identify taxpayers: opportunity or invitation to disaster? You tell me. Either way, the government quietly launched the initiative requiring individuals to sign up with the facial recognition company if they want to check the status of their filing. Needless to say, this move was not well received by civil liberty advocates. In the past, we dove into the ethical challenges of this growing AI practice.

To read more click here: CBS news

Meta announces a new AI supercomputer: The company will launch a powerful new AI supercomputer in Q1. This is another sign that Meta refuses to listen to its critics, marching on to its own techno-optimistic vision of the future, one in which it makes billions of dollars, of course. What is not clear is how this new computer will enhance the company’s ability to create worlds in the metaverse. Game changer or window-dressing? Only time will tell.

To read more click here: Venture Beat

AI Outside the Valley

While our attention is on the next move coming from Silicon Valley, a lot is happening in AI and other emerging technologies throughout the world. I would propose that is actually where the future of these technologies lies. Here is a short selection of related updates from around the globe.

Photo by Hitesh Choudhary on Unsplash

Digital Surveillance in South Asia: As activists and dissidents move their activity online, so does their repression. In this interesting article, Antonia Timmerman outlines 5 main ways authoritarian regimes are using cyber tools to suppress dissent.

To read more click here: Rest of the World

Using AI for health? You’d better be in a rich country: As we have discussed in previous blogs, AI algorithms are only as good as the data we feed them. Take eye illness: because most available images come from Europe, the US, and China, researchers worry algorithms will not be able to detect problems in under-represented groups. This example highlights that a true democratization of AI must first include an expansion of data sources.

To read more click here: Wired

US companies fighting for Latin American talent: Not all is bad news for the developing world. As the search for tech talent in the developed centers comes up empty, many are turning to overlooked areas. Latin American developers are currently in high demand, driving wages up but also creating problems for local companies that are unable to compete with foreign recruiters.

To read more click here: Rest of the World

Global Race for AI Regulation Marches On

unsplash

The window for new regulation in the US Congress may be closing as mid-term elections approach. This will ensure the country remains lagging behind global efforts to rein in Big Tech’s growing market power and mounting abuses.

As governments fail to take action or do it slowly, some are thinking about a different route. Could self-regulation be the answer? With that in mind, leading tech companies are joining forces to come up with rules for the metaverse as the technology unfolds. Will that be enough?

Certainly not, if you ask the Chinese government. The Asian superpower released the world’s first government effort to regulate deepfakes. With this unprecedented move, China leads the way as the first government to address this growing concern. Could this be a blueprint for other countries?

Finally, EU fines for violations of GDPR hit a staggering 1.2 billion. Amazon alone was slapped with an $850 million penalty for its poor handling of customer data. While this is welcome news, one cannot assume it will lead to a change in behavior. Given mounting profit margins, Big Tech may see these fines not as a deterrent but simply as a cost of doing business in Europe. We certainly hope not, but it would be naive not to consider this possibility.

Cool Stuff

NASA’s latest and largest-ever telescope reached its final destination. James Webb is now ready to start collecting data. Astrophysicists and space geeks (like myself) are excited about the possibilities of seeing well into the cosmic past. The potential for new discoveries and new knowledge is endless.

To read more click here: Nature

Chef AI, coming to a kitchen near you: In an interesting application, chefs are using AI to tinker with and improve their recipes. The results have been delicious. Driven in part by a trend away from animal protein, chefs need to get more creative, and AI is here to help.

To read more click here: BBC

That’s it. This is our update for January. Many blessings and see you next month!

Kora, our new addition to the family, says hi and thank you for reading.

Finding Hope in a Sea of Skepticism over Facebook Algorithms

The previous blog summarized the first part of our discussion on Facebook algorithms and how they can become accountable to users. This blog summarizes the second part, where we took a look at the potential of and reasons for hope in this technology. While the temptation to misuse algorithms for profit maximization will always be with us, can these technologies also work for the good? Here are some thoughts in this direction.


Elias: I never know where the discussion is going to go, but I’m loving this. I loved the question about tradition. Social media and Facebook are part of a new tradition that emerged out of Silicon Valley. But I would say that they are part of the broader tradition emerging out of cyberspace (Internet), which is now roughly 25 years old. I would also mention Transhumanism as one of the traditions influencing Big Tech titans and many of its leaders.  The mix of all of them forms a type of Techno Capitalism that is slowly conquering the world.  

Levi:  This reminds me of a post on the Facebook group that Jennifer posted a few months ago. It was a fascinating video from a Toronto TV station where they looked 20 years back and showed an interview with a couple of men about the internet. They were talking with excitement about the internet. The station then interviewed the same men today. Considering how many things have changed, they were now very skeptical. There was so much optimism, and then everything became a sort of capitalist money-grabbing goal. I taught business ethics for six years in the Bay Area. One of the main things I taught my students is the questions we need to ask when looking at a company: What are their mission and values? What does the company say it upholds? These questions tell you a lot about what the company’s tradition is.

The second thing is what is the actual corporate culture? One of the projects I would have the students do is every week they would present some ethical problem in the news related to some business. It’s never hard to find topics, which is depressing. We found a lot of companies that have had really terrible corporate cultures. Some people were incentivized from the top to do unethical things. When that is your standard, meeting a certain monetary goal, everything else becomes subordinated to that. 

Milton Friedman said 50 years ago that the social responsibility of a business is to increase its profit. According to Friedman, anything we do legally to obtain this is acceptable. If the goal is simply profit, then the legal aspect is subordinate to that goal, and we can change it by changing laws in our favor. The challenge is that this focus has to come from the top. In a company like Facebook, Zuckerberg has the majority of shares, and the board of directors are people he has hand-picked. So there is very little actual oversight.

Within the question about tradition, Facebook has made it very clear that their tradition is sharing. That means sharing your personal information with other people. We would want to do that to some extent, but Facebook is also sharing your data with third-party companies that buy it to make money. If profit is the goal, everything becomes subordinated to that. Whether the sharing is positive or negative becomes less of a question than whether it is being shared and making money.

Photo by Mae Dulay on Unsplash

Glimpses of Hope in a Sea of Skepticism 

Elias: I would like to invite Micah, president of the Christian Transhumanist Association, to share some thoughts on this topic. We have extensively identified the ethical challenges in this area. What does Christian Transhumanism have to say, and are there any reasons for hope?

Micah:  On the challenge of finding hope and optimism, I was thinking if we compare this to the Christian tradition and development of the creeds, you are seeing some people looking at this emergence and saying that it is a radical, hopeful, and optimistic option in a world of pessimism. If you think about ideas of resurrection and other topics like this, it is a radical optimism about what will happen to the created order. 

The problem you run into (even in the New Testament) is a point of disappointed expectations. People were asking, “Where is he? When is he coming? When will all this be made right?” So the apostles and the Christian community had to come in and explain the process of waiting: it will take a while, but we can’t lose hope. So a good Christian tradition is to maintain optimism and hope in the face of disappointed expectations and failures as a community. In the midst of bad news, they stayed anchored on the future good news.

There is a lesson in this tradition for looking at the optimism of the early internet community and seeing how people maintain that over time. You have to have a long-term view that figures out a way to redemptively take into account the huge hurdles and downfalls you encounter along the way. This is what the Christian and theological perspectives have to offer. I’ve heard from influential people in Silicon Valley that you can’t maintain that kind of perspective from a secular angle; if you only see from a secular angle, you will be sorely disappointed. Bringing the theological perspective allows you to understand that the ups and downs are a part of the process, so you have to engage redemptively to aim for something else on the other side.

Taken from Unsplash.com

Explainability and Global Differences

Micah: From a technical perspective, I want to raise the prospect of explainable AI and algorithms. I liked what Maggie pointed out about the ecosystems where the developers don’t actually understand what’s going on; that’s also been my experience. It’s what we’ve been baking into our algorithms, this lack of understanding of what is actually happening. I think a lot of people have the hope that we can make our algorithms self-explanatory, and I do definitely think we can make algorithms that explain themselves. But from a philosophical perspective, I think we can never fully trust those explanations, because even we can’t fully understand our own mental processes. Yet even if they could explain themselves and we could trust them perfectly, there would still be unintended consequences.

I believe we need to move the focus of the criteria. Instead of seeking the perfect algorithm, focus on the inputs and outputs of the algorithm. It has to move to a place of intentionality where we are continually revisiting and criticizing our intentions. How are we measuring the algorithm, and how are we feeding it information that shapes it? These are just some questions to shift the direction of our thinking.

Yvonne: You have shared very interesting ideas. I’ve been having some different thoughts on what I’ve been reading, in terms of regulation and how companies operate in one region versus another. I have a running group with some Chinese women. They often talk to me about how the rules in China are very strict towards social media companies. Even Facebook isn’t allowed to operate fully there. They have their own Chinese versions of social network companies.

Leadership plays a crucial role in what unfolds in a company and in any kind of environment. When I join a company or a group, I can tell the atmosphere based on how the leadership operates. In a lot of big companies like Facebook, the leadership and decision-makers have the same mindset and thoughts on profits. Unless regulation enforces morality and ethics, most companies will get away with whatever they want. That’s where we come in. I believe we, as Christians, can influence how things unfold and how things operate using our Christian perspective.

In the past year, we have all seen how useful technology can be. Even this group is a testimony to how, even across different time zones, we can have a meeting without having to buy plane tickets, which would be more expensive. I think technology has its upsides when applied correctly. That defines whether it will be helpful or detrimental to society.

Brian:  Responding to the first part of what Micah said: when we think about technology and its role, it can be easier if we think about two perspectives. One is a creative vector, where we can create value and good things. But at every step, there is the possibility for bias to creep in. I mean bias very broadly; it can be discrimination or simple mistakes that multiply over time. So there has to be a “healing” vector where bias is corrected. Once the healing vector is incorporated, the creative vector can be a leading force again. I believe that the healing vector has to start outside ourselves. The central thought of the Christian faith is that we can’t save ourselves; we require God’s intervention and grace. This grace moves through people and communities so that we can actively participate in it.

Elias: I think this also comes from the concept of co-creation: the partnership between humanity and God, embracing both our limitations (what some call sin) and our immense potential as divine image-bearers.

I look forward to our next discussion. Until then, blessings to all of you.


How Can Facebook Algorithms Be Accountable to Users?

Second AI Theology Advisory Board meeting: Part 1

In our past discussion, the theme of safeguarding human dignity came up as a central concern in the discussion of AI ethics. In our second meeting, I wanted us to dive deeper into a contemporary use case to see what themes would emerge. To that end, our dialogue centered on the WSJ’s recent exposé on Facebook’s unwillingness to address problems with its algorithms. This was the case even after internal research clearly identified the problems and proposed solutions. While this is a classic case of how the imperative of profit can cloud ethical considerations, it also highlights how algorithms can create self-reinforcing vicious cycles that were never intended in the first place.

Here is a summary of our time together:

Elias: Today we are going to focus on social media algorithms. AI plays the role of directing the feed that everybody sees, and everybody sees something different. What I thought was interesting about the article is that in 2018 the company tried to make changes to improve the quality of engagement, but they turned out to do the exact opposite.

I experienced that first hand. A while back, I noticed that the controversial and angry comments were the ones getting attention. So, sometimes I would poke a little bit at some members of our community that have more orthodox views of certain things, and that would get more engagement. In doing so, I also reinforced the vicious cycle. 

That is where I want to center the discussion. There are so many issues we could discuss about Facebook algorithms. There is the technical side, there’s the legal side, and there’s also the philosophical side. At the end of the day, Facebook algorithms are a reflection of who we are, even if they are run by a few.

These are my initial thoughts. I was wondering if we can start with Davi on the legal side. Some of these findings highlight the need for regulation. The unfathomable power of social media companies can literally move elections. Davi, since shutting these companies down isn’t the answer, what are your thoughts on smart regulation? What would safeguard human dignity and help put guardrails around companies like Facebook?

By Pixabay

Davi: At least in the US, the internet is a wild west, with hardly any regulation or any type of influence over big tech. Companies have tried to self-regulate, but they don’t have the incentive to really crack down on this. It’s always about profits; they talk the talk but don’t walk the walk. At the end of the day, the stock prices speak louder.

In the past, I’ve been approached by Facebook’s recruiters. In the package offered, their compensation was relatively low compared to industry standards but they offered life-changing stock rights. Hence, stock prices are key not only to management but to many employees as well.

Oligarchies do not allow serious regulation. As I researched data and privacy regulation, I came across the common protections against discrimination, and most of the more serious cases are being brought to court through the Federal Trade Commission to protect consumers. Yet regulation is being done in a very random kind of way. There are some task forces in Congress working to come up with a regulatory framework. But this is mostly in Washington, where it’s really hard to get anything done.

Some states are trying to mimic what’s happening in Europe and bring in the concept of human dignity through comprehensive privacy laws like those of Europe. We have comprehensive privacy laws in California, Virginia, and Colorado. And every day I review new legislation. Last week it was Connecticut. They are all moving towards the European model. This fragmented approach is less than ideal.

Levi: I have a question for Davi. You mentioned European regulation, and from what I understand, because the EU represents such a large constituency, when it shapes policies for privacy it seems much easier for companies like FB to conform their overall strategy to fit EU policy, instead of making a tailored policy just for EU users, or California users, etc. Do you see that happening, or is the intent to diversify according to different laws?

Davi: I think that isn’t happening at Facebook. We can use 2010 as an example, when the company launched a face recognition tool that would recognize people in photos so you could tag them more easily, increasing interactions. There were a lot of privacy issues with this technology. They shut it down in Europe in 2011, and then in 2013 for the US. Then they relaunched it in the US. Europe has a big, influential authority, but that is not enough to buck financial interests.

As a lawyer, I think the best course of action would be for multinationals to base themselves on the strictest law and apply it to everyone the same way. That is the best option legal-wise, but then there can be different approaches.

Agile and Stock Options

Elias: Part of the issue with Facebook algorithms wasn’t about disparate impact per se. It was really about a systemic problem of increasing negativity. And one thing I thought about is how sentiment analysis is very common right now. With natural language processing, you can take a text and the computer can say “this person is happy” or “this person is mad.” It’s mostly binary: negative and positive. What if we could have, just like in the stock market, a safety switch for when values drop too fast? At a certain threshold, the switch shuts down the whole thing. So I wonder, why can’t we have a negativity shut-off switch? If the platform is just off the charts with negativity, why don’t we just shut it down? Shut down FB for 30 minutes and let people live their lives outside of the platform. That’s just one idea for addressing this problem.
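As a purely hypothetical sketch of that idea (this is no platform’s actual system; the window size, threshold, and toy sentiment function below are all invented), a negativity circuit breaker might look as simple as this:

```python
# Hypothetical "negativity circuit breaker": score each new post's
# sentiment, keep a rolling average, and trip the switch when average
# negativity crosses a threshold -- like a stock-market trading halt.

from collections import deque

WINDOW_SIZE = 1000   # number of recent posts to average over (invented)
THRESHOLD = -0.6     # trip when average sentiment falls below this (invented)

recent_scores = deque(maxlen=WINDOW_SIZE)

def sentiment_score(post: str) -> float:
    """Stand-in for a real sentiment model, returning -1.0 (negative)
    to +1.0 (positive); any off-the-shelf classifier could slot in here."""
    negative_words = {"hate", "liar", "terrible", "disgusting"}
    return -1.0 if any(w in post.lower() for w in negative_words) else 0.5

def should_halt_feed(post: str) -> bool:
    recent_scores.append(sentiment_score(post))
    average = sum(recent_scores) / len(recent_scores)
    return average < THRESHOLD  # True => pause the platform for a cooldown
```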

I want to invite members that are coming from a technical background. What are some technical solutions to this problem?

Maggie: I feel like a lot of it comes from some of the ingrained methodologies of agile gone wrong: that focus on the short term, iteration, and the particular OKRs that came out of Google, where you focus on outlandish goals and just push for the short term. Echoing what Davi said on the stock options of big companies: when you look at the packages, it’s mostly about stocks.

You have to get people thinking about the long term. But these people are also invested in these companies. Thinking long term isn’t the only answer, but I think a lot of the problems have to do with the short-term iteration baked into the process.

Noreen: Building on what Maggie said, a lot of it is the stock, in the sense that they don’t want to employ the number of people they would have to employ to really take care of this. Right now, AI is just not at a point where it can work by itself. It’s a good tool. But turning it loose by itself isn’t sufficient. If they really are going to get a handle on what’s going on in the platform, they need to hire a lot more people to work hand in hand with Artificial Intelligence, using it as a tool, not as a substitute for moderators who actually work with this stuff.

Maggie: I think Amazon would just blow up if you said that, because one of their core tenets is getting rid of humans.

But of course, AI doesn’t need a stock option; it’s cheap. I think this is why they are going in this direction. This is an excellent example of something I’ve written over and over: AI is a great tool but not a substitute for human beings. They would need to hire a bunch of people to work with the programs, tweaking them and overseeing them. And that is what they don’t want to do.

From Geralt on Pixabay

Transparency and Trade Secrets

Brian: This reminded me of how Facebook algorithms are using humans in a way, taking our participation and using it as data to train themselves. However, there’s no transparency that would help me participate in a way that’s going to be more constructive.

Thinking about the article Elias shared from the Wall Street Journal, there is that idea of ranking posts on Facebook. I had no idea that a simple like would get 1 point on the rank and a heart would get 5 points. If I had known that, I might have been more intentional about using hearts instead of thumbs up to increase positivity. Just that simple bit of transparency, letting us know how our actions affect what we see and how they affect the algorithm, could go a long way.
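To make the arithmetic concrete, here is a toy sketch of reaction-weighted ranking. The 1-point like and 5-point heart come from the WSJ report; every other detail here is invented for illustration:

```python
# Toy reaction-weighted ranking: each reaction type carries a weight,
# and posts are ordered by their weighted engagement score.

REACTION_WEIGHTS = {"like": 1, "heart": 5}  # weights per the WSJ report

def engagement_score(reactions: dict) -> int:
    """Sum weighted reaction counts for one post."""
    return sum(REACTION_WEIGHTS.get(name, 0) * count
               for name, count in reactions.items())

posts = {
    "post_a": {"like": 100, "heart": 0},  # 100 points
    "post_b": {"like": 20, "heart": 20},  # 120 points despite fewer reactions
}
ranked = sorted(posts, key=lambda p: engagement_score(posts[p]), reverse=True)
print(ranked)  # ['post_b', 'post_a'] -- hearts pull posts up the feed
```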

Davi: The challenge with transparency is trade secrets. Facebook algorithms are an example of a trade secret. Companies aren’t going to open up on their own, but perhaps we can provide them with support and protection while at the same time requiring this transparency. Technology companies tend to be very protective of their intellectual property; that is their gold, the only thing they have. This is a point where laws could really help both sides. Legislation that fosters transparency while still allowing trade secrets to be safeguarded would go a long way.

Maggie: Companies are very protective of this information. And I would gamble that they don’t know how it works themselves, because developers do not document things well. So people inside might not know either.

Frantisek: I’m from Europe, and as far as I can tell, regulation here at the national level, or directly from the EU, has a lot to do with taxation. They are trying to figure out how to tax big companies like Google, Amazon, and Facebook. It’s an attempt to not let the money stay somewhere else, but to actually have it taxed and paid in the EU. This is a big issue right now.

In relation to theology, this issue of regulation is connected with the issue of authority. We are also dealing with authority in theology and in all churches. Who is deciding what is useful and what isn’t?

Now the question is, what is the tradition of social networks? As we talk about Facebook algorithms, there is a tradition. Can we always speak of tradition? How is this tradition shaped?              

From the perspective of the user, I think there might be an issue with anonymous users. The problem with social media is that you can make up a nickname, make a mess, and get away with anything; that, at least, is what people think. Social networks are just an extension of normal life; as a Christian, I try to act the same way in cyberspace as I do in normal life. The love commandment from Jesus is to treat your neighbors well.

China and the EU jump ahead of the US in the Race for Ethical AI

Given the crucial role AI technologies are playing in the defense industry, it is no secret that leading nations will be seeking the upper hand. The US, China, and the EU have put forth plans to guide and prioritize research in this area. At AI Theology, we are interested in a very different kind of race, one that is less about technological supremacy and more about ensuring the flourishing of life. We call it the race for ethical AI.

What are governments doing to minimize, contain and limit the harm of AI applications? Looking at the leaders in this area, two show signs of making progress while one is still sitting on the sidelines. Though through different methods, China and the EU took decisive action to start addressing the challenge. The US has been awfully quiet in this area, even as most leading AI organizations have their headquarters on its soil.

The EU Tackles Facial Recognition

Last week, the European Parliament passed a ground-breaking resolution curbing the use of AI for mass surveillance and predictive policing based on behavioral data. In its language, the document calls out companies like Clearview for their controversial use of facial recognition in law enforcement. It also lists many examples in which this technology has erroneously targeted minorities. The legislative body also calls for greater transparency and human involvement in the decision process that comes from algorithms.

Photo by Maksim Chernishev on Unsplash

While not legally binding, this is a first and important step in regulating the use of computer vision in law enforcement. It is part of a bigger effort the EU is taking to draft regulations on AI addressing multiple applications of the technology. In this sense, the multi-state body becomes the pioneer in attempting to place guardrails on AI use, possibly becoming the standard for other countries to follow.

Even though ethical AI is not limited to regulation, government action can have a sweeping impact in curbing abuse and protecting the vulnerable. It also sends a strong signal to companies acting in this space that more accountability is on its way. This will likely force big tech and AI start-ups to take a hard look at how they develop products and deliver their services. In short, good legislation can be a catalyst for the type of change we need. In this way, the EU leaps forward in the race for ethical AI.

China Takes Steps towards Ethical AI

On the other side of the world, another AI leader put forth guidelines on the use of the technology. The document outlines principles for algorithm governance, protecting privacy, and giving users more autonomy over their data. Beyond its general significance, it is notable that the guidelines include language about making the technology “people-oriented” and appeal to common values.

Photo by Timothée Gidenne on Unsplash

The guidelines for ethical AI are part of a broader effort to rein in big tech power within the world’s most populous nation. Earlier this year, the government published a policy to better control recommendation algorithms on the Internet. This and other measures are sending a strong signal to China’s budding digital sector that the government is watching and will keep them accountable. Such a move also contributes to the centralization of power in the government in a way many western societies would not be comfortable with. However, in this case, the measures seem to align with the public good.

Regardless of how these guidelines will be implemented, it is notable that China is at the forefront in publishing them. It shows that Beijing is taking the threat of AI misuse seriously, at least when it is perpetrated by business enterprises.

US Fragmented Efforts

What about the North American AI leader? Unfortunately, to date, there is no sweeping national effort to address AI abuse in the US. This is not to say that nothing is happening. States like California and Illinois are working on legislation on data privacy and AI surveillance. Biden’s chief science advisor recently called for an AI Bill of Rights. In a previous blog, I outlined US efforts to address bias in facial recognition as well.

Yet nothing concrete has happened at a national level. The best we got was a former FB employee’s account of the company’s reluctance to curb AI abuse. It made for great television, but no sweeping legislation followed.

If there is a race for ethical AI, the North American competitor is behind. If this trend continues, AI ethics will be at the mercy of large company boardrooms in the Yankee nation. Company boards are never free of conflicts of interest, as the next quarter’s profit often takes precedence over human flourishing.

Self-regulation has not worked. It is time we move towards more active government intervention for the sake of the common good. This is a race the US cannot afford to sit out. It is time to hop on the track.

Placing Human Dignity at the Center of AI Ethics

In late August we had our kick-off Zoom meeting of the Advisory Board. This is the first of our monthly meetings where we will be exploring the intersection of AI and spirituality. The idea is to gather scholars, professionals, and clergy to discuss this topic from a multi-faceted view. In this blog, we publish a short summary of our first conversation. The key theme that emerged was a concern for safeguarding and upholding human dignity as AI becomes embedded in growing spheres of our lives. This preoccupation must inhabit the center of all AI discussions and be the guiding principle for laws, business practices, and policies.

Question for Discussion: What, in your perspective, is the most pressing issue in AI ethics in the next three to five years? What keeps you up at night?

Brian Sigmon: The values from which AI is being developed, and their end goals. What is AI oriented for? Usually in the US, it’s oriented towards profit, not oriented to the common good or toward human flourishing. Until you change the fundamental orientation of AI’s development, you’re going to have problems.

AI is so pervasive in our lives that we cannot escape it. We don’t always understand the logic behind it. It is often beneath the surface, intertwined with many other issues. For example, when I go on social media, AI controls what I see on my feed. It does not optimize for making me a better person but instead maximizes clicks and revenue. That, to me, is the key issue.

Elias Kruger: Thank you, Brian. To add some color to that, since the pandemic, companies have increased their investment in AI. This in turn is creating a corporate AI race that will further ensure the encroachment of AI across multiple industries. How companies execute this AI strategy will deeply shape our lives, not just here in the US but globally.

Photo by Chris Montgomery on Unsplash

Frantisek Stech: Coming from Eastern Europe, one of the greatest issues is the abuse of AI by authoritarian, non-democratic regimes for human control. In other words, it is the relationship between AI control and human freedom. Another practical problem is how people are afraid to lose their jobs to AI-driven machines.

Elias Kruger: Thanks Frantisek, as you know we are aware of what is happening in China with the merging of AI and authoritarian governments. Can you tell us a little bit about your area of the world? Is AI more government-driven or more corporate?

Frantisek Stech: In the Czech Republic, we belong to the EU, and therefore to the West. So, it is very much corporate-driven. Yet we are very close to our Eastern neighbors, and we are watching closely how things develop in Belarus and China especially, as they will inevitably impact our region of the world.

However, this does not mean we are free from danger here. There is the issue of manipulation of elections that started with the Cambridge Analytica scandal and issues with the presidential elections in the US. Now we are approaching elections in the EU, so there is a lot of discussion about how AI will be used for manipulation. So when people hear AI, they often associate it with politics. They may even think they are already being manipulated if they buy a phone with facial recognition. We have to be cautious but not completely afraid.

Ben Day: I am often pondering this question of how AI, or technology in general, relates to dignity and individual human flourishing. When we aggregate and manipulate data, we strip out individual human dignity, which is a Christian virtue, and begin to see people as compilations of data to be manipulated. It is really a threat to ontology, to our very sense of being. In effect, it is an assault on human dignity through AI.

Going further, I am interested in this question of how AI encroaches on our sense of identity. That is, how algorithms govern my entire exposure to media and news. Not just that, but AI impacts our whole social eco-verse, online and offline. What does that have to do with the nature of my being?

I often say that I have a very low view of humanity. I don’t think human beings are that great. And so, I fear that AI can manipulate the worst parts of human nature. That is an encroachment on human dignity.

In the Episcopal church, we believe that serving Christ is intimately connected with upholding the dignity of human beings. So, if we are turning a blind eye to human dignity being manipulated, then my Christian praxis compels me by moral obligation to do something about it.

Photo by Liv Merenberg on Unsplash

Elias Kruger: Can you give us a specific example of how this plays out?

Ben Day: Let me give you one example of how this affected my ministry. I removed myself from most social media in October 2016 because of what I was witnessing. I saw members of my church sparring on the internet, attacking each other’s dignity and intellect over politicized issues. The vitriol was so pervasive that I encountered a moral dilemma. As a priest, it is my duty to deny the sacrament to those who are in unrepentant sin.

So I would face parishioners only hours after these online spats and wonder whether I should offer them the sacrament. I was facing this conundrum as a result of algorithms manipulating feeds to foster angry engagement because it leads to profit. It virtually puts the church at odds with how these companies pursue profit.

Levi Checketts: I lived in Berkeley for many years, and the cost of living there was really high, largely because many people who worked in Silicon Valley or San Francisco were moving there. The influx of well-to-do professionals raised home prices in the area, forcing less fortunate existing residents to move out.

So, there is all this money going into AI. Of the five biggest companies by market cap, three are in Silicon Valley and two are in the Seattle area. Tech professionals often do not have full awareness of the impact their work is having on the rest of the world. For example, a few years back, a tech employee wrote an op-ed complaining about having to see disgusting homeless people on his way to work when he was paying so much for rent.

What I realized is that there is a massive disconnect between humanity and the people making decisions for companies that are larger than many countries’ economies. My biggest concern is that the people in charge of AI have many blind spots: an inability to empathize with those who are suffering, or even to notice the systems that breed oppression and poverty. To them, there is always a technical fix. Many lack the humility to listen to other perspectives, come mainly from male, Asian, and White backgrounds, and are often opposed to perspectives that challenge their work.

There have been high-profile cases recently, like Google firing a Black female researcher because she spoke up about problems in the company. The question Ben raised about human dignity in AI is very pressing. If we want to address it, we need people from different backgrounds making decisions and developing these technologies.

Furthermore, if we define AI as a being that makes strictly rational decisions, what about people who do not fit that mold?

The key questions are: where do we locate this dignity, and how do we make sure AI doesn’t run roughshod over humanity?

Davi Leitão: These were all great points that I had not thought about before. Thank you for sharing them with us.

All of these are important questions, and they drive the need for regulation and laws that will steer profit-driven corporations onto the right path. Privacy and data security laws stand on a set of principles adopted by the OECD in 1980. Today’s laws look to those principles and put them into practice. They are there to inform people and safeguard them from bias.

My question is: what are the blind spots in the FIPs (Fair Information Principles) that fail to account for the new issues technology has brought? It is a wide net to cast, but answering it can help guide many of the new laws to come. This is the only way to make companies care about human dignity.

Right now, there is a proliferation of state laws. But this brings another problem: customers in states with privacy regulations can suffer discrimination from companies based in other states. Therefore, there is a need for a uniform federal set of principles and laws about privacy in the US. The inconsistency between state laws keeps lawyers in business but ultimately harms the average citizen.

Elias Kruger: Thanks for this perspective. I think a good takeaway for the group would be to look for blind spots in these principles. AI is about algorithms and data, and data is fundamental: if we don’t handle it correctly, we can’t fix the result with algorithms.

My 2 cents is that, when it comes to AI applications, the one that concerns me most is facial recognition for surveillance and law enforcement. I don’t think there is any other application where a mistake can have such a devastating impact on the victim. When AI wrongly incriminates someone of a crime because an algorithm confused their face with the actual perpetrator’s, the individual loses their freedom. There is no way to recover from that.

This application calls for immediate regulation that puts human dignity at the center of AI so we can prevent serious problems in the future.

Thanks everybody for your time.

3 Effective Ways to Improve AI Ethics Discussions

Let’s face it: the quality of discourse in the blogosphere and on social media is dismal! What gets traffic is most often not the best representation of a topic but the most outrageous click-bait title. As a recent WSJ report suggests, content creators (including myself and this portal) face the constant temptation to trade well-crafted arguments for divisive pieces that emphasize controversy. When it comes to discussions on AI ethics, the situation is no different. Outrageous claims abound, while the nuanced conversations that would improve AI ethics discussions are rare.

It is time to raise the level of discourse on AI’s impact. While I am encouraged to see this topic get the attention it is getting, I fear the discussion is fraught with hyperbole and misinformation that degrade rather than improve dialogue. Consequently, most pieces lead to hasty conclusions rather than the thoughtful dialogue the topic requires. In this blog, I put forth three ways to improve the quality of dialogue in this space. By keeping them in mind, you can differentiate what is worth your attention from what can be ignored.

Impact is not the same as Intent

The narrative of big business or government seeking to hurt the little guy is an attractive one. We are hard-wired to choose simple explanations, and simple storylines of good versus evil fit the bill. Most often, they are a convenient front for our addiction to scapegoating: by locating all evil in one entity, we are excused from looking for evil among ourselves. Most importantly, they obscure the reality that a lot of evil happens as the unintended consequence of well-intended efforts.

When it comes to AI bias, I am concerned that too many stories imply a definite villain without probing further to understand systemic dynamics. Consider this article from TNW, titled “Stop Calling it Bias: AI is Racist.” The click-bait title should give you reason to pause. Moreover, the author assigns human intent to complex systems without probing further into the causes. This type of hyperbolic rhetoric does more harm than good, assigning blame to one group while ignoring the technical complexities of the issue at hand.

Photo by Christina @ wocintechchat.com on Unsplash

By inferring intent from impact, these pieces miss the opportunity to ask broader questions, such as: what environmental factors amplified the harmful impact? How could other actors, such as consumers, users, businesses, and regulators, play a part in mitigating risks in the future? What technical limitations helped cause or expand the problem? These are a few questions that can elevate the discussion of AI’s impact. Above all, they help us get past the idea that behind every harmful impact lies a morally deficient actor.

Generalizations Do More Harm than Good

Ironically, this is precisely the root of the problem. What do I mean by that? AI algorithms err because they rely on generalizations of past data. When they see new cases, they tend to, as the adage goes, “jump to conclusions.” Many times this has harmless consequences, such as recommending Strawberry Shortcake in my Netflix queue; other times it can cause serious harm.
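To illustrate the mechanism, here is a toy sketch, with entirely hypothetical viewing data, of how a model generalizes from a handful of past cases and confidently extends the pattern to someone it has never seen:

```python
# Toy sketch with hypothetical viewing data: six past cases, one pattern.
import numpy as np
from sklearn.linear_model import LogisticRegression

ages = np.array([[8], [9], [10], [35], [40], [42]])
liked_cartoons = np.array([1, 1, 1, 0, 0, 0])  # every kid so far liked them

model = LogisticRegression().fit(ages, liked_cartoons)

# A 20-year-old never appeared in the training data, yet the model has
# already generalized "young means cartoons" and jumps to a conclusion:
print(model.predict([[20]]))  # -> [1]
```

Harmless here; far less so when the same mechanism decides credit, parole, or policing.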

Yet, when it comes to articles on AI, the problem arises when the author takes one case and generalizes it to all cases. Consider this Forbes article about AI. It takes a few statements by Elon Musk, one study, and a lot of speculation to tell us that AI is dangerous. While some of the points are valid, the article does nothing to help us understand why exactly it is dangerous and what we can do about it. In that sense, it does more harm than good, giving the reader reasons for worry without grounding them in evidence or proposing solutions.

Taking anecdotal evidence (one case) devoid of statistical backing and stating it as the norm is deeply misleading. We pay attention to these stories because they tend to describe extreme cases of AI’s adverse impact. We should not dismiss them outright, but we should consider them in context and ask: how prevalent is the problem? Is it increasing or decreasing? What can be done to ensure such cases remain rare or non-existent? By staying at the level of anecdote, we miss the opportunity to understand the problem better, and we do very little to improve AI ethics discussions.

Show me the Data

This leads me to the third and final recommendation. Discussions on AI ethics must stand on empirical evidence. In fact, given that data is the foundation on which algorithms are built, data is also readily available to evaluate their impact. The accessibility of data is both an advantage and a reminder that transparency is crucial. This is probably one of the key roles regulators can play: ensuring that companies, governments, and NGOs make their data available to the public.

This is not limited to impact but extends to the inputs. That is, understanding the data algorithms train on is as important as understanding the downstream impact they create. For example, if past data shows that white applicants get higher approval rates for mortgages than people of color, guess what? The models will inevitably replicate this bias in their results. This is a problem that needs to be addressed at the front end rather than merely monitored in the outcomes.
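To make this concrete, here is a minimal sketch, using entirely synthetic data and a made-up “bias penalty,” of how a model trained on skewed historical approvals reproduces the same disparity in its own predictions:

```python
# Minimal sketch with synthetic data: a model trained on biased historical
# mortgage decisions reproduces the disparity. The 8-point penalty, the
# features, and the cutoff are all hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 10_000
group = rng.integers(0, 2, size=n)    # 0 = majority, 1 = minority (synthetic)
income = rng.normal(50, 10, size=n)   # same income distribution for both groups

# Historical approvals: one income cutoff, plus a biased penalty for group 1.
approved = (income - 8 * group + rng.normal(0, 5, size=n) > 50).astype(int)

X = np.column_stack([income, group])
model = LogisticRegression().fit(X, approved)
pred = model.predict(X)

for g in (0, 1):
    mask = group == g
    print(f"group {g}: historical rate {approved[mask].mean():.2f}, "
          f"model rate {pred[mask].mean():.2f}")
# The rates track each other closely: the model learned the bias, not the fix.
```

Note that simply dropping the group column would not fix this on its own: any feature correlated with group membership (a zip code, say) can let a model reconstruct the same pattern, which is why front-end scrutiny of the training data matters.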

Discussions on AI ethics must include statistical evidence for the issues at hand. That is, when presenting a problem, the claims must be accompanied by numbers. Consider this article from the World Economic Forum. It makes measured claims, avoids generalizations, and backs up each claim with research. It not only informs the reader but provides references for further reading. By doing so, it goes a long way toward improving AI ethics discussions.

Conclusion

It is encouraging to see the growing interest in AI. The public must engage with this topic, as it affects so many aspects of our lives. Expanding the dialogue and welcoming new voices to the table is critical to ensuring AI works toward human flourishing. With that said, it is now time to ground AI ethics discussions in real evidence and sober assessments of cause and effect. We must resist the temptations of scapegoating and lazy generalizing. Instead, we must pursue the careful path of relentless probing and examining evidence.

This starts with content creators. As those who frame the topic, we must do a better job of representing the issues clearly. We must avoid misleading statements that attract eyeballs but confuse minds. We must also commit to accurate reporting and transparency with our sources.

Can we take the challenge to improve AI ethics discussions? I hope so.

Developing an E-Bike Faith: Divine Power with Human Effort

What can technology teach us about faith? In a past blog, I spoke of the mystical qubit; before that, of how AI can expand our view of God. In this blog, I explore a different technology that is becoming a common fixture of our cities: e-bikes. A few weeks ago I bought a used one and have loved riding it ever since. For those wondering, you still get your exercise, minus the heart palpitations on the uphill climbs. But I digress: this is not a blog about the benefits of an e-bike but about how its hybrid nature can teach us about faith and spirituality.

Biking to Seminary

Eight years ago, we moved to sunny Southern California so I could attend seminary. We found a house about 5 miles from campus, which in my mind meant that I could commute by bike. The distance was reasonable, and the wonderful weather seemed to conspire in my favor. I could finally free myself from the shackles of motorized dependency.

On our first weekend there, I decided to go for a trial bike ride. The way to campus went by like a breeze: in no more than 15 minutes I was arriving at Fuller Seminary, beaming with delight. Yet, I had a nagging suspicion the way back home would be different. One thing that had not entered my calculations was that, though we were only 5 miles from campus, our house was in the foothills of Altadena. That meant the only way home was uphill. The first 2 miles were bearable, yet by mile 3 my legs were giving out. I eventually made it back home drenched in sweat and disappointment.

It became clear that this was not a ride I could take often. My dreams of biking to seminary ended that day. Back to the gasoline cage for the rescue: not as exciting, but definitely more practical.

Photo by Federico Beccari on Unsplash

Divine Electricity

We now live in the Atlanta area and often go to Chattanooga for day trips. This charming Tennessee jewel offers a beautiful riverfront with many attractions for families like ours. Like many cities seeking to attract Millennials, it offers a network of public bikes for a small cost. Among them, I noticed some e-bikes. For a while I had been curious to try one, but not curious enough to shell out the thousands of dollars they cost. Timidly, I picked one for a leisurely ride in the city.

From the beginning, I could sense the difference. I still had to pedal, just as I would on a regular bike. Yet as I pedaled, it was like I got a little push that made my pedaling more effective. I would dash by other bikers, glancing back at them triumphantly. I then decided to test it on a hill. Would the push sustain, or eventually fizzle out against gravity?

To my contentment, that was when the e-bike shined. Those accustomed to biking know that right before going uphill you pedal fast to gain as much speed as you can. As you start climbing, you switch to lower gears until the bike is barely moving while you pedal intensely, making up for the lighter gearing by tripling your pedal rotations. It can be demoralizing to pedal like a maniac but move like a turtle, which is why many dismount and walk. It is as if all that effort dissipates under the gravitational pull on the bike.

Pedaling uphill on an e-bike is a completely different experience. First, there is no need to maximize your speed coming into the hill. You pedal normally, and as the bike slows down, the electric motor kicks in to propel you forward. You end up keeping the same speed while pedaling at the same rate.

Goodbye frantic-pedaling-slow-going uphill, hello eternal-e-bike-flatlands

It is as if the hand of God is pushing you from behind when your leg muscles can no longer keep up the speed. Going up is no longer a drag but a thrill, all thanks to the small electric motor in the back wheel, capable of pushing a grown man and a 50 lb bike uphill.

Humanity Plus

If I could change one thing in the Western Christian tradition, it would be the persistent and relentless loathing of humanity. From very early on, and at times even in the biblical text, there is a tendency to make humans look bad in order to make God look good. The impetus stems from a desire to curb our constant temptation to hubris. Sure, we all, especially those whom society puts on a pedestal, need to remember our puny frailty lest we overestimate our abilities.

Yet, we are mysteriously beautiful and unpredictable. Once I let go of this indoctrinated loathing, I could face this intricate concoction of flesh in a whole new way. Humanity is a spectacular outcome of an insanely long and painful process of evolution. In fact, that is what often leads us back to belief in God. The lucid beauty of our humanity is what points us to the invisible divine.

This loathing of humanity often translates into a confusing and ineffective grace-versus-works theology. Stretching the Pauline letters in ways never intended by the beloved apostle, theologians have produced miles of literature on the topic. While some of it is helpful (maybe 2%, who knows?), most of it devolves into a tendency to deny the role of human effort in spirituality. In an effort to counter transactional legalism, many overshoot by overemphasizing divine activity in the process. This is unfortunate, because removing the role of human effort in spirituality is a grave mistake. We need both.

Photo by Fabrizio Conti on Unsplash

The Two Sides of Spiritual Growth

Human empowerment plays a pivotal role in a healthy spirituality. If pride is a problem, so is its passive-aggressive counterpart, self-loathing. An inordinately negative view of self does not lead to godliness; it is a sure path to depression. Along with a realistic view of self comes the understanding that human effort is key to accomplishing things on this earth.

Yet, just like pedaling uphill, human effort can only take you so far. Sometimes you need a divine push. For a long time, I thought divine empowerment worked independently of human effort. What if it is less like a car and more like an e-bike? That is, you still need to pedal, tending to this earth and lifting fellow humans from the curse of entropy. Yet, as you faithfully do so, you are propelled by divine power to reach new heights.

Had e-bikes existed 8 years ago, my idea of commuting to seminary would have been viable. I could have conquered those grueling hills of Altadena with elegant pedaling. I would have made it home without breaking a sweat and could still have kissed my wife and kids without repelling them with my body odor. It would have been glorious.

Conclusion

Human effort without divine inspiration is not much different from trying to bike uphill. It begins with a burst of concentrated effort, only to leave us straining profusely with little movement. Engaging the world without sacred imagination can, and often will, lead to burnout.

As we face mounting challenges from a stubborn pandemic that relentlessly destroys our plans, let’s hold on to an e-bike faith: one that calls us to action fueled by divine inspiration, one that reminds us of our human limitations but focuses on a limitless God. That is when we can soar to new heights as divine electricity propels us into new beginnings.