Latest on Ethics, Democratization of AI, and the War in Ukraine

There is a lot happening in the world of AI. In this short update we explore AI ethics, democratization, and tech updates from the war in Ukraine. For more on the latter, check out our recent piece where we dove into how AI is changing the landscape of warfare and possibly tilting the balance of power to smaller actors.

Let me begin with wise words from Andrew Ng, taken from his recent newsletter:

When developers write software, there’s an economic temptation to focus on serving people who have power: How can one show users of a website who have purchasing power an advertisement that motivates them to click? To build a fairer society, let’s also make sure that our software treats all people well, including the least powerful among us.

Andrew Ng

Yes, Andrew. That is what AI theology is all about: rethinking how we do technology to build a world where all life can flourish.

Next Steps in the Democratization of AI

When we talk about democratization of AI, it is often in the context of spreading AI knowledge and benefits to the margins. However, it also means extending AI beyond the technical divide, enabling those with little technical ability to use AI. Though many AI and data science courses have sprung up in recent years, machine learning continues to be the practice of a few.

Big Tech is trying to change that. New Microsoft and Google tools allow more and more users to train models without code. As machine learning becomes a point-and-click affair, I can only imagine the potential of such developments as well as the danger they bring. The prospect of harnessing insight from millions of spreadsheets is promising. It could boost productivity and help many advance in their careers.
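To make concrete just how much these point-and-click tools hide, here is a minimal sketch of the kind of pipeline a no-code platform automates behind its interface. This is illustrative only: the file name, target column, and model choice are hypothetical stand-ins, not the internals of any specific Microsoft or Google product.

```python
# Illustrative sketch of what a "no-code" ML tool automates behind a
# point-and-click interface. File and column names are hypothetical.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("customer_records.csv")  # the user's uploaded spreadsheet
target = "churned"                        # the column the user clicks on

X = pd.get_dummies(df.drop(columns=[target]))  # crude automatic encoding of text columns
y = df[target]

# Hold out a test split, fit a sensible default model, report accuracy.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print("Held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))
```

A no-code platform wraps each of these steps (upload, target selection, encoding, training, evaluation) in a button. That is precisely the promise and the danger: the mechanics disappear, but the questions about what is in the data do not.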

Photo from pexels.com

One thing is for certain in AI applications: while coding may be optional, ethical reflection will never be. That is why, here in AI Theology, we are serious about expanding the dialogue to the non-technical masses. A good starting point for anyone seeking to better understand AI technologies is our guide. There you can find just enough information to have a big picture view of AI and its applications.

Trends in AI Ethics

The AI Index report from Stanford University has good news: AI ethics has become a thing! The topic is no longer restricted to academia but is now commonplace in industry-funded research. It is becoming part of mainstream organizations. Along with that, legislative efforts to regulate AI have also increased, with Spain, the UK, and the US leading the way.

Furthermore, in the US, the FTC is levying penalties on companies that build models on improperly acquired data. In one of the latest instances, Weight Watchers had to destroy its algorithms developed on this type of data. This represents a massive loss for companies. Developing and deploying these models cost millions of dollars, and algorithm destruction prevents organizations from realizing their benefits.

This is an interesting and encouraging development. The threat of algorithm destruction could lead to more responsible data collection and retention practices. Data governance is a key foundation for ethical AI that no one (except for lawyers, of course) wants to talk about. With that said, ensuring good collection practices is not enough to address inherent bias in existing data.

War in Ukraine

A Zelensky deepfake was caught early, but it will likely not be the last. This is just a taste of what is to come as a war on the ground translates into a war of propaganda and cyber attacks. In the meantime, Russia is experiencing a tech worker exodus which could have severe consequences for the country’s IT sector for years to come.

Photo by Katie Godowski from Pexels

On the Ukrainian side, thousands continue to join the cyber army as Anonymous (the world’s largest hacking group) officially declared war on Russia. Multinational tech companies are also lining up to hire Ukrainian coders fleeing their homeland. Yet, challenges remain around work visas as European countries struggle to absorb the heavy influx of refugees.

The war in Ukraine has been a global conflict from the start. Yet, unlike the major wars of the 20th century, the global community is overwhelmingly picking one side and fighting on multiple fronts outside of military action. While this global solidarity with the invaded nation is encouraging, it also raises the prospect of military combat spilling into other countries.

Human Mercy is the Antidote to AI-driven Bureaucracy

If bureaucracies are full of human cogs, what’s the difference in replacing them with AI?

(For this entry, continuing the main topic of classifications by machines vs. humans, we consider classifications and the judgments that accompany them, given their prospect of making life-altering decisions. It is inspired by a sermon given today by pastor Jim Thomas of The Village Chapel, Nashville, TN.)

Esau’s Fateful Choice

In Genesis 25:29–34 we see Esau, the firstborn son of Isaac, coming in from the fields famished. Finding his younger brother Jacob in possession of a hearty stew, Esau pleads for some. My paraphrase follows: Jacob replies, “First you have to give me your birthright.” “Whatever,” says Esau, “you can have it, just gimme some STEWWW!” …And thus Esau sold his birthright for a mess of pottage.

Simply put, it is a bad idea to make major, life-altering decisions while in a stressed state, examples of which are often drawn from the acronym HALT:

  • Hungry
  • Angry
  • Lonely
  • Tired

Sometimes HALT becomes SHALT by adding “Sad”.

When we’re in these (S)HALT states, our brains rely on quick inferences “burned” into them either via instinct or training. Dual Process Theory in psychology calls this “System 1” or “Type 1” reasoning (cf. Kahneman, 2003; Strack & Deutsch, 2004). System 1 includes the fight-or-flight response. While System 1 is fast, it is also prone to errors and oversimplification, and it operates on biases such as stereotypes and prejudices.

System 1 relies on only a tiny subset of the brain’s overall capacity, the part usually associated with the involuntary and regulatory systems of the body governed by the cerebellum and medulla, rather than the cerebrum with its higher-order reasoning capabilities and creativity. Thus, trying to make important decisions while in a System 1 state (if they’re not immediate and life-threatening) is inadvisable if waiting is possible.

At a later time we may be more relaxed, content, and able to engage in so-called System 2 reasoning, which is able to consider alternatives, question assumptions, perform planning and goal-alignment, display generosity, seek creative solutions, etc.

Hangry Computers Making Hasty Decisions

Machine Learning systems, other statistics-based models, and even rule-based symbolic AI systems, as sophisticated as they may currently be, are at best operating in a System 1 capacity — to the extent that the analogy to the human brain holds (See, e.g., Turing Award winner Yoshua Bengio’s invited lecture at NeurIPS 2019: video, slides.)

This analogy between human System 1 and AI systems is the reason for this post. AI systems are increasingly serving as proxies for human reasoning, even for important, life-altering decisions. And as such, news stories appear daily with instances of AI systems displaying bias and unjustly employing stereotypes.

So if humans are discouraged from making important decisions while in a System 1 state, and machines are currently capable of only System 1, then why are machines entrusted with important decision-making responsibilities? This is not simply a matter of companies choosing to offer AI systems for speed and scale; governments do this too.

Government is a great place to look to further this discussion, because government bodies are chock full of humans making life-altering decisions (for others) based on System 1 reasoning: tired people implementing decisions according to procedures and rules — bureaucracy.[1] In this way, whether it is a human being following a procedure or a machine following its instruction set, the result is quite similar.

Human Costs and Human Goods

Photo by Harald Groven from Flickr.com

The building of a large bureaucratic system provides a way to scale and enforce a kind of (to borrow from AI Safety lingo) “value alignment,” whether for governments, companies, or non-profits. The movies of Terry Gilliam (e.g., Brazil) illustrate these excesses well through a vast office complex of desk after desk of office drones. The socio-political theorist Max Weber, who advanced many of our conceptions of bureaucracy as a positive means to maximize efficiency and eliminate favoritism, was aware of the danger of excess:

“It is horrible to think that the world could one day be filled with nothing but those little cogs, little men clinging to little jobs and striving towards bigger ones… That the world should know no men but these: it is in such an evolution that we are already caught up, and the great question is, therefore, not how we can promote and hasten it, but what can we oppose to this machinery in order to keep a portion of mankind free from this parcelling-out of the soul, from this supreme mastery of the bureaucratic way of life.”

Max Weber, Gesammelte Aufsaetze zur Soziologie und Sozialpolitik, p. 412 (1909).

Thus, by outsourcing some of this drudgery to machines, we can “free” some workers from having to serve as “cogs.” This bears some similarity to the practice of replacing human assembly-line workers with robots in hazardous conditions (e.g., welding, toxic environments), except that in the bureaucratic sense we are removing people from mentally or emotionally taxing situations. Yet one may ask what the other costs of such an enterprise may be, if any: if the system is already “soulless,” then what do we lose by having the human “cogs” in the bureaucratic machine replaced by machines?

The Heart of the Matter

So, what is different about machines doing things, specifically performing classifications (judgments, grading, etc.) as opposed to humans?

One difference between the automated and human forms of bureaucracy is the possibility of discretionary action on the part of humans, such as the demonstration of mercy in certain circumstances. God exhorts believers in Micah 6:8 “to love mercy.” In contrast, human bureaucrats merely going through the motions of following their organization’s rules can produce what Hannah Arendt termed “the banality of evil,” typified in her portrayal of Nazi war criminal Adolf Eichmann, whom she described as “neither perverted nor sadistic,” but rather “terrifyingly normal.”

“The sad truth of the matter is that most evil is done by people who never made up their minds to be or do either evil or good.”

Hannah Arendt, The Life of the Mind, Volume 1: Thinking, p. 180 (1977).

Here again we see the potential for AI systems, as the ultimate “neutral” rule-followers, to facilitate evil on massive scales. So if machines could somehow deviate from the rules and show mercy on occasion, how would that even work? Which AI researchers are working on the “machine ethics” issue of determining when and how to show mercy? (At the time of writing, this author is unaware of such efforts.) Given that human judges have a tendency to show favoritism and bias by selectively granting mercy to certain ethnicities more than others, and that automated systems have shown bias even in rule-following, would the matter of “mercy” simply be a new opportunity for automated unfairness? It is a difficult issue with no clear answers.

Photo by Clay Banks on Unsplash

The Human Factor

One other key, if pedantic, difference between human and machine “cogs” is the simple fact that with a human being “on the line,” you can try to break out of the limited options presented by menus and if-then decision trees. Even the latest chatbot helper interfaces currently deployed are nothing more than natural language front ends to menus. With a human being, by contrast, you can explain your situation, and they can (hopefully) work with you or connect you to someone with the authority to do so.
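As an illustration of the “front end to menus” claim, here is a minimal sketch of such a chatbot. Everything in it (the intents, keywords, and replies) is hypothetical; a production system would swap the keyword matching for a statistical intent classifier, but the fixed tree of outcomes remains the same.

```python
# Minimal sketch of a "chatbot" that is really a natural-language front
# end to a fixed menu. All intents, keywords, and replies are hypothetical.
from typing import Optional

MENU_TREE = {
    "billing": "Your balance and payment options are at example.com/billing.",
    "hours": "Our offices are open 9am-5pm, Monday through Friday.",
    "cancel": "To cancel your account, please fill out form C-100.",
}

# Keyword lists stand in for the statistical intent classifier a real
# system would use; the overall structure is the same either way.
INTENT_KEYWORDS = {
    "billing": ["bill", "charge", "payment", "refund"],
    "hours": ["hours", "open", "closed"],
    "cancel": ["cancel", "close my account"],
}

def classify_intent(message: str) -> Optional[str]:
    """Map free-form text onto one of the fixed menu options."""
    text = message.lower()
    for intent, keywords in INTENT_KEYWORDS.items():
        if any(kw in text for kw in keywords):
            return intent
    return None  # nothing in the tree matched

def respond(message: str) -> str:
    intent = classify_intent(message)
    if intent is None:
        # The crucial limitation: there is no path outside the tree.
        return "Sorry, I didn't understand. Please choose: billing, hours, or cancel."
    return MENU_TREE[intent]

print(respond("I was charged twice and want a refund"))           # billing branch
print(respond("My case is unusual. Can you make an exception?"))  # fallback
```

However sophisticated the language matching becomes, every conversation still terminates at one of the pre-authored leaves; the human agent’s discretion to step outside the tree has no counterpart here.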

I suspect that in the next ten years we will see machine systems making increasing forays into System 2 reasoning categories (e.g., causality, planning, self-examination). I’m not sure how I feel about the prospect of pleading with a next-gen chatbot to offer me an exception because the rule shouldn’t apply in this case, or some such. 😉 But it might happen — or, more likely, such a system will decide whether to kick the matter up to a real human.

Summary

We began by talking about Jacob and Esau: Jacob, the creative, swindling deal-broker, and Esau, who quite literally “goes with his gut.” Then we talked about reasoning according to the two systems described by Dual Process Theory, noting that machines currently can do System 1 quite well. The main question was: if humans make numerous erroneous and unjust decisions in a System 1 state, how do we justify the use of machines? The easy answers available seem to be a cop-out: the incentives of scale, speed, and lower cost. And this is not just “capitalism”; these incentives would still be drivers in a variety of socio-economic situations.

Another answer came in the form of bureaucracy, where System 1 decision-making already exists, albeit with humans as operators. We explored what is different between a bureaucracy implemented via humans vs. machines. We realized that what is lost is the humans’ ability to transcend, if not their authority in the organization, at least the rigid and deficient set of software designs imposed by vendors of bureaucratic IT systems. Predicting how the best of these systems will improve in the coming years is hard. Yet, given the prevalence of shoddy software in widespread use, I expect to prefer talking to a human in Mumbai over “Erica,” the Bank of America chatbot, for quite some time.


[1]    Literally “government by the desk,” a term coined as a pejorative by the 18th-century French economist Jacques Claude Marie Vincent de Gournay; it has since entered common usage.

Scott H. Hawley, Ph.D., Professor of Physics, Belmont University. Webpage: https://hedges.belmont.edu/~shawley/

Acknowledgment: The author thanks L.M. Sacasas for the helpful conversation while preparing this post.

The Nature of Technology: Our Source of Fear and Hope

In previous blogs, I contrasted the critical view of Ellul toward technology with the more hopeful outlook of Teilhard. In this piece, I want to offer a third view of a technological age that is more detached yet still useful for our discussion: W. Brian Arthur’s emphasis on the link between technology and nature. If the positive and negative value judgments offered by Teilhard and Ellul form two ends of a spectrum, Arthur’s alternative takes us outside that spectrum. He provides a more neutral evaluation of what technology is and how humankind should approach it.

Hailing from the prestigious Santa Fe Institute, economist W. Brian Arthur was one of the first academics to tackle the question of how technology evolves. In his 2009 seminal work, The Nature of Technology: What It Is and How It Evolves, Arthur sketches the theoretical contours of how technology emerges. Using examples from the last two centuries, he builds a comprehensive case to show how new technologies build upon previous technologies, similar in nature but also with their own particularities. This book is a valuable resource for anyone seeking a deep-dive, theoretical perspective on the topic of technology’s emergence and evolution.

Given Arthur’s theoretical and technical approach, what could such a detached view contribute to our discussion on the technological age? How can his observations and framework inform a broader analysis of technology’s impact on society? I would like to highlight two main insights from the book that provide further nuance to our discussion.

https://www.simonandschuster.com/books/The-Nature-of-Technology/W-Brian-Arthur/9781416544067

Nature and Purpose

Arthur defines technology as an effort to harness natural phenomena for a purpose. If we break down this definition, two main insights emerge. The first is the recognition that technology is intricately tied to nature. That is, the technology we have today is only here because nature provided the conditions, parameters, and materials for it to exist. Technology does not exist in a vacuum, and it is not self-referential. To make it work, one must understand, at its foundation, the laws of nature that govern our planet, or at least enough of them to use and direct them for a purpose. This is why the marriage of science and technology that started in the 20th century has been so effective. Our understanding of nature grew by leaps and bounds in the last century, and so did technology.

Underneath this point, there is also a surprising realization. Because of nature’s primacy, it does not need technology to live on. After all, nature has progressed for billions of years on Earth without the help of human technologies. It can continue to do so in spite of the absence or failure of technologies in the future. Technology, on the other hand, cannot exist without nature. That is, if we move to another planet, all our technologies must be reconfigured or redesigned to fit into a new world.

Photo by TeeFarm on pixabay.com

The second insight from Arthur’s definition of technology is equally illuminating. That is, technology starts with a purpose. If we press this question further, it often starts with a problem yearning to be resolved. The builders begin with a clear end goal, exploring the approach that will best reach that goal. In this way, it is very similar to evolution in nature, an iterative process always in search of the best way to enhance and perpetuate life in a given environment.

Source of Fear and Hope

Arthur’s work stays on the technical and theoretical level for the vast majority of the book. He goes into detail not just to explain his theory but also to demonstrate it with examples of how specific technologies evolved. Yet, towards the end, the book takes a more reflective tone. In that part, the author talks about the relationship humans have with technology that goes beyond just building it.

Arthur also evaluates human attitudes towards technology in pop culture, and more specifically in science fiction. There he finds an interesting paradox: humans both fear technology and place their hope in it. On the one hand, the technological artifacts that surround our lives give us a sense that they are unnatural, artificial. They are not always intuitive, nor do they blend well with our environments. They feel strange to bodies that evolved for millions of years without their aid. We are unsettled by them. We fear them.

On the other hand, we often place our hope in technology. Nature can harm us or limit us, and technology promises to help us harness nature in ways that allow us to surpass those limitations. In a world where nature has lost its enchantment, we turn our adoring eyes to technology. We look to technical solutions to calm our fears, reduce our anxieties, and provide comfort and distraction from the harsh realities of life. Though the author does not go that far, I would say that the cult of technology has become a religion in and of itself.

Conclusion

Where Ellul approaches technology with pessimism and Teilhard with optimism, Arthur’s perspective allows for both. The paradox of fear and hope undergirds and defines our technological age. There is hope that as technology advances, human suffering and death will diminish. There is also a profound sense of loss and a nagging desire to return to nature, the starting point for our bodies. That uneasiness is hard to shake off.

Above all, Dr. Arthur highlights technology’s dependence on nature. This is a remarkable insight that leads us back into reflections on identity and connection. If technology is dependent on nature, then one could argue that it is an extension of nature, just like we are. If that is the case, then it is time to remove the illusion of artificial versus natural. It is all natural.

If we see technology and nature as a continuum, we can enrich our conversation about technology. It is no longer a foreign agent that we need to deal with, but a reflection of who we are.

Warfare AI in Ukraine: How Algorithms are Changing Combat

There is a war in Europe, again. Two weeks in, and the world is watching in disbelief as Russian forces invade Ukraine. While the conflict is still confined to the two nations, the proximity to NATO nations and the unpredictability of the Russian autocrat have given the world the jitters. It is too soon to speak of WWIII, but the prospect is now closer than it has ever been.

No doubt this is the biggest story of the moment, with implications that span multiple levels. In this piece, I want to focus on how it is impacting the conversation on AI ethics. This encompasses not only the potential for AI weapons but also the involvement of algorithms in cyber warfare and in addressing the refugee crisis that results from it. In a previous blog, I outlined the first documented uses of AI in an armed conflict. This instance requires a more extensive treatment.

Andrew Ng Rethinks AI Warfare

In the AI field, few command as much respect as Andrew Ng. Former Chief Scientist of Baidu and co-founder of Google Brain, he has recently shifted his focus to education and helping startups lead innovation in AI. He prefaces his most recent newsletter this way:

I’ve often thought about the role of AI in military applications, but I haven’t spoken much about it because I don’t want to contribute to the proliferation of AI arms. Many people in AI believe that we shouldn’t have anything to do with military use cases, and I sympathize with that idea. War is horrific, and perhaps the AI community should just avoid it. Nonetheless, I believe it’s time to wrestle with hard, ugly questions about the role of AI in warfare, recognizing that sometimes there are no good options.

Andrew Ng

He goes on to explain how, in a globally connected world where a lot of code is open-source, there is no way to ensure these technologies will not fall into the wrong hands. Andrew Ng still defends recent UN guidance affirming that a human decision-maker should be involved in any warfare system. The thought leader likens it to the treatment of atomic weapons, where a global body audits and verifies national commitments. In doing so, he opens the door for the legitimate development of such weapons as long as there are appropriate controls.

Photo by Katie Godowski from Pexels

Andrew’s most salient point is that this is no longer a conversation we can avoid. It needs to happen now. It needs to include military experts, political leaders, and scientists. Moreover, it should include a diverse group of members from civil society as civilians are still the ones who suffer the most in these armed conflicts.

Are we ready to open this Pandora’s box? This war may prove that it has already been opened.

AI Uses in Ukraine’s War

While much is still unclear, reports are starting to surface on some AI uses on both sides of the conflict. Ukraine is using semi-autonomous Turkish-made drones that can drop laser-guided bombs. A human operator is still required to pull the trigger, but the drone can take off, fly, and land on its own. Russia is opting for kamikaze drones that literally crash into their targets after finding and circling them for a bit. This is certainly a terrifying sight, straight out of sci-fi movies: a predator machine that will hunt down and strike its enemies with cold precision.

Yet, AI uses are not limited to the battlefield. 21st-century wars are no longer fought with guns and ammunition only but now extend to bits and bytes. Russian troll farms are creating fake faces for propagandist profiles. They understand that any military conflict in our age is followed by an information war to control the narrative. Hence, bots and other automated posting mechanisms come in handy in a situation like this.

Photo by Tima Miroshnichenko from Pexels

Furthermore, there is a parallel and very destructive cyber war happening alongside the war in the streets. From the very beginning of the invasion, reports surfaced of Russian cyberattacks on Ukraine’s infrastructure. Multi-national cyber defense teams have also formed to counteract and stop such attempts. While cyber-attacks do not always entail AI techniques, the effort to stop them or to scale them most often does. This ensures AI will be a vital part of the current conflict.

Conclusion

While I would hope we could guarantee a war-free world for my children, this is not a reality. The prospect of war will continue and therefore it must be part of our discussions on AI and ethics. This becomes even more relevant as contemporary wars are extending into the digital sphere in unprecedented ways. This is uncharted territory in some ways. In others, it is not, as technology has always been at the center of armed conflict.

As I write this, I pray and hope for a swift resolution to the conflict in Ukraine. Standing with the Ukrainian people and against unilateral aggression, I hope that a mobilized global community will be enough to stop a dictator. I suspect it will not. In the words of the wise prophet Sting, we all hope the Russians love their children too.