Human Mercy is the Antidote to AI-driven Bureaucracy

If bureaucracies are full of human cogs, what difference does it make if we replace them with AI?

(Following this blog’s main topic of classification by machines vs. humans, this entry considers classifications and their union with judgments, particularly their potential to produce life-altering decisions. It was inspired by a sermon given today by pastor Jim Thomas of The Village Chapel, Nashville, TN.)

Esau’s Fateful Choice

In Genesis 25:29–34 we see Esau, the firstborn son of Isaac, coming in from the fields famished. Finding his younger brother Jacob in possession of a hearty stew, Esau pleads for some. My paraphrase follows:   Jacob replies, “First you have to give me your birthright.”   “Whatever,” says Esau, “You can have it, just gimme some STEWWW!” …”And thus Esau sold his birthright for a mess of pottage.”

Simply put, it is a bad idea to make major, life-altering decisions while in a stressed state, examples of which are often drawn from the acronym HALT:

  • Hungry
  • Angry
  • Lonely
  • Tired

Sometimes HALT becomes SHALT by adding “Sad”.

When we’re in these (S)HALT states, our brains operate by relying on quick inferences “burned” into them either via instinct or training. The Dual Process Theory of psychology calls this “System 1” or “Type 1” reasoning (cf. Kahneman, 2003; Strack & Deutsch, 2004). System 1 includes the fight-or-flight response. While System 1 is fast, it is also prone to making errors, oversimplifying, and operating on the basis of biases such as stereotypes and prejudices.

System 1 relies on only a tiny subset of the brain’s overall capacity, the part usually associated with involuntary and regulatory systems of the body governed by the cerebellum and medulla, rather than the cerebrum with its higher-order reasoning capabilities and creativity. Thus trying to make important decisions (if they’re not immediate and life-threatening) while in a System 1 state is inadvisable if waiting is possible.

At a later time we may be more relaxed, content, and able to engage in so-called System 2 reasoning, which is able to consider alternatives, question assumptions, perform planning and goal-alignment, display generosity, seek creative solutions, etc.

Hangry Computers Making Hasty Decisions

Machine Learning systems, other statistics-based models, and even rule-based symbolic AI systems, as sophisticated as they may currently be, are at best operating in a System 1 capacity — to the extent that the analogy to the human brain holds (See, e.g., Turing Award winner Yoshua Bengio’s invited lecture at NeurIPS 2019: video, slides.)

This analogy between human System 1 and AI systems is the reason for this post. AI systems are increasingly serving as proxies for human reasoning, even for important, life-altering decisions. And as such, news stories appear daily with instances of AI systems displaying bias and unjustly employing stereotypes.

So if humans are discouraged from making important decisions while in a System 1 state, and machines are currently capable of only System 1, then why are machines entrusted with important decision-making responsibilities? This is not simply a matter of companies choosing to offer AI systems for speed and scale; governments do this too.

Government is a great place to look to further this discussion, because government bodies are chock full of humans making life-altering decisions (for others) based on System 1 reasoning: tired people implementing decisions based on procedures and rules, i.e., bureaucracy.[1] In this way, whether it is a human being following a procedure or a machine following its instruction set, the result is quite similar.

Human Costs and Human Goods

Photo by Harald Groven, taken from Flickr.com

The building of a large bureaucratic system provides a way to scale and enforce a kind of (to borrow from AI Safety lingo) “value alignment,” whether for governments, companies, or non-profits. The movies of Terry Gilliam (e.g., Brazil) well illustrate the excesses of this, depicting vast office complexes of desk after desk of office drones. The socio-political theorist Max Weber, who advanced many of our conceptions of bureaucracy as a positive means to maximize efficiency and eliminate favoritism, was aware of the danger of excess:

“It is horrible to think that the world could one day be filled with nothing but those little cogs, little men clinging to little jobs and striving towards bigger ones… That the world should know no men but these: it is in such an evolution that we are already caught up, and the great question is, therefore, not how we can promote and hasten it, but what can we oppose to this machinery in order to keep a portion of mankind free from this parcelling-out of the soul, from this supreme mastery of the bureaucratic way of life.”

Max Weber, Gesammelte Aufsätze zur Soziologie und Sozialpolitik, p. 412 (1909).

Thus by outsourcing some of this drudgery to machines, we can “free” some workers from having to serve as “cogs.” This bears some similarity to the practice of replacing human assembly-line workers with robots in hazardous conditions (e.g., welding, toxic environments), whereas in the bureaucratic sense we are removing people from mentally or emotionally taxing situations. Yet one may ask what the other costs of such an enterprise may be, if any: If the system is already “soulless,” then what do we lose by having the human “cogs” in the bureaucratic machine replaced by machines?

The Heart of the Matter

So, what is different about machines doing things, specifically performing classifications (judgments, grading, etc.) as opposed to humans?

One difference between the automated and human forms of bureaucracy is the possibility of discretionary action on the part of humans, such as the demonstration of mercy in certain circumstances. God exhorts believers in Micah 6:8 “to love mercy.” In contrast, human bureaucrats merely going through the motions of following the rules of their organization can produce what Hannah Arendt termed “the banality of evil,” typified in her portrayal of Nazi war criminal Adolf Eichmann, whom she described as “neither perverted nor sadistic,” but rather “terrifyingly normal.”

“The sad truth of the matter is that most evil is done by people who never make up their minds to be or do evil or good.”

Hannah Arendt, The Life of the Mind, Volume 1: Thinking, p. 180 (1977).

Here again we see the potential for AI systems, as the ultimate “neutral” rule-followers, to facilitate evil on massive scales. So if machines could somehow deviate from the rules and show mercy on occasion, how would that even work? Which AI researchers are working on the “machine ethics” issue of determining when and how to show mercy? (At the time of writing, this author is unaware of such efforts.) Given that human judges have a tendency to show favoritism and bias, selectively granting mercy to certain ethnicities more than others, and that automated systems have shown bias even in rule-following, would the matter of “mercy” simply be a new opportunity for automated unfairness? It is a difficult issue with no clear answers.

Photo by Clay Banks on Unsplash

The Human Factor

One other key, if pedantic, difference between human and machine “cogs” is the simple fact that with a human being “on the line,” you can try to break out of the limited options presented by menus and if-then decision trees. Even the latest chatbot helper interfaces currently deployed are little more than natural language front ends to menus, as the sketch below illustrates. With a human being, by contrast, you can explain your situation, and they can (hopefully) work with you or connect you to someone with the authority to do so.
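To make the “natural language front end to a menu” point concrete, here is a deliberately simplistic Python sketch of the kind of keyword-driven, if-then logic such bots use. All intents, keywords, and canned responses are hypothetical, invented purely for illustration:

```python
# A toy "chatbot" that is really just a lookup table with a
# natural-language veneer. (Hypothetical intents and responses.)

MENU = {
    "billing":  "Your balance is available under Account > Statements.",
    "password": "Use the 'Forgot password' link on the login page.",
    "hours":    "Our offices are open 9am-5pm, Monday through Friday.",
}

KEYWORDS = {
    "billing":  ["bill", "balance", "charge", "payment"],
    "password": ["password", "login", "locked out"],
    "hours":    ["hours", "open", "closed"],
}

def chatbot_reply(user_text: str) -> str:
    """Map free-form text onto a fixed menu item, or punt to a human."""
    text = user_text.lower()
    for intent, words in KEYWORDS.items():
        if any(w in text for w in words):
            return MENU[intent]
    # No rule matched: the bot cannot weigh the circumstances or
    # "show mercy"; its only escape hatch is a person.
    return "Let me connect you to a human agent."

print(chatbot_reply("I think there's a mistaken charge on my bill"))
print(chatbot_reply("My situation is unusual; the rule shouldn't apply here"))
```

Note that the fallback line is the whole point: nothing in the lookup table allows the bot to consider an exceptional circumstance, so anything off-menu must be escalated to a human.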

I suspect that in the next ten years we will see machine systems making increasing forays into System 2 reasoning categories (e.g., causality, planning, self-examination). I’m not sure how I feel about the prospect of pleading with a next-gen chatbot to offer me an exception because the rule shouldn’t apply in this case, or some such. 😉 But it might happen; or, more likely, such a system will decide whether to kick the matter up to a real human.

Summary

We began by talking about Jacob and Esau: Jacob, the creative, swindling deal-broker, and Esau, who quite literally “goes with his gut.” Then we talked about reasoning according to the two systems described by Dual Process Theory, noting that machines currently can do System 1 quite well. The main question was: if humans make numerous erroneous and unjust decisions in a System 1 state, how do we justify the use of machines? The easy answers available seem to be a cop-out: the incentives of scale, speed, and lower cost. And this is not just “capitalism”; these incentives would still be drivers in a variety of socio-economic situations.

Another answer came in the form of bureaucracy: System 1 decision-making at scale already exists, albeit with humans as operators. We explored “what’s different” between a bureaucracy implemented via humans vs. machines. We realized that “what is lost” is the humans’ ability to transcend, if not their authority in the organization, at least the rigid and deficient set of software designs imposed by vendors of bureaucratic IT systems. Predicting how the best of these systems will improve in the coming years is hard. Yet, given the prevalence of shoddy software in widespread use, I expect to prefer talking to a human in Mumbai rather than “Erica,” the Bank of America chatbot, for quite some time.


[1]    Literally “government by the desk,” a term coined originally by 18th-century French economist Jacques Claude Marie Vincent de Gournay as a pejorative; it has since entered common usage.

Scott H. Hawley, Ph.D., Professor of Physics, Belmont University. Webpage: https://hedges.belmont.edu/~shawley/

Acknowledgment: The author thanks L.M. Sacasas for the helpful conversation while preparing this post.

AI Artistic Parrots and the Hope of the Resurrection

Guest contributor Dr. Scott Hawley discusses the implications of generative models for resurrection. As this technology improves, new works attributed to the dead multiply. How does that square with the Christian hope for resurrection?

“It is the business of the future to be dangerous.”

(Fake) Ivan Illich

“The first thing that technology gave us was greater strength. Then it gave us greater speed. Now it promises us greater intelligence. But always at the cost of meaninglessness.”

(Fake) Ivan Illich

Playing with Generative Models

The previous two quotes are just a sample of 365 fake quotes generated in the style of philosopher/theologian Ivan Illich: I fed a page’s worth of real Illich quotes from GoodReads.com into OpenAI’s massive language model, GPT-3, and had it continue “writing” from there. The wonder of GPT-3 is that it exhibits what its authors describe as “few-shot learning.” That is, rather than requiring 100+ pages of Illich as older models would, it works from just a few Illich quotes: give it two or three original sayings, and GPT-3 can generate new quotes that are highly believable.
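For the curious, here is a minimal sketch of what such few-shot prompting looked like with the 2020-era OpenAI completions API. It is illustrative rather than the exact code used above: the model name and sampling parameters are merely plausible choices, and the two seed quotes (real Illich lines) stand in for the GoodReads page:

```python
# Minimal sketch of few-shot fake-quote generation with the 2020-era
# OpenAI completions API. Illustrative only; not the exact code or
# settings used for the quotes above.
import openai

openai.api_key = "sk-..."  # your OpenAI API key

# A couple of real Illich quotes serve as the "few-shot" prompt;
# the final lone quotation mark invites GPT-3 to continue the pattern.
prompt = (
    "Quotes by Ivan Illich:\n\n"
    '"In a consumer society there are inevitably two kinds of slaves: '
    'the prisoners of addiction and the prisoners of envy."\n\n'
    '"We must rediscover the distinction between hope and expectation."\n\n'
    '"'
)

response = openai.Completion.create(
    engine="davinci",   # the original GPT-3 model
    prompt=prompt,
    max_tokens=60,
    temperature=0.9,    # higher temperature -> more "creative" output
    stop=['"'],         # stop at the closing quotation mark
)
print(response.choices[0].text.strip())
```

Run it a few hundred times and you have an instant corpus of plausible-sounding, entirely fake aphorisms.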

Have I resurrected Illich? Am I putting words into the mouth of Illich, now dead for nearly 20 years? Would he (or the guardians of his estate) approve? The answers to these questions are: No, Explicitly not (via my use of the word “Fake”), and Almost certainly not. Even generating them started to feel “icky” after a bit. Perhaps someone with as flamboyant a public persona as Marshall McLuhan would have been pleased to be ― what shall we say, “re-animated“? ― in such a fashion, but Illich likely would have recoiled. At least, such is the intuition of myself and noted Illich commentator L.M. Sacasas, who inspired my initial foray into creating an “IllichBot”:

…and while I haven’t abandoned the IllichBot project entirely, Sacasas and I both feel that it would be better if it posted real Illich quotes rather than fake rehashes via GPT-3 or some other model.

Re-creating Dead Artists’ Work

For the AI Theology blog, I was not asked to write about “IllichBot,” but rather about the story of AI creating Nirvana music in a project called “Lost Tapes of the 27 Club.” This story was originally misreported (and the error persists in the Rolling Stone headline metadata) as “Hear ‘New’ Nirvana Song Written, Performed by Artificial Intelligence,” but really the song was “composed” by the AI system and then performed by a (human) cover band. One might ask: how is this different from humans deciding to imitate another artist?

For example, the artist known as The Weeknd sounds almost exactly like the late Michael Jackson. Greta Van Fleet make songs that sound like Led Zeppelin anew. Songwriters, musicians, producers, and promoters routinely refer to prior work as signifiers when trying to communicate musical ideas. When AI generates a song idea, is that just a “tool” for the human artists? Are games for music composition or songwriting the same as “AI”? These are deep questions regarding “what is art?” and I will refer the reader to Marcus du Sautoy’s bestselling survey The Creativity Code: Art and Innovation in the Age of AI. (See my review here.)

Since that book was published, newer, more sophisticated models have emerged that generate not just ideas and tools but “performance.” The work of OpenAI’s Jukebox effort and the artist-researchers Dadabots generates completely new audio, such as “Country, in the style of Alan Jackson.” Dadabots have even partnered with a heavy metal band and beatbox artist Reeps One to generate entirely new music. When Dadabots used Jukebox to produce the “impossible cover song” of Frank Sinatra singing a Britney Spears song, they received a copyright takedown notice on YouTube… although it’s still unclear who requested the takedown or why.

Photo by Michal Matlon on Unsplash

Theology of Generative Models?

Where’s the theology angle in this? Well, relatedly, mistyping “Dadabots” as “dadbots” in a Google search will get you stories such as “A Son’s Race to Give His Dying Father Artificial Immortality,” in which, as with our Fake Ivan Illich, a man has trained a generative language model on his father’s statements to produce a chatbot that emulates his dad after he’s gone. Now we’re not merely talking about fake quotes by a theologian, or “AI cover songs,” or even John Dyer’s Worship Song Generator, but “AI cover Dad.” In this case there’s no distraction of pondering interesting legal/copyright issues, and no side-stepping the “uncomfortable” feeling that I personally experience.

One might try to couch the “uncomfortable” feeling in theological terms, as some sort of abhorrence of “digital” divination. It echoes the Biblical story of the witch of Endor temporarily bringing the spirit of Samuel back from the dead at Saul’s request. It can also relate to age-old taboos about defiling the (memory of the) dead. One could try to introduce a distinction between taboo “re-animation,” the stuff of multiple horror tropes, vs. the Christian hope of resurrection through the power of God in Christ.

However, I would stop short of this, because the source of my “icky” feeling stems not from theology but from a simpler objection to anthropomorphism: the “ontological” confusion that results when people try to cast a generative (probabilistic) algorithm as a person. I identify with the scientist-boss in the digital-Frosty-the-Snowman movie Short Circuit:

“It’s a machine, Schroeder. It doesn’t get pissed off. It doesn’t get happy, it doesn’t get sad, it doesn’t laugh at your jokes. It just runs programs.”

Short Circuit

Materialists, given their premise that the human mind is purely physical, can perhaps anthropomorphize with impunity. I submit that our present round of language and musical models, however impressively they may perform, are only a “reflection, as in a glass darkly” of true human intelligence. The error of anthropomorphism goes back millennia. The Christian hope for resurrection, however, addresses being truly reunited with lost loved ones. That means being able to hear new compositions of Haydn, by Haydn himself!

Acknowledgement: The title is an homage to the “Stochastic Parrots” paper of the (former) Google AI ethics team.


Scott H. Hawley is Professor of Physics at Belmont University and a Founding Member of AI and Faith. His writings include the winning entry of FaithTech Institute’s 2020 Writing Contest and the most popular Acoustics Today article of 2020, and have appeared in Perspectives on Science and Christian Faith and The Transhumanism Handbook.