Human Mercy is the Antidote to AI-driven Bureaucracy


If bureaucracies are full of human cogs, what’s the difference in replacing them with AI?

(This entry continues the blog’s main topic of classification by machines vs. humans, here considering how classifications combine with judgments in life-altering decisions. It was inspired by a sermon given today by pastor Jim Thomas of The Village Chapel, Nashville, TN.)

Esau’s Fateful Choice

In Genesis 25:29–34 we see Esau, the firstborn son of Isaac, coming in from the fields famished. Finding his younger brother Jacob in possession of a hearty stew, Esau pleads for some. My paraphrase follows: Jacob replies, “First you have to give me your birthright.” “Whatever,” says Esau, “you can have it, just gimme some STEWWW!” …And thus Esau sold his birthright for a mess of pottage.

Simply put, it is a bad idea to make major, life-altering decisions while in a stressed state, examples of which are often drawn from the acronym HALT:

  • Hungry
  • Angry
  • Lonely
  • Tired

Sometimes HALT becomes SHALT by adding “Sad”.

When we’re in these (S)HALT states, our brains operate by relying on quick inferences “burned” into them either via instinct or training. Dual Process Theory in psychology calls this “System 1” or “Type 1” reasoning (cf. Kahneman, 2003; Strack & Deutsch, 2004). System 1 includes the fight-or-flight response. While System 1 is fast, it is also prone to making errors, oversimplifying, and operating on the basis of biases such as stereotypes and prejudices.

System 1 relies on only a tiny subset of the brain’s overall capacity, the part usually associated with the involuntary and regulatory systems of the body governed by the cerebellum and medulla, rather than the cerebrum with its higher-order reasoning capabilities and creativity. Thus trying to make important decisions while in a System 1 state (if they’re not immediate and life-threatening) is inadvisable whenever waiting is possible.

At a later time we may be more relaxed, content, and able to engage in so-called System 2 reasoning, which is able to consider alternatives, question assumptions, perform planning and goal-alignment, display generosity, seek creative solutions, etc.

Hangry Computers Making Hasty Decisions

Machine Learning systems, other statistics-based models, and even rule-based symbolic AI systems, as sophisticated as they may currently be, are at best operating in a System 1 capacity — to the extent that the analogy to the human brain holds (See, e.g., Turing Award winner Yoshua Bengio’s invited lecture at NeurIPS 2019: video, slides.)

This analogy between human System 1 and AI systems is the reason for this post. AI systems are increasingly serving as proxies for human reasoning, even for important, life-altering decisions. And as such, news stories appear daily with instances of AI systems displaying bias and unjustly employing stereotypes.

So if humans are discouraged from making important decisions while in a System 1 state, and machines are currently capable of only System 1, then why are machines entrusted with important decision-making responsibilities? This is not simply a matter of companies choosing to offer AI systems for speed and scale; governments do this too.

Government is a great place to look to further this discussion, because government bodies are chock full of humans making life-altering decisions (for others) based on System 1 reasoning — tired people implementing decisions based on procedures and rules, i.e., bureaucracy.[1] In this way, whether it is a human being following a procedure or a machine following its instruction set, the result is quite similar.
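To make this concrete, here is a toy sketch (the rules, thresholds, and function name are entirely my own invention, not drawn from any real agency) showing how a bureaucratic procedure reduces to the same conditional logic whether a clerk executes it from a manual or a machine executes it from code:

```python
# Toy sketch of a bureaucratic rule set (hypothetical rules and thresholds).
# Whether a clerk follows these steps from a manual or a machine runs them
# as code, the applicant faces the same rigid, exception-free outcome.

def benefits_decision(income: float, documents_complete: bool,
                      prior_denials: int) -> str:
    if not documents_complete:
        return "DENY: incomplete paperwork"         # no one asks *why*
    if income > 20_000:
        return "DENY: income above threshold"       # no special circumstances
    if prior_denials >= 3:
        return "DENY: too many prior applications"  # the rule decides, not a person
    return "APPROVE"

print(benefits_decision(income=19_500, documents_complete=True, prior_denials=0))
print(benefits_decision(income=19_500, documents_complete=False, prior_denials=0))
```

The point of the sketch is that nowhere in the procedure is there a branch for discretion: the human clerk at least *could* deviate from the flowchart, while the code cannot.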

Human Costs and Human Goods

Photo by Harald Groven, via Flickr.com

The building of a large bureaucratic system provides a way to scale and enforce a kind of (to borrow from AI Safety lingo) “value alignment,” whether for governments, companies, or non-profits. The movies of Terry Gilliam (e.g., Brazil) illustrate these excesses well, with vast office complexes of desk after desk of office drones. The socio-political theorist Max Weber, who advanced many of our conceptions of bureaucracy as a positive means to maximize efficiency and eliminate favoritism, was aware of the danger of excess:

“It is horrible to think that the world could one day be filled with nothing but those little cogs, little men clinging to little jobs and striving towards bigger ones… That the world should know no men but these: it is such an evolution that we are already caught up, and the great question is, therefore, not how we can promote and hasten it, but what can we oppose to this machinery in order to keep a portion of mankind free from this parcelling-out of the soul, from this supreme mastery of the bureaucratic way of life.”

Max Weber, Gesammelte Aufsätze zur Soziologie und Sozialpolitik, p. 412 (1909).

Thus by outsourcing some of this drudgery to machines, we can “free” some workers from having to serve as “cogs.” This bears some similarity to the practice of replacing human assembly-line workers with robots in hazardous conditions (e.g., welding, toxic environments), whereas in the bureaucratic sense we are removing people from mentally or emotionally taxing situations. Yet one may ask what the other costs of such an enterprise may be, if any: If the system is already “soulless,” then what do we lose by having the human “cogs” in the bureaucratic machine replaced by machines?

The Heart of the Matter

So, what is different about machines doing things, specifically performing classifications (judgments, grading, etc.) as opposed to humans?

One difference between the automated and human forms of bureaucracy is the possibility of discretionary action on the part of humans, such as the demonstration of mercy in certain circumstances. God exhorts believers in Micah 6:8 “to love mercy.” In contrast, human bureaucrats going through the motions of following the rules of their organization can produce what Hannah Arendt termed “the banality of evil,” typified in her portrayal of the Nazi war criminal Adolf Eichmann, whom she described as “neither perverted nor sadistic,” but rather “terrifyingly normal.”

“The sad truth of the matter is that most evil is done by people who never make up their minds to be or do evil or good.”

Hannah Arendt, The Life of the Mind, Volume 1: Thinking, p.180 (1977).

Here again we see the potential for AI systems, as the ultimate “neutral” rule-followers, to facilitate evil on massive scales. So if machines could somehow deviate from the rules and show mercy on occasion, how would that even work? Which AI researchers are working on the “machine ethics” issue of determining when and how to show mercy? (At the time of writing, this author is unaware of such efforts.) Given that human judges have a tendency to show favoritism and bias in the selective granting of mercy to certain ethnicities more than others, and that automated systems have shown bias even in rule-following, would the matter of “mercy” simply be a new opportunity for automated unfairness? It is a difficult issue with no clear answers.

Photo by Clay Banks on Unsplash

The Human Factor

One other key, if pedantic, difference between human and machine “cogs” is the simple fact that with a human being “on the line,” you can try to break out of the limited options presented by menus and if-then decision trees. Even the latest chatbot helper interfaces currently deployed are little more than natural-language front ends to menus. With a human being, by contrast, you can explain your situation, and they can (hopefully) work with you or connect you to someone with the authority to do so.
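As a minimal sketch of the “natural-language front end to a menu” claim (the menu entries, keywords, and responses here are all hypothetical, not from any real product), note that there is no branch at all for a situation the designers did not anticipate:

```python
# Minimal sketch of a chatbot as a natural-language front end to a fixed menu.
# The "language understanding" is just keyword matching; anything the
# designers didn't anticipate dead-ends at a canned fallback response.

MENU = {
    "billing":  "Your balance is due on the 1st. Pay at example.com/pay.",
    "password": "Use the 'Forgot password' link on the login page.",
    "hours":    "We are open 9am-5pm, Monday through Friday.",
}

KEYWORDS = {
    "bill": "billing", "charge": "billing",
    "password": "password", "login": "password",
    "open": "hours", "hours": "hours",
}

def chatbot(utterance: str) -> str:
    for word, branch in KEYWORDS.items():
        if word in utterance.lower():
            return MENU[branch]
    # No human-style improvisation available: unknown requests loop back.
    return "Sorry, I didn't understand. Please choose: billing, password, hours."

print(chatbot("Why was I charged twice?"))
print(chatbot("My situation is unusual and the rule shouldn't apply to me"))
```

The second request, the kind a human agent could at least escalate, simply falls through to the fallback prompt, which is exactly the “limited options” experience described above.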

I suspect that in the next ten years we will see machine systems making increasing forays into System 2 reasoning categories (e.g., causality, planning, self-examination). I’m not sure how I feel about the prospect of pleading with a next-gen chatbot to offer me an exception because the rule shouldn’t apply in this case, or some such. 😉 But it might happen — or more likely such a system will decide whether to kick the matter up to a real human.

Summary

We began by talking about Jacob and Esau: Jacob, the creative, swindling deal-broker, and Esau, who quite literally “goes with his gut.” Then we talked about reasoning according to the two systems described by Dual Process Theory, noting that machines currently do System 1 quite well. The main question was: if humans make numerous erroneous and unjust decisions in a System 1 state, how do we justify entrusting such decisions to machines? The easy answers available seem to be a cop-out: the incentives of scale, speed, and lower cost. And this is not just “capitalism”; these incentives would still be drivers in a variety of socio-economic situations.

Another answer came in the form of bureaucracy: System 1 decision-making already exists at scale, albeit with humans as operators. We explored what is different between a bureaucracy implemented via humans vs. machines, and realized that what is lost is the humans’ ability to transcend, if not their authority in the organization, at least the rigid and deficient set of software designs imposed by vendors of bureaucratic IT systems. Predicting how the best of these systems will improve in the coming years is hard. Yet, given the prevalence of shoddy software in widespread use, I expect to prefer talking to a human in Mumbai rather than “Erica” the Bank of America chatbot for quite some time.


[1]    Literally “government by the desk,” a term coined as a pejorative by the 18th-century French economist Jacques Claude Marie Vincent de Gournay; it has since entered common usage.

Scott H. Hawley, Ph.D., Professor of Physics, Belmont University. Webpage: https://hedges.belmont.edu/~shawley/

Acknowledgment: The author thanks L.M. Sacasas for the helpful conversation while preparing this post.
