Human Mercy is the Antidote to AI-driven Bureaucracy

If bureaucracies are full of human cogs, what’s the difference in replacing them with AI?

(For this entry, continuing the main topic of classifications by machines vs. humans, we consider classifications and their union with judgments, and their role in life-altering decisions. It was inspired by a sermon given today by pastor Jim Thomas of The Village Chapel, Nashville, TN.)

Esau’s Fateful Choice

In Genesis 25:29–34 we see Esau, the firstborn son of Isaac, coming in from the fields famished. Finding his younger brother Jacob in possession of hearty stew, Esau pleads for some. My paraphrase follows: Jacob replies, “First you have to give me your birthright.” “Whatever,” says Esau, “You can have it, just gimme some STEWWW!” …“And thus Esau sold his birthright for a mess of pottage.”

Simply put, it is a bad idea to make major, life-altering decisions while in a stressed state, examples of which are often drawn from the acronym HALT:

  • Hungry
  • Angry
  • Lonely
  • Tired

Sometimes HALT becomes SHALT by adding “Sad”.

When we’re in these (S)HALT states, our brains rely on quick inferences “burned” into them either via instinct or training. The Dual Process Theory of psychology calls this “System 1” or “Type 1” reasoning (cf. Kahneman 2003; Strack & Deutsch 2004). System 1 includes the fight-or-flight response. While System 1 is fast, it is also prone to errors and oversimplification, and it operates on biases such as stereotypes and prejudices.

System 1 relies on only a tiny subset of the brain’s overall capacity, the part usually associated with involuntary and regulatory systems of the body governed by the cerebellum and medulla, rather than the cerebrum with its higher-order reasoning capabilities and creativity. Thus trying to make important decisions (if they’re not immediate and life-threatening) while in a System 1 state is inadvisable if waiting is possible.

At a later time we may be more relaxed, content, and able to engage in so-called System 2 reasoning, which is able to consider alternatives, question assumptions, perform planning and goal-alignment, display generosity, seek creative solutions, etc.

Hangry Computers Making Hasty Decisions

Machine Learning systems, other statistics-based models, and even rule-based symbolic AI systems, as sophisticated as they may currently be, are at best operating in a System 1 capacity — to the extent that the analogy to the human brain holds (See, e.g., Turing Award winner Yoshua Bengio’s invited lecture at NeurIPS 2019: video, slides.)

This analogy between human System 1 and AI systems is the reason for this post. AI systems are increasingly serving as proxies for human reasoning, even for important, life-altering decisions. And as such, news stories appear daily with instances of AI systems displaying bias and unjustly employing stereotypes.

So if humans are discouraged from making important decisions while in a System 1 state, and machines are currently capable of only System 1, then why are machines entrusted with important decision-making responsibilities? This is not simply a matter of companies choosing to offer AI systems for the sake of speed and scale; governments do this too.

Government is a great place to look to further this discussion, because government bodies are chock full of humans making life-altering decisions (for others) based on System 1 reasoning — tired people implementing decisions based on procedures and rules: bureaucracy.1 In this way, whether it is a human being following a procedure or a machine following its instruction set, the result is quite similar.

Human Costs and Human Goods

Photo by Harald Groven, from Flickr.com

The building of a large bureaucratic system provides a way to scale and enforce a kind of (to borrow from AI Safety lingo) “value alignment,” whether for governments, companies, or non-profits. Terry Gilliam’s films (e.g., Brazil) illustrate these excesses well, with vast office complexes of desk after desk of office drones. The socio-political theorist Max Weber, who advanced many of our conceptions of bureaucracy as a positive means to maximize efficiency and eliminate favoritism, was aware of the danger of excess:

“It is horrible to think that the world could one day be filled with nothing but those little cogs, little men clinging to little jobs and striving towards bigger ones… That the world should know no men but these: it is such an evolution that we are already caught up, and the great question is, therefore, not how we can promote and hasten it, but what can we oppose to this machinery in order to keep a portion of mankind free from this parcelling-out of the soul, from this supreme mastery of the bureaucratic way of life.”

Max Weber, Gesammelte Aufsätze zur Soziologie und Sozialpolitik, p. 412 (1909).

Thus by outsourcing some of this drudgery to machines, we can “free” some workers from having to serve as “cogs.” This bears some similarity to the practice of replacing human assembly-line workers with robots in hazardous conditions (e.g., welding, toxic environments), whereas in the bureaucratic sense we are removing people from mentally or emotionally taxing situations. Yet one may ask what the other costs of such an enterprise may be, if any: If the system is already “soulless,” then what do we lose by having the human “cogs” in the bureaucratic machine replaced by machines?

The Heart of the Matter

So, what is different about machines doing things, specifically performing classifications (judgments, grading, etc.) as opposed to humans?

One difference between the automated and human forms of bureaucracy is the possibility of discretionary action on the part of humans, such as the demonstration of mercy in certain circumstances. God exhorts believers in Micah 6:8 “to love mercy.” In contrast, human bureaucrats going through the motions of following the rules of their organization can produce what Hannah Arendt termed “the banality of evil,” typified in her portrayal of Nazi war criminal Adolf Eichmann, whom she described as “neither perverted nor sadistic,” but rather “terrifyingly normal.”

“The sad truth of the matter is that most evil is done by people who never make up their minds to be or do evil or good.”

Hannah Arendt, The Life of the Mind, Volume 1: Thinking, p.180 (1977).

Here again we see the potential for AI systems, as the ultimate “neutral” rule-followers, to facilitate evil on massive scales. So if machines could somehow deviate from the rules and show mercy on occasion, how would that even work? Which AI researchers are working on the “machine ethics” issue of determining when and how to show mercy? (At the time of writing, this author is unaware of such efforts.) Given that human judges tend to show favoritism and bias by granting mercy selectively to some ethnicities more than others, and that automated systems have shown bias even in rule-following, would “mercy” simply be a new opportunity for automated unfairness? It is a difficult issue with no clear answers.

Photo by Clay Banks on Unsplash

The Human Factor

One other key, if pedantic, difference between human vs. machine “cogs” is the simple fact that with a human being “on the line,” you can try to break out of the limited options presented by menus and if-then decision trees. Even the latest chatbot helper interfaces currently deployed are nothing more than natural-language front ends to menus. With a human being, by contrast, you can explain your situation, and they can (hopefully) work with you or connect you to someone with the authority to do so.

I suspect that in the next ten years we will see machine systems with increasing forays into System 2 reasoning categories (e.g., causality, planning, self-examination). I’m not sure how I feel about the prospect of pleading with a next-gen chatbot to offer me an exception because the rule shouldn’t apply in this case, or some such. 😉 But it might happen — or more likely such a system will decide whether to kick the matter up to a real human.

Summary

We began by talking about Jacob and Esau: Jacob, the creative, swindling deal-broker, and Esau, who quite literally “goes with his gut.” Then we talked about reasoning according to the two systems described by Dual Process Theory, noting that machines currently can do System 1 quite well. The main question was: if humans make numerous erroneous and unjust decisions in a System 1 state, how do we justify the use of machines? The easy answers available seem to be a cop-out: the incentives of scale, speed, and lower cost. And this is not just “capitalism”; these incentives would still be drivers in a variety of socio-economic situations.

Another answer came in the form of bureaucracy: System 1 decision-making already exists, albeit with humans as operators. We explored what is different between a bureaucracy implemented via humans vs. machines, and realized that what is lost is the humans’ ability to transcend, if not their authority in the organization, at least the rigid and deficient set of software designs imposed by vendors of bureaucratic IT systems. Predicting how the best of these systems will improve in the coming years is hard. Yet, given the prevalence of shoddy software in widespread use, I expect to prefer talking to a human in Mumbai rather than “Erica,” the Bank of America chatbot, for quite some time.


[1]    Literally “government by the desk,” a term coined as a pejorative by the 18th-century French economist Jacques Claude Marie Vincent de Gournay, but one that has since entered common usage.

Scott H. Hawley, Ph.D., Professor of Physics, Belmont University. Webpage: https://hedges.belmont.edu/~shawley/

Acknowledgment: The author thanks L.M. Sacasas for the helpful conversation while preparing this post.

AI Artistic Parrots and the Hope of the Resurrection

Guest contributor Dr. Scott Hawley discusses the implications of generative models for resurrection. As this technology improves, new works attributed to the dead multiply. How does that square with the Christian hope of resurrection?

“It is the business of the future to be dangerous.”

(Fake) Ivan Illich

“The first thing that technology gave us was greater strength. Then it gave us greater speed. Now it promises us greater intelligence. But always at the cost of meaninglessness.”

(Fake) Ivan Illich

Playing with Generative Models

The previous two quotes are a sample of 365 fake quotes in the style of philosopher/theologian Ivan Illich, generated by feeding a page’s worth of real Illich quotes from GoodReads.com into OpenAI’s massive language model, GPT-3, and having it continue “writing” from there. The wonder of GPT-3 is that it exhibits what its authors describe as “few-shot learning”: rather than requiring 100+ pages of Illich as older models would, it works with just a few Illich quotes. Give it two or three original sayings and GPT-3 can generate new quotes that are highly believable.
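The few-shot setup described above amounts to little more than assembling a prompt. A minimal sketch follows; the seed quotes are placeholders (not real Illich quotes), and the actual call to a language model API is omitted, since the point is only how the examples prime the continuation.

```python
def build_prompt(seed_quotes):
    """Join a few example quotes so a language model will continue the pattern."""
    lines = [f'"{q}"' for q in seed_quotes]
    lines.append('"')  # an opening quote invites the model to keep writing
    return "\n\n".join(lines)

# Placeholder seeds, for illustration only
seeds = ["Example seed quote one.", "Example seed quote two."]
prompt = build_prompt(seeds)
print(prompt)
```

The resulting string would then be sent to the model's completion endpoint; the model, seeing two quoted sayings followed by a dangling open quote, continues in the same style.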

Have I resurrected Illich? Am I putting words into the mouth of Illich, now dead for nearly 20 years? Would he (or the guardians of his estate) approve? The answers to these questions are: No, Explicitly not (via my use of the word “Fake”), and Almost certainly not. Even generating them started to feel “icky” after a bit. Perhaps someone with as flamboyant a public persona as Marshall McLuhan would have been pleased to be ― what shall we say, “re-animated“? ― in such a fashion, but Illich likely would have recoiled. At least, such is the intuition of myself and noted Illich commentator L.M. Sacasas, who inspired my initial foray into creating an “IllichBot”:

…and while I haven’t abandoned the IllichBot project entirely, Sacasas and I both feel that it would be better if it posted real Illich quotes rather than fake rehashes via GPT-3 or some other model.

Re-creating Dead Artists’ Work

For the AI Theology blog, I was asked to write not about “IllichBot,” but rather on the story of AI creating Nirvana music in a project called “Lost Tapes of the 27 Club.” This story was originally mis-reported (and remains so in the Rolling Stone headline metadata) as “Hear ‘New’ Nirvana Song Written, Performed by Artificial Intelligence,” but really the song was “composed” by the AI system and then performed by a (human) cover band. One might ask, how is this different from humans deciding to imitate other artists?

For example, the artist known as The Weeknd sounds almost exactly like the late Michael Jackson. Greta Van Fleet make songs that sound like Led Zeppelin anew. Songwriters, musicians, producers, and promoters routinely refer to prior work as signifiers when trying to communicate musical ideas. When AI generates a song idea, is that just a “tool” for the human artists? Are games for music composition or songwriting the same as “AI”? These are deep questions regarding “what is art?” and I will refer the reader to Marcus du Sautoy’s bestselling survey The Creativity Code: Art and Innovation in the Age of AI. (See my review here.)

Since that book was published, newer, more sophisticated models have emerged that generate not just ideas and tools but “performance.” The work of OpenAI’s Jukebox effort and artist-researchers Dadabots generate completely new audio such as “Country, in the style of Alan Jackson“. Dadabots have even partnered with a heavy metal band and beatbox artist Reeps One to generate entirely new music. When Dadabots used Jukebox to produce the “impossible cover song” of Frank Sinatra singing a Britney Spears song, they received a copyright takedown notice on YouTube…although it’s still unclear who requested the takedown or why.

Photo by Michal Matlon on Unsplash

Theology of Generative Models?

Where’s the theology angle on this? Well, relatedly, mistyping “Dadabots” as “dadbots” in a Google search will get you stories such as “A Son’s Race to Give His Dying Father Artificial Immortality” in which, like our Fake Ivan Illich, a man has trained a generative language model on his father’s statements to produce a chatbot to emulate his dad after he’s gone. Now we’re not merely talking about fake quotes by a theologian, or “AI cover songs,” or even John Dyer’s Worship Song Generator, but “AI cover Dad.” In this case there’s no distraction of pondering interesting legal/copyright issues, and no side-stepping the “uncomfortable” feeling that I personally experience.

One might try to couch the “uncomfortable” feeling in theological terms, as some sort of abhorrence of “digital” divination. It echoes the Biblical story of the witch of Endor temporarily bringing the spirit of Samuel back from the dead at Saul’s request. It can also relate to age-old taboos about defiling the (memory of) the dead. One could try to introduce a distinction between taboo “re-animation” that is the stuff of multiple horror tropes vs. the Christian hope of the resurrection through the power of God in Christ.

However I would stop short of this, because the source of my “icky” feeling stems not from theology but from a simpler objection to anthropomorphism, the “ontological” confusion that results when people try to cast a generative (probabilistic) algorithm as a person. I identify with the scientist-boss in the digital-Frosty-the-Snowman movie Short Circuit:

“It’s a machine, Schroeder. It doesn’t get pissed off. It doesn’t get happy, it doesn’t get sad, it doesn’t laugh at your jokes. It just runs programs.”

Short Circuit

Materialists, given their premise that the human mind is purely physical, can perhaps anthropomorphize with impunity. I submit that our present round of language and musical models, however impressively they may perform, are only a “reflection, as in a glass darkly” of true human intelligence. The error of anthropomorphism goes back millennia. The Christian hope of resurrection, however, addresses being truly reunited with lost loved ones. That means being able to hear new compositions by Haydn, by Haydn himself!

Acknowledgement: The title is an homage to the “Stochastic Parrots” paper of the (former) Google AI ethics team.


Scott H. Hawley is Professor of Physics at Belmont University and a Founding Member of AI and Faith. His writings include the winning entry of FaithTech Institute’s 2020 Writing Contest and the most popular Acoustics Today article of 2020, and have appeared in Perspectives on Science and Christian Faith and The Transhumanism Handbook.

AI for Scholarship: How Machine Learning can Transform the Humanities

In a previous blog, I explored how AI will speed up scientific research. In this blog, I will examine the overlooked potential that AI has to transform the Humanities. This connection may not be clear at first, since most of these fields do not include an element of science or math; they are more preoccupied with developing theories than testing hypotheses through experimentation. Subjects like Literature, Philosophy, History, Languages and Religious Studies (and Theology) rely heavily on the interpretation and qualitative analysis of texts. In such an environment, how could mathematical algorithms be of any use?

Before addressing the question above, we must first look at the field of Digital Humanities, which created a bridge from ancient texts to modern computation. The field dates back to the 1930s, before the emergence of Artificial Intelligence. Ironically, and interestingly relevant to this blog, the first project in this area was a collaboration between an English professor, a Jesuit priest, and IBM to create a concordance for Thomas Aquinas’ writings. As digital technology advanced and texts became digitized, the field continued to grow in importance. Its primary purpose is both to apply digital methods to the Humanities and to reflect on their use. That is, practitioners are not only interested in digitizing books but also in evaluating how the use of digital media affects human understanding of these texts.

Building on the foundation of Digital Humanities, the connection with AI becomes clear. Once computers can ingest these texts, text mining and natural language processing become possible. With the recent advances in machine learning algorithms, the cheapening of computing power, and the availability of open-source tools, the conditions are ripe for an AI revolution in the Humanities.

How can that happen? The use of machine learning in combination with Natural Language Processing can open avenues of meaning that were not possible before. For centuries, these academic subjects have relied on the accumulated analysis of texts performed by humans. Yet human capacity to interpret, analyze and absorb texts is finite. Humans do a great job of capturing meaning and nuance in texts of hundreds or even a few thousand pages; as the volume increases, however, machine learning can detect patterns that are not apparent to a human reader. This can be especially critical in applications such as author attribution (determining who the writer was when that information is unclear or in question), analysis of cultural trends, semantics, tone, and relationships between disparate texts.
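To make the author-attribution idea concrete, here is a toy stylometric sketch: it profiles texts by the relative frequency of a handful of common function words (a classic, if drastically simplified, stylometry signal) and attributes an unknown text to the candidate author with the most similar profile. The texts and author names below are invented for illustration; real attribution work uses far larger word lists and corpora.

```python
from collections import Counter
import math

# A few common English function words; real stylometry uses hundreds.
FUNCTION_WORDS = ["the", "of", "and", "to", "in", "that", "is", "a"]

def profile(text):
    """Relative frequency of each function word in the text."""
    words = text.lower().split()
    counts = Counter(words)
    total = max(len(words), 1)
    return [counts[w] / total for w in FUNCTION_WORDS]

def cosine(u, v):
    """Cosine similarity between two frequency vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def attribute(unknown, candidates):
    """Return the candidate author whose sample is stylistically closest."""
    p = profile(unknown)
    return max(candidates, key=lambda name: cosine(p, profile(candidates[name])))

# Invented samples, purely illustrative
candidates = {
    "Author A": "the house of the lord and the temple of the king",
    "Author B": "a letter to a friend to say a word to a brother",
}
unknown = "the voice of the prophet and the word of the law"
print(attribute(unknown, candidates))
```

The unknown line shares Author A's heavy use of "the," "of," and "and," so the sketch attributes it accordingly; the same comparison scaled up to thousands of features is the pattern-detection advantage described above.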

Theology is a field particularly poised to benefit from this combination. For those unfamiliar with Theological studies, it is a long and lonely road. Brave souls aiming to master the field must undergo more schooling than physicians: in most cases, aspiring scholars must complete a five-year doctorate program on top of 2–4 years of master’s-level studies. Part of the reason is that the field has accumulated an inordinate amount of primary sources and countless interpretations of those texts, written in multiple ancient and modern languages and spanning thousands of years. In short, when reams of texts become Big Data, machine learning can do wonders to synthesize, analyze and correlate large bodies of text.

To be clear, that does not mean machine learning will replace painstaking scholarly work. Quite the opposite: it has the potential to speed up and automate some tasks so scholars can focus on the high-level abstract thinking where humans still hold a vast advantage over machines. If anything, it should make their lives easier and possibly shorten the time it takes to master the field.

Along these lines of augmentation, I am thinking about a possible project. What if we could apply machine learning algorithms to a theologian’s body of work and compare it to the scholarship that interprets it? Could we find new avenues of meaning that could complement or challenge prevailing scholarship on the topic?

I am curious to see what such an experiment could uncover.

Automated Research: How AI Will Speed Up Scientific Discovery

The potential of AI is boundless. Currently, there is a lot of buzz around how it will change industries like transportation, entertainment and healthcare. Less known but even more revolutionary is how AI could change science itself. In a previous blog, I speculated about the impact of AI on academic research through text mining. The implications of  automated research described here are even more far-reaching.

Recently, I came upon an article in Aeon that described exactly that. In it, biologist Ahmed Alkhateeb eloquently makes his argument in the excerpt below:

Human minds simply cannot reconstruct highly complex natural phenomena efficiently enough in the age of big data. A modern Baconian method that incorporates reductionist ideas through data-mining, but then analyses this information through inductive computational models, could transform our understanding of the natural world. Such an approach would enable us to generate novel hypotheses that have higher chances of turning out to be true, to test those hypotheses, and to fill gaps in our knowledge.

As a good academic, the author says a lot with a few words in the paragraph above. Let me unpack his statement a bit.

His first point is that in the age of big data, individual human minds are incapable of effectively analyzing, processing and making meaning of all the information available. There was a time when all the knowledge of a discipline was contained in books that could be read, or at least summarized, by one person. Furthermore, traditional ways of doing research (lab experimentation, sampling, controlling for externalities, testing hypotheses) take a long time and give only a narrow view of reality. Hence, in a time when big data is available, such an approach will not be sufficient to harness all the knowledge that could be discovered.

His second point is to suggest a new approach that incorporates Artificial Intelligence through pattern-seeking algorithms that can effectively and efficiently mine data. The Baconian method simply means discovering knowledge through the disciplined collection and analysis of observations. He proposes an algorithmic approach that would mine data, come up with hypotheses through computer models, then collect new data to test those hypotheses. Furthermore, this process would not be limited to an individual but would draw from the knowledge of a vast scientific community. In short, he proposes including AI in every step of scientific research as a way to improve quality and accuracy. The idea is that an algorithmic approach would produce better hypotheses and also test them more efficiently than humans can.
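The mine → hypothesize → test loop can be sketched in miniature. In this toy version (entirely synthetic data, not Alkhateeb's actual proposal), the "hypotheses" are simply candidate predictor variables ranked by their correlation with an outcome on one half of the data, with the winner then checked against the held-out half:

```python
import random
import statistics

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    sx = sum((a - mx) ** 2 for a in xs) ** 0.5
    sy = sum((b - my) ** 2 for b in ys) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0

random.seed(0)
n = 200
x = [random.gauss(0, 1) for _ in range(n)]      # truly related to the outcome
z = [random.gauss(0, 1) for _ in range(n)]      # unrelated noise variable
y = [2 * a + random.gauss(0, 0.5) for a in x]   # outcome we want to explain

half = n // 2
candidates = {"x": x, "z": z}

# "Mine" the first half of the data: rank candidate hypotheses by correlation.
ranked = sorted(candidates,
                key=lambda k: abs(pearson(candidates[k][:half], y[:half])),
                reverse=True)
best = ranked[0]

# "Test" the winning hypothesis on the held-out half.
confirmation = pearson(candidates[best][half:], y[half:])
print(best, round(confirmation, 2))
```

The loop correctly singles out `x` and confirms it on fresh data; real automated-science systems would replace the correlation step with far richer model-building, but the generate-then-validate shape is the same.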

As the author concedes, current algorithms and approaches are not fully adequate for the task. While AI can already mine numeric data well, text mining is a more recent development. Computers think in numbers, so getting them to make sense of text requires time-consuming processes to translate text into numeric values. Relevant to this topic, the Washington Post just published an article about how computers have now, for the first time, beaten human performance on a reading-comprehension test. This is an important step if we want to see AI more involved in scientific research and discovery.

How will automated research impact our world?

The promise of AI-assisted scientific discovery is remarkable. It could lead to cures for diseases, the discovery of new energy sources, and unprecedented breakthroughs in technology. Another outcome would be the democratization of scientific research: as research gets automated, it becomes easier for others to do, just as Windows made the computer accessible to people who do not code.

In spite of all this potential, such a development should cause us to pause for reflection. It is impressive how much of our mental capacity is being outsourced to machines. How comfortable are we with this inevitable meshing of bodies and electronics? Who will lead, fund and direct automated research? Will it lead to enriching corporations or to improving quality of life for all? I disagree with the author’s statement that automated research would make science “limitlessly free.” Even as machines do the work, humans still control the direction and scope of the research. As we ship more human activity to machines, ensuring they reflect our ethical standards remains a human mandate.

AI Reformation: How Tech can Empower Biblical Scholarship

In a past blog I talked about how an AI-enabled Internet was bound to bring a new Reformation to the church. In this blog, I want to talk about how AI can revolutionize biblical scholarship. Just as the printing press brought the Bible into homes, AI-enabled technologies can bring advanced study tools to the individual. This shift alone could change the face of Christianity for decades to come.

The Challenges of Biblical Scholarship

First, it is important to define what Biblical scholarship is. For those of you not familiar with it, this field is probably one of the oldest academic disciplines in Western academia. The study of Scripture was one of the primary motivations for the creation of universities in the Middle Ages, and the field hence boasts an arsenal of literature unparalleled by most other academic endeavors. Keep in mind this is not your average Bible study you may find in a church. Becoming a Bible scholar is an arduous and long journey. Students desiring to enter the field must learn at least three ancient languages (Hebrew, Greek, and usually Aramaic or Akkadian), German, English (for non-native speakers), and usually a third modern language. It takes about 10 years of graduate-level work just to get started. To top that off, those who complete these initial requirements face dismal career options, as both seminaries and research interest in the Bible have declined in recent decades. Needless to say, if you know a Bible scholar, pat them on the back and thank them. The work they do is very important not only for the church but also for society in general, as the Bible has deeply influenced other fields of knowledge like Philosophy, Law, Ethics and History.

Because of the barriers to entry described above, it is not surprising that many who considered this path (including the writer of this blog) have opted for alternatives. You may be wondering what that has to do with AI. The reality is that while the supply of Bible scholars is dwindling, the demand for their work is increasing. The Bible is by far the most copied text of Antiquity: the New Testament alone has a collection of over 5,000 manuscripts found across different geographies and time periods, many discovered in the last 50 years. On top of that, because the field has been around for centuries, there are countless commentaries and other works interpreting, disputing, dissecting and adding to the original texts. Needless to say, this looks like a great candidate for machine-enhanced human work. No human being could possibly research, analyze and distill all this information effectively.

AI to the Rescue

As you may know, computers do not see the world in pictures or words. Instead, all they see is numbers (0s and 1s, to be exact). Natural Language Processing (NLP) is the set of techniques that translates words into numbers so the computer can work with them. One simple way to do that is to count the number of times each word shows up in a text and list the totals in a table. This simple word-count exercise can already shed light on what the text is about. More advanced techniques account not only for word incidence but also for how close words are to each other in meaning. I could go on, but for now suffice it to say that NLP starts “telling the story” of a text, albeit in numeric form, to the computer.
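The word-count step just described fits in a few lines. Here is a minimal sketch using a single sample verse (chosen here purely for illustration):

```python
from collections import Counter
import re

def word_counts(text):
    """Lowercase the text, strip punctuation, and tally each word."""
    words = re.findall(r"[a-z']+", text.lower())
    return Counter(words)

verse = "The heavens declare the glory of God; the skies proclaim the work of his hands."
counts = word_counts(verse)
print(counts.most_common(3))  # function words like "the" dominate the tally
```

Even this trivial table hints at the next problem: raw counts are swamped by function words, which is why real pipelines add weighting schemes (such as tf-idf) or the meaning-aware representations mentioned above.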

What I describe above is already present in leading Bible software, where one can study word counts till Kingdom come (no pun intended). Yet this is only the first step in enabling computers to mine the text for meaning and insight. When you add AI to NLP, that is when things start getting interesting. Think of a Watson-type algorithm that you can ask a question and that can find the answer in the text. One can now analyze sentiment, genre, and text structure, to name a few, far more efficiently. With AI, computers are able to make connections between texts that were previously possible only for the human mind. And they can do it a lot faster and, when well trained, with greater precision.

One example is sentiment analysis, where the algorithm looks not at the text itself but at more subjective notions of tone expressed in it. This technique is currently used, for example, to analyze customer reviews in order to understand whether a review is positive or negative. I manually attempted this for an Old Testament class assignment in which I mapped out the “sentiment” of Isaiah: I categorized each verse with a color to indicate whether it was positive (blessing or worship) or negative (condemnation or lament), then zoomed out to see how the book’s sentiment oscillated throughout the chapters. This laborious analysis made me look at the book through a whole different lens. As AI applications become more common, such analyses and visuals could be created in a matter of seconds.
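A crude automated version of that verse-tagging exercise can be sketched with a hand-made lexicon. The word lists below are illustrative stand-ins, not a real sentiment lexicon, and the sample lines are paraphrases chosen only to show the mechanics:

```python
# Toy sentiment lexicons -- illustrative only, not a real resource.
POSITIVE = {"bless", "blessing", "praise", "rejoice", "comfort", "glory"}
NEGATIVE = {"woe", "judgment", "lament", "destroy", "mourn", "wrath"}

def verse_sentiment(verse):
    """Score a verse by counting lexicon hits; return a coarse label."""
    words = set(verse.lower().replace(",", "").replace("!", "").split())
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

verses = [
    "Comfort, comfort my people",
    "Woe to those who call evil good",
]
print([verse_sentiment(v) for v in verses])
```

Mapping such labels verse by verse and plotting them chapter by chapter would reproduce, in seconds, the oscillation chart that took an entire class assignment by hand; modern ML sentiment models simply learn richer versions of these word lists from data.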

A Future for Biblical Scholarship

Now, by showing these examples I don’t mean to say that AI will replace scholars. Algorithms still need to be trained by humans who understand the text’s original languages and its intricacies. Yet I do see a future where Biblical scholarship will not be hampered by the barriers to entry I described above. Imagine a future where scholars collaborate with data scientists to uncover new meaning in an ancient text. I also see an opportunity for professionals who know enough about both Biblical studies and technology to become valuable additions to research teams. (Are you listening, Fuller Seminary? How about a new MA in Biblical Studies and Text Mining?) The hope is that with these tools, more people can be involved in the process and collaboration between researchers can increase. The task of Biblical research is too large to be limited to a select group of highly educated scholars. Instead, AI can facilitate the crowdsourcing of the work to analyze and make meaning of the countless texts currently available.

With all that said, it is difficult to imagine a time when the Bible is just a book to be analyzed. Instead, it is to be experienced, wrestled with and discussed. New technologies will not supplant that. Yet could they open new avenues of meaning until now never conceived by the human mind? What if AI-enabled Biblical Scholarship could not just uncover new knowledge but also facilitate revelation?

Artificial Immortality: Honoring or Replacing our Parents?

Is there a way to achieve (artificial) immortality? What would that look like?

This month’s Wired featured an article where journalist James Vlahos sought to immortalize his dying father by creating a chatbot that would mimic his dad’s knowledge, expressions and speech mannerisms. His moving account provided rich material for reflection.

For a good portion of the article, the journalist recounts in detail the process of deciding on and executing his idea. It took months of preparation, interviews and countless hours of programming. While some machine learning was used, the bulk of the work rested on his own knowledge of his father. He wanted to ensure the bot would respond in a way that would make the user feel like they were talking to his father. He even ensured the grammatical construction of sentences would reflect his father’s speech.

Even more interesting than the process itself were the questions that emerged as his project progressed. How would he and other family members feel about the bot after his father was gone? Would they feel like talking to the bot, or would it creep them out? His personal project is a powerful anecdote from this new era in which machines are increasingly acquiring human traits.

It is not just about how the machines are changing but, even more importantly, how we respond to them. There are those who will interact with the bot and be able to compare it with the human person the bot was made to emulate. Yet, what about the grandkids who will have greater exposure to the bot than to their actual grandfather? What type of relationship will they develop with the chatbot? Could the chatbot become its own entity, somewhat independent from the human it was built to emulate?

Honoring Our Fathers Through Technology

Last week, I received my cousin’s first book in the mail. In it, he recounts his journey to uncover details about the torture his parents suffered under the repressive Brazilian dictatorship in the early 1970s. Besides having national significance as the country seeks to come to grips with that dark period of its history, the story is very personal to our family. Yet, what impressed me the most was his desire to make his parents’ story known so his children would not forget. In some ways, it was a book to honor his parents’ story, ensuring their memory would outlive them.

This desire to memorialize our parents is not new. In the Hebrew Scriptures, it is codified in the fifth commandment: “Honor thy father and thy mother.” Could this honoring now be done through these new technologies? As the Wired article demonstrated, it certainly can. In some ways, it is the next step in our current ways of memorializing our ancestors with pictures, books and videos. What makes this new stage unique is how these objects can now interact with us. When we look at videos and pictures, they are fixed snapshots of the past. Our feelings toward them may change, but they themselves are static. Yet, as machine learning advances and AI takes on voice and possibly a physical appearance, we now have the possibility not just to recall memories but actually to create new ones. In fact, a well-trained AI could create new content never spoken by the original human. It is, in one sense, the closest we have to bringing the dead back to life.

Memorializing or Idolizing?

It is at this point that I wonder whether our memorializing can quickly descend into idolizing. Let me explain. I wonder at what point the creation meant to resemble our ancestor becomes an independent entity that we relate to and revere. The warning in Scripture about idolatry is always about replacing the real with the fake: venerating a fake god instead of the real God. In the same way, could these artificial creations meant to resemble our real ancestors come to replace them in our memory and in our experience? How ironic that in an effort to memorialize somebody we could actually speed up the process of forgetting and replacing them.

Thankfully, these technologies are still in their rudimentary stage so we can start asking these questions now. As the technology improves, it will become increasingly difficult to separate the real person from their artificial creation. So the question becomes, to what extent do we want to use this technology to honor our parents without fully replacing their memory with an artificial image of their real selves? What do you think?

Chatbots: How AI is Changing Relationships

In my previous blog, I discussed how algorithmic matchmaking is changing how people choose their spouse. In this blog, I want to explore how AI is actually displacing human relationships. While the first example shows some evidence of improving the quality of marriages, the examples in this part are more worrisome. Chatbots are taking on human-like functions at an alarming speed. In the near future, they could not only displace human relations but also deteriorate existing ones.

In need of conversation, how about chatting with a bot?

Chatbots and intelligent agents are the first level where this type of displacement is happening. The idea here is a computer interface that responds to us in human-like form. Chatbots, for example, are increasingly being used by companies to address customer service needs. Instead of talking to a real person, you can interact with an intelligent application that responds to your needs and answers your questions. While this can be an effective way to solve customer problems, it introduces a new paradigm into human-machine relationships. Our interaction with machines has been limited to a very narrow band of actions in which we often have to structure our communication to machine-specified parameters. That is, we had to learn the code language or press the right button to get a response from the computer. Intelligent chatbots do that work for us. We speak or type our problem, and they reason their way to a solution.
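To make the contrast concrete, even the simplest customer-service bot inverts the old paradigm: instead of the human learning the machine's commands, the machine scans the human's free-form message for intent. The sketch below is a toy illustration, assuming a few hypothetical intents and canned replies; production chatbots use machine learning for intent detection rather than keyword matching.

```python
# A toy rule-based customer-service bot. The intents and replies here are
# hypothetical; real systems infer intent with trained language models.

INTENTS = {
    "refund": "I can help with that. Refund requests are reviewed by our billing team.",
    "hours": "We are open Monday through Friday, 9am to 5pm.",
    "password": "You can reset your password from the account settings page.",
}

def respond(message: str) -> str:
    """Return a canned reply for the first matching intent keyword."""
    text = message.lower()
    for keyword, reply in INTENTS.items():
        if keyword in text:
            return reply
    return "I'm sorry, I didn't understand. Could you rephrase that?"

print(respond("How do I get a refund?"))
print(respond("Tell me a joke."))
```

Notice that the user never has to know a command syntax; the burden of translation has shifted from the human to the machine, which is precisely the paradigm shift described above.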

Siri is the most popular example of these intelligent agents. Yet the video below shows how Luna is much smarter than Siri, hinting at what chatbots will be like in a few years.

Luna is a non-profit AI aimed at solving world problems, such as teaching math in rural areas and other humanitarian projects. This project deserves a blog of its own. For now, I will leave you with the link to the Robots Without Borders organization and Luna’s creator if you want to learn more.

Intelligent agents like Luna go beyond fulfilling mundane needs, being able to engage in deep conversations about philosophy, religion, metaphysics and even theology. The computer goes from being a technological slave to becoming a conversation partner, one that is able to dive into the complexities of the human experience. In this way, it is not far-fetched to foresee humans developing deep emotional bonds with these machines, even stronger than those we have with animals today. The movie Her depicts a future scenario in which humans have romantic relationships with these intelligent agents. Are we ready for this scenario?

Can chatbots make us less lonely?

As chatbots become more advanced, as in the case of Luna, people will start seeking them out for higher needs of companionship and affection. This trend in itself is concerning. A programmed companion made to cater to all our needs is the antithesis of a healthy human relationship of give-and-take. My suspicion is that this could eventually shape our relationships with other humans by fostering unrealistic relational demands of each other. It could also lead some to greater isolation as their emotional needs are increasingly met by artificially intelligent machines. Not all would be negative, as these applications could provide much-needed support for those who struggle with depression and other mental illnesses. It will all depend on how we use these technologies, as long as we keep on asking the question: how can this technology foster human flourishing? That question should serve as our guide for future unforeseen scenarios.

What’s next?

What about physical needs? All the examples above represent a disembodied interaction. What if there were a body involved? What if you could have sex with a machine? Well, sexbots are now being developed. This is a concerning trend that I will grapple with in my next blog.