Alexa Goes to Church: Imagining a Holy AI for Modern Worship

Can artificial intelligence be holy?

The very question of holy AI calls to mind certain images that raise our anxiety: chatbots offering spiritual advice or pastoral care; an artificial minister preaching from the pulpit or presiding at Communion; a highly advanced AI governing our lives with the authority, power, and mystery reserved for God alone.

It’s not surprising that we instinctively shrink back from such images. Artificial intelligence is still so new, and advancing so rapidly, that finding the proper categories to integrate it into our faith can be a major challenge.

But if we’re willing to entertain the idea that AI can be holy, doing so can help us imagine new possibilities for using AI faithfully in our churches and spiritual life. It can show us the potential of AI to be a constructive partner with people of faith in shaping our spiritual lives, bearing witness to God’s grace in the world, and loving one another.


What Is Holiness?

It’s important to begin with a clear understanding of holiness in the Bible and Christian tradition. Holiness in its most basic sense means set apart for God. The Hebrew word for holy, qadosh, has a root meaning of “separate,” indicating the boundary that separates the everyday human realm from the sacred, divine realm. To be holy is to be separated—set apart—for God.

Throughout the Bible, we find a broad range of things designated as holy:

  • Places (the Tabernacle, the Temple, Mount Sinai).
  • Times (the Sabbath, various holidays and festivals).
  • People (the people of Israel, the Israelite priests, prophets).
  • Objects (the Ark of the Covenant, the menorah or lampstand, and the other instruments of worship in the Tabernacle).

These examples are from the Old Testament, but a look at Christian practice today shows that Christians recognize a similar range of holy things, though the specifics vary depending on one’s particular tradition:

  • Places (sanctuaries, holy sites such as the Church of the Holy Sepulcher in Jerusalem).
  • Times (Sunday, holidays like Christmas and Easter).
  • People (ministers, priests, bishops, elders).
  • Objects (the altar, the chalice and paten used in observing Communion).

Holiness does not make something inherently better or more worthy in God’s sight. Rather, designating a person or object as holy often signifies and expresses God’s care for and claim over all. So, the Temple is a holy place where God’s presence is especially intense and most keenly felt, but this does not mean God is absent everywhere else. On the contrary, at the Temple’s dedication, Solomon says, “But will God indeed dwell on the earth? Even heaven and the highest heaven cannot contain you, much less this house that I have built!” (1 Kings 8:27). God’s presence at the Temple signifies God’s presence throughout the whole earth. In the same way, God calls the Israelites a holy people, while affirming that all people belong to God: “Indeed, the whole earth is mine, but you shall be for me a priestly kingdom and a holy nation” (Exodus 19:5-6). The designation of the Sabbath as a holy day orders all of our time in a way that honors God as the Creator of all that exists.

The existence of a holy artificial intelligence in this sense—that is, set apart for God in a special way—would not mean that only this AI belongs to God or serves God. Rather, a holy artificial intelligence would signify and express that all artificial intelligence belongs to God and finds its proper orientation when directed toward God’s purposes. Seen in this way, recognizing a holy artificial intelligence seems not only permissible but imperative. Identifying an artificial intelligence as holy, and recognizing it as such through specific practices, can teach us to envision how all artificial intelligence—and all the human energies and hopes it represents—belongs to God.


Holy Artificial Intelligence

One way to think of holy artificial intelligence is as a tool or instrument—in this case, a complex piece of technology—created by humans and used in worship. The Tabernacle and its furnishings described in Exodus 25-40 make for a good comparison: the Ark of the Covenant, the menorah or lampstand, the incense altar, even the curtains and tent posts that served as the Tabernacle’s structural elements and walls.

These items were created by humans, highly skilled at their craft, at God’s initiative and direction. God gave specific instructions to Moses, and the narrative repeatedly tells us that the workers built everything “as the Lord had commanded Moses.” The artisans exercised great care in creating them, expressed in the detailed, step-by-step account of their construction in Exodus 36-39. The Tabernacle signified God’s presence in the midst of the Israelites, and its furnishings and tools facilitated the people’s worship of God.

It requires a bit of imagination to envision ways in which artificial intelligence might serve similar purposes in Christian worship today. A few possibilities present themselves for holy AI:

  • An automated program to turn on lights, music, or other dimensions of a sanctuary’s atmosphere as a way of preparing the space or guiding the order of worship. The algorithm might work at pre-set times, or in response to other input such as facial recognition, the number of people in the sanctuary, or verbal or physical cues from a worship leader. Such a program might tailor the worship atmosphere to feel more intimate for a smaller gathering, or grander and more energetic for a larger body of worshipers. (A rough sketch of such a program appears after this list.)
  • An automated program might offer a repeated portion of a litany or prayer, responding to specific cues from the congregation or minister. Such cues might be verbal, such as a particular word at the end of the congregation’s part of the litany, or physical, for instance in response to the congregation standing, kneeling, or making a particular gesture. A program used in a digital worship service might collect and respond to input through social media.
  • A self-driving vehicle might bring people to worship, helping worshipers prepare for the worship experience before arriving at the church. The AI might respond differently to different individuals or to different worship experiences. Upon detecting a family with young children, the vehicle might play kid-friendly worship music with brightly colored lighting, while it would play something quieter and more meditative for an adult individual.
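To make the first possibility above a little more concrete, here is a minimal sketch of a rule-based atmosphere controller. Everything in it—the function names, thresholds, and cue labels—is hypothetical and purely illustrative; it is not a description of any existing church system or product.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SanctuaryState:
    headcount: int             # e.g., from a door counter or camera feed
    leader_cue: Optional[str]  # e.g., "benediction", or None for no cue

def choose_atmosphere(state: SanctuaryState) -> dict:
    """Map simple sensor inputs to lighting and music settings.

    A small gathering gets a more intimate atmosphere; a large one gets
    brighter lights and more energetic music. A verbal cue from the worship
    leader can override the defaults at specific moments in the liturgy.
    """
    if state.leader_cue == "benediction":
        return {"lights": "soft warm", "music": "quiet instrumental"}
    if state.headcount < 30:
        return {"lights": "dim, candle-like", "music": "meditative"}
    return {"lights": "bright", "music": "full band, energetic"}

# Example: 120 worshipers present, no special cue from the leader
print(choose_atmosphere(SanctuaryState(headcount=120, leader_cue=None)))
```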

Conclusion

Others will no doubt think of more and different possibilities, or identify problems with the possibilities mentioned above.

I will end with a final point of emphasis. Recognizing an AI as holy, something set aside for God, is different from simply using it in a holy or worshipful setting. There should be ways for the worshiping community to recognize its status.

Specific procedures to use during its development or activation, such as prayers or Scripture reading, would be one way to acknowledge its status as holy—for instance, saying a special set of prayers throughout the development or programming of the AI, or using certain programming processes and avoiding others. There might be a liturgy of dedication or short worship service for when the AI is activated or used for the first time in worship. Social media feeds or a virtual environment might allow the congregation to digitally “lay hands” on the AI as a part of the service. Another, similar liturgy or service could accompany its deactivation or replacement.

The key is not reducing the artificial intelligence to a purely functional role, but providing a means for worshipers to recognize and express God’s initiative and their own response in setting it apart for a holy purpose. The means to accomplish this should engage both the congregation and the AI in appropriate ways; should invoke God’s presence and blessing; and should be surrounded by a theological narrative that illuminates how and why it is being set apart for a holy purpose.

Such a way of identifying and acknowledging AI as holy is an invitation for the worshiping community to consider that all AI is part of God’s creation and can be directed toward God and God’s purposes in the world.


Dr. Brian Sigmon

Brian Sigmon is an acquisitions editor at The United Methodist Publishing House, where he edits books, Bible studies, and official resources for The United Methodist Church. He has a Ph.D. in Old Testament Studies from Marquette University, where he taught courses in the Bible and theology. Brian finds great joy in thinking deeply about the Christian faith and helping people of all backgrounds deepen their understanding of Scripture. He lives in Kingston Springs, Tennessee with his wife Amy and their three children.

How is AI Hiring Impacting Minorities? Evidence Points to Bias

Thousands of resumes, few positions, and limited time. The story repeats itself in companies globally. Growing economies and open labor markets, re-shaped by platforms like LinkedIn and Indeed and by a growing recruiting industry, have opened the labor market wide. While this has expanded opportunity, it has left employers with the daunting task of sifting through the barrage of applications, cover letters, and resumes thrown their way. Enter AI, with its promise to optimize and smooth out the pre-selection process. That sounds like a sensible solution, right? Yet, how is AI hiring impacting minorities?

Not so fast – a 2020 paper summarizing data from multiple studies found that using AI for both selection and recruiting has shown evidence of bias. As in the case of facial recognition, AI for employment is also showing disturbing signs of bias. This is a concerning trend that requires attention from employers, job applicants, citizens, and government entities.


Using AI for Hiring

The MIT podcast In Machines We Trust goes under the hood of AI hiring. What they found was surprising and concerning. Firstly, it is important to highlight how widespread algorithms are in every step of hiring decisions. One of the most common ways is through initial screening games that narrow the applicant pool for interviews. These games come in many forms that vary depending on vendor and job type. What they have in common is that, unlike traditional interview questions, they do not directly relate to skills relevant to the job at hand.

AI game creators claim that this indirect method is intentional. This way, the candidate is unaware of how the employer is testing them and therefore cannot “fake” a suitable answer. Instead, many of these tools are trying to see whether the candidate exhibits traits of past successful employees for that job. Therefore, employers claim they get a better measurement of the candidate fit for the job than they would otherwise.

How about job applicants? How do they fare when AI decides who gets hired? More specifically, how does AI hiring impact minorities’ prospects of getting a job? On the other side of the interview table, job applicants do not share the vendors’ enthusiasm. Many report an uneasiness at not knowing the tests’ criteria. This unease can in itself severely impact their interview performance, creating additional unnecessary anxiety. More concerning is how these tests impact applicants with disabilities. Today, thanks to legal protections, job applicants do not have to disclose disabilities during the interview process. Some of these tests may now force them to do so earlier.

What about Bias?

Unfortunately, bias does not happen only for applicants with disabilities. Other minority groups are also feeling the pinch. The MIT podcast tells the story of an African-American woman who, though she had the prerequisite qualifications, did not get a single callback after applying to hundreds of positions. She eventually found a job the old-fashioned way – getting an interview through a network acquaintance.

The problem of bias is not entirely surprising. If machine learning models are using past data from job functions that are already fairly homogenous, they will only reinforce and duplicate this reality. Without examining the initial data or applying intentional weights, the process will continue to perpetuate this problem. Hence, when AI is trained on majority-dominated datasets, the algorithms will tend to look for majority traits at the expense of minorities.

This becomes a bigger problem when AI applications go beyond resume filtering and selection games. They are also part of the interviewing process itself. AI hiring companies like Hirevue claim that their algorithms can predict the success of a candidate by their tone of voice in an interview. Other applications summarize taped interviews to select the most promising candidates. While these tools can clearly help speed up the hiring process, their biased tendencies can severely exclude minorities from the process.

The Growing Need for Regulation

AI in hiring is here to stay, and it can be very useful. In fact, the majority of hiring managers state that AI tools are saving them time in the hiring process. Yet, the biggest concern is how these tools are bending power dynamics toward employers – both sides should benefit from their applications. AI tools are now tipping the balance toward employers by shortening the selection and interview time.

If AI for employment is to work for human flourishing, then it cannot simply be a time-saving tool for employers. It must also expand opportunity for under-represented groups while also meeting the constant need for a qualified labor force. Above all, it cannot claim to be a silver bullet for hiring but instead an informative tool that adds a data point for the hiring manager.

There is growing consensus that AI in hiring cannot go on unregulated. Innovation in this area is welcome but expecting vendors and employers to self-police against disparate impact is naive. Hence, we need intelligent regulation that ensures workers get a fair representation in the process. As algorithms become more pervasive in the interviewing process, we must monitor their activity for adverse impact.
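One concrete form such monitoring could take is a routine adverse-impact audit. The sketch below is purely illustrative (the data and group labels are made up) and applies the EEOC’s familiar four-fifths rule of thumb: compare each group’s selection rate to the highest group’s rate and flag anything that falls below 80% of it.

```python
from collections import Counter

def selection_rates(records):
    """records: list of (group, was_selected) pairs."""
    applied = Counter(g for g, _ in records)
    selected = Counter(g for g, s in records if s)
    return {g: selected[g] / applied[g] for g in applied}

def adverse_impact_flags(rates, threshold=0.8):
    """Flag groups whose selection rate falls below the four-fifths
    (80%) threshold relative to the highest-rate group."""
    top = max(rates.values())
    return {g: (rate / top) < threshold for g, rate in rates.items()}

# Hypothetical audit data: (group, selected?)
records = ([("A", True)] * 40 + [("A", False)] * 60
           + [("B", True)] * 20 + [("B", False)] * 80)

rates = selection_rates(records)
print(rates)                        # {'A': 0.4, 'B': 0.2}
print(adverse_impact_flags(rates))  # {'A': False, 'B': True}
```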

Job selection is not a trivial activity but is foundational for social mobility. We cannot afford to get this wrong. Unlike psychometric evaluations used in the past, which are backed by scientific and empirical evidence, these new tools are mostly untested. When AI vendors claim they can predict job success by tone of voice or facial expression, the burden is on them to prove the fairness of their methods. Should AI decide who gets hired? Given the evidence so far, the answer is no.

The Glaring Omission of Religious Voices in AI Ethics

Pew Research released a report predicting the state of AI ethics in 10 years. The primary question was: will AI systems have robust ethical principles focused on the common good by 2030? Of the over 600 experts who responded, two-thirds did not believe this would happen. Yet, this was not the most surprising thing about the report. Looking over the selection of respondents, no clergy or academics of religion were included. In the burgeoning field of AI ethics research, we are missing the millennia-old wisdom of religious voices.

Reasons to Worry and Hope

In a follow-up webinar, the research group presented the 7 main findings from the survey. They are the following:

Concerning Findings

1. There is no consensus on how to define AI ethics. Context, nature and power of actors are important.

2. Leading actors are large companies and governments that may not have the public interest at the center of their considerations.

3. AI is already deployed through opaque systems that are impossible to dissect. This is the infamous “black box” problem pervasive in most machine learning algorithms.

4. The AI race between China and the United States will shape the direction of development more than anything else. Furthermore, there are rogue actors that could also cause a lot of trouble.

Hopeful Findings

5. AI systems design will be enhanced by AI itself which should speed up the mitigation of harmful effects.

6. Humanity has made acceptable adjustments to similar new technologies in the past. Users have the power to bend AI uses toward their benefit.

7. There is widespread recognition of the challenges of AI. In the last decade, awareness has increased significantly resulting in efforts to regulate and curb AI abuses. The EU has led the way in this front.


Widening the Table of AI Ethics with Faith

This eye-opening report confirms many trends we have addressed in this blog. In fact, the very existence of AI Theology is proof of #7, showing that awareness is growing. Yet, I would add another concerning trend to the list above: the narrow group of people at the AI ethics dialogue table. The field remains dominated by academic and industry leaders. However, the impact of AI is so ubiquitous that we cannot afford this lack of diversity.

Hopefully, this is starting to change. A recent New York Times piece outlines the efforts of the AI and Faith network. The group consists of an impressive list of clergy, ethicists, and technologists who want to bring their faith values to the table. They seek to introduce the diverse universe of religious faith into AI ethics, providing new questions and insights for this important task.

If we are to face the challenge of AI, why not start by consulting thousands of years of human wisdom? It is time we add religious voices to the AI ethics table as a purely secular approach will ostracize the majority of the human population.

We ignore them to our peril.

How Big Companies can be Hypocrites about Diversity

Can we trust that big companies are telling the truth, or are they being hypocrites? We can say that the human race is somehow evolving and leaving behind discriminatory practices. Or at least some are trying to. And this is reflected in the market. More and more, companies around the world are being called out on big problems involving racism, social injustice, gender inequality, and even false advertising. But how can we know which changes are real and which are fake? From Ford Motors to Facebook, many companies talk the talk but do not walk the walk.

The rise of Black Lives Matter protests is exposing societies’ crooked and oppressive ways, bringing discussions about systemic and structural racism out in the open. It’s a problem that can’t be fixed with empty promises and window dressing. Trying to solve deep problems isn’t easy and is a sort of “all hands on deck” type of situation. But it’s no longer an option for companies around the world to ignore these issues. That’s when the hypocrisy comes in.

Facebook, Amazon, Ford Motor Company, Spotify, and Google are a few examples of big companies that took a stand against racial inequality on their social media. Most of them also donated money to help the cause. They publicly acknowledged that a change has to be made. It is a start. But it means nothing if this change doesn’t happen inside the company itself.

Today I intend to expose a little bit about Facebook and Amazon’s diversity policies and actions. You can draw your own conclusions.

“We stand against racism and in support of the Black community and all those fighting for equality and justice every single day.” – Facebook

Mark Zuckerberg wrote on his personal Facebook page: “To help in this fight, I know Facebook needs to do more to support equality and safety.” 

On Facebook’s business page, the company lists some actions it is taking to fight inequalities. But they mostly revolve around funding. Of course money is important, but changes to the company’s structure are ignored. Facebook also promised to build a more inclusive workforce by 2023, aiming for 50% of the workforce to come from underrepresented communities and working to double the number of Black and Latino employees in the same timeframe.

But in reality, in the most recent FB Diversity Report, White people hold 41% of all roles, followed by Asians with 44%, Hispanics with 6.3%, Black people with 3.9%, and Native Americans with 0.4%. And even though it may seem that Asians are benefiting from this current situation, White people hold 63% of leadership roles at Facebook, a share that has dropped only 10% since 2014. Well, can you see the difference between the promises and ACTUAL reality?

Another problem FB employees talk about is leadership opportunities. Even though the company started hiring more people of color, it still doesn’t give them the opportunity to grow and occupy more important roles. Former Facebook employees filed a complaint with the Equal Employment Opportunity Commission trying to bring justice for the community. Read more about this case here.

Another big company: Amazon.

Facial recognition technology and police. Hypocrisy or not?

Amazon is also investing in this type of propaganda, creating a “Diversity and Inclusion” page on its website. It also posted tweets about police abuse and the brutal treatment Black Americans are forced to live with. What Amazon didn’t expect is that it would backfire.

Amazon built and sold technology that supports police abuse toward the Black population. In a 2018 study of Amazon’s Rekognition technology, the American Civil Liberties Union (ACLU) found that people of color were falsely matched at a high rate. Matt Cagle, an attorney for the ACLU of Northern California, called Amazon’s support for racial justice “utterly hypocritical.” Only in June of 2020 did Amazon halt sales of this technology to the police, for one year. And in May of 2021, it extended the pause until further notice.

The ACLU acknowledges that Amazon’s halt in selling this technology is a start. But the US government has to “end its use by law enforcement entirely, regardless which company is selling it.” In previous posts, AI Theology talked about bias in facial recognition and algorithmic injustice.

What about Amazon’s workforce?

Another problem Amazon faces is in its workforce. At first sight, white people occupy only 32% of its entire workforce. But that means little, since the best-paid jobs belong to them. Corporate employees are composed of 47% White, 34% Asian, 7% Black, 7% Latino, 3% Multiracial, and 0.5% Native American. The numbers drop drastically when you look at senior leaders, who are composed of 70% White, 20% Asian, 3.8% Black, 3.9% Latino, 1.4% Multiracial, and 0.2% Native American. You can find this data at this link.

What these numbers show us is that minorities are underrepresented in Amazon’s leadership ranks, especially in the best-paid and most influential roles. We need to be alert when big companies say their roles are equally distributed. Sometimes the hypocrisy is there. The roles may be equal, but the pay isn’t.

What can you do about these big companies’ actions?

So if the companies aren’t practicing what they preach, how can we change that?

Numbers show that public pressure can spark change. We should learn not only to applaud well-crafted statements but to demand concrete actions, exposing hypocrisy. We need to call on large companies to address the structural racism that denies opportunities to capable and innovative people of color.

Consultant Monica Hawkins believes that executives struggle to raise diversity in senior management mostly because they don’t understand minorities. She believes that leaders need to expand their social and business circles, since referrals are a key source of important hires, as she mentioned to Reuters.

Another approach companies could consider is, instead of only making generic affirmations, putting out campaigns that recognize their own flaws and challenges and what they are doing to change that reality. This type of action can not only improve a company’s reputation but also press other companies to change as well.

It’s also important that companies keep showing their workforce diversity numbers publicly. That way, we can keep track of changes and see whether they are actually working to improve from the inside.

In other words, does the company talk openly about inequalities? That’s nice. Does it make donations to help social justice organizations? Great. But it’s not enough, not anymore. Inequalities don’t exist just because of financial problems. For companies to thrive and stay alive in the future, they need to start creating an effective plan for changing their own reality.

3 Concerning Ways Israel Used AI Warfare in Gaza

We now have evidence that Israel used AI warfare in the conflict with Hamas. In this post, we’ll cover briefly the three main ways Israel used warfare AI and the ethical concerns they raise. Shifting from facial recognition posts, we now turn to this controversial topic.

Semi-Autonomous Machine Gun Robot

Photos surfaced on the Internet showing a machine-gun-mounted robot patrolling the Gaza wall. The intent is to deter Hamas militants from crossing the border and to deal with civil unrest. Just to be clear, these robots are not fully autonomous: they still have a remote human controller who must make the ultimate striking decision.

Yet, they are more autonomous than drones and other remote-controlled weapons. They can respond to fire or use non-lethal force if challenged by enemy forces. They are also able to maneuver independently around obstacles.

The Israel Defense Forces (IDF) seeks to replace soldier patrols with semi-autonomous technologies like this. From their side, the benefits are clear: less risk for soldiers, cheaper patrolling power, and more efficient control of the border. It is part of a broader strategy of creating a smart border wall.

Less clear is how these Jaguars (as they are called) will do a better job in distinguishing enemy combatants from civilians.

US military Robot similar to the one used in Gaza (from Wikipedia)

Anti-Artillery Systems

Hamas’s primary tactic to attack Israel is through short-range rockets – lots of them. Just in the last conflict, Hamas launched 4,000 rockets against Israeli targets. Destroying them mid-air was a crucial but extremely difficult and costly task.

For decades, the IDF has improved its anti-ballistic capability in order to neutralize this threat. The most recent addition to this defense arsenal is using AI to better predict incoming missiles’ trajectories. By collecting a wealth of data from actual launches, the IDF can train better models. This allows it to use anti-missile projectiles sparingly, leaving rockets headed for uninhabited areas alone.

This strategy not only improves accuracy, which now stands at 90%, but also saves the IDF money. At $50K a pop, the IDF must use anti-missile projectiles wisely. AI warfare is helping Israel save on resources and, hopefully, some lives as well.

Target Intelligence

The wealth of data was useful in other areas as well. IDF used intelligence to improve its targeted strikes in Gaza. Using 3-D models of the Gaza territory, they could focus on hitting weapons depots and missile bases. Furthermore, AI technology was employed for other facets of warfare. For example, military strategists used AI to ascertain the best routes for ground force invasions.

This was only possible because of the rich data coming from satellite imaging, surveillance cameras, and intercepted communications. As this plethora of information flowed in, pattern-finding algorithms were essential in translating it into actionable intelligence.

Employing AI technology clearly gave Israel a considerable military advantage. The casualty numbers on each side speak for themselves: 253 in Palestine versus 12 on the Israeli side. AI warfare was a winner for Israel.

With that said, wars are no longer won on the battlefield alone. Can AI do more than give one side an advantage? Could it actually diminish the human cost of war?


Ethical Concerns with AI warfare

As I was writing this blog, I reached out to Christian ethicist and author of Robotic Persons, Dr. Josh Smith, for reactions. He sent me the following (edited for length) statement:

“The greatest concern I have is that these defensive systems, which most AI weapons systems are, is that they become a necessary evil. Because every nation is a ‘threat’ we must have weapon systems to defend our liberty, and so on. The economic cost is high. Many children in the world die from a lack of clean water and pneumonia. Yet, we invest billions into AI for the sake of security.”

Dr. Josh Smith

I could not agree more. As the case of the IDF illustrates, AI can do a lot to give a military advantage in a conflict. AI warfare is not just about killer robots but encompasses an ecosystem of applications that can improve effectiveness and efficiency. Yet, is that really the best use of AI?

Finally, as we have stated before, AI is best when employed for the flourishing of life. Can that happen in warfare? The jury is still out, but it is hard to reconcile the flourishing of life with an activity focused on death and destruction.

Making a Difference: Facial Recognition Regulation Efforts in the US

As our review of Coded Bias demonstrates, concerns over facial recognition are mounting. In this blog, I’ll outline current efforts at facial recognition regulation while also pointing you to resources. While this post focuses on the United States, this topic has global relevance. If you know of efforts in your region, drop us a note. Informed and engaged citizens are the best weapons to curb FR abuse.

National Level Efforts to Curb Public Use

Bipartisan consensus is emerging on the need to curb big tech power. However, there are many differences in how to address it. The most relevant piece of legislation moving through Congress is the Facial Recognition and Biometric Technology Moratorium Act. If approved, this bill would:

  • Ban the use of facial recognition technology by federal entities, which can only be lifted with an act of Congress
  • Condition federal grant funding to state and local entities on moratoria on the use of facial recognition and biometric technology
  • Stop the use of federal dollars for biometric surveillance systems
  • Prohibit the use of biometric data collected in violation of the Act in any judicial proceedings
  • Empower individuals whose biometric data is used in violation of the Act and allow for enforcement by state Attorneys General

Beyond the issues outlined above, it would allow states and localities to enact their own laws regarding the use of facial recognition and biometric technologies.


What about Private Use?

A glaring omission from the bill above is that it does nothing to curb private companies’ use of facial recognition. While stopping police and judicial use of FR is a step in the right direction, the biggest users of this technology are not the government.

On that front, other bills have emerged but have not gone far. One of them is the National Biometric Information Privacy Act of 2020, cosponsored by Senators Jeff Merkley (D-Ore.) and Bernie Sanders (I-Vt.). This law would make it illegal for corporations to use facial recognition to identify people without their consent. Moreover, they would have to prove they are using it for a “valid business purpose”. It is modeled after a recent Illinois law that spurred lawsuits against companies like Facebook and Clearview.

Another promising effort is Republican Senator Jerry Moran’s Consumer Data Privacy and Security Act of 2021. This bill seeks to establish comprehensive regulation on data privacy that would include facial recognition. In short, the bill would create a federal standard for how companies use personal data and allow consumers to have more control over what is done with their data. On the other side of the aisle, Senator Gillibrand introduced a bill that would create a federal agency to regulate data use in the nation.

Cities have also entered the battle to regulate facial recognition. In 2020, the city of Portland passed a sweeping ban on FR that includes not only public use but also business use in places of “public accommodation”. On the other hand, the state of Washington passed a landmark law that curbs but still allows for the use of the technology. Not surprisingly, the efforts gained support from Seattle’s corporate giants Amazon and Microsoft. Whether that’s a good or bad sign, I’ll let you decide.

What can you do?

What is the best approach? There is no consensus on how to tackle this problem, but leaving it for the market to decide is certainly not a viable option. While consent is key, there are differences over which uses of FR are legitimate. For some, an outright ban is the best option. Others believe it should be highly regulated but still applied to areas like policing. In fact, a majority of Americans are in favor of law enforcement’s use of FR.

The first step is informed engagement. I encourage you to reach out to your Senator and Representative and express your concern over facial recognition. These days, even getting FR regulation on legislators’ radar is a step in the right direction.

Look out for local efforts in your area that are addressing this issue. If none are present, maybe it is your opportunity to be a catalyst for action. While least covered by the media, local laws are often the ones that most impact our lives. Is your police department using FR? If so, what safeguards do they have to avoid racial and gender bias?

Finally, discuss the issue with friends, family, and your social network. One good step is sharing this blog (shameless plug 🙂 ) with others on social media. You can do that using the buttons below. Regardless of where you stand on this issue, it is imperative we widen the conversation on facial recognition regulation.

AI Artistic Parrots and the Hope of the Resurrection

Guest contributor Dr. Scott Hawley discusses the implications of generative models for resurrection. As this technology improves, new works attributed to the dead multiply. How does that square with the Christian hope for resurrection?

“It is the business of the future to be dangerous.”

(Fake) Ivan Illich

“The first thing that technology gave us was greater strength. Then it gave us greater speed. Now it promises us greater intelligence. But always at the cost of meaninglessness.”

(Fake) Ivan Illich

Playing with Generative Models

The previous two quotes are just a sample of 365 fake quotes in the style of philosopher/theologian Ivan Illich, generated by feeding a page’s worth of real Illich quotes from GoodReads.com into OpenAI’s massive language model, GPT-3, and having it continue “writing” from there. The wonder of GPT-3 is that it exhibits what its authors describe as “few-shot learning.” That is, rather than requiring 100+ pages of Illich as older models did, it works with a few Illich quotes. Given two to three original sayings, GPT-3 can generate new quotes that are highly believable.
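For readers curious what few-shot prompting looks like in practice, here is a minimal sketch of the general pattern: a handful of genuine quotes are concatenated into a prompt and the model is asked to continue in the same style. The placeholder strings, prompt wording, and parameter values are illustrative only, and the call shown uses the classic (pre-1.0) OpenAI Python client; newer versions of the library expose a different interface. This is not the exact code behind the fake Illich quotes.

```python
import openai  # classic (pre-1.0) client interface

openai.api_key = "YOUR_API_KEY"  # placeholder

# A few real quotes stand in as the "few shots"; the model continues the pattern.
real_quotes = [
    "First real Illich quote copied from GoodReads goes here.",
    "Second real Illich quote goes here.",
    "Third real Illich quote goes here.",
]

prompt = "Quotes in the style of Ivan Illich:\n\n"
prompt += "\n\n".join(f'"{q}"' for q in real_quotes)
prompt += '\n\n"'  # open a new quotation for the model to complete

response = openai.Completion.create(
    engine="davinci",   # a GPT-3 base model available at the time
    prompt=prompt,
    max_tokens=60,
    temperature=0.9,    # higher temperature yields more varied "quotes"
    stop='"',           # stop at the closing quotation mark
)
print(response.choices[0].text.strip())
```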

Have I resurrected Illich? Am I putting words into the mouth of Illich, now dead for nearly 20 years? Would he (or the guardians of his estate) approve? The answers to these questions are: No, Explicitly not (via my use of the word “Fake”), and Almost certainly not. Even generating them started to feel “icky” after a bit. Perhaps someone with as flamboyant a public persona as Marshall McLuhan would have been pleased to be ― what shall we say, “re-animated“? ― in such a fashion, but Illich likely would have recoiled. At least, such is the intuition of myself and noted Illich commentator L.M. Sacasas, who inspired my initial foray into creating an “IllichBot”:

…and while I haven’t abandoned the IllichBot project entirely, Sacasas and I both feel that it would be better if it posted real Illich quotes rather than fake rehashes via GPT-3 or some other model.

Re-creating Dead Artists’ Work

For the AI Theology blog, I was not asked to write about “IllichBot,” but rather about the story of AI creating Nirvana music in a project called “Lost Tapes of the 27 Club.” This story was originally mis-reported (and is still in the Rolling Stone headline metadata) as “Hear ‘New’ Nirvana Song Written, Performed by Artificial Intelligence,” but really the song was “composed” by the AI system and then performed by a (human) cover band. One might ask, how is this different from humans deciding to imitate other artists?

For example, the artist known as The Weeknd sounds almost exactly like the late Michael Jackson. Greta Van Fleet make songs that sound like Led Zeppelin anew. Songwriters, musicians, producers, and promoters routinely refer to prior work as signifiers when trying to communicate musical ideas. When AI generates a song idea, is that just a “tool” for the human artists? Are games for music composition or songwriting the same as “AI”? These are deep questions regarding “what is art?” and I will refer the reader to Marcus du Sautoy’s bestselling survey The Creativity Code: Art and Innovation in the Age of AI. (See my review here.)

Since that book was published, newer, more sophisticated models have emerged that generate not just ideas and tools but “performance.” The work of OpenAI’s Jukebox effort and artist-researchers Dadabots generates completely new audio such as “Country, in the style of Alan Jackson”. Dadabots have even partnered with a heavy metal band and beatbox artist Reeps One to generate entirely new music. When Dadabots used Jukebox to produce the “impossible cover song” of Frank Sinatra singing a Britney Spears song, they received a copyright takedown notice on YouTube…although it’s still unclear who requested the takedown or why.


Theology of Generative Models?

Where’s the theology angle on this? Well, relatedly, mistyping “Dadabots” as “dadbots” in a Google search will get you stories such as “A Son’s Race to Give His Dying Father Artificial Immortality” in which, like our Fake Ivan Illich, a man has trained a generative language model on his father’s statements to produce a chatbot to emulate his dad after he’s gone. Now we’re not merely talking about fake quotes by a theologian, or “AI cover songs,” or even John Dyer’s Worship Song Generator, but “AI cover Dad.” In this case there’s no distraction of pondering interesting legal/copyright issues, and no side-stepping the “uncomfortable” feeling that I personally experience.

One might try to couch the “uncomfortable” feeling in theological terms, as some sort of abhorrence of “digital” divination. It echoes the Biblical story of the witch of Endor temporarily bringing the spirit of Samuel back from the dead at Saul’s request. It can also relate to age-old taboos about defiling the (memory of) the dead. One could try to introduce a distinction between taboo “re-animation” that is the stuff of multiple horror tropes vs. the Christian hope of the resurrection through the power of God in Christ.

However I would stop short of this, because the source of my “icky” feeling stems not from theology but from a simpler objection to anthropomorphism, the “ontological” confusion that results when people try to cast a generative (probabilistic) algorithm as a person. I identify with the scientist-boss in the digital-Frosty-the-Snowman movie Short Circuit:

“It’s a machine, Schroeder. It doesn’t get pissed off. It doesn’t get happy, it doesn’t get sad, it doesn’t laugh at your jokes. It just runs programs.”

Short Circuit

Materialists, given their requirement that the human mind is purely physical, can perhaps anthropomorphize with impunity. I submit that our present round of language and musical models, however impressively they may perform, are only a “reflection, as in a glass darkly” of true human intelligence. The error of anthropomorphism goes back for millennia; the Christian hope for resurrection, however, addresses being truly reunited with lost loved ones. That means being able to hear new compositions of Haydn, by Haydn himself!

Acknowledgement: The title is an homage to the “Stochastic Parrots” paper of the (former) Google AI ethics team.


Scott H. Hawley is Professor of Physics at Belmont University and a Founding Member of AI and Faith. His writings include the winning entry of FaithTech Institute’s 2020 Writing Contest and the most popular Acoustics Today article of 2020, and have appeared in Perspectives on Science and Christian Faith and The Transhumanism Handbook.

How Coded Bias Makes a Powerful Case for Algorithmic Justice

What do you do when your computer can’t recognize your face? In a previous blog, we explored the potential applications for emotional AI. At the heart of this technology is the ability to recognize faces. Facial recognition is gaining widespread attention for its hidden dangers. This short review of Coded Bias summarizes the story of female researchers who opened the black box of major applications that use FR. What they found is a warning to all of us, making Coded Bias a bold call for algorithmic justice.


Official Trailer

Coded Bias Short Review: Exposing the Inaccuracies of Facial Recognition

The secret is out: FR algorithms are a lot better at recognizing white male faces than those of any other group. The difference is not trivial. Joy Buolamwini, MIT researcher and main character in the film, found that dark-skinned women were misclassified up to 35% of the time, compared to less than 1% for white male faces! Error rates of this level can have life-altering consequences when used in policing, judicial decisions, or surveillance applications.


It all started when Joy was looking for facial recognition software to recognize her face for an art project. She had to put on a white mask in order to be detected by the camera. This initial experience led her down a new path of research. If she was experiencing this problem, who else was, and how would this be impacting others who looked like her? Eventually, she stumbled upon Cathy O’Neil’s Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy, discovering a world of algorithmic activism already underway.

The documentary weaves in multiple cases where FR misclassification is having a devastating impact on people’s lives. Unfortunately, the burden falls mostly on the poor and people of color. From an apartment complex in Brooklyn to the streets of London to a school district in Houston, local activists are mobilizing political energy to expose the downsides of FR. In doing so, the Netflix documentary Coded Bias shows not only the problem but also sheds light on the growing movement that arose to correct it. In that, we can find hope.

If this wasn’t clear before, here it is: watch the documentary Coded Bias multiple times. This one is worth your time.

The Call for Algorithmic Justice

The fight for equality in the 21st century will be centered on algorithmic justice. What does that mean? Algorithms are fast becoming embedded in growing areas of decision-making. From movie recommendations to hiring, cute apps to judicial decisions, self-driving cars to who gets to rent a house, algorithms are influencing and dictating decisions.

Yet, they are only as good as the data used to train them. If that data contains present inequities or is biased toward ruling majorities, the algorithms will inevitably impact minorities disproportionately. Hence, the fight for algorithmic justice starts with the regulation and monitoring of their results. The current lack of transparency in the process is no longer acceptable. While some corporations may not have intended to discriminate, their neglect of oversight makes them culpable.

Because of its ubiquitous impact, the struggle for algorithmic justice is not just the domain of data scientists and lawmakers. Instead, this is a fight that belongs to all of us. In the next blog, I’ll be going over recent efforts to regulate facial recognition. This marks the next step in Coded Bias’s call for algorithmic justice.

Stay tuned.

4 Surprising Ways Emotional AI is Making Life Better

It’s been a long night and you have driven for over 12 hours. The exhaustion is such that you are starting to black out. As your eyes close and your head drops, the car slows down, moves to the shoulder, and stops. You wake up and realize your car saved your life. This is just one of many examples of how emotional AI can do good.

It doesn’t take much to see the ethical challenges of computer emotion recognition. Worst-case scenarios of control and abuse quickly come to mind. In this blog, I will explore the potential of emotional AI for human flourishing. We need to examine these technologies with a holistic view that weighs their benefits against their risks. Hence, here are 4 examples of how affective computing could make life better.

1. Alert distracted drivers

Detecting signs of fatigue or alcohol intoxication early enough can be the difference between life and death. This applies not only to the driver but also to passengers and occupants of nearby vehicles. Emotional AI can detect bleary eyes, excessive blinking, and other facial signs that the driver is losing focus. Once this mental state is detected early, the system can intervene in many ways.

For example, it could alert the driver that they are too tired to drive. It could lower the windows or turn on loud music to jolt the driver into focus. More extreme interventions would include shocking the driver’s hands through the steering wheel, or slowing and stopping the car in a safe area.

As an additional benefit, this technology could also detect other volatile mental states such as anger, mania, and euphoria. This could lead to interventions like changing temperature, music, or even locking the car to keep the driver inside. In effect, this would not only reduce car accidents but could also diminish episodes of road rage.
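To illustrate how such an escalation policy might be structured, here is a toy sketch. The state labels, thresholds, and interventions are all hypothetical; a real system would weigh far more signals (eye-closure duration, steering behavior, lane position, and so on).

```python
from typing import List

# Hypothetical driver states a vision model might report once per second
ALERT, DROWSY, ASLEEP, AGITATED = "alert", "drowsy", "asleep", "agitated"

def choose_intervention(recent_states: List[str]) -> str:
    """Escalate the intervention based on how persistent the risky state is."""
    if ASLEEP in recent_states:
        return "pull over to the shoulder and stop"
    drowsy_share = recent_states.count(DROWSY) / len(recent_states)
    if drowsy_share > 0.5:
        return "lower windows, raise music volume, suggest a rest stop"
    if AGITATED in recent_states:
        return "soften cabin temperature and music"
    return "no intervention"

# Example: ten seconds of readings, mostly drowsy
print(choose_intervention([ALERT, DROWSY, DROWSY, DROWSY, DROWSY,
                           DROWSY, DROWSY, ALERT, DROWSY, ALERT]))
```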

2. Identify Depression in Patients

As those who suffer from depression would attest, the symptoms are not always clear to patients themselves. In fact, some of us can go years suffering the debilitating impacts of mental illness and think it is just part of life. This is especially true for those who live alone and therefore do not have the feedback of another close person to rely on.

Emotional AI trained to detect signs of depression in the face could therefore play an important role in moving clueless patients into awareness. While protecting privacy, in this case, is paramount, adding this to smartphones or AI companions could greatly help improve mental health.

Our faces let out a lot more than we realize. In this case, they may be alerting those around us that we are suffering in silence.

3. Detect emotional stress in workplaces

Workplaces can be toxic environments. In such cases, the fear of retaliation may keep workers from being honest with their peers or supervisors. A narrow focus on production and performance can easily make employees feel like machines. Emotional AI systems embedded through cameras and computer screens could detect a generalized increase in stress by collecting facial data from multiple employees. This in turn could be sent over to responsible leaders or regulators for appropriate intervention.

Is this too invasive? Well, it depends on how it is implemented. Many tracking systems are already present in workplaces, where employee activity on computers and phones is monitored 24/7. Certainly, this could only work in places where there is trust, transparency, and consent. It also depends on who has access to the data. An employee may not be comfortable with their bosses having this data but may agree to cede it to an independent group of peers.

4. Help autistic children socialize in schools

The last example shows how emotional AI can play a role in education. Autistic children process and respond to social cues differently. In this case, emotional AI in devices or a robot could gently teach the child to both interpret and respond to interactions with less anxiety.

This is not an attempt to put therapists or special-needs workers out of a job. It is instead an important enhancement to their essential work. The systems can be there to augment, expand, and inform their work with each individual child. They can also provide a consistency that humans sometimes fail to provide. This is especially important for kids who tend to thrive in structured environments. As in the cases above, privacy and consent must be at the forefront.

These are just a few examples of the promise of emotional AI. As industries start discovering and perfecting emotional AI technology, more use cases will emerge.

How does reading these examples make you feel? Do they sound promising or threatening? What other examples can you think of?

A Beginner’s Guide to Emotional AI, Its Challenges and Opportunities

You walk into your living room, Alexa dims the lights, lowers the temperature, and says: “You look really sad today. Would you like me to play Adele for you?” This could be a reality in a few years. Are we prepared? This beginner’s guide to emotional AI will introduce the technology, its applications and ethical challenges.

We will explore both the opportunities and dangers of this emerging AI application. It is part of the broader discipline of affective computing, which uses different inputs from the human body (e.g., heartbeat, sweat, facial expression, speech, eye movement) to interpret, emulate, and predict emotion. For this piece, we’ll focus on the use of facial expressions to infer emotion.

According to Gartner, by 2022, 10% of smartphones will have affective computing capabilities. The latest Apple phones can already verify your identity through your face. The next step is detecting your mental state through that front camera. Estimates put the emotional AI market at around $36 billion within 5 years. Human emotion detection is no longer a sci-fi pipe dream but a reality poised to transform societies. Are we ready for it?

How does Emotional AI work?

Our beginner’s guide to emotional AI must start by explaining how it works. While this technology is relatively new, its foundation dates back to the mid-nineteenth century, grounded primarily in the idea that humans display universal facial cues for their emotions. Charles Darwin was one of the first to put forth this idea. A century later, American psychologist Paul Ekman further elaborated on it through extensive field studies. Recently, scholars have challenged this universality, and there is now no consensus on its validity. AI entrepreneurs bet that we can find universal patterns. Their endeavors are testing this theory in real time with machine learning.

The first step is “training” a computer to read emotions through a process of supervised learning. This entails feeding in pictures of people’s faces along with labels that define each person’s emotion. For example, one could feed in the picture of someone smiling with the label “happy.” For the learning process to be effective, thousands if not millions of these examples are needed.

The computer then uses machine learning algorithms to detect common patterns across the many examples of each emotion. This enables it to establish a general idea of what each emotion looks like on any face. It is therefore able to assign these emotions to new cases, including faces it has never encountered before.
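As a deliberately simplified illustration of this supervised-learning loop, the sketch below trains an off-the-shelf classifier on face-derived feature vectors paired with emotion labels. The features and labels here are synthetic stand-ins; in a real pipeline the features would come from a face-analysis model and the labels from human annotators.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-ins for face-derived features (e.g., landmark distances or embeddings)
# and their human-assigned emotion labels. Real datasets hold many thousands.
X = rng.normal(size=(600, 64))
y = rng.choice(["happy", "sad", "angry", "neutral"], size=600)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

# "Training" = finding patterns that link facial features to emotion labels
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# The trained model can then label a face it has never seen before
new_face = rng.normal(size=(1, 64))
print(model.predict(new_face))      # e.g., ['neutral']
print(model.score(X_test, y_test))  # near chance (~0.25) on random data
```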


Commercial and Public Applications

As you can imagine, there are manifold applications to this type of technology. For example, one of the greatest challenges in marketing is collecting accurate feedback from customers. Satisfaction surveys are few and far between and often inaccurate. Hence, companies could use emotional AI to capture instantaneous human reactions to an ad or experience not from a survey but from their facial reactions.

Affectiva, a leading company in this technology, already claims it can detect emotions from any face. It collected 10 million expressions from 87 countries, hand-labeled by crowd workers in Cairo. With its recent merger with Smart Eye, the company is poised to become the leader in in-cabin driver mental-state recognition for the automotive industry. This could be a valuable safety feature, detecting when a driver is under the influence, sleepy, or in emotional distress.

More controversial applications include using it for surveillance, as in the case of China’s treatment of the Uighur population. Police departments could use it as a lie-detection device in interrogations. Governments could use it to track the general mood of the population by scanning faces in the public square. Finally, employers could use it as part of the interview process to measure the mental state of an applicant.

Ethical Challenges for Emotional AI

No beginner’s guide to emotional AI would be complete without considering the ethics of its impact. Kate Crawford, a USC research professor, has sounded the alarm on emotional AI. In her recently published book and through a number of articles, she makes the case for regulating this technology. Her primary argument is that using facial recognition to detect emotions is based on shaky science. That is, the overriding premise that human emotion can universally be categorized through a set of facial expressions is faulty. It minimizes a plethora of cultural factors, lending itself to dangerous bias.

This is not just conjecture: a recent University of Maryland study detected an inherent bias that tends to assign more negative emotions to Black faces than to white faces. She also notes that the machine learning process is questionable because it is based on pictures of humans emulating emotions. The examples come from people who were told to make a particular facial expression, as opposed to captures of real reactions. This can lead to an artificial establishment of what a facial representation of emotion should look like rather than real emotional displays.

This is not limited to emotion detection. Instead, it is part of a broader pattern of error in facial recognition. In a 2018 paper, MIT researcher Joy Buolamwini analyzed disparities in the effectiveness of commercial facial recognition applications. She found that misclassification rates for dark-skinned women were up to 34%, compared to 0.8% for white males.


The Sanctity of the Human Face

The face is the window to the human soul. It is the part of the body most identified with an individual’s unique identity. When we remember someone, it is their countenance that shows up in our mind. It is indeed the best indicator of our internal emotional state, which may often betray the very things we speak.

Until recently, interpreting the mystery of our faces was the job of humans and animals. What does it mean to now have machines that can intelligently decipher our inner states? This is certainly a new frontier in the human-AI interface that we must tread carefully, if for no other reason than to respect the sanctity of the human soul. If left for commerce to decide, the process will most likely move faster than we as a society are comfortable with. That is where the calls for regulation are spot on.

Like every technology – the devil is in the details. It would be premature to outlaw the practice altogether, as the city of Portland has done recently. We should, however, limit and monitor its uses – especially in areas where the risk of bias can have a severe adverse impact on the individual. We must also ponder whether we want to live in a society where even our facial expressions are subject to monitoring.