God is NOT like Algorithms: Negating AI’s Absolute Power

In my previous blog post, I discussed the totalitarianism and determinism already created by today's AI, concluding my argument with a distinction between a positive and a negative theology of AI. I also made, without any elaboration, an appeal for the latter. The terminology of this distinction may lead to some confusion. The name "artificial intelligence" is usually applied to computer-based, state-of-the-art algorithms that display behavior or skills formerly thought to belong to human beings alone. Nevertheless, an AI algorithm, and especially the whole array of AI algorithms active online, may exhibit behavior or create an environment whose qualities go beyond the capacity of the human mind and, even more than that, appear "God-like" or are treated as such.

Here enters theological reflection with two of its forms: positive and negative theology, of which the second is less common and more sophisticated than the first. Positive theology describes and discusses God by means of names and positive statements like – to give a few simple examples – “God is spirit”, “God is Lord”, “God is love”, and so on. But, according to negative theology, it is equally true that, by reason of God’s radical otherness and difference from anything in the created world, God can only be spoken of through negative statements: “God is not” or “is unlike” a “spirit” or a “lord” or “love”. Accordingly, these two distinct ways of approaching God can translate into the two following statements: “God is AI” or “God is not AI.”

Taken from unsplash.com

Defining a Positive Theology of AI

Scandalous as it may seem, a positive theology of AI is hardly avoidable, and its subject should be less the miraculous accomplishments of future AI and all the hopes attached to it than the everyday online spectacles of the present. True, the worship of today’s AI scarcely pours out into a profession of its divinity in the manner of the Apostle Thomas when confronted with the risen Christ (“My Lord and my God!” John 20:28), but spending with it the most beatific hours of the day including the first and last waking moments (before going to pee in the morning and after doing so in the evening) certainly qualifies as a life of prayer.

In a sense, the worship of AI does more than prayer to the Christian God could ever do in this life, as AI provides light and nurture in seamless services tailored to every user's interests, quirks, and wishes. Indeed, it casts a spell of bedazzlement on you in powerful alliance with the glamour, sleekness, and even sexiness of design. So it comes to pass that you end up in a city whose sky is created by AI, or, rather, whose sky is AI itself – a sky to which your highest aspirations turn. Could this city and sky possibly be those prophesied by John the Seer in the Apocalypse? "And the city had no need of the sun, neither of the moon, to shine in it: for the glory of God did lighten it…" (Revelation 21:23).

Image by anncapictures from Pixabay

Valiant Resistance or Fruitless Nostalgia?  

But, let’s suppose, there arises an urge in you to resist the city and sky of AI, recognizing that they are not God’s city and God’s sky, that AI is not God, and God is unlike AI – in other words, you negate AI as God. Of course, this is more than an act of logic and goes beyond the scope of a theoretical decision. The moment you realize you have treated AI as God, and you have been wrong, you change your attitude and orientation, and start searching for God elsewhere, outside the realm of AI.

You repent.

This metanoia of sorts leads you to trade your smartphone for nature, opting to live under the real sky. There, you experience real love and friendship outside social media platforms. You may even discard Google Maps and seek to get lost in real cities and find your bearings with the help of old paper maps.

Such actions, however, are not the best negative theology of AI. Do they not exhibit a nostalgia for the past, growing wistful about the sky, the love, the city, and the God of old? Is God nostalgic? Would God set up God’s tent outside the city of AI into which the whole of creation is moving? Have you, searching for God outside the realm of AI, not engaged in an unserious, even dull form of negation?

There must be another way.

In fact, the divine realm empowered by AI carries within itself its own theological negation: moments when its bedazzlement loosens its grip and its divine face undergoes an eclipse – moments that are empty, dull, boring, meaningless, or even full of frustration or anxiety. Such moments are specific to this realm and not just the usual downside of human life. It was, if you are willing to admit it, the proliferation of such moments that made you repudiate the divinity of AI and go searching outside its realm, and not just a sudden thought that occurred to you.

Image by strikers from Pixabay

A Balanced Negative Theology of AI

As a matter of fact, it was not only you; such moments in the midst of all the bedazzlement happen, now and then, to all devotees. Does the ubiquity of such moments mean that all citizens of the city of AI participate in its theological self-negation, and, therefore, that living in it necessarily includes the act of negating it? In a sense, yes, but this is just a ubiquitous and unintended, almost automatic negation, and not the right one. As a rule, the citizens of the city live in the moment and for the moment; they naively live its bedazzlement to the full and suffer its moments of meaninglessness to the full. In doing so, however, they are unfree.

Instead, you are better off living in the city of AI accompanied by a moderate and reserved, yet constant, negation. In this balanced overall experience, you always keep the harrowing moments of emptiness and meaninglessness in mind, so that they no longer quite come to harrow you and, above all, so that AI's bedazzlement no longer gains the upper hand.

As a consequence of your moderate and sustained negation of AI as God (a negative theology of AI), you create a certain distance between you and AI which is nevertheless also a space of curiosity and playfulness. Precisely because you negate it in a theological sense, you can curiously turn towards AI, witness the details of its behavior and also enjoy its responsiveness to your actions. And it is precisely in this dynamic and undecided area of free play with AI, opened up by your negation, that God, defined as to what God is not (not AI) and undefined as to what God is, can be offered a space to enter.  


Gábor L. Ambrus holds a post-doctoral research position in the Theology and Contemporary Culture Research Group at Charles University, Prague. He is also a part-time research fellow at the Pontifical University of St. Thomas Aquinas, Rome. He is currently working on a book on theology, social media, and information technology. His research primarily aims at a dialogue between the Judaeo-Christian tradition and contemporary techno-scientific civilization.

Working for a Better Future: Sustainable AI and Gender Equality

At our February AI Theology Advisory Board meeting, Ana Catarina De Alencar joined us to discuss her research on sustainable AI and gender equality, as well as how she integrates her faith and work as a lawyer specializing in data protection. In Part 1 below, she describes her research on the importance of gender equality as we strive for AI sustainability.

Elias: Ana, thank you for joining us today. Why don't you start by telling us a little about yourself and about your involvement with law and AI?

Ana: Thank you, Elias, for the invitation. It’s very nice to be with you today. I am a lawyer in a big law firm here in Brazil. I work with many startups on topics related to technology. Today I specialize in data protection law. This is a very recent topic for corporations in Brazil. They are learning how to adjust and adapt to these new laws designed to protect people’s data. We consult with them and provide legal opinions about these kinds of topics. I’m also a professor. I have a master’s degree in philosophy of law, and I teach in this field. 

Photo by Sora Shimazaki on Pexels.com

In my legal work, I engage many controversial topics involving data protection and AI ethics. For example, I have a client who wants to implement a facial recognition system that can be used for children and teenagers. From the legal point of view, it can be a considerable risk to privacy even when we see a lot of favorable points that this type of technology can provide. It also can be very challenging to balance the ethical perspective with the benefits that our clients see in certain technologies.

Gender Equality and Sustainable AI

Elias: Thank you. There’s so much already in what you shared. We could have a lot to talk about with facial recognition, but we’ll hold off on that for now. I’d like to talk first about the paper you presented at the conference where we met. It was a virtual conference on sustainable AI, and you presented a paper on gender equality. Can you summarize that paper and add anything else you want to say about that connection between gender equality and sustainable AI?

Ana: This paper came out of research I was doing for Women’s Day, which is celebrated internationally. I was thinking about how I could build something uniting this day specifically and the topic of AI, and the research became broader and broader. I realized that it had something to do with the sustainability issue. 

Sustainability and A Trans-Generational Point of View

When we think of AI and gender, often we don’t think with a trans-generational point of view. We fail to realize that interests in the past can impact interests in the future. Yet, that is what is happening with AI when we think about gender. The paper I presented asks how current technology impacts future generations of women.

The technology offered in the market is biased in a way that creates a less favorable context for women in generations to come. For example, when a natural language processing system sorts resumes, it often selects resumes in a way that favors men over women. Another example is when we personalize AI systems as women or as men, which generates or perpetuates certain ideas about women. Watson from IBM is a powerful tool for business, and we personalize it as a man. Alexa is a tool for helping you out with your day-to-day routine, and we personalize it as a woman. It creates the idea that maybe women are servile, just there to support society in lower tasks, so to speak. I explored other examples in the paper as well.

All of these things together are making AI technology biased and creating ideas about women that can have a negative impact on future generations. It creates a less favorable situation for women in the future.

Reinforcing and Amplifying Bias

Levi: I’m curious if you could give an example of what the intergenerational impact looks like specifically. In the United States, racial disparities persist across generations. Often it is because, for instance, if you’re a Black American, you have a harder time getting high-paying jobs. Then your children won’t be able to go to the best schools, and they will also have a harder time getting high-paying jobs. But it seems to be different with women, because their children may be women or men. So I wonder if you can give an example of what you mean with this intergenerational bias.

Ana: We don’t have concrete examples yet to show that future impact. However, we can imagine how it would shape future generations. Say we use some kind of technology now that reinforces biases–for example, a system for recruiting people that lowers resumes mentioning the word ‘women,’ ‘women’s college,’ or something feminine. Or a system which includes characterization of words related to women–for instance, the word ‘cook’ is related to women, ‘children’ is related to women. If we use these technologies in a broad sense, we are going to reinforce some biases already existing in our society, and we are going to amplify them for future generations. These biases become normal for everybody now and into the future. It becomes more systemic.

Racial Bias

You can use this same thinking for racial bias, too. When you use these apps and collect data, it reinforces systemic biases about race. That's why we have to think ethically about AI, not only legally: we have to build some kind of control into these applications to be sure they do not reinforce and amplify, for the future, what is already really bad in our society.

Levi: There’s actually a really famous case that illustrates this from Harvard Business students. Black students and Asian students sent their applications out for job interviews, and then they sent out a second application where they had whitewashed it. They removed things on their CV that were coded with with their race–for instance, being the president of the Chinese Student Association or president of the Black Student Union, or even specific sports that are racially coded. They found that when they whitewashed their applications, even though they removed all of these accomplishments, they got significantly higher callbacks.

Elias: I have two daughters, ages 12 and 10. If AI tells them that they’re going to be more like Alexa, not Watson, it influences their possibilities. That is intergenerational, because we are building a society for them. I appreciated the paper you presented, Ana, because AI does have an intergenerational impact.

In Part 2 we will continue the conversation with Ana Catarina De Alencar and explore the way she thinks about faith and her work.

How is AI Hiring Impacting Minorities? Evidence Points to Bias

Thousands of resumes, few positions, and limited time. The story repeats itself in companies globally. Growing economies and open labor markets, now re-shaped by platforms like LinkedIn and Indeed and a growing recruiting industry, have opened the labor market wide. While this has expanded opportunity, it has left employers with the daunting task of sifting through the barrage of applications, cover letters, and resumes thrown their way. Enter AI, with its promise to optimize and smooth out the pre-selection process. That sounds like a sensible solution, right? Yet, how is AI hiring impacting minorities?

Not so fast – a 2020 paper summarizing data from multiple studies found evidence of bias in AI used for both selection and recruiting. As in the case of facial recognition, AI for employment is showing disturbing signs of bias. This is a concerning trend that requires attention from employers, job applicants, citizens, and government entities.

Photo by Cytonn Photography on Unsplash

Using AI for Hiring

The MIT podcast In Machines We Trust goes under the hood of AI hiring. What it found was surprising and concerning. Firstly, it is important to highlight how widespread algorithms are at every step of hiring decisions. One of the most common uses is initial screening games that narrow the applicant pool for interviews. These games come in many forms that vary by vendor and job type. What they share in common is that, unlike traditional interview questions, they do not directly relate to skills relevant to the job at hand.

AI game creators claim that this indirect method is intentional. This way, the candidate is unaware of how the employer is testing them and therefore cannot "fake" a suitable answer. Instead, many of these tools try to see whether the candidate exhibits traits of past successful employees in that job. Therefore, employers claim they get a better measurement of the candidate's fit for the job than they would otherwise.

How about job applicants? How do they fare when AI decides who gets hired? More specifically, how does AI hiring impact minorities' prospects of getting a job? On the other side of the interview table, job applicants do not share the vendors' enthusiasm. Many report an uneasiness at not knowing the tests' criteria. This unease can itself severely impact their interview performance, creating additional, unnecessary anxiety. More concerning is how these tests affect applicants with disabilities. Today, thanks to legal protections, job applicants do not have to report disabilities during the interviewing process. Now, some of these tests may force them to disclose them earlier.

What about Bias?

Unfortunately, bias does not happen only to applicants with disabilities. Other minority groups are also feeling the pinch. The MIT podcast tells the story of an African-American woman who, though she had the prerequisite qualifications, did not get a single callback after applying to hundreds of positions. She eventually found a job the old-fashioned way – getting an interview through a network acquaintance.

The problem of bias is not entirely surprising. If machine learning models are trained on past data from job functions that are already fairly homogeneous, they will only reinforce and duplicate this reality. Without examining the initial data or applying intentional weights, the process will continue to perpetuate the problem. Hence, when AI is trained on majority-dominated datasets, the algorithms will tend to look for majority traits at the expense of minorities.
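To make this mechanism concrete, here is a minimal sketch in Python with scikit-learn, using purely synthetic data; the features, numbers, and simulated hiring rule below are hypothetical illustrations, not drawn from any real screening product.

```python
# Hypothetical illustration only: synthetic data, not a real hiring system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Two features per candidate: a skill score and membership in a
# historically under-hired group (1 = under-represented, 0 = majority).
skill = rng.normal(size=n)
group = rng.integers(0, 2, size=n)

# Simulated "past hiring decisions": skilled majority candidates were hired,
# while equally skilled under-represented candidates were hired only 20% of the time.
hired = ((skill > 0) & ((group == 0) | (rng.random(n) < 0.2))).astype(int)

model = LogisticRegression().fit(np.column_stack([skill, group]), hired)
print("learned weights [skill, group]:", model.coef_[0])
# The strongly negative weight on `group` shows the model reproducing the
# historical disparity rather than measuring merit, which is why the text
# calls for examining the training data and applying intentional weights.
```

In a real audit, the same kind of inspection would need to be applied to the actual features and labels a vendor uses, since proxies for group membership can carry the bias even when the explicit attribute is removed.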

This becomes a bigger problem when AI applications go beyond resume filtering and selection games. They are also part of the interviewing process itself. AI hiring companies like Hirevue claim that their algorithms can predict the success of a candidate from their tone of voice in an interview. Other applications summarize taped interviews to select the most promising candidates. While these tools can clearly help speed up the hiring process, their biased tendencies can severely exclude minorities from the process.

The Growing Need for Regulation

AI in hiring is here to stay, and these tools can be very useful. In fact, the majority of hiring managers state that AI tools are saving them time in the hiring process. Yet the biggest concern is how they are bending power dynamics towards employers – both sides should benefit from these applications. AI tools are now tipping the balance toward employers by shortening the selection and interview time.

If AI for employment is to work for human flourishing, then it cannot simply be a time-saving tool for employers. It must also expand opportunity for under-represented groups while also meeting the constant need for a qualified labor force. Above all, it cannot claim to be a silver bullet for hiring but instead an informative tool that adds a data point for the hiring manager.

There is growing consensus that AI in hiring cannot go on unregulated. Innovation in this area is welcome but expecting vendors and employers to self-police against disparate impact is naive. Hence, we need intelligent regulation that ensures workers get a fair representation in the process. As algorithms become more pervasive in the interviewing process, we must monitor their activity for adverse impact.

Job selection is not a trivial activity but is foundational for social mobility. We cannot afford to get this wrong. Unlike the psychometric evaluations used in the past, which are backed by scientific and empirical evidence, these new tools are mostly untested. When AI vendors claim they can predict job success from tone of voice or facial expression, the burden is on them to prove the fairness of their methods. Should AI decide who gets hired? Given the evidence so far, the answer is no.

The Glaring Omission of Religious Voices in AI Ethics

Pew Research released a report predicting the state of AI ethics in 10 years. The primary question was: will AI systems have robust ethical principles focused on the common good by 2030? Of the over 600 experts who responded, two-thirds did not believe this would happen. Yet this was not the most surprising thing about the report. Looking over the selection of respondents, there were no clergy or academics of religion included. In the burgeoning field of AI ethics research, we are missing the millennia-old wisdom of religious voices.

Reasons to Worry and Hope

In a follow-up webinar, the research group presented the 7 main findings from the survey. They are the following:

Concerning Findings

1. There is no consensus on how to define AI ethics. Context, nature and power of actors are important.

2. Leading actors are large companies and governments that may not have the public interest at the center of their considerations.

3. AI is already deployed through opaque systems that are impossible to dissect. This is the infamous “black box” problem pervasive in most machine learning algorithms.

4. The AI race between China and the United States will shape the direction of development more than anything else. Furthermore, there are rogue actors that could also cause a lot of trouble.

Hopeful Findings

5. AI system design will be enhanced by AI itself, which should speed up the mitigation of harmful effects.

6. Humanity has made acceptable adjustments to similar new technologies in the past. Users have the power to bend AI uses toward their benefit.

7. There is widespread recognition of the challenges of AI. In the last decade, awareness has increased significantly, resulting in efforts to regulate and curb AI abuses. The EU has led the way on this front.

Photo by Wisnu Widjojo on Unsplash

Widening the Table of AI Ethics with Faith

This eye-opening report confirms many trends we have addressed in this blog. In fact, the very existence of AI Theology is proof of #7, showing that awareness is growing. Yet I would add another concerning trend to the list above: the narrow group of people at the AI ethics dialogue table. The field remains dominated by academic and industry leaders. However, the impact of AI is so ubiquitous that we cannot afford this lack of diversity.

Hopefully, this is starting to change. A recent New York Times piece outlines the efforts of the AI and Faith network. The group consists of an impressive list of clergy, ethicists, and technologists who want to bring their faith values to the table. They seek to introduce the diverse universe of religious faith into AI ethics, providing new questions and insights for this important task.

If we are to face the challenge of AI, why not start by consulting thousands of years of human wisdom? It is time we add religious voices to the AI ethics table, as a purely secular approach will alienate the majority of the human population.

We ignore them to our peril.

How Big Companies can be Hypocrites about Diversity

Can we trust that big companies are telling the truth, or are they being hypocrites? We can say that the human race is somehow evolving and leaving discriminatory practices behind. Or at least some are trying to. And this is reflected in the market. More and more, companies around the world are being called out on big problems involving racism, social injustice, gender inequality, and even false advertising. But how can we know which changes are real and which are fake? From Ford Motors to Facebook, many companies talk the talk but do not walk the walk.

The rise of the Black Lives Matter protests is exposing society's crooked and oppressive ways, bringing discussions about systemic and structural racism out into the open. It's a problem that can't be fixed with empty promises and window dressing. Trying to solve deep problems isn't easy; it is an "all hands on deck" type of situation. But it's no longer an option for companies around the world to ignore these issues. That's where the hypocrisy comes in.

Facebook, Amazon, Ford Motor Company, Spotify, and Google are a few examples of big companies that took a stand against racial inequality on their social media. Most of them also donated money to help the cause. They publicly acknowledged that a change has to be made. It is a start. But it means nothing if this change doesn't happen inside the company itself.

Today I intend to take a closer look at Facebook's and Amazon's diversity policies and actions. You can draw your own conclusions.

"We stand against racism and in support of the Black community and all those fighting for equality and justice every single day." – Facebook

Mark Zuckerberg wrote on his personal Facebook page: “To help in this fight, I know Facebook needs to do more to support equality and safety.” 

In Facebook’s business page, it claims some actions the company is making to fight inequalities. But it mostly revolves around funding. Of course money is important, but changes regarding the companies structure are ignored. They also promised to build a more inclusive workforce by 2023. They aim for 50% of the workforce to be from underrepresented communities. Also working to double the number of Black and Latino employees in the same timeframe.

But in reality, in the most recent Facebook Diversity Report, White people hold 41% of all roles, followed by Asians with 44%, Hispanics with 6.3%, Black people with 3.9%, and Native Americans with 0.4%. And even though it may seem that Asians are the ones benefiting in this situation, White people hold 63% of leadership roles at Facebook, down only 10% since 2014. Can you see the difference between the promises and the ACTUAL reality?

Another problem Facebook employees talk about is leadership opportunities. Even though the company started hiring more people of color, it still doesn't give them the opportunity to grow and occupy more important roles. Former Facebook employees filed a complaint with the Equal Employment Opportunity Commission trying to bring justice to the community. Read more about this case here.

Another big company: Amazon.

Facial recognition technology and police. Hypocrisy or not?

Amazon is also investing in this type of propaganda, creating a "Diversity and Inclusion" page on its website. It also posted tweets about police abuse and the brutal treatment Black Americans are forced to live with. What Amazon didn't expect is that this would backfire.

Amazon built and sold technology that supports police abuse of the Black population. In a 2018 study of Amazon's Rekognition technology, the American Civil Liberties Union (ACLU) found that people of color were falsely matched at a high rate. Matt Cagle, an attorney for the ACLU of Northern California, called Amazon's support for racial justice "utterly hypocritical." Only in June 2020 did Amazon halt sales of this technology to police, for one year. And in May 2021, it extended the pause until further notice.

The ACLU acknowledges that Amazon's halt in sales is a start, but insists the US government has to "end its use by law enforcement entirely, regardless which company is selling it." In previous posts, AI Theology has discussed bias in facial recognition and algorithmic injustice.

What about Amazon’s workforce?

Another problem Amazon faces is in its workforce. At first sight, white people occupy only 32% of the entire workforce. But that means little, since the best-paid jobs belong to them. Corporate employees are 47% White, 34% Asian, 7% Black, 7% Latino, 3% Multiracial, and 0.5% Native American. The numbers drop drastically when you look at senior leaders, who are 70% White, 20% Asian, 3.8% Black, 3.9% Latino, 1.4% Multiracial, and 0.2% Native American. You can find this data at this link.

What these numbers show is that minorities are underrepresented in Amazon's leadership ranks, especially in the best-paid and most influential roles. We need to be alert when big companies say their roles are equally distributed. Sometimes the hypocrisy is there: the roles may be equal, but the pay isn't.

What can you do about these big companies' actions?

So if the companies aren’t practicing what they preach, how can we change that?

Numbers show that public pressure can spark change. We should learn not only to applaud well-crafted statements but to demand concrete actions, exposing hypocrisy. We need to call on large companies to address the structural racism that denies opportunities to capable and innovative people of color.

Consultant Monica Hawkins believes that executives struggle to raise diversity in senior management mostly because they don't understand minorities. She believes that leaders need to expand their social and business circles, since referrals are a key source of important hires, as she told Reuters.

Another approach companies could consider is, instead of only making generic statements, putting out campaigns that acknowledge their own flaws and challenges and what they are doing to change that reality. This type of action can not only improve the company's rating but also press other companies to change as well.

It’s also important that companies keep showing their workforce diversity numbers publicly. That way, we can keep track of changes and see whether they are actually working to improve from the inside.

In other words, does the company talk openly about inequalities? That's nice. Does it make donations to help social justice organizations? Great. But it's not enough, not anymore. Inequalities don't exist just because of financial problems. For companies to thrive and stay alive in the future, they need to start creating an effective plan for how to change their own reality.

A Beginner’s Guide to Emotional AI, Its Challenges and Opportunities

You walk into your living room, Alexa dims the lights, lowers the temperature, and says: “You look really sad today. Would you like me to play Adele for you?” This could be a reality in a few years. Are we prepared? This beginner’s guide to emotional AI will introduce the technology, its applications and ethical challenges.

We will explore both the opportunities and dangers of this emerging AI application. It is part of the broader discipline of affective computing, which uses different inputs from the human body (e.g., heartbeat, sweat, facial expression, speech, eye movement) to interpret, emulate, and predict emotion. For this piece, we'll focus on the use of facial expressions to infer emotion.

According to Gartner, by 2022, 10% of smartphones will have affective computing capabilities. The latest Apple phones can already verify your identity through your face. The next step is detecting your mental state through that front camera. Estimates put the emotional AI market at around $36 billion within 5 years. Human emotion detection is no longer a sci-fi pipe dream but a reality poised to transform societies. Are we ready for it?

How does Emotional AI work?

Our beginner's guide to emotional AI must start by explaining how it works. While this technology is relatively new, its foundation dates back to the mid-nineteenth century, resting primarily on the idea that humans display universal facial cues for their emotions. Charles Darwin was one of the first to put forth this idea. A century later, American psychologist Paul Ekman elaborated on it through extensive field studies. Recently, scholars have challenged this universality, and there is now no consensus on its validity. AI entrepreneurs bet that we can find universal patterns. Their endeavors are testing this theory in real time with machine learning.

The first step is "training" a computer to read emotions through a process of supervised learning. This entails feeding it pictures of people's faces along with labels that define each person's emotion. For example, one could feed it a picture of someone smiling with the label "happy." For the learning process to be effective, thousands, if not millions, of these examples are needed.

The computer then uses machine learning algorithms to detect common patterns across the many examples of each emotion. This enables it to form a general idea of what each emotion looks like on any face. It is then able to classify new cases, including faces it has never encountered before, into these emotion categories.
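As a rough illustration of this training-and-classification loop, here is a minimal sketch in Python with scikit-learn; the "faces" and labels below are random placeholders standing in for a real labeled photo dataset, and the small network is only an assumption for demonstration.

```python
# Hypothetical sketch of the supervised-learning loop described above;
# the "faces" here are random arrays standing in for preprocessed images.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(42)

n_images, n_pixels = 500, 48 * 48        # e.g., flattened 48x48 grayscale images
faces = rng.random((n_images, n_pixels))  # placeholder pixel data
emotions = rng.choice(["happy", "sad", "angry", "neutral"], size=n_images)  # human-made labels

X_train, X_test, y_train, y_test = train_test_split(
    faces, emotions, test_size=0.2, random_state=0
)

# Training: the model searches for pixel patterns shared by examples of each label.
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=200)
clf.fit(X_train, y_train)

# Classification: the trained model assigns an emotion label to faces it has never seen.
print(clf.predict(X_test[:5]))
```

Real systems replace the random arrays with millions of labeled face photos and the small network with deep models, but the basic loop of labeled examples, pattern learning, and classification of unseen faces is the same.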

By Prawny from Pixabay

Commercial and Public Applications

As you can imagine, there are manifold applications to this type of technology. For example, one of the greatest challenges in marketing is collecting accurate feedback from customers. Satisfaction surveys are few and far between and often inaccurate. Hence, companies could use emotional AI to capture instantaneous human reactions to an ad or experience not from a survey but from their facial reactions.

Affectiva, a leading company using this technology, already claims it can detect emotions from any face. It has collected 10 million expressions from 87 countries, hand-labeled by crowd workers in Cairo. With its recent merger with Smart Eye, the company is poised to become the leader in in-cabin driver mental-state recognition for the automotive industry. This could be a valuable safety feature, detecting when a driver is under the influence, sleepy, or in emotional distress.

More controversial applications include using it for surveillance, as in the case of China's treatment of the Uighur population. Police departments could use it as a lie-detection device in interrogations. Governments could use it to track the general mood of the population by scanning faces in the public square. Finally, employers could use it as part of the interview process to measure the mental state of an applicant.

Ethical Challenges for Emotional AI

No beginner's guide to emotional AI would be complete without considering the ethics of its impact. Kate Crawford, a USC research professor, has sounded the alarm on emotional AI. In her recently published book and through a number of articles, she makes the case for regulating this technology. Her primary argument is that using facial recognition to detect emotions rests on shaky science. That is, the overriding premise that human emotion can be universally categorized through a set of facial expressions is faulty. It minimizes a plethora of cultural factors, lending itself to dangerous bias.

This is not just conjecture: a recent University of Maryland study detected an inherent bias that tends to assign more negative emotions to Black faces than to white faces. Crawford also notes that the machine learning process is questionable because it is based on pictures of humans emulating emotions. The examples come from people who were told to make a particular facial expression rather than from captured real reactions. This can lead to an artificial standard of what a facial representation of emotion should look like rather than real emotional displays.

This is not limited to emotion detection; it is part of a broader pattern of error in facial recognition. In a 2018 paper, MIT researcher Joy Buolamwini analyzed disparities in the effectiveness of commercial facial recognition applications. She found that misclassification rates for dark-skinned women were up to 34%, compared to 0.8% for white males.

Photo by Azamat Zhanisov on Unsplash

The Sanctity of the Human Face

The face is the window to the human soul. It is the part of the body most identified with an individual's unique identity. When we remember someone, it is their countenance that shows up in our mind. It is indeed the best indicator of our internal emotional state, which may often betray the very things we say.

Until recently, interpreting the mystery of our faces was the job of humans and animals. What does it mean to now have machines that can intelligently decipher our inner states? This is certainly a new frontier in the human-AI interface, one we must tread carefully, if for no other reason than to respect the sanctity of the human soul. If left for commerce to decide, the process will most likely move faster than we as a society are comfortable with. That is where the calls for regulation are spot on.

As with every technology, the devil is in the details. It would be premature to outlaw the practice altogether, as the city of Seattle recently did. We should, however, limit and monitor its uses – especially in areas where the risk of bias can have a severe adverse impact on individuals. We must also ponder whether we want to live in a society where even our facial expressions are subject to monitoring.