China and the EU Jump Ahead of the US in the Race for Ethical AI

Given the crucial role AI technologies are playing in the defense industry, it is no secret that leading nations will be seeking the upper hand. The US, China, and the EU have all put forth plans to guide and prioritize research in this area. At AI Theology, we are interested in a very different kind of race: one that is less about technological supremacy and more about ensuring the flourishing of life. We call it the race for ethical AI.

What are governments doing to minimize and contain the harm of AI applications? Looking at the leaders in this area, two show signs of making progress while one is still sitting on the sidelines. Though through different methods, China and the EU have both taken decisive action to start addressing the challenge. The US has been awfully quiet in this area, even as most leading AI organizations have their headquarters on its soil.

The EU Tackles Facial Recognition

Last week, the European Parliament passed a ground-breaking resolution curbing the use of AI for mass surveillance and predictive policing based on behavioral data. In its language, the document calls out companies like Clearview for their controversial use of facial recognition in law enforcement. It also lists many examples in which this technology has erroneously targeted minorities. The legislative body also calls for greater transparency and human involvement in decisions driven by algorithms.

Photo by Maksim Chernishev on Unsplash

While not legally binding, the resolution is a first and important step in regulating the use of computer vision in law enforcement. It is part of a bigger EU effort to draft AI regulations addressing multiple applications of the technology. In this sense, the multi-state body becomes a pioneer in placing guardrails on AI use, possibly setting the standard for other countries to follow.

Even though ethical AI is not limited to regulation, government action can have a sweeping impact in curbing abuse and protecting the vulnerable. It also sends a strong signal to companies acting in this space that more accountability is on its way. This will likely force big tech and AI start-ups to take a hard look at how they develop products and deliver their services. In short, good legislation can be a catalyst for the type of change we need. In this way, the EU leaps forward in the race for ethical AI.

China Takes Steps towards Ethical AI

On the other side of the world, another AI leader put forth guidelines on the use of the technology. The document outlines principles for governing algorithms, protecting privacy, and giving users more autonomy over their data. Beyond its significance as a first step, it is notable that the guidelines include language about making the technology “people-oriented” and appeal to common values.

Photo by Timothée Gidenne on Unsplash

The guidelines for ethical AI are part of a broader effort to rein in big tech’s power within the world’s most populous nation. Earlier this year, the government published a policy to better control recommendation algorithms on the Internet. This and other measures send a strong signal to China’s budding digital sector that the government is watching and will hold companies accountable. Such moves also contribute to a centralization of power in the government that many Western societies would not be comfortable with. In this case, however, they seem to align with the public good.

Regardless of how these guidelines are implemented, it is notable that China is at the forefront in publishing them. It shows that Beijing is taking the threat of AI misuse seriously, at least when it is perpetrated by business enterprises.

Fragmented Efforts in the US

What about the North American AI leader? Unfortunately, to date, there is no sweeping national effort to address AI abuse in the US. This is not to say that nothing is happening. States like California and Illinois are working on legislation on data privacy and AI surveillance. Biden’s chief science advisor recently called for an AI Bill of Rights. In a previous blog, I outlined US efforts to address bias in facial recognition as well.

Yet, nothing concrete has happened at a national level. The best we got was a former Facebook employee’s account of the company’s reluctance to curb AI abuse. It made for great television, but no sweeping legislation followed.

If there is a race for ethical AI, the North American competitor is behind. If this trend continues, AI ethics will be at the mercy of large company boardrooms in the Yankee nation. Company boards are never free of conflict of interest as the next quarter’s profit often takes precedence over human flourishing.

Self-regulation has not worked. It is time we move towards more active government intervention for the sake of the common good. This is a race the US cannot afford to sit out. It is time to hop on the track.

How Big Companies Can Be Hypocrites about Diversity

Can we trust that big companies are telling the truth, or are they being hypocrites? Humanity is slowly evolving and leaving discriminatory practices behind, or at least some of us are trying to. And this reflects on the market. More and more, companies around the world are being called out on big problems involving racism, social injustice, gender inequality, and even false advertising. But how can we know which changes are real and which are fake? From Ford Motors to Facebook, many companies talk the talk but do not walk the walk.

The rise of the Black Lives Matter protests is exposing society’s crooked and oppressive ways, bringing discussions about systemic and structural racism out into the open. It’s a problem that can’t be fixed with empty promises and window dressing. Solving deep problems isn’t easy; it’s an “all hands on deck” situation. But ignoring these issues is no longer an option for companies around the world. That’s where the hypocrisy comes in.

Facebook, Amazon, Ford Motor Company, Spotify, and Google are a few examples of big companies that took a stand against racial inequality on their social media. Most of them also donated money to help the cause. They publicly acknowledged that a change has to be made. It is a start. But it means nothing if that change doesn’t happen inside the company itself.

Today I intend to expose a little bit about Facebook’s and Amazon’s diversity policies and actions. You can draw your own conclusions.

“We stand against racism and in support of the Black community and all those fighting for equality and justice every single day.” - Facebook

Mark Zuckerberg wrote on his personal Facebook page: “To help in this fight, I know Facebook needs to do more to support equality and safety.” 

On Facebook’s business page, the company lists some actions it is taking to fight inequality. But these mostly revolve around funding. Money matters, of course, but changes to the company’s own structure are ignored. Facebook also promised to build a more inclusive workforce by 2023, aiming for 50% of the workforce to come from underrepresented communities and to double the number of Black and Latino employees in the same timeframe.

But in reality, in the most recent Facebook Diversity Report, White people hold 41% of all roles, followed by Asians with 44%, Hispanics with 6.3%, Black people with 3.9%, and Native Americans with 0.4%. And even though it may seem that Asians benefit in this situation, White people hold 63% of leadership roles at Facebook, a drop of only 10% since 2014. Can you see the difference between the promises and the ACTUAL reality?

Another problem Facebook employees point to is leadership opportunities. Even though the company has started hiring more people of color, it still doesn’t give them the opportunity to grow into more important roles. Former Facebook employees filed a complaint with the Equal Employment Opportunity Commission trying to bring justice to the community. Read more about this case here.

Another big company: Amazon.

Facial recognition technology and police. Hypocrisy or not?

Amazon is also investing in this type of publicity, creating a “Diversity and Inclusion” page on its website. It also tweeted about police abuse and the brutal treatment Black Americans are forced to live with. What Amazon didn’t expect is that it would backfire.

Amazon built and sold technology that supports police abuse toward the Black population. In a 2018 study of Amazon’s Rekognition technology, the American Civil Liberties Union (ACLU) found that people of color were falsely matched at a high rate. Matt Cagle, an attorney for the ACLU of Northern California, called Amazon’s support for racial justice “utterly hypocritical.” Only in June 2020 did Amazon halt sales of this technology to police, for one year. And in May 2021, it extended the pause until further notice.

The ACLU acknowledges that Amazon’s halt in sales is a start. But it insists the US government has to “end its use by law enforcement entirely, regardless which company is selling it.” In previous posts, AI Theology talked about bias in facial recognition and algorithmic injustice.

What about Amazon’s workforce?

Another problem Amazon faces is in its workforce. At first sight, White people occupy only 32% of the entire workforce. But that means little, since the best-paid jobs belong to them. Corporate employees are 47% White, 34% Asian, 7% Black, 7% Latino, 3% Multiracial, and 0.5% Native American. Minority representation shrinks drastically among senior leaders: 70% White, 20% Asian, 3.8% Black, 3.9% Latino, 1.4% Multiracial, and 0.2% Native American. You can find this data at this link.

What these numbers show is that minorities are underrepresented in Amazon’s leadership ranks, especially in the best-paid and most influential roles. We need to be alert when big companies say their roles are equally distributed. Sometimes the hypocrisy is there: the roles may be equal, but the pay isn’t.

What can you do about these big companies’ actions?

So if the companies aren’t practicing what they preach, how can we change that?

Numbers show that public pressure can spark change. We should learn not only to applaud well-crafted statements but to demand concrete actions, exposing hypocrisy. We need to call on large companies to address the structural racism that denies opportunities to capable and innovative people of color.

Consultant Monica Hawkins believes that executives struggle to raise diversity in senior management mostly because they don’t understand minorities. As she told Reuters, she believes leaders need to expand their social and business circles, since referrals are a key source of important hires.

Another approach companies could consider: instead of only making generic statements, they could put out campaigns recognizing their own flaws and challenges and what they are doing to change that reality. This type of action can not only improve the company’s reputation but also pressure other companies to change as well.

It’s also important that companies keep showing their workforce diversity numbers publicly. That way, we can keep track of changes and see whether they are actually working to improve from the inside.

In other words, does the company talk openly about inequalities? That’s nice. Does it make donations to help social justice organizations? Great. But it’s not enough, not anymore. Inequalities don’t exist just because of financial problems. For companies to thrive and stay alive in the future, they need to create an effective plan for changing their own reality.

How Coded Bias Makes a Powerful Case for Algorithmic Justice

What do you do when your computer can’t recognize your face? In a previous blog, we explored the potential applications for emotional AI. At the heart of this technology is the ability to recognize faces. Facial recognition (FR) is gaining widespread attention for its hidden dangers. This short review of Coded Bias summarizes the story of female researchers who opened the black box of major applications that use FR. What they found is a warning to all of us, making Coded Bias a bold call for algorithmic justice.


Official Trailer

Coded Bias Short Review: Exposing the Inaccuracies of Facial Recognition

The secret is out: FR algorithms are a lot better at recognizing white male faces than those of any other group. The difference is not trivial. Joy Buolamwini, MIT researcher and the film’s main character, found that dark-skinned women were misclassified up to 35% of the time, compared to less than 1% for white male faces! Error rates of this magnitude can have life-altering consequences when used in policing, judicial decisions, or surveillance applications.

Screen Capture

It all started when Joy was looking for facial recognition software to recognize her face for an art project. She had to put on a white mask in order to be detected by the camera. This initial experience led her down a new path of research: if she was experiencing this problem, who else was, and how was it impacting others who looked like her? Eventually, she stumbled upon Cathy O’Neil’s Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy, discovering a world of algorithmic activism already underway.

The documentary weaves in multiple cases where FR misclassification is having a devastating impact on people’s lives. Unfortunately, the burden falls mostly on the poor and people of color. From an apartment complex in Brooklyn to the streets of London and a school district in Houston, local activists are mobilizing political energy to expose the downsides of FR. In doing so, the Netflix documentary Coded Bias shows not only the problem but also sheds light on the growing movement that arose to correct it. In that, we can find hope.

If this wasn’t clear before, here it is: watch the documentary Coded Bias multiple times. This one is worth your time.

The Call for Algorithmic Justice

The fight for equality in the 21st century will be centered on algorithmic justice. What does that mean? Algorithms are fast becoming embedded in growing areas of decision-making. From movie recommendations to hiring, cute apps to judicial decisions, self-driving cars to who gets to rent a house, algorithms are influencing and dictating decisions.

Yet, they are only as good as the data used to train them. If that data reflects present inequities or is biased towards ruling majorities, the algorithms will inevitably and disproportionately impact minorities. Hence, the fight for algorithmic justice starts with regulating and monitoring their results. The current lack of transparency in the process is no longer acceptable. While some corporations may not have intended to discriminate, their neglect of oversight makes them culpable.
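
To make that monitoring concrete, below is a minimal sketch of what a per-group error audit could look like. The records and group names are hypothetical stand-ins; a real audit, like the Gender Shades study featured in the film, would run a commercial system against a benchmark labeled by skin type and gender.

```python
from collections import defaultdict

# Hypothetical audit records: (demographic_group, true_label, predicted_label)
# drawn from a labeled benchmark dataset.
records = [
    ("darker_female", "match", "no_match"),
    ("darker_female", "match", "match"),
    ("lighter_male", "match", "match"),
    ("lighter_male", "match", "match"),
    # ... a real audit would use thousands of examples per group
]

def error_rates_by_group(records):
    """Return the misclassification rate for each demographic group."""
    totals = defaultdict(int)
    errors = defaultdict(int)
    for group, truth, predicted in records:
        totals[group] += 1
        if truth != predicted:
            errors[group] += 1
    return {group: errors[group] / totals[group] for group in totals}

for group, rate in error_rates_by_group(records).items():
    print(f"{group}: {rate:.1%} error rate")
```

A wide gap between groups, like the 35% versus sub-1% disparity reported above, is exactly the kind of number regulators and watchdogs would need vendors to publish in order to hold them accountable.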

Because of its ubiquitous impact, the struggle for algorithmic justice is not just the domain of data scientists and lawmakers. Instead, this is a fight that belongs to all of us. In the next blog, I’ll go over recent efforts to regulate facial recognition. This marks the next step in Coded Bias’s call for algorithmic justice.

Stay tuned.

A Beginner’s Guide to Emotional AI, Its Challenges and Opportunities

You walk into your living room, Alexa dims the lights, lowers the temperature, and says: “You look really sad today. Would you like me to play Adele for you?” This could be a reality in a few years. Are we prepared? This beginner’s guide to emotional AI will introduce the technology, its applications and ethical challenges.

We will explore both the opportunities and dangers of this emerging AI application. It is part of the broader discipline of affective computing, which uses different inputs from the human body (e.g., heartbeat, sweat, facial expression, speech, eye movement) to interpret, emulate, and predict emotion. For this piece, we’ll focus on the use of facial expressions to infer emotion.

According to Gartner, 10% of smartphones will have affective computing capabilities by 2022. The latest Apple phones can already detect your identity through your face. The next step is detecting your mental state through that front camera. Estimates place the emotional AI market at around $36 billion within 5 years. Human emotion detection is no longer a sci-fi pipe dream but a reality poised to transform societies. Are we ready for it?

How does Emotional AI work?

Our beginner’s guide to emotional AI must start by explaining how it works. While this technology is relatively new, its foundation dates back to the mid-nineteenth century, resting primarily on the idea that humans display universal facial cues for their emotions. Charles Darwin was one of the first to put forth this idea. A century later, American psychologist Paul Ekman further elaborated on it through extensive field studies. Recently, scholars have challenged this universality, and there is now no consensus on its validity. AI entrepreneurs bet that we can find universal patterns. Their endeavors are testing this theory in real time with machine learning.

The first step is “training” a computer to read emotions through a process of supervised learning. This entails feeding it pictures of people’s faces along with labels that define each person’s emotion. For example, one could feed it the picture of someone smiling with the label “happy.” For the learning process to be effective, thousands if not millions of these examples are needed.

The computer then uses machine learning algorithms to detect common patterns across the many examples of each emotion. This enables it to establish a general idea of what each emotion looks like on any face. It can therefore classify new cases, including faces it has never encountered before, into these emotion categories.
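
For illustration, here is a minimal sketch of that train-then-classify loop using scikit-learn. The pixel data and emotion labels are random stand-ins rather than real faces, so the model learns nothing meaningful; the point is the workflow described above: labeled examples in, a pattern-matching model out, then predictions on faces the model has never seen.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Stand-in data: each row represents a 48x48 grayscale face image
# flattened into a feature vector, paired with a human-assigned label.
rng = np.random.default_rng(0)
X = rng.random((1000, 48 * 48))
y = rng.choice(["happy", "sad", "angry"], size=1000)

# Hold out a test set to check how the model handles unseen faces.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)

# "Training": the model searches for pixel patterns common to each label.
model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

# Classification of faces the model has never encountered before.
print(model.predict(X_test[:5]))
print(f"accuracy: {model.score(X_test, y_test):.1%}")
```

Production systems swap the simple classifier for deep neural networks trained on millions of labeled images, but the supervised-learning pattern is the same.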

By Prawny from Pixabay

Commercial and Public Applications

As you can imagine, there are manifold applications for this type of technology. For example, one of the greatest challenges in marketing is collecting accurate feedback from customers. Satisfaction surveys are few and far between and often inaccurate. Hence, companies could use emotional AI to capture instantaneous human reactions to an ad or experience, not from a survey but from facial reactions.

Affectiva, a leading company in this technology, already claims it can detect emotions from any face. It has collected 10M expressions from 87 countries, hand-labeled by crowd workers from Cairo. With its recent merger with Smart Eye, the company is poised to become the leader in in-cabin driver mental state recognition for the automotive industry. This could be a valuable safety feature, detecting when a driver is under the influence, sleepy, or in emotional distress.

More controversial applications include surveillance, as in the case of China’s treatment of the Uighur population. Police departments could use it as a lie-detection device in interrogations. Governments could use it to track the general mood of the population by scanning faces in the public square. Finally, employers could use it in the interview process to measure an applicant’s mental state.

Ethical Challenges for Emotional AI

No beginner’s guide to emotional AI would be complete without considering the ethics of its impact. Kate Crawford, a USC research professor, has sounded the alarm on emotional AI. In her recently published book and through a number of articles, she makes the case for regulating this technology. Her primary argument is that using facial recognition to detect emotions rests on shaky science. That is, the overriding premise that human emotion can be universally categorized through a set of facial expressions is faulty. It minimizes a plethora of cultural factors, lending itself to dangerous bias.

This is not just conjecture: a recent University of Maryland study detected an inherent bias that tends to assign more negative emotions to Black faces than to white faces. Crawford also notes that the machine learning process is questionable because it is based on pictures of humans emulating emotions. The examples come from people who were told to make a given facial expression rather than from captured real reactions. This can artificially establish what a facial representation of emotion should look like, rather than reflecting real emotional displays.

This is not limited to emotion detection. Instead, it is part of a broader pattern of error in facial recognition. In a 2018 paper, MIT researcher Joy Buolamwini analyzed disparities in the effectiveness of commercial facial recognition applications. She found that misclassification rates for dark-skinned women were up to 34%, compared to 0.8% for white males.

Photo by Azamat Zhanisov on Unsplash

The Sanctity of the Human Face

The face is the window to the human soul. It is the part of the body most identified with an individual’s unique identity. When we remember someone, it is their countenance that shows up in our mind. It is indeed the best indicator of our internal emotional state, often betraying the very things we say.

Until recently, interpreting the mystery of our faces was the job of humans and animals. What does it mean to now have machines that can intelligently decipher our inner states? This is certainly a new frontier in the human-AI interface, one we must tread carefully, if for no other reason than to respect the sanctity of the human soul. If left for commerce to decide, the process will most likely move faster than we as a society are comfortable with. That is where the calls for regulation are spot on.

As with every technology, the devil is in the details. It would be premature to outlaw the practice altogether, as the city of Seattle recently did. We should, however, limit and monitor its uses, especially in areas where the risk of bias can have a severe adverse impact on individuals. We must also ponder whether we want to live in a society where even our facial expressions are subject to monitoring.