Working for a Better Future: Sustainable AI and Gender Equality


At our February AI Theology Advisory Board meeting, Ana Catarina De Alencar joined us to discuss her research on sustainable AI and gender equality, as well as how she integrates her faith and work as a lawyer specializing in data protection. In Part 1 below, she describes her research on the importance of gender equality as we strive for AI sustainability.

Elias: Ana, thank you for joining us today. Why don’t you start by telling us a little about yourself and your involvement with law and AI?

Ana: Thank you, Elias, for the invitation. It’s very nice to be with you today. I am a lawyer in a big law firm here in Brazil. I work with many startups on topics related to technology. Today I specialize in data protection law. This is a very recent topic for corporations in Brazil. They are learning how to adjust and adapt to these new laws designed to protect people’s data. We consult with them and provide legal opinions about these kinds of topics. I’m also a professor. I have a master’s degree in philosophy of law, and I teach in this field. 


In my legal work, I engage with many controversial topics involving data protection and AI ethics. For example, I have a client who wants to implement a facial recognition system to be used with children and teenagers. From a legal point of view, it can pose a considerable risk to privacy, even when we see many favorable points that this type of technology can provide. It can also be very challenging to balance the ethical perspective with the benefits that our clients see in certain technologies.

Gender Equality and Sustainable AI

Elias: Thank you. There’s so much already in what you shared. We could have a lot to talk about with facial recognition, but we’ll hold off on that for now. I’d like to talk first about the paper you presented at the conference where we met. It was a virtual conference on sustainable AI, and you presented a paper on gender equality. Can you summarize that paper and add anything else you want to say about that connection between gender equality and sustainable AI?

Ana: This paper came out of research I was doing for International Women’s Day. I was thinking about how I could build something connecting that day specifically with the topic of AI, and the research became broader and broader. I realized that it had something to do with the issue of sustainability.

Sustainability and a Trans-Generational Point of View

When we think of AI and gender, we often fail to take a trans-generational point of view. We fail to realize that decisions made in the past can impact interests in the future. Yet that is exactly what is happening with AI when we think about gender. The paper I presented asks how current technology impacts future generations of women.

The technology offered in the market is biased in a way that creates a less favorable context for women in generations to come. For example, when a natural language processing system sorts resumes, it often selects them in a way that favors men over women. Another example is how we personify AI systems as women or as men, which generates or perpetuates certain ideas about women. Watson from IBM is a powerful tool for business, and we personify it as a man. Alexa is a tool for helping you with your day-to-day routine, and we personify it as a woman. This creates the idea that maybe women are servile, there only to support society in lower tasks, so to speak. I explored other examples in the paper as well.

All of these things together are making AI technology biased and creating ideas about women that can have a negative impact on future generations, leaving women in a less favorable situation in the future.

Reinforcing and Amplifying Bias

Levi: I’m curious whether you could give an example of what the intergenerational impact looks like specifically. In the United States, racial disparities persist across generations: if you’re a Black American, for instance, you have a harder time getting high-paying jobs, so your children can’t go to the best schools, and they in turn have a harder time getting high-paying jobs. But it seems different with women, because their children may be women or men. So I wonder if you can give an example of what you mean by this intergenerational bias.

Ana: We don’t have concrete examples yet that show the future impact, but we can imagine how it would shape future generations. Say we use some kind of technology now that reinforces biases: for example, a recruiting system that downranks resumes mentioning words like ‘women’ or ‘women’s college,’ or a system whose learned word associations tie certain words to women, so that ‘cook’ is related to women and ‘children’ is related to women. If we use these technologies broadly, we are going to reinforce biases that already exist in our society, and we are going to amplify them for future generations. These biases become normal for everybody, now and into the future. The bias becomes more systemic.
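To make the mechanism concrete, here is a minimal, hypothetical sketch in Python. It is not any real vendor’s system, and the toy resumes and scoring rule are invented for illustration; it only shows how a screener trained on historically male-skewed hiring data can end up penalizing a word like ‘womens’ without anyone programming that rule explicitly:

```python
from collections import Counter

# Toy "historical" data: past resumes and whether the candidate was hired.
# The imbalance below mirrors a male-dominated hiring history.
history = [
    ("software engineer captain chess club", True),
    ("software engineer hackathon winner", True),
    ("software engineer womens college coding club", False),
    ("software engineer womens chess team captain", False),
]

hired_words = Counter()
rejected_words = Counter()
for text, hired in history:
    (hired_words if hired else rejected_words).update(text.split())

def score(resume: str) -> int:
    """Naive score: words frequent among past hires add points,
    words frequent among past rejections subtract them."""
    return sum(hired_words[w] - rejected_words[w] for w in resume.split())

# 'womens' never appears among past hires, so any resume containing it
# scores lower: the skew in the data has silently become a rule.
print(score("software engineer chess captain"))         # 0
print(score("software engineer womens chess captain"))  # -2
```

Because the word never appears among past hires, the model treats it as a negative signal. Deployed at scale, that preference shapes who gets hired next, which in turn becomes the next round of ‘historical’ data, the amplification loop Ana describes.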

Racial Bias

You can apply this same thinking to racial bias. When these applications are used and collect data, they reinforce systemic biases about race. That’s why we have to think about AI ethically, not only legally: we have to build controls into these applications to make sure they do not reinforce and amplify, into the future, what is already deeply wrong in our society.

Levi: There’s actually a really famous study from Harvard Business School that illustrates this. Black and Asian students sent out applications for job interviews, and then sent out a second, whitewashed version of each application. They removed items on their CVs that were coded with their race, for instance, being president of the Chinese Student Association or the Black Student Union, or playing sports that are racially coded. They found that the whitewashed applications, even with all of those accomplishments removed, got significantly more callbacks.

Elias: I have two daughters, ages 12 and 10. If AI tells them that they’re going to be more like Alexa than Watson, it shapes their sense of what is possible. That is intergenerational, because we are building the society they will live in. I appreciated the paper you presented, Ana, because AI does have an intergenerational impact.

In Part 2 we will continue the conversation with Ana Catarina De Alencar and explore the way she thinks about faith and her work.
