In the second part of our summary of the November AITAB meeting, we explored AI for good in Europe, industry, and academia. This final post closes out the series. In it, you will find insights ranging from biblical interpretation and law to liberation theology and differing definitions of data.
Model Training and Biblical Wisdom
Brian: I’ll jump to the question of theological and biblical frameworks for AI for good. Something Scott said sparked a direction I hadn’t considered but that could be an interesting resource. When Scott mentioned taking one model that was already trained and then training it differently, that opened up exciting new avenues of meaning about how AI is formed and what inputs guide its development. Everything Scripture has to say about how people are formed can potentially guide the way we do machine learning.
The book of Proverbs and the Bible’s other Wisdom literature address the way people are formed, the way sound instruction can shape people. What’s really interesting is that these books approach wisdom across many dimensions, touching all aspects of our lives. Wisdom means not only insight or understanding but also practical skill, morality, experience, and sound judgment. That multivalence is important. We as people are formed by many different inputs: we don’t exist as discrete bundles of attributes. I’m not only a student; I’m also a person from my family and a member of my local community. Those things overlap and can’t be easily separated. I don’t stop being from middle Tennessee when I enter the classroom. So teaching and learning must take account of this overlap and strive for integration, the formation of the whole person. Wisdom literature exemplifies that in some respects.
Elias: That’s a great point. One of the biggest searches of my life has been the search for integration. Even AI Theology started as a journey to integrate my work in machine learning with my theological and Christian faith. I love when we start seeing these connections. It may seem a little awkward at first: how can the book of Proverbs connect to machine learning? But as you stay with it, eventually something comes up that you hadn’t thought about.
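For readers curious about the technique Scott alluded to, taking an already-trained model and training it further on new data is commonly called fine-tuning. Here is a minimal sketch, assuming PyTorch and torchvision are installed; the new dataset and its loader are hypothetical:

```python
# Fine-tuning sketch: start from a model already trained on ImageNet,
# then retrain only its final layer on new data, the "training it
# differently" Scott described.
import torch
import torch.nn as nn
import torchvision

model = torchvision.models.resnet18(weights="DEFAULT")  # already-trained model

# Freeze what the model has already learned...
for param in model.parameters():
    param.requires_grad = False

# ...and replace only the final layer for the new task.
model.fc = nn.Linear(model.fc.in_features, 2)  # e.g., a hypothetical 2-class task

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def fine_tune(new_loader, epochs=3):
    """`new_loader` is a hypothetical DataLoader over the new data;
    what goes into it is what 'forms' the resulting model."""
    model.train()
    for _ in range(epochs):
        for images, labels in new_loader:
            optimizer.zero_grad()
            loss = loss_fn(model(images), labels)
            loss.backward()
            optimizer.step()
```

The detail worth noticing is that the frozen layers carry the model’s prior formation while the retrained layer carries the new instruction; both shape the outcome, much as Brian describes overlapping inputs shaping a person.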
Liberation Theology and Employment Law
Levi: Through a different lens than what Brian was talking about: I’m working on a book project right now around the question of the “preferential option for the poor,” a Catholic term that comes from liberation theology. As you typically hear and study it in theology, it means being voices for the voiceless and champions of justice.
Yet one of the biggest problems overlooked within this perspective is the recognition that the poor experience the world differently. The dignity of the poor is typically overlooked in societies where the identities of the ruling class are the ones that get imposed.
You mentioned the question of bias. We know that, for the most part, something like facial recognition bias isn’t there because the programmers thought, “I hate people from different races, so I’ll make sure this technology doesn’t work for them.” Most of the time, it’s because they weren’t aware of these problems. And that happens when you are part of the dominant group.
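One concrete way this kind of unnoticed bias gets surfaced is disaggregated evaluation: measuring a model’s accuracy separately for each group rather than as a single overall number. A minimal sketch, with all three input lists hypothetical:

```python
# Disaggregated evaluation: a single overall accuracy can hide
# large gaps between groups.
from collections import defaultdict

def accuracy_by_group(predictions, labels, groups):
    """Per-group accuracy for parallel lists of predictions,
    true labels, and each example's group tag."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for pred, label, group in zip(predictions, labels, groups):
        total[group] += 1
        correct[group] += int(pred == label)
    return {g: correct[g] / total[g] for g in total}

# Hypothetical results: 70% accuracy overall, but that single number
# hides that one group is served far worse than the other.
preds  = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
truth  = [1, 1, 0, 1, 0, 1, 0, 1, 0, 1]
groups = ["a", "a", "a", "a", "a", "a", "b", "b", "b", "b"]
print(accuracy_by_group(preds, truth, groups))  # {'a': 1.0, 'b': 0.25}
```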
When we look at the people who write about the preferential option for the poor, they are people who aren’t poor. In the same way, AI currently has, has had, and will continue to have the bias of the people who program it. And these people are mostly upper-middle-class white men; even in places outside Western countries, they are still mostly men.
The way AI works is based on the data it receives. If the data is supplied by white men, it’s going to be data they have curated. But if you bring in data from different people, you will have different perspectives. And that diversity of perspective has great potential. When I listen to people from different countries, backgrounds, and socioeconomic classes, I can be sympathetic, but I won’t ever understand fully.
If AI is trained on data from people of different backgrounds, it can potentially be a better advocate for them. One of the great advantages is that we tend to regard AI as objective while dismissing the perspective of outcasts as jaded. It’s harder to say that a computer’s outputs and ideas don’t reflect the realities of the poor. This is one of the great opportunities: to take the theological concept of the preferential option for the poor and make it a preferential option “of the poor,” instead of only on their behalf.
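Levi’s point about whose data trains the model has a direct technical analogue: when one group’s data dominates, the model echoes it. A minimal sketch of one common mitigation, reweighted sampling, with hypothetical source names and counts:

```python
# Reweighted sampling: give each example a weight inversely
# proportional to how common its source is, so an underrepresented
# perspective is not drowned out during training.
import random

dataset = (
    [("example from majority source", "majority")] * 900
    + [("example from underrepresented source", "minority")] * 100
)

counts = {"majority": 900, "minority": 100}
weights = [1.0 / counts[source] for _, source in dataset]

# Both sources now appear, in expectation, equally often per batch.
balanced_batch = random.choices(dataset, weights=weights, k=32)
```

Reweighting is only a partial fix, of course: it amplifies whatever minority data exists but cannot invent perspectives that were never collected, which connects to Wen’s question below about what counts as data in the first place.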
Davi: I’m trying to navigate these waters as an attorney. The EEOC is the US agency that handles employment discrimination cases, and it just launched an initiative called “listening sessions” to start tackling the problem of algorithmic bias in the law. They are starting to see a lot of cases related to the selection tests and association tests (such as IQ tests) that companies use to hire people. The right answers presuppose a specific cultural background; if you come from a different background and make different associations, you score badly.
These listening sessions are open to the public, so you can see how the US government is dealing with these problems. In Congress and other legal arenas, you still have fewer folks raising these issues, so the law is being decided in the courts through big cases, like Facebook’s facial recognition litigation. AI for good may be creating some democratization through these listening sessions, and I hope this will be one way to get input from people besides big companies alone.
Data as the Voice of the Poor
Wen: I’d like to contribute by reflecting on what others have said and adding some thoughts. Several others have mentioned the democratization of AI through open-source courses and data. Additionally, as AI toolsets become more powerful and simpler to use, they will allow non-technical people who are not data scientists to work with AI. An analogy is how it used to be difficult to create fancy data visualizations, but now there are tools that let anyone create them with just a few clicks. As AI tools do increasingly more, the role of data scientists will change in the years to come.
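Wen’s analogy already holds for today’s AI toolkits: applying a trained model can take only a few lines. An illustrative sketch, assuming the Hugging Face transformers library is installed:

```python
# A high-level pipeline lets a non-specialist apply a trained model
# without knowing its internals (a default model downloads on first use).
from transformers import pipeline

classifier = pipeline("sentiment-analysis")
print(classifier("These tools keep getting easier to use."))
# e.g., [{'label': 'POSITIVE', 'score': 0.99...}]
```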
Scott mentioned that a lot of AI tools come from ivory-tower and/or homogeneous groups of model developers, and a lot of bias is encoded in those tools. Levi mentioned that AI algorithms and training data tend to favor upper-class white men and overlook the experiences of the poor.
When we think about amplifying the views and voices of the poor, I’d like to speak from my data strategy perspective: How are we defining “data”? What data is collected? Who collects the data? How is the data structured?
Most people who work with data think of spreadsheets, tables, and numbers. We also need to think about qualitative data, the kind found in social science and anthropology, as well as audio and visual data, such as footage from self-driving cars, selfies, and surveillance cameras.
How can these datasets be used to serve poor and marginalized communities? For example, spf.io is a platform that captions and translates live events and church services into many languages, including less common languages; this increases the accessibility of informative content for people in lesser-known communities.
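spf.io’s internals aren’t public, so the following is only a toy sketch of the general caption-then-translate pattern such platforms use, assuming the Hugging Face transformers library; the model and language choices are illustrative:

```python
# Caption-then-translate pattern: speech -> text -> another language.
from transformers import pipeline

transcribe = pipeline("automatic-speech-recognition", model="openai/whisper-small")
translate = pipeline("translation_en_to_fr")  # swap in the target language

def caption_and_translate(audio_path):
    english_text = transcribe(audio_path)["text"]           # live captions
    return translate(english_text)[0]["translation_text"]   # translated captions
```

The pipeline itself is straightforward; the harder problem is the one Wen highlights, supporting less common languages, precisely because training data for them is scarce.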
I want to widen this conversation on data. There are things we don’t currently collect as data, things that are happening but aren’t being captured, such as someone’s intuition in making decisions. We also need to explore the realm of epistemology: what are knowledge and information? And what categories haven’t we considered yet?