In late August we held the kick-off Zoom meeting of our Advisory Board, the first of the monthly meetings where we will explore the intersection of AI and spirituality. The idea is to gather scholars, professionals, and clergy to discuss this topic from multiple perspectives. In this blog post, we publish a short summary of our first conversation. The key theme that emerged was a concern for safeguarding and upholding human dignity as AI becomes embedded in ever more spheres of our lives. This preoccupation must sit at the center of all AI discussions and be the guiding principle for laws, business practices, and policies.
Question for Discussion: What, in your perspective, is the most pressing issue in AI ethics over the next three to five years? What keeps you up at night?
Brian Sigmon: The values from which AI is being developed, and its end goals. What is AI oriented toward? In the US, it is usually oriented toward profit, not toward the common good or human flourishing. Until you change the fundamental orientation of AI's development, you're going to have problems.
AI is so pervasive in our lives that we cannot escape it. We don't always understand the logic behind it; it often sits beneath the surface, intertwined with many other issues. For example, when I go on social media, AI controls what I see in my feed. It does not optimize for making me a better person but instead maximizes clicks and revenue. That, to me, is the key issue.
Elias Kruger: Thank you, Brian. To add some color to that, since the pandemic, companies have increased their investment in AI. This in turn is creating a corporate AI race that will further ensure the encroachment of AI across multiple industries. How companies execute this AI strategy will deeply shape our lives, not just here in the US but globally.
Frantisek Stech: Coming from Eastern Europe, one of the greatest issues is the abuse of AI by authoritarian, non-democratic regimes for human control. In other words, it is the relationship between AI control and human freedom. Another practical problem is that people are afraid of losing their jobs to AI-driven machines.
Elias Kruger: Thanks, Frantisek. As you know, we are aware of what is happening in China with the merging of AI and authoritarian government. Can you tell us a little bit about your area of the world? Is AI more government-driven or more corporate-driven?
Frantisek Stech: In the Czech Republic, we belong to the EU, and therefore to the West. So it is very much corporate-driven. Yet we are very close to our Eastern neighbors, and we are watching closely how things develop in Belarus and especially in China, as they will inevitably impact our region of the world.
However, this does not mean we are free from danger here. There is the issue of election manipulation that began with the Cambridge Analytica scandal and the controversies around the presidential elections in the US. Now we are approaching elections in the EU, so there is a lot of discussion about how AI will be used for manipulation. When people hear about AI, they often associate it with politics; they think they are already being manipulated if they buy a phone with facial recognition. We have to be cautious but not completely afraid.
Ben Day: I often ponder this question of how AI, or technology in general, relates to dignity and individual human flourishing. When we aggregate and manipulate data, we strip out individual human dignity, which is a Christian virtue, and begin to see people as compilations of manipulable data. It is really a threat to ontology, to our very sense of being. In effect, it is an assault on human dignity through AI.
Going further, I am interested in the question of how AI encroaches on our sense of identity. That is, how algorithms govern my entire exposure to media and news. Not just that, but AI shapes our whole social ecosystem, online and offline. What does that have to do with the nature of my being?
I often say that I have a very low view of humanity. I don't think human beings are that great. And so I fear that AI can manipulate the worst parts of human nature. That is an encroachment on human dignity.
In the Episcopal Church, we believe that serving Christ is intimately connected with upholding the dignity of human beings. So if we are turning a blind eye to human dignity being manipulated, then my Christian praxis compels me by moral obligation to do something about it.
Elias Kruger: Can you give us a specific example of how this plays out?
Ben Day: Let me give you one example of how this affected my ministry. I removed myself from most social media in October 2016 because of what I was witnessing. I saw members of my church sparring on the internet, attacking each other's dignity and intellect over politicized issues. The vitriol was so pervasive that I encountered a moral dilemma: as a priest, it is my duty to deny the sacrament to those who are in unrepentant sin.
So I would face parishioners only hours after these spars online and wonder whether I should offer them the sacrament. I was facing this conundrum as a result of algorithms manipulating feeds to foster angry engagement because it leads to profit. It virtually puts the church at odds with how these companies pursue profit.
Levi Checketts: I lived in Berkeley for many years, and the cost of living there was really high. That was because many people who worked in Silicon Valley or San Francisco were moving there. The influx of well-to-do professionals raised home prices in the area, forcing less fortunate existing residents to move out.
So there is all this money going into AI. Of the five biggest companies by market cap, three are in Silicon Valley and two are in the Seattle area. Tech professionals often do not have full awareness of the impact their work is having on the rest of the world. For example, a few years back, a tech employee wrote an op-ed complaining about having to see disgusting homeless people on his way to work when he was paying so much in rent.
What I realized is that there is a massive disconnect between humanity and the people making decisions for companies that are larger than many countries' economies. My biggest concern is that the people in charge of and controlling AI have many blind spots: an inability to empathize with those who are suffering, or even to notice the realities of systems that breed oppression and poverty. To them, there is always a technical fix. Many lack the humility to listen to other perspectives, come mainly from male Asian and White backgrounds, and are often opposed to perspectives that challenge their work.
There have been high-profile cases recently, like Google firing a Black female researcher because she spoke up about problems in the company. The question Ben raised about human dignity in AI is very pressing. If we want to address it, we need people from different backgrounds making decisions and developing these technologies.
Furthermore, if we define AI as a being that makes strictly rational decisions, what about people who do not fit that mold?
The key questions are: where do we locate this dignity, and how do we make sure AI doesn't run roughshod over humanity?
Davi Leitão: These were all great points that I had not been thinking about before. Thank you for sharing them with us.
All of these are important questions, and they drive the need for regulation and laws that will steer profit-driven corporations onto the right path. All of the privacy and data security laws stand on a set of principles written by the OECD in 1980. These laws look to those principles and put them into practice. They are there to inform people and safeguard them from bias.
My question is: what are the blind spots in the FIPs (fair information principles) that fail to account for the new issues technology has brought in? The problem casts a wide net, but answering it can help guide many of the new laws to come. This is the only way to make companies care about human dignity.
Right now, there is a proliferation of state laws. But this brings another problem: customers in states that have privacy regulations can face discrimination from companies in other states. Therefore, there is a need for a uniform federal set of principles and laws about privacy in the US. The inconsistency between state laws keeps lawyers in business but ultimately harms the average citizen.
Elias Kruger: Thanks for this perspective. I think a good takeaway for the group would be to look for blind spots in these principles. AI is about algorithms and data. Data is fundamental: if we don't handle it correctly, we can't fix the problem with algorithms.
My two cents: when it comes to AI applications, the one that concerns me most is facial recognition for surveillance and law enforcement. I don't think there is any other application where a mistake can have such a devastating impact on the victim. When AI wrongly incriminates someone of a crime because an algorithm confused their face with the actual perpetrator's, the individual loses their freedom. There is no way to recover from that.
This application calls for immediate regulation that puts human dignity at the center of AI so we can prevent serious problems in the future.
Thanks everybody for your time.