How Can Facebook Algorithms Be Accountable to Users?

Second AI Theology Advisory Board meeting: Part 1

In our past discussion, safeguarding human dignity emerged as a central concern in AI ethics. In our second meeting, I wanted us to dive deeper into a contemporary use case to see what themes would emerge. To that end, our dialogue centered on the WSJ's recent exposé on Facebook's unwillingness to address problems with its algorithms, even after internal research clearly identified the problems and proposed solutions. While this is a classic case of how the profit imperative can cloud ethical considerations, it also highlights how algorithms can create self-reinforcing vicious cycles that were never intended in the first place.

Here is a summary of our time together:

Elias: Today we are going to focus on social media algorithms. AI directs the feed that everybody sees, and everybody sees something different. What I thought was interesting about the article is that in 2018 the company tried to make changes to improve the quality of engagement, but the changes ended up doing the exact opposite.

I experienced that firsthand. A while back, I noticed that controversial and angry comments were the ones getting attention. So, sometimes I would poke a little bit at members of our community who have more orthodox views of certain things, and that would get more engagement. In doing so, I also reinforced the vicious cycle.

That is where I want to center the discussion. There are so many issues we could discuss about Facebook algorithms: the technical side, the legal side, and the philosophical side. At the end of the day, Facebook algorithms are a reflection of who we are, even if they are run by a few.

These are my initial thoughts. I was wondering if we can start with Davi on the legal side. Some of these findings highlight the need for regulation. The unfathomable power of social media companies can literally move elections. Davi, what are your thoughts on smart regulation? Since shutting these platforms down isn't the answer, what would safeguard human dignity and help put guardrails around companies like Facebook?

By Pixabay

Davi: At least in the US, the internet is a wild west, with hardly any regulation of or influence over big tech. Companies have tried to self-regulate, but they don't have the incentive to really crack down on this. It's always about profits; they talk the talk but don't walk the walk. At the end of the day, the stock price speaks loudest.

In the past, I've been approached by Facebook's recruiters. In the package offered, the base compensation was relatively low compared to industry standards, but they offered life-changing stock grants. Hence, the stock price is key not only to management but to many employees as well.

Oligarchies do not allow serious regulation. As I researched data and privacy regulation, I came across the usual protections against discrimination, and most of the more serious cases are being brought to court through the Federal Trade Commission to protect consumers. Yet regulation is happening in a very haphazard way. There are some task forces in Congress trying to come up with a regulatory framework, but this is mostly in Washington, where it's really hard to get anything done.

Some states are trying to mimic what's happening in Europe and bring in the concept of human dignity through comprehensive privacy laws like Europe's. We now have comprehensive privacy laws in California, Virginia, and Colorado, and every day I review new legislation; last week it was Connecticut. They are all moving toward the European model. This fragmented approach is less than ideal.

Levi: I have a question for Davi. You mentioned European regulation, and from what I understand, because the EU represents such a large constituency, when it shapes privacy policies it seems much easier for companies like FB to conform their overall strategy to EU policy than to make tailored policies just for EU users, or California users, and so on. Do you see that happening, or is the intent to diversify according to different laws?

Davi: I don't think that is happening at Facebook. We can use 2010 as an example, when the company launched a facial recognition tool that would recognize people in photos so you could tag them more easily, increasing interactions. There were a lot of privacy issues with this technology. They shut it down in Europe in 2011, and then in 2013 in the US. Then they relaunched it in the US. Europe has a big, influential regulatory authority, but that is not enough to overcome financial interests.

As a lawyer, I think the best course of action would be for multinationals to base themselves on the strictest law and apply it to everyone the same way. That is the best option legally, but companies can take different approaches.

Agile and Stock Options

Elias: Part of the issue with Facebook algorithms wasn't about disparate impact per se. It was really about a systemic problem of increasing negativity. One thing I thought about is how common sentiment analysis is right now. With natural language processing, you can feed in a text and the computer can say whether the person is happy or mad; it's mostly binary, negative or positive. The stock market has a safety switch for when market value drops too fast: at a certain threshold, the switch shuts the whole thing down. So I wonder, why can't we have a negativity shut-off switch? If the platform is just off the charts with negativity, why don't we shut it down? Shut down FB for 30 minutes and let people live their lives outside of the platform. That's just one idea for addressing this problem.
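To make the idea concrete, here is a minimal sketch of what such a shut-off switch could look like, modeled on a stock-market circuit breaker. All names, thresholds, and window sizes here are hypothetical illustrations, not anything Facebook actually runs; the sketch assumes some upstream model already scores each post's sentiment between -1.0 (negative) and 1.0 (positive).

```python
# Hypothetical sketch of a "negativity shut-off switch" for a feed.
# Assumes an upstream sentiment model scoring each post in [-1.0, 1.0].
from collections import deque


class NegativityCircuitBreaker:
    """Pause the feed when rolling average sentiment drops below a
    threshold, analogous to a stock-market circuit breaker."""

    def __init__(self, window_size=1000, threshold=-0.4, cooldown_posts=5000):
        self.scores = deque(maxlen=window_size)  # rolling window of recent scores
        self.threshold = threshold               # trip level for average sentiment
        self.cooldown = cooldown_posts           # how long the pause lasts
        self.paused_for = 0                      # posts remaining in the pause

    def record(self, sentiment: float) -> bool:
        """Record one post's sentiment; return True if the feed should pause."""
        if self.paused_for > 0:
            self.paused_for -= 1
            return True
        self.scores.append(sentiment)
        window_full = len(self.scores) == self.scores.maxlen
        average = sum(self.scores) / len(self.scores)
        if window_full and average < self.threshold:
            self.paused_for = self.cooldown  # trip the breaker
            return True
        return False
```

In practice the pause would likely be time-based (say, 30 minutes) rather than counted in posts, but the shape of the mechanism is the same: measure aggregate sentiment, trip at a threshold, cool down, resume.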

I want to invite the members who come from a technical background. What are some technical solutions to this problem?

Maggie: I feel like a lot of it comes from ingrained agile methodologies gone wrong: the focus on the short term, on iteration, and on OKRs, which came out of Google, where you set outlandish goals and just push for the short term. Echoing what Davi said about stock options at big companies: when you look at the compensation packages, it's mostly about stock.

You have to get people thinking about the long term, but these people are also invested in these companies. Thinking long term isn't the only answer, but I think a lot of the problems have to do with the short-term iteration built into the process.

Noreen: Building on what Maggie said, a lot of it is the stock, in the sense that they don't want to employ the number of people they would have to in order to really take care of this. Right now, AI is just not at a point where it can work by itself. It's a good tool, but turning it loose on its own isn't sufficient. If they really are going to get a handle on what's happening on the platform, they need to hire a lot more people to work hand in hand with artificial intelligence, using it as a tool, not as a substitute for moderators who actually work with this material.

Maggie: I think Amazon would just blow up if you said that, because one of their core tenets is getting rid of humans.

But of course, AI doesn't need stock options; it's cheap. I think this is why they are going in this direction. This is an excellent example of something I've written over and over: AI is a great tool but not a substitute for human beings. They would need to hire a lot of people to work with the programs, tweaking them and overseeing them, and that is what they don't want to do.

From Geralt on Pixabay

Transparency and Trade Secrets

Brian: This reminded me of how Facebook algorithms are, in a way, using humans: taking our participation and using it as data to train themselves. Yet there's no transparency that would help me participate in a way that's more constructive.

Thinking about the article Elias shared from the Wall Street Journal, consider the idea of ranking posts on Facebook. I had no idea that a simple like would add 1 point to a post's rank while a heart would add 5 points. If I had known that, I might have been more intentional about using hearts instead of thumbs up to increase positivity. Just that simple bit of transparency, letting us know how our actions affect what we see and how they affect the algorithm, could go a long way.
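For illustration, here is a toy version of the weighted ranking Brian describes. The like and heart weights come from the WSJ reporting quoted above; the other reaction types and weights are assumptions added to round out the sketch, not Facebook's actual values.

```python
# Toy weighted engagement ranking: each reaction type adds a different
# number of points to a post's score. Like = 1 and heart ("love") = 5
# follow the WSJ example; the rest are hypothetical placeholders.
REACTION_WEIGHTS = {
    "like": 1,
    "love": 5,
    "angry": 5,     # hypothetical: emoji reactions weighted like "love"
    "comment": 15,  # hypothetical: deeper interactions score higher still
}

def engagement_score(reactions: dict) -> int:
    """Sum weighted reaction counts into a single ranking score."""
    return sum(REACTION_WEIGHTS.get(kind, 0) * count
               for kind, count in reactions.items())

# A post with 100 likes scores 100, while 25 angry reactions score 125:
# under this weighting, outrage can outrank quiet approval.
print(engagement_score({"like": 100}))   # 100
print(engagement_score({"angry": 25}))   # 125
```

Under weights like these, a feed ranked purely by score will systematically favor whatever provokes the strongest reactions, which is exactly the vicious cycle discussed above.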

Davi: The challenge with transparency is trade secrets. Facebook's algorithms are an example of a trade secret. Companies aren't going to open up on their own, but we could provide them with support and protection while at the same time requiring this transparency. Technology companies tend to be very protective of their intellectual property; it is their gold, the only thing they have. This is a point where laws could really help both sides. Legislation that fosters transparency while still safeguarding trade secrets would go a long way.

Maggie: Companies are very protective of this information. And I would gamble that they don't fully know how it works themselves, because developers do not document things well. So even people inside might not know.

Frantisek: I'm from Europe, and from what I see, regulation here at the national level, or directly from the EU, has a lot to do with taxation. They are trying to figure out how to tax big companies like Google, Amazon, and Facebook. It's an attempt to ensure the money doesn't end up somewhere else but actually gets taxed and paid in the EU. This is a big issue right now.

In relation to theology, this issue of regulation is connected with the issue of authority. We are also dealing with authority in theology and in all churches. Who is deciding what is useful and what isn’t?

Now the question is, what is the tradition of social networks? As we talk about Facebook algorithms, there is a tradition. Can we always speak of tradition? How is this tradition shaped?              

From the perspective of the user, I think there might be an issue with anonymous users. The problem with social media is that you can make up a nickname, make a mess, and get away with anything; at least, that is what people think. But social networks are just an extension of normal life, and as a Christian I try to act the same way in cyberspace as I do in normal life. Jesus' commandment for life is to treat your neighbors well.

China and the EU jump ahead of the US in the Race for Ethical AI

Given the crucial role AI technologies are playing in the defense industry, it is no secret that leading nations will be seeking the upper hand. The US, China, and the EU have all put forth plans to guide and prioritize research in this area. At AI Theology, we are interested in a very different kind of race, one that is less about technological supremacy and more about ensuring the flourishing of life. We call it the race for ethical AI.

What are governments doing to minimize, contain, and limit the harm of AI applications? Looking at the leaders in this area, two show signs of progress while one is still sitting on the sidelines. Though through different methods, China and the EU have taken decisive action to start addressing the challenge. The US has been awfully quiet, even though most leading AI organizations have their headquarters on its soil.

The EU Tackles Facial Recognition

Last week, the European Parliament passed a ground-breaking resolution curbing the use of AI for mass surveillance and predictive policing based on behavioral data. The document calls out companies like Clearview for their controversial use of facial recognition in law enforcement and lists many examples in which this technology has erroneously targeted minorities. The legislative body also calls for greater transparency and human involvement in decisions that come from algorithms.

Photo by Maksim Chernishev on Unsplash

While not legally binding, this is a first and important step in regulating the use of computer vision in law enforcement. It is part of a bigger effort by the EU to draft regulations addressing multiple applications of AI. In this sense, the multi-state body becomes a pioneer in attempting to place guardrails on AI use, possibly setting the standard for other countries to follow.

Even though ethical AI is not limited to regulation, government action can have a sweeping impact in curbing abuse and protecting the vulnerable. It also sends a strong signal to companies acting in this space that more accountability is on its way. This will likely force big tech and AI start-ups to take a hard look at how they develop products and deliver their services. In short, good legislation can be a catalyst for the type of change we need. In this way, the EU leaps forward in the race for ethical AI.

China Takes Steps towards Ethical AI

On the other side of the world, another AI leader put forth guidelines on the use of the technology. The document outlines principles for governing algorithms, protecting privacy, and giving users more autonomy over their data. Beyond its significance, it is notable that the guidelines include language about making the technology "people-oriented" and appeal to common values.

Photo by Timothée Gidenne on Unsplash

The guidelines for ethical AI are part of a broader effort to rein in big tech power within the world's most populous nation. Earlier this year, the government published a policy to better control recommendation algorithms on the Internet. This and other measures send a strong signal to China's budding digital sector that the government is watching and will hold them accountable. Such moves also contribute to a centralization of power in the government that many Western societies would not be comfortable with. In this case, however, they seem to align with the public good.

Regardless of how they will be implemented, it is notable that China is at the forefront in publishing such guidelines. It shows that Beijing takes the threat of AI misuse seriously, at least when it is perpetrated by business enterprises.

Fragmented US Efforts

What about the North American AI leader? Unfortunately, to date, there is no sweeping national effort to address AI abuse in the US. This is not to say that nothing is happening. States like California and Illinois are working on legislation around data privacy and AI surveillance. Biden's chief science advisor recently called for an AI Bill of Rights. In a previous blog, I outlined US efforts to address bias in facial recognition as well.

Yet nothing concrete has happened at the national level. The best we got was a former FB employee's account of the company's reluctance to curb AI abuse. It made for great television, but no sweeping legislation followed.

If there is a race for ethical AI, the North American competitor is behind. If this trend continues, AI ethics will be at the mercy of large company boardrooms in the Yankee nation. Company boards are never free of conflicts of interest, as the next quarter's profit often takes precedence over human flourishing.

Self-regulation has not worked. It is time we move towards more active government intervention for the sake of the common good. This is a race the US cannot afford to sit out. It is time to hop on the track.

Placing Human Dignity at the Center of AI Ethics

In late August we had our kick-off Zoom meeting of the Advisory Board, the first of the monthly meetings where we will explore the intersection of AI and spirituality. The idea is to gather scholars, professionals, and clergy to discuss this topic from a multi-faceted view. In this blog, we publish a short summary of our first conversation. The key theme that emerged was a concern for safeguarding and upholding human dignity as AI becomes embedded in growing spheres of our lives. This preoccupation must inhabit the center of all AI discussions and be the guiding principle for laws, business practices, and policies.

Question for Discussion: What, in your perspective, is the most pressing issue on AI ethics in the next three to five years? What keeps you up at night?

Brian Sigmon: The values from which AI is being developed, and their end goals. What is AI oriented for? Usually in the US, it’s oriented towards profit, not oriented to the common good or toward human flourishing. Until you change the fundamental orientation of AI’s development, you’re going to have problems.

AI is so pervasive in our lives that we cannot escape it. We don't always understand the logic behind it; it often sits beneath the surface, intertwined with many other issues. For example, when I go on social media, AI controls what I see on my feed. It does not optimize for making me a better person but instead maximizes clicks and revenue. That, to me, is the key issue.

Elias Kruger: Thank you, Brian. To add some color to that, since the pandemic, companies have increased their investment in AI. This in turn is creating a corporate AI race that will further ensure the encroachment of AI across multiple industries. How companies execute this AI strategy will deeply shape our lives, not just here in the US but globally.

Photo by Chris Montgomery on Unsplash

Frantisek Stech: Coming from Eastern Europe, one of the greatest issues is the abuse of AI by authoritarian, non-democratic regimes for human control. In other words, it is the relationship between AI control and human freedom. Another practical problem is how people are afraid of losing their jobs to AI-driven machines.

Elias Kruger: Thanks Frantisek, as you know we are aware of what is happening in China with the merging of AI and authoritarian governments. Can you tell us a little bit about your area of the world? Is AI more government-driven or more corporate?

Frantisek Stech: In the Czech Republic, we belong to the EU, and therefore to the West. So it is very much corporate-driven. Yet we are very close to our Eastern neighbors, and we are watching closely how things develop in Belarus and especially China, as they will inevitably impact our region of the world.

However, this does not mean we are free from danger here. There is the issue of election manipulation, which started with the Cambridge Analytica scandal and the issues with the presidential elections in the US. Now we are approaching elections in the EU, so there is a lot of discussion about how AI will be used for manipulation. So when people hear AI, they often associate it with politics, and they think they are already being manipulated if they buy a phone with facial recognition. We have to be cautious but not completely afraid.

Ben Day: I often ponder this question of how AI, or technology in general, relates to dignity and individual human flourishing. When we aggregate and manipulate data, we strip out individual human dignity, which is a Christian virtue, and begin to see people as compilations of manipulable data. It is really a threat to ontology, to our very sense of being. In effect, it is an assault on human dignity through AI.

Going further, I am interested in how AI encroaches on our sense of identity. Algorithms govern my entire exposure to media and news; beyond that, AI impacts our whole social ecosystem, online and offline. What does that have to do with the nature of my being?

I often say that I have a very low view of humanity. I don't think human beings are that great. And so, I fear that AI can manipulate the worst parts of human nature. That is an encroachment on human dignity.

In the Episcopal church, we believe that serving Christ is intimately connected with upholding the dignity of human beings. So, if we turn a blind eye while human dignity is being manipulated, my Christian praxis compels me by moral obligation to do something about it.

Photo by Liv Merenberg on Unsplash

Elias Kruger: Can you give us a specific example of how this plays out?

Ben Day: Let me give you one example of how this affected my ministry. I removed myself from most social media in October 2016 because of what I was witnessing. I saw members of my church sparring on the internet, attacking each other's dignity and intellect over politicized issues. The vitriol was so pervasive that I encountered a moral dilemma: as a priest, it is my duty to deny the sacrament to those who are in unrepentant sin.

So I would face parishioners only hours after these online spars and wonder whether I should offer them the sacrament. I was facing this conundrum as a result of algorithms manipulating feeds to foster angry engagement because it leads to profit. It virtually puts the church at odds with how these companies pursue profit.

Levi Checketts: I lived in Berkeley for many years, and the cost of living there was really high. This was because many people who worked in Silicon Valley or in San Francisco were moving there. The influx of well-to-do professionals raised home prices in the area, forcing less fortunate existing residents to move out.

So, there is all this money going into AI. Of the five biggest companies by market cap, three are in Silicon Valley and two in the Seattle area. Tech professionals often lack full awareness of the impact their work is having on the rest of the world. For example, a few years back, a tech employee wrote an op-ed complaining about having to see "disgusting" homeless people on his way to work when he was paying so much for rent.

What I realized is that there is a massive disconnect between humanity and the people making decisions for companies larger than many countries' economies. My biggest concern is that the people in charge of AI have many blind spots. They are unable to empathize with those who are suffering or even to notice the realities of systems that breed oppression and poverty. To them, there is always a technical fix. Many lack the humility to listen to other perspectives; coming mainly from male Asian and White backgrounds, they are often opposed to perspectives that challenge their work.

There have been high-profile cases recently like Google firing a black female researcher because she spoke up about problems in the company. The question that Ben mentioned about human dignity in AI is very pressing. If we want to address that, we need people from different backgrounds making decisions and working to develop these technologies.

Furthermore, if we define AI as a being that makes strictly rational decisions, what about people who do not fit that mold?

The key questions are where do we locate this dignity and how do we make sure AI doesn’t run roughshod over humanity?

Davi Leitão: These were all great points that I was not thinking about before. Thank you for sharing this with us.

All of these are important questions, and they drive the need for regulation and laws that will steer profit-driven corporations onto the right path. All of the privacy and data security laws stand on a set of principles written in 1980 by the OECD. New laws look to these principles and put them into practice. They are there to inform and safeguard people from bias.

My question is: what are the blind spots in the FIPs (fair information principles) that fail to account for the new issues technology has brought in? This question casts a wide net, but it can help guide many of the new laws to come. This is the only way to make companies care about human dignity.

Right now, there is a proliferation of state laws. But this brings another problem: consumers in states that have privacy laws can suffer discrimination from companies based in other states. Therefore, there is a need for a uniform federal set of principles and laws about privacy in the US. The inconsistency between state laws keeps lawyers in business but ultimately harms the average citizen.

Elias Kruger: Thanks for this perspective. I think it would be a good takeaway for the group to look for blind spots in these principles. AI is about algorithms and data, and data is fundamental. If we don't handle it correctly, we can't fix it with algorithms.

My two cents: of all AI applications, the one that concerns me most is facial recognition for surveillance and law enforcement. I don't think there is any other application where a mistake can have a more devastating impact on the victim. When AI wrongly incriminates someone because an algorithm confused their face with the actual perpetrator's, the individual loses their freedom. There is no way to recover from that.

This application calls for immediate regulation that puts human dignity at the center of AI so we can prevent serious problems in the future.

Thanks everybody for your time.