How Can Facebook Algorithms Be Accountable to Users?


Second AI Theology Advisory Board meeting: Part 1

In our previous discussion, the theme of safeguarding human dignity emerged as a central concern in AI ethics. In our second meeting, I wanted us to dive deeper into a contemporary use case to see what themes would emerge. To that end, our dialogue centered on the WSJ’s recent exposé on Facebook’s unwillingness to address problems with its algorithms, even after internal research had clearly identified the problems and proposed solutions. While this is a classic case of how the profit imperative can cloud ethical considerations, it also highlights how algorithms can create self-reinforcing vicious cycles that were never intended in the first place.

Here is a summary of our time together:

Elias: Today we are going to focus on social media algorithms. AI directs the feed that everybody sees, and everybody sees something different. What I thought was interesting about the article is that in 2018 the company tried to make changes to improve the quality of engagement, but they ended up doing the exact opposite.

I experienced that first hand. A while back, I noticed that controversial and angry comments were the ones getting attention. So sometimes I would poke a little at members of our community who hold more orthodox views on certain things, and that would get more engagement. In doing so, I also reinforced the vicious cycle.

That is where I want to center the discussion. There are so many issues we could discuss about Facebook algorithms: there is the technical side, the legal side, and the philosophical side. At the end of the day, Facebook algorithms are a reflection of who we are, even if they are run by a few.

These are my initial thoughts. I was wondering if we could start with Davi on the legal side. Some of these findings highlight the need for regulation. The unfathomable power of social media companies can literally move elections. Davi, what are your thoughts on smart regulation? Since shutting these platforms down isn’t the answer, what would safeguard human dignity and help put guardrails around companies like Facebook?


Davi: At least in the US, the internet is a wild west, with little regulation of or influence over big tech. Companies have tried to self-regulate, but they don’t have the incentive to really crack down on this. It’s always about profits: they talk the talk but don’t walk the walk. At the end of the day, the stock price speaks loudest.

In the past, I’ve been approached by Facebook’s recruiters. In the package offered, the base compensation was relatively low compared to industry standards, but they offered life-changing stock options. Hence, stock prices are key not only to management but to many employees as well.

Oligarchies do not allow serious regulation. As I researched data and privacy regulation, I came across the usual protections against discrimination, and most of the more serious cases are being brought to court through the Federal Trade Commission to protect consumers. Yet regulation is happening in a very piecemeal way. There are some task forces in Congress working on a regulatory framework, but this is mostly in Washington, where it’s really hard to get anything done.

Some states are trying to mimic what’s happening in Europe and bring in the concept of human dignity through comprehensive privacy laws like Europe’s. We now have comprehensive privacy laws in California, Virginia, and Colorado, and every day I review new legislation; last week it was Connecticut. They are all moving towards the European model. This fragmented approach is less than ideal.

Levi: I have a question for Davi. You mentioned European regulation. From what I understand, because the EU represents such a large constituency, when it shapes privacy policies it seems much easier for companies like Facebook to conform their overall strategy to EU policy rather than making tailored policies just for EU users, California users, and so on. Do you see that happening, or is the intent to diversify according to different laws?

Davi: I don’t think that is happening at Facebook. We can use 2010 as an example, when the company launched a facial recognition tool that would recognize people in photos so you could tag them more easily, increasing interactions. There were a lot of privacy issues with this technology. They shut it down in Europe in 2011, and then in 2013 in the US. Then they relaunched it in the US. Europe has a big, influential authority, but that is not enough to buck financial interests.

As a lawyer, I think the best course of action would be for multinationals to base themselves on the strictest law and apply it to everyone the same way. That is the best option legally, but there can be different approaches.

Agile and Stock Options

Elias: Part of the issue with Facebook algorithms wasn’t about disparate impact per se. It was really about a systemic problem of increasing negativity. One thing I thought about is how common sentiment analysis is right now. With natural language processing, you can take a text and the computer can say “this person is happy” or “this person is mad.” It’s mostly binary: negative or positive. What if we could have, just like the stock market’s circuit breakers that halt trading when prices drop too fast, a safety switch? At a certain threshold, the switch shuts the whole thing down. So I wonder, why can’t we have a negativity shut-off switch? If the platform is just off the charts with negativity, why don’t we just shut it down? Shut down FB for 30 minutes and let people live their lives outside the platform. That’s just one idea for addressing this problem.

I want to invite members that are coming from a technical background. What are some technical solutions to this problem?
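As an aside, the “circuit breaker” Elias describes could be sketched in a few lines: keep a rolling window of per-post sentiment scores and trip a switch when the average turns too negative. Everything specific here (the class name, the window size, the -0.5 threshold, and the score scale of -1.0 to +1.0) is a hypothetical illustration, not anything Facebook actually implements.

```python
# A minimal sketch of a "negativity shut-off switch", modeled on
# stock-market circuit breakers. Sentiment scores are assumed to run
# from -1.0 (very negative) to +1.0 (very positive).
from collections import deque

class NegativityBreaker:
    def __init__(self, window_size=100, threshold=-0.5):
        # deque with maxlen keeps only the most recent scores
        self.scores = deque(maxlen=window_size)
        self.threshold = threshold
        self.tripped = False

    def record(self, sentiment_score):
        """Record one post's sentiment and re-check the breaker."""
        self.scores.append(sentiment_score)
        avg = sum(self.scores) / len(self.scores)
        if avg < self.threshold:
            # In Elias's proposal: pause the feed for 30 minutes.
            self.tripped = True
        return self.tripped
```

Once tripped, this sketch stays tripped until manually reset, mirroring the idea of an enforced cooling-off period rather than an instant bounce-back.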

Maggie: I feel like a lot of it comes from some of the ingrained methodologies of agile gone wrong: the focus on short-term iteration, and in particular the OKRs that emerged from Google, where you focus on outlandish goals and just push for the short term. That echoes what Davi said about stock options at big companies; when you look at the packages, it’s mostly about stock.

You have to get people thinking about the long term. But these people are also invested in these companies. Thinking long term isn’t the only answer, but I think a lot of the problems have to do with the short-term iteration built into the process.

Noreen: Building on what Maggie said, a lot of it is the stock, in the sense that they don’t want to employ the number of people they would have to employ to really take care of this. Right now, AI is just not at a point where it can work by itself. It’s a good tool, but turning it loose on its own isn’t sufficient. If they really are going to get a handle on what’s going on in the platform, they need to hire a lot more people to work hand in hand with artificial intelligence, using it as a tool, not as a substitute for moderators who actually work with this stuff.

Maggie: I think Amazon would just blow up at that suggestion, because one of their core tenets is getting rid of humans.

But of course, AI doesn’t need stock options; it’s cheap. I think this is why they are going in this direction. This is an excellent example of something I’ve written over and over: AI is a great tool but not a substitute for human beings. They would need to hire a bunch of people to work with the programs, tweak the programs, and oversee the programs. And that is what they don’t want to do.


Transparency and Trade Secrets

Brian: This reminded me of how Facebook algorithms use humans, in a way, taking our participation as data to train themselves. However, there’s no transparency that would help me participate in a way that’s more constructive.

Thinking about the Wall Street Journal article Elias shared, there is that idea of ranking posts on Facebook. I had no idea that a simple like would add 1 point to the rank while a heart would add 5 points. If I had known that, I might have been more intentional about using hearts instead of thumbs-up to increase positivity. Just that simple bit of transparency, letting us know how our actions affect what we see and how they affect the algorithm, could go a long way.
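The reaction-weighted ranking Brian describes can be made concrete in a short sketch. The two weights (like = 1, heart = 5) come from the article as discussed above; the function name, the weight table, and the handling of unknown reaction types are hypothetical illustrations, not Facebook’s actual code.

```python
# A sketch of reaction-weighted post ranking: each reaction type
# contributes a different number of points to a post's score.
REACTION_WEIGHTS = {
    "like": 1,   # a plain thumbs-up counts 1 point
    "heart": 5,  # emoji reactions reportedly counted 5 points
}

def engagement_score(reactions):
    """Sum weighted reaction counts for one post.

    `reactions` maps reaction type -> count; unknown types score 0.
    """
    return sum(REACTION_WEIGHTS.get(kind, 0) * count
               for kind, count in reactions.items())
```

Under this weighting, a post with 10 likes and 3 hearts (10 + 15 = 25 points) would outrank a post with 20 likes alone (20 points), which is exactly why knowing the weights could change how intentionally users react.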

Davi: The challenge with transparency is trade secrets. Facebook’s algorithms are an example of a trade secret. Companies aren’t going to open up on their own, but what if we could provide them with support and protection while at the same time requiring this transparency? Technology companies tend to be very protective of their intellectual property; that is their gold, the only thing they have. This is a point where laws could really help both sides. Legislation that fosters transparency while still safeguarding trade secrets would go a long way.

Maggie: Companies are very protective of this information. And I would gamble that they don’t know how it works themselves, because developers do not document things well. So people inside might not know either.

Frantisek: I’m from Europe, and as far as I can tell, regulation here at the national level, or directly from the EU, has a lot to do with taxation. They are trying to figure out how to tax big companies like Google, Amazon, and Facebook. It’s an attempt to make sure the money doesn’t sit somewhere else but actually gets taxed and paid in the EU. This is a big issue right now.

In relation to theology, this issue of regulation is connected with the issue of authority. We are also dealing with authority in theology and in all churches. Who is deciding what is useful and what isn’t?

Now the question is, what is the tradition of social networks? As we talk about Facebook algorithms, there is a tradition. Can we always speak of tradition? How is this tradition shaped?              

From the perspective of the user, I think there might be an issue with anonymous accounts, because the problem with social media is that you can make up a nickname, make a mess, and get away with anything, or at least that is what people think. Social networks are just an extension of normal life; as a Christian, I try to act the same way in cyberspace as I do in normal life. Jesus’s commandment is to treat your neighbors well.
