Social Unrest, AI Chefs and Keeping Big Tech in line

This year we are starting something new. Some of you may be aware that we keep a repository of articles relevant to the topics we discuss in the portal, such as AI ethics, AI for good, culture & entertainment, and imagination (theology). We would like to take a step further and publish a monthly recap of the most important news in these areas. We are constantly tracking these developments and would like to curate a summary for your edification. This is our first one, so I ask you to be patient with us as we figure out formatting.

Can you believe it?

Harvesting data from prayers: Data is everywhere and companies are finding more and more uses for it. As the practice of data harvesting becomes commonplace, nothing is sacred anymore. Not even religion is safe, as pray.com is now collecting user data and sometimes sharing it with other companies like Meta. As the market for religious apps heats up, investors are flocking to back new ventures. This probably means a future of answered prayers, not by God but by Amazon.

Read more here: BuzzFeed.

Predicting another Jan 6th: What if we could predict and effectively address social unrest before it becomes destructive? This is the promise of new algorithms that help predict the next social unrest event. Coupcast, developed by the University of Central Florida, uses AI and machine learning to predict civil unrest and electoral violence. Regardless of its accuracy, using ML in this arena raises many ethical questions. Is predicting social unrest a cover for suppressing it? Who gets to decide whether social unrest is legitimate or not? Hence we are left with many more questions but little guidance at this moment.

Read more here: Washington Post

IRS is looking for your selfies: The IRS using facial recognition to identify taxpayers: opportunity or invitation to disaster? You tell me. Either way, the government quietly launched the initiative, requiring individuals to sign up with a facial recognition company if they want to check the status of their filing. Needless to say, this move was not well received by civil liberty advocates. In the past, we dove into the ethical challenges of this growing AI practice.

To read more click here: CBS news

Meta announces a new AI supercomputer: The company will launch a powerful new AI supercomputer in Q1. This is another sign that Meta refuses to listen to its critics, marching on toward its own techno-optimistic vision of the future – one in which it makes billions of dollars, of course. What is not clear is how this new computer will enhance the company’s ability to create worlds in the metaverse. Game changer or window-dressing? Only time will tell.

To read more click here: Venture Beat

AI Outside the Valley

While our attention is on the next move coming from Silicon Valley, a lot is happening in AI and other emerging technologies throughout the world. I would propose that this is actually where the future of these technologies lies. Here is a short selection of related updates from around the globe.

Photo by Hitesh Choudhary on Unsplash

Digital Surveillance in South Asia: As activists and dissidents move their activity online, so does their repression. In this interesting article, Antonia Timmerman outlines 5 main ways authoritarian regimes are using cyber tools to suppress dissent.

To read more click here: Rest of the World

Using AI for health? You better be in a rich country: As we have discussed in previous blogs, AI algorithms are only as good as the data we feed them. Take eye disease: because most available images come from Europe, the US, and China, researchers worry the algorithms will not be able to detect problems in under-represented groups. This example highlights that a true democratization of AI must first include an expansion of data sources.

To read more click here: Wired

US companies fighting for Latin American talent: Not all is bad news for the developing world. As the search for tech talent in the developed centers comes up empty, many are turning to overlooked areas. Latin American developers are currently in high demand, driving wages up but also creating problems for local companies that are unable to compete with foreign recruiters.

To read more click here: Rest of the World

Global Race for AI Regulation Marches On


The window for new regulation in the US Congress may be closing as mid-term elections approach. This will ensure the country remains behind global efforts to rein in Big Tech’s growing market power and mounting abuses.

As governments fail to take action or act slowly, some are thinking about a different route. Could self-regulation be the answer? With that in mind, leading tech companies are joining forces to come up with rules for the metaverse as the technology unfolds. Will that be enough?

Certainly not, if you ask the Chinese government. The Asian super-power released the first regulations targeting deepfakes. With this unprecedented move, China leads the way as the first government to address this growing concern. Could this be a blueprint for other countries?

Finally, EU fines for violations of GDPR hit a staggering €1.2 billion. Amazon alone was slapped with an $850 million penalty for its poor handling of customer data. While this is welcome news, one cannot assume it will lead to a change in behavior. Given mounting profit margins, Big Tech may see these fines not as a deterrent but simply as a cost of doing business in Europe. We certainly hope not but would be naive not to consider this possibility.

Cool Stuff

NASA’s latest and largest-ever telescope reached its final destination. The James Webb Space Telescope is now ready to start collecting data. Astrophysicists and space geeks (like myself) are excited about the possibility of seeing well into the cosmic past. The potential for new discoveries and new knowledge is endless.

To read more click here: Nature

Chef AI, coming to a kitchen near you. In an interesting application, chefs are using AI to tinker with and improve their recipes. The results have been delicious. Driven in part by a trend away from animal protein, chefs need to get more creative, and AI is here to help.

To read more click here: BBC

That’s it. This is our update for January. Many blessings and see you next month!

Kora, our new addition to the family, says hi and thank you for reading.

Painting a Global View of AI for Good: Part 2

This blog continues the summary of our AITAB meeting in November. Given the diverse group of voices, we were able to cover a lot of ground in the area of AI for good. In the first part, I introduced the 3 main trends of AI for good: democratization of AI skills, green AI, and AI justice. In this blog, we cover examples of AI for good in industry and academia, along with a global perspective from Eastern Europe. Our board members spoke from experience and also listed some great resources for anyone interested in going deeper into the field.

AI for Good in Industry

Davi: Another way AI and machine learning are helping in sustainability is by improving companies’ consumption of non-renewables. For example, one of the largest expenses of a cruise line company is fuel. Mega-ships require untold amounts of fuel to move across oceans around the globe. And in maritime shipping, the exact same route may require differing amounts of fuel due to the many variables that impact consumption, such as the weight of the ship, seasonal and unexpected currents, and different weather patterns.

AI and machine learning have expanded the capacity to calculate, with never-seen-before precision, the amount of fuel needed for such mega-ships to safely complete each of their routes in real time. This newfound capability is not only good for these companies’ bottom lines but also helps them preserve the environment by diminishing emissions.

Elias: That’s a great point. A recent study by PricewaterhouseCoopers estimates that AI applications in transportation can reduce greenhouse gas emissions by as much as 1.5% globally, so this is definitely an important trend to track.

Photo by Christian Lue on Unsplash

A Report from Eastern Europe   

Frantisek: I tried to investigate and revise my knowledge in the three areas Elias proposed. Regarding the first topic, democratization of AI skills, I think from the perspective of Prague and the Czech Republic, we are at the crossroads between Eastern and Western Europe. There are initiatives that focus on AI education and popularization and issues related to that. I would like to point out one specific initiative, Prague AI, a cooperation of academic institutions and private companies.

This kind of initiative is more technological, and they are just beginning to grasp the idea that they need ethical input from people like philosophers. Yet, they are not inviting theologians to the table. I guess we need to prove our value before we are “invited to the club”.

With that said, Prague AI wants to transform the city into a European center for AI. They have good resources for that, both human talent and institutional support. So, I wouldn’t be surprised if they achieve this goal, and I wish them all the best. My research group aims at connecting with them too. But first we need to establish ourselves a bit better within the context of our university.

On another front, we recently established contact with the Ukrainian Catholic University, which aims to open an interdisciplinary program for technology and theology. However, we do not know yet how far along they are with this plan. We intend to learn more, since I am in the process of scheduling an in-person meeting with the dean of their Theological Faculty. It has not yet been possible due to the pandemic. We are very much interested in this cooperation.

We also aspire to establish a conversation with the Dicastery for Integral Human Development and other Vatican-related organizations in Rome, where issues of new technologies and AI receive great attention, especially in relation to ethics and Catholic Social Teaching. In December 2021, one of my team members went to Rome to start conversations leading towards that aim.

Photo by Guillaume Périgois on Unsplash

In summary, here in Central and Eastern Europe democratization of AI is more focused on education and popularization. People are getting acquainted with the issue.

Regarding sustainable AI, we are following in the footsteps of the European Commission. One of the European commissioners who shaped this agenda is from the Czech Republic. And maybe because of that, the European Commission sponsored a big conference in September that was in large part focused on green AI. The contribution about the role of AI in processing plastic materials was especially interesting because it has great potential for green AI implementation.

The European Commission introduced a plan for the third decade of this century. It’s called the Digital Decade. It includes objectives like the digitalization of public services, the digital economy, and the growth of digital literacy among citizens, with large support for the field of AI.

In Europe, AI justice is a big issue. There is hope and a lot of potential for AI to contribute to the effectiveness and quality of judicial procedures. Yet there is an equivalent concern about the fundamental rights of individuals. I’m not very well acquainted with these issues, but it is an important topic here in Europe.

AI for Good in Academia

Photo by Vadim Sherbakov on Unsplash

Scott: I’m a professor of physics at Belmont University. I started working with machine learning around 2013/2014, developing audio signal processing applications. I got involved in an AI ethics grant in 2017 and went to Oxford for a couple of summers.

My background long ago was astrophysics, but I eventually became focused on machine learning. I split my time between doing technical work and doing philosophical and ethical thinking. I recently taught a general education undergraduate class that integrated machine learning and ethics. We talked about how machine learning and algorithms work while also discussing various ethical principles and players in the field.

Then the university requested I teach a more advanced, coding-focused course for upper-level students. This fall I’m teaching AI, deep learning, and ethics, and it’s kicking my butt because I am writing a lot of the lessons from scratch. One of the things I’m doing in this course is integrating a lot of material from the open-source community and from free public machine learning and deep learning education. There’s Google, Facebook, and then there’s everybody else.

So I’ve taken a lot of classes online. I’m pretty involved with the Fast AI community of developers and their ancillary groups, like Hugging Face, for example. It’s a startup but also a community. This makes me think that, in terms of democratization, in addition to proliferation around the world, there’s also a proliferation of education among everybody who is not at one of these big tech firms.

Democratization of AI and Open Source

I think a couple of big things that come to mind are open-source communities that are doing their own work, like EleutherAI. They released their own GPT model that they trained. It’s a sort of grassroots community group that is loosely affiliated but managed to pull this off through people donating their time and expertise.

Photo by Shahadat Rahman on Unsplash

One of the things they teach in Fast AI is a lot about transfer learning. Instead of training a giant model from scratch, we take an existing model and fine-tune it. That relates to sustainability as well. There are ecological concerns about the power consumption needed to train language models. An example would be Megatron-Turing Natural Language Generation (MT-NLG) from Microsoft, a gigantic language model.

With transfer learning, we can start from a pretrained model’s weights, which doesn’t require much power. This allows people all over the globe to run these models with few computational resources. The idea is to take ivory-tower deep learning research and apply it to other things. Of course, one of the questions people think about is what we are inheriting when we grab a big model and then fine-tune it. Nobody really knows how much of that latent structure stays intact after the fine-tuning.
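To make the idea concrete, here is a minimal sketch of transfer learning with the Hugging Face libraries mentioned above. It is illustrative only: the checkpoint, dataset, and training settings are assumptions, not a recipe from the discussion.

```python
# Minimal transfer-learning sketch: fine-tune a small pretrained model
# instead of training one from scratch. Checkpoint and dataset are illustrative.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

checkpoint = "distilbert-base-uncased"  # small pretrained model
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

dataset = load_dataset("imdb")  # any labeled text dataset would do

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length")

tokenized = dataset.map(tokenize, batched=True)

# Fine-tuning a small subset for one epoch needs a tiny fraction of the
# compute (and power) that pretraining the model required.
args = TrainingArguments(output_dir="finetuned", num_train_epochs=1,
                         per_device_train_batch_size=16)
trainer = Trainer(model=model, args=args,
                  train_dataset=tokenized["train"].shuffle(seed=42).select(range(2000)))
trainer.train()
```

The open question still applies: after a run like this, no one can say precisely how much of the pretrained model’s latent structure survives the fine-tuning.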

It’s an interesting and accessible area, considering how many people, myself included, post free educational content online. You can take free courses and read free blog posts to learn about machine learning, developer tools, and ethics as well. The open-source movement is a nice microcosm of the democratization of content that relates to both AI ethics and sustainable AI.

Photo by niko photos on Unsplash

Elias: Thank you, Scott. I want to seize on that to make a point. Open source in the tech world is a great example of the mustard seed technology idea. It starts through grassroots efforts where many donate their time to create amazing things. That is where I think technology culture is teaching theology a lesson by actualizing the gift economy. In the everyday economy, we pay companies and they focus on profit. It is highly transactional and calculating. Here you have an alternative economy where an army of volunteers is creating things for free and inviting anyone to take them as needed. They build it simply for the pleasure of building it. It’s a great example.

Scott: I’m also starting to do some work on algorithmic auditing, and I just found this kit from a group called Data Science for Social Good. Other people may find it as interesting as I do.

Painting a Global View of AI for Good: Part I

In early November, AITAB (AI Theology Advisory Board) met for another fruitful conversation. As in our previous meeting, the dynamic interaction between our illustrious members took the dialogue to places I did not anticipate. In this first part, we set up the dialogue by framing the key issues in AI for good. We then move to a brief report on how this is playing out in East Asia. In a conversation often dominated by Silicon Valley perspectives, it is refreshing to take a glimpse at less-known stories of how AI technologies are reshaping our world.

Defining AI for Good

Elias: Let me say a few words to start the discussion. This is different from our other discussions, where we focused on ethics. In those instances, we were reflecting on what was “wrong” with technology and AI. Today, I want to flip the script and focus more on the positive, what I call “AI for good”. Good theology starts with imagination. Today is going to be more of an exercise in imagination, noticing what’s happening that doesn’t necessarily make the news.

More specifically, there are three main areas where I see global AI for good starting to take shape. The first is the democratization of AI skills – spreading technological knowledge to underrepresented communities. This is a very important area since, as we discussed before, people make AI in their own image. If we don’t widen the group, we will keep getting the same type of AI. A great example is Data Science Nigeria. Just yesterday I was a speaker in one of their bootcamps. It’s very encouraging to see young men and women from Nigeria and other African countries getting involved in data science. It started as the vision of two data scientists who want to train 10 million data scientists in Nigeria over the next 10 years. It’s a bold goal, and I sure pray they achieve it.

The second topic is Green AI, or Sustainable AI: how AI can help us become more sustainable. One example is using computer vision to identify illegal fires in the Amazon – using AI to effect change with an eye on sustainability. And the last one is AI justice. In the same way AI is creating bias, AI tools can be used to identify and call out this bias. That is the work of organizations like the Algorithmic Justice League, led by Joy Buolamwini. That’s also an area that is growing. These three areas cover the main themes of global AI for good.

Global AI for Good
Photo by Bruno Melo on Unsplash

Re-Framing Technology

Let me frame them within a biblical context. When we talk about technology, we usually mean Big Tech coming out of Silicon Valley. As an alternative, I want to introduce a different concept: mustard seed technology. In the gospels, Jesus speaks of the kingdom of God being like a mustard seed. Though it is one of the smallest seeds, it becomes a big tree where birds can come and rest in its shade.

I love this idea of grassroots technology, either being developed or being deployed to provide for others. Just think of farmers in Kenya using their phones to make payments and do other things they were not able to do before. Those are the stories I wanted to think about today. I wanted to start thinking geographically. What does global AI for good look like in different places of the world?

Photo by Akson on Unsplash

AI for Good in East Asia

Levi: Here in East Asia, the turning point came in 2016 when DeepMind’s AlphaGo (a Google AI system) beat Lee Sedol in a game of Go. It created a very interesting push in South Korea and China to rapidly advance and develop AI infrastructures. I’m involved with a group on AI social and ethical concerns focused on Asia. The group has nine different scholars from 6 different Asian countries. One of the things we are going to discuss soon is an MIT report based on interviews with several Asian business owners about the direction of AI. This report is 2 years old, but it’s interesting to see how limited the state of AI in China was then. Now they are one of the world leaders in AI development.

There is a major push in this part of the world. Asia across the board was late to the industrial game, except for Japan. As countries like South Korea and China have massively industrialized in recent decades, they see AI as a way to push into the future. This opens a lot of questions, like the ones about democratization and justice, that need to be addressed. But one of the interesting things is that Asian countries are more interested in pushing towards AI regulation than the USA or European countries. There is also this recognition of wanting to be the best in advanced technology but also the best in “getting it right”.

Where that’s going to go is hard to say. We do know that in China, the government directs most AI development, so the question of democratization may not be the question at hand. South Korea allocated billions of won to developing AI around the same time. It will likely engage in more democratization than China.

It is interesting to see how justice issues play out, like how facial recognition fails to recognize people who aren’t white men. When you’re training this tech on Chinese data sets, you have a much larger data set – a billion and a half people rather than 350 million (in the US) – which opens the possibility of reducing these biases and offers great potential for global AI for good.

There is also the problem of natural language processing. GPT-3 recently came out and, just like GPT-2, is based on English web pages. This means there is bias from the English-speaking world coded into those AI systems. But if you start training those same systems on Chinese, Korean, Japanese, or Hindi language pages, you are going to end up with different frameworks. The bigger question will be: is there a way to put these in dialogue? I think this is a much more complicated question. Because there is so much development going on in this part of the world, it opens up the recognition that many of the biases encoded in Western development of AI will not be the same in the rest of the world.

Conclusion

In this first part, we introduced the discussion on a global view of AI for good. It includes three main categories: democratizing AI skills, sustainable AI, and AI justice. We then framed it within a mustard seed technology perspective. That is, we focus on the margins as opposed to the geo-centers of technological power. We are less interested in Silicon Valley and more interested in what is happening on the street corners of global cities.

Why ‘Don’t Look Up’ Falls Flat on Climate Change

A while back, I noticed “Don’t Look Up” at the top of the Netflix rankings. Considering the star-studded cast, I was excited to watch the comedy with my wife. I could not have been more disappointed. The long-winded satire missed many opportunities, accomplishing little beyond repeating Hollywood caricatures of the last president and his supporters. With that said, this is not the first movie that I did not like. What surprised me, and made me make an exception to write about a movie I disliked, was the passionate reaction to my lone FB comment. More importantly, what struck me was how many respondents saw it as a good metaphor for the climate change crisis.

In this blog, I contend the exact opposite: the movie did a great disservice to raising awareness and effecting environmental change. It did so not just because of its flat jokes but because it framed the issue wrongly, only serving to confirm the prejudices against Hollywood activism – namely, that it is shallow, misguided, and most often ineffective. In short, ‘Don’t Look Up’ misses the point on climate change.

Before you tune out thinking you were trapped into reading a climate denier diatribe, let me introduce myself. I have written before here about the importance of making the environment our top priority. My commitment goes beyond writing. Our household composts nearly 80-90% of our non-animal food waste, returning it to the earth. I drive a plug-in hybrid, and solar panels will soon be placed on our rooftop.

I don’t say this to brag but only to make the point that there can be disagreement even among those who support bold climate action. This is not a binary world, and I hope by now you can slow down and read what I have to say. I write this not because I don’t care about climate change but precisely because I do.

Trailer from YouTube

Framing the Issue Wrongly

Now that we have the introductions out of the way, let me present the central point. Using an analogy of a cataclysmic disaster 6 months away to convince people about climate change misses the mark because it reduces the crisis to a one-time event. This is hardly what is happening. Our climate crisis is not a premonition of an upcoming doomsday. Instead, it is a complex and gradual problem whose ramifications we hardly understand. That does not mean it is not serious, just that real change requires long-term planning and commitment.

Don't Look Up poster

If anything, the movie exposed America’s inability to inspire grand ideas and engage in long-term plans. The problem with climate denial is not just that it ignores the facts but also that it demonstrates fatally selfish short-termism. We are simply unable to think beyond a 4-year election cycle or even the next year. Instead of working towards long-term plans, we try to reduce the problem to one cataclysmic event through cheap comedy that only feeds political polarization.

What about urgency? It is true that the window is closing for us to meet UN temperature increase goals. In that sense, there is a parallel with an impending disaster. With that said, while the urgency is real, addressing it is a lot more complex than shooting a meteor off-course. Hence my concern: sounding a general alarm and labeling anyone who ignores it an idiot is not very productive.

Top-down vs Grassroots Change

According to ‘Don’t Look Up’, while climate denial is a generalized problem, it is particularly acute among Silicon Valley and the political elite. The movie takes a light jab at the media, which is rather ironic given who is talking. It also critiques recent billionaires’ efforts to reach space as a glorified act of escapism.

Not to say that the criticism here is unwarranted. I must admit that Meryl Streep as a Trump-like character had her funny moments. The memory of last year’s stupidity and cruel incompetence is still vivid. Almost too real to even be funny. The Tech Tycoon character also had his moments, constantly looking for ways to profit from earth’s misfortunes. This is not too far from Big Tech’s mentality of technologizing their way out of any problem. That is, they are constantly seeking to fit a technological hammer to problems that require a scalpel.

Photo by Lina Trochez on Unsplash

With that said, the movie again misses the point. The change we need to address climate change must start at the grassroots and then make its way to the top. If we continue to look to the centers of power for solutions, we will be in bad shape. Elon Musk made the electric car cool. That is progress, but it is a bit disheartening that it took sleek design and neighbor envy to get people interested in this technology. An electric future powered by Tesla may be better than the one offered by other carmakers, but that is still short of the change we need.

As long as American suburbs lie undisturbed with their gigantic SUVs spewing pollution in school car lines, we have a long way to go. The change needed is cultural. We need something that goes deeper than “scaring people” into doing good things. We need instead to articulate an attractive vision that will compel large segments of society to commit to sustained, long-term change.

Conclusion

You may say that I am taking this movie too seriously. Comedies are not meant to be political manifestos and will often get a pass on how they accomplish their goals. That may very well be the case. My goal here is not to change your mind about the movie but instead to use this cultural phenomenon as a way to open up a wider conversation about our current predicament.

While our environmental crisis is dire, we need a bigger vision of flourishing to address it. It is not about an impending doom but a warning that we need to change our relationship with our planet. Instead of focusing on those who cannot see it yet, why not show them a vision of flourishing for the planet that they can get behind?

The work for the flourishing of all life requires a long-range view so we can engage in the hard work ahead of us. If all this movie does is bring the conversation back to this issue, then that’s progress. In that sense, ‘Don’t Look Up’ may not be a complete loss for the cause of addressing climate change. Even if it misses the point, it hopefully makes people think.

And of course, watch out for the Bronteroc!

How Knight Rider Predicts the Future of AI-Enabled Autonomous Cars

The automobile industry is about to experience transformative disruption as traditional carmakers respond to the Tesla challenge. The battle is not just about moving from combustion to electric; it extends to the whole concept of motorized mobility. E-bikes, car-sharing, and autonomous driving are displacing the centrality of cars as not just a means of transportation but also a source of status and identity. The chip shortage also demonstrated cars’ growing reliance on computers, exposing the limits of growth as cars become more and more computerized. In this world of uncertainties, could Knight Rider shed some light on the future of autonomous cars?

As a kid, I dreamed of having a KITT (Knight Industries Two Thousand), a car that would work on my voice command. Little did I know that many of the traits in the show are now, nearly 40 years later, becoming a reality. To be sure, the show did not age well in some aspects (David Hasselhoff’s sense of fashion, for one, and the tendency to show men’s bare hairy chests). Yet, on the car tech, they actually hit a few home runs. In this blog, I want to outline some traits that came up in the show that turned out to be well aligned with the direction of car development today.

Lone Ranger Riding a Dark Horse

Before proceeding, let me give you a quick intro to Knight Rider’s plot. This 1980s series revolves around the relationship between Michael, the lone-ranger type out to save the world, and his car KITT. The car, a souped-up version of a Pontiac Trans Am, is an AI-equipped vehicle that can self-drive, talk back to its driver, search databases, remotely unlock doors, and much more.

In the intro episode, we learn that Michael got a second chance at life. After being shot in the face by criminals, he undergoes plastic surgery and receives a new identity. Furthermore, a wealthy man bequeaths him the supercar, along with the support of the team that built it. On his deathbed, the wealthy magnate tells Michael the truth that will drive his existence: “One man can make a difference.”

Taken from Wikipedia

Yes, the show does suffer from an excess of testosterone and a royal lack of melanin.

Yet, I contend that Michael is not the main character of the show. KITT, the thinking car steals the show with wit and humor. The interaction between the two is what makes an average sci-fi flick into a blockbuster success. You can’t help but fall in love with the car.

Knight Rider Autonomous Car Predictions

  • Auto-pilot – this is the precursor of autonomous driving. While systems to keep speed constant have been common for decades, true autonomous driving is a recent advance. This is now an option for new Tesla models (albeit at a hefty additional $10K) and is also partially present in other models through features such as auto parking, lane detection, and automatic braking. This feature was not hard to predict. Maybe the surprise here is not that it happened but how long it took to happen. I suspect large automakers got a little complacent about innovation as they sold expensive gas-guzzlers through most of the 90’s and early 00’s. It took an outsider to force them back into research.
  • Detecting drivers’ emotions – At one point in the debut episode, KITT informs Michael that his emotional state is altered and he might want to calm down. Michael responds angrily that the car would talk back to him. While this makes for a funny bit, it is also a good prediction of some recent facial recognition work using AI, which claims that a driver’s facial expression alone is sufficient to ascertain the individual’s emotional state (see the sketch after this list). There is a lot of controversy on this one, but the show deserves credit for its foresight. Maybe a car that tells you to “calm down” may be coming your way in the next few years.
Image extraction from Coded Bias
  • Remote manipulation of electronic devices – This is probably the most far-sighted trait in the show. Even today it is difficult to imagine automated cars that can interact with the world beyond their chassis. Yet, this too is in the realm of possibility. Emerging Internet of Things (IoT) technology will make this a reality. The idea is that devices, appliances, and even buildings can be connected through the Internet and run algorithms. It envisions a world where intelligence is not limited to living beings or phones but extends to all objects.
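As a thought experiment, here is a hypothetical sketch of the emotion-detection idea mentioned in the list above. The pipeline task is real, but the model checkpoint, emotion labels, and threshold are placeholders, not a real automotive system.

```python
# Hypothetical sketch: inferring a driver's emotional state from a
# driver-facing camera frame. The checkpoint name is a placeholder.
from transformers import pipeline

classifier = pipeline("image-classification",
                      model="some-org/facial-emotion-model")  # hypothetical model

def kitt_check_in(frame_path: str, alert_threshold: float = 0.8) -> str:
    # Classify the facial expression and, KITT-style, nudge the driver
    # when agitation is detected with high confidence.
    top = classifier(frame_path)[0]  # highest-scoring emotion label
    if top["label"] in {"angry", "stressed"} and top["score"] >= alert_threshold:
        return "Your emotional state is altered. You may want to calm down."
    return "Emotional state nominal."
```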

Conclusion

Science fiction works capture the imagination of the time in which they are written. They are never 100% accurate but can sometimes be surprisingly predictive. The show’s creators did not envision a future of flat screens and slick dashboard designs as we have today. On the other hand, they envisioned aspects of IoT and emotional AI that were unimaginable at the time. In this case, besides being entertainment, they also help create a vision of a future to come.

from Wikipedia

Reflecting on this 40-year-old show made me wonder about current sci-fi and its own visions of what is to come. How will coming generations look back at our present visions of their time? Will we reveal gross blind spots like Knight Rider’s white male individualism? Will we inspire future technology such as IoT?

This only highlights the importance of imagination in history-making. We build a future now inspired by our contemporary dreams. Hence, it is time we start asking more questions about our pictures of the future. How much do they reflect our time, and how much do they challenge us to become better humans? Even more importantly, do they promote the flourishing of life or an alternative cyber-punk society? Whether it is Knight Rider’s depiction of autonomous cars or Oxygen’s view of cryogenics, they reflect a vision of a future captured at a historical time.

Recreating our World Through Mustard Seed Technology

In this blog, I sketch the outline of an alternative story for technology. It starts with an ancient parable and how it has sprung into a multiplicity of meanings in our time. It is a story of grassroots change, power from below, organic growth, and life-giving transformation. Those are terms we often do not associate with technology. Yet, this is about to change as we introduce the concept of mustard seed technology.

Narratives are powerful meaning-making tools. They bring facts together and organize them in a compelling way, making them hard to refute. Most often, their influence goes beyond story-telling to truth-defining. That is, the reader becomes a passive, uncritical receiver of the message, mostly losing sight of the fact that it is only a narrative. The story often becomes an undisputed fact.

Looking Behind the Curtain

When it comes to technology, the situation is no different. The dominant narrative tells the story of Silicon Valley overlords who rule our world through their magical gadgets, constantly capturing our attention and our desires. Other times, it hinges on a Frankenstein perspective of creation turning against its creators, where machines conspire to reshape our world without our consent. While both narratives hold kernels of truth, their power is not in their accuracy but in their influence. That is, they are not important because they are true but because we believe in them.

Photo by Frederico Beccari on Unsplash

The role of the theologian, or the critical thinker if you will, is to expose and dismantle them. Yet, they do that not by direct criticism alone but also by offering compelling alternative narratives that connect the facts in new ways. Most dominant narratives around technology share a bent towards despair. It is most often the story of a power greater than us, a god if you will, imposing its will to our detriment. Hence, the best antidote is a narrative of hope that does not ignore the harms and dangers but weighs them properly against the vast opportunities human creativity has to offer the world.

The best challenge to algorithmic determinism is human flourishing against all odds.

That is what AI theology aspires to. As we seek to intersect technological narratives with ancient text, we look both for ethical considerations and for the lens of hope, both in short supply in the world of techno-capitalism and techno-authoritarianism. In their worship of profit, novelty, and order, these two dominant currents tell only part of the story. Yet, unfortunately, as they proclaim it through powerful loudspeakers, parallel stories are overshadowed.

A Biblical Parable

According to the Evangelists, Jesus liked to teach through parables. He knew the power of narrative. The gospels contain many examples of these short stories, often meant to make the hearer find meaning in their environment. They were surprisingly simple, memorable, and yet penetrating. Instead of being something to discern, the parable discerned the listeners as they encountered themselves in the story.

Photo by Mishaal Zahed on Unsplash

One of them is the seminal parable about the mustard seed. The Evangelist Matthew puts it this way:

He put before them another parable: “The kingdom of heaven is like a mustard seed that someone took and sowed in his field;  it is the smallest of all the seeds, but when it has grown it is the greatest of shrubs and becomes a tree, so that the birds of the air come and make nests in its branches.

Matthew 13:31-32

From this short passage, we can gather two main paths of meaning. The first is of small beginnings becoming robust and large over time. It is not just about the fast growth of a startup but more about a movement that takes time to take hold and eventually becomes an undisputed reality that no one can deny.

The other path of meaning is one of function. Once the tree is grown, it is not there simply to be admired and revered. Instead, it is there to provide shelter for other beings who do not have a home. It is a picture of hospitality, inclusion, and invitation. The small seed becomes a big tree that can now help small animals. It can provide shade from the sun and a safe place for rest.

A Contemporary Story from the Margins

Jesus was not talking directly about technology. We can scarcely claim to know the original meaning of the text. That is not the task here. It is instead an attempt to transpose the rich avenues of meaning from the text into our current age and, in turn, build a new narrative about the development of technology in our time. A story about how technology is emerging from the margins and solving problems in a life-giving way, rather than in a flashy but profitable manner. That is what I would define as mustard seed technology.

What does that look like in concrete examples? From the great continent of Africa, I can tell of at least two. One is the story of a boy who built a wind generator to pump water for his village. With limited access to books and parts, and no one to teach him, he organized an effort to build the generator using an old bike motor. The Netflix movie The Boy Who Harnessed the Wind tells this story and is worth your time. Another example is how Data Science Nigeria is training millions of data scientists in Africa. Through hackathons, boot camps, and online courses, the organization is at the forefront of AI skills democratization efforts.

Beyond these two examples, consider the power unleashed through the creative economy. As billions get access to free content on YouTube and other video platforms, knowledge can be transferred a lot faster than before. Many can learn new skills from the comfort of their home. Others can share their art and crafts and sell them in a global cyber marketplace. Entrepreneurship is flourishing at the margins as the world is becoming more connected.

Conclusion

These examples of mustard seed technology tell a different story. They speak of a subversive undercurrent of goodness in history that will not quiet down even in the midst of despair, growing inequality, and polarization. It is the story of mustard seed technology springing up in the margins of our global home, growing into robust trees of creativity and economic empowerment.

Do you have the eyes to see and the courage to testify to their truth? When you consider technology, I invite you to keep the narratives of despair at bay. For a moment, start looking for the mustard seeds happening all around you. As you find them, they will grow into trees of hope and encouragement in your heart.

Finding Hope in a Sea of Skepticism over Facebook Algorithms

The previous blog summarized the first part of our discussion on Facebook algorithms and how they can become accountable to users. This blog summarizes the second part, where we took a look at the potential of and reasons for hope in this technology. While the temptation to misuse algorithms for profit maximization will always exist, can these technologies also work for the good? Here are some thoughts in this direction.


Elias: I never know where the discussion is going to go, but I’m loving this. I loved the question about tradition. Social media and Facebook are part of a new tradition that emerged out of Silicon Valley. But I would say that they are part of the broader tradition emerging out of cyberspace (the Internet), which is now roughly 25 years old. I would also mention Transhumanism as one of the traditions influencing Big Tech and many of its leaders. The mix of all of them forms a type of Techno Capitalism that is slowly conquering the world.

Levi: This reminds me of a post Jennifer shared on the Facebook group a few months ago. It was a fascinating video from a Toronto TV station that looked 20 years back and showed an interview with a couple of men talking excitedly about the internet. The station then interviewed the same men today. Considering how many things have changed, they were very skeptical. There was so much optimism, and then everything became a sort of capitalist money-grabbing goal. I taught business ethics for 6 years in the Bay Area. One of the main things I taught my students is the questions we need to ask when looking at a company. What are its mission and values? What does the company say it upholds? These questions tell you a lot about what the company’s tradition is.

The second thing is the actual corporate culture. One of the projects I would have the students do is present, every week, some ethical problem in the news related to some business. It’s never hard to find topics, which is depressing. We found a lot of companies that have had really terrible corporate cultures. Some people were incentivized from the top to do unethical things. When meeting a certain monetary goal is your standard, everything else becomes subordinated to that.

Milton Friedman said 50 years ago that the social responsibility of a business is to increase its profit. According to Friedman, anything we do legally to obtain this is acceptable. If the goal is simply profit, then the legal aspect is subordinate to the goal, and we can change it by changing laws in our favor. The challenge is that this focus has to come from the top. In a company like Facebook, Zuckerberg has the majority of shares, and the board of directors are people he has hand-picked. So there is very little actual oversight.

Within the question about tradition, Facebook has made it very clear that its tradition is sharing. That means sharing your personal information with other people. We would want to do that to some extent, but it is also sharing your data with third-party companies that buy the data to make money. If profit is the goal, everything becomes subordinated to that. Whether the sharing is positive or negative matters less than whether it is being shared and making money.

Photo by Mae Dulay on Unsplash

Glimpses of Hope in a Sea of Skepticism 

Elias: I would like to invite Micah, president of the Christian Transhumanist Association, to share some thoughts on this topic. We have extensively identified the ethical challenges in this area. What does Christian Transhumanism have to say, and are there any reasons for hope?

Micah: On the challenge of finding hope and optimism, I was thinking that if we compare this to the Christian tradition and the development of the creeds, you see some people looking at this emergence and saying that it is a radical, hopeful, and optimistic option in a world of pessimism. If you think about ideas like resurrection and other topics like this, it is a radical optimism about what will happen to the created order.

The problem you run into (even in the New Testament) is a point of disappointed expectations. People ask, “When is he coming? Where is the transformation? When will all this be made right?” So the apostles and the Christian community have to come in and explain the process of waiting: it will take a while, but we can’t lose hope. So a good Christian tradition is to maintain optimism and hope in the face of disappointed expectations and failures as a community. In the midst of bad news, they stayed anchored on the future good news.

There is a lesson in this tradition for looking at the optimism of the early internet community and seeing how people maintain that over time. You have to have a long-term view that figures out a way to redemptively take into account the huge hurdles and downfalls you encounter along the way. This is what the Christian and theological perspectives have to offer. I’ve heard from influential people in Silicon Valley that you can’t maintain that kind of perspective from a secular angle; if you only see from a secular angle, you will be sorely disappointed. Bringing in the theological perspective allows you to understand that the ups and downs are a part of the process, so you have to engage redemptively to aim for something else on the other side.

Taken from Unsplash.com

Explainability and Global Differences

Micah: From a technical perspective, I want to raise the prospect of explainable AI and algorithms. I liked what Maggie pointed out about the ecosystems where the developers don’t actually understand what’s going on; that’s also been my experience. It’s what we’ve been baking into our algorithms, this lack of understanding of what is actually happening. I think a lot of people have the hope that we can make our algorithms self-explanatory, and I definitely think we can make algorithms that explain themselves. But from a philosophical perspective, I think we can never fully trust those explanations, because even we can’t fully understand our own mental processes. And even if we could explain them and trust them perfectly, there would still be unintended consequences.

I believe we need to move the focus of the criteria. Instead of seeking the perfect algorithm, focus on the inputs and outputs of the algorithm. It has to move to a place of intentionality where we are continually revisiting and criticizing our intentions. How are we measuring the algorithm, and how are we feeding it information that shapes it? These are just some questions to shift the direction of our thinking.

Yvonne: You have shared very interesting ideas. I’ve been having some different thoughts on what I’ve been reading, in terms of regulation and how companies operate in one region versus another. I have a running group with some Chinese women. They often talk to me about how the rules in China are very strict towards social media companies. Even Facebook isn’t allowed to operate fully there. They have their own Chinese versions of social network companies.

Leadership plays a crucial role in what unfolds in a company and any kind of environment. When I join a company or a group, I can tell the atmosphere based on how the leadership operates. At a lot of big companies like Facebook, the leadership and decision-makers share the same mindset on profits. Unless regulation enforces morality and ethics, most companies will get away with whatever they want. That’s where we come in. I believe we, as Christians, can influence how things unfold and how things operate using our Christian perspective.

In the past year, we have all seen how useful technology can be. Even this group is a testimony to how, even across different time zones, we can have a meeting without having to buy plane tickets, which would be more expensive. I think technology has its upsides when applied correctly; the application defines whether it will be helpful or detrimental to society.

Brian: Responding to the first part of what Micah said, when we think about technology and its role, it can be easier if we think in terms of two vectors. One is a creative vector, where we can create value and good things. But at every step, there is the possibility for bias to creep in. I mean bias very broadly; it can be discrimination or simple mistakes that multiply over time. So there has to be a “healing” vector where bias is corrected. Once the healing vector is incorporated, the creative vector can be a leading force again. I believe that the healing vector has to start outside ourselves. The central thought of the Christian faith is that we can’t save ourselves; we require God’s intervention and grace. This grace moves through people and communities so that we can actively participate in it.

Elias: I think this also connects with the concept of co-creation: the partnership between humanity and God, embracing both our limitation (what some call sin) and our immense potential as divine image-bearers.

I look forward to our next discussion. Until then, blessings to all of you.


How Can Facebook Algorithms Be Accountable to Users?

Second AI Theology Advisory Board meeting: Part 1

In our past discussion, the theme of safeguarding human dignity came up as a central concern in the discussion of AI ethics. In our second meeting, I wanted us to dive deeper into a contemporary use case to see what themes would emerge. To that end, our dialogue centered on the WSJ’s recent exposé on Facebook’s unwillingness to address problems with its algorithms, even after internal research clearly identified the problems and proposed solutions. While this is a classic case of how the imperative of profit can cloud ethical considerations, it also highlights how algorithms can create self-reinforcing vicious cycles that were never intended in the first place.

Here is a summary of our time together:

Elias: Today we are going to focus on social media algorithms. The role of AI is directing the feed that everybody sees, and everybody sees something different. What I thought was interesting about the article is that in 2018 the company tried to make changes to improve the quality of engagement, but the changes turned out to do the exact opposite.

I experienced that firsthand. A while back, I noticed that the controversial and angry comments were the ones getting attention. So, sometimes I would poke a little at some members of our community who have more orthodox views of certain things, and that would get more engagement. In doing so, I also reinforced the vicious cycle.

That is where I want to center the discussion. There are so many issues we can discuss about Facebook algorithms. There is the technical side, there’s the legal side, and there’s also the philosophical side. At the end of the day, Facebook algorithms are a reflection of who we are, even if they are run by a few.

These are my initial thoughts. I was wondering if we could start with Davi on the legal side. Some of these findings highlight the need for regulation. The unfathomable power of social media companies can literally move elections. Davi, what are your thoughts on smart regulation? Since shutting these companies down isn’t the answer, what would safeguard human dignity and help put guardrails around companies like Facebook?

By Pixabay

Davi: At least in the US, the internet is a wild west, without real regulation or any type of check on big tech. Companies have tried to self-regulate, but they don’t have the incentive to really crack down on this. It’s always about profits; they talk the talk but don’t walk the walk. At the end of the day, the stock prices speak louder.

In the past, I’ve been approached by Facebook’s recruiters. In the package offered, the base compensation was relatively low compared to industry standards, but they offered life-changing stock rights. Hence, stock prices are key not only to management but to many employees as well.

Oligarchies do not allow serious regulation. As I researched data and privacy regulation, I came across the usual protections against discrimination, and most of the more serious cases are being brought to court through the Federal Trade Commission to protect consumers. Yet regulation is being done in a very piecemeal way. There are some task forces in Congress working on a regulatory framework. But this is Washington, where it’s really hard to get anything done.

Some states are trying to mimic what’s happening in Europe and bring the concept of human dignity into comprehensive privacy laws like Europe’s. We have comprehensive privacy laws in California, Virginia, and Colorado. And every day I review new legislation; last week it was Connecticut. They are all moving towards the European model. This fragmented approach is less than ideal.

Levi: I have a question for Davi. You mentioned European regulation. From what I understand, because the EU represents such a large constituency, when it shapes privacy policies it seems much easier for companies like FB to conform their overall strategy to EU policy instead of making a tailored policy just for EU users, or California users, etc. Do you see that happening, or is the intent to diversify according to different laws?

Davi: I think that isn’t happening at Facebook. We can use 2010 as an example, when the company launched a facial recognition tool that would recognize people in photos so you could tag them more easily, increasing interactions. There were a lot of privacy issues with this technology. They shut it down in Europe in 2011, and then in 2013 in the US. Then they relaunched it in the US. Europe has a big, influential authority, but that is not enough to buck financial interests.

As a lawyer, I think the best course of action would be for multinationals to base themselves on the strictest law and apply it to everyone the same way. That is the best option legal-wise, but companies can take different approaches.

Agile and Stock Options

Elias: Part of the issue with Facebook algorithms wasn’t about disparate impact per se. It was really about a systemic problem of increasing negativity. And one thing I thought about is how common sentiment analysis is right now. With natural language processing, you can take a text and the computer can say “this person is happy” or “this person is mad”. It’s mostly binary: negative and positive. What if we had, just like in the stock market, a safety switch for when the market value drops too fast? At a certain threshold, the switch shuts down the whole thing. So I wonder, why can’t we have a negativity shut-off switch? If the platform is just off the charts with negativity, why don’t we just shut it down? Shut down FB for 30 minutes and let people live their lives outside of the platform. That’s just one idea for addressing this problem.
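For the technically curious, here is a minimal sketch of what such a switch could look like, assuming an off-the-shelf sentiment classifier and some platform hook for pausing the feed; both are stand-ins, not a real Facebook API.

```python
# Sketch of a "negativity shut-off switch" modeled on stock-market
# circuit breakers. Classifier, threshold, and pause are illustrative.
from transformers import pipeline

sentiment = pipeline("sentiment-analysis")  # generic pretrained classifier

def negativity_ratio(posts: list[str]) -> float:
    # Fraction of posts in a recent window classified as negative.
    results = sentiment(posts)
    negatives = sum(1 for r in results if r["label"] == "NEGATIVE")
    return negatives / max(len(posts), 1)

def circuit_breaker(posts: list[str], threshold: float = 0.7) -> bool:
    # Trip the breaker when platform-wide negativity crosses the threshold,
    # analogous to halting trading after a steep market drop.
    if negativity_ratio(posts) >= threshold:
        print("Negativity threshold crossed: pause the feed for 30 minutes.")
        return True
    return False
```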

I want to invite members that are coming from a technical background. What are some technical solutions to this problem?

Maggie: I feel like a lot of it comes from some of the ingrained methodologies of agile gone wrong: that focus on short-term iteration, and the particular OKRs that came out of Google, where you are focusing on outlandish goals and just pushing for the short term. Echoing what Davi said on the stock options of big companies: when you look at the packages, it’s mostly about stock.

You have to get people thinking about the long term. But these people are also invested in these companies. Thinking long term isn’t the only answer, but I think a lot of the problems have to do with the short-term iteration built into the process.

Noreen: Building on what Maggie said, a lot of it is the stock, in the sense that they don't want to employ the number of people they would have to in order to really take care of this. Right now, AI is just not at a point where it can work by itself. It's a good tool, but turning it loose on its own isn't sufficient. If they really are going to get a handle on what's going on in the platform, they need to hire a lot more people to work hand in hand with artificial intelligence, using it as a tool, not as a substitute for the moderators who actually work with this material.

Maggie: I think Amazon would just blow up at that suggestion, because one of its core tenets is getting rid of humans.

But of course, AI doesn't need stock options; it's cheap. I think this is why they are going in this direction. This is an excellent example of something I've written about over and over: AI is a great tool but not a substitute for human beings. They would need to hire a bunch of people to work with the programs, tweak them, and oversee them. And that is what they don't want to do.
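
This tool-not-substitute argument maps onto a common engineering pattern: confidence-based routing, where the model acts only on clear-cut cases and escalates everything else to human moderators. Here is a minimal sketch of that pattern; the classifier, labels, and threshold are hypothetical placeholders, not any company's actual pipeline.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    label: str         # e.g. "ok" or "violates_policy"
    confidence: float  # model confidence in [0, 1]

def classify(post: str) -> Decision:
    """Toy stand-in for a real content-moderation model."""
    flagged = any(word in post.lower() for word in ("spam", "scam"))
    return Decision("violates_policy" if flagged else "ok",
                    0.95 if flagged else 0.60)

def moderate(post: str, review_queue: list, threshold: float = 0.90) -> str:
    """Act automatically only on high-confidence calls; everything
    ambiguous goes to a human reviewer instead of the machine."""
    decision = classify(post)
    if decision.confidence >= threshold:
        return decision.label          # AI handles the clear-cut cases
    review_queue.append(post)          # humans handle the rest
    return "pending_human_review"

queue: list = []
print(moderate("Act now, this is not a scam!", queue))   # violates_policy
print(moderate("Lovely discussion this morning", queue)) # pending_human_review
```

The design choice worth noticing is that hiring more moderators changes the threshold you can afford: more reviewers means the system can escalate more borderline cases instead of letting the model guess.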

From Geralt on Pixabay

Transparency and Trade Secrets

Brian: This reminds me of how Facebook's algorithms use humans, in a way, taking our participation as data to train themselves. However, there's no transparency that would help me participate in a more constructive way.

Thinking about the Wall Street Journal article Elias shared, about how Facebook ranks posts: I had no idea that a simple like counts for 1 point in the ranking while a heart counts for 5. If I had known that, I might have been more intentional about using hearts instead of thumbs-up to increase positivity. Just that simple bit of transparency, letting us know how our actions affect what we see and how they feed the algorithm, could go a long way.
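
For illustration, the weighting Brian describes fits in a few lines of code. The 1-point and 5-point weights come from the Wall Street Journal reporting he mentions; everything else, including the function and reaction names, is a hypothetical simplification of a far more complex, non-public ranking system.

```python
# Weights reported by the WSJ (like = 1, heart = 5); the rest is illustrative.
REACTION_WEIGHTS = {"like": 1, "heart": 5}

def engagement_score(reactions: dict) -> int:
    """Sum weighted reaction counts; unknown reaction types count as 0."""
    return sum(REACTION_WEIGHTS.get(kind, 0) * count
               for kind, count in reactions.items())

post_a = {"like": 100}  # 100 thumbs-up
post_b = {"heart": 25}  # 25 hearts

print(engagement_score(post_a))  # 100
print(engagement_score(post_b))  # 125 -> fewer, warmer reactions rank higher
```

Under this weighting, 25 hearts outrank 100 likes, which is exactly the lever Brian wishes users could see.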

Davi: The challenge with transparency is trade secrets. Facebook's algorithms are an example of a trade secret. Companies aren't going to open up on their own, but we could provide them with support and protection while at the same time requiring this transparency. Technology companies tend to be extremely protective of their intellectual property; it is their gold, the only thing they have. This is a point where laws could really help both sides: legislation that fosters transparency while still safeguarding trade secrets would go a long way.

Maggie: Companies are very protective of this information. And I would wager that they don't fully know how it works themselves, because developers do not document things well. So even insiders might not know.

Frantisek: I'm from Europe, and here regulation at the national level, or directly from the EU, has a lot to do with taxation. Governments are trying to figure out how to tax big companies like Google, Amazon, and Facebook. It's an attempt to keep the money from sitting somewhere else so that it actually gets taxed and paid in the EU. This is a big issue right now.

In relation to theology, this issue of regulation is connected with the issue of authority. We are also dealing with authority in theology and in all churches. Who is deciding what is useful and what isn’t?

Now the question is, what is the tradition of social networks? As we talk about Facebook algorithms, there is a tradition. Can we always speak of tradition? How is this tradition shaped?              

From the perspective of the user, I think there might be an issue with anonymous accounts, because the problem with social media is that you can make up a nickname, make a mess, and get away with anything; at least, that is what people think. Social networks are just an extension of normal life, so as a Christian I try to act the same way in cyberspace as I do in normal life. The commandment from Jesus is to treat your neighbor well.

China and the EU Jump Ahead of the US in the Race for Ethical AI

Given the crucial role AI technologies play in the defense industry, it is no secret that leading nations are seeking the upper hand. The US, China, and the EU have all put forth plans to guide and prioritize research in this area. At AI Theology, we are interested in a very different kind of race, one that is less about technological supremacy and more about ensuring the flourishing of life. We call it the race for ethical AI.

What are governments doing to minimize, contain, and limit the harm of AI applications? Looking at the leaders in this area, two show signs of progress while one is still sitting on the sidelines. Though through different methods, China and the EU have taken decisive action to start addressing the challenge. The US has been awfully quiet, even as most leading AI organizations are headquartered on its soil.

The EU Tackles Facial Recognition

Last week, the European Parliament passed a ground-breaking resolution curbing the use of AI for mass surveillance and for predictive policing based on behavioral data. The document explicitly calls out companies like Clearview for their controversial use of facial recognition in law enforcement, and it lists many examples in which this technology has erroneously targeted minorities. The legislative body also calls for greater transparency and human involvement in algorithmic decision-making.

Photo by Maksim Chernishev on Unsplash

While not legally binding, this is an important first step in regulating the use of computer vision in law enforcement. It is part of a bigger effort the EU is undertaking to draft regulations addressing multiple applications of AI. In this sense, the multi-state body becomes the pioneer in placing guardrails on AI use, possibly setting the standard for other countries to follow.

Even though ethical AI is not limited to regulation, government action can have a sweeping impact in curbing abuse and protecting the vulnerable. It also sends a strong signal to companies acting in this space that more accountability is on its way. This will likely force big tech and AI start-ups to take a hard look at how they develop products and deliver their services. In short, good legislation can be a catalyst for the type of change we need. In this way, the EU leaps forward in the race for ethical AI.

China Takes Steps towards Ethical AI

On the other side of the world, another AI leader put forth guidelines on the use of the technology. The document outlines principles for governing algorithms, protecting privacy, and giving users more autonomy over their data. Beyond its overall significance, it is notable that the guidelines include language about making the technology "people-oriented" and appeal to common values.

Photo by Timothée Gidenne on Unsplash

The guidelines for ethical AI are part of a broader effort to rein in big tech power within the world's most populous nation. Earlier this year, the government published a policy to better control recommendation algorithms on the Internet. This and other measures send a strong signal to China's budding digital sector that the government is watching and will hold it accountable. Such moves also contribute to a centralization of power in the government that many Western societies would not be comfortable with. In this case, however, they seem to align with the public good.

Regardless of how these guidelines will be implemented, it is notable that China is at the forefront in publishing them. It shows that Beijing takes the threat of AI misuse seriously, at least when the misuse is perpetrated by business enterprises.

Fragmented US Efforts

What about the North American AI leader? Unfortunately, to date, there is no sweeping national effort to address AI abuse in the US. This is not to say that nothing is happening. States like California and Illinois are working on legislation on data privacy and AI surveillance. Biden's chief science advisor recently called for an AI Bill of Rights. In a previous blog, I outlined US efforts to address bias in facial recognition as well.

Yet nothing concrete has happened at the national level. The best we got was a former Facebook employee's account of the company's reluctance to curb AI abuse. It made for great television, but no sweeping legislation followed.

If there is a race for ethical AI, the North American competitor is behind. If this trend continues, AI ethics will be at the mercy of large company boardrooms in the Yankee nation. Company boards are never free of conflicts of interest, as the next quarter's profit often takes precedence over human flourishing.

Self-regulation has not worked. It is time we move towards more active government intervention for the sake of the common good. This is a race the US cannot afford to sit out. It is time to hop on the track.

Placing Human Dignity at the Center of AI Ethics

In late August we had our kick-off Zoom meeting of the Advisory Board. This is the first of our monthly meetings in which we will explore the intersection of AI and spirituality. The idea is to gather scholars, professionals, and clergy to discuss this topic from a multi-faceted view. In this blog, we publish a short summary of our first conversation. The key theme that emerged was a concern for safeguarding and upholding human dignity as AI becomes embedded in growing spheres of our lives. This preoccupation must inhabit the center of all AI discussions and be the guiding principle for laws, business practices, and policies.

Question for Discussion: What, in your perspective, is the most pressing issue on AI ethics in the next three to five years? What keeps you up at night?

Brian Sigmon: The values from which AI is being developed, and their end goals. What is AI oriented for? Usually in the US, it’s oriented towards profit, not oriented to the common good or toward human flourishing. Until you change the fundamental orientation of AI’s development, you’re going to have problems.

AI is so pervasive in our lives that we cannot escape it. We don't always understand the logic behind it; it often operates beneath the surface, intertwined with many other issues. For example, when I go on social media, AI controls what I see on my feed. It does not optimize for making me a better person but instead maximizes clicks and revenue. That, to me, is the key issue.

Elias Kruger: Thank you, Brian. To add some color to that, since the pandemic, companies have increased their investment in AI. This in turn is creating a corporate AI race that will further ensure the encroachment of AI across multiple industries. How companies execute this AI strategy will deeply shape our lives, not just here in the US but globally.

Photo by Chris Montgomery on Unsplash

Frantisek Stech: Coming from Eastern Europe, I think one of the greatest issues is the abuse of AI by authoritarian, non-democratic regimes to control people. In other words, it is the relationship between AI-enabled control and human freedom. Another practical problem is that people are afraid of losing their jobs to AI-driven machines.

Elias Kruger: Thanks Frantisek, as you know we are aware of what is happening in China with the merging of AI and authoritarian governments. Can you tell us a little bit about your area of the world? Is AI more government-driven or more corporate?

Frantisek Stech: In the Czech Republic, we belong to the EU, and therefore to the West. So it is very much corporate-driven. Yet we are very close to our Eastern neighbors, and we are watching closely how things develop in Belarus and China especially, as they will inevitably impact our region of the world.

However, this does not mean we are free from danger here. There is the issue of election manipulation, which started with the Cambridge Analytica scandal and the controversies around the presidential elections in the US. Now we are approaching elections in the EU, so there is a lot of discussion about how AI will be used for manipulation. When people hear about AI, they often associate it with politics, and they may think they are already being manipulated if they buy a phone with facial recognition. We have to be cautious but not completely afraid.

Ben Day: I often ponder this question of how AI, or technology in general, relates to dignity and individual human flourishing. When we aggregate and manipulate data, we strip out individual human dignity, which is a Christian virtue, and begin to see people as compilations of manipulable data. It is really a threat to ontology, to our very sense of being. In effect, it is an assault on human dignity through AI.

Going further, I am interested in the question of how AI encroaches on our sense of identity. That is, algorithms govern my entire exposure to media and news. Not just that, but AI impacts our whole social eco-verse, online and offline. What does that have to do with the nature of my being?

I often say that I have a very low view of humanity. I don't think human beings are that great. And so I fear that AI can manipulate the worst parts of human nature. That is an encroachment on human dignity.

In the Episcopal Church, we believe that serving Christ is intimately connected with upholding the dignity of human beings. So, if we are turning a blind eye to human dignity being manipulated, then my Christian praxis compels me by moral obligation to do something about it.

Photo by Liv Merenberg on Unsplash

Elias Kruger: Can you give us a specific example of how this plays out?

Ben Day: Let me give you one example of how this affected my ministry. I removed myself from most social media in October 2016 because of what I was witnessing. I saw members of my church sparring on the internet, attacking each other's dignity and intellect over politicized issues. The vitriol was so pervasive that I encountered a moral dilemma. As a priest, it is my duty to deny the sacrament to those who are in unrepentant sin.

So I would face parishioners only hours after these online spars and wonder whether I should offer them the sacrament. I was facing this conundrum as a result of algorithms manipulating feeds to foster angry engagement because it leads to profit. It virtually puts the church at odds with how these companies pursue profit.

Levi Checketts: I lived in Berkeley for many years, and the cost of living there was really high. That was because a lot of people who worked in Silicon Valley or San Francisco were moving in. The influx of well-to-do professionals raised home prices in the area, forcing less fortunate existing residents to move out.

So, there is all this money going into AI. Of the five biggest companies by market cap, three are in Silicon Valley and two in the Seattle area. Tech professionals often lack full awareness of the impact their work is having on the rest of the world. For example, a few years back, a tech employee wrote an op-ed complaining about having to see "disgusting" homeless people on his way to work when he was paying so much for rent.

What I realized is that there is a massive disconnect between humanity and the people making decisions for companies that are larger than many countries' economies. My biggest concern is that the people in charge of and controlling AI have many blind spots: an inability to empathize with those who are suffering, or even to notice the realities of systems that breed oppression and poverty. To them, there is always a technical fix. Many lack the humility to listen to other perspectives, come mainly from male Asian and White backgrounds, and are often opposed to perspectives that challenge their work.

There have been high-profile cases recently, like Google firing a black female researcher because she spoke up about problems in the company. The question Ben raised about human dignity in AI is very pressing. If we want to address it, we need people from different backgrounds making decisions and working to develop these technologies.

Furthermore, if we define AI as a being that makes strictly rational decisions, what about people who do not fit that mold?

The key questions are: where do we locate this dignity, and how do we make sure AI doesn't run roughshod over humanity?

Davi Leitão: These were all great points that I was not thinking about before. Thank you for sharing this with us.

All of these are important questions, and they drive the need for regulation and laws that will steer profit-driven corporations onto the right path. Privacy and data security laws stand on a set of principles written by the OECD in 1980. These laws look to those principles and put them into practice. They are there to inform people and safeguard them from bias.

My question is: what are the blind spots in the FIPs (fair information principles), the places where they fail to account for the new issues technology has brought? The problem casts a wide net, but answering it can help guide a lot of the new laws to come. This is the only way to make companies care about human dignity.

Right now, there is a proliferation of state laws. But this brings another problem: customers in states that have privacy regulations can suffer discrimination from companies based in other states. Therefore, there is a need for a uniform federal set of principles and laws about privacy in the US. The inconsistency between state laws keeps lawyers in business but ultimately harms the average citizen.

Elias Kruger: Thanks for this perspective. I think it would be a good takeaway for the group to look for blind spots in these principles. AI is about algorithms and data, and data is fundamental: if we don't handle it correctly, we can't fix the result with algorithms.

My two cents is that when it comes to AI applications, the one that concerns me most is facial recognition for surveillance and law enforcement. I don't think there is any other application where a mistake can have such a devastating impact on the victim. When AI wrongly incriminates someone because an algorithm confused their face with the actual perpetrator's, the individual loses their freedom. There is no way to recover from that.

This application calls for immediate regulation that puts human dignity at the center of AI so we can prevent serious problems in the future.

Thanks everybody for your time.