Blog

Social Unrest, AI Chefs and Keeping Big Tech in line

This year we are starting something new. Some of you may be aware that we keep a repository of articles relevant to the topics we discuss in the portal, such as AI ethics, AI for good, culture & entertainment, and imagination (theology). We would like to take a step further and publish a monthly recap of the most important news in these areas. We are constantly tracking developments on these fronts and would like to curate a summary for your edification. This is our first one, so I ask you to be patient with us as we figure out formatting.

Can you believe it?

Harvesting data from prayers: Data is everywhere, and companies are finding more and more uses for it. As the practice of data harvesting becomes commonplace, nothing is sacred anymore. Not even religion is safe: pray.com is now collecting user data and sometimes sharing it with other companies like Meta. As the market for religious apps heats up, investors are flocking to back new ventures. This probably means a future of answered prayers, not by God but by Amazon.

Read more here: BuzzFeed.

Predicting another Jan 6th: What if we could predict and effectively address social unrest before it becomes destructive? This is the promise of new algorithms that help predict the next social unrest event. CoupCast, developed at the University of Central Florida, uses AI and machine learning to predict civil unrest and electoral violence. Regardless of its accuracy, using ML in this arena raises many ethical questions. Is predicting social unrest a cover for suppressing it? Who gets to decide whether social unrest is legitimate or not? We are left with many more questions but little guidance at this moment.

Read more here: WashingtonPost

The IRS is looking for your selfies: The IRS using facial recognition to identify taxpayers: opportunity or invitation to disaster? You tell me. Either way, the government quietly launched an initiative requiring individuals to sign up with a facial recognition company if they want to check the status of their filing. Needless to say, this move was not well received by civil liberties advocates. In the past, we dove into the ethical challenges of this growing AI practice.

To read more click here: CBS news

Meta announces a new AI supercomputer: The company will launch a powerful new AI supercomputer in Q1. This is another sign that Meta refuses to listen to its critics, marching on to its own techno-optimistic vision of the future, one in which it makes billions of dollars, of course. What is not clear is how this new computer will enhance the company's ability to create worlds in the metaverse. Game changer or window dressing? Only time will tell.

To read more click here: Venture Beat

AI Outside the Valley

While our attention is on the next move coming out of Silicon Valley, a lot is happening in AI and other emerging technologies throughout the world. I would propose that this is actually where the future of these technologies lies. Here is a short selection of related updates from around the globe.

Photo by Hitesh Choudhary on Unsplash

Digital Surveillance in South Asia: As activists and dissidents move their activity online, so does their repression. In this interesting article, Antonia Timmerman outlines 5 main ways authoritarian regimes are using cyber tools to suppress dissent.

To read more click here: Rest of the World

Using AI for health? You had better be in a rich country: As we have discussed in previous blogs, AI algorithms are only as good as the data we feed them. Take eye disease: because most available images come from Europe, the US, and China, researchers worry algorithms will not be able to detect problems in under-represented groups. This example highlights that a true democratization of AI must first include an expansion of data sources.

To read more click here: Wired

US companies fighting for Latin American talent: Not all is bad news for the developing world. As the search for tech talent in the developed centers comes up empty, many are turning to overlooked areas. Latin American developers are currently in high demand, driving wages up but also creating problems for local companies that are unable to compete with foreign recruiters.

To read more click here: Rest of the World

Global Race for AI Regulation Marches On

Photo from Unsplash

The window for new regulation in the US Congress may be closing as the mid-term elections approach. This would leave the country lagging behind global efforts to rein in Big Tech’s growing market power and mounting abuses.

As governments fail to act, or act slowly, some are considering a different route. Could self-regulation be the answer? With that in mind, leading tech companies are joining forces to come up with rules for the metaverse as the technology unfolds. Will that be enough?

Certainly not for the Chinese government, if you ask them. The Asian superpower released one of the world's first efforts to regulate deepfakes. With this unprecedented move, China leads the way as the first government to address this growing concern. Could this be a blueprint for other countries?

Finally, EU fines for violations of the GDPR hit a staggering 1.2 billion. Amazon alone was slapped with an $850 million penalty for its poor handling of customer data. While this is welcome news, one cannot assume it will lead to a change in behavior. Given mounting profit margins, Big Tech may see these fines not as a deterrent but simply as a cost of doing business in Europe. We certainly hope not but would be naive not to consider the possibility.

Cool Stuff

NASA’s latest and largest-ever telescope reached its final destination. The James Webb Space Telescope is now ready to start collecting data. Astrophysicists and space geeks (like myself) are excited about the possibility of seeing well into the cosmic past. The potential for new discoveries and new knowledge is endless.

To read more click here: Nature

Chef AI, coming to a kitchen near you: In an interesting application, chefs are using AI to tinker with and improve their recipes. The results have been delicious. Driven in part by a trend away from animal protein, chefs need to get more creative, and AI is here to help.

To read more click here: BBC

That’s it. This is our update for January. Many blessings and see you next month!

Kora, our new addition to the family says hi and thank you for reading.

Painting a Global View of AI for Good: Part 2

This blog continues the summary of our AITAB meeting in November. Given the diverse group of voices, we were able to cover a lot of ground in the area of AI for good. In the first part, I introduced the three main trends of AI for good: democratization of AI skills, green AI, and AI justice. In this blog, we cover examples of AI for good in industry and academia, along with a global perspective from Eastern Europe. Our board members spoke from experience and also listed some great resources for anyone interested in going deeper into the field.

AI for Good in Industry

Davi: Another way AI and machine learning are helping with sustainability is by improving companies’ consumption of non-renewables. For example, one of the largest expenses of a cruise line is fuel. Mega-ships require untold amounts of fuel to move across oceans around the globe. And in maritime shipping, the exact same route may require differing amounts of fuel due to the many variables that impact consumption, such as the weight of the ship, seasonal and unexpected currents, and different weather patterns.

AI and machine learning have expanded the capacity to calculate, with never-seen-before precision, the amounts of fuel needed for such mega-ships to safely complete each of their routes in real-time. This newfound capability is not only good for these companies’ bottom lines but also helps them preserve the environment by diminishing emissions.
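As a purely illustrative aside, the core of such a system can be framed as a regression problem: map voyage features to fuel burned, fit the model on past voyages, then estimate new routes. The sketch below is a toy with invented features and numbers, not how any cruise line actually does it; real systems use far richer models and operational data.

```python
# Toy sketch: estimate route fuel consumption from voyage features
# using plain gradient-descent linear regression (no libraries needed).
# All features and fuel figures below are invented for illustration.

def fit_linear(rows, targets, lr=0.01, epochs=10000):
    """Fit fuel ~ w . x + b by stochastic gradient descent."""
    w = [0.0] * len(rows[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(rows, targets):
            pred = sum(wi * xi for wi, xi in zip(w, x)) + b
            err = pred - y
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

# Features per voyage: [ship weight (scaled), current strength, storm flag]
voyages = [
    [1.0, 0.0, 0.0],
    [2.0, 1.0, 0.0],
    [3.0, 0.0, 1.0],
    [5.0, 1.0, 1.0],
]
fuel = [4.0, 9.0, 14.0, 22.0]  # tons of fuel, toy numbers

w, b = fit_linear(voyages, fuel)

# Estimate fuel for a new voyage: mid-weight ship, strong current, no storm.
estimate = sum(wi * xi for wi, xi in zip(w, [2.5, 1.0, 0.0])) + b
print(round(estimate, 1))  # roughly 10.5 with these toy numbers
```

The real-time aspect Davi describes would simply mean re-running such an estimate as the live inputs (currents, weather, load) change during the voyage.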

Elias: That’s a great point. A recent study by PricewaterhouseCoopers estimates that AI applications in transportation can reduce greenhouse emissions by as much as 1.5% globally, so this is definitely an important trend to track.

Photo by Christian Lue on Unsplash

A Report from Eastern Europe   

František: I tried to investigate and revise my knowledge in the three areas Elias proposed. Regarding the first topic, the democratization of AI skills: from the perspective of Prague and the Czech Republic, we are at a crossroads between Eastern and Western Europe. There are initiatives that focus on AI education and popularization and issues related to that. I would like to point out one specific initiative, Prague AI, a collaboration between academic institutions and private companies.

This kind of initiative is more technological, and they are just beginning to grasp the idea that they need ethical input, for example from philosophers. Yet they are not inviting theologians to the table. I guess we need to prove our value before we are “invited to the club”.

With that said, Prague AI wants to transform the city into a European center for AI. They have good resources for that, both human and institutional support. So, I wouldn’t be surprised if they achieve this goal, and I wish them all the best. My research group aims at connecting with them too. But we need first to establish ourselves a bit better within the context of our university.

On another front, we recently established contact with the Ukrainian Catholic University, which aims to open an interdisciplinary program for technology and theology. However, we do not know yet how far along they are with this plan. We intend to learn more; I am in the process of scheduling an in-person meeting with the dean of their Theological Faculty, which has not yet been possible due to the pandemic. We are very much interested in this cooperation.

We also aspire to establish a conversation with the Dicastery for Integral Human Development and other Vatican-related organizations in Rome where issues of new technologies and AI receive great attention especially in relation to ethics and Catholic Social Teaching. In December 2021 one of my team members went to Rome to start conversations leading towards that aim.

Photo by Guillaume Périgois on Unsplash

In summary, here in Central and Eastern Europe democratization of AI is more focused on education and popularization. People are getting acquainted with the issue.

Regarding sustainable AI, we are following in the footsteps of the European Commission. One of the European commissioners who shaped this agenda is from the Czech Republic, and maybe because of that, the European Commission sponsored a big conference in September that was in large part focused on green AI. The contribution on the role of AI in processing plastic materials was especially interesting because it has great potential for green AI implementation.

The European Commission introduced a plan for the third decade of this century. It’s called the Digital Decade. It includes objectives like digitalization of public buildings, digital economics, and the growth of digital literacy among citizens with large support for the field of AI.

In Europe, AI justice is a big issue. There is hope and a lot of potential in AI to contribute towards the effectiveness and quality of judicial procedures. Yet there is an equivalent concern about the fundamental rights of individuals. I’m not very well acquainted with these issues, but it is an important topic here in Europe.

AI for Good in Academia

Photo by Vadim Sherbakov on Unsplash

Scott: I’m a professor of physics at Belmont University. I started working with machine learning around 2013/2014, developing audio signal processing applications. I got onto an AI ethics grant in 2017 and went to Oxford for a couple of summers.

My background, a long time ago, was astrophysics, but I eventually became focused on machine learning. I split my time between doing technical work and doing philosophical and ethical thinking. I recently taught a general education undergraduate class that integrated machine learning and ethics. We talked about how machine learning and algorithms work while also discussing various ethical principles and players in the field.

Then the university asked me to teach a more advanced, coding-focused course for upper-level students. This fall I’m teaching AI, deep learning, and ethics, and it’s kicking my butt because I am writing a lot of the lessons from scratch. One of the things I’m doing in this course is integrating materials from the open-source community and from free, public machine learning and deep learning education. There’s Google, Facebook, and then there’s everybody else.

So I’ve taken a lot of classes online. I’m pretty involved with the fast.ai community of developers and its ancillary groups, like Hugging Face, for example, which is a startup but also a community. This makes me think that democratization means, in addition to proliferation around the world, a proliferation of education among everybody who is not at one of the big tech firms.

Democratization of AI and Open Source

I think a couple of big things come to mind, such as open-source communities doing their own work, like EleutherAI. They released their own GPT model that they trained themselves. It’s a sort of grassroots community group, loosely affiliated, that managed to pull this off through people donating their time and expertise.

Photo by Shahadat Rahman on Unsplash

One of the things they teach in fast.ai is transfer learning: instead of training a giant model from scratch, you take an existing model and fine-tune it. That relates to sustainability as well. There are ecological concerns about the power consumption needed to train large language models; Megatron-Turing Natural Language Generation (MT-NLG), the gigantic model from Microsoft and NVIDIA, is one example.

With transfer learning, we can start from an existing model's weights as the initialization, which doesn't require much power. This allows people all over the globe to run models with few computational resources. The idea is to take ivory-tower deep learning research and apply it to other things. Of course, one of the questions people think about is what we are inheriting when we grab a big model and then fine-tune it; nobody really knows how much of that learned structure stays intact after fine-tuning.
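To make the idea concrete, here is a deliberately tiny, self-contained sketch of transfer learning; all the numbers and the "pretrained" weights are invented for illustration, and a real pipeline would use a deep network through a library like fastai or PyTorch. A frozen "backbone" turns raw input into features, and only a small new head is trained on a handful of examples.

```python
# Toy sketch of transfer learning: a frozen "pretrained" feature
# extractor plus a small trainable head. Purely illustrative numbers.

PRETRAINED_W = [0.5, -1.2, 0.8]  # stand-in for frozen backbone weights

def extract_features(x):
    """Frozen backbone: a fixed ReLU-style transform of the raw input."""
    return [max(0.0, w * x) for w in PRETRAINED_W]

def train_head(data, lr=0.05, epochs=500):
    """Fine-tune only the linear head; the backbone is never updated."""
    head, bias = [0.0] * len(PRETRAINED_W), 0.0
    for _ in range(epochs):
        for x, y in data:
            feats = extract_features(x)
            err = sum(h * f for h, f in zip(head, feats)) + bias - y
            head = [h - lr * err * f for h, f in zip(head, feats)]
            bias -= lr * err
    return head, bias

# A handful of fine-tuning examples: learn y = 2x for positive x.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
head, bias = train_head(data)
pred = sum(h * f for h, f in zip(head, extract_features(2.5))) + bias
print(round(pred, 1))  # close to 5.0
```

The point of the sketch is the division of labor: the bulk of the parameters (the backbone) is reused as-is, leaving only a tiny optimization problem, which is why fine-tuning needs far less compute and data than training from scratch.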

It’s an interesting and accessible area, considering how many people, myself included, post free educational content online. You can take free courses and read free blog posts to learn about machine learning, tool development, and ethics as well. The open-source movement is a nice microcosm of the democratization of content, relating to both AI ethics and sustainable AI.

Photo by niko photos on Unsplash

Elias: Thank you, Scott. I want to seize on that to make a point. Open source in the tech world is a great example of the mustard seed technology idea. It starts through grassroots efforts where many donate their time to create amazing things. That is where I think tech culture has something to teach theology: it actualizes the gift economy. In the market economy we live in, companies focus on profit; it is highly transactional and calculating. Here you have an alternative economy where an army of volunteers creates things for free and invites anyone to take what they need. They build it simply for the pleasure of building it. It’s a great example.

Scott: I’m also starting to do some work on algorithmic auditing, and I just found a group called Data Science for Social Good. Others may find it as interesting as I do.

What Will Online Religion Look Like In The Metaverse?

The Internet should not be understood merely as a tool; it is better seen as an extension of a complex environment. Contemporary people live on (or in) the Internet as much as in the physical landscapes of this world, and this mode of living continues to intensify. Ever more human activities are moving into the online environment, which changes them profoundly. Just think of how rapidly shopping has changed over, say, the last decade. Three decades ago, before the Internet became an everyday reality, shopping was a completely different experience than it is nowadays.

Regardless of these considerations, it would be a mistake to see the Internet only as a kind of parallel reality. As a complex phenomenon, the Internet touches all spheres of human life, including the sphere of religion: religious life. Here a crucial question arises: what is the relation between religion and the key technical medium of the internet?

In general, we may consider three dimensions of such a relationship: (1) religion online, (2) online religion, and (3) online religious experience. The first two dimensions were studied and well defined by the Canadian sociologist and anthropologist of religion Christopher Helland. The third was added a few years ago by Menahem Blondheim and Hananel Rosenberg, new media and media theory experts from the Hebrew University of Jerusalem. The concept they suggest raises serious questions, but at the same time it touches on the limits of what is presently possible. In any case, all the phases (if, like Helland, we look at the problem from a developmental perspective) or dimensions (if we consider that the first two categories often exist in parallel or in different combinations) might be described as follows.

taken from Pixabay.com

The Initial Stages of Internet Religion

Religion online describes a static presence of religion on the internet. Typical examples are websites containing information about different religious communities and their activities; in the Christian tradition, we can point to the websites of parishes or church communities. According to Helland, this mode of the relationship between religion and the internet belongs to the past, characterized by slow connections and technically limited, static access devices (e.g., heavy personal computers), which some of us may remember from the 1990s. At that time, religious communities understood the internet merely as a tool for presenting themselves. With time, websites, as well as social networks with religious content, became an integral part of life for a great number of religious communities. They will likely continue to serve as such even as the internet offers new possibilities.

With online religion, the internet became a tool for developing religious practice online as connection speeds and access technologies improved. Online prayer groups and religious rituals or services set and performed in an online environment are examples. This form of interaction between religion and the internet experienced an unprecedented boom over the last two years in the context of the COVID-19 pandemic. With governmental restrictions and lockdowns, religious life in its traditional communitarian form stopped practically overnight, and religions were forced to move a large part of their activities online. The internet was naturally used as a tool to handle this change. Soon, however, it was understood to be not only a tool but, from now on, also a new environment for religious practice.

This is of course not without theological problems. Perhaps, one of the crucial questions refers to the online religious experience. Is it possible to speak about online religious experience at the level of online religion? Is online religious experience real or rather “just” virtual (and thus not real)? Blondheim and Rosenberg open this question and argue for a new dimension of the relationship between religion and the internet because, in their opinion, it is possible to encounter authentic religious experiences in cyberspace.

Photo by ThisisEngineering RAEng on Unsplash

The Next Frontier

It is disputable whether online religious experiences are already a present phenomenon or an uncertain matter of the future; in the latter case, they would only count as a vision of the future. Some suggest that something like an online religious experience is not possible at all in principle. However, theoretically speaking, the increasing speed of connection and responsiveness of largely personalized and omnipresent access devices (e.g., smartphones), the quickly advancing datafication of human lives, the development of virtual reality, and interaction with artificial intelligence bring serious questions into the realm of religion.

The internet is becoming a true environment: the borderline between cyberspace and the real space in which our bodies dwell is growing more transparent and slowly fading away. Consequently, cyberspace should neither be labeled a “consensual hallucination”, as it once was by its conceiver William Gibson, nor a kind of utopia, a place “nowhere-somewhere” (Kevin Robins). Contemporary people live in digital landscapes as they do in physical ones. These two traditionally separated spaces manifest themselves, today, as one hybrid space.

The age of the internet of things is slowly coming to its end, and the era of the internet of everything is setting in. Quickly advancing mobile technologies are playing a key role in the hybrid space interface. Thanks to them, people are practically connected to the internet non-stop. They can create digital-physical landscapes and perceive how they become digital-physical hybrid entities as they live in hybrid (real-virtual) spaces they create for themselves. To put it bluntly, what is happening on the world wide web, is happening in the real world, and vice versa. Everything might be online, and to a large extent, it already is.

Photo by Diana Vargas on Unsplash.com

Religion in the Metaverse

Let’s assume that in such an environment (such as the metaverse) it is possible to have an authentic religious experience. In other words, let’s presume that, from this perspective, the encounter with the Sacred in cyberspace has the same characteristics and qualities as in the physical landscapes of this world. An imaginary wall between real and virtual is still perceivable. With the emergence of the metaverse, however, it is becoming more transparent and more permeable. Whether it will ever disappear remains a question. In any case, we can already speak of religious experience in cyberspace in connection with some computer games, such as World of Warcraft (Geraci, Gálik, Gáliková).

Recent research on Neo-Paganism suggests that a relatively high number of its adherents consider their online religious activity equivalent to that in the physical world. Some of them even stated that their religious activity in cyberspace is on a higher level than that in real life. We may also speak of online religious experiences concerning the phenomena of virtual pilgrimages (cyber-pilgrimages or e-pilgrimages). Further, platforms like the one with the meaningful name Second Life make it possible to live a religious life in a completely online environment.

Blondheim and Rosenberg believe that online religious experience in the digital world is “emerging from the breakdown and collapse of all entrenched conventions and narratives in the digital world, and the opening of a chaotic abyss can (…) serve as a prelude to a fresh new theological start.” Unfortunately, they do not say anything about how this new theological start they propose should look. But, right now, it is not that important because it may stimulate our imagination and thoughts on the transformations of faith in the digital age.

What would be your reflection on this matter?

An earlier version of this text originally appeared in the Christnet online magazine in Czech (https://www.christnet.eu/clanky/6592/nabozenstvi_on_line_on_line_nabozenstvi_a_on_line_nabozenska_zkusenost.url); published 22nd September 2021. English translation published with the permission of the Christnet magazine. Translated by the author.


František Štěch is a research fellow at the Protestant Theological Faculty of Charles University. He serves as coordinator of the “Theology & Contemporary Culture” research group. Previously he worked at the Catholic Theological Faculty of Charles University as a research fellow and project PI. His professional interests include Fundamental theology; Ecclesiology; Youth theology; Religious, and Christian identity; Intercultural

Get to know our Advisory Board

Painting a Global View of AI for Good: Part I

In early November, AITAB (AI Theology Advisory Board) met for another fruitful conversation. As in our previous meeting, the dynamic interaction between our illustrious members took the dialogue to places I did not anticipate. In this first part, we set up the dialogue by framing the key issues in AI for good. We then move to a brief report on how this is playing out in East Asia. In a conversation often dominated by Silicon Valley perspectives, it is refreshing to take a glimpse at less-known stories of how AI technologies are reshaping our world.

Defining AI for Good

Elias: Let me say a few words to start the discussion. This is different from our other discussions where we focused on ethics. In those instances, we were reflecting on what was “wrong” with technology and AI. Today, I wanted to flip this script and focus more on the positive, what I call “AI for good”. Good theology starts with imagination. Today is going to be more of an exercise of imagination to notice what’s happening but doesn’t necessarily make the news.

More specifically, there are three main areas where I see global AI for good starting to take shape. The first is the democratization of AI skills: spreading technological knowledge to underrepresented communities. This is a very important area since, as we discussed before, people make AI in their own image. If we don't widen the group, we will keep getting the same type of AI. A great example is Data Science Nigeria. Just yesterday I was a speaker at one of their bootcamps. It's very encouraging to see young men and women from Nigeria and other African countries getting involved in data science. It started as the vision of two data scientists who want to train 10 million data scientists in Nigeria over the next 10 years. It's a bold goal, and I sure pray they achieve it.

The other topic is Green AI, or Sustainable AI: how AI can help us become more sustainable. One example is using computer vision to identify illegal fires in the Amazon, using AI to effect change with an eye on sustainability. And the last one is AI justice: just as AI can create bias, AI tools can be used to identify and call out that bias. That is the work of organizations like the Algorithmic Justice League, led by Joy Buolamwini. That's also an area that is growing. These three areas cover the main themes of global AI for good.

by Bruno Melo from Unsplash.com

Re-Framing Technology

Let me frame them within a biblical context. When we say technology, we usually mean Big Tech coming out of Silicon Valley. As an alternative, I want to introduce a different concept: mustard seed technology. In the gospels, Jesus speaks of the kingdom of God as being like a mustard seed. Though it is one of the smallest seeds, it becomes a big tree where birds can come and rest in its shade.

I love this idea of grassroots technology, either being developed or being deployed to provide for others. Just think of farmers in Kenya using their phones to make payments and do other things they were not able to do before. Those are the stories I wanted to think about today, starting geographically: what does global AI for good look like in different places of the world?

Photo by Akson on Unsplash

AI for Good in East Asia

Levi: Here in East Asia, the turning point came in 2016 when DeepMind's AlphaGo (a Google AI system) beat Lee Sedol in a game of Go. It created a very interesting push in South Korea and China to rapidly advance and develop AI infrastructure. I'm involved with a group on AI social and ethical concerns focused on Asia; it has nine scholars from six different Asian countries. One of the things we are going to discuss soon is an MIT report based on interviews with several Asian business owners about the direction of AI. The report is two years old, but it's interesting to see how small and narrowly focused China's AI sector was then. Now the country is one of the world leaders in AI development.

There is a major push in this part of the world. Asia across the board was late to the industrial game, except for Japan. As countries like South Korea and China have massively industrialized in recent decades, they see AI as a way to push into the future. This opens a lot of questions, like the ones about democratization and justice, that need to be addressed. But one interesting thing is that Asian countries are more interested in pushing toward AI regulation than the US or European countries. There is also a recognition of wanting to be the best in advanced technology but also the best in “getting it right”.

Where that’s going to go is hard to say. We know that in China, the government directs most AI development, so the question of democratization may not be the question at hand. South Korea allocated billions of won to AI development around the same time and will likely engage in more democratization than China.

It is also interesting to consider justice issues, like how facial recognition fails to recognize people who aren't white men. When you train this technology on Chinese datasets, you have a much larger data set (one and a half billion people rather than 350 million in the US), which makes it possible to reduce these biases and offers great potential for global AI for good.

There is also the problem of natural language processing. GPT-3 recently came out and, just like GPT-2, is trained largely on English web pages. This means biases from the English-speaking world are coded into those AI systems. But if you train the same systems on Chinese, Korean, Japanese, or Hindi language pages, you are going to end up with different frameworks. The bigger question will be: is there a way to put these in dialogue? I think this is a much more complicated question. Because there is so much development going on in this part of the world, it opens up the recognition that many of the biases encoded in Western AI development will not be the same as in the rest of the world.

Conclusion

In this first part, we introduced the discussion of a global view of AI for good. It includes three main categories: democratizing AI skills, sustainable AI, and AI justice. We then framed it within a mustard seed technology perspective. That is, we focus on the margins as opposed to the geo-centers of technological power. We are less interested in Silicon Valley and more in what is happening on the street corners of global cities.

Why ‘Don’t Look Up’ Falls Flat on Climate Change

A while back, I noticed “Don’t Look Up” at the top of the Netflix rankings. Considering the star-studded cast, I was excited to watch the comedy with my wife. I could not have been more disappointed. The long-winded satire missed many opportunities, accomplishing little beyond repeating Hollywood’s caricatures of the last president and his supporters. With that said, this is not the first movie I did not like. What surprised me, and made me make an exception to write about a movie I disliked, was the passionate reaction my lone FB comment received. More importantly, what struck me was how many respondents saw the film as a good metaphor for the climate change crisis.

In this blog, I contend the exact opposite: the movie does a great disservice to raising awareness and effecting environmental change. It does so not just because of its flat jokes but because it frames the issue wrongly, only serving to confirm the prejudices against Hollywood activism – namely, that it is shallow, misguided, and most often ineffective. In short, ‘Don’t Look Up’ misses the point on climate change.

Before you tune out thinking you were trapped into reading a climate denier’s diatribe, let me introduce myself. I have written before here about the importance of making the environment our top priority. My commitment goes beyond writing. Our household composts nearly 80-90% of our non-animal food waste, returning it to the earth. I drive a plug-in hybrid, and solar panels will soon be installed on our rooftop.

I don’t say this to brag but only to make the point that there can be disagreement even among those who support bold climate action. This is not a binary world, and I hope by now you can slow down and read what I have to say. I write this not because I don’t care about climate change but precisely because I do.

Trailer from Youtube

Framing the Issue Wrongly

Now that we have our introductions out of the way, let me introduce the central point. Using an analogy of a cataclysmic disaster six months from now to convince people about climate change misses the mark because it reduces the crisis to a one-time event. This is hardly what is happening. Our climate crisis is not a premonition of an upcoming doomsday. Instead, it is a complex and gradual problem whose ramifications we hardly understand. That does not mean it is not serious, just that real change requires long-term planning and commitment.

Don't Look Up poster

If anything, the movie exposed America’s inability to inspire grand ideas and engage in long-term plans. The problem with climate denial is not just that it ignores the facts but also that it demonstrates fatally selfish short-termism. We are simply unable to think beyond a 4-year election cycle or even the next year. Instead of working toward long-term plans, we try to reduce the problem to one cataclysmic event through cheap comedy that only feeds into political polarization.

What about urgency? It is true that the window is closing for us to meet UN temperature-increase goals. In that sense, there is a parallel with an impending disaster. With that said, while the urgency is real, addressing it is a lot more complex than shooting a meteor off course. Hence, my concern is that sounding a general alarm and labeling anyone who ignores it an idiot is not very productive.

Top-down vs Grassroots Change

According to ‘Don’t Look Up’, while climate denial is a generalized problem, it is particularly acute among the Silicon Valley and political elite. The movie takes a light jab at the media, which is rather ironic given who is talking. It also critiques recent billionaires’ efforts to reach space as a glorified act of escapism.

That is not to say the criticism here is unwarranted. I must admit that Meryl Streep as a Trump-like character had her funny moments. The memory of last year’s stupidity and cruel incompetence is still vivid, almost too real to even be funny. The tech tycoon character also had his moments, constantly looking for ways to profit from earth’s misfortunes. This is not too far from Big Tech’s mentality of technologizing their way out of any problem. That is, they are constantly seeking to fit a technological hammer to problems that require a scalpel.

Photo by Lina Trochez on Unsplash

With that said, the movie again misses the point. The change we need to address climate change must start at the grassroots and then make its way to the top. If we continue to look to the centers of power for solutions, we will be in bad shape. Elon Musk made the electric car cool. That is progress, but it is a bit disheartening that it took sleek design and neighbor envy to get people interested in this technology. An electric future powered by Tesla may be better than the one offered by other carmakers, but that is still short of the change we need.

As long as American suburbs lie undisturbed with their gigantic SUVs spewing pollution in school car lines, we have a long way to go. The change needed is cultural. We need something that goes deeper than “scaring people” into doing good things. We need instead to articulate an attractive vision that will compel large segments of society to commit to sustained, long-term change.

Conclusion

You may say that I am taking this movie too seriously. Comedies are not meant to be political manifestos and will often get a pass on how they accomplish their goals. That may very well be the case. My goal here is not to change your mind about the movie but instead to use this cultural phenomenon as a way to open up a wider conversation about our current predicament.

While our environmental crisis is dire, we need a bigger vision of flourishing to address it. It is not an impending doom but a warning that we need to change our relationship with our planet. Instead of focusing on those who cannot see it yet, why not show them a vision of flourishing for the planet that they can get behind?

The work for the flourishing of all life requires a long-range view so we can engage in the hard work ahead of us. If all this movie does is bring the conversation back to this issue, then that’s progress. In that sense, ‘Don’t Look Up’ may not be a complete loss for the cause of addressing climate change. Even if it misses the point, it hopefully makes people think.

And of course, watch out for the Bronteroc!

How Knight Rider Predicts the Future of AI-Enabled Autonomous Cars

The automobile industry is about to experience transformative disruption as traditional carmakers respond to the Tesla challenge. The battle is not just about going from combustion to electric; it extends to the whole concept of motorized mobility. E-bikes, car-sharing, and autonomous driving are displacing the centrality of cars as not just a means of transportation but also a source of status and identity. The chip shortage also demonstrated the growing reliance on computers, exposing the limits of growth as cars become more and more computerized. In this world of uncertainties, could Knight Rider shed some light on the future of autonomous cars?

As a kid, I dreamed of having a KITT (Knight Industries Two Thousand), a car that would work on my voice command. Little did I know that many of the traits in the show are now, nearly 40 years later, becoming a reality. To be sure, the show did not age well in some aspects (David Hasselhoff’s sense of fashion, for one, and the tendency to show men’s bare, hairy chests). Yet, on the car tech, it actually hit a few home runs. In this blog, I want to outline some traits that came up in the show that turned out to be well aligned with the direction of car development today.

Lone Ranger Riding a Dark Horse

Before proceeding, let me give you a quick intro to Knight Rider‘s plot. This 1980s series revolves around the relationship between Michael, a lone-ranger type out to save the world, and his car, KITT. The car, a souped-up version of a Pontiac Trans Am, is an AI-equipped vehicle that can self-drive, talk back to its driver, search databases, remotely unlock doors, and much more.

In the intro episode, we learn that Michael got a second chance at life. After being shot in the face by criminals, he undergoes plastic surgery and receives a new identity. Furthermore, a wealthy man bequeaths him the supercar along with the help of the team that built it. On his deathbed, the wealthy magnate tells Michael the truth that will drive his existence: “One man can make a difference.”

Taken from Wikipedia

Yes, the show does suffer from an excess of testosterone and a royal lack of melanin.

Yet, I contend that Michael is not the main character of the show. KITT, the thinking car steals the show with wit and humor. The interaction between the two is what makes an average sci-fi flick into a blockbuster success. You can’t help but fall in love with the car.

Knight Rider Autonomous Car Predictions

  • Auto-pilot – this is the precursor of autonomous driving. While systems to keep speed constant have been common for decades, true autonomous driving is a recent advance. It is now an option for new Tesla models (albeit as a hefty $10K add-on) and partially present in other models as auto parking, lane detection, and automatic braking. This feature was not hard to predict. Maybe the surprise here is not that it happened but how long it took to happen. I suspect large automakers got a little cozy with innovation as they sold expensive gas-guzzlers for most of the ’90s and early ’00s. It took an outsider to force them back into research.
  • Detecting drivers’ emotions – At one point in the debut episode, KITT informs Michael that his emotional state is altered and he might want to calm down. Michael responds angrily that the car would talk back to him. While this makes for a funny bit, it is also a good prediction of some recent facial recognition work using AI, which claims to ascertain an individual’s emotional state from the driver’s facial expression alone. There is a lot of controversy on this one, but the show deserves credit for its foresight. A car that tells you to “calm down” may be coming your way in the next few years.
Image extraction from Coded Bias
  • Remote manipulation of electronic devices – This is probably the most far-sighted trait in the show. Even today it is difficult to imagine automated cars that can interact with the world beyond their chassis. Yet this, too, is in the realm of possibility. Emerging Internet of Things (IoT) technology will make it a reality. The idea is that devices, appliances, and even buildings can be connected through the Internet and run algorithms. It envisions a world where intelligence is not limited to living beings or phones but extends to all objects.

Conclusion

Science fiction works capture the imagination of the time they are written. They are never 100% accurate but can sometimes be surprisingly predictive. The show’s creators did not envision a future of flat screens and sleek dashboard designs as we have today. On the other hand, they envisioned aspects of IoT and emotional AI that were unimaginable at the time. In this case, besides entertaining, these works also help create a vision of a future to come.

from Wikipedia.com

Reflecting on this 40-year-old show made me wonder about current sci-fi and its own visions of what is to come. How will coming generations look back at our present visions of their time? Will they reveal gross blind spots, like Knight Rider’s white male individualism? Will they inspire future technologies such as IoT?

This only highlights the importance of imagination in history-making. We build a future now inspired by our contemporary dreams. Hence, it is time we start asking more questions about our pictures of the future. How much do they reflect our time, and how much do they challenge us to become better humans? Even more importantly, do they promote the flourishing of life or an alternative cyberpunk society? Whether it is Knight Rider‘s depiction of autonomous cars or Oxygen‘s view of cryogenics, they reflect a vision of the future captured at a moment in history.

How can Machine Learning Empower Human Flourishing?

As a practicing Software Product Manager currently working on the 3rd integration of a Machine Learning (ML) enabled product, my understanding of and interaction with models is much more quotidian and, at times, downright boring. But it is precisely this form of ML that needs more attention, because ML is the primary building block of Artificial Intelligence (AI). In other words, in order to get AI right, we first need to focus on how to get ML right. To do so, we need to take a step back and reflect on the question: how can machine learning work for human flourishing?

First, we’ll take some cues from liberation theology to properly orient ourselves. Second, we need to understand how ML models are already impacting our lives. Last, I will provide a pragmatic list of questions for those of us in the technology field that can help move us towards better ML models, which will hopefully lead to better AI in the future. 

Gloria Dei, Vivens Homo

Let’s consider Elizabeth Johnson’s recap of Latin American liberation theology. To its stock-standard elements – the preferential option for the poor, the Exodus narrative, and the Sermon on the Mount – she adds a consideration from St. Irenaeus’s phrase Gloria Dei, vivens homo. Translated as “the glory of God is the human being fully alive,” this means that human flourishing is God’s glory manifesting in the common good. The common good is not simply an economic factor. Instead, it is an intentional move toward the good of others by seeking to dismantle the structural issues that prevent flourishing.

Now, let’s dig a bit deeper: what prevents human flourishing? Johnson points to two things: 1) inflicting violence on others, or 2) neglecting their good. Both of these translate “into an insult to the Holy One” (82). Not only do we need to refrain from inflicting violence on others (which we can all agree is important), but we also need to be attentive to their good. Now, let’s turn to the current state of ML.

Big Tech and Machine Learning

We’ll look at two recent works to understand the current impact of ML models and hold them to the test. Do they inflict violence? Do they neglect the good? The 2020 investigative documentary (with a side of narrative drama) The Social Dilemma (Netflix) and Cathy O’Neil’s Weapons of Math Destruction are both popular and accessible introductions to how actual ML models touch our daily lives.

Screen capture of Social Dilemma

The Social Dilemma takes us into the fast-paced world of the largest tech companies (Google, Facebook, Instagram, etc.) that touch our daily lives. The primary use case for machine learning in these companies is to drive engagement by scientifically focusing on methods of persuasion. More clicks, more likes, more interactions; more is better. Except, of course, when it isn’t.

The film sheds light on how the desire to increase activity and monetize their products has led to social media addiction and manipulation, and even provides data on the increased rates of suicide amongst pre-teen girls. Going further, the movie points out that for these big tech companies the applications themselves are not the product; humans are. That is, the gradual but imperceptible change in behavior itself is the product.

These gradual changes are fueled and intensified by hundreds of small daily randomized A/B tests that change minor variables to influence behavior. For example, do more people click on this button when it’s purple or green? With copious amounts of data flowing into the system, the models become increasingly accurate, so the model knows (better than humans) who is going to click on a particular ad or react to a post.
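The mechanics of such a test are simple enough to sketch in a few lines of Python. This is a toy illustration only, not any company’s actual system: the variant names, the 50/50 split, and the click counts are all made up.

```python
# Toy sketch of a button-color A/B test (all names and numbers hypothetical).
def assign_variant(user_id: int) -> str:
    """Deterministically split users into two groups by id."""
    return "purple" if user_id % 2 == 0 else "green"

def click_rate(clicks: int, impressions: int) -> float:
    """Fraction of impressions that resulted in a click."""
    return clicks / impressions if impressions else 0.0

# Simulated outcome of one experiment: which color drew more clicks?
results = {"purple": click_rate(120, 1000), "green": click_rate(95, 1000)}
winner = max(results, key=results.get)  # this variant ships to everyone
```

Run hundreds of such experiments a day, each nudging a minor variable, and the cumulative effect is exactly the gradual, imperceptible behavior change the film describes.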

This is how they generate revenue: they target ads at people who are extremely likely to click on them. These small manipulations and nudges to elicit behavior have become such a part of our daily lives that we are no longer aware of their pervasiveness. Hence, humans become commodities that need to be continuously persuaded. Liberation theology would look to this documentary as a way to show concrete ways in which ML is currently inflicting violence and neglecting the good.

from Pixabay.com

Machine Learning Outside the Valley

Perhaps ‘normal’ companies fare better? Non-tech companies are getting in on the ML game as well. Unlike tech companies that focus on influencing user behavior for ad revenue, these companies use ML as a means to reduce the workload of individual workers, reduce headcount, and make more profitable decisions. Here are a few types of questions they would ask: “Need to order stock and determine which store it goes to? Use machine learning. Need to find a way to match candidates to jobs for your staffing agency? Use ML. Need to find a way to flag customers who are going to close their accounts? ML.” And the list goes on.

Cathy O’Neil’s work gives us insight into this technocratic world by sharing examples from credit card companies, recidivism prediction, for-profit colleges, and even a challenge to the US News & World Report college rankings. O’Neil coins the term “WMD”, Weapons of Math Destruction, for models that inflict violence and neglect the good. The three criteria of WMDs are models that lack transparency, grow exponentially, and cause a pernicious feedback loop; it’s the third that needs the most unpacking.

The pernicious feedback loop is fed by selectivity biases in the original data set. The example she gives in chapter 5 is PredPol, a big-data startup whose crime-prediction software is used by police departments. The model learns from historical data in order to predict where crime is likely to happen, using geography as its key input. The difficulty is that when police departments choose to include nuisance data in the model (panhandling, jaywalking, etc.), the model becomes more likely to predict new crime in that location, which in turn prompts the police department to send more patrols to that area. More patrols mean a greater likelihood of seeing and ticketing minor crimes, which in turn feeds more data into the model. In other words, the models become a self-fulfilling prophecy.
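The feedback loop O’Neil describes can be made concrete with a toy simulation. The numbers below are my own illustrative assumptions, not PredPol’s: each round, patrols are allocated in proportion to previously recorded incidents, and more patrols mean more newly recorded incidents.

```python
# Toy model of a pernicious feedback loop: recorded incidents drive patrols,
# and patrols drive newly recorded incidents. Numbers are purely illustrative.
def simulate_feedback(recorded: int, rounds: int = 5, patrol_factor: float = 0.2) -> list:
    history = [recorded]
    for _ in range(rounds):
        patrols = recorded * patrol_factor  # patrols scale with past data
        recorded += int(patrols)            # more patrols, more tickets recorded
        history.append(recorded)
    return history

history = simulate_feedback(100)  # [100, 120, 144, 172, 206, 247]
```

Notice that no actual crime rate appears anywhere in the loop; the model feeds on its own output, which is precisely what makes it a self-fulfilling prophecy.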

A Starting Point for Improvement

As we can see from these two works, we are far from the topic of human flourishing. Both point to many instances where ML models are currently not only neglecting the good of others but also inflicting violence. Before we can reach the ideal of Gloria Dei, vivens homo, we need to make a Liberationist move within our technology to dismantle the structural issues that prevent flourishing. This starts at the design phase of these ML models. At that point, we can ask key questions to address egregious issues from the start. This would be a first step toward making ML models (and later AI) work for human flourishing and God’s glory.

Here are a few questions that will start us on that journey:

  1. Is this data indicative of anything else (can it be used to prove another line of thought)? 
  2. If everything went perfectly (everyone took this recommendation, took this action), then what? Is this a desirable state? Are there any downsides to this? 
  3. How much proxy data am I using? (Proxy data is data that ‘stands in’ for other data.)
  4. Is the data balanced (age, gender, socio-economic)? What does this data tell us about our customers? 
  5. What does this data say about our assumptions? This is a slightly different cut from above; it is aimed more at the presuppositions of whoever is selecting the data set. 
  6. Last but not least: zip codes. As zip codes are often a proxy for race, use them with caution. Perhaps use state-level data or three-digit zip code prefixes to average out the results, and monitor outcomes by testing for bias. 
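Question 4 above (is the data balanced?) can be checked mechanically at design time. The sketch below is a minimal, hypothetical example of comparing a model’s approval rates across demographic groups; the group labels and decisions are invented, and real audits would use richer fairness metrics than a single rate gap.

```python
from collections import defaultdict

def approval_rate_by_group(records):
    """Compute the approval rate per group from (group, approved) pairs."""
    totals, approvals = defaultdict(int), defaultdict(int)
    for group, approved in records:
        totals[group] += 1
        approvals[group] += int(approved)
    return {g: approvals[g] / totals[g] for g in totals}

# Hypothetical model decisions tagged by demographic group.
records = [("A", True), ("A", True), ("A", False),
           ("B", True), ("B", False), ("B", False)]
rates = approval_rate_by_group(records)
gap = max(rates.values()) - min(rates.values())  # disparity between groups
```

A large gap does not prove discrimination on its own, but it is exactly the kind of signal that should trigger a closer look at the training data and the proxies hiding inside it.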

Maggie Bender is a Senior Product Manager at Bain & Company within their software solutions division. She has a M.A. in Theology from Marquette University with a specialization in biblical studies where her thesis explored the implications of historical narratives on group cohesion. She lives in Milwaukee, Wisconsin, enjoys gardening, dog walking, and horseback riding.

Sources:

Johnson, Elizabeth A. Quest for the Living God: Mapping Frontiers in the Theology of God (New York: Continuum, 2008), 82-83.

O’Neil, Cathy. Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy (New York: Broadway Books, 2017), 85-87.

Orlowski, Jeff. The Social Dilemma (Netflix, 2020) 1hr 57, https://www.netflix.com/title/81254224.

What would a Theology of AI Look Like?

The influence of secular thinking in our global society is powerful enough to make a project like a “theology of Artificial Intelligence” appear to be a doomed enterprise – a contradiction in terms at best, and sheer nonsense at worst. Is there a theology of the microchip? Of the integrated circuit? Of Boolean gates? And even if one happened to think that God is closer to software than hardware, is there a theology of AI or machine learning?

Put so plainly and abruptly, these questions can easily lead to the conclusion that such a theology is impossible to make sense of. Just as a secular opinion (surreptitiously powerful even among adherents of religions) often hastens to declare that “religion and science simply cannot go together”, the same would be assumed about theology and modern technology – it is like yoking a turtle and a jaguar together.

Moreover, if one approached the incongruence between “theology” and “artificial intelligence” by transposing it to the field of anthropology, one would again face the same problem on another plane. What does a human being practicing religion – a homo religiosus – have to do with a human being – perhaps the very same human being – as a user of artificial intelligence? Is it not the case that historical progress has by our days left behind not only the relevance of religion but also the very humanism that used to enshrine the same human being in question as sacred? Is secular humanism in our day not giving way to things like the “transhuman” and the “posthuman”? 

YouTube Liturgies

But this secular-historical argument is not difficult to turn upside down. When it comes to human history, it is the nature of the things of the past that they are still with us and, what is more, religious forms of consciousness that many would deem atavistic today not only stay present but can also come across with new vigor in the contemporary digital environment. They might strike many as hybrid forms of consciousness, in which the day before yesterday stages an intense and perplexing comeback.

Photo by Pixabay.com

Take the example of Christian devotion in an online environment like YouTube. Assisted, surrounded, and finally motivated by the artificial intelligence of YouTube, a Christian believer will soon find herself in the intensifying bubble of her own religious fervor. Her worship of Jesus Christ in watching devotional videos is quickly and easily perceived by YouTube’s algorithms which will soon offer her historical documentaries, testimonies, Christian talk shows, subscriptions to Christian channels, and the like. In the wake of this spiraling movement, her religious consciousness will be very different and, in a sense, more intense than that of a premodern devotee of Christ – a consciousness steeped in a medium orchestrated by artificial intelligence.  

It follows from the pervasive presence of artificial intelligence in today’s society in general, and in what we call “new media” in particular, that religious content, like any other kind of content, may also invite an inquiry into the nature of AI. But a note of caution is in order here. The terms “religious” and “religion” in this context must include much more than the semantics of mainstream religious traditions like Christianity.

An online religious attitude includes much more than any cult of personality and may extend to the whole of online existence.

For instance, the above example of artificial intelligence orchestrating Christian experience, after all, is perfectly applicable to any online cult of personality. A teenager worshipping Billie Eilish will experience something very similar to Christian worship on YouTube whose algorithms do not make any methodical distinction between a pop singer and a Messiah. 

Online Worship and Techno-Totalitarianism

In a theology of AI what really matters online is not positive religious content but a certain religious attitude intensified and eventually motivated by Artificial Intelligence. An online religious attitude includes much more than any cult of personality and may extend to the whole of online existence. As researchers of contemporary cultural anthropology and sociology of religion have pointed out, many users of digital technology find a “higher life” and a “more authentic self” online, at the same time as experiencing a mystical fusion with the entirety of the global digital cloud.[1] The relocation of the sacred and the godlike in the realm of the digital is as obvious here as the influence of a technological version of New Age spirituality which is often called “New Edge” by researchers and devotees alike.

From Pixabay.com

This “techno-religion” is fully subservient to what can be termed techno-totalitarianism. The digital technology and environment of our times perfectly fit the definition of totalitarianism: it pervades and knits tightly together all aspects of society while enabling the full subjugation of the individual to a ubiquitous and anonymous power. The totalitarian and curiously religious presence of the secular, “neutral” and functional algorithms of artificial intelligence evokes both a religious past and a religious future.

Algorithmic Determinism

This is another example of the historical dialectic between religion and secularisation. The secular probability theory underpinning these secular algorithms (and predicting the online behavior of users) has roots in the Early Modern statistical theory of prediction modeled on the idea of God’s predestination.[2] Ironically, the idea of divine predestination is making a gruesome return in contemporary times as the increasing bulk of big data at the disposal of AI algorithms means more and more certainty about user behavior and, as a consequence, increasingly precise prediction for and automation of the human future. It is, therefore, safe to say that there can indeed be such a thing as a theology of AI and machine learning.

The division between those who are elected and those who are not increasingly defines various sectors of the contemporary information society, such as the financial market. The simple truth of a formula like “the rich get richer, the poor poorer” has deep roots in the inscrutably complex AI algorithms running in the financial sector, which determine not only trade on Wall Street but also the success or failure of many millions of small cases like individual credit applications.

By Pixabay.com

Algorithms decide who obtains credit and at what interest rate. The more data about individual applicants they have at their disposal, the more accurately they can predict their future financial behavior.[3] As in many other fields defined by AI, it is not difficult to recognize here how prediction slips into modification, and modification into techno-determinism, which seals the fate of the world’s population. Indeed, this immense power over individuals, holding their past, present, and future together with iron clips, is nothing short of a force for a new religious realm and a wake-up call to Christian theology from its dogmatic slumber.

Conclusion

It is clear that if there is a positive theology of artificial intelligence as such it must go far beyond an analysis of explicit, positive “religious content” in today’s online environment.  If so, one question certainly remains which is impossible to answer within the confines of this blogpost: what would a negative theology of AI look like, a theology in which an engagement with AI would go hand in hand with a distance from and criticism of it?


[1]  cf. Stef Aupers & Dick Houtman (eds.), Religions of Modernity: Relocating the Sacred to the Self and the Digital (Leiden-Boston: Brill, 2010).

[2] This idea is spelled out in Virgil W. Brower, “Genealogy of Algorithms: Datafication as Transvaluation”, Le foucaldien 6, no. 1 (2020): 11, 1-43.

[3] This is one of the main arguments in Cathy O’Neil, Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy (New York: Crown, 2016).


Gábor L. Ambrus holds a post-doctoral research position in the Theology and Contemporary Culture Research Group at The Charles University, Prague. He is also a part-time research fellow at the Pontifical University of St. Thomas Aquinas, Rome. He is currently working on a book on theology, social media, and information technology. His research primarily aims at a dialogue between the Judaeo-Christian tradition and contemporary techno-scientific civilization.

Recreating our World Through Mustard Seed Technology

In this blog, I sketch the outline of an alternative story for technology. It starts with an ancient parable and how it has sprung into a multiplicity of meanings in our time. It is a story of grassroots change, power from below, organic growth, and life-giving transformation. Those are terms we often do not associate with technology. Yet, this is about to change as we introduce the concept of mustard seed technology.

Narratives are powerful meaning-making tools. They bring facts together and organize them in a compelling way, making them hard to refute. Most often, their influence goes beyond storytelling to truth-defining. That is, the reader becomes a passive, uncritical receiver of the message, losing sight of the fact that it is only a narrative. The story often becomes an undisputed fact.

Looking Behind the Curtain

When it comes to technology, the situation is no different. The dominant narrative tells the story of Silicon Valley overlords who rule our world through their magical gadgets, constantly capturing our attention and our desires. Other times, it hinges on a Frankenstein perspective of creations turning against their creators, where machines conspire to re-shape our world without our consent. While both narratives hold kernels of truth, their power lies not in their accuracy but in their influence. That is, they are important not because they are true but because we believe them.

Photo by Frederico Beccari from unsplash.com

The role of the theologian, or the critical thinker if you will, is to expose and dismantle them. Yet they do that not by direct criticism alone but also by offering compelling alternative narratives that connect the facts in new ways. Most dominant narratives around technology share a bent toward despair. It is most often the story of a power greater than us, a god if you will, imposing its will to our detriment. Hence, the best antidote is a narrative of hope that does not ignore the harms and dangers but weighs them properly against the vast opportunities human creativity has to offer the world.

The best challenge to algorithmic determinism is human flourishing against all odds.

That is what AI theology aspires to. As we seek to intersect technological narratives with ancient texts, we look both for ethical considerations and for a lens of hope, both in short supply in the worlds of techno-capitalism and techno-authoritarianism. In their worship of profit, novelty, and order, these two dominant currents tell only part of the story. Yet, unfortunately, as they proclaim it through powerful loudspeakers, parallel stories are overshadowed.

A Biblical Parable

According to the Evangelists, Jesus liked to teach through parables. He knew the power of narrative. The gospels contain many examples of these short stories, often meant to make the hearer find meaning in their environment. They were surprisingly simple, memorable, and yet penetrating. Instead of being something to discern, the parable discerned the listeners as they encountered themselves in the story.

Photo by Mishaal Zahed on Unsplash

One of them is the seminal parable about the mustard seed. Evangelist Matthew puts it this way:

He put before them another parable: “The kingdom of heaven is like a mustard seed that someone took and sowed in his field; it is the smallest of all the seeds, but when it has grown it is the greatest of shrubs and becomes a tree, so that the birds of the air come and make nests in its branches.”

Matthew 13:31-32

From this short passage, we can gather two main paths of meaning. The first is of small beginnings becoming robust and large over time. It is not just about the fast growth of a startup but more about a movement that takes time to take hold and eventually becomes an undisputed reality that no one can deny.

The other path of meaning is one of function. Once grown, the tree is not there simply to be admired and revered. Instead, it is there to provide shelter for other beings who do not have a home. It is a picture of hospitality, inclusion, and invitation. The small seed becomes a big tree that can now help small animals. It can provide shade from the sun and a safe place for rest.

A Contemporary Story from the Margins

Jesus was not talking directly about technology. We can scarcely claim to know the original meaning of the text. That is not the task here. It is instead an attempt to transpose the rich avenues of meaning from the text into our current age and in turn, build a new narrative about the development of technology in our time. A story about how technology is emerging from the margins and solving problems in a life-giving way, rather than a flashy but profitable manner. That is what I would define as mustard seed technology.

What does that look like in concrete examples? From the great continent of Africa, I can tell of at least two. One is the story of a boy who built a wind generator to pump water to his village. With limited access to books and parts, and no one to teach him, he organized an effort to build the generator using an old bike motor. The Netflix movie The Boy Who Harnessed the Wind tells this story and is worth your time. Another example is how Data Science Nigeria is training millions of data scientists in Africa. Through hackathons, boot camps, and online courses, the organization is at the forefront of AI skills democratization efforts.

Beyond these two examples, consider the power unleashed through the creative economy. As billions get access to free content on YouTube and other video platforms, knowledge can be transferred a lot faster than before. Many can learn new skills from the comfort of their home. Others can share their art and crafts and sell them in a global cyber marketplace. Entrepreneurship is flourishing at the margins as the world is becoming more connected.

Conclusion

These examples of mustard seed technology tell a different story. They speak of a subversive undercurrent of goodness in history that will not quiet down even in the midst of despair, growing inequality, and polarization. It is the story of mustard seed technology springing up in the margins of our global home, growing into robust trees of creativity and economic empowerment.

Do you have the eyes to see and the courage to testify to their truth? When you consider technology, I invite you to keep the narratives of despair at bay. For a moment, start looking for the mustard seeds happening all around you. As you find them, they will grow into trees of hope and encouragement in your heart.

Finding Hope in a Sea of Skepticism over Facebook Algorithms

The previous blog summarized the first part of our discussion on Facebook algorithms and how they can become accountable to users. This blog summarizes the second part where we took a look at the potential and reasons for hope in this technology. While the temptation of algorithm misuse for profit maximization will always continue, can these technologies also work for the good? Here are some thoughts on this direction.


Elias: I never know where the discussion is going to go, but I’m loving this. I loved the question about tradition. Social media and Facebook are part of a new tradition that emerged out of Silicon Valley. But I would say that they are part of the broader tradition emerging out of cyberspace (Internet), which is now roughly 25 years old. I would also mention Transhumanism as one of the traditions influencing Big Tech titans and many of its leaders.  The mix of all of them forms a type of Techno Capitalism that is slowly conquering the world.  

Levi: This reminds me of a post on the Facebook group that Jennifer shared a few months ago. It was a fascinating video from a Toronto TV station that looked 20 years back and showed an interview with a couple of men talking excitedly about the internet. The station then interviewed the same men today. Considering how many things have changed, they were very skeptical. There was so much optimism, and then everything became a sort of capitalist money-grabbing goal. I taught business ethics for 6 years in the Bay Area. One of the main things I taught my students was the questions we need to ask when looking at a company. What is their mission, and what are their values? What does the company say it upholds? These questions tell you a lot about what the company's tradition is.

The second thing is the actual corporate culture. One of the projects I would have the students do was to present, every week, some ethical problem in the news related to a business. It's never hard to find topics, which is depressing. We found a lot of companies with really terrible corporate cultures. Some people were incentivized from the top to do unethical things. When meeting a certain monetary goal is your standard, everything else becomes subordinated to it.

Milton Friedman said 50 years ago that the social responsibility of a business is to increase its profit. According to Friedman, anything we do legally to obtain this is acceptable. But if the goal is simply profit, then even the legal aspect becomes subordinate to it, and companies can change that by changing laws in their favor. The challenge is that this focus has to come from the top. In a company like Facebook, Zuckerberg holds the majority of shares, and the board of directors consists of people he has hand-picked. So there is very little actual oversight.

Within the question about tradition, Facebook has made it very clear that its tradition is sharing. That means sharing your personal information with other people. We would want to do that to some extent, but the company is also sharing your data with third-party companies that buy it to make money. If profit is the goal, everything becomes subordinated to that. Whether the sharing is positive or negative matters less than whether the data is being shared and making money.

Photo by Mae Dulay on Unsplash

Glimpses of Hope in a Sea of Skepticism 

Elias: I would like to invite Micah, president of the Christian Transhumanist Association, to share some thoughts on this topic. We have extensively identified the ethical challenges in this area. What does Christian Transhumanism have to say, and are there any reasons for hope?

Micah:  On the challenge of finding hope and optimism, I was thinking if we compare this to the Christian tradition and development of the creeds, you are seeing some people looking at this emergence and saying that it is a radical, hopeful, and optimistic option in a world of pessimism. If you think about ideas of resurrection and other topics like this, it is a radical optimism about what will happen to the created order. 

The problem you run into (even in the New Testament) is a point of disappointed expectations. People ask, "When is he coming? Where is the transformation? When will all this be made right?" So the apostles and the Christian community have to come in and explain the process of waiting: it will take a while, but we can't lose hope. So a good Christian tradition is to maintain optimism and hope in the face of disappointed expectations and failures as a community. In the midst of bad news, they stayed anchored on the future good news.

There is a lesson in this tradition of looking at the optimism of the early internet community and seeing how people maintain that over time. You have to have a long-term view that figures out a way to redemptively take into account the huge hurdles and downfalls you encounter along the way. This is what the Christian and theological perspectives have to offer. I’ve heard from influential people from Silicon Valley that you can’t maintain that kind of perspective from a secular angle, if you only see from a secular angle you will be sorely disappointed. Bringing the theological perspective allows you to understand that the ups and downs are a part of the process, so you have to engage redemptively to aim for something else on the other side. 

Taken from Unsplash.com

Explainability and Global Differences

Micah: From a technical perspective, I want to raise the prospect of explainable AI and algorithms. I liked what Maggie pointed out about the ecosystems where the developers don't actually understand what's going on; that's also been my experience. It's what we've been baking into our algorithms, this lack of understanding of what is actually happening. I think a lot of people have the hope that we can make our algorithms self-explanatory, and I do think we can make algorithms that explain themselves. But from a philosophical perspective, I think we can never fully trust those explanations, because even we can't fully understand our own mental processes. And even if we could explain them and trust them perfectly, there would still be unintended consequences.

I believe we need to move the focus of the criteria. Instead of seeking the perfect algorithm, focus on the inputs and outputs of the algorithm. It has to move to a place of intentionality where we are continually revisiting and criticizing our intentions. How are we measuring the algorithm, and how are we feeding it information that shapes it? These are just some questions to shift the direction of our thinking.

Yvonne: You have shared very interesting ideas. I've been having some different thoughts on what I've been reading, in terms of regulation and how companies operate in one region versus another. I have a running group with some Chinese women. They often tell me how strict the rules in China are toward social media companies. Facebook isn't even allowed to operate fully there; they have their own Chinese versions of social network companies.

Leadership plays a crucial role in what unfolds in a company and in any kind of environment. When I join a company or a group, I can tell the atmosphere based on how the leadership operates. In a lot of big companies like Facebook, the leadership and decision-makers share the same mindset about profits. Unless regulation enforces morality and ethics, most companies will get away with whatever they want. That's where we come in. I believe we, as Christians, can influence how things unfold and how things operate using our Christian perspective.

In the past year, we have all seen how useful technology can be. Even this group is a testimony of how, across different time zones, we can have a meeting without having to buy plane tickets, which would be far more expensive. Technology has its upsides when applied correctly, and how it is applied determines whether it will be helpful or detrimental to society.

Brian: Responding to the first part of what Micah said: when we think about technology and its role, it can be easier to think in terms of two vectors. One is a creative vector, where we can create value and good things. But at every step, there is the possibility of bias creeping in. I mean bias very broadly; it can be discrimination or simple mistakes that multiply over time. So there has to be a "healing" vector where bias is corrected. Once the healing vector is incorporated, the creative vector can be a leading force again. I believe that the healing vector has to start outside ourselves. The central thought of the Christian faith is that we can't save ourselves; we require God's intervention and grace. This grace moves through people and communities so that we can actively participate in it.

Elias: I think this also flows from the concept of co-creation, the partnership between humanity and God, embracing both our limitations (what some call sin) and our immense potential as divine image-bearers.

I look forward to our next discussion. Until then, blessings to all of you.