In this podcast episode, Elias Kruger and Maggie Bender talk about the latest news in the tech world: generative AI. How can this new technology change the way we create and consume content? Introducing the paradox of hope and despair, this episode brings fresh thinking to the topic. Listen now on your favorite platform.
In the previous blog, I introduced the new project AIT is embarking on and invited readers to start thinking about the future by looking first at the past. We now turn to scenario planning as a way to prepare for the future of AI and faith. For those curious, futures studies is an academic field that has developed solid business practices over the last 50 years. I even considered pursuing a degree in it in my early 30s, but that's a story for another day. The main point here is to mine some of these practices to see what could be useful as we engage in a project to imagine alternative futures for AI and faith.
What is Scenario Planning?
A common foresight practice for large institutions is scenario planning. In the 1970s, the leadership of the Royal Dutch Shell corporation wanted a robust process to prepare for an uncertain future. While the company already employed forecasting techniques for short-term planning, leaders felt the need for a different approach as they looked at the mid- and long-term future. They turned to a practice developed a decade earlier by the RAND Corporation to help them imagine new futures.
Instead of spending too much energy trying to predict the future, the leadership group sought to create plausible scenarios. That is, instead of simply extrapolating current trends, they wanted to paint pictures of possible futures at a conceptual level. Their point was not to “get it right” but to challenge executives to consider multiple alternatives.
In the early ’00s, I participated in one of these sessions with my employer. It was an exciting experience for a young professional, probably one of the reasons I got hooked on future thinking, and part of what inspired me to consider scenario planning for AI and faith. On that occasion, the group chose two main variables that would define our scenarios. Then, plotting them on a graph, we created four scenarios that alternated high and low values for each variable. Each quadrant had a catchy name describing the combination of the two variables for that scenario, as illustrated in the picture below:
In essence, scenarios are nothing more than narratives about the future. They are not striving for accuracy but must be compelling, plausible, and memorable. This way, they can play an important role in painting a picture of the future that the decision-maker can prepare for.
Why Focus on Multiple Futures?
Looking at the chart above can be overwhelming, and it raises the question: why build multiple futures? Wouldn’t that create more confusion over what to do next? That’s a fair question for anyone encountering this practice. Yet there is a strong reason for doing so. Futurist Amy Webb explains it this way:
It’s about flexibility. Most people and organizations are very inflexible in how they think about the future. In fact, it’s difficult to imagine yourself in the future, and there are neurological reasons for that. Our brains are designed to deal with immediate problems, not future ones. That plus the pace of technology improvement is becoming so fast that we’re increasingly focused on the now. Collectively, we are learning to be “nowists,” not futurists.
Here’s the problem with a “nowist” mentality: when faced with uncertainty, we become inflexible. We revert to historical patterns, we stick to a predetermined plan, or we simply refuse to adopt a new mental model.
Thinking through alternative options forces us out of our short-term mentality. It also breaks us out of preconceived, past-based ideas about how the future may unfold. In short, scenario planning undercuts the tendency to predict the future, putting the focus instead on the range of possibilities.
Who should engage in this practice?
By now, it should be clear why large organizations are already embedding this practice into their planning cycle. Yet, is that limited to large institutions? Should smaller entities or individuals consider this practice? I would contend the answer is a resounding yes. In a world of increasing uncertainty, there is a growing incentive for democratizing scenario planning.
Certainly, in the field of AI and faith, there is a pressing need for considering alternative futures. It would not be prudent to assume that AI adoption or even the make-up of the faithful will remain constant. Communities of faith are still reeling from the disruptive effects of the COVID-19 crisis. AI development and adoption continue to march on at breakneck speed. Between these two factors alone, the possibilities are numerous, even before considering the uncertainties around climate change and geopolitics.
In a fast-changing world, we need to reject the dichotomy of resorting to old thinking patterns or accepting change in passive resignation. There is a third way: preparing for possibilities with courage, caution, and hope. That is why AI Theology is engaging in scenario planning discussions to paint alternative futures. This is how we can best serve church, industry, and academia.
In my previous blog, I discussed the totalitarianism and determinism already created by today’s AI, concluding my argument with a distinction between a positive and a negative theology of AI. I also made, without any elaboration, an appeal for the latter. The terminology of this distinction may lead to some confusion. The name “artificial intelligence” is usually applied to computer-based, state-of-the-art algorithms that display behavior or skills formerly thought to be within the capacity of human beings alone. Nevertheless, an AI algorithm, and especially the whole array of AI algorithms active online, may exhibit behavior or create an environment whose qualities go beyond the level or capacity of the human mind and, even more than that, appear to be “God-like” or are treated as such.
Here enters theological reflection with two of its forms: positive and negative theology, of which the second is less common and more sophisticated than the first. Positive theology describes and discusses God by means of names and positive statements like – to give a few simple examples – “God is spirit”, “God is Lord”, “God is love”, and so on. But, according to negative theology, it is equally true that, by reason of God’s radical otherness and difference from anything in the created world, God can only be spoken of through negative statements: “God is not” or “is unlike” a “spirit” or a “lord” or “love”. Accordingly, these two distinct ways of approaching God can translate into the two following statements: “God is AI” or “God is not AI.”
Defining a Positive Theology of AI
Scandalous as it may seem, a positive theology of AI is hardly avoidable, and its subject should be less the miraculous accomplishments of future AI and all the hopes attached to it than the everyday online spectacles of the present. True, the worship of today’s AI scarcely pours out into a profession of its divinity in the manner of the Apostle Thomas when confronted with the risen Christ (“My Lord and my God!” John 20:28), but spending with it the most beatific hours of the day including the first and last waking moments (before going to pee in the morning and after doing so in the evening) certainly qualifies as a life of prayer.
In a sense, the worship of AI does more than prayer to the Christian God could ever do in this life, as AI provides light and nurture in seamless services tailored to every user’s interests, quirks, and wishes. Indeed, it casts a spell of bedazzlement on you in powerful alliance with the glamour, sleekness, and even sexiness of design. So it comes to pass that you end up in a city whose sky is created by AI, or, rather, whose sky is AI itself – a sky toward which your highest aspirations turn. Could this city and sky possibly be those prophesied by John the Seer in the Apocalypse? “And the city had no need of the sun, neither of the moon, to shine in it: for the glory of God did lighten it…” (Revelation 21:23).
Valiant Resistance or Fruitless Nostalgia?
But, let’s suppose, there arises an urge in you to resist the city and sky of AI, recognizing that they are not God’s city and God’s sky, that AI is not God, and God is unlike AI – in other words, you negate AI as God. Of course, this is more than an act of logic and goes beyond the scope of a theoretical decision. The moment you realize you have treated AI as God, and you have been wrong, you change your attitude and orientation, and start searching for God elsewhere, outside the realm of AI.
You repent.
This metanoia of sorts leads you to trade your smartphone for nature, opting to live under the real sky. There, you experience real love and friendship outside social media platforms. You may even discard Google Maps and seek to get lost in real cities and find your bearings with the help of old paper maps.
Such actions, however, are not the best negative theology of AI. Do they not exhibit a nostalgia for the past, growing wistful about the sky, the love, the city, and the God of old? Is God nostalgic? Would God set up God’s tent outside the city of AI into which the whole of creation is moving? Have you, searching for God outside the realm of AI, not engaged in an unserious, even dull form of negation?
There must be another way.
In fact, the divine realm empowered by AI carries in itself its own theological negation, moments when its bedazzlement loosens its grip and its divine face undergoes an eclipse – moments that are empty, dull, boring, meaningless, or even full of frustration or anxiety. Such moments are specific to this realm and not just the usual downside of human life. It was, if you are willing to realize it, the proliferation of such moments that made you repudiate the divinity of AI and go searching outside its realm, not just a sudden thought that occurred to you.
A Balanced Negative Theology of AI
As a matter of fact, it was not only you; such moments in the midst of all the bedazzlement happen, now and then, to all devotees. Does the ubiquitousness of such moments mean that all citizens of the city of AI participate in its theological self-negation, and, therefore, that living in it necessarily includes the act of negating it? In a sense, yes, but this is just a ubiquitous and unintended, almost automatic negation, and not the right one. As a rule, the citizens of the city live in the moment and for the moment; they naively live its bedazzlement to the full and suffer its moments of meaninglessness to the full. In doing so, however, they are unfree.
Instead, you are better off living in the city of AI accompanied by a moderate and reserved, yet constant negation. In this balanced and overall experience, you always keep the harrowing moments of emptiness and meaninglessness in mind with a view to them no longer quite coming to harrow you and, above all, with a view to AI’s bedazzlement no longer gaining the upper hand.
As a consequence of your moderate and sustained negation of AI as God (a negative theology of AI), you create a certain distance between you and AI which is nevertheless also a space of curiosity and playfulness. Precisely because you negate it in a theological sense, you can curiously turn towards AI, witness the details of its behavior and also enjoy its responsiveness to your actions. And it is precisely in this dynamic and undecided area of free play with AI, opened up by your negation, that God, defined as to what God is not (not AI) and undefined as to what God is, can be offered a space to enter.
Gábor L. Ambrus holds a post-doctoral research position in the Theology and Contemporary Culture Research Group at Charles University, Prague. He is also a part-time research fellow at the Pontifical University of St. Thomas Aquinas, Rome. He is currently working on a book on theology, social media, and information technology. His research primarily aims at a dialogue between the Judaeo-Christian tradition and contemporary techno-scientific civilization.
In this series of two posts, I’ll equip you with a simple but distinctive set of concepts that can help us think and talk about spiritual egalitarianism. This kind of conceptualization is urgently important at a time when AI systems can increasingly take on leadership and management functions in society. This post will articulate a concept of social synecdoche and explain why it is especially relevant now, in thinking about human-AI societies. The next post will apply it, in an illustrative way, to a question of church governance today.
What is Social Synecdoche?
Our thoughts here will center on a socially and sociologically important concept called synecdoche. Here are two examples of it at work:
When a Pope acts, in some meaningful sense, the Church acts.
When a President acts, in some meaningful sense, the nation acts.
Both sentences illustrate social synecdoche at work: the representation of a social whole by a single person who is a part of it. The indefinitely expansive use of this mode of group identity is what defines the term ‘axial consciousness’ in my usage. I use the terms “axial age” and “axial consciousness” to mark a substantial shift in human history: the emergence of the slave machines that we call civilization. By focusing attention on a figure who could, at least in principle, unify a human group of any size in themselves, ancient civilizations created increasingly expansive governments, eventually including a variety of warring empires.
My usage of the term “axial” provides an alternative way of framing these big history discussions about AI and ancient human history. It invites comparison (and contrast) with Ilia Delio’s more standard usage of axial language in Re-enchanting the Earth: Why AI Needs Religion.
Insofar as we are psychologically, socially, and somatically embedded in large social bodies today, it is substantially through the sympathetic “social magic” of synecdoche. Both then and now, we have access to this axial mode of consciousness whenever we identify with a representative of an organized group agent, and thereby identify with it. At the same time, we are also able to slip out of this mode and become increasingly atomized in our experience of the world.
A Visceral Connection with the Whole
For example, when we feel that a group or its leaders have betrayed us so deeply that we are no longer a part of it (that it has left us), we experience a kind of atomized consciousness that is the opposite of axial consciousness. This process is often experienced as a painful loss of identity, a confusion about who we are, precisely because we substantially find our identities in this kind of group through representation.
This capacity is rooted in a deep analogy between a personal body and a social body, and this analogy is not only conceptual but also physiological: when our nation is attacked, we feel attacked, and when something happens to our leader, we spontaneously identify with them as a part of the group they represent. Social synecdoche is therefore part of the way we reify social bodies. Reifying a social body is what we do when we make a country or Church into a thing, through group psychology processes that are consciously experienced as synecdoche: the representation of the whole by a part.
Synecdoche and Representative Governments
This notion of social synecdoche can help us notice new things and reframe familiar discussions in interesting ways. For example, how does social synecdoche relate to present debates about representative democracy vs. autocracy? Representative government refines and extends this type of synecdoche, articulating it at more intermediate scales in terms of space (districts, representing smaller areas), time (limited terms, representing a people for an explicit time), and types of authority (separations of powers, representing us in our different social functions).
This can create a more flexible social body, in certain contexts, because identification is distributed in ways that give the social body more points of articulation and therefore degrees of freedom and potential for accountability. For all of this articulation, representative government remains axial, just more fully articulated. If it weren’t axial in this sense, representative government wouldn’t reach social scale in the first place.
So sociologically and socially, we are still very much in the axial age, even in highly articulated representative governments. In a real sense, representative government is an intensification of and deepening articulation of axial consciousness; it responds to the authoritarianism of a single representative by dramatically multiplying representation.
Synecdoche and the Axial Age
Ever since social synecdoche facilitated the first expanding slave machines, there has been a sometimes intense tug-of-war between atomized consciousness and axial consciousness. This effort to escape axial social bodies through individuation has always been a feature of the axial experience, often because axial group agents are routinely capricious and cruel and unjust. For example, our first known legal code, the Code of Ur-Nammu, bears witness to the ways in which a legal representative of the axial social body incentivized the recuperation of slaves who desperately tried to individuate:
If a slave escapes from the city limits, and someone returns him, the owner shall pay two shekels to the one who returned him.
For all of the privation involved in privateness, some people throughout the axial period have also attempted various forms of internal emigration (into the spirit or mind) as a means of escape. Some, but certainly not all, axial spirituality can be understood in these terms. The Hebrew prophetic tradition, for example, does not generally engage in internal escapism, but instead seeks to hold axial social bodies to account, especially by holding their representatives accountable.
Social Synecdoche in the Age of AI
Our long history as axial beings suggests that we will probably stay like this, even as we build the technology that will enable us to make AI Presidents and Kings. It seems possible that we will have AI systems that can be better than humans at fulfilling the office of President before we have AI systems that are better than us at plumbing or firefighting. In part this is because the bar for good political leadership is especially low, and in part it reflects the relative ease of automating a wide range of creative, social and analytical work through advanced text generation systems. If this sounds absurd, I’d recommend getting caught up on the developments with GPT-3 and similar systems. You can go to openai.com and try it out if you like.
How hard would it be for an AI system to more faithfully or reliably represent your nation or church or city or ward than the current ones? Suppose it can listen and synthesize information well, identify solutions that can satisfy various stakeholders, and build trust by behaving in a reliable, honest and trustworthy way. And suppose it never runs the risk of sexually molesting someone in your group. By almost any instrumental measure, meaning an external and non-experience-focused measure of its ability to achieve a goal, I think that we may well have systems that do better than a person within a generation. We might also envision a human President who runs on a platform of just approving the decisions of some AI system, or a President who does this secretly.
In such a context, as with any other case where AI systems outperform humans, human agents will come to seem like needless interlopers who only make things worse; it will seem that AI has ascended to its rightful throne.
A Call to Egalitarianism
But this precisely raises the central point I’d like to make:
In that world, humans become interlopers only insofar as our goals are merely instrumental. That is to say, this is the rightful place of AI only insofar as we conceive of leadership merely as a matter of receiving inputs (public feedback, polling data, intelligence briefings) and generating outputs (a political platform, strategy, public communications, and the resultant legitimation structure rooted in social trust and identification).
This scenario highlights the limits of instrumentality itself. Hence, instead of having merely instrumental goals for governance, I believe that we urgently need to treat all humans as image-bearers, as true ends in themselves, as Creation’s priests.
A range of scholarship has highlighted the basic connection between image-bearing and the governance functions of priests and kings in the religions of the Ancient Near East. Image-bearing is, then, very early language for social synecdoche. In an axial age context, which was and is our context, the notion that all of humanity bears God’s image remains a challenging and deeply egalitarian response to the problem of concentrated power that results from social synecdoche. That is what I’ll turn to in the next post.
Daniel Heck is a Pastor at Central Vineyard Church in Columbus, OH. His work focuses on immigrant and refugee support, spiritual direction, and training people of all ages how to follow the teachings of Jesus. He is the author of According to Folly, founder of Tattered Books, and writes regularly on Medium: https://medium.com/@danheck
At our February AI Theology Advisory Board meeting, Ana Catarina De Alencar joined us to discuss her research on sustainable AI and gender equality, as well as how she integrates her faith and work as a lawyer specializing in data protection. In Part 1 below, she describes her research on the importance of gender equality as we strive for AI sustainability.
Elias: Ana, thank you for joining us today. Why don’t you start by telling us a little about yourself and about your involvement with law and AI.
Ana: Thank you, Elias, for the invitation. It’s very nice to be with you today. I am a lawyer in a big law firm here in Brazil. I work with many startups on topics related to technology. Today I specialize in data protection law. This is a very recent topic for corporations in Brazil. They are learning how to adjust and adapt to these new laws designed to protect people’s data. We consult with them and provide legal opinions about these kinds of topics. I’m also a professor. I have a master’s degree in philosophy of law, and I teach in this field.
In my legal work, I engage with many controversial topics involving data protection and AI ethics. For example, I have a client who wants to implement a facial recognition system that can be used with children and teenagers. From a legal point of view, it can pose a considerable risk to privacy, even when we see the many favorable points this type of technology can provide. It can also be very challenging to balance the ethical perspective with the benefits that our clients see in certain technologies.
Gender Equality and Sustainable AI
Elias: Thank you. There’s so much already in what you shared. We could have a lot to talk about with facial recognition, but we’ll hold off on that for now. I’d like to talk first about the paper you presented at the conference where we met. It was a virtual conference on sustainable AI, and you presented a paper on gender equality. Can you summarize that paper and add anything else you want to say about that connection between gender equality and sustainable AI?
Ana: This paper came out of research I was doing for Women’s Day, which is celebrated internationally. I was thinking about how I could build something uniting this day specifically and the topic of AI, and the research became broader and broader. I realized that it had something to do with the sustainability issue.
Sustainability and A Trans-Generational Point of View
When we think of AI and gender, often we don’t think with a trans-generational point of view. We fail to realize that interests in the past can impact interests in the future. Yet, that is what is happening with AI when we think about gender. The paper I presented asks how current technology impacts future generations of women.
The technology offered in the market is biased in a way that creates a less favorable context for women in generations to come. For example, when a natural language processing system sorts resumes, it often selects them in a way that favors men over women. Another example is when we personify AI systems as women or as men, which generates or perpetuates certain ideas about women. Watson from IBM is a powerful tool for business, and we personify it as a man. Alexa is a tool for helping you out with your day-to-day routine, and we personify it as a woman. It creates the idea that maybe women are servile, just there to support society in lower tasks, so to speak. I explored other examples in the paper as well.
All of these things together are making AI technology biased and creating ideas about women that can have a negative impact on future generations. It creates a less favorable situation for women in the future.
Reinforcing and Amplifying Bias
Levi: I’m curious if you could give an example of what the intergenerational impact looks like specifically. In the United States, racial disparities persist across generations. Often it is because, for instance, if you’re a Black American, you have a harder time getting high-paying jobs. Then your children won’t be able to go to the best schools, and they will also have a harder time getting high-paying jobs. But it seems to be different with women, because their children may be women or men. So I wonder if you can give an example of what you mean with this intergenerational bias.
Ana: We don’t have concrete examples yet to show that future impact. However, we can imagine how it would shape future generations. Say we use some kind of technology now that reinforces biases–for example, a recruiting system that downranks resumes mentioning the word ‘women,’ ‘women’s college,’ or something feminine. Or a system whose word associations are gendered–for instance, the word ‘cook’ is associated with women, and ‘children’ is associated with women. If we use these technologies broadly, we are going to reinforce biases that already exist in our society, and we are going to amplify them for future generations. These biases become normal for everybody, now and into the future. It becomes more systemic.
Racial Bias
You can use this same thinking for the racial bias, too. When you use these apps and collect data, it reinforces systemic biases about race. That’s why we have to think ethically about AI, not only legally, because we have to build some kind of control in these applications to be sure they do not reinforce and amplify what is already really bad in our society for the future.
Levi: There’s actually a really famous case that illustrates this from Harvard Business students. Black students and Asian students sent their applications out for job interviews, and then they sent out a second application that they had whitewashed. They removed things on their CVs that were coded with their race–for instance, being the president of the Chinese Student Association or president of the Black Student Union, or even specific sports that are racially coded. They found that when they whitewashed their applications, even though they removed all of these accomplishments, they got significantly more callbacks.
Elias: I have two daughters, ages 12 and 10. If AI tells them that they’re going to be more like Alexa, not Watson, it influences their possibilities. That is intergenerational, because we are building a society for them. I appreciated the paper you presented, Ana, because AI does have an intergenerational impact.
In Part 2 we will continue the conversation with Ana Catarina De Alencar and explore the way she thinks about faith and her work.
Who doesn’t like to listen to podcasts? Listener numbers are growing by the day on the major platforms (Spotify, Google, Apple). But is there QUALITY content?
AI Theology presents to you a new podcast. Elias Kruger and Maggie Bender discuss the intersection between theology and technology in the budding world of AI and other emerging technologies. They bring the best from academia, industry, and church together in a lively conversation. Join us and expand your mind with topics like AI ethics, AI for good, guest interviews, and much more.
Here is episode 1: Faith, AI and the Climate Crisis
Elias Kruger and Maggie Bender discuss how AI and faith can help address the climate crisis. We dive into some controversy here, including how religion has not always been an ally in the battle for conservation. Yet what are the opportunities for AI and faith to join forces in this daunting challenge? The conversation covers creation, worship, algorithms, optimization, and recent efforts to save the Amazon.
After listening, don’t forget to share with friends and give us your feedback. Also, don’t forget to rate the episodes on the podcast platforms.
At our January Advisory Board meeting, we explored the question of whether we live in a technological age. You can find Part 1 of our conversation in this post. In Part 2 below, we discuss a new telos of technology.
Elias: I think we established, for the most part, that this is a technological age. Maybe we always have been in a technological age, but technology is definitely part of our lives now. Some of you started hinting at the idea that technology is pointing towards something. It is teleological, from the Greek word telos, meaning goal. Technology leads toward something. And I think Chardin saw technology leading into the Omega point, while Ellul saw it more as a perversion of a Christian eschaton. In his view, the Christian position was to resist and subvert it.
The question I have now is very broad. How do we forge a new vision, a new telos, for technology? Or maybe even, what would that telos be? We talked earlier about technology for the sake of capitalism or consumption. What would be a new telos for technology, and how would we forge this new vision?
No Overall Goal for Technology
František: I have a great colleague, a longtime friend with a technical background. I studied with him in Amsterdam. He’s now an important person in a company developing AI. He’s a member of the team that programmed an AI to play poker. So he’s quite skillful in programming and actually working on the development of AI. He’s developing amazing things.
I spoke with him about this telos question: “What is the aim of technology?” He said, “Well, there is no such thing as an overall goal.” The goal is to improve our program so it can fight more sophisticated threats to our system. That’s what we are developing. So basically, there is no general telos of technology. There is only a narrow focus. There is just the goal to improve technology, so that it gets better and better serves the concrete purpose for which it is built. It’s a very particular focus.
A Clash of Mentalities
I was very unhappy with this answer. After all, there must be some goal. And he said, “Well, that’s the business of theologians.” My friend said he doesn’t believe in anything. Not in theism, not even in atheism, he just doesn’t bother discussing it. So for him, there is no God, no goal, nothing. We’re just living our life. And we’re improving it. We are improving it step by step. He’s a well-studied, learned person, and he sees it like that. I’ve experienced the same thing during conversations with many of my friends who are working in technology on the technical or the business side.
So they would say, perhaps, there is no goal. That’s a clash of mentalities. We are trying to build a bridge between this technological type of thinking and the theological, philosophical perspective which intends to see the higher goal.
I don’t have a good argument. You can try to convince him that there is a higher goal, but he doesn’t believe in a higher goal. So I’m afraid that a lot of people developing technology do not see further than the next step of a particular piece of technology. And I’m afraid that here we are, getting close to the vision of Brave New World, you know, the novel. People are improving technology at a particular stage, but they do not see the full picture. It is all about improving technology to the next step. There is no long-term thinking. Perhaps there are some visionaries, but this is at least my experience, which I’m afraid is quite common in the field of technology.
The Human Telos of Technology
Maggie: I feel like that happens a lot on the developer side of technology. But at least the intent within technology should be that you have some sort of product owner or product manager who is supposed to supply a vision. That person could start thinking about the goal of technology. I know a lot of times within technology, the product manager draws out the user story. So, “As a user, I want to ______, so that ______.” And it’s the so that which becomes the bigger element that’s drawn out. But that’s still at a very microscopic level. So yeah, there might be an intersection with the larger goal of technology, but I don’t think it really is used there very well.
Elias: Some of you who have known me for a long time know how much I have struggled with my job and finding meaning in what I do. And a lot of times it was exactly like you described, František. It was like, What am I doing here? What is this for? And I found, at least recently, this sweet spot where I found a lot of meaning in what I was doing. It wasn’t like I was changing people’s lives. But I found this passion to make things better and more efficient. When you are in a large corporation things can be so bureaucratic. And we were able to come in and say, I don’t care how you do it, we’re gonna accomplish this thing. And then you actually get it done. There is a sense of purpose and satisfaction in that alone.
The Creative Value of Work
I would venture to say that your friend, František, is actually doing creative work, co-creative work with God. He may not call it that. But there is something about bringing order out of chaos. I think even in a situation where the user or the developer is not aware, there might be goals happening there that we could appreciate and describe theologically.
For instance, going back to my experience, it might just be the phase that I’m in at work. But I’m feeling a lot of satisfaction in getting things done nowadays. Just simply getting things done. How can I put that theologically? I don’t know. Is that how God felt after creation? But there is something about accomplishing things. Now, if that’s all you do, obviously, eventually it just becomes meaningless. But there is something meaningful in the act of accomplishing a task.
Maggie: And just the sanctity of work too. Your friend, he’s working, he’s doing something. And in that type of work, even though it’s labor, I think it’s still a part of the human telos.
František: Yeah, I think so, even though he thinks that there is no human telos as such. And we keep having conversations, and he still sees something important in the conversations. So that means he still keeps coming to the conversation with philosophers and theologians, even though he sort of disregards their work because he sees it as not relevant to his work. But I think that’s a sign of hope in his heart.
If bureaucracies are full of human cogs, what’s the difference in replacing them with AI?
(For this entry, following the main topic of classification by machines vs. humans, we consider classifications and the judgments joined to them, given their prospect of producing life-altering decisions. It was inspired by a sermon given today by pastor Jim Thomas of The Village Chapel, Nashville, TN.)
Esau’s Fateful Choice
In Genesis 25:29–34 we see Esau, the firstborn son of Isaac, coming in from the fields famished. Finding his younger brother Jacob in possession of a hearty stew, Esau pleads for some. My paraphrase follows: Jacob replies, “First you have to give me your birthright.” “Whatever,” says Esau, “You can have it, just gimme some STEWWW!” …”And thus Esau sold his birthright for a mess of pottage.”
Simply put, it is a bad idea to make major, life-altering decisions while in a stressed state, examples of which are often drawn from the acronym HALT:
Hungry
Angry
Lonely
Tired
Sometimes HALT becomes SHALT by adding “Sad”.
When we’re in these (S)HALT states, our brains rely on quick inferences “burned” into them via instinct or training. Dual Process Theory in psychology calls this “System 1” or “Type 1” reasoning (cf. Kahneman, 2003; Strack & Deutsch, 2004). System 1 includes the fight-or-flight response. While System 1 is fast, it is also prone to errors and oversimplification, and it operates on biases such as stereotypes and prejudices.
System 1 relies on only a tiny subset of the brain’s overall capacity, the part usually associated with involuntary and regulatory systems of the body governed by the cerebellum and medulla, rather than the cerebrum with its higher-order reasoning capabilities and creativity. Thus trying to make important decisions (if they’re not immediate and life-threatening) while in a System 1 state is inadvisable if waiting is possible.
At a later time we may be more relaxed, content, and able to engage in so-called System 2 reasoning, which is able to consider alternatives, question assumptions, perform planning and goal-alignment, display generosity, seek creative solutions, etc.
Hangry Computers Making Hasty Decisions
Machine Learning systems, other statistics-based models, and even rule-based symbolic AI systems, as sophisticated as they may currently be, are at best operating in a System 1 capacity — to the extent that the analogy to the human brain holds (See, e.g., Turing Award winner Yoshua Bengio’s invited lecture at NeurIPS 2019: video, slides.)
This analogy between human System 1 and AI systems is the reason for this post. AI systems are increasingly serving as proxies for human reasoning, even for important, life-altering decisions. And as such, news stories appear daily with instances of AI systems displaying bias and unjustly employing stereotypes.
So if humans are discouraged from making important decisions while in a System 1 state, and machines are currently capable of only System 1, then why are machines entrusted with important decision-making responsibilities? This is not simply a matter of companies choosing to offer AI systems for speed and scale; governments do this too.
Government is a great place to look to further this discussion, because government bodies are chock-full of humans making life-altering decisions (for others) based on System 1 reasoning — tired people implementing decisions based on procedures and rules: bureaucracy.[1] In this way, whether it is a human being following a procedure or a machine following its instruction set, the result is quite similar.
Human Costs and Human Goods
The building of a large bureaucratic system provides a way to scale and enforce a kind of (to borrow from AI Safety lingo) “value alignment,” whether for governments, companies, or non-profits. The movies of Terry Gilliam (e.g., Brazil) illustrate the excesses of this well, with vast office complexes of desk after desk of office drones. The socio-political theorist Max Weber, who advanced many of our conceptions of bureaucracy as a positive means to maximize efficiency and eliminate favoritism, was aware of the danger of excess:
“It is horrible to think that the world could one day be filled with nothing but those little cogs, little men clinging to little jobs and striving towards bigger ones… That the world should know no men but these: it is such an evolution that we are already caught up, and the great question is, therefore, not how we can promote and hasten it, but what can we oppose to this machinery in order to keep a portion of mankind free from this parcelling-out of the soul, from this supreme mastery of the bureaucratic way of life.”
Max Weber, Gesammelte Aufsätze zur Soziologie und Sozialpolitik, p. 412 (1909).
Thus by outsourcing some of this drudgery to machines, we can “free” some workers from having to serve as “cogs.” This bears some similarity to the practice of replacing human assembly-line workers with robots in hazardous conditions (e.g., welding, toxic environments), whereas in the bureaucratic sense we are removing people from mentally or emotionally taxing situations. Yet one may ask what the other costs of such an enterprise may be, if any: If the system is already “soulless,” then what do we lose by having the human “cogs” in the bureaucratic machine replaced by machines?
The Heart of the Matter
So, what is different about machines doing things, specifically performing classifications (judgments, grading, etc.) as opposed to humans?
One difference between the automated and human forms of bureaucracy is the possibility of discretionary action on the part of humans, such as the demonstration of mercy in certain circumstances. God exhorts believers in Micah 6:8 “to love mercy.” In contrast, human bureaucrats going through the motions of following the rules of their organization can produce what Hannah Arendt termed “the banality of evil,” typified in her portrayal of Nazi war criminal Adolf Eichmann, whom she described as “neither perverted nor sadistic,” but rather “terrifyingly normal.”
“The sad truth of the matter is that most evil is done by people who never make up their minds to be or do evil or good.”
Hannah Arendt, The Life of the Mind, Volume 1: Thinking, p.180 (1977).
Here again we see the potential for AI systems, as the ultimate “neutral” rule-followers, to facilitate evil on massive scales. So if machines could somehow deviate from the rules and show mercy on occasion, how would that even work? Which AI researchers are working on the “machine ethics” issue of determining when and how to show mercy? (At the time of writing, this author is unaware of such efforts.) Given that human judges have a tendency to show favoritism and bias in the selective granting of mercy to certain ethnicities more than others, and that automated systems have shown bias even in rule-following, would the matter of “mercy” simply be a new opportunity for automated unfairness? It is a difficult issue with no clear answers.
The Human Factor
One other key, if pedantic, difference between human and machine “cogs” is the simple fact that with a human being “on the line,” you can try to break out of the limited options presented by menus and if-then decision trees. Even the latest chatbot helper interfaces currently deployed are little more than natural language front ends to menus. With a human being, by contrast, you can explain your situation, and they can (hopefully) work with you or connect you to someone with the authority to do so.
I suspect that in the next ten years we will see machine systems making increasing forays into System 2 reasoning categories (e.g., causality, planning, self-examination). I’m not sure how I feel about the prospect of pleading with a next-gen chatbot to offer me an exception because the rule shouldn’t apply in this case, or some such. 😉 But it might happen — or, more likely, such a system will decide whether to kick the matter up to a real human.
Summary
We began by talking about Jacob and Esau: Jacob, the creative, swindling deal-broker, and Esau, who quite literally “goes with his gut.” Then we talked about reasoning according to the two systems described by Dual Process Theory, noting that machines currently do System 1 quite well. The main question was: if humans make numerous erroneous and unjust decisions in a System 1 state, how do we justify the use of machines? The easy answers available seem to be a cop-out: the incentive of scale, speed, and lower cost. And this is not just “capitalism”; these incentives would still be drivers in a variety of socio-economic situations.
Another answer came in the form of bureaucracy: System 1 decision-making already exists at scale, albeit with humans as operators. We explored “what’s different” between a bureaucracy implemented via humans vs. machines. We realized that “what is lost” is the humans’ ability to transcend, if not their authority in the organization, at least the rigid and deficient set of software designs imposed by vendors of bureaucratic IT systems. Predicting how the best of these systems will improve in the coming years is hard. Yet, given the prevalence of shoddy software in widespread use, I expect to prefer talking to a human in Mumbai rather than “Erica,” the Bank of America chatbot, for quite some time.
[1] Literally “government by the desk,” a term coined originally as a pejorative by the 18th-century French economist Jacques Claude Marie Vincent de Gournay; it has since entered common usage.
As a practicing software product manager currently working on my third integration of a machine learning (ML) enabled product, my understanding of and interaction with models are much more quotidian and, at times, downright boring. But it is precisely this form of ML that needs more attention, because ML is the primary building block of artificial intelligence (AI). In other words, in order to get AI right, we need to first focus on how to get ML right. To do so, we need to take a step back and reflect on the question: how can machine learning work for human flourishing?
First, we’ll take some cues from liberation theology to properly orient ourselves. Second, we need to understand how ML models are already impacting our lives. Last, I will provide a pragmatic list of questions for those of us in the technology field that can help move us towards better ML models, which will hopefully lead to better AI in the future.
Gloria Dei, Vivens Homo
Let’s consider Elizabeth Johnson’s recap of Latin American liberation theology. To the standard elements of Latin American liberation theology–the preferential option for the poor, the Exodus narrative, and the Sermon on the Mount–she adds a consideration from St. Irenaeus’s phrase Gloria Dei, vivens homo. Translated as “the glory of God is the human being fully alive,” this means that human flourishing is God’s glory manifesting in the common good. One can think of the common good not simply as an economic factor. Instead, it is an intentional move towards the good of others by seeking to dismantle the structural issues that prevent flourishing.
Now, let’s dig into this a bit deeper: what prevents human flourishing? Johnson points to two things: 1) inflicting violence on people or 2) neglecting their good. Both of these translate “into an insult to the Holy One” (82). Not only do we need to refrain from inflicting violence on others (which we can all agree is important), but we also need to be attentive to their good. Now, let’s turn to the current state of ML.
Big Tech and Machine Learning
We’ll look at two recent works to understand the current impact of ML models and hold them to the test. Do they inflict violence? Do they neglect the good? The 2020 investigative documentary (with a side of narrative drama) The Social Dilemma (Netflix) and Cathy O’Neil’s Weapons of Math Destruction are both popular, accessible introductions to how actual ML models touch our daily lives.
The Social Dilemma takes us into the fast-paced world of the largest tech companies (Google, Facebook, Instagram, etc.) that touch our daily lives. The primary use case for machine learning in these companies is to drive engagement by scientifically applying methods of persuasion. More clicks, more likes, more interactions, more is better. Except, of course, when it isn’t.
The film sheds light on how the desire to increase activity and monetize their products has led to social media addiction and manipulation, and it even presents data on the increased rates of suicide among pre-teen girls. Going further, the movie points out that, for these big tech companies, the applications themselves are not the product; humans are. That is, the gradual but imperceptible change in behavior itself is the product.
These gradual changes are fueled and intensified by hundreds of small daily randomized A/B tests that change minor variables to influence behavior. For example, do more people click on this button when it is purple or green? With copious amounts of data flowing into the system, the models become increasingly accurate, so the model knows (better than any human) who is going to click on a particular ad or react to a post.
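To make this concrete, here is a minimal sketch of the kind of A/B comparison described above. The button-color scenario, the numbers, and the helper function are hypothetical illustrations, not taken from the film or from any company's actual tooling; real platforms run far more elaborate experimentation pipelines, but the underlying logic is roughly this simple.

```python
# Minimal sketch of the kind of A/B test described above (hypothetical numbers).
# Two button colors are shown to randomized user groups; we compare click-through
# rates with a two-proportion z-test to judge whether the difference is real.
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(clicks_a, views_a, clicks_b, views_b):
    """Return both click rates, the z statistic, and a two-sided p-value."""
    rate_a = clicks_a / views_a
    rate_b = clicks_b / views_b
    pooled = (clicks_a + clicks_b) / (views_a + views_b)
    se = sqrt(pooled * (1 - pooled) * (1 / views_a + 1 / views_b))
    z = (rate_b - rate_a) / se
    p = 2 * (1 - NormalDist().cdf(abs(z)))
    return rate_a, rate_b, z, p

# Hypothetical results: purple button (A) vs. green button (B)
rate_a, rate_b, z, p = two_proportion_z_test(clicks_a=480, views_a=10_000,
                                              clicks_b=540, views_b=10_000)
print(f"purple: {rate_a:.2%}  green: {rate_b:.2%}  z={z:.2f}  p={p:.3f}")
```

Run thousands of such tests concurrently, each nudging a minor variable, and the "gradual but imperceptible" changes in behavior the film describes are exactly what gets optimized.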
This is how they generate revenue. They target ads at people who are extremely likely to click on them. These small manipulations and nudges to elicit behavior have become such a part of our daily lives that we are no longer aware of their pervasiveness. Hence, humans become commodities that need to be continuously persuaded. Liberation theology would look to this documentary as a way to show concrete ways in which ML is currently inflicting violence and neglecting the good.
Machine Learning Outside the Valley
Perhaps ‘normal’ companies fare better? Non-tech companies are getting in on the ML game as well. Unlike tech companies that focus on influencing user behavior for ad revenue, these companies focus on ML as a means to reduce the workload of individual workers or reduce headcount and make more profitable decisions. Here are a few types of questions they would ask: “Need to order stock and determine which store it goes to? Use Machine Learning. Need to find a way to match candidates to jobs for your staffing agency? Use ML. Need to find a way to flag customers that are going to close their accounts? ML.” And the list goes on.
Cathy O’Neil’s work gives us insight into this technocratic world by sharing examples from credit card companies, recidivism prediction, and for-profit colleges, and even by challenging the US News & World Report college rankings. O’Neil coins the term “WMD,” Weapons of Math Destruction, for models that inflict violence and neglect the good. The three criteria of WMDs are lack of transparency, exponential growth, and a pernicious feedback loop; it is the third that needs the most unpacking.
The pernicious feedback loop is fed by selection biases in the original data set. The example she gives in chapter 5 is PredPol, a big-data startup whose product is used by police departments to predict crime. The model learns from historical data in order to predict where crime is likely to happen, using geography as its key input. The difficulty here is that when police departments choose to include nuisance data in the model (panhandling, jaywalking, etc.), the model becomes more likely to predict new crime in those locations, which in turn prompts the police department to send more patrols to those areas. More patrols mean a greater likelihood of seeing and ticketing minor crimes, which in turn feeds more data into the model. In other words, the model becomes a self-fulfilling prophecy.
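To see how such a loop can validate itself, consider the toy simulation below. It is only an illustrative sketch under stated assumptions, not PredPol's actual (proprietary) model: both neighborhoods have the same true rate of minor offenses, but the one with a slightly larger historical record receives more patrols, records more incidents, and so keeps "confirming" the prediction.

```python
# Toy illustration of the pernicious feedback loop (NOT PredPol's actual model).
# Assumption: patrols are allocated in proportion to previously recorded incidents,
# and new records grow with patrol presence rather than with the true offense rate.
import random

random.seed(0)
TRUE_RATE = 0.05                 # identical underlying rate of minor offenses
RESIDENTS = 10_000               # same population in both neighborhoods
recorded = {"A": 60, "B": 40}    # historical records start slightly skewed

for week in range(1, 9):
    total_records = sum(recorded.values())
    for hood in recorded:
        patrol_share = recorded[hood] / total_records   # prediction drives patrols
        offenses = sum(random.random() < TRUE_RATE for _ in range(RESIDENTS))
        observed = int(offenses * patrol_share)         # patrols drive what gets recorded
        recorded[hood] += observed
    print(f"week {week}: {recorded}")

# Both neighborhoods offend at the same rate, yet the initially over-recorded one
# keeps accumulating more records (and patrols): the prediction is confirmed by
# data it helped create, i.e., a self-fulfilling prophecy.
```

The point of the sketch is not the particular numbers but the structure: nothing in the loop ever measures the true offense rate, so the historical skew never corrects itself.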
A Starting Point for Improvement
As we can see from these two works, we are far from the topic of human flourishing. Both point to many instances where ML models are currently not only neglecting the good of others but also inflicting violence. Before we can reach the ideal of Gloria Dei, vivens homo, we need to make a Liberationist move within our technology to dismantle the structural issues that prevent flourishing. This starts at the design phase of these ML models. At that point, we can ask key questions to address egregious issues from the start. This would be a first step toward making ML models (and later AI) work for human flourishing and God’s glory.
Here are a few questions that will start us on that journey:
Is this data indicative of anything else (can it be used to prove another line of thought)?
If everything went perfectly (everyone took this recommendation, took this action), then what? Is this a desirable state? Are there any downsides to this?
How much proxy data am I using? (Proxy data is data that ‘stands in’ for other data.)
Is the data balanced (age, gender, socio-economic)? What does this data tell us about our customers?
What does this data say about our assumptions? This is a slightly different cut from the question above; it is aimed more at the presuppositions of whoever is selecting the data set.
Last but not least: zip codes. As zip codes are often a proxy for race, use them with caution. Perhaps use state-level data or three-digit zip code prefixes to average out the results, and monitor outcomes by testing for bias, as in the sketch below.
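As a closing illustration of that last point, here is a minimal sketch of what such a mitigation and monitoring step could look like. The data, column names, and values are hypothetical; the idea is simply to coarsen zip codes to their three-digit prefix and then compare the model's positive-recommendation rate across demographic groups (a basic demographic-parity check).

```python
# Hypothetical sketch: coarsen zip codes so they work less well as a proxy for
# race, then monitor the model's selection rate per demographic group.
import pandas as pd

def coarsen_zip(zip_code: str) -> str:
    """Keep only the first three digits of a ZIP code."""
    return str(zip_code).zfill(5)[:3]

def selection_rates(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.Series:
    """Share of positive model outcomes per group; large gaps warrant investigation."""
    return df.groupby(group_col)[outcome_col].mean().sort_values()

# Hypothetical scored applications (columns and values are made up for illustration)
applications = pd.DataFrame({
    "zip_code":    ["53202", "53206", "60601", "60621"],
    "group":       ["A", "B", "A", "B"],   # e.g., a self-reported demographic attribute
    "recommended": [1, 0, 1, 0],           # the model output being audited
})
applications["zip3"] = applications["zip_code"].map(coarsen_zip)

rates = selection_rates(applications, "group", "recommended")
print(rates)
print("max gap between groups:", rates.max() - rates.min())
```

None of this settles the ethical questions above, but it gives a design team a concrete place to start looking before a model ships.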
Maggie Bender is a Senior Product Manager at Bain & Company within their software solutions division. She has a M.A. in Theology from Marquette University with a specialization in biblical studies where her thesis explored the implications of historical narratives on group cohesion. She lives in Milwaukee, Wisconsin, enjoys gardening, dog walking, and horseback riding.
Sources:
Johnson, Elizabeth A. Quest for the Living God: Mapping Frontiers in the Theology of God (New York: Continuum, 2008), 82-83.
O’Neil, Cathy. Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy (New York: Broadway Books, 2017), 85-87.
Orlowski, Jeff. The Social Dilemma (Netflix, 2020), 1 hr. 57 min., https://www.netflix.com/title/81254224.
I am hesitant to watch French movies, as the protagonist often dies at the end. Would this be another case of learning to love the main character only to watch her die? Given the movie’s premise, it was worth the risk. Similar to Eden, Netflix’s Oxygen is a powerful exploration of the intersection of hope and technology.
It is uncommon to see a French movie make it to the top charts with American audiences. Given our royal laziness, we tend to stay away from anything that has subtitles, preferring the glorified theatrics and simplistic plots of Hollywood. The French are too sophisticated for that. For them, movies are not entertainment but an art form.
Realizing I had never watched a French sci-fi thriller, I decided it was time to walk down that road. I am glad I did. The next day, I reflected on the movie’s plot after re-telling the whole story to my wife and my daughter. Following the engaging conversation that ensued, I realized there was enough material for an AI theology review.
Simple Plot of Human AI Partnership
You wake up and find yourself trapped in a capsule. You knock on the walls eventually activating an AI that informs you that you are in a cryogenic chamber. There is no way of knowing how you got there and how you can get out. You have 90 minutes before the oxygen runs out. The clock is ticking and you need to find a way to survive or simply accept your untimely death.
Slowly, the main character, Elizabeth, played by Mélanie Laurent, discovers pieces of the puzzle about who she is, why she is in the chamber, and ultimately what options she has. This journey is punctuated by painful discoveries and a few close calls, building the suspense throughout the feature.
Her only companion throughout this ordeal is the chamber’s AI voice assistant, Milo. She converses, argues, and pleads with him throughout as she struggles to find a way to survive. The movie revolves around their unexpected partnership, as the AI is her only way to learn about her past and communicate with the outside world. The contrast between his calm monotone and her desperate cries further energizes the movie’s dramatic effect.
In my view, the plot’s simple premise, along with Laurent’s superb performance, makes the movie work even as it stays centered on one life and one location the whole time.
Spoiler Alert: The next sections give away key parts of the plot.
AI Ethics, Cryogenics and Space Travel
Oxygen is the type of film you wake up the next day thinking about. That is, the impact is not clearly felt until later. There is so much to process that its meaning does not become clear right away. The viewer is so involved in the main character’s ordeal that there is no time to reflect on the major ethical, philosophical, and theological issues that emerge in the story.
For example, once Elizabeth wakes up, one of the first things Milo offers her is sedatives. She refuses, preferring to be alert in her struggle for survival rather than calmly accepting her slow death. In one of the most dramatic scenes of the movie, Milo follows protocol to euthanize her as she is reaching the end of her oxygen supply. In an ironic twist that Elizabeth picks up on, the AI asks her permission to administer sedatives but does not consult her about the ultimate decision to end her life. While a work of fiction, this may very well be a sign of things to come as assisted suicide becomes legal in many parts of the world. Is it assisted suicide or humane end-of-life care?
In an interesting combination, Oxygen portrays cryogenics, cloning, and space travel as the ultimate solution for human survival. As humanity faced a growing host of incurable diseases, it sent a spaceship with thousands of clones in cryogenic chambers to find a cure on another planet. Elizabeth, as she learns midway, is a clone of a famous cryogenics scientist, carrying her memories and DNA. This certainly raises interesting questions about the singularity of the human soul. Can it really transfer to clones, or are they altogether different beings? Are memory and DNA the totality of our being, or are there transcendent parts impossible to replicate in a lab?
Co-Creating Hopeful Futures
In the end, human ingenuity prevails. Through a series of discoveries, Liz finds a way to survive. It entails asking Milo to transfer the oxygen from other cryogenic chambers into hers. Her untimely awakening was the result of an asteroid collision that affected a number of other chambers. After ensuring there were no other survivors in these damaged chambers, she asks for the oxygen transfer.
To my surprise, the movie turns out to be a colossal affirmation of life. Where the flourishing of life is, there is also theology. While having no religious content, the story shows how love for self and others can lead us to fight for life. Liz learns that her husband’s clone is on the spaceship, which gives her a reason to go on. This stays true even after she learns that she herself is a clone and has, in effect, never met or lived with him. The memory of their life together is enough to propel her forward, overcoming seemingly insurmountable odds to stay alive.
The story also illustrates the power of augmentation, how humans enabled through technology can find innovative solutions that extend life. In that sense, the movie aligns with a Christian Transhumanist view – one that sees humans co-creating hopeful futures with the divine.
Even if God is not present explicitly, the divine seems to whisper through Milo’s reassuring voice.