Egalitarian Human Futures in the wake of AI: Social Synecdoche

In this series of two posts, I’ll equip you with a simple but distinctive set of concepts that can help us think and talk about spiritual egalitarianism. This kind of conceptualization is urgently important at a time when AI systems can increasingly take on leadership and management functions in society. This post will articulate a concept of social synecdoche and explain why it is especially relevant now, in thinking about human-AI societies. The next post will apply it, in an illustrative way, to a question of church governance today.

What is Social Synecdoche?

Our thoughts here will center on a socially and sociologically important concept called synecdoche. Here are two examples of it at work:

When a Pope acts, in some meaningful sense, the Church acts.

When a President acts, in some meaningful sense, the nation acts.

Both sentences illustrate social synecdoche at work: the representation of a social whole by a single person who is a part of it. The indefinitely expansive use of this mode of group identity is what I mean by ‘axial consciousness.’ I use the terms “axial age” and “axial consciousness” to mark a substantial shift in human history: the emergence of the slave machines that we call civilization. By focusing attention on a figure who could, at least in principle, unify a human group of any size in themselves, ancient civilizations created increasingly expansive governments, eventually including a variety of warring empires.

My usage of the term “axial” provides an alternative way of framing these big history discussions about AI and ancient human history. It invites comparison (and contrast) with Ilia Delio’s more standard usage of axial language in Re-enchanting the Earth: Why AI Needs Religion.

Insofar as we are psychologically, socially, and somatically embedded in large social bodies today, it is substantially through the sympathetic “social magic” of synecdoche. Both then and now, we have access to this axial mode of consciousness whenever we identify with a representative of an organized group agent, and thereby with the group itself. At the same time, we are also able to slip out of this mode and become increasingly atomized in our experience of the world.

A Visceral Connection with the Whole

For example, when we feel that a group or its leaders have betrayed us so deeply that we are no longer a part of it (that it has left us), we experience a kind of atomized consciousness that is the opposite of axial consciousness. This process is often experienced as a painful loss of identity, a confusion about who we are, precisely because we substantially find our identities in groups of this kind through representation.

This capacity is rooted in a deep analogy between a personal body and a social body, and the analogy is not only conceptual but also physiological: when our nation is attacked, we feel attacked, and when something happens to our leader, we spontaneously identify with them as part of the group they represent. Social synecdoche is therefore part of the way we reify social bodies. Reifying a social body is what we do when we make a country or a Church into a thing, through group psychology processes that are consciously experienced as synecdoche: the representation of the whole by a part.

Synecdoche and Representative Governments

This notion of social synecdoche can help us notice new things and reframe familiar discussions in interesting ways. For example, how does social synecdoche relate to present debates about representative democracy vs autocracy? Representative government refines and extends this type of synecdoche, articulating it at more intermediate scales in terms of space (districts, representing smaller areas), time (limited terms, representing a people for an explicit period) and types of authority (separations of powers, representing us in our different social functions).

This can create a more flexible social body, in certain contexts, because identification is distributed in ways that give the social body more points of articulation, and therefore more degrees of freedom and greater potential for accountability. For all of this articulation, representative government remains axial, just more fully articulated. If it weren’t axial in this sense, representative government wouldn’t reach social scale in the first place.

So sociologically and socially, we are still very much in the axial age, even in highly articulated representative governments. In a real sense, representative government is an intensification and deepening articulation of axial consciousness; it responds to the authoritarianism of a single representative by dramatically multiplying representation.

Synecdoche and the Axial Age

Ever since social synecdoche facilitated the first expanding slave machines, there has been a sometimes intense tug-of-war between atomized consciousness and axial consciousness. The effort to escape axial social bodies through individuation has always been a feature of the axial experience, often because axial group agents are routinely capricious, cruel, and unjust. For example, our earliest known legal code, the Code of Ur-Nammu, bears witness to the ways in which a legal representative of the axial social body incentivized the recapture of slaves who desperately tried to individuate:

If a slave escapes from the city limits, and someone returns him, the owner shall pay two shekels to the one who returned him.

For all of the privation involved in privateness, some people throughout the axial period have also attempted various forms of internal emigration (into the spirit or mind) as a means of escape. Some, but certainly not all, axial spirituality can be understood in these terms. The Hebrew prophetic tradition, for example, does not generally engage in internal escapism, but instead seeks to hold axial social bodies to account, especially by holding their representatives accountable.

Photo by Frederico Beccari from unsplash.com

Social Synecdoche in the Age of AI

Our long history as axial beings suggests that we will probably stay like this, even as we build the technology that will enable us to make AI Presidents and Kings. It seems possible that we will have AI systems that can be better than humans at fulfilling the office of President before we have AI systems that are better than us at plumbing or firefighting. In part this is because the bar for good political leadership is especially low, and in part it reflects the relative ease of automating a wide range of creative, social and analytical work through advanced text generation systems. If this sounds absurd, I’d recommend getting caught up on the developments with GPT-3 and similar systems. You can go to openai.com and try it out if you like.

How hard would it be for an AI system to more faithfully or reliably represent your nation or church or city or ward than the current ones? Suppose it can listen and synthesize information well, identify solutions that can satisfy various stakeholders, and build trust by behaving in a reliable, honest and trustworthy way. And suppose it never runs the risk of sexually molesting someone in your group. By almost any instrumental measure, meaning an external and non-experience-focused measure of its ability to achieve a goal, I think that we may well have systems that do better than a person within a generation. We might also envision a human President who runs on a platform of just approving the decisions of some AI system, or a President who does this secretly.

In such a context, as with any other case where AI systems outperform humans, human agents will come to seem like needless interlopers who only make things worse; it will seem that AI has ascended to its rightful throne.

A Call to Egalitarianism

But this precisely raises the central point I’d like to make:

In that world, humans become interlopers only insofar as our goals are merely instrumental. That is to say, this is the rightful place of AI only insofar as we conceive of leadership merely as a matter of receiving inputs (public feedback, polling data, intelligence briefings) and generating outputs (a political platform, strategy, public communications, and the resultant legitimation structure rooted in social trust and identification).

This scenario highlights the limits of instrumentality itself. Hence, instead of having merely instrumental goals for governance, I believe that we urgently need to treat all humans as image-bearers, as true ends in themselves, as Creation’s priests.

A range of scholarship has highlighted the basic connection between image-bearing and the governance functions of priests and kings in the religions of the Ancient Near East. Image-bearing is, then, very early language for social synecdoche. In an axial age context, which was and is our context, the notion that all of humanity bears God’s image remains a challenging and deeply egalitarian response to the problem of concentrated power that results from social synecdoche. That is what I’ll turn to in the next post.


Daniel Heck is a Pastor at Central Vineyard Church in Columbus, OH. His work focuses on immigrant and refugee support, spiritual direction, and training people of all ages how to follow the teachings of Jesus. He is the author of According to Folly, founder of Tattered Books, and writes regularly on Medium: https://medium.com/@danheck

Vulnerable like God: Perfect Machines and Imperfect Humans

This four-part series started with the suggestion that AI can be of real help to theologians, in their attempt to better understand what makes humans distinctive and in the image of God. We have since noted how different machine intelligence is from human intelligence, and how alien-like an intelligent robot could be ‘on the inside’, in spite of its humanlike outward behavior.

For theological anthropology, the main takeaway is that intelligence – understood as rationality and problem-solving – is not the defining feature of human nature. We’ve long been the most intelligent and capable creature in town, but that might soon change, with the emergence of AI. What makes us special and in the image of God is thus not some intellectual capacity (in theology, this is known as the substantive interpretation), nor something that we can do on God’s behalf (the functional interpretation), because AI could soon surpass us in both respects.

The interpretation of the imago Dei that seems to account best for the scenario of human-level AI is the relational one. According to it, the image of God is our special I-Thou relationship with God, the fact that we can be an authentic Thou, capable of receiving God’s love and responding in turn. We exist only because God calls us into existence. Our relationship with God is therefore the deepest foundation of our ontology. Furthermore, we are deeply relational beings. Our growth and fulfillment can only be realized in authentic personal relationships with other human beings and with God.

AI and Authentic Relationality

It is not surprising that the key to human distinctiveness is profoundly relational. Alan Turing tapped right into this intuition when he designed his eponymous test for AI. Turing’s test is, in fact, a measurement of AI’s ability to relate like us. Unsurprisingly, the most advanced AIs still struggle when it comes to simulating relationships, and none has yet passed the Turing test.

But even if a program does someday relate convincingly to humans, will that be an authentic relationship? We’ve already seen that human-level AI will be anything but humanlike ‘on the inside.’ Intelligent robots might become capable of speaking and acting like us, but they will be completely different from us in terms of their internal motivation and meaning systems. What kind of relationship could there be between us and them, when we’d have so little in common?

We long for other humans precisely because we are not self-sufficient; we seek others because we want to discover them, and our own selves, through relationships. We fall in love because we are not completely rational. Human-level AI will be the opposite of that: self-sufficient, perfectly rational, and with a quasi-complete knowledge of itself.

The Quirkiness of Human Intelligence

Our limitations are instrumental for the kind of relationships that we have with each other. An argument can thus be made that a significant degree of cognitive and physical vulnerability is required for authentic relationality to be possible. There can be no authentic relationship without the two parties intentionally making themselves vulnerable to each other, opening to one another outside any transactional logic.

Photo by Duy Pham on Unsplash

A hyper-rational being would likely have serious difficulty engaging fully in relationships and making itself totally vulnerable to the loved other. It surely does not sound very smart.

Nevertheless, we humans do this tirelessly and often at great cost, perhaps precisely because we are not as intelligent and goal-oriented as AI. Although that appears illogical, it is such experiences that give meaning and fulfillment to our lives.

From an evolutionary perspective, it is puzzling that our species evolved to be this way. Evolution promotes organisms that are better at adapting to the challenges of their environment, thus at solving practical survival and reproduction problems. It is therefore unsurprising that intelligence-as-problem-solving is a common feature of evolved organisms, and this is precisely the direction in which AI seems to develop.


What is strange in the vast space of possible intelligences is our quirky type of intelligence, one heavily optimized for relationship, marked by a bizarre thirst for meaning, and plagued by a surprising degree of irrationality. In the previous post I called out the strangeness of strong AI, but it is we who seem to be the strange ones. However, it is precisely this kind of intellectual imperfection, or vulnerability, that enables us to dwell in the sort of Goldilocks zone of intelligence where personal relationships and the image of God are possible.

Vulnerability, God, Humans and Robots

Photo by Jordan Whitt on Unsplash

If vulnerability is so important for the image of God as relationship, does this imply that God too is vulnerable? Indeed, that seems to be the conclusion, and it is not surprising at all, especially when we think of Christ. Through God’s incarnation, suffering, and voluntary death, a deeply vulnerable side of the divine has been revealed to us. God is not an indifferent creator of the world, nor a dispassionate almighty, all-intelligent ruler. God cares deeply for creation, to the extent of committing to the supreme self-sacrifice to redeem it (Jn. 3:16).

This means that we are most like God not when we are at our smartest or strongest, but when we engage in this kind of hyper-empathetic, though not entirely logical, behavior.

Compared to AI, we might look stupid, irrational, and outdated, but it is paradoxically due to these limitations that we are able to cultivate our divine likeness through loving, authentic, personal relationships. If looking at AI teaches theologians anything, it is that our limitations are just as important as our capabilities. We are vulnerable, just as our God has been revealed to be vulnerable. Being like God does not necessarily mean being more intelligent, especially when intelligence is seen as rationality or problem solving.

Christ – whether considered historically or symbolically – shows that what we value most about human nature are traits like empathy, meekness, and forgiveness, which are eminently relational qualities. Behind such qualities are ways of thinking rooted more in the irrational than in the rational parts of our minds. We should then wholeheartedly join the apostle Paul in “boast[ing] all the more gladly about [our] weaknesses […] for when [we] are weak, then [we] are strong” (2 Cor. 12:9-10).