Blog

How Can Facebook Algorithms Be Accountable to Users?

Second AI Theology Advisory Board meeting: Part 1

In our past discussion, the theme of safeguarding human dignity emerged as a central concern in AI ethics. In our second meeting, I wanted us to dive deeper into a contemporary use case to see what themes would emerge. To that end, our dialogue centered on the WSJ’s recent exposé on Facebook’s unwillingness to address problems with its algorithms, even after internal research clearly identified the problems and proposed solutions. While this is a classic case of how the imperative of profit can cloud ethical considerations, it also highlights how algorithms can create self-reinforcing vicious cycles that were never intended in the first place.

Here is a summary of our time together:

Elias: Today we are going to focus on social media algorithms. AI plays the role of directing the feed that everybody sees, and everybody sees something different. What I thought was interesting about the article is that in 2018 the company tried to make changes to improve the quality of engagement, but it turned out to do the exact opposite.

I experienced that firsthand. A while back, I noticed that the controversial and angry comments were the ones getting attention. So, sometimes I would poke a little bit at members of our community who hold more orthodox views on certain things, and that would get more engagement. In doing so, I also reinforced the vicious cycle.

That is where I want to center the discussion. There are so many issues we could discuss regarding Facebook algorithms. There is the technical side, there is the legal side, and there is also the philosophical side. At the end of the day, Facebook algorithms are a reflection of who we are, even if they are run by a few.

These are my initial thoughts. I was wondering if we could start with Davi on the legal side. Some of these findings highlight the need for regulation. The unfathomable power of social media companies can literally move elections. Davi, what are your thoughts on smart regulation? Since shutting these platforms down isn’t the answer, what would safeguard human dignity and help put guardrails around companies like Facebook?

By Pixabay

Davi: At least in the US, the internet is a wild west with hardly any regulation or influence over big tech. Companies have tried to self-regulate, but they don’t have the incentive to really crack down on this. It’s always about profits; they talk the talk but don’t walk the walk. At the end of the day, stock prices speak loudest.

In the past, I have been approached by Facebook’s recruiters. In the package offered, the base compensation was relatively low compared to industry standards, but they offered life-changing stock grants. Hence, stock prices are key not only to management but to many employees as well.

Oligarchies do not allow serious regulation. As I researched data and privacy regulation, I came across the usual protections against discrimination, and most of the more serious cases are being brought to court through the Federal Trade Commission to protect consumers. Yet regulation is happening in a haphazard way. There are some task forces in Congress trying to come up with a regulatory framework. But this is mostly in Washington, where it is really hard to get anything done.

Some states are trying to mimic what is happening in Europe and bring in the concept of human dignity through comprehensive privacy laws like Europe’s. We have comprehensive privacy laws in California, Virginia, and Colorado, and every day I review new legislation. Last week it was Connecticut. They are all moving toward the European model. This fragmented approach is less than ideal.

Levi: I have a question for Davi. You mentioned European regulation. From what I understand, because the EU represents such a large constituency, when it shapes privacy policy it seems much easier for companies like Facebook to conform their overall strategy to EU policy rather than making a tailored policy just for EU users, California users, and so on. Do you see that happening, or is the intent to diversify according to different laws?

Davi: I don’t think that is happening at Facebook. We can use 2010 as an example, when the company launched a facial recognition tool that would recognize people in photos so you could tag them more easily, increasing interactions. There were a lot of privacy issues with this technology. They shut it down in Europe in 2011, and then in 2013 in the US. Then they relaunched it in the US. Europe has a big, influential authority, but that is not enough to buck financial interests.

As a lawyer, I think that would be the best course of action: multinationals basing themselves on the strictest law and applying it to everyone the same way. That is the best option legally, but there can be different approaches.

Agile and Stock Options

Elias: Part of the issue with Facebook algorithms wasn’t about disparate impact per se. It was really about a systemic problem of increasing negativity. And one thing I thought about is how common sentiment analysis is right now. With natural language processing, you can take a text and the computer can say whether the person is happy or angry. It’s mostly binary: negative or positive. What if we had something like the stock market’s safety switch, which halts trading when prices drop too fast past a certain threshold? So I wonder, why can’t we have a negativity shut-off switch? If the platform is just off the charts with negativity, why don’t we just shut it down? Shut down Facebook for 30 minutes and let people live their lives outside of the platform. That’s just one idea for addressing this problem.
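To make the idea a bit more concrete, here is a minimal sketch of what such a negativity circuit breaker could look like. The keyword scorer, the threshold, and every name below are purely illustrative assumptions; a real system would use a proper sentiment model and far more careful thresholds, and nothing here reflects how Facebook actually works.

```python
# Toy sketch of a "negativity circuit breaker" for a feed, analogous to the
# stock market's trading halts. The keyword list, scoring, and threshold are
# purely illustrative assumptions, not anything Facebook actually implements.

NEGATIVE_WORDS = {"hate", "angry", "disgusting", "outrage", "stupid"}

def sentiment_score(post: str) -> float:
    """Crude stand-in for a real sentiment model: fraction of negative words."""
    words = [w.strip(".,!?").lower() for w in post.split()]
    if not words:
        return 0.0
    return sum(w in NEGATIVE_WORDS for w in words) / len(words)

def should_pause_feed(recent_posts: list[str], threshold: float = 0.15) -> bool:
    """Trip the breaker when average negativity across recent posts exceeds the threshold."""
    if not recent_posts:
        return False
    avg_negativity = sum(sentiment_score(p) for p in recent_posts) / len(recent_posts)
    return avg_negativity > threshold

posts = [
    "I hate this, it makes me so angry!",
    "What a stupid and disgusting take.",
    "Lovely walk in the park today.",
]
if should_pause_feed(posts):
    print("Negativity threshold exceeded: pause the feed for 30 minutes.")
else:
    print("Feed within normal range.")
```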

I want to invite members that are coming from a technical background. What are some technical solutions to this problem?

Maggie: I feel like a lot of it comes from some of the ingrained methodologies of agile gone wrong. That focus on the short term, on iteration, and on the particular OKRs that came out of Google, where you are focusing on outlandish goals and just pushing for the short term. Echoing what Davi said about the stock options of big companies: when you look at the packages, it’s mostly about stock.

You have to get people thinking about the long term. But these people are also invested in these companies. Thinking long term isn’t the only answer, but I think a lot of the problems have to do with the short-term iteration built into the process.

Noreen: Building on what Maggie said, a lot of it is the stock, in the sense that they don’t want to employ the number of people they would have to hire to really take care of this. Right now, AI is just not at a point where it can work by itself. It’s a good tool. But turning it loose on its own isn’t sufficient. If they really are going to get a handle on what’s going on in the platform, they need to hire a lot more people to work hand in hand with artificial intelligence, using it as a tool, not as a substitute for moderators who actually work with this stuff.

Maggie: I think Amazon would just blow up if they said that, because one of their core tenets is getting rid of humans.

But of course, AI doesn’t need a stock option; it’s cheap. I think this is why they are going in this direction. This is an excellent example of something I’ve written over and over: AI is a great tool but not a substitute for human beings. They would need to hire a bunch of people to work with the programs, tweaking them and overseeing them. And that is what they don’t want to do.

From Geralt on Pixabay

Transparency and Trade Secrets

Brian: This reminded me of how Facebook algorithms are, in a way, using humans, taking our participation and using it as data to train themselves. However, there is no transparency that would help me participate in a way that is more constructive.

Thinking about the article Elias shared from the Wall Street Journal, consider the idea of ranking posts on Facebook. I had no idea that a simple like would get 1 point in the ranking while a heart would get 5 points. If I had known that, I might have been more intentional about using hearts instead of thumbs up to increase positivity. Just that simple bit of transparency, letting us know how our actions affect what we see and how they affect the algorithm, could go a long way.
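To illustrate the kind of weighted scoring Brian describes, here is a toy sketch. The 1-point like and 5-point heart follow the WSJ reporting mentioned in the conversation; every other weight and name is a made-up assumption, not Facebook’s actual formula.

```python
# Toy illustration of weighted reaction ranking. The 1-point like and
# 5-point heart come from the WSJ reporting cited above; the comment and
# reshare weights are hypothetical placeholders.

REACTION_WEIGHTS = {
    "like": 1,
    "heart": 5,
    "comment": 15,   # hypothetical weight
    "reshare": 30,   # hypothetical weight
}

def engagement_score(reactions: dict[str, int]) -> int:
    """Sum each reaction count multiplied by its weight."""
    return sum(REACTION_WEIGHTS.get(kind, 0) * count for kind, count in reactions.items())

calm_post = {"like": 120, "heart": 10}
angry_post = {"like": 40, "comment": 25, "reshare": 8}

print("calm post score:", engagement_score(calm_post))    # 120*1 + 10*5 = 170
print("angry post score:", engagement_score(angry_post))  # 40*1 + 25*15 + 8*30 = 655
```

Under weights like these, a post that provokes comments and reshares easily outranks one that merely collects likes, which is the dynamic the WSJ article describes.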

Davi: The challenge with transparency is trade secrets. Facebook’s algorithms are an example of a trade secret. Companies aren’t going to open up on their own, but what if we could provide them with support and protection while at the same time requiring this transparency? Technology companies tend to be so protective of their intellectual property; it is their gold, the only thing they have. This is a point where laws could really help both sides. Legislation that fosters transparency while still allowing trade secrets to be safeguarded would go a long way.

Maggie: Companies are very protective of this information. And I would gamble that they don’t fully know how it works themselves, because developers do not document things well. So even people inside might not know.

Frantisek: I’m from Europe, and as far as I can tell, regulation here, whether at the national level or directly from the EU, has a lot to do with taxation. They are trying to figure out how to tax these big companies like Google, Amazon, and Facebook. It’s an attempt to keep the money from sitting somewhere else so that it actually gets taxed and paid in the EU. This is a big issue right now.

In relation to theology, this issue of regulation is connected with the issue of authority. We are also dealing with authority in theology and in all churches. Who is deciding what is useful and what isn’t?

Now the question is, what is the tradition of social networks? As we talk about Facebook algorithms, there is a tradition. Can we always speak of tradition? How is this tradition shaped?              

From the perspective of the user, I think there might be an issue with anonymous accounts, because the problem with social media is that you can make up a nickname, make a mess, and get away with anything; at least that is what people think. Social networks are just an extension of normal life, so as a Christian I try to act the same way in cyberspace as I do in normal life. Jesus’ commandment is to treat your neighbors well.

China and the EU jump ahead of the US in the Race for Ethical AI

Given the crucial role AI technologies play in the defense industry, it is no secret that leading nations will be seeking the upper hand. The US, China, and the EU have put forth plans to guide and prioritize research in this area. At AI Theology, we are interested in a very different kind of race, one that is less about technological supremacy and more about ensuring the flourishing of life. We call it the race for ethical AI.

What are governments doing to minimize, contain, and limit the harm of AI applications? Looking at the leaders in this area, two show signs of making progress while one is still sitting on the sidelines. Though through different methods, China and the EU have taken decisive action to start addressing the challenge. The US has been awfully quiet, even as most leading AI organizations have their headquarters on its soil.

The EU Tackles Facial Recognition

Last week, the European Parliament passed a ground-breaking resolution curbing the use of AI for mass surveillance and predictive policing based on behavioral data. In its language, the document calls out companies like Clearview for their controversial use of facial recognition in law enforcement. It also lists many examples in which this technology has erroneously targeted minorities. The legislative body also calls for greater transparency and human involvement in the decision process that comes from algorithms.

Photo by Maksim Chernishev on Unsplash

While not legally binding, this is a first and important step in regulating the use of computer vision in law enforcement. It is part of a bigger effort the EU is undertaking to draft regulations on AI that address multiple applications of the technology. In this sense, the multi-state body becomes the pioneer in attempting to place guardrails on AI use, possibly becoming the standard for other countries to follow.

Even though ethical AI is not limited to regulation, government action can have a sweeping impact in curbing abuse and protecting the vulnerable. It also sends a strong signal to companies acting in this space that more accountability is on its way. This will likely force big tech and AI start-ups to take a hard look at how they develop products and deliver their services. In short, good legislation can be a catalyst for the type of change we need. In this way, the EU leaps forward in the race for ethical AI.

China Takes Steps towards Ethical AI

On the other side of the world, another AI leader put forth guidelines on the use of the technology. The document outlines principles for governing algorithms, protecting privacy, and giving users more autonomy over their data. Beyond its overall significance, it is notable that the guidelines include language about making the technology “people-oriented” and appeal to common values.

Photo by Timothée Gidenne on Unsplash

The guidelines for ethical AI are part of a broader effort to rein in big tech power within the world’s most populous nation. Earlier this year, the government published a policy to better control recommendation algorithms on the internet. This and other measures send a strong signal to China’s budding digital sector that the government is watching and will hold them accountable. Such a move also contributes to the centralization of power in the government in a way many Western societies would not be comfortable with. In this case, however, the measures seem to align with the public good.

Regardless of how they will be implemented, it is notable that China is at the forefront of publishing such guidelines. It shows that Beijing is taking the threat of AI misuse seriously, at least when it is perpetrated by business enterprises.

US Fragmented Efforts

What about the North American AI leader? Unfortunately, to date, there is no sweeping national effort to address AI abuse in the US. This is not to say that nothing is happening. States like California and Illinois are working on legislation on data privacy and AI surveillance. Biden’s chief science advisor recently called for an AI Bill of Rights. In a previous blog, I outlined US efforts to address bias in facial recognition as well.

Yet nothing concrete has happened at a national level. The best we got was a former Facebook employee’s account of the company’s reluctance to curb AI abuse. It made for great television but produced no sweeping legislation.

If there is a race for ethical AI, the North American competitor is behind. If this trend continues, AI ethics will be at the mercy of large company boardrooms in the Yankee nation. Company boards are never free of conflict of interest as the next quarter’s profit often takes precedence over human flourishing.

Self-regulation has not worked. It is time we move towards more active government intervention for the sake of the common good. This is a race the US cannot afford to sit out. It is time to hop on the track.

Placing Human Dignity at the Center of AI Ethics

In late August we had our kick-off Zoom meeting of the Advisory Board, the first of our monthly meetings where we will be exploring the intersection of AI and spirituality. The idea is to gather scholars, professionals, and clergy to discuss this topic from a multi-faceted view. In this blog, we publish a short summary of our first conversation. The key theme that emerged was a concern for safeguarding and upholding human dignity as AI becomes embedded in growing spheres of our lives. This preoccupation must inhabit the center of all AI discussions and be the guiding principle for laws, business practices, and policies.

Question for Discussion: What, in your perspective, is the most pressing issue on AI ethics in the next three to five years? What keeps you up at night?

Brian Sigmon: The values from which AI is being developed, and their end goals. What is AI oriented for? Usually in the US, it’s oriented towards profit, not oriented to the common good or toward human flourishing. Until you change the fundamental orientation of AI’s development, you’re going to have problems.

AI is so pervasive in our lives that we cannot escape it. We don’t always understand the logic behind it. It is often beneath the surface, intertwined with many other issues. For example, when I go on social media, AI controls what I see on my feed. It does not optimize for making me a better person but instead maximizes clicks and revenue. That, to me, is the key issue.

Elias Kruger: Thank you, Brian. To add some color to that, since the pandemic, companies have increased their investment in AI. This in turn is creating a corporate AI race that will further ensure the encroachment of AI across multiple industries. How companies execute this AI strategy will deeply shape our lives, not just here in the US but globally.

Photo by Chris Montgomery on Unsplash

Frantisek Stech: Coming from Eastern Europe, one of the greatest issues is the abuse of AI from authoritarian non-democratic regimes for human control. In other words, it is the relationship between AI control and human freedom. Another practical problem is how people are afraid to lose their jobs to AI-driven machines.  

Elias Kruger: Thanks Frantisek, as you know we are aware of what is happening in China with the merging of AI and authoritarian governments. Can you tell us a little bit about your area of the world? Is AI more government-driven or more corporate?

Frantisek Stech: In the Czech Republic, we belong to the EU, and therefore to the West. So, it is very much corporate-driven. Yet we are very close to our Eastern neighbors, and we are watching closely how things develop in Belarus and China, especially as they will inevitably impact our region of the world.

However, this does not mean we are free from danger here. There is the issue of manipulation of elections, which started with the Cambridge Analytica scandal and the issues with the presidential elections in the US. Now we are approaching elections in the EU, so there is a lot of discussion about how AI will be used for manipulation. When people hear about AI, they often associate it with politics. They think they are already being manipulated if they buy a phone with facial recognition. We have to be cautious but not completely afraid.

Ben Day: I often ponder this question of how AI, or technology in general, relates to dignity and individual human flourishing. When we aggregate and manipulate data, we strip out individual human dignity, which is a Christian virtue, and begin to see people as compilations of data to be manipulated. It is really a threat to ontology, to our very sense of being. In effect, it is an assault on human dignity through AI.

Going further, I am interested in this question of how AI encroaches on our sense of identity. That is, how algorithms govern my entire exposure to media and news. Not just that, but AI impacts our whole social eco-verse, online and offline. What does that have to do with the nature of my being?

I often say that I have a very low view of humanity. I don’t think human beings are that great. And so, I fear that AI can manipulate the worst parts of human nature. That is an encroachment on human dignity.

In the Episcopal Church, we believe that serving Christ is intimately connected with upholding the dignity of human beings. So, if we are turning a blind eye to human dignity being manipulated, then my Christian praxis compels me by moral obligation to do something about it.

Photo by Liv Merenberg on Unsplash

Elias Kruger: Can you give us a specific example of how this plays out?

Ben Day: Let me give you one example of how this affected my ministry. I removed myself from most social media as of October 2016 because of what I was witnessing. I saw members of my church sparring on the internet, attacking each other’s dignity and intellect over politicized issues. The vitriol was so pervasive that I encountered a moral dilemma. As a priest, it is my duty to deny the sacrament to those who are in unrepentant sin.

So I would face parishioners only hours after these online spars and wonder whether I should offer them the sacrament. I was facing this conundrum as a result of algorithms manipulating feeds to foster angry engagement because it leads to profit. It virtually puts the church at odds with how these companies pursue profit.

Levi Checketts: I lived in Berkeley for many years, and the cost of living there was really high. That was because a lot of people who worked in Silicon Valley or in San Francisco were moving there. The influx of well-to-do professionals raised home prices in the area, forcing less fortunate existing residents to move out.

So, there is all this money going into AI. Of the five biggest companies by market cap, three are in Silicon Valley and two are in the Seattle area. Tech professionals often do not have full awareness of the impact their work is having on the rest of the world. For example, a few years back, a tech employee wrote an op-ed complaining about having to see disgusting homeless people on his way to work when he was paying so much for rent.

What I realized is that there is a massive disconnect between humanity and the people making decisions for companies that are larger than many countries’ economies. My biggest concern is that the people who are in charge of and controlling AI have many blind spots. They are often unable to empathize with those who are suffering or even to notice the realities of systems that breed oppression and poverty. To them, there is always a technical fix. Many lack the humility to listen to other perspectives, come mainly from male Asian and White backgrounds, and are often opposed to perspectives that challenge their work.

There have been high-profile cases recently like Google firing a black female researcher because she spoke up about problems in the company. The question that Ben mentioned about human dignity in AI is very pressing. If we want to address that, we need people from different backgrounds making decisions and working to develop these technologies.

Furthermore, if we define AI as a being that makes strictly rational decisions, what about people who do not fit that mold?

The key questions are where do we locate this dignity and how do we make sure AI doesn’t run roughshod over humanity?

Davi Leitão: These were all great points that I was not thinking about before. Thank you for sharing this with us.

All of these are important questions that drive the need for regulation and laws to steer profit-driven corporations onto the right path. All of the privacy and data security laws stand on a set of principles written in 1981 by the OECD. These laws look to those principles and put them into practice. They are there to inform and safeguard people from bias.

My question is: what are the blind spots in the FIPs (fair information principles) that do not account for the new issues technology has brought in? This question casts a wide net, but it can help guide many of the new laws to come. This is the only way to make companies care about human dignity.

Right now, there is a proliferation of state laws. But this brings another problem: consumers in states with privacy laws can suffer discrimination from companies in other states. Therefore, there is a need for a federal, uniform set of principles and laws about privacy in the US. The inconsistency between state laws keeps lawyers in business but ultimately harms the average citizen.

Elias Kruger: Thanks for this perspective. I think it would be a good takeaway for the group to look for blind spots in these principles. AI is about algorithms and data. Data is fundamental. If we don’t handle it correctly, we can’t fix it with algorithms.

My two cents is that when it comes to AI applications, the one that concerns me most is facial recognition for surveillance and law enforcement. I don’t think there is any other application where a mistake can cause a more devastating impact on the victim. When AI wrongly incriminates someone of a crime because an algorithm confused their face with the actual perpetrator’s, the individual loses their freedom. There is no way to recover from that.

This application calls for immediate regulation that puts human dignity at the center of AI so we can prevent serious problems in the future.

Thanks everybody for your time.

3 Effective Ways to Improve AI Ethics Discussions

Let’s face it: the quality of discourse in the blogosphere and on social media is dismal! What gets traffic is most often not the best representation of a topic but the most outrageous click-bait title. As a recent WSJ report suggests, content creators (this portal and I included) face the constant temptation to trade well-crafted arguments for divisive pieces that emphasize controversy. When it comes to discussions of AI ethics, the situation is no different. Useless, outrageous claims abound, while the nuanced conversations that would improve AI ethics discussions are rare.

It is time to raise the level of discourse on AI’s impact. While I am encouraged to see this topic get the attention it is getting, I fear that it is fraught with hyperbole and misinformation that degrade rather than improve dialogue. Consequently, most pieces lead to hasty conclusions rather than the thoughtful dialogue the topic requires. In this blog, I put forth three ways to improve the quality of dialogue in this space. By keeping them in mind, you can differentiate what is worth your attention from what can be ignored.

Impact is not the same as Intent

The narrative of big business or government seeking to hurt the little guy is an attractive one. We are hard-wired to choose simple explanations, and simple storylines of good and evil fit the bill. Most often, they are a good front for our addiction to scapegoating. By locating all evil in one entity, we are excused from looking for evil among ourselves. Most importantly, they obscure the reality that a lot of evil happens as the unintended consequence of well-intended efforts.

When it comes to AI bias, I am concerned that too many stories imply a definite villain without probing further to understand systemic dynamics. Consider this article from TNW, titled “Stop Calling it Bias: AI is Racist.” The click-bait title should give you reason to pause. Moreover, the author seems to assign human intent to complex systems without probing further into the causes. This type of hyperbolic rhetoric does more harm than good, assigning blame to one group while ignoring the technical complexities of the issue at hand.

Photo by Christina @ wocintechchat.com on Unsplash

By reading intent into impact, these pieces miss the opportunity to ask broader questions such as: What environmental factors amplified the harmful impact of this problem? How could other actors such as consumers, users, businesses, and regulators play a part in mitigating risks in the future? What technical limitations helped cause or expand the problem? These are a few questions that can elevate the discussion of AI’s impact. Above all, they help us get past the idea that behind every harmful impact lies a morally deficient actor.

Generalizations Do More Harm than Good

Ironically, this is precisely at the root of the problem. What do I mean by that? AI algorithms err because they rely on generalizations of past data. Then, when they see new cases, they tend to, as the adage goes, “jump to conclusions.” While many times this has harmless consequences, such as recommending Strawberry Shortcake in my Netflix queue, other times this selection can cause serious harm.

Yet when it comes to articles on AI, the problem arises when the author takes one case and generalizes it to all cases. Consider this Forbes article about AI. It takes a few statements by Elon Musk, one study, and a lot of speculation to tell us that AI is dangerous. While some of the points are valid, the article does nothing to help us understand why exactly it is dangerous and what we can do about it. In that sense, it does more harm than good, giving the reader reasons for worry without grounding them in evidence or proposing solutions.

Taking anecdotal evidence (one case) devoid of statistical backing and stating it as the norm is very misleading. We often pay attention to these stories because they tend to describe extreme cases of AI’s adverse impact. We should not dismiss them outright, but we should consider them in context. We should also ask: how prevalent is the problem? Is it increasing or decreasing? What can be done to ensure such cases remain rare or non-existent? By staying at a general level, we miss the opportunity to better understand the problem, and we do very little to improve AI ethics discussions.

Show me the Data

This leads me to the third and final recommendation. Discussions on AI ethics must stand on empirical evidence. In fact, given that data is at the foundation of how algorithms are formed, data is also readily available to evaluate their impact. The accessibility of data is both an advantage and a reminder that transparency is crucial. This is probably one of the key roles regulators can play: ensuring that companies, governments, and NGOs make their data available to the public.

This is not limited to impact but extends to the inputs. That is, understanding the data that algorithms train on is as important as understanding the downstream impact they create. For example, if past data shows that white applicants get higher approval rates for mortgages than people of color, guess what? The models will inevitably replicate this bias in their results. This is a problem that needs to be recognized up front rather than only monitored in the outcomes.
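Here is a minimal, entirely synthetic sketch of that dynamic. The numbers, variable names, and setup are invented for illustration only and are not drawn from any real lending data; the point is simply that a model fitted to biased historical decisions reproduces the same gap.

```python
# Entirely synthetic sketch of bias replication: a model trained on biased
# historical approvals reproduces the same gap in its predictions.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, n)       # 0 = group A, 1 = group B
income = rng.normal(50, 15, n)      # same income distribution for both groups

# Historical decisions: identical income effect, but group B was penalized.
logit = 0.08 * (income - 50) - 1.0 * group
approved = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([income, group])
model = LogisticRegression().fit(X, approved)

probs = model.predict_proba(X)[:, 1]
print("Predicted approval rate, group A:", round(probs[group == 0].mean(), 3))
print("Predicted approval rate, group B:", round(probs[group == 1].mean(), 3))
# The gap baked into the historical labels reappears in the model's output.
```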

Discussions on AI ethics must include statistical evidence for the issues at hand. That is, when presenting a problem, the claims must be accompanied by numbers. Consider this article from the World Economic Forum. It makes appropriate claims, avoids generalizations, and backs up each claim with research. It not only informs the reader but provides references for further reading. By doing so, it goes a long way to improve AI ethics discussions.

Conclusion

It is encouraging to see the growing interest in AI. The public must engage with this topic as it affects many aspects of our lives. Expanding the dialogue and welcoming new voices to the table is critical to ensuring AI works towards human flourishing. With that said, it is now time to ground AI ethics discussions in real evidence and sober assessments of cause and effect. We must resist the temptation of scapegoating and lazy generalizing. Instead, we must pursue the careful path of relentless probing and examining evidence.

This starts with content creators. As those who frame the topic, we must do a better job to represent the issues clearly. We must avoid misleading statements that attract eyeballs but confuse minds. We must also have a commitment to accurate reporting and transparency with our sources.

Can we take the challenge to improve AI ethics discussions? I hope so.

Theology as the Intelligence of Faith in the Cyberspace

The book Cybertheology: Thinking Christianity in the Era of the Internet (Fordham University Press 2014) by the prominent Vatican theologian Antonio Spadaro SJ, represents an explicit attempt to conceptualize an encounter between Christian theology and contemporary digital culture. It tries to answer questions related not only to the impact of the internet on the church’s self-understanding but also reflects on God’s revelation, grace, liturgy, sacraments, and many other theological topics. Hence, Spadaro’s book serves as a brief but lucid introduction to a whole range of questions emerging in the Internet era.

Defining Cybertheology

From his perspective, the Internet is not a tool to be used. Rather, it is a genuine environment for contemporaries to inhabit as much as they do in the physical landscapes of this world. We would be mistaken if we conceive the Internet just as a kind of parallel reality because it permeates the complex of human dwelling. It is “an anthropological space that is deeply intertwined with our everyday lives.”[1] As such, it represents a new culture – the culture of cyberspace,[2] and in relation to that fact, theology entering the coordinates of this culture becomes Cybertheology.

At the beginning of the 21st century, many authors attempted to define Cybertheology. Some understood it as a theology of new technologies. Others saw it as the study of spirituality appearing within the internet environment. Spadaro’s aim is to reframe these first attempts and offer his own alternative definition: “It is necessary to consider cybertheology as being the intelligence of the faith in the era of the Internet, that is, reflection on the thinkability of the faith in the light of the Web’s logic.”[3] Cybertheology reflects on faith lived “at a time when the Web’s logic marks the way of thinking, knowing, communicating, and living.”[4]

This is an important characteristic, because in this sense it would not be appropriate to define cybertheology only as a kind of contextual theology, since the internet is a phenomenon that has become an integral part of everyday human life, at least for the majority of people living on planet Earth. Cybertheology could be understood as mediation between God’s word (Logos) and digital culture, and for Spadaro it appears as one of the most important vocations for contemporary Christians.[5] Consequently, cyberspace is a new anthropological space, where Christians encode and decode their digital witness about the faith and hope they have in Jesus Christ (cf. 1 Peter 3:15). It is a new ecosystem (or extension of the physical ecosystem) where theology is done and thought.

Church as the Spiritual Google

Image by Gábor Adonyi from Pixabay

Two ecclesiologically relevant topics may be mentioned here to illustrate this. The first is connectivity, which introduces the Church as a connective environment, i.e., as a communication hub allowing for multiple encounters of people among themselves, with the rest of creation, and with God the Creator. In this sense, the Church can become a connective authority or a kind of Google for the realm of spiritual life.[6] In other words, just as Google enables its users to find what they search for, the Church enables people to find and encounter God.

The second example is relationality itself, which receives new meanings in the environment of the internet. Just think of how often we are preoccupied with deciding if our meetings will happen online or offline. According to Spadaro, the Church may understand itself as a network and derive new impulses from the very conception of the internet for the sake of its own self-reflection. This kind of theology does not only react to new trends or technologies. At the same time, it is influenced by them and starts to live inside a milieu shaped by them.

With that said, Spadaro is rather critical of living a Christian life exclusively in the realm of cyberspace. In consonance with his own denomination, he still holds that physical community is essential and indispensable for a genuine Christian life of faith. With respect to this, he argues against tendencies like virtual sacraments received by avatars in cyberspace, which are supposed to mediate grace to the physical persons of whom they are extensions.

Teilhard de Chardin’s Noosphere

Even though he holds that from a Christian point of view it is not possible to accept the concept of purely virtual sacraments, he concludes that thanks to God’s grace, religious experience is, in principle, also possible in cyberspace.[7] In any case, it might be said that for Spadaro, the age of the Internet introduces a new and specific phase of the human journey towards God, which requires complex theological reflection stemming from deep immersion in digital culture. Spadaro writes:

Today, one thinks, and one knows the world not only in the traditional manner, through reading and exchange or within the confines of special interest groups (for example, teaching or study groups), but through realizing a vast connection between people. Intelligence is distributed everywhere, and it can be easily interconnected. The Web gives life to a form of collective intelligence. The Church itself recognizes that it has a responsible role in the formation of a human collective culture.[8]

Photo by NASA on Unsplash

Here, Spadaro connects to Pierre Teilhard de Chardin, whom he considers a prophetic theological voice, because Teilhard thought that the development of human culture is directed towards ever more intensive interconnection (complexification), that is, into a global network which would in the future be the environment for life.

While for the philosopher Pierre Levy the global environment implies the subordination of the individual to the whole, Teilhard turns this conviction upside down and speaks of the individual mind. In the milieu of intensive, global interconnection, this individual mind is lifted to a higher level of being, the level of the noosphere. In this sphere of reason, a new intensification of human interconnectedness (including minds and consciousnesses) occurs.

The interesting fact is that, according to Teilhard, machines play an important role in this process,[9] because they help interconnect intelligent entities and contribute to the genesis of “the technological, planetary nervous system.”[10] Restlessly complexified, the techno-human network of the world (the noosphere) remains evolutionarily connected to the biosphere as well as the ancient lithosphere. It continuously opens up more and more to its own transcendence (ever more intensive integration and interconnectedness), reaching its final climax in the Omega point: the end of history, in salvation, which comes through Jesus Christ as the very basis of all evolution.

Through Jesus Christ, with Him and in Him, the whole process of evolution is brought towards completion, towards God, who shall be “all in all” (1 Corinthians 15:28). This final unity, however, does not mean the vanishing of the particular into the universal. On the contrary, it means the preservation of the particularity of all parts and may be compared to a firmly woven net of distinctive beings imbued by God, in whom all particularities meet their unity in diversity, because He is all in all. This was already clear to Jennifer Cobb, who at the beginning of the 1990s saw in cyberspace a clear parallel to Teilhard’s noosphere.[11]

We may conclude that in his book, Spadaro shows how theology may help in the contemporary quest to rethink new technologies and the changes they bring along. In this attempt, he finds the theology of Teilhard extremely inspiring, even though he is aware of all its ambiguities.[12] Spadaro thinks the most important aspect is Teilhard’s emphasis on proposing “an open vision of transcendence that is able to understand an intelligence that is not collective but convergent.”[13] Consequently, we can understand digital culture as a specific phase of the human journey towards God, and, thanks to that, it is also legitimate to think about the internet, in theological terms, as an integral part of the divine milieu.

Cybertheology in COVID Times

Spadaro formulated his ideas (in Italian) a decade ago. The English translation of his book appeared seven years ago. At that time, Spadaro could hardly have imagined that the theological reflection he proposed would become so important in times of the global Covid-19 pandemic. Within a very short period, an unprecedented number of people throughout the world found themselves in social isolation.

Consequently, a vast number of human social activities, including education and religious life, were quickly transferred online (or to an online environment, as Spadaro would probably say). With brute force, the Covid-19 pandemic pointed out the key role of new technologies in the lives of contemporaries, religious people not excluded. Debates on how to be the Church in the digital age have intensified in all Christian denominations, and this requires conscientious theological reflection.

In such a context, the return to Spadaro’s 2014 Cybertheology book becomes even more pertinent. The things he envisioned then as faint glimpses of the future became our de facto reality when houses of worship were forced to close. Shifting the paradigm of faith from attracting people to buildings to developing intelligent forms of faith in cyberspace is a good start.


František Štěch is a research fellow at the Protestant Theological Faculty of Charles University. He serves as coordinator of the “Theology & Contemporary Culture” research group. Previously he worked at the Catholic Theological Faculty of Charles University as a research fellow and project PI. His professional interests include Fundamental theology; Ecclesiology; Youth theology; Religious, and Christian identity; Intercultural theology; Public Theology; Theology of Religions; Landscape & Theology.

Get to know our Advisory Board


[1] Antonio SPADARO (2014), Cybertheology: Thinking Christianity in the Era of the Internet. (Translated by Maria Way), New York: Fordham University Press, p. 3.

[2] SPADARO, Cybertheology, p. 14.

[3] SPADARO, Cybertheology, p. 16.

[4] SPADARO, Cybertheology, p. 17.

[5] SPADARO, Cybertheology, p. 18.

[6] See FRIESEN, Dwight, J. Thy Kingdom Connected: What the Church Can Learn from Facebook, the Internet, and Other Networks, 2009, Grand Rapids (MI): Baker Books, p. 80-81.

[7] SPADARO, Cybertheology, p. 75-76.

[8] SPADARO, Cybertheology, p. 94.

[9] TEILHARD DE CHARDIN, Pierre, The Future of Man, 2004, New York: Image Books, 158-161.

[10] SPADARO, Cybertheology, p. 100.

[11] SPADARO, Cybertheology, p. 103.

[12] SPADARO, Cybertheology, p. 105.

[13] SPADARO, Cybertheology, p. 105.

What is Mystical Christian Transhumanism? A Conversation with ICN

Just recently, I had the privilege of talking to Luke Healy and David Pinkston for the Integral Christian Network podcast. The interview was inspired by the three-essay series I completed on Medium on Mystical Christian Transhumanism.

To listen to the podcast click on the picture

In this casual conversation, we covered a lot of ground from deconstructing evangelical faith to integrating it into all aspects of life. I really enjoyed the conversation and would like to provide a guided summary here for those interested in listening in.

Luke started us off with a short guided meditation, setting the tone for a lively but relaxed conversation. It also helped me engage with the questions less from the head and more from the heart.

The conversation started at 4:00 when Luke asked me to give a short overview of my spiritual journey. I discussed portions of it in previous blogs like this one and this one. In the podcast, I described the path from a Charismatic militant religion to the Mystical Christian Transhumanism where I am today.

Discovering the Mystical

Next, at about 7:55, Luke asked about how the mystical fits into this picture. What is the mystical part? I spoke a bit about how the mystical was a thread that was there all along, one that has run through Christian history and is even embedded in our current movements. In short, the mystical is about the experience of the divine presence irrespective of how we explain it theologically. It sets a foundation of non-dualistic thinking that enables us to be open to the world.

At around minute 12, Luke asked me to dive deeper into the Christian part. He was particularly interested in the militant aspect of my faith upbringing. I shared how, while having to shed the more combative aspects of my earlier faith, I was also grateful for how it celebrated the experiential. While this was a long and painful road, I would certainly not be who I am today without going through it.

Technology and Transhumanism

This became a good segue into discussing technology at minute 19. He first asked me about the path to integrating my work in technology with the Christian faith. That is how I told the story of how AI Theology started as a desire to integrate the technologist in me with the theologian.

From minute 24 onwards, the conversation shifted towards Transhumanism. I started by providing a brief definition. Next, I talked about engaging this emerging philosophy and its Christian roots. I then proceeded to better define Christian Transhumanism as a way to live out the faith in very practical terms.

At 29:30, Luke asked how the mystical relates to Transhumanism. Are those opposing ideas? I talked about how the mystical adds a spiritual dimension to the pursuit of Transhumanism. The remainder of the conversation revolved around our relationship with technology and how it can support and uphold human flourishing. Part of this process is re-thinking how we use church buildings.

This is the first of many conversations to come on this topic. I hope you find the exploration of Mystical Christian Transhumanism helpful for your journey.

Developing an E-Bike Faith: Divine Power with Human Effort

What can technology teach us about faith? In a past blog, I spoke of the mystical qubit. Previously, I spoke of how AI can expand our view of God. In this blog, I explore a different technology that is now becoming a common fixture of our cities: e-bikes. A few weeks ago I bought a used one and have loved riding it ever since. For those wondering, you still get your exercise, minus the heart palpitations on the uphill climbs. But I digress; this is not a blog about the benefits of an e-bike but about how its hybrid nature can teach us about faith and spirituality.

Biking to Seminary

Eight years ago, we moved to sunny Southern California so I could attend seminary. We found a house about 5 miles from campus, which in my mind meant that I could commute by bike. The distance was reasonable, and the wonderful weather seemed to conspire in my favor. I could finally free myself from the shackles of motorized dependency.

On our first weekend there, I decided to go for a trial bike ride. The way to campus went by like a breeze. In no more than 15 minutes I was arriving at Fuller Seminary, beaming in delight. Yet I had a nagging suspicion the way back home would be different. One thing that did not enter my calculations was that, though we were only 5 miles from campus, our house was in the foothills of Altadena. That meant the only way home was uphill. The first 2 miles were bearable, yet by mile 3 my legs were giving out. I eventually made it back home drenched in sweat and disappointment.

It became clear that this would not be a ride I could take often. My dreams of biking to seminary ended that day. Back to the gasoline cages to the rescue: not as exciting, but definitely more practical.

Photo by Federico Beccari on Unsplash

Divine Electricity

We now live in the Atlanta area and often go to Chattanooga for day trips. This charming Tennessee jewel offers a beautiful riverfront with many attractions for families like ours. Like many cities seeking to attract Millennials, it offers a network of public bikes for a small cost. Among them, I noticed some e-bikes were available. For a while I had been curious to try one, but not enough to shell out the thousands of dollars they cost. Timidly, I picked one for a leisurely ride in the city.

From the beginning, I could sense the difference. I still had to pedal normally, like I would on a regular bike. Yet as I pedaled, it was like I got a little push that made my pedaling more effective. I would dash by other bikers, glancing back at them triumphantly. I then decided to test it on a hill. Would the push hold up, or eventually fizzle out against gravity?

To my contentment, that was when the e-bike shined. For those accustomed to biking, you know that right before going uphill you pedal fast to get as much speed as you can. As you start climbing, you shift to lower gears until the bike is barely moving while you pedal furiously. You make up for the lighter gearing by tripling your pedal rotations. It can be demoralizing to pedal like a maniac yet move like a turtle, which is why many dismount and walk. It is as if all that effort is dissipated by the gravitational pull on the bike.

Pedaling uphill on an e-bike is a completely different experience. First, there is no need to maximize your speed coming into it. You pedal normally, and as the bike slows down, the electric motor kicks in to propel you forward. You end up keeping the same speed while pedaling at the same rate.

Goodbye frantic-pedaling-slow-going uphill, hello eternal-e-bike-flatlands

It is as if the hand of God is pushing you from behind when your leg muscles can no longer keep up the speed. Going up is no longer a drag but a thrill, all thanks to the small electric motor in the back wheel, capable of pushing up a grown man and a 50 lb bike.

Humanity Plus

If I could change one thing in the Western Christian tradition, it would be the persistent and relentless loathing of humanity. From very early on, and at times even expressed in the biblical text, there is a tendency to make humans look bad in order to make God look good. The impetus stems from a desire to curb our constant temptation to hubris. Sure, we all, especially those whom society puts on a pedestal, need to remember our puny frailty lest we overestimate our abilities.

Yet we are mysteriously beautiful and unpredictable. Once I let go of this indoctrinated loathing, I could face this intricate concoction of flesh in a whole new way. Humanity is a spectacular outcome of an insanely long and painful process of evolution. In fact, that is what often leads us back to belief in God. The lucid beauty of our humanity is what points us to the invisible divine.

This loathing of humanity often translates into the confusing and ineffective grace-versus-works theology. Stretching the Pauline letters in ways never intended by the beloved apostle, theologians have produced miles of literature on the topic. While some of it is helpful (maybe 2%, who knows?), most of it devolves into a tendency to deny the role of human effort in spirituality. In an effort to address transactional legalism, many overshoot in emphasizing divine activity in the process. This is unfortunate because removing the role of human effort from spirituality is a grave mistake. We need both.

Photo by Fabrizio Conti on Unsplash

The Two Sides of Spiritual Growth

Human empowerment plays a pivotal role in a healthy spirituality. If pride is a problem, so is its passive-aggressive counterpart, self-loathing. An inordinately negative view of self does not lead to godliness; it is a sure path to depression. Along with a realistic view of self comes the understanding that human effort is key to accomplishing things on this earth.

Yet, just like pedaling uphill, human effort can only take you so far. Sometimes you need a divine push. For a long time, I thought divine empowerment worked independently of human effort. What if it is less like a car and more like an e-bike? That is, you still need to pedal, tending to this earth and lifting fellow humans from the curse of entropy. Yet, as you faithfully do it, you are propelled by divine power to reach new heights.

Had e-bikes existed 8 years ago, my idea of commuting to seminary would have been viable. I could have conquered those grueling hills of Altadena with elegant pedaling. I would have made it home without breaking a sweat and could still have kissed my wife and kids without repelling them with my body odor. It would have been glorious.

Conclusion

Human effort without divine inspiration is not much different from trying to bike uphill. It requires initial concentrated effort only to get us to a state of profuse effort with little movement. Engaging the world without sacred imagination can and will often lead to burnout.

As we face mounting challenges with a stubborn pandemic that relentlessly destroys our plans, let’s hold on to an e-bike faith. One that calls us to action fueled by divine inspiration. One that reminds us of our human limitations but focuses on a limitless God. That is when we can soar to new heights as divine electricity propels us into new beginnings.

Faith Deconstruction: Trading Convictions for Better Questions

In a previous blog, I shared about my journey to find a more integrated faith. In this blog, I talk about the process of faith deconstruction that led me there. It was a long, winding road that took years. Nevertheless, I am grateful for every mile traveled.


Imagine your car breaks down. Because you don’t have the money to buy a new one, you decide to fix it. However, you do not know any mechanics. You still need a mode of transportation, and bikes are out of the question. That is when you decide to fix the engine yourself. Your first step is taking the engine apart, piece by piece, inspecting it to see what could be wrong. After this long process, you are ready to put the engine back together. Yet, to save time and effort, you decide to let go of the parts that are broken and those that are unnecessary. Instead, you rebuild it as a leaner version of the original to ensure you have a working car to take you from point A to point B.

That is what faith deconstruction looks like.

It is a long and laborious process of taking beliefs apart, inspecting what may not serve you any longer. Seeing the good and the bad and choosing to retain only what is needed for the journey ahead.

Photo by Tim Mossholder on Unsplash

A Personal Story

Deconstructing one’s faith is not for the faint of heart. I confess this was a task I dared not engage in for years. Why? In one word: fear. I was afraid I would lose my bearings, my sanity, my identity, my community, the respect of my loved ones, my very purpose of being. On top of that, of course, there was the fear of eternal damnation. That small nagging feeling that even if there was a 1% chance of it being true, that was enough not to risk it. Forget it: there was too much to lose, too many uncertainties on the other side, and after all, things were not that bad on this side. At least, so I thought.

It wasn’t like I woke up one day and said: “Now I am ready to deconstruct my faith!” As for many who underwent this process, it was a combination of events, disappointments, and irreconcilable situations that thrust me into the tempestuous sea of doubt. For some, it was the death of a loved one. For me, it was the death of a dream. Yes, in every story of faith deconstruction, there is death involved.

This is the way.

I have written before about my pain and disappointment. Suffice it to say that at every turn doors closed, and it became painfully clear that my vocational path would lead elsewhere. It wasn’t just about vocation but also about identity, meaning, and deep disappointment with Christians’ attitudes in the public square. Yes, you guessed it: Trump, the treatment of LGBTQ people, authoritarianism, and other unfortunate events.

Revisiting Old Certainties

Photo by Nathan Dumlao on Unsplash

If I was going to move further, I had to let go of some convictions. Like screws in an engine, they must first loosen before anything can be fixed. One key conviction was my view of the Bible. In my childhood faith, the Bible was the ultimate authority and source of all truth. It was never to be questioned, only to be submitted to. While this may have saved my European ancestors from papal oppression, it has now become the foundation for dogmatic thinking and stifling perspectives. Revisiting my view of the Bible was a key step in the journey of faith deconstruction.

The change went deeper than that. It meant letting go of certainty and inviting doubtful faith. My childhood faith taught me that the blessed assurance was beyond doubt. Letting go of this perceived security was a hard thing to do. It begged the following question: if the Bible is no longer the source of ultimate authority, then what is?

For years I had no answer to this question, which is why I stood paralyzed in this conundrum. On the one hand, I knew that placing this amount of faith in the letter of the Bible was no longer viable. On the other, I did not see any alternative that could adequately replace it.

Leaping into Untethered Faith

Would my experience now be the arbiter of truth? I am not that smart or spiritually enlightened; that much I knew for sure, so the answer had to lie elsewhere. Would science be the new source of authority? That, too, was a problematic choice given the evolving nature of scientific inquiry. What we know now can change with the next discovery. What then?

It was then that, instead of trying to answer the question, I encountered a new one: What if there is no absolute authority to hang my belief on? What if I will never really know for sure? The implications were terrifying but also surprisingly freeing.

Even so, they did not call for a simple change of perspective. Instead, they called for a leap of faith, a jump into untethered belief: a certainty that even though I could not articulate an ultimate authority for my faith, that faith was real nevertheless. Not just that, but there was a trust that a higher power would be on the other side to catch me. This is not a rejection of God but an acceptance that God is much more than we can describe or experience.

The place of encounter is where God lives – no other assurances are needed.

The Mystical Qubit: How Quantum Computers Inspire Christian Spirituality

AI Theology is about integrating different ways of thinking. Not just that, but also reflecting deeply on experiences even more than on ideas or technologies. In this blog, I reflect on my spiritual journey, looking for ways in which technology can help explain what at first seems unexplainable. Can that be so? In this post, I introduce the mystical qubit and show how quantum computers can help us understand Christian Mysticism.

Intrigued? Read on.

Christian Mysticism

My introduction to Christian Mysticism came in seminary. I honestly did not know this was a “thing.” My evangelical upbringing taught me that mysticism belonged to the New Age and spiritists and had no place in the “true” religion. That is somewhat ironic because, as I reflect on the charismatic experiences of my youth, they turned out to be, well…quite mystical.

Some definitions are in order. After all, what do I mean by Christian Mysticism? Author Carl McColman offers us this description:

Christian Mysticism is the spiritual encounter with a sacred mystery that cannot be put into words, but may be embodied through feelings, conscious awareness, experience, or intuition – or even through darkness or unknowing.

In short, it is the secret sauce of spirituality: the experiential, unexplainable element that has drawn billions of souls to religious practice through the millennia. It is often what we mean when we say we have had a spiritual experience.

To me, it was less of a discovery and more of an unveiling. It was an awakening to recognize that, throughout my life, I have practiced a type of Christian Mysticism. As I mentioned in the first paragraph of this section, my charismatic background is steeped in mysticism. I first encountered it as an 11-year-old boy, while praying with a friend out in nature. That was when I had a deep connection with the divine reality, one that would carry me through the many valleys of doubt I would go through in the following years.

Image by Pete Linforth from Pixabay

Quantum Computers as a Metaphor for Integrality

I imagine by now you have heard about quantum computers. If not, suffice it to say that they are the next generation of computing, one that promises to reshape hardware for decades to come. In short, they use quantum mechanics to run certain calculations far more efficiently than traditional computers. This increased capacity could open the way to a myriad of opportunities in AI and simulation that are not currently possible.

At its core, a quantum computer looks at information differently. Traditional computers process information in bits. These bits are switches, if you will, with two possibilities: 0 or 1. A bit can only be 0 or 1 at any given time, and the combination of billions and trillions of these switches is what enabled our current revolution in digitization.

A quantum computer’s foundational unit is the qubit. Unlike the traditional bit, it can be 1, 0, or both: due to the phenomenon of superposition, a qubit can simultaneously be 0 and 1 until it is measured.

Don’t think too hard about how that works as it is likely to give you a royal headache!

The point is, unlike traditional bits, the qubit is not either-or but both-and. At its very foundation, it can hold different signals simultaneously without forcing them to resolve to one or the other.

Quantum computing is the non-dual thinking computer!
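To make the contrast concrete, here is a minimal sketch in Python with NumPy. It is not a real quantum program, only a toy state-vector illustration I am adding for the sake of the metaphor: a classical bit holds a single value, while a simulated qubit holds two amplitudes at once and only resolves to 0 or 1 when measured.

```python
import numpy as np

# A classical bit is just one of two values.
classical_bit = 0  # either 0 or 1, never both

# A qubit can be represented as a 2-component complex vector:
# amplitudes for the |0> and |1> states.
qubit = np.array([1.0 + 0j, 0.0 + 0j])  # starts firmly in |0>

# The Hadamard gate places the qubit in an equal superposition of 0 and 1.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
qubit = H @ qubit  # amplitudes are now roughly [0.707, 0.707]

# Until measured, the qubit "holds" both possibilities at once.
probabilities = np.abs(qubit) ** 2  # [0.5, 0.5]

# Measurement forces a resolution: each readout collapses to 0 or 1.
samples = np.random.choice([0, 1], size=10, p=probabilities)
print("Probabilities before measurement:", probabilities)
print("Ten measurements:", samples)
```

The both-and character lives in the amplitudes; only the act of measurement forces an either-or answer, which is exactly the point of the metaphor.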

The Mystical Qubit

We inhabit a binary world where people and things are forced into either-or categories. There is simplicity and comfort in that. When translated into religious thinking, this creates the in-and-out dynamic of faith groups. Much of the Christian theological enterprise has been in the service of classifying “who is in” and “who is out.”

The allure of binary thinking is self-evident. It brings clarity and direction to areas that once seemed complex and intractable. It is not that binary thinking is useless. However, it can be very counterproductive when it prematurely forces conflicting ideas to resolve. Here lies the gift of the mystical qubit.

A mystical perspective is able to hold conflicting ideas without resolving them. This does not just apply to ideas but also to how we perceive each other. The mystic does not rush to judge the other prematurely. Instead, she is able to contemplate the full complexity of the other being: seeing their beauty and shortcomings as a whole.

When we are able to hold each other in a graceful gaze that is free from judgment, we move closer to God. That is the power of the mystical qubit and how quantum computers help us understand Christian Mysticism.

Losing My Religion to Find an Integrated Christianity

In the last six months, I have written primarily on AI ethics and AI for good on the AI Theology portal. In this piece, I would like to turn inward. Rather than providing informative pieces that keep a pulse on AI developments, I would like to dive deeper into a theological reflection on my spiritual experience – more theology, less AI. In this blog, I share the journey of letting go of militant convictions to find an integrated Christianity.

A Holistic Spirituality

Encouraged by my Western upbringing, I tended to compartmentalize spirituality, keeping it separate from the rest of my life. On the one hand, I had my spiritual life, consisting of practices like prayer, study, and worship. On the other hand, I managed the remainder of life through analytical, rational means, trying to balance the competing demands of being a father, husband, professional, and citizen. I knew these two parts were interrelated but found it difficult to integrate them. Integration invited too many questions, often making good fodder for deep thinking but offering little impetus for action. And so, I carried on with an internal spiritual life while also responding to the external circumstances brought by my many roles in society.

Photo by Willian Justen de Vasconcellos on Unsplash

Thankfully, this dynamic began to shift in the last few years. I have written before about my journey out of church life. In this wilderness, I have encountered companions who helped show me the way to a more integrated Christianity. I am far from mastering it, but I am content to be an avid disciple of its vast wisdom. When convictions wane and certainties loosen, we can finally receive the gift of the new. That is, the new wine of an integrated spirituality can only arrive in the new wineskins of an open heart.

What does that integrated spirituality look like? I really didn’t have the words until recently, and in this piece I will attempt to flesh it out for others. This is in no way an authoritative description of an emerging Christianity. It is, however, anecdotal evidence that a Christian spirituality can thrive outside the confines of organized religion. I hope you find it useful for your journey.

Shedding a Militant Worldview

The move to a more holistic spirituality could not happen without leaving some old convictions behind. One that I was happy to shed was a militant, dualistic view of the world. One of the most destructive theological fallacies of the last two centuries was the marrying of dispensationalism with political conservatism. The first filled believers with fear of imminent doom; the second mistook capitalism for Christianity. The mix created the insidious Christian nationalism that mistakes global cooperation for the mark of the beast.

Photo by Maxim Potkin on Unsplash

In practice, what that meant to me was that the Christianity I was raised in was often punctuated by a need to fight real and imaginary enemies. Our spiritual practices were part of a military mobilization for the kingdom of God – as if Jesus needed an army of freedom fighters (or terrorists) to bring his kingdom to earth. Spiritual warfare was an indirect way to address the social anxiety of losing cultural influence.

This militarism also made me suspicious of any mystical experience outside the very narrow definitions deemed acceptable by evangelical orthodoxy. That is, experiences had to be “biblical,” lest they become an opportunity for the enemy. In this militarized focus, many were hit by friendly fire. Spiritual experiences that deviated from an arbitrary “biblical” norm, especially those of rival Christian denominations, became a threat. This, in effect, closed me off from going deeper into the Christian tradition and learning from the mystics. After all, when you are part of the church that will usher in Jesus’ return, you have no need to learn from history.

Integral Christianity

As my journey moved away from the centers of official Christendom, I grew increasingly isolated. Thankfully, I recently learned about the Integral Christian Network. That was when I discovered mystical Christianity anew. 

Reading books, studying movements, and discussing their implications are all helpful ways to learn. They are, however, poor substitutes for experiencing spiritual practices in community. That is how any faith is best transmitted and preserved through generations. So, while I have had my share of studying the Christian mystics of the past and have even read their important writings, joining an ICN Wespace allowed me to go a step further.

This Zoom-facilitated small group gave me a supportive space to explore mystical Christianity, unbounded by the militant restrictions of my upbringing. I confess I was scared and at times skeptical. The talk about spirit guides and speaking with angels made me uncomfortable at first. Yet as I pressed forward, I received an inner affirmation that the Creator would be there to steer me away from error. As I get to know a bigger God, the fear of error diminishes. That is when I am free to fly.

Conclusion

The movement from dualism to mystical openness did not happen overnight. Instead, it came from a long process of dying and being born anew, with the help of others along the way. In an integrated Christianity, I don’t claim to have found a new orthodoxy to hang my hat on. I am only here to report what my experience has shown. It is neither authoritative nor meaningless.

Yet, I do hope that by learning about my experience you can look to your own. The path of spiritual growth will rarely look the same for two individuals, but thankfully we can always learn from each other. And that is why I leave you with a final question:

Where is your spiritual journey leading you to?


This post is a snippet from a larger article I published on Medium.