How Knight Rider Predicts the Future of AI-Enabled Autonomous Cars

The automobile industry is about to experience transformative disruption as traditional carmakers respond to the Tesla challenge. The battle is not just about whether to go from combustion to electric; it extends to the whole concept of motorized mobility. E-bikes, car-sharing, and autonomous driving are displacing the centrality of cars, not just as a means of transportation but also as a source of status and identity. The chip shortage also demonstrated the industry's growing reliance on computers, exposing the limits of growth as cars become ever more computerized. In this world of uncertainties, could Knight Rider shed some light on the future of autonomous cars?

As a kid, I dreamed of having a KITT (Knight Industries Two Thousand), a car that would work on my voice command. Little did I know that many of the traits in the show are now, nearly 40 years later, becoming a reality. To be sure, the show did not age well in some respects (David Hasselhoff’s sense of fashion, for one, and the tendency to show men’s bare, hairy chests). Yet, on the car tech, it actually hit a few home runs. In this blog, I want to outline some traits that came up in the show and turned out to be well aligned with the direction of car development today.

Lone Ranger Riding a Dark Horse

Before proceeding, let me give you a quick intro to Knight Rider‘s plot. This 1980s series revolves around the relationship between Michael, a lone-ranger type out to save the world, and his car, KITT. The car, a souped-up version of a Pontiac Trans Am, is an AI-equipped vehicle that can self-drive, talk back to its driver, search databases, remotely unlock doors, and much more.

In the first episode, we learn that Michael got a second chance in life. After being shot in the face by criminals, he undergoes plastic surgery and receives a new identity. Furthermore, a wealthy man bequeaths him the supercar, along with the support of the team that built it. On his deathbed, the magnate tells Michael the truth that will drive his existence: “One man can make a difference.”

Yes, the show does suffer from an excess of testosterone and a royal lack of melanin.

Yet, I contend that Michael is not the main character of the show. KITT, the thinking car, steals the show with wit and humor. The interaction between the two is what turns an average sci-fi flick into a blockbuster success. You can’t help but fall in love with the car.

Knight Rider Autonomous Car Predictions

  • Auto-pilot – this is the precursor of autonomous driving. While systems to keep speed constant have been common for decades, true autonomous driving is a recent advance. It is now an option for new Tesla models (albeit at a hefty $10K extra) and partially present in other models as auto-parking, lane detection, and automatic braking. This feature was not hard to predict. Maybe the surprise is not that it happened but how long it took. I suspect large automakers got a little complacent about innovation as they sold expensive gas-guzzlers through most of the ’90s and early ’00s. It took an outsider to force them back into research.
  • Detecting drivers’ emotions – At one point in the debut episode, KITT informs Michael that his emotional state is altered and he might want to calm down. Michael responds angrily that the car would talk back to him. While this makes for a funny bit, it is also a good prediction of recent facial-recognition work using AI, which attempts to ascertain an individual’s emotional state from facial expression alone. There is a lot of controversy around this work, but the show deserves credit for its foresight. A car that tells you to “calm down” may be coming your way in the next few years.
  • Remote manipulation of electronic devices – This is probably the most far-sighted trait in the show. Even today, it is difficult to imagine automated cars that can interact with the world beyond their chassis. Yet this, too, is in the realm of possibility. Emerging Internet of Things (IoT) technology will make it a reality. The idea is that devices, appliances, and even buildings can be connected through the Internet and run algorithms. It envisions a world where intelligence is not limited to living beings or phones but extends to all objects.

Conclusion

Science fiction works capture the imagination of the time they are written. They are never 100% accurate but can sometimes be surprisingly predictive. The show’s creators did not envision a future of flat screens and sleek dashboard designs like those we have today. On the other hand, they envisioned aspects of IoT and emotional AI that were unimaginable at the time. In this case, besides being entertainment, they also helped create a vision of a future to come.

Reflecting on this 40-year-old show made me wonder about current sci-fi and its own visions of what is to come. How will coming generations look back at our present visions of their time? Will we reveal gross blind spots like Knight Rider’s white male individualism? Will we inspire future technology such as IoT?

This only highlights the importance of imagination in history-making. We build the future now, inspired by our contemporary dreams. Hence, it is time we start asking more questions about our pictures of the future. How much do they reflect our time, and how much do they challenge us to become better humans? Even more importantly, do they promote the flourishing of life or an alternative cyberpunk society? Whether it is Knight Rider’s depiction of autonomous cars or Oxygen‘s view of cryogenics, they reflect a vision of a future captured at a historical moment.

How can Machine Learning Empower Human Flourishing?

As a practicing Software Product Manager currently working on the third integration of a Machine Learning (ML) enabled product, my understanding of and interaction with models is much more quotidian and, at times, downright boring. But it is precisely this form of ML that needs more attention, because ML is the primary building block of Artificial Intelligence (AI). In other words, in order to get AI right, we need to first focus on how to get ML right. To do so, we need to take a step back and reflect on the question: how can machine learning work for human flourishing?

First, we’ll take some cues from liberation theology to properly orient ourselves. Second, we need to understand how ML models are already impacting our lives. Last, I will provide a pragmatic list of questions for those of us in the technology field that can help move us towards better ML models, which will hopefully lead to better AI in the future. 

Gloria Dei, Vivens Homo

Let’s consider Elizabeth Johnson’s recap of Latin American liberation theology. To its stock elements–the preferential option for the poor, the Exodus narrative, and the Sermon on the Mount–she adds a consideration from St. Irenaeus’s phrase Gloria Dei, vivens homo. Translated as “the glory of God is the human being fully alive,” this means that human flourishing is God’s glory manifesting in the common good. The common good here is not simply an economic factor. Instead, it is an intentional move towards the good of others by seeking to dismantle the structural issues that prevent flourishing.

Now, let’s dig a bit deeper: what prevents human flourishing? Johnson points to two things: 1) inflicting violence on others, or 2) neglecting their good. Both of these translate “into an insult to the Holy One” (82). Not only do we need to refrain from inflicting violence on others (which we can all agree is important), but we also need to be attentive to their good. Now, let’s turn to the current state of ML.

Big Tech and Machine Learning

We’ll look at two recent works to understand the current impact of ML models and hold them to the test. Do they inflict violence? Do they neglect the good? The 2020 investigative documentary (with a side of narrative drama) The Social Dilemma (Netflix) and Cathy O’Neil’s Weapons of Math Destruction are both popular and accessible introductions to how actual ML models touch our daily lives.

The Social Dilemma takes us into the fast-paced world of the largest tech companies (Google, Facebook, Instagram, etc.) that touch our daily lives. The primary use case for machine learning in these companies is to drive engagement by scientifically focusing on methods of persuasion. More clicks, more likes, more interactions; more is better. Except, of course, when it isn’t.

The film sheds light on how the desire to increase activity and monetize products has led to social media addiction and manipulation, and it even provides data on increased rates of suicide among pre-teen girls. Going further, the movie points out that, for these big tech companies, the applications themselves are not the product; humans are. That is, the gradual but imperceptible change in behavior itself is the product.

These gradual changes are fueled and intensified by hundreds of small daily randomized A/B tests that change minor variables to influence behavior. For example, do more people click on this button when it’s purple or green? With copious amounts of data flowing into the system, the models become increasingly accurate, so the model knows (better than humans do) who is going to click on a particular ad or react to a post.
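
As an illustrative sketch (the button colors, click rates, and function name here are invented, not taken from the film), a randomized A/B test looks roughly like this:

```python
import random

def run_ab_test(rate_a, rate_b, n_users=10_000, seed=42):
    """Simulate a simple A/B test: each user is randomly assigned
    variant A or B and 'clicks' with that variant's true rate."""
    rng = random.Random(seed)
    clicks = {"A": 0, "B": 0}
    shown = {"A": 0, "B": 0}
    for _ in range(n_users):
        variant = rng.choice(["A", "B"])
        shown[variant] += 1
        if rng.random() < (rate_a if variant == "A" else rate_b):
            clicks[variant] += 1
    # Observed click-through rate per variant
    return {v: clicks[v] / shown[v] for v in ("A", "B")}

# Hypothetical true click rates: purple button 3%, green button 5%
observed = run_ab_test(rate_a=0.03, rate_b=0.05)
print(observed)
```

At this scale the noise washes out and the platform simply ships whichever variant produced more clicks; run hundreds of these a day and behavior shifts without anyone noticing a single test.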

This is how they generate revenue. They target ads at people who are extremely likely to click on them. These small manipulations and nudges to elicit behavior have become such a part of our daily lives we no longer are aware of their pervasiveness. Hence, humans become commodities that need to be continuously persuaded. Liberation theology would look to this documentary as a way to show concrete ways in which ML is currently inflicting violence and neglecting the good. 

Machine Learning Outside the Valley

Perhaps ‘normal’ companies fare better? Non-tech companies are getting in on the ML game as well. Unlike tech companies that focus on influencing user behavior for ad revenue, these companies use ML to reduce the workload of individual workers, reduce headcount, and make more profitable decisions. Here are a few types of questions they would ask: “Need to order stock and determine which store it goes to? Use Machine Learning. Need to match candidates to jobs for your staffing agency? Use ML. Need to flag customers who are going to close their accounts? ML.” And the list goes on.

Cathy O’Neil’s work gives us insight into this technocratic world by sharing examples from credit card companies, predictions of recidivism, and for-profit colleges, and even challenges the US News & World Report college rankings. O’Neil coins the term “WMD”, Weapons of Math Destruction, for models that inflict violence and neglect the good. The three criteria of WMDs are models that lack transparency, grow exponentially, and cause a pernicious feedback loop; it’s the third that needs the most unpacking.

The pernicious feedback loop is fed by selection biases in the original data set. The example she gives in chapter 5 is PredPol, a big-data startup whose product is used by police departments to predict crime. The model learns from historical data in order to predict where crime is likely to happen, using geography as its key input. The difficulty is that when police departments choose to include nuisance data in the model (panhandling, jaywalking, etc.), the model becomes more likely to predict new crime in that location, which in turn prompts the police department to send more patrols to that area. More patrols mean a greater likelihood of seeing and ticketing minor crimes, which in turn feeds more data into the model. In other words, the model becomes a self-fulfilling prophecy.
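
The loop is easy to demonstrate with a toy simulation (all numbers invented; this is in no way PredPol’s actual model). Two districts have identical true rates of minor offenses, but one starts with a single extra recorded incident, and patrols chase the records:

```python
import random

def patrol_feedback(rounds=20, patrols=100, seed=0):
    """Toy model of the pernicious feedback loop. Both districts have
    the SAME underlying rate of minor offenses, but district 0 starts
    with one extra recorded incident (say, nuisance data was included).
    Each round, patrols are allocated in proportion to crime recorded
    so far, and every patrol records an offense with the same 50%
    chance in either district."""
    rng = random.Random(seed)
    recorded = [2, 1]  # district 0 carries the extra nuisance record
    for _ in range(rounds):
        total = sum(recorded)
        for district in range(2):
            allocated = round(patrols * recorded[district] / total)
            recorded[district] += sum(
                1 for _ in range(allocated) if rng.random() < 0.5
            )
    return recorded

print(patrol_feedback())  # district 0's recorded crime pulls far ahead
```

Nothing about district 0 is actually more criminal; the initial bias in the records alone determines where the patrols, and therefore the future records, go.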

A Starting Point for Improvement

As these two works show, we are far from the goal of human flourishing. Both point to many instances where ML models are not only neglecting the good of others but also inflicting violence. Before we can reach the ideal of Gloria Dei, vivens homo, we need to make a liberationist move within our technology to dismantle the structural issues that prevent flourishing. This starts at the design phase of these ML models. At that point, we can ask key questions to address egregious issues from the start. This would be a first step toward making ML models (and later AI) work for human flourishing and God’s glory.

Here are a few questions that will start us on that journey:

  1. Is this data indicative of anything else (can it be used to prove another line of thought)? 
  2. If everything went perfectly (everyone took this recommendation, took this action), then what? Is this a desirable state? Are there any downsides to this? 
  3. How much proxy data am I using? (Proxy data is data that ‘stands in’ for other data.)
  4. Is the data balanced (age, gender, socio-economic)? What does this data tell us about our customers? 
  5. What does this data say about our assumptions? This is a slightly different cut from above, this is more aimed at the presuppositions of who is selecting the data set. 
  6. Last but not least: zip codes. As zip codes are often a proxy for race, use them with caution. Perhaps use state-level data or three-digit zip prefixes to average out the results, and monitor outcomes by testing for bias.
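
A minimal sketch of that last suggestion (the helper name and sample ZIP codes are hypothetical, not from any particular library):

```python
def coarsen_zip(zip_code: str, keep: int = 3) -> str:
    """Keep only the first `keep` digits of a 5-digit ZIP code and
    mask the rest, e.g. '53202' -> '532xx'. Coarsening blunts the
    ZIP's power as a proxy for race or income, but it is mitigation,
    not a cure: prefixes still correlate with demographics, so the
    resulting model must still be tested for bias."""
    return zip_code[:keep] + "x" * (len(zip_code) - keep)

applicants = [{"id": 1, "zip": "53202"}, {"id": 2, "zip": "10027"}]
for row in applicants:
    row["zip"] = coarsen_zip(row["zip"])
print(applicants)  # [{'id': 1, 'zip': '532xx'}, {'id': 2, 'zip': '100xx'}]
```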

Maggie Bender is a Senior Product Manager at Bain & Company within their software solutions division. She has a M.A. in Theology from Marquette University with a specialization in biblical studies where her thesis explored the implications of historical narratives on group cohesion. She lives in Milwaukee, Wisconsin, enjoys gardening, dog walking, and horseback riding.

Sources:

Johnson, Elizabeth A. Quest for the Living God: Mapping Frontiers in the Theology of God (New York: Continuum, 2008), 82-83.

O’Neil, Cathy. Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy (New York: Broadway Books, 2017), 85-87.

Orlowski, Jeff. The Social Dilemma (Netflix, 2020), 1 hr. 57 min., https://www.netflix.com/title/81254224.

What would a Theology of AI Look Like?

The influence of secular thinking in our global society is powerful enough to make a project like a “theology of Artificial Intelligence” appear to be a doomed enterprise or a contradiction in terms at best, and sheer nonsense at worst. Is there a theology of the microchip? The integrated circuit? The Boolean gates?  And even if one happened to think that God is closer to software than hardware – is there a theology of AI or machine learning?

Put so plainly and abruptly, these questions can easily lead to the conclusion that such a theology is impossible to make sense of. Just as a secular opinion (surreptitiously powerful even among adherents of religions) often hastens to declare that “religion and science simply cannot go together”, the same would be assumed about theology and modern technology – it is like yoking a turtle and a jaguar together.

Moreover, if one approached the incongruence between “theology” and “artificial intelligence” by transposing it to the field of anthropology, one would again face the same problem on another plane. What does a human being practicing religion – a homo religiosus – have to do with a human being – perhaps the very same human being – as a user of artificial intelligence? Is it not the case that historical progress has by our days left behind not only the relevance of religion but also the very humanism that used to enshrine the same human being in question as sacred? Is secular humanism in our day not giving way to things like the “transhuman” and the “posthuman”? 

YouTube Liturgies

But this secular-historical argument is not difficult to turn upside down. When it comes to human history, it is in the nature of things past that they remain with us; what is more, religious forms of consciousness that many would deem atavistic today not only persist but can return with new vigor in the contemporary digital environment. They may strike many as hybrid forms of consciousness, in which the day before yesterday stages an intense and perplexing comeback.

Take the example of Christian devotion in an online environment like YouTube. Assisted, surrounded, and finally motivated by the artificial intelligence of YouTube, a Christian believer will soon find herself in the intensifying bubble of her own religious fervor. Her worship of Jesus Christ in watching devotional videos is quickly and easily perceived by YouTube’s algorithms, which will soon offer her historical documentaries, testimonies, Christian talk shows, subscriptions to Christian channels, and the like. In the wake of this spiraling movement, her religious consciousness will be very different and, in a sense, more intense than that of a premodern devotee of Christ – a consciousness steeped in a medium orchestrated by artificial intelligence.
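
That spiraling movement can be caricatured in a few lines of code (a toy reinforcement rule with invented categories, not YouTube’s actual recommender):

```python
import random
from collections import Counter

def next_category(history, categories, rng):
    """Crude 'more of what you watch' rule: pick the next video's
    category with probability proportional to 1 + times watched."""
    counts = Counter(history)
    weights = [1 + counts[c] for c in categories]
    return rng.choices(categories, weights=weights)[0]

rng = random.Random(7)
categories = ["devotional", "music", "news", "cooking"]
history = ["devotional"]  # the viewer starts with one devotional video
for _ in range(200):
    history.append(next_category(history, categories, rng))

# A small initial bias tends to be reinforced: the more a category is
# watched, the more it is offered, and so the more it is watched.
print(Counter(history).most_common())
```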

It follows from the pervasive presence of artificial intelligence in today’s society in general, and in what we call “new media” in particular, that religious content, like any other kind of content, may invite an inquiry into the nature of AI. But a note of caution is in order here. The terms “religious” and “religion” in this context must include much more than the semantics of mainstream religious traditions like Christianity.

For instance, the above example of artificial intelligence orchestrating Christian experience, after all, is perfectly applicable to any online cult of personality. A teenager worshipping Billie Eilish will experience something very similar to Christian worship on YouTube whose algorithms do not make any methodical distinction between a pop singer and a Messiah. 

Online Worship and Techno-Totalitarianism

In a theology of AI what really matters online is not positive religious content but a certain religious attitude intensified and eventually motivated by Artificial Intelligence. An online religious attitude includes much more than any cult of personality and may extend to the whole of online existence. As researchers of contemporary cultural anthropology and sociology of religion have pointed out, many users of digital technology find a “higher life” and a “more authentic self” online, at the same time as experiencing a mystical fusion with the entirety of the global digital cloud.[1] The relocation of the sacred and the godlike in the realm of the digital is as obvious here as the influence of a technological version of New Age spirituality which is often called “New Edge” by researchers and devotees alike.

This “techno-religion” is fully subservient to what can be termed techno-totalitarianism. The digital technology and environment of our times perfectly fit the definition of totalitarianism: it pervades and knits tightly together all aspects of society while enabling the full subjugation of the individual to a ubiquitous and anonymous power. The totalitarian and curiously religious presence of the secular, “neutral” and functional algorithms of artificial intelligence evokes both a religious past and a religious future.

Algorithmic Determinism

This is another example of the historical dialectic between religion and secularisation. The secular probability theory underpinning these secular algorithms (and predicting the online behavior of users) has roots in the Early Modern statistical theory of prediction modeled on the idea of God’s predestination.[2] Ironically, the idea of divine predestination is making a gruesome return in contemporary times as the increasing bulk of big data at the disposal of AI algorithms means more and more certainty about user behavior and, as a consequence, increasingly precise prediction for and automation of the human future. It is, therefore, safe to say that there can indeed be such a thing as a theology of AI and machine learning.

The division between those who are elected and those who are not increasingly defines various sectors of contemporary information society, such as the financial market. The simple truth of a formula like “the rich get richer, the poor poorer” has deep roots in the reality of inscrutably complex AI algorithms running in the financial sector, which determine not only trade on Wall Street but also the success or failure of many millions of small cases like individual credit applications.

Algorithms decide who obtains credit and at what interest rate. The more data about individual applicants they have at their disposal, the more accurately they can predict their future financial behavior.[3] As in many other fields defined by AI, it is not difficult to recognize here how prediction slips into modification, and modification into a techno-determinism that seals the fate of the world’s population. Indeed, this immense power over individuals, holding their past, present, and future together with iron clips, is nothing short of the force of a new religious realm and a wake-up call to Christian theology from its dogmatic slumber.

Conclusion

It is clear that if there is a positive theology of artificial intelligence as such it must go far beyond an analysis of explicit, positive “religious content” in today’s online environment.  If so, one question certainly remains which is impossible to answer within the confines of this blogpost: what would a negative theology of AI look like, a theology in which an engagement with AI would go hand in hand with a distance from and criticism of it?


[1]  cf. Stef Aupers & Dick Houtman (eds.), Religions of Modernity: Relocating the Sacred to the Self and the Digital (Leiden-Boston: Brill, 2010).

[2] This idea is spelled out in Virgil W. Brower, “Genealogy of Algorithms: Datafication as Transvaluation”, Le foucaldien 6, no. 1 (2020): 11, 1-43.

[3] This is one of the main arguments in Cathy O’Neil, Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy (New York: Crown, 2016).


Gábor L. Ambrus holds a post-doctoral research position in the Theology and Contemporary Culture Research Group at The Charles University, Prague. He is also a part-time research fellow at the Pontifical University of St. Thomas Aquinas, Rome. He is currently working on a book on theology, social media, and information technology. His research primarily aims at a dialogue between the Judaeo-Christian tradition and contemporary techno-scientific civilization.