Wandering Earth Review: A Chinese Vision of Apocalyptic Hope

Sci-Fi authors are the visionaries of our time, those who can see where society is going and imagine future scenarios that inspire us to live a better present. While their books introduce their ideas to the public, their cinematic expressions are what bring them to life. This review claims that the Chinese blockbuster Wandering Earth does that for Cixin Liu’s writing of apocalyptic hope. The movie is not just entertainment but pertinent material for theological reflection. Its message of hope and cooperation, similar to Eden and Oxygen, sheds light on how we can face the global challenge of averting climate catastrophe.

A Daring Apocalyptic Vision

Apocalyptic comes from a Greek word that means unveiling. It is about making a hidden meaning show up in plain sight. It is also the name for an ancient type of literature that foresaw end-of-the-world scenarios. In the West, the biblical books of Daniel and Revelation are the most well-known examples of this type of writing.

Science Fiction is the contemporary version of this ancient literature. If you pay attention, most stories in this genre revolve around a moral dilemma that is resolved by the end. While they paint a future world, their implications speak directly to the present environment of the readers. In this respect, Wandering Earth is no different. The movie grapples with how humanity responds to impending doom.

The story is set in a near future where natural disasters have become commonplace. A warming climate, droughts, torrential rains, and hurricanes occur with greater frequency because the Sun is dying, rapidly expanding toward Earth. Humanity has 100 years to come up with a plan.

What is the plan? Send a few souls to space in order to start a new civilization? No. Shoot the Sun with a gigantic nuclear bazooka? Nope. How about moving the whole friggin Earth out of the Solar System? Yep, that’s the plan. How? By building enormous reactors in 10,000 places on Earth, burning mountains, and sheltering 3.5 billion people underground.

Talk about a grand plan!

The idea is to shuttle the whole planet 4.2 light-years away to a new star system where it can find a new sun. How long will that take? Not 100 or even 1,000 years but 2,500 years to complete! Needless to say, this multi-generational project entails immense sacrifice from present generations so that future descendants can simply live. The main storyline revolves around one family and their fate in this great scheme. It defines hope as the collective will to persevere for a better future.

Photo by Rod Long on Unsplash

A Distinct Story Line

I hope by now you can see what makes the Wandering Earth different from other epic doomsday blockbusters. As the world is facing insurmountable challenges, humanity opts for a daring long-view solution. We also see this theme in Cixin Liu’s award-winning novel The Three-Body Problem, where the planet learns of alien invaders 400 years out.

Could this be a metaphor for climate change? Maybe, but it sure is a refreshing alternative to the once-and-done happy ending prevalent in Hollywood cinematic stories. The deeper question that confronts us is not whether we have what it takes to avert an imminent disaster but whether we have the generational resolve to work toward long-term plans of salvation.

In this way, millenary Chinese culture offers the long-view perspective as an alternative route to solving intractable global problems. With that said, the movie is still a blockbuster for a reason. There is no shortage of entertaining visuals, and the tech is stunning. On the downside, the personal storylines could have been a bit more polished, and the plot is hard to follow at times. Even so, the overall result is still an impressive accomplishment.

Screenshot from a Wandering Earth movie scene

The Thousand-Year Reign

Now, let’s turn to some theological reflection. The book of Revelation in the New Testament is filled with mysterious imagery. While many throughout time have claimed to understand it, the imagery continues to elude modern readers and believers alike. As I reflected on the movie, I wondered how a long view of redemption would interact with the Biblical story. Hence, this review probes how Wandering Earth’s apocalyptic hope squares with Biblical apocalyptic literature.

At first glance, a reading of the last book of the NT, heavily influenced by dispensational theology, may yield a sense of a quick succession of events. That is, the doomsday scenario will unfold in a matter of years and certainly within a generation. There is no place for a long-view plan in this perspective.

However, the text may not lend itself to these certainties. Any text built on imagery is wide open for interpretation. Hence, when John the Revelator talks about 3 1/2 years, these may not be literal years. Furthermore, chapter 20 introduces the idea of the Millennial reign. This is a period of peace where the faithful reign with God as our ultimate enemy is imprisoned and unable to thwart our plans.

This is not to say that the Bible suggests a millenary plan to move earth across galaxies. The idea is more of a dramatic liberation followed by a long period of peace. With that said, the millennial reign does open the way for a human-divine partnership in the service of earth stewardship. In this way, the 1,000 years, literal or not, provides a nod to a long view.

Screenshot from Wandering Earth Movie

Re-Considering Wandering Earth’s Long View

At the end of the day, movies like Wandering Earth are meant primarily to entertain us with fantastic visuals and unexpected plot twists. Hence, I don’t claim to speak for the author or movie director. However, there is enough there to give us reason to ponder. In an age where multiple sources fight for our attention in a split second of a finger scroll, it is wise to expand our time horizons. An inordinate focus on the immediate crisis can rob us of the hope and resolve to build a sustainable future for the generations to come. If for nothing else, the movie is worth your time for that alone.

Furthermore, the interaction with the long view also allowed me to re-think the meaning of millenary biblical texts. While Christian theology continues to over-emphasize an imminent redemption through Christ’s return, we do well to pause and consider a longer time horizon. If anything, followers of Christ have been anticipating a return for over 2,000 years. Could it be that we missed something about how this is to unfold? As we grapple with these questions, it is wise to engage Eastern voices offering alternative perspectives. As this review stated earlier, Wandering Earth’s apocalyptic hope can help us better understand a Christian view of the future as well.

The Glaring Omission of Religious Voices in AI Ethics

Pew Research released a report predicting the state of AI ethics in 10 years. The primary question was: will AI systems have robust ethical principles focused on the common good by 2030? Of the over 600 experts who responded, two-thirds did not believe this would happen. Yet, this was not the most surprising thing about the report. Looking over the selection of respondents, no clergy or scholars of religion were included. In the burgeoning field of AI ethics research, we are missing the millenary wisdom of religious voices.

Reasons to Worry and Hope

In a follow-up webinar, the research group presented the 7 main findings from the survey. They are the following:

Concerning Findings

1. There is no consensus on how to define AI ethics. Context, nature and power of actors are important.

2. Leading actors are large companies and governments that may not have the public interest at the center of their considerations.

3. AI is already deployed through opaque systems that are impossible to dissect. This is the infamous “black box” problem pervasive in most machine learning algorithms.

4. The AI race between China and the United States will shape the direction of development more than anything else. Furthermore, there are rogue actors that could also cause a lot of trouble.

Hopeful Findings

5. AI systems design will be enhanced by AI itself, which should speed up the mitigation of harmful effects.

6. Humanity has made acceptable adjustments to similar new technologies in the past. Users have the power to bend AI uses toward their benefit.

7. There is widespread recognition of the challenges of AI. In the last decade, awareness has increased significantly, resulting in efforts to regulate and curb AI abuses. The EU has led the way on this front.

Photo by Wisnu Widjojo on Unsplash

Widening the Table of AI Ethics with Faith

This eye-opening report confirms many trends we have addressed in this blog. In fact, the very existence of AI Theology is proof of #7, showing that awareness is growing. Yet, I would add another concerning trend to the list above: the narrow group of people at the AI ethics dialogue table. The field remains dominated by academic and industry leaders. However, the impact of AI is so ubiquitous that we cannot afford this lack of diversity.

Hopefully, this is starting to change. A recent New York Times piece outlines the efforts of the AI and Faith network. The group consists of an impressive list of clergy, ethicists, and technologists who want to bring their faith values to the table. They seek to introduce the diverse universe of religious faith into AI ethics, providing new questions and insights into this important task.

If we are to face the challenge of AI, why not start by consulting thousands of years of human wisdom? It is time we add religious voices to the AI ethics table as a purely secular approach will ostracize the majority of the human population.

We ignore them to our peril.

Can AI Empower the Poor or Will it Increase Inequality?

Faster, better, stronger, smarter. These are, with no exaggeration, the revolutionary goals of AI. Faster trading is revolutionizing capitalism.[1] Better diagnostics are revolutionizing health care.[2] Stronger defense systems are revolutionizing warfare.[3] And smarter everything will revolutionize all aspects of our lives, from transportation,[4] to criminal justice,[5] to manufacturing,[6] to science,[7] and so forth. But can AI also revolutionize our relationship to the poor?

According to International Data Corporation, AI is a $157 billion industry and expected to surpass $300 billion by 2024.[8] What’s behind this figure, however, is that “AI” is being developed by companies for specifically targeted goals. While some organizations, like Google’s DeepMind, have Artificial General Intelligence as their goal, nearly every current breakthrough and application of AI is targeted toward specific industries. The money spent on AI is, therefore, seen primarily as an investment—the technology will yield much greater profit than human-based approaches.

This shouldn’t surprise us. As they say, money makes the world go around. But it does create a moral problem for Christians. Is it really a good thing for AI to be developed around the primary goal of increasing wealth? According to Latin American Liberation Theology, the answer is no.

Photo by Roberto Huczek on Unsplash

Liberation Theology

Latin American Liberation Theology, distinct from, say, Black Liberation Theology or Minjung Theology, is a theological tradition rooted in Roman Catholic communities in Latin America. The tradition, as explained by Gustavo Gutierrez, is rooted in a Marxian approach to society that develops theology through “praxis.” Praxis, for Gutierrez, is a cyclical process of letting one’s theology and activity in the world mutually influence each other.[9] Theology should not be removed from the experiences of the campesinos. A theology stuck in the “ivory tower” is, in the view of liberation theology, a dead theology.

Liberation theology has had a large impact on Catholic Social Teaching from the late 60s on. One of the most popular contributions is the so-called “option for the poor,” an idea taken from the 1968 Latin American Episcopal Conference in Medellin, Colombia. The basic idea, which Pope John Paul II validated in his 1987 encyclical Sollicitudo Rei Socialis, is that our social perspective should prioritize the needs and experiences of the poor above all else. The idea has undergone some modifications in more recent theologians’ use of it, but the core remains that those most underprivileged by society should get the greatest attention from Christians.

But what does this have to do with AI?

The Civilization of Wealth and the Civilization of Poverty

The Jesuit martyr Ignacio Ellacuría proposed the concepts of a “Civilization of Wealth” and a “Civilization of Poverty.” Like Luther’s Two Kingdoms or St. Augustine’s Two Cities, these antagonistic civilizations sit as dipoles for Christians. The Civilization of Wealth, for Ellacuría, is modeled by so-called “first world” countries like the United States and Western Europe. It’s the goal of growth, of efficiency, of progress and wealth. In this model, it is “the possessive accumulation, by individuals or families, of the maximum possible wealth [that is] the fundamental basis of one’s own security and the possibility of an ever-increasing consumerism as the basis of one’s own happiness.”[10] The problem with this model, Ellacuría’s student Jon Sobrino notes, is that it “does not meet the basic needs of everyone, and…that it does not build spirit or values that can humanize people and societies.”[11] In short, the goal of technological progress and “faster, better, stronger, smarter” that the Civilization of Wealth pursues is a goal that lets some people starve while others are rich (cf: Thomas Malthus), but it also reduces human beings and the world around us to use objects. Max Weber called this phenomenon “instrumental rationality”—the world becomes an assemblage of numerical values, which, for capitalists, can be converted to money while, for data scientists, can be converted to data.

I don’t think it is too much to suggest that nearly all AI projects currently underway operate under these goals of the Civilization of Wealth. The Civilization of Poverty, in contrast, “rejects the accumulation of capital as the engine of history, and the possession-enjoyment of wealth as the principle of humanization; rather, it makes the universal satisfaction of basic needs the principle of development, and the growth of shared solidarity the basis of humanization.”[12] This model may not be the “wealth of nations” Adam Smith promised nearly 250 years ago, but it is a civilization where the poor and hungry are not reduced to poverty statistics. The dedication to human rights and the virtue of solidarity over progress leads to collective flourishing, even if it does not lead to leaps and bounds in science and technology. There may be no AGI in the Civilization of Poverty, but there will also be no discarded human beings.

A New Role for AI: The Voice of the Poor?

The place of AI in the liberation theology I have presented is quite unfavorable, but I believe it is not the last word. The “option for the poor” is a privileged but poorly developed notion in Catholic thought. As both an undergraduate and a grad student, I often heard this phrase tied to the call to be “voices for the voiceless.” The sentiment is noble, but how can we really have an “option” for the poor if we don’t actually hear from the poor? Why not give the “voiceless” their own voice? Therein lies my biggest problem with liberation theology as well: while Ellacuría and Sobrino are prophetic voices, they were also middle-class Spanish Jesuits, not formed within the third-world poverty they encountered.

Since AI develops its “understanding” based on the data and rules programmed into it, the problem of AI serving the Civilization of Wealth extends only as far as the programmers themselves pursue those goals. AI trained on data sets created by the poor, or AI programmed by the poor, could, theoretically, be an actual voice for the poor. An AI that can help shape policies directed toward the Civilization of Poverty, because its references are taken from the voices of the poor, does not have the same limitations or blind spots that current AI projects suffer from.

Ultimately, it remains to be seen whether AI can or will be an instrument to promote the flourishing of the poor or if its uses will remain tethered to the Civilization of Wealth. As Christians, our task must be toward building the Kingdom of God, a place where, Isaiah reminds us, all eat and drink without money and without cost (Isaiah 55:1).


Levi Checketts. Photo by Jiyoung Ko

Levi Checketts is an incoming Assistant Professor of Religion and Philosophy at Hong Kong Baptist University and an assistant pastor at Jesus Love Korean United Methodist Church in Cupertino, California. His research focuses on ethical issues related to new technologies, with a special interest in the transhumanist movement and Artificial Intelligence. He has been published in Religions, Theology and Science, and Techne: Research in Philosophy and Technology and is currently working on a book related to the challenge of our obligations to the poor and AI. When not teaching or preaching, Levi likes to play RPGs and point-and-click adventure games and go sightseeing with his wife and daughter.


[1] https://builtin.com/artificial-intelligence/ai-trading-stock-market-tech

[2] https://www.healtheuropa.eu/technological-innovations-of-ai-in-medical-diagnostics/103457/

[3] https://fas.org/sgp/crs/natsec/IF11150.pdf

[4] https://indatalabs.com/blog/ai-in-logistics-and-transportation

[5] https://www.ojp.gov/pdffiles1/nij/252038.pdf

[6] https://www.plantautomation-technology.com/articles/the-future-of-artificial-intelligence-in-manufacturing-industries

[7] https://royalsociety.org/-/media/policy/projects/ai-and-society/AI-revolution-in-science.pdf?la=en-GB&hash=5240F21B56364A00053538A0BC29FF5F

[8] https://www.idc.com/getdoc.jsp?containerId=prUS46757920

[9] Gustavo Gutierrez, A Theology of Liberation: History, Politics, Salvation, trans. Sr. Caridad Inda and John Eagleson (Maryknoll, NY: Orbis Books, 1986), 10-13.

[10] Ignacio Ellacuría, “Utopía y Profetismo,” Revista Lationamericana de Teología 17 (1989): 170.

[11] Jon Sobrino, “The Crucified People and the Civilization of Poverty: Ignacio Ellacuría’s ‘Taking Hold of Reality,’” in No Salvation Outside the Poor: Prophetic-Utopian Essays, trans. Margaret Wilde (Maryknoll, NY: Orbis, 2008), 9.

[12] Ellacuría, 170.

How Big Companies can be Hypocrites about Diversity

Can we trust that big companies are telling the truth, or are they being hypocrites? We can say that the human race is somehow evolving and leaving discriminatory practices behind. Or at least some are trying to. And this reflects on the market. More and more, companies around the world are being called out on big problems involving racism, social injustice, gender inequality, and even false advertising. But how can we know which changes are real and which are fake? From Ford Motors to Facebook, many companies talk the talk but do not walk the walk.

The rise of Black Lives Matter protests is exposing society’s crooked and oppressive ways, bringing discussions about systemic and structural racism out in the open. It’s a problem that can’t be fixed with empty promises and window dressing. Solving deep problems isn’t easy; it is an “all hands on deck” type of situation. But it’s no longer an option for companies around the world to ignore these issues. That’s where the hypocrisy comes in.

Facebook, Amazon, Ford Motor Company, Spotify, and Google are a few examples of big companies that took a stand against racial inequality on their social media. Most of them also donated money to help the cause. They publicly acknowledged that a change has to be made. It is a start. But it means nothing if this change doesn’t happen inside the company itself.

Today I intend to expose a little bit about Facebook’s and Amazon’s diversity policies and actions. You can draw your own conclusions.

“We stand against racism and in support of the Black community and all those fighting for equality and justice every single day.” – Facebook

Mark Zuckerberg wrote on his personal Facebook page: “To help in this fight, I know Facebook needs to do more to support equality and safety.” 

On Facebook’s business page, the company claims some actions it is taking to fight inequalities. But they mostly revolve around funding. Of course money is important, but changes to the company’s own structure are ignored. Facebook also promised to build a more inclusive workforce by 2023, aiming for 50% of the workforce to come from underrepresented communities and working to double the number of Black and Latino employees in the same timeframe.

But in reality, in the most recent FB Diversity Report, White people hold 41% of all roles, followed by Asians with 44%, Hispanics with 6.3%, Black people with 3.9%, and Native Americans with 0.4%. And even though it may seem that Asians benefit in this current situation, White people hold 63% of leadership roles at Facebook, a reduction of only 10 percentage points since 2014. Well, can you see the difference between the promises and the ACTUAL reality?

Another problem FB employees talk about is leadership opportunities. Even though the company started hiring more people of color, it still doesn’t give them the opportunity to grow and occupy more important roles. Former Facebook employees filed a complaint with the Equal Employment Opportunity Commission trying to bring justice to the community. Read more about this case here.

Another big company: Amazon.

Facial recognition technology and police. Hypocrisy or not?

Amazon is also investing in this type of propaganda, creating a “Diversity and Inclusion” page on its website. It also posted tweets about police abuse and the brutal treatment Black Americans are forced to live with. What Amazon didn’t expect is that it would backfire.

Amazon built and sold technology that supports police abuse of the Black population. In a 2018 study of Amazon’s Rekognition technology, the American Civil Liberties Union (ACLU) found people of color were falsely matched at a high rate. Matt Cagle, an attorney for the ACLU of Northern California, called Amazon’s support for racial justice “utterly hypocritical.” Only in June of 2020 did Amazon halt selling this technology to the police for one year. And in May of 2021, it extended the pause until further notice.

The ACLU admits that Amazon’s halt on selling this technology is a start. But the US government has to “end its use by law enforcement entirely, regardless which company is selling it.” In previous posts, AI Theology talked about bias in facial recognition and algorithmic injustice.

What about Amazon’s workforce?

Another problem Amazon faces is its workforce. At first sight, White people occupy only 32% of the entire workforce. But that means little since the best-paid jobs belong to them. Corporate employees are 47% White, 34% Asian, 7% Black, 7% Latino, 3% Multiracial, and 0.5% Native American. The numbers drop drastically among senior leaders: 70% White, 20% Asian, 3.8% Black, 3.9% Latino, 1.4% Multiracial, and 0.2% Native American. You can find this data at this link.
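To make the leadership gap concrete, here is a small illustrative Python sketch using the percentages reported above. It computes each group's leadership-to-workforce ratio; a value well below 1 signals underrepresentation in senior roles. This is back-of-the-envelope arithmetic on the quoted figures, not an analysis of Amazon's raw data.

```python
# Illustrative only: percentages as quoted above from Amazon's workforce data.
corporate = {"White": 47.0, "Asian": 34.0, "Black": 7.0, "Latino": 7.0,
             "Multiracial": 3.0, "Native American": 0.5}
senior = {"White": 70.0, "Asian": 20.0, "Black": 3.8, "Latino": 3.9,
          "Multiracial": 1.4, "Native American": 0.2}

for group, share in corporate.items():
    # Ratio below 1.0 means the group is underrepresented in senior leadership.
    ratio = senior[group] / share
    print(f"{group}: leadership/corporate ratio = {ratio:.2f}")
```

Run this way, every group except White employees lands below 1, which is exactly the pattern the paragraph below describes.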

What these numbers show us is that minorities are underrepresented in Amazon’s leadership ranks, especially in the best-paid and most influential roles. We need to be alert when big companies say their roles are equally distributed. Sometimes the hypocrisy is there: the roles may be equal, but the pay isn’t.

What can you do about these big companies’ actions?

So if the companies aren’t practicing what they preach, how can we change that?

Numbers show that public pressure can spark change. We should learn not only to applaud well-crafted statements but to demand concrete actions, exposing hypocrisy. We need to call on large companies to address the structural racism that denies opportunities to capable and innovative people of color.

Consultant Monica Hawkins believes that executives struggle to raise diversity in senior management mostly because they don’t understand minorities. As she mentioned to Reuters, leaders need to expand their social and business circles, since referrals are a key source of important hires.

Another step companies could consider is, instead of only making generic affirmations, putting out campaigns that recognize their own flaws and challenges and explain what they are doing to change that reality. This type of action can not only improve a company’s reputation but also press other companies to change as well.

It’s also important that companies keep publishing their workforce diversity numbers. That way, we can keep track of changes and see whether they are actually working to improve from the inside.

In other words, does the company talk openly about inequalities? That’s nice. Does it make donations to help social justice organizations? Great. But it’s not enough, not anymore. Inequalities don’t exist just because of financial problems. For companies to thrive and survive in the future, they need to start creating an effective plan for changing their own reality.

AI for Good in the Majority World: Data Science Nigeria

Data Science Nigeria has an ambitious goal: to train 1 million Nigerian data scientists by the end of the decade. Yet, it does not end there: the non-profit aims to make Africa’s most populous nation a leading player in the growing global AI industry. Hence, DSN is a shining example of the growing trend of AI for good in the majority world.

AI holds great potential to solve intractable socioeconomic problems. It is not a silver-bullet solution, but a great enabler to speed up, optimize, and greatly improve decision making. Hence, it is not surprising to see the burgeoning AI for good trend emerging in the majority world. Yet, what makes DSN stand apart is that it goes a step further. It seeks not only to solve social problems but also to create economic opportunity that would not exist otherwise.

It is this abundance mentality that will best align AI with the flourishing of life.

Re-framing Who Are AI’s Customers

I learned about DSN while attending Pew Research’s recent webinar on AI ethics. One of its panelists was Dr. Uyi Stewart, DSN board member and IBM distinguished engineer, whose perspective stood out. While others discussed AI ethics in abstract terms, he proposed that AI should be about solving problems for 75% of the world population. That is, AI is not limited to solving complex business problems for the world’s largest corporations. Instead, it can and should be part of the daily life of those living in remote villages and cramped urban centers in the Southern Hemisphere.

Photo by Nqobile Vundla on Unsplash

How so? He went further to provide an example. The world’s poor today face life-and-death choices around the scarcity of resources. Farmers must contend with the fluctuations of a warming climate. Urban dwellers must make key decisions with very limited financial resources. Most of them already own a phone. Hence, he believes industry should develop decision-support solutions for these devices so their users can make better choices. These are not ways to optimize profit but can represent the difference between life and death for some.

Where most see a social problem, Dr. Stewart envisions a potent market opportunity.

From Scarcity to Abundance

Our economic system is mostly based on the concept of scarcity: the idea that resources are finite and therefore must be allocated efficiently. It is scarcity mentality that drives the market to increase prices for commodities even when they are abundant. Moreover, companies and governments may limit production of a product simply to simulate this effect and thereby achieve higher profit margins.

The digital economy has turned the concept of scarcity on its head. When knowledge is digitized and storage is cheap, we move from finite resources to limitless solutions. Even so, one must first optimize these solutions, which is why AI becomes crucial in the digital economy. The promise of AI for good in the majority world is unleashing this wealth of opportunity in places where physical resources are scarce. DSN is leading the way by empowering young Nigerians to become data scientists. With this knowledge, they can unlock hidden opportunities in the communities where they live.

By investing in the Nigerian youth, this organization is tapping into the majority world’s greatest resource. This is what AI for good is all about: technology for the flourishing of humanity in places of scarcity.

3 Concerning Ways Israel Used AI Warfare in Gaza

We now have evidence that Israel used AI warfare in the conflict with Hamas. In this post, we’ll cover briefly the three main ways Israel used warfare AI and the ethical concerns they raise. Shifting from facial recognition posts, we now turn to this controversial topic.

Semi-Autonomous Machine Gun Robot

Photos surfaced on the Internet showing a machine-gun-mounted robot patrolling the Gaza wall. The intent is to deter Hamas militants from crossing the border and to deal with civil unrest. Just to be clear, these robots are not fully autonomous: a remote human controller must still make the ultimate striking decision.

Yet, they are more autonomous than drones and other remote-controlled weapons. They can respond to fire or use non-lethal force if challenged by enemy forces. They are also able to maneuver independently around obstacles.

The Israel Defense Forces (IDF) seeks to replace soldier patrols with semi-autonomous technologies like this. From its side, the benefits are clear: less risk for soldiers, cheaper patrolling power, and more efficient control of the border. It is part of a broader strategy of creating a smart border wall.

Less clear is how these Jaguars (as they are called) will do a better job in distinguishing enemy combatants from civilians.

US military Robot similar to the one used in Gaza (from Wikipedia)

Anti-Artillery Systems

Hamas’s primary tactic to attack Israel is through short-range rockets – lots of them. Just in the last conflict, Hamas launched 4,000 rockets against Israeli targets. Destroying them mid-air was a crucial but extremely difficult and costly task.

For decades, IDF has improved its anti-ballistic capability in order to neutralize this threat. The most recent addition to this defense arsenal is using AI to better predict incoming missile trajectories. By collecting a wealth of data from actual launches, IDF can train better models. This allows them to use anti-missile projectiles sparingly, leaving rockets headed for uninhabited areas alone.

This strategy not only improves accuracy, which now stands at 90%, but also saves the IDF money. At $50K a pop, anti-missile projectiles must be used wisely. AI warfare is helping Israel save on resources and hopefully some lives as well.
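To see why model-guided selectivity matters financially, here is a rough illustrative Python sketch. The $50K unit cost and the 4,000-rocket volley come from the figures above; the share of rockets predicted to land in uninhabited areas is an assumed number for illustration, not one reported anywhere.

```python
# Back-of-the-envelope sketch. Unit cost and rocket count come from the post;
# the 25% "predicted harmless" share is an ASSUMPTION for illustration only.
COST_PER_INTERCEPTOR = 50_000   # USD per anti-missile projectile, per the post
rockets = 4_000                 # rockets launched in the last conflict, per the post
harmless_share = 0.25           # ASSUMED fraction predicted to hit open ground

intercept_all = rockets * COST_PER_INTERCEPTOR
intercept_selective = int(rockets * (1 - harmless_share)) * COST_PER_INTERCEPTOR
savings = intercept_all - intercept_selective

print(f"Intercept everything:   ${intercept_all:,}")
print(f"Selective interception: ${intercept_selective:,}")
print(f"Savings:                ${savings:,}")
```

Even with a modest assumed share of harmless trajectories, skipping those interceptions frees tens of millions of dollars per conflict, which is the economic logic behind the prediction models.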

Target Intelligence

The wealth of data was useful in other areas as well. IDF used intelligence to improve its targeted strikes in Gaza. Using 3-D models of the Gaza territory, they could focus on hitting weapons depots and missile bases. Furthermore, AI technology was employed for other facets of warfare. For example, military strategists used AI to ascertain the best routes for ground force invasions.

This was only possible because of the rich data coming from satellite imaging, surveillance cameras, and intercepted communications. As this plethora of information flowed in, pattern-finding algorithms were essential in translating it into actionable intelligence.

Employing AI technology clearly gave Israel a considerable military advantage. The casualty numbers on each side speak for themselves: 253 in Palestine versus 12 on the Israeli side. AI warfare was a winner for Israel.

With that said, wars are no longer won on the battlefield alone. Can AI do more than give one side an advantage? Could it actually diminish the human cost of war?

Photo by Clay Banks on Unsplash

Ethical Concerns with AI warfare

As I was writing this blog, I reached out to Christian ethicist and author of Robotic Persons, Dr. Josh Smith, for his reaction. He sent me the following statement (edited for length):

“The greatest concern I have is that these defensive systems, which most AI weapons systems are, is that they become a necessary evil. Because every nation is a ‘threat’ we must have weapon systems to defend our liberty, and so on. The economic cost is high. Many children in the world die from a lack of clean water and pneumonia. Yet, we invest billions into AI for the sake of security.”

Dr. Josh Smith

I could not agree more. As the IDF case illustrates, AI can do a lot to give a military advantage in a conflict. AI warfare is not just about killer robots but encompasses an ecosystem of applications that can improve effectiveness and efficiency. Yet, is that really the best use of AI?

Finally, as we have stated before, AI is best when employed for the flourishing of life. Can that happen in warfare? The jury is still out, but it is hard to reconcile the flourishing of life with an activity focused on death and destruction.

Making a Difference: Facial Recognition Regulation Efforts in the US

As our review of Coded Bias demonstrates, concerns over facial recognition are mounting. In this blog, I’ll outline current efforts at facial recognition regulation while also pointing you to resources. While this post focuses on the United States, the topic has global relevance. If you know of efforts in your region, drop us a note. Informed and engaged citizens are the best defense against FR abuse.

National Level Efforts to Curb Public Use

Bipartisan consensus is emerging on the need to curb big tech power. However, there are many differences in how to address it. The most relevant piece of legislation moving through Congress is the Facial Recognition and Biometric Technology Moratorium Act. If approved, this bill would:

  • Ban the use of facial recognition technology by federal entities, which can only be lifted with an act of Congress
  • Condition federal grant funding to state and local entities on moratoria on the use of facial recognition and biometric technology
  • Stop the use of federal dollars for biometric surveillance systems
  • Bar biometric data obtained in violation of the Act from use in any judicial proceedings
  • Empower individuals whose biometric data is used in violation of the Act to sue, and allow for enforcement by state Attorneys General

Beyond the issues outlined above, it would allow states and localities to enact their own laws regarding the use of facial recognition and biometric technologies.

Photo by Tingey Injury Law Firm on Unsplash

What about Private Use?

A glaring omission from the bill above is that it does nothing to curb private companies’ use of facial recognition. While stopping police and judicial use of FR is a step in the right direction, the biggest users of this technology are not in government.

On that front, other bills have emerged but have not gone far. One of them is the National Biometric Information Privacy Act of 2020, cosponsored by Senators Jeff Merkley (D-Ore.) and Bernie Sanders (I-Vt.). This law would make it illegal for corporations to use facial recognition to identify people without their consent. Moreover, they would have to prove they are using it for a “valid business purpose”. It is modeled after a recent Illinois law that spurred lawsuits against companies like Facebook and Clearview.

Another promising effort is Republican Senator Jerry Moran’s Consumer Data Privacy and Security Act of 2021. This bill seeks to establish comprehensive regulation of data privacy that would include facial recognition. In short, it would create a federal standard for how companies use personal data and give consumers more control over what is done with their data. On the other side of the aisle, Senator Gillibrand introduced a bill that would create a federal agency to regulate data use in the nation.

Cities have also entered the battle to regulate facial recognition. In 2020, the city of Portland passed a sweeping ban on FR that covers not only public use but also business use in places of “public accommodation”. On the other hand, the state of Washington passed a landmark law that curbs but still allows the use of the technology. Not surprisingly, that effort gained support from Seattle’s corporate giants Amazon and Microsoft. Whether that’s a good or bad sign, I’ll let you decide.

What can you do?

What is the best approach? There is no consensus on how to tackle this problem, but leaving it for the market to decide is certainly not a viable option. While consent is key, there are differences over which uses of FR are legitimate. For some, an outright ban is the best option. Others believe it should be highly regulated but still applied in areas like policing. In fact, a majority of Americans favor law enforcement’s use of FR.

The first step is informed engagement. I encourage you to reach out to your Senator and Representative and express your concern over facial recognition. These days, even getting FR regulation on legislators’ radar is a step in the right direction.

Look out for local efforts in your area that address this issue. If none are present, maybe it is your opportunity to be a catalyst for action. Though least covered by the media, local laws are often the ones that most impact our lives. Is your police department using FR? If so, what safeguards does it have to avoid racial and gender bias?

Finally, discuss the issue with friends, family, and your social network. One good step is sharing this blog (shameless plug 🙂 ) with others on social media. You can do that using the buttons below. Regardless of where you stand on this issue, it is imperative that we widen the conversation on facial recognition regulation.