Warfare AI in Ukraine: How Algorithms are Changing Combat

There is a war in Europe, again. Two weeks into the invasion, the world is watching in disbelief as Russian forces push into Ukraine. While the conflict remains confined to the two nations, the proximity to NATO members and the unpredictability of the Russian autocrat have set the world on edge. It is too soon to speak of WWIII, but the prospect is now closer than it has ever been.

No doubt this is the biggest story of the moment, with implications that span multiple levels. In this piece, I want to focus on how it is shaping the conversation on AI ethics. That encompasses not only the potential for AI weapons but also the role of algorithms in cyber warfare and in addressing the refugee crisis that results from the conflict. In a previous blog, I outlined the first documented uses of AI in an armed conflict. This instance requires a more extensive treatment.

Andrew Ng Rethinks AI Warfare

In the AI field, few command as much respect as Andrew Ng. Former Chief Scientist of Baidu and co-founder of Google Brain, he has recently shifted his focus to education and helping startups lead innovation in AI. He prefaces his most recent newsletter this way:

I’ve often thought about the role of AI in military applications, but I haven’t spoken much about it because I don’t want to contribute to the proliferation of AI arms. Many people in AI believe that we shouldn’t have anything to do with military use cases, and I sympathize with that idea. War is horrific, and perhaps the AI community should just avoid it. Nonetheless, I believe it’s time to wrestle with hard, ugly questions about the role of AI in warfare, recognizing that sometimes there are no good options.

Andrew Ng

He goes on to explain how, in a globally connected world where much code is open-source, there is no way to ensure these technologies will not fall into the wrong hands. Andrew Ng still defends recent UN guidance affirming that a human decision-maker should be involved in any warfare system. The thought leader likens it to the treatment of atomic weapons, where a global body audits and verifies national commitments. In doing so, he opens the door to the legitimate development of such weapons as long as there are appropriate controls.

Andrew’s most salient point is that this is no longer a conversation we can avoid. It needs to happen now. It needs to include military experts, political leaders, and scientists. Moreover, it should include a diverse group of members from civil society as civilians are still the ones who suffer the most in these armed conflicts.

Are we ready to open this Pandora's box? This war may prove that it is already open.

AI Uses in the Ukraine War

While much remains unclear, reports are starting to surface of AI uses on both sides of the conflict. Ukraine is using semi-autonomous Turkish-made drones that can drop laser-guided bombs. A human operator is still required to pull the trigger, but the drone can take off, fly, and land on its own. Russia is opting for kamikaze drones that loiter over a target before crashing into it. It is a terrifying sight, straight out of a sci-fi movie: a predator machine that hunts down and strikes its enemies with cold precision.

Yet AI uses are not limited to the battlefield. Twenty-first-century wars are no longer fought with guns and ammunition alone; they now extend to bits and bytes. Russian troll farms are creating fake faces for propagandist profiles. They understand that any military conflict in our age is accompanied by an information war to control the narrative, and bots and other automated posting mechanisms come in handy in such a campaign.

Furthermore, a parallel and very destructive cyber war is happening alongside the war in the streets. From the very beginning of the invasion, reports surfaced of Russian cyberattacks on Ukraine's infrastructure. Multi-national cyber defense teams have also formed to counteract and stop such attempts. While cyberattacks do not always entail AI techniques, the effort to scale them, or to stop them, most often does. This ensures AI will be a vital part of the current conflict.

Conclusion

While I wish I could guarantee a war-free world for my children, that is not our reality. The prospect of war will continue, and therefore it must be part of our discussions on AI and ethics. This becomes even more relevant as contemporary wars extend into the digital sphere in unprecedented ways. This is uncharted territory in some ways. In others, it is not, as technology has always been at the center of armed conflict.

As I write this, I pray and hope for a swift resolution to the conflict in Ukraine. Standing with the Ukrainian people and against unilateral aggression, I hope that a mobilized global community will be enough to stop a dictator. I suspect it will not. In the words of the wise prophet Sting, we all hope the Russians love their children too.
