Warfare AI in Ukraine: How Algorithms are Changing Combat

There is a war in Europe, again. Two weeks in, the world is watching in disbelief as Russian forces invade Ukraine. While the conflict is still confined to the two nations, the proximity to NATO members and the unpredictability of the Russian autocrat have unsettled the world. It is too soon to speak of WWIII, but the prospect is closer than it has ever been.

No doubt this is the biggest story of the moment, with implications that span multiple levels. In this piece, I want to focus on how it is impacting the conversation on AI ethics. This encompasses not only the potential for AI weapons but also the involvement of algorithms in cyber warfare and in addressing the refugee crisis that results from it. In a previous blog, I outlined the first documented uses of AI in an armed conflict. This conflict requires a more extensive treatment.

Andrew Ng Rethinks AI Warfare

In the AI field, few command as much respect as Andrew Ng. Former Chief Scientist of Baidu and co-founder of Google Brain, he has recently shifted his focus to education and helping startups lead innovation in AI. He prefaces his most recent newsletter this way:

I’ve often thought about the role of AI in military applications, but I haven’t spoken much about it because I don’t want to contribute to the proliferation of AI arms. Many people in AI believe that we shouldn’t have anything to do with military use cases, and I sympathize with that idea. War is horrific, and perhaps the AI community should just avoid it. Nonetheless, I believe it’s time to wrestle with hard, ugly questions about the role of AI in warfare, recognizing that sometimes there are no good options.

Andrew Ng

He goes on to explain that in a globally connected world where much code is open-source, there is no way to ensure these technologies will not fall into the wrong hands. Andrew Ng still defends recent UN guidance affirming that a human decision-maker should be involved in any warfare system. He likens it to the treatment of atomic weapons, where a global body audits and verifies national commitments. In doing so, he opens the door for the legitimate development of such weapons as long as there are appropriate controls.


Andrew’s most salient point is that this is no longer a conversation we can avoid. It needs to happen now. It needs to include military experts, political leaders, and scientists. Moreover, it should include a diverse group of members from civil society as civilians are still the ones who suffer the most in these armed conflicts.

Are we ready to open this Pandora's box? This war may prove that it has already been opened.

AI Uses in the War in Ukraine

While much is still unclear, reports are starting to surface of AI uses on both sides of the conflict. Ukraine is using semi-autonomous Turkish-made drones that can drop laser-guided bombs. A human operator is still required to pull the trigger, but the drone can take off, fly, and land on its own. Russia is opting for kamikaze drones that literally crash into their targets after finding and circling them for a bit. This is a terrifying sight straight out of sci-fi movies: a predator machine that hunts down and strikes its enemies with cold precision.

Yet AI uses are not limited to the battlefield. 21st-century wars are no longer fought with guns and ammunition alone but now extend to bits and bytes. Russian troll farms are creating fake faces for propagandist profiles. They understand that any military conflict in our age is accompanied by an information war to control the narrative. Hence, bots and other automated posting mechanisms come in handy in a situation like this.


Furthermore, a parallel and very destructive cyber war is happening alongside the war in the streets. From the very beginning of the invasion, reports surfaced of Russian cyberattacks on Ukraine's infrastructure. Multi-national cyber defense teams have also formed to counteract and stop such attempts. While cyberattacks do not always entail AI techniques, the pursuit to stop or scale them most often does. This ensures AI will be a vital part of the current conflict.

Conclusion

While I would hope we could guarantee a war-free world for my children, this is not a reality. The prospect of war will continue and therefore it must be part of our discussions on AI and ethics. This becomes even more relevant as contemporary wars are extending into the digital sphere in unprecedented ways. This is uncharted territory in some ways. In others, it is not, as technology has always been at the center of armed conflict.

As I write this, I pray and hope for a swift resolution to the conflict in Ukraine. Standing with the Ukrainian people and against unilateral aggression, I hope that a mobilized global community will be enough to stop a dictator. I suspect it will not. In the words of the wise prophet Sting, we all hope the Russians love their children too.

Making a Difference: Facial Recognition Regulation Efforts in the US

As our review of Coded Bias demonstrates, concerns over facial recognition are mounting. In this blog, I'll outline current efforts at facial recognition regulation while also pointing you to resources. While this post focuses on the United States, the topic has global relevance. If you know of efforts in your region, drop us a note. Informed and engaged citizens are the best weapons to curb FR abuse.

National Level Efforts to Curb Public Use

Bipartisan consensus is emerging on the need to curb big tech power. However, there are many differences in how to address it. The most relevant piece of legislation moving through Congress is the Facial Recognition and Biometric Technology Moratorium Act. If approved, this bill would:

  • Ban the use of facial recognition technology by federal entities, which can only be lifted with an act of Congress
  • Condition federal grant funding to state and local entities on moratoria on the use of facial recognition and biometric technology
  • Stop the use of federal dollars for biometric surveillance systems
  • Bar the use of biometric data obtained in violation of the Act in any judicial proceeding
  • Empower individuals whose biometric data is used in violation of the Act and allow for enforcement by state Attorneys General

Beyond the provisions outlined above, the bill would still allow states and localities to enact their own laws regarding the use of facial recognition and biometric technologies.


What about Private Use?

A glaring omission from the bill above is that it does nothing to curb private companies' use of facial recognition. While stopping police and judicial use of FR is a step in the right direction, the biggest users of this technology are not in government.

On that front, other bills have emerged but have not gone far. One of them is the National Biometric Information Privacy Act of 2020, cosponsored by Senators Jeff Merkley (D-Ore.) and Bernie Sanders (I-Vt.). This law would make it illegal for corporations to use facial recognition to identify people without their consent. Moreover, they would have to prove they are using it for a “valid business purpose”. It is modeled after a recent Illinois law that spurred lawsuits against companies like Facebook and Clearview.

Another promising effort is Republican Senator Jerry Moran's Consumer Data Privacy and Security Act of 2021. This bill seeks to establish comprehensive regulation of data privacy that would include facial recognition. In short, the bill would create a federal standard for how companies use personal data and give consumers more control over what is done with their data. On the other side of the aisle, Senator Gillibrand introduced a bill that would create a federal agency to regulate data use in the nation.

Cities have also entered the battle to regulate facial recognition. In 2020, the city of Portland passed a sweeping ban on FR that includes not only public use but also business use in places of “public accommodation”. On the other hand, the state of Washington passed a landmark law that curbs but still allows for the use of the technology. Not surprisingly, the efforts gained support from Seattle’s corporate giants Amazon and Microsoft. Whether that’s a good or bad sign, I’ll let you decide.

What can you do?

What is the best approach? There is no consensus on how to tackle this problem, but leaving it for the market to decide is certainly not a viable option. While consent is key, there are differences over which uses of FR are legitimate. For some, an outright ban is the best option. Others believe it should be highly regulated but still applied to areas like policing. In fact, a majority of Americans favor law enforcement's use of FR.

The first step is informed engagement. I encourage you to reach out to your Senator and Representative and express your concern over facial recognition. These days, even getting FR regulation on legislators' radar is a step in the right direction.

Look out for local efforts in your area that are addressing this issue. If none are present, maybe it is your opportunity to be a catalyst for action. While least covered by the media, local laws are often the ones that most impact our lives. Is your police department using FR? If so, what safeguards does it have to avoid racial and gender bias?

Finally, discuss the issue with friends, family, and your social network. One good step is sharing this blog (shameless plug 🙂 ) with others on social media. You can do that using the buttons below. Regardless of where you stand on this issue, it is imperative we widen the conversation on facial recognition regulation.

How Coded Bias Makes a Powerful Case for Algorithmic Justice

What do you do when your computer can't recognize your face? In a previous blog, we explored the potential applications of emotional AI. At the heart of this technology is the ability to recognize faces. Facial recognition is gaining widespread attention for its hidden dangers. This short review of Coded Bias summarizes the story of female researchers who opened the black box of major applications that use FR. What they found is a warning to all of us, making Coded Bias a bold call for algorithmic justice.



Coded Bias Short Review: Exposing the Inaccuracies of Facial Recognition

The secret is out: FR algorithms are a lot better at recognizing white male faces than those of any other group. The difference is not trivial. Joy Buolamwini, MIT researcher and the film's main character, found that dark-skinned women were misclassified up to 35% of the time compared to less than 1% for white male faces! Error rates of this level can have life-altering consequences when used in policing, judicial decisions, or surveillance applications.


It all started when Joy was looking for facial recognition software to recognize her face for an art project. She had to put on a white mask in order to be detected by the camera. This initial experience led her down a new path of research: if she was experiencing this problem, who else was, and how was it impacting others who looked like her? Eventually, she stumbled upon Cathy O'Neil's Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy, discovering the world of algorithmic activism already underway.

The documentary weaves in multiple cases where FR misclassification is having a devastating impact on people's lives. Unfortunately, the burden falls mostly on the poor and people of color. From an apartment complex in Brooklyn to the streets of London to a school district in Houston, local activists are mobilizing political energy to expose the downsides of FR. In doing so, the Netflix documentary Coded Bias not only shows the problem but also sheds light on the growing movement that arose to correct it. In that, we can find hope.

If this wasn't clear before, here it is: watch the documentary Coded Bias. It is worth your time, multiple times over.

The Call for Algorithmic Justice

The fight for equality in the 21st century will be centered on algorithmic justice. What does that mean? Algorithms are fast becoming embedded in growing areas of decision-making. From movie recommendations to hiring, cute apps to judicial decisions, self-driving cars to who gets to rent a house, algorithms are influencing and dictating decisions.

Yet, they are only as good as the data used to train them. If that data contains present inequities or is biased towards ruling majorities, they will inevitably and disproportionately impact minorities. Hence, the fight for algorithmic justice starts with the regulation and monitoring of their results. The current lack of transparency in the process is no longer acceptable. While some corporations may not have intended to discriminate, their neglect of oversight makes them culpable.
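To make this concrete, here is a toy sketch of how a model fit mostly to a majority group's data can produce far higher error rates for an underrepresented group. All numbers, groups, and the simple threshold "model" are invented for illustration; real systems and real disparities are more complex.

```python
# Toy illustration of training-data bias: a simple threshold classifier
# "trained" mostly on one group's examples fails far more often on an
# underrepresented group. All numbers are invented for illustration.

# (feature value, true label) pairs, e.g. a single face-matching score.
group_a = [(0.8, "match"), (0.8, "match"),
           (0.2, "no match"), (0.2, "no match")] * 10   # 40 examples
group_b = [(0.45, "match"), (0.4, "no match")]          # only 2 examples

training = group_a + group_b

# "Training": place the decision threshold midway between the average
# score of each class -- a midpoint dominated by group A's patterns.
matches = [x for x, y in training if y == "match"]
others = [x for x, y in training if y == "no match"]
threshold = (sum(matches) / len(matches) + sum(others) / len(others)) / 2

def error_rate(examples):
    """Fraction of examples the learned threshold misclassifies."""
    wrong = sum(1 for x, y in examples
                if ("match" if x > threshold else "no match") != y)
    return wrong / len(examples)

print(error_rate(group_a), error_rate(group_b))  # group B fares much worse
```

Note that the classifier's mechanics are the same for everyone; the disparity comes entirely from whose examples dominated the training data.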

Because of its ubiquitous impact, the struggle for algorithmic justice is not just the domain of data scientists and lawmakers. Instead, this is a fight that belongs to all of us. In the next blog, I'll go over recent efforts to regulate facial recognition. This marks the next step in Coded Bias's call for algorithmic justice.

Stay tuned.

4 Surprising Ways Emotional AI is Making Life Better

It's been a long night and you have driven for over 12 hours. The exhaustion is such that you are starting to black out. As your eyes close and your head drops, the car slows down, moves to the shoulder, and stops. You wake up and realize your car saved your life. This is just one of many examples of how emotional AI can do good.

It doesn't take much to see the ethical challenges of computer emotion recognition. Worst-case scenarios of control and abuse quickly come to mind. In this blog, I will explore the potential of emotional AI for human flourishing through 4 examples. We need to examine these technologies with a holistic view that weighs their benefits against their risks. Hence, here are 4 examples of how affective computing could make life better.

1. Alert distracted drivers

Detecting signs of fatigue or alcohol intoxication early enough can be the difference between life and death. This applies not only to the driver but also to passengers and occupants of nearby vehicles. Emotional AI can detect bleary eyes, excessive blinking, and other facial signs that the driver is losing focus. Once this mental state is detected early, the system can intervene through many means.

For example, it could alert the driver that they are too tired to drive. It could lower the windows or turn on loud music to jolt the driver into focus. More extreme interventions would include shocking the driver's hands through the steering wheel, or slowing and stopping the car in a safe area.

As an additional benefit, this technology could also detect other volatile mental states such as anger, mania, and euphoria. This could lead to interventions like changing temperature, music, or even locking the car to keep the driver inside. In effect, this would not only reduce car accidents but could also diminish episodes of road rage.
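The escalating responses described above could be sketched as a simple decision rule. Everything below is hypothetical: the state names, severity thresholds, and responses are invented for illustration, and a real in-cabin system would derive the detected state from a trained vision model rather than take it as an input string.

```python
# Hypothetical sketch of escalating in-cabin interventions. State names,
# severity thresholds, and responses are invented; a real system would
# get the detected state from a trained driver-monitoring model.

def choose_intervention(state: str, severity: float) -> str:
    """Map a detected driver state (and how severe it is) to a response."""
    if state == "drowsy":
        if severity < 0.3:
            return "audio alert: suggest taking a break"
        if severity < 0.7:
            return "lower windows and raise music volume"
        return "slow down and stop in a safe area"
    if state in ("anger", "mania", "euphoria"):
        return "adjust temperature and play calming music"
    return "no action"

print(choose_intervention("drowsy", 0.9))   # most severe response
```

The point of the tiered thresholds is that a system should escalate gradually: a mild nudge first, taking control of the vehicle only as a last resort.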

2. Identify Depression in Patients

As those who suffer from depression would attest, the symptoms are not always clear to patients themselves. In fact, some of us can go years suffering the debilitating impacts of mental illness and think it is just part of life. This is especially true for those who live alone and therefore do not have the feedback of another close person to rely on.

Emotional AI trained to detect signs of depression in the face could therefore play an important role in moving unaware patients toward awareness. While protecting privacy in this case is paramount, adding this capability to smartphones or AI companions could greatly help improve mental health.

Our faces let out a lot more than we realize. In this case, they may be alerting those around us that we are suffering in silence.

3. Detect emotional stress in workplaces

Workplaces can be toxic environments. In such cases, the fear of retaliation may keep workers from being honest with their peers or supervisors. A narrow focus on production and performance can easily make employees feel like machines. Emotional AI systems embedded through cameras and computer screens could detect a generalized increase in stress by collecting facial data from multiple employees. This in turn could be sent over to responsible leaders or regulators for appropriate intervention.

Is this too invasive? Well, it depends on how it is implemented. Many tracking systems are already present in workplaces, where employee activity on computers and phones is monitored 24/7. Certainly, this could only work in places where there is trust, transparency, and consent. It also depends on who has access to the data. An employee may not be comfortable with their bosses having this data but may agree to cede it to an independent group of peers.

4. Help autistic children socialize in schools

The last example shows how emotional AI can play a role in education. Autistic children process and respond to social cues differently. In this case, emotional AI in devices or a robot could gently teach the child to both interpret and respond to interactions with less anxiety.

This is not an attempt to put therapists or special-needs workers out of a job. It is instead an important enhancement to their essential work. The systems can be there to augment, expand, and inform their work with each individual child. They can also provide a consistency that humans often fail to provide. This is especially important for kids who tend to thrive in structured environments. As in the cases above, privacy and consent must be at the forefront.

These are just a few examples of the promise of emotional AI. As industries start discovering and perfecting emotional AI technology, more use cases will emerge.

How does reading these examples make you feel? Do they sound promising or threatening? What other examples can you think of?

A Beginner’s Guide to Emotional AI, Its Challenges and Opportunities

You walk into your living room. Alexa dims the lights, lowers the temperature, and says: “You look really sad today. Would you like me to play Adele for you?” This could be a reality in a few years. Are we prepared? This beginner's guide to emotional AI will introduce the technology, its applications, and its ethical challenges.

We will explore both the opportunities and dangers of this emerging AI application. It is part of the broader discipline of affective computing, which uses different inputs from the human body (e.g., heartbeat, sweat, facial expression, speech, eye movement) to interpret, emulate, and predict emotion. For this piece, we'll focus on the use of facial expressions to infer emotion.

According to Gartner, by 2022, 10% of smartphones will have affective computing capabilities. The latest Apple phones can already detect your identity through your face. The next step is detecting your mental state through that front camera. Estimates put the emotional AI market at around $36 billion in 5 years. Human emotion detection is no longer a sci-fi pipe dream but a reality poised to transform societies. Are we ready for it?

How does Emotional AI work?

Our beginner's guide to emotional AI must start by explaining how it works. While this technology is relatively new, its foundation dates back to the mid-nineteenth century, resting primarily on the idea that humans display universal facial cues for their emotions. Charles Darwin was one of the first to put forth this idea. A century later, American psychologist Paul Ekman further elaborated on it through extensive field studies. Recently, scholars have challenged this universality, and there is now no consensus on its validity. AI entrepreneurs bet that we can find universal patterns. Their endeavors are testing this theory in real-time with machine learning.

The first step is “training” a computer to read emotions through a process of supervised learning. This entails feeding it pictures of people's faces along with labels that define each person's emotion. For example, one could feed in the picture of someone smiling with the label “happy.” For the learning process to be effective, thousands if not millions of these examples are needed.

The computer then uses machine learning algorithms to detect common patterns in the many examples of each emotion. This enables it to establish a general idea of what each emotion looks like on any face. It can therefore classify new cases, including faces it has never encountered before, into these emotions.
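This train-then-classify loop can be sketched in miniature. Here a toy nearest-centroid classifier stands in for a real deep-learning model, and the two-number "faces" (say, mouth curvature and eye openness) and their labels are invented purely for illustration:

```python
# Minimal sketch of supervised learning for emotion recognition, with a
# toy nearest-centroid classifier standing in for a real model. The
# two-number "faces" and labels below are invented for illustration.

# Step 1: labeled examples -- each face's features plus its emotion label.
training_data = [
    ([0.9, 0.8], "happy"),
    ([0.8, 0.7], "happy"),
    ([0.1, 0.2], "sad"),
    ([0.2, 0.1], "sad"),
]

# Step 2: "learn" each emotion's general pattern by averaging its examples.
centroids = {}
for label in {lbl for _, lbl in training_data}:
    examples = [feat for feat, lbl in training_data if lbl == label]
    centroids[label] = [sum(dim) / len(examples) for dim in zip(*examples)]

# Step 3: classify a face the model has never encountered before.
def predict_emotion(features):
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda lbl: sq_dist(features, centroids[lbl]))

print(predict_emotion([0.85, 0.75]))  # -> happy
```

Real systems replace the averaged centroids with deep networks trained on millions of images, but the shape of the process, labeled examples in, a general pattern out, is the same.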


Commercial and Public Applications

As you can imagine, there are manifold applications to this type of technology. For example, one of the greatest challenges in marketing is collecting accurate feedback from customers. Satisfaction surveys are few and far between and often inaccurate. Hence, companies could use emotional AI to capture instantaneous human reactions to an ad or experience not from a survey but from their facial reactions.

Affectiva, a leading company using this technology, already claims it can detect emotions from any face. It collected 10M expressions from 87 countries, hand-labeled by crowd workers in Cairo. With its recent merger with Smart Eye, the company is poised to become the leader in in-cabin driver mental-state recognition for the automotive industry. This could be a valuable safety feature, detecting when a driver is under the influence, sleepy, or in emotional distress.

More controversial applications include surveillance, as in the case of China's treatment of the Uighur population. Police departments could use it as a lie-detection device in interrogations. Governments could use it to track the general mood of the population by scanning faces in the public square. Finally, employers could use it as part of the interview process to measure the mental state of an applicant.

Ethical Challenges for Emotional AI

No beginner's guide to emotional AI is complete without considering the ethics of its impact. Kate Crawford, a USC research professor, has sounded the alarm on emotional AI. In her recently published book and through a number of articles, she makes the case for regulating this technology. Her primary argument is that using facial recognition to detect emotions is based on shaky science. That is, the overriding premise that human emotion can be universally categorized through a set of facial expressions is faulty. It minimizes a plethora of cultural factors, lending itself to dangerous bias.

This is not just conjecture: a recent University of Maryland study detected an inherent bias that tends to assign more negative emotions to black faces than to white faces. Crawford also notes that the machine learning process is questionable, as it is based on pictures of humans emulating emotions. The examples come from people who were told to make a type of facial expression rather than from captured real reactions. This can lead to an artificial standard of what a facial representation of emotion should look like rather than real emotional displays.

This is not limited to emotion detection. Instead, it is part of a broader pattern of error in facial recognition. In a 2018 paper, MIT researcher Joy Buolamwini analyzed disparities in the effectiveness of commercial facial recognition applications. She found that misclassification rates for dark-skinned women were up to 34% compared to 0.8% for white males.


The Sanctity of the Human Face

The face is the window to the human soul. It is the part of the body most identified with an individual's unique identity. When we remember someone, it is their countenance that shows up in our mind. It is indeed the best indicator of our internal emotional state, which may often betray the very things we speak.

Until recently, interpreting the mystery of our faces was the job of humans and animals. What does it mean to now have machines that can intelligently decipher our inner states? This is certainly a new frontier in the human-AI interface, one we must tread carefully, if for no other reason than to respect the sanctity of the human soul. If left for commerce to decide, the process will most likely move faster than we as a society are comfortable with. That is where the calls for regulation are spot on.

Like every technology, the devil is in the details. It would be premature to outlaw the practice altogether, as the city of Portland has done recently. We should, however, limit and monitor its uses, especially in areas where the risk of bias can have a severe adverse impact on the individual. We must also ponder whether we want to live in a society where even our facial expressions are subject to monitoring.

Green Tech: How Scientists are Using AI to Fight Deforestation

In the previous blog, I talked about upcoming changes to US AI policy with a new administration. Part of that change is a renewed focus on harnessing this technology for sustainability. Here I will showcase an example of green tech – how machine learning models are helping researchers detect illegal logging and burning in the vast Amazon rainforest. This is an exciting development and one more example of how AI can work for good.

The problem

Imagine trying to patrol a dense rainforest nearly the size of the lower 48 states! It is, as the proverbial saying goes, finding a needle in a haystack. The only way to catch illegal activity is to narrow the surveillance area. Doing so gives you the best chance of using your limited law enforcement resources wisely. Yet, how can that be done?

How do illegal logging and burning happen in the Amazon? Are there any patterns that could help narrow the search? Fortunately, there are. A common trait is proximity to a road: in fact, 95% of these activities occur within 6 miles of a road or a river. They require equipment that must be transported through dense jungle, and for logging, lumber must be moved out so it can be traded. The only way to do that is either through waterways or dirt roads. Hence, tracking and locating illegal roads goes a long way toward homing in on areas of possible illegal activity.

While authorities had records of the government-built roads, no one knew the extent of the illegal network of roads in the Amazon. To attack the problem, enforcement agencies needed richer maps that could spot this unofficial web. Only then could they start to focus resources around these roads. Voilà, there you have it: green tech working to preserve rather than destroy the environment.

An Ingenious solution

To solve this problem, scientists from Imazon (the Amazon Institute of People and the Environment) went to work searching for ways to detect these roads. By carefully studying satellite imagery, they could manually trace these additional roads. In 2016, they completed this heroic but rather tedious initial work. The new estimate of the road network was 13 times the size of the official one! Now they had something to work with.

Once the initial tracing was complete, it became clear that updating it manually would be an impossible task. These roads could spring up overnight as loggers and ranchers worked to evade monitoring. That is when the team turned to computer vision to see if it could detect new roads. In supervised learning, one must first have a collection of data that shows the algorithm the actual targets, or labels (e.g., an algorithm to recognize cats must first be fed millions of YouTube videos of cats). The initial manual work thus became the training dataset that taught the algorithm how to detect these roads in satellite images.
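As a toy illustration (not Imazon's actual model), the hand-traced roads can be treated as per-pixel labels that "teach" a classifier what road pixels look like. The tiny grids and brightness values below are invented; real pipelines run convolutional networks over full satellite scenes:

```python
# Toy illustration of turning manual road traces into per-pixel training
# labels. Tiles and brightness values are invented for illustration.

tile = [            # training tile: a bright road runs down the middle
    [0.2, 0.9, 0.2],
    [0.2, 0.9, 0.2],
    [0.2, 0.9, 0.2],
]
road_mask = [       # the manual trace: 1 marks a hand-labeled road pixel
    [0, 1, 0],
    [0, 1, 0],
    [0, 1, 0],
]

# "Training": learn a brightness threshold separating road from forest.
pixels = [(tile[r][c], road_mask[r][c]) for r in range(3) for c in range(3)]
road_vals = [v for v, is_road in pixels if is_road]
other_vals = [v for v, is_road in pixels if not is_road]
threshold = (min(road_vals) + max(other_vals)) / 2

# "Inference": flag probable road pixels in a new, unlabeled tile.
new_tile = [
    [0.2, 0.2, 0.2],
    [0.8, 0.9, 0.8],
    [0.2, 0.2, 0.2],
]
detected = [[1 if px > threshold else 0 for px in row] for row in new_tile]
print(detected)  # a horizontal road is flagged
```

The key idea carries over directly: the tedious manual tracing only has to be done once, after which the learned model can scan new imagery automatically.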

The results were impressive. At first, the model achieved 70% accuracy, and with some additional processing on top, it increased to 90%. The research team presented their results at the latest meeting of the American Geophysical Union. They also plan to share their model with neighboring countries so they can use it for enforcement in areas of the Amazon outside Brazil.

Reflection

Algorithms can be effective allies in the fight to preserve the environment. As the example of Imazon shows, it takes some ingenuity, hard work, and planning to make that happen. While many discussions around AI quickly devolve into clichés of “machines replacing humans,” this example shows how AI can augment human problem-solving abilities. It took a person to connect the dots between the potential of AI and a particular problem. Indeed, the real future of AI may be in green tech.

In this blog and in our FB community, we seek to challenge, question, and re-imagine how technologies like AI can empower human flourishing. Yet this is not limited to humans but extends to the whole ecosystem we inhabit. If algorithms are to fulfill their promise, they must be relevant to sustainability.

How is your work making life more sustainable on this planet?