3 Effective Ways to Improve AI Ethics Discussions

Let’s face it: the quality of discourse in the blogosphere and on social media is dismal. What gets traffic is most often not the best representation of a topic but the most outrageous click-bait title. As a recent WSJ report suggests, content creators (myself and this portal included) face a constant temptation to trade well-crafted arguments for divisive pieces that emphasize controversy. When it comes to discussions of AI ethics, the situation is no different: outrageous claims abound, while the nuanced conversations that would actually improve AI ethics discussions are rare.

It is time to raise the level of discourse on AI’s impact. While I am encouraged to see this topic get the attention it is getting, I fear that the conversation is fraught with hyperbole and misinformation, which degrade rather than improve dialogue. Consequently, most pieces lead to hasty conclusions rather than the thoughtful dialogue the topic requires. In this blog, I put forth three ways to improve the quality of dialogue in this space. By keeping them in mind, you can differentiate what is worth your attention from what can be ignored.

Impact is not the same as Intent

The narrative of big business or government seeking to hurt the little guy is an attractive one. We are hard-wired to prefer simple explanations, and simple storylines of good versus evil fit the bill. Most often, they are a convenient front for our addiction to scapegoating: by locating all evil in one entity, we excuse ourselves from looking for it among ourselves. Most importantly, they obscure the reality that a lot of harm happens as the unintended consequence of well-intended efforts.

When it comes to AI bias, I am concerned that too many stories imply a definite villain without probing further to understand systemic dynamics. Consider this article from TNW, titled “Stop Calling it Bias: AI is Racist.” The click-bait title alone should give you reason to pause. Moreover, the author assigns human intent to complex systems without probing further into the causes. This type of hyperbolic rhetoric does more harm than good, assigning blame to one group while ignoring the technical complexities of the issue at hand.

Photo by Christina @ wocintechchat.com on Unsplash

By reading intent into every impact, these pieces miss the opportunity to ask broader questions, such as: What environmental factors amplified the harmful impact of this problem? How could other actors, such as consumers, users, businesses, and regulators, play a part in mitigating risks in the future? What technical limitations helped cause or expand the problem? These are a few of the questions that can elevate the discussion on AI’s impact. Above all, they help us get past the idea that behind every harmful impact lies a morally deficient actor.

Generalizations Do More Harm than Good

Ironically, generalization is precisely what lies at the root of the problem. What do I mean by that? AI algorithms err because they rely on generalizations of past data. When they see new cases, they tend to, as the adage goes, “jump to conclusions.” Many times this has harmless consequences, such as recommending Strawberry Shortcake in my Netflix queue; other times it can cause serious harm.
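To make this concrete, here is a minimal sketch of a model that learns a shortcut from past data and then “jumps to conclusions” on a new case. This is my own toy illustration, not anything from a real recommender; the feature names and the scikit-learn classifier are assumptions for demonstration only:

```python
# Toy illustration (hypothetical data): a model generalizes from a
# pattern that held in past data and misjudges a new case.
from sklearn.tree import DecisionTreeClassifier

# Each row: [hours_watched_per_day, is_kids_profile]
# Label: 1 = enjoyed the kids' show, 0 = did not.
X_train = [[1, 1], [2, 1], [3, 1], [1, 0], [2, 0], [3, 0]]
y_train = [1, 1, 1, 0, 0, 0]  # in the past, only kids' profiles liked it

model = DecisionTreeClassifier().fit(X_train, y_train)

# A new adult viewer who genuinely loves the show: the model leans on
# is_kids_profile alone and predicts 0 ("won't like it").
print(model.predict([[2, 0]]))  # -> [0]
```

The model is not malicious; it simply extrapolates the only pattern its training data contained, which is exactly how harmless quirks and serious harms alike can arise.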

Yet, when it comes to articles on AI, the problem arises when the author takes one case and generalizes it to all cases. Consider this Forbes article about AI. It takes a few statements by Elon Musk, one study, and a lot of speculation to tell us that AI is dangerous. While some of the points are valid, the article does nothing to help us understand why exactly it is dangerous and what we can do about it. In that sense, it does more harm than good, giving the reader reasons to worry without grounding them in evidence or proposing solutions.

Taking anecdotal evidence (a single case) devoid of statistical backing and presenting it as the norm is deeply misleading. We often pay attention to these stories because they tend to describe extreme cases of AI’s adverse impact. We should not dismiss them outright, but we should consider them in context. How prevalent is the problem? Is it increasing or decreasing? What can be done to ensure such cases remain rare or non-existent? By staying at a general level, we miss the opportunity to better understand the problem and do very little to improve AI ethics discussions.

Show me the Data

This leads me to the third and final recommendation: discussions on AI ethics must stand on empirical evidence. Given that data is the foundation on which algorithms are built, data should also be readily available to evaluate their impact. This accessibility is both an advantage and a reminder that transparency is crucial. Ensuring that companies, governments, and NGOs make their data available to the public is probably one of the key roles regulators can play.

This is not limited to impact but extends to the inputs. That is, understanding the data that algorithms train on is as important as understanding the downstream impact they create. For example, if past data shows that white applicants get higher approval rates for mortgages than applicants of color, guess what? The models will inevitably replicate this bias in their results. This is a problem that needs to be recognized up front, at the data stage, rather than merely monitored in the outcomes.
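To illustrate that front-end mechanism, here is a minimal sketch, with entirely hypothetical data and feature names of my own invention, of how a historical disparity in approval decisions resurfaces in a trained model:

```python
# Toy illustration (hypothetical data): historical bias in the labels
# flows straight into the model's predictions.
from sklearn.linear_model import LogisticRegression

# Each row: [income_score, group], where group encodes a protected
# attribute (0 or 1). Historically, group 0 was approved more often
# at the very same income scores.
X_train = [[0.9, 0], [0.8, 0], [0.7, 0], [0.9, 1], [0.8, 1], [0.7, 1]]
y_train = [1, 1, 1, 1, 0, 0]

model = LogisticRegression().fit(X_train, y_train)

# Two applicants with identical income scores but different groups:
print(model.predict_proba([[0.8, 0]])[0][1])  # higher approval probability
print(model.predict_proba([[0.8, 1]])[0][1])  # lower approval probability
```

Auditing the training data for exactly this kind of disparity, before the model is ever deployed, is what recognizing the problem on the front end looks like in practice.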

Discussions on AI ethics must include statistical evidence for the issues at hand. That is, when presenting a problem, claims must be accompanied by numbers. Consider this article from the World Economic Forum. It makes measured claims, avoids generalizations, and backs up each claim with research. It not only informs the reader but also provides references for further reading. By doing so, it goes a long way toward improving AI ethics discussions.

Conclusion

It is encouraging to see the growing interest in AI. The public must engage with this topic, as it affects many aspects of our lives. Expanding the dialogue and welcoming new voices to the table is critical to ensuring AI works towards human flourishing. With that said, it is now time to ground AI ethics discussions in real evidence and sober assessments of cause and effect. We must resist the temptation of scapegoating and lazy generalizing. Instead, we must pursue the careful path of relentless probing and examining the evidence.

This starts with content creators. As those who frame the topic, we must do a better job of representing the issues clearly. We must avoid misleading statements that attract eyeballs but confuse minds. We must also commit to accurate reporting and be transparent about our sources.

Can we rise to the challenge of improving AI ethics discussions? I hope so.
