Painting Alternative Futures: Scenario Planning for AI and Faith

In the previous blog, I introduced AIT's new project and invited readers to start thinking about the future by first looking at the past. We now turn to scenario planning as a way to prepare for the future of AI and faith. For the curious, futures studies is an academic field that has developed solid business practices over the last 50 years. I even considered pursuing a degree in it in my early 30s, but that's a story for another day. The main point here is to mine some of these practices for what could be useful as we engage in a project to imagine alternative futures for AI and faith.

What is Scenario Planning?

A common foresight practice for large institutions is scenario planning. In the 1970s, leadership at the Royal Dutch Shell corporation wanted a robust process to prepare for an uncertain future. While the company already employed forecasting techniques for short-term planning, leaders felt the need for a different approach as they looked into the mid- and long-term future. They turned to a practice developed a decade earlier by the RAND Corporation to help them imagine new futures.

Instead of spending too much energy trying to predict the future, the leadership group sought to create plausible scenarios. That is, instead of simply extrapolating current trends, they wanted to paint pictures of possible futures at a conceptual level. Their point was not to “get it right” but to challenge executives to consider multiple alternatives.

In the early ’00s, I participated in one of these sessions with my employer. It was an exciting experience for a young professional, probably one of the reasons I got hooked on future thinking, and what inspired me to consider scenario planning for AI and faith. On that occasion, the group chose two main variables that would define our scenarios. Then, plotting them on a two-by-two grid, we created four scenarios, alternating high and low values for each variable. Each quadrant received a catchy name describing its combination of the two variables, as illustrated in the picture below:
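For the programmatically inclined, the mechanics of that two-by-two grid can be sketched in a few lines of Python. The variable names and scenario structure below are my own illustrative choices, not the ones from that session:

```python
from itertools import product

# Sketch of a 2x2 scenario matrix: two driving variables, each set to
# "high" or "low", yield four quadrants. The names are illustrative only.

def build_scenario_matrix(var_a, var_b):
    """Return the four high/low combinations of two scenario variables."""
    return [
        {var_a: level_a, var_b: level_b}
        for level_a, level_b in product(["high", "low"], repeat=2)
    ]

scenarios = build_scenario_matrix("AI adoption", "religious participation")
for quadrant in scenarios:
    print(quadrant)  # each dict is one quadrant of the 2x2 grid
```

In a real workshop, each of the four combinations would then be fleshed out into a full narrative and given its catchy quadrant name.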

Taken from Ribbonfarm.com

In essence, scenarios are nothing more than narratives about the future. They are not striving for accuracy but must be compelling, plausible, and memorable. This way, they can play an important role in painting a picture of the future that the decision-maker can prepare for.

Why Focus on Multiple Futures?

Looking at the chart above can be overwhelming, and it raises the question: why build multiple futures? Wouldn’t that create more confusion over what to do next? That’s a fair question for anyone encountering this practice. Yet, there is a strong reason for doing so. Futurist Amy Webb explains it this way:

It’s about flexibility. Most people and organizations are very inflexible in how they think about the future. In fact, it’s difficult to imagine yourself in the future, and there are neurological reasons for that. Our brains are designed to deal with immediate problems, not future ones. That plus the pace of technology improvement is becoming so fast that we’re increasingly focused on the now. Collectively, we are learning to be “nowists,” not futurists.

Here’s the problem with a “nowist” mentality: when faced with uncertainty, we become inflexible. We revert to historical patterns, we stick to a predetermined plan, or we simply refuse to adopt a new mental model.

Amy Webb – interview for the World Economic Forum

Thinking through alternative options forces us out of our short-term mentality. It also breaks us out of preconceived, past-based ideas about how the future may unfold. In short, scenario planning undercuts the tendency to predict the future, putting the focus instead on the range of possibilities.

Who should engage in this practice?

By now, it should be clear why large organizations are already embedding this practice into their planning cycle. Yet, is that limited to large institutions? Should smaller entities or individuals consider this practice? I would contend the answer is a resounding yes. In a world of increasing uncertainty, there is a growing incentive for democratizing scenario planning.

Photo by Samantha Gades on Unsplash

Certainly, in the field of AI and faith, there is a pressing need for considering alternative futures. It would not be prudent to assume AI adoption, or even the make-up of the faithful, will remain constant. Communities of faith are still reeling from the disruptive effects of the COVID-19 crisis. AI development and adoption continue to march on at breakneck speed. Between these two factors alone, the possibilities are numerous, even before considering the uncertainties around climate change and geopolitics.

In a fast-changing world, we need to reject the dichotomy of resorting to old thinking patterns or accepting change in passive resignation. There is a third way: preparing for possibilities with courage, caution, and hope. That is why AI Theology is engaging in scenario planning discussions to paint alternative futures. This is how we can best serve church, industry, and academia.

Stay tuned!

Let’s Talk About the Future of AI and Faith

AI Theology is embarking on an exciting new project about the future. Over the coming months, we’ll be exploring how faith and AI will interact decades from now. Through research, dialogue, speculation, and prayerful reflection, we hope to paint plausible pictures of what may lie ahead of us. More to come on that.

You might wonder why we would even engage in such a pursuit.

Here is one reason:

“Those who think about the future can shape it while those who do not only react to it” (Insta post)

Now that I have your attention, let me flesh this out a bit further. In the last few months, I have been discussing with the advisory board what AIT is about. We wanted to take time to regroup and rethink priorities. Like any endeavor, we have limited time and resources. Hence, choosing what to focus on is crucial if we are to have any chance of being effective.

What we realized is that any discussion about emerging technologies is inevitably a dialogue about the future. While AI technologies present daunting challenges for our time, these are small compared to where they are taking us as a society. By playing out current possibilities in the decades to come, we can offer a clearer picture of the dangers and opportunities of AI and instigate action in the present. In a digital environment where attention is measured in milliseconds, it is time we recover the long view.

Looking into the Past for Clues

If you think this is a daunting task, you are not alone. Often, we can barely plan what to eat for dinner. Imagine looking 20 years out! At first, it seems overwhelming if not a pointless exercise. Yet, stay with me for a bit.

A good starting point, albeit an imperfect one, is projecting your memory back. Do you remember what life was like 20 years ago? Some of you weren’t even born or were too young to remember. Others may feel a rush of nostalgia for a time long gone.

Photo by Alexander Schimmeck on Unsplash

I, for one, (age alert ahead) remember very well where I was and what I was doing in 2002. It was a pivotal year when I graduated from college, got married, and started working for the same bank I still work for today (2 mergers later). What a year! One that would define the course of my life for decades to come.

As I reflect on my younger self, I wonder what would surprise that 22-year-old if he could join me today. Which technology would wow him the most, and which one would horrify him? Certainly, he would be excited to learn that video calls on a handheld device were now possible. For somebody who dated long-distance for four years, that would have been a game changer! While cell phones already existed then, they were as dumb as they come, barely allowing you to make phone calls.

This is not just a trip down memory lane but a way to calibrate our thoughts to the perception of change over time. That is, considering how our younger selves would experience our present helps us understand not only what has changed but, most importantly, how fast things have changed. The rate of change is key if we are to think about the future. Getting a sense of that rate is probably the best takeaway from this exercise as we turn to the future projection we originally set out to do.

Peering into the future

Older version of me, created by FaceApp

Who is the wise older man on the right? Well, this is AI’s prediction of what I will look like in the future. Not bad, huh? I am not sure I can pull off the beard, since the best I can do today is scattered patches of hair across my face. With that said, I am impressed with how realistic the picture looks.

If AI can draw such a compelling future for my looks, what if I asked it to tell me more about the world in 2042? Well, that’s what I did. Using OpenAI’s GPT-3 language model, I typed in the prompt: “What will the world look like in 2042?”

Here is its response:

In 2042, the world will be a very different place. Technology will have advanced significantly, and humanity will have made great strides in solving many of the problems that plague us today. Poverty, hunger, and disease will be largely eradicated, and global warming will be under control. People will be living longer, healthier lives, and we will have colonized other planets. The internet will be even more omnipresent than it is today, and we will be able to communicate with each other instantaneously no matter where we are in the world. We will have flying cars, and robots will be a part of everyday life. The world will be a much more peaceful and prosperous place, and it will be an exciting time to be alive.

Ok, AI – dream on! That was both too general and too optimistic for my taste. No doubt, AI has a long way to go before it can give us a comprehensive picture of the future.
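For readers curious about the mechanics, a request like the one above can be sketched in Python. The model name and parameter values below are illustrative assumptions on my part, not a record of the exact call I made:

```python
# Hedged sketch of a GPT-3 style completion request. The model name and
# parameter values are assumptions for illustration, not the exact call used.

def build_completion_request(prompt, model="text-davinci-002",
                             max_tokens=256, temperature=0.7):
    """Assemble the JSON payload for a text-completion request."""
    return {
        "model": model,              # which GPT-3 variant to query
        "prompt": prompt,            # the question we want answered
        "max_tokens": max_tokens,    # cap on the length of the reply
        "temperature": temperature,  # higher values yield more creative output
    }

payload = build_completion_request("What will the world look like in 2042?")
# In practice this payload is POSTed to OpenAI's completions endpoint
# with an "Authorization: Bearer <API key>" header.
```

The temperature parameter is worth noting: lower values make the model more conservative, which might have tempered some of the optimism above.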

How about you?

What would your 20-years-younger-self notice about today? What would surprise him or her?

When you project yourself forward 20 years, considering the rate of change of the last 20 and keeping it mostly constant, what do you see?

What world do you visit?

Drop your answers in the “send us a message” section on the right or in our Facebook community.

We would love to hear from you.

What is the Connection Between Liberation Theology, Data, and Employment Law?

In the second part of our summary of the November AITAB meeting, we explored AI for good in Europe, industry, and academia. This final blog closes out that summary. In this piece, you will find varied insights ranging from biblical interpretation and law to liberation theology and different definitions of data.

Model Training and Biblical Wisdom

Brian:  I’ll jump to the question of the theological and biblical frameworks for AI for good. Something Scott said sparked a direction I hadn’t considered but could be an interesting resource. When Scott mentioned taking one model that was already trained and then training it differently, that opens up exciting new avenues of meaning in terms of how AI is formed, what inputs guide its development. Everything the Scripture has to say about how people are formed can potentially guide the way that we do machine learning.

The book of Proverbs and other Wisdom literature in the Bible address the way people are formed, the way sound instruction can shape people. And what’s really interesting is that these books approach wisdom in a variety of dimensions, all aspects of our lives. Wisdom means not only insight or understanding but practical skill, morality, experience, sound judgment. And that multivalence is important. We as people are formed by so many different inputs: we don’t exist in discrete bundles of attributes. I’m not only a student. I’m a student and a person from my family, a member of my local community. Those things overlap and can’t be easily separated. I don’t stop being from middle Tennessee when I enter the classroom. So teaching and learning must take account of this overlap and strive for our integration, the formation of the whole person. And Wisdom literature exemplifies that in some respects. 

Photo by Nitin Arya from Pexels

Elias: That’s a great point. One of the biggest searches of my life has been the search for integration. Even AI Theology started as a journey of integration of my work, machine learning, and my theological and Christian faith. I love when we start seeing these connections. It may seem a little awkward. How can the book of Proverbs connect to machine learning? As you stay with it, eventually something comes up, something you haven’t thought about. 

Liberation Theology and Employment Law

Levi: Through a different lens from what Brian was talking about: I’m working on a book project right now around the question of the “preferential option for the poor” (a Catholic term). It comes from liberation theology, where it is typically heard and studied as being voices for the voiceless and champions of justice.

Yet, one of the biggest problems overlooked within this perspective is the recognition that the poor experience the world differently. The dignity of the poor is typically overlooked in societies where the ruling class’s identities are the ones that get imposed.

You mentioned the question of bias. We know that, for the most part, something like facial recognition bias isn’t there because the programmers thought, “I hate people from different races, so I’ll make sure this technology doesn’t work.” Most of the time, it’s because they weren’t aware of these problems. And that happens when you are part of the dominant group.

When we look at the people who write about the preferential option for the poor, they are people who aren’t poor. That points to a great problem: AI currently has, has had, and will continue to have the bias of the people who program it. And these people are mostly upper-middle-class white men. Even in places outside of Western countries, they are still mostly men.

The way AI works is based on the data it receives. If the data is given by white men, it’s going to be data they have curated. But if you bring in data from different people, you will have different perspectives. And this has great potential. When I listen to people from different countries, backgrounds, and socioeconomic classes, I can be sympathetic, but I won’t ever understand fully.

If AI is trained on data from people of different backgrounds, it can potentially be a better advocate for them. One of the great advantages is that we think of AI as objective, while we tend to dismiss the perspective of outcasts as jaded. It’s harder to say that a computer’s outputs and ideas are not attuned to the realities of the poor. This is one of the great opportunities: it helps take the theological concept of the preferential option for the poor and make it a preferential option “of the poor,” instead of only on their behalf.

Davi: I am trying to navigate these waters as an attorney. The EEOC is the US agency that handles employment discrimination cases. It just launched an initiative called “listening sessions.” They are starting to tackle the problem of algorithmic bias in the law. They are seeing a lot of cases related to selection tests and association tests (such as IQ tests) that companies use to hire people, where the right answer is based on a specific cultural background. If you come from a different background and make different associations, you score badly.

These listening sessions are open to the public, so you can see how the US government is dealing with these problems. In Congress and other legal arenas, you still have fewer folks raising these issues. So the law is being decided in court in big cases, like Facebook’s facial recognition litigation. AI for good may be creating some democratization through these listening sessions, and I hope this will be one way to get input from people beyond the big companies.

Data as the Voice of the Poor

Wen:  I’d like to contribute by reflecting on what others have said and adding some thoughts.  Several others have mentioned the democratization of AI with open source courses and data. Additionally, as different AI toolsets become more powerful and simpler to use, these will allow non-technical people who are not data scientists to work with AI.  An analogy is how it used to be difficult to create fancy data visualizations, but now there are tools for anyone to create them with just a few clicks.  As AI tools do increasingly more, the role of data scientists will differ in the years to come.

Scott mentioned a lot of AI tools are from ivory tower and/or homogenous model developers. There is a lot of bias encoded in those AI tools.  Levi mentioned AI algorithms and training data tend to favor upper-class white men and overlook the experiences of the poor.

When we think about amplifying the views and voices of the poor, I’d like to speak from my perspective of Data Strategy:  How are we defining “data”?  What data is collected?  Who collects the data?  How is the data structured?

Photo by Vitaly Vlasov from Pexels

Most people who work with data think of spreadsheets, tables, and numbers. We need to also think about qualitative data, things in the realm of social science and anthropology. And audio and visual data, such as from self-driving cars, selfies, and surveillance cameras.

How can these datasets be used to serve poor and marginalized communities?  For example, spf.io is a platform that captions and translates live events and church services into many languages, including less common languages; this increases the accessibility of informative content for people in lesser-known communities.

I want to widen this conversation on data.  There are things we don’t currently collect as data, things that are happening but aren’t being captured, such as someone’s intuition in making decisions. We also need to explore the realm of epistemology – what is knowledge and information? And what are categories we haven’t considered yet?

Painting a Global View of AI for Good: Part 2

This blog continues the summary of our AITAB meeting in November. Given the diverse group of voices, we were able to cover a lot of ground in the area of AI for good. In the first part, I introduced the three main trends of AI for good: democratization of AI skills, green AI, and AI justice. In this blog, we cover examples of AI for good in industry and academia, along with a global perspective from Eastern Europe. Our board members spoke from experience and also listed some great resources for anyone interested in going deeper into the field.

AI for Good in the Industry

Davi: Another way AI and machine learning are helping with sustainability is by improving companies’ consumption of non-renewables. For example, one of the largest expenses of a cruise line is fuel. Mega-ships require untold amounts of fuel to move across oceans around the globe. And, in maritime travel, the exact same route may require differing amounts of fuel due to the many variables that affect consumption, such as the weight of the ship, seasonal and unexpected currents, and different weather patterns.

AI and machine learning have expanded the capacity to calculate, with never-seen-before precision, the amounts of fuel needed for such mega-ships to safely complete each of their routes in real-time. This newfound capability is not only good for these companies’ bottom lines but also helps them preserve the environment by diminishing emissions.

Elias: That’s a great point. A recent study by PricewaterhouseCoopers estimates that AI applications in transportation can reduce greenhouse emissions by as much as 1.5% globally so this is definitely an important trend to track.

Photo by Christian Lue on Unsplash

A Report from Eastern Europe   

Frantisek: I tried to investigate and revise my knowledge in the three areas Elias proposed. Regarding the first topic, the democratization of AI skills: from the perspective of Prague and the Czech Republic, we are at the crossroads between Eastern and Western Europe. There are initiatives that focus on AI education and popularization and issues related to that. I would like to point out one in particular, Prague AI, a cooperation between academic institutions and private companies.

This kind of initiative is more technological, and they are just beginning to grasp the idea that they need some ethical input, such as from philosophers. Yet, they are not inviting theologians to the table. I guess we need to prove our value before we are “invited to the club.”

With that said, Prague AI wants to transform the city into a European center for AI. They have good resources for that, both human and institutional support. So, I wouldn’t be surprised if they achieve this goal, and I wish them all the best. My research group aims at connecting with them too. But we need first to establish ourselves a bit better within the context of our university.

On another front, we recently established contact with the Ukrainian Catholic University, which aims to open an interdisciplinary program for technology and theology. However, we do not know yet how far along they are with this plan. We intend to learn more, as I am in the process of scheduling an in-person meeting with the dean of their Theological Faculty. It has not yet been possible due to the pandemic. We are very much interested in this cooperation.

We also aspire to establish a conversation with the Dicastery for Integral Human Development and other Vatican-related organizations in Rome where issues of new technologies and AI receive great attention especially in relation to ethics and Catholic Social Teaching. In December 2021 one of my team members went to Rome to start conversations leading towards that aim.

Photo by Guillaume Périgois on Unsplash

In summary, here in Central and Eastern Europe democratization of AI is more focused on education and popularization. People are getting acquainted with the issue.

Regarding sustainable AI, we are following in the footsteps of the European Commission. One of the European commissioners who shaped this agenda is from the Czech Republic. Perhaps because of that, the European Commission sponsored a big conference in September that was in large part focused on green AI. The contribution on the role of AI in processing plastic materials was especially interesting because it has great potential for green AI implementation.

The European Commission introduced a plan for the third decade of this century. It’s called the Digital Decade. It includes objectives like digitalization of public buildings, digital economics, and the growth of digital literacy among citizens with large support for the field of AI.

In Europe, AI justice is a big issue. There is hope and a lot of potential in AI to contribute towards the effectiveness and quality of judicial procedures. Yet there is an equivalent concern about the fundamental rights of individuals. I’m not very well acquainted with these issues, but it is an important topic here in Europe.

AI for Good In Academia

Photo by Vadim Sherbakov on Unsplash

Scott: I’m a professor of physics at Belmont University. I started working with machine learning around 2013 /2014 with respect to developing audio signal processing applications. I managed to get into an AI Ethics grant in 2017 and went to Oxford for a couple of summers.

My background, long ago, was in astrophysics, but over time I became focused on machine learning. I split my time between technical work and philosophical and ethical thinking. I recently taught a general education undergraduate class that integrated machine learning and ethics. We talked about how machine learning and algorithms work while also discussing various ethical principles and players in the field.

Then the university requested I teach a more advanced course more focused on coding for upper-level students. This fall I’m teaching AI deep learning and ethics, and it’s kicking my butt because I am writing a lot of the lessons from scratch. One of the things I’m doing in this course is integrating a lot with things from the open-source community and the public free machine learning and deep learning education. There’s Google, Facebook, and then there’s everybody else. 

So I’ve taken a lot of classes online. I’m pretty involved with the Fast AI community of developers and, through their ancillary groups, with Hugging Face, for example, a startup that is also a community. This makes me think that, in terms of democratization, in addition to proliferation around the world, there’s also proliferation among everybody who isn’t at one of these big tech firms, as far as disseminating education goes.

Democratization of AI and Open Source

I think a couple of big examples that come to mind are open-source communities doing their own work, like EleutherAI. They released their own GPT model that they trained themselves. It’s a grassroots community, loosely affiliated, that managed to pull this off through people donating their time and expertise.

Photo by Shahadat Rahman on Unsplash

One of the things they teach in Fast AI is a lot about transfer learning. Instead of training a giant model from scratch, we take an existing model and fine-tune it. That relates to sustainability as well: there are ecological concerns about the power consumption needed to train large language models. An example would be Megatron-Turing Natural Language Generation (MT-NLG), a gigantic language model from Microsoft and NVIDIA.

With transfer learning, we can start from an existing model’s weights, which doesn’t require much power. This allows people all over the globe to run these models with few computational resources. The idea is to take ivory-tower deep learning research and apply it to other things. Of course, one of the questions people think about is what we are inheriting when we grab a big model and then fine-tune it. Nobody really knows how much of that latent structure stays intact after the fine-tuning.
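To make the idea concrete, here is a minimal sketch of the freeze-and-fine-tune pattern in PyTorch. The tiny network below is just a stand-in for a large pretrained model; the layer sizes are arbitrary assumptions for illustration:

```python
import torch.nn as nn

# Minimal sketch of transfer learning: freeze a "pretrained" backbone and
# fine-tune only a small new head. In practice the backbone would be loaded
# from a model hub with learned weights rather than defined from scratch.

backbone = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 32))
for param in backbone.parameters():
    param.requires_grad = False  # keep the pretrained features intact

head = nn.Linear(32, 3)  # new task-specific classifier, trained from scratch

model = nn.Sequential(backbone, head)
trainable = [p for p in model.parameters() if p.requires_grad]
# Only the head's weight and bias receive gradient updates, so fine-tuning
# costs a small fraction of the compute needed to train the whole model.
```

Because gradients flow only through the head, each training step here updates just the head’s 99 parameters instead of the full model, which is the source of the power savings described above.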

It’s an interesting and accessible area, considering how many people, myself included, post free educational content online. You can take free courses and read free blog posts to learn about machine learning, developer tools, and ethics as well. The open-source movement is a nice microcosm of the democratization of content, relating to both AI ethics and sustainable AI.

Photo by niko photos on Unsplash

Elias: Thank you, Scott. I want to seize on that to make a point. Open source in the tech world is a great example of the mustard seed technology idea. It starts through grassroots efforts where many donate their time to create amazing things. That is where I think technology culture is teaching theology a lesson by actualizing the gift economy. In the world we live in, we pay companies and they focus on profit. It is highly transactional and calculating. Here you have an alternative economy where an army of volunteers creates things for free and invites anyone to take what they need. They build it simply for the pleasure of building it. It’s a great example.

Scott: I’m also starting to do some work on algorithmic auditing, and I just came across a group called Data Science for Social Good. Others may find it as interesting as I do.

Painting a Global View of AI for Good: Part I

In early November, AITAB (AI Theology Advisory Board) met for another fruitful conversation. As in our previous meeting, the dynamic interaction between our illustrious members took the dialogue to places I did not anticipate. In this first part, we set up the dialogue by framing the key issues in AI for good. We then move to a brief report on how this is playing out in East Asia. In a conversation often dominated by Silicon Valley perspectives, it is refreshing to take a glimpse at less-known stories of how AI technologies are reshaping our world.

Defining AI for Good

Elias: Let me say a few words to start the discussion. This is different from our other discussions, where we focused on ethics. In those instances, we were reflecting on what was “wrong” with technology and AI. Today, I want to flip the script and focus more on the positive, what I call “AI for good.” Good theology starts with imagination. Today is going to be more of an exercise in imagination, noticing what’s happening that doesn’t necessarily make the news.

More specifically, there are three main areas where I see global AI for good starting to take shape. The first is the democratization of AI skills: spreading technological knowledge to underrepresented communities. This is a very important area since, as we discussed before, people make AI in their own image. If we don’t widen the group, we will keep getting the same type of AI. A great example is Data Science Nigeria. Just yesterday I spoke at one of their bootcamps. It’s very encouraging to see young men and women from Nigeria and other African countries getting involved in data science. It started as the vision of two data scientists who want to train 10 million data scientists in Nigeria over the next 10 years. It’s a bold goal, and I sure pray they achieve it.

The second topic is green AI, or sustainable AI: how AI can help us become more sustainable. One example is using computer vision to identify illegal fires in the Amazon, using AI to effect change with an eye on sustainability. The last one is AI justice. Just as AI can create bias, AI tools can be used to identify and call out that bias. That is the work of organizations like the Algorithmic Justice League, led by Joy Buolamwini. That is also a growing area. These three areas cover the main themes of global AI for good.

Photo by Bruno Melo on Unsplash

Re-Framing Technology

Let me frame these within a biblical context. When we talk about technology, we usually mean big tech that comes from Silicon Valley. As an alternative, I want to introduce a different concept: mustard seed technology. In the gospels, Jesus speaks of the kingdom of God being like a mustard seed. Though it’s one of the smallest seeds, it becomes a big tree where birds can come and rest in its shade.

I love this idea of grassroots technology, either being developed or being deployed to provide for others. Just think of farmers in Kenya using their phones to make payments and do other things they weren’t able to do before. Those are the stories I wanted to think about today, starting geographically: what does global AI for good look like in different parts of the world?

Photo by Akson on Unsplash

AI for Good in East Asia

Levi: Here in East Asia, the turning point came in 2016 when DeepMind’s AlphaGo (a Google-owned AI system) beat Lee Sedol in a game of Go. It created a very interesting push in South Korea and China to rapidly advance and develop AI infrastructure. I’m involved with a group on AI social and ethical concerns focused on Asia. The group has nine scholars from six different Asian countries. One of the things we are going to discuss soon is an MIT report based on interviews with several Asian business owners about the direction of AI. The report is two years old, but it’s interesting to see how limited the state of AI in China was then. Now they are one of the world leaders in AI development.

There is a major push in this part of the world. Asia across the board was late to the industrial game, except for Japan. As countries like South Korea and China have massively industrialized in the last decades, they see AI as a way to push into the future. This opens a lot of questions, like the ones about democratization and justice, that need to be addressed. But one interesting thing is that Asian countries are more interested in pushing toward AI regulation than the USA or European countries. There is also a recognition of wanting to be the best in advanced technology but also the best in “getting it right.”

Where that is going is hard to say. We know that in China, the government directs most AI development, so the question of democratization may not be the question at hand. South Korea allocated billions of won to developing AI around the same time and will likely engage in more democratization than China.

It is also interesting to consider justice issues, like how facial recognition fails to recognize people who are not white men. When you train this technology on Chinese datasets, you have a much larger data set – one and a half billion people rather than 350 million in the US – which allows the possibility of reducing these biases and offers great potential for global AI for good.

There is also the problem of natural language processing. GPT-3 recently came out and, just like GPT-2, was trained mostly on English web pages. This means bias from the English-speaking world is coded into those AI systems. But if you start training those same systems on Chinese-, Korean-, Japanese-, or Hindi-language pages, you will end up with different frameworks. The bigger question will be: is there a way to put these in dialogue? I think this is a much more complicated question. Because there is so much development going on in this part of the world, it opens up the recognition that many of the biases encoded in Western AI development will not be the same as those in the rest of the world.

Conclusion

In this first part, we introduced the discussion on a global view of AI for good. It includes three main categories: democratizing AI skills, sustainable AI, and AI justice. We then framed it within a mustard seed technology perspective. That is, we focus on the margins as opposed to the geo-centers of technological power. We are less interested in Silicon Valley and more in what is happening on the street corners of global cities.

Recreating our World Through Mustard Seed Technology

In this blog, I sketch the outline of an alternative story for technology. It starts with an ancient parable and how it has sprung into a multiplicity of meanings in our time. It is a story of grassroots change, power from below, organic growth, and life-giving transformation. Those are terms we often do not associate with technology. Yet, this is about to change as we introduce the concept of mustard seed technology.

Narratives are powerful meaning-making tools. They bring facts together and organize them in a compelling way, making them hard to refute. Most often, their influence goes beyond storytelling to truth-defining. That is, the reader becomes a passive, uncritical receiver of the message, losing sight of the fact that it is only a narrative. The story often becomes an undisputed fact.

Looking Behind the Curtain

When it comes to technology, the situation is no different. The dominant narrative tells the story of Silicon Valley overlords who rule our world through their magical gadgets, constantly capturing our attention and our desires. Other times, it hinges on a Frankenstein perspective of creations turning against their creators, where machines conspire to re-shape our world without our consent. While both narratives hold kernels of truth, their power is not in their accuracy but in their influence. That is, they are not important because they are true but because we believe in them.

Photo by Frederico Beccari from unsplash.com

The role of the theologian, or the critical thinker if you will, is to expose and dismantle them. Yet, they do so not by direct criticism alone but by offering compelling alternative narratives that connect the facts in new ways. Most dominant narratives around technology share a bent toward despair. It is most often the story of a power greater than us, a god if you will, imposing its will to our detriment. Hence, the best antidote is a narrative of hope that does not ignore the harms and dangers but weighs them properly against the vast opportunities human creativity has to offer the world.

The best challenge to algorithmic determinism is human flourishing against all odds.

That is what AI theology aspires to. As we seek to intersect technological narratives with ancient texts, we look both for ethical considerations and for the lens of hope, both in short supply in the world of techno-capitalism and techno-authoritarianism. In their worship of profit, novelty, and order, these two dominant currents tell only part of the story. Unfortunately, as they proclaim it through powerful loudspeakers, parallel stories are overshadowed.

A Biblical Parable

According to the Evangelists, Jesus liked to teach through parables. He knew the power of narrative. The gospels contain many examples of these short stories, often meant to help the hearer find meaning in their environment. They were surprisingly simple, memorable, and yet penetrating. Instead of being something to be discerned, the parable discerned the listeners as they encountered themselves in the story.

Photo by Mishaal Zahed on Unsplash

One of them is the seminal parable about the mustard seed. Evangelist Matthew puts it this way:

He put before them another parable: “The kingdom of heaven is like a mustard seed that someone took and sowed in his field;  it is the smallest of all the seeds, but when it has grown it is the greatest of shrubs and becomes a tree, so that the birds of the air come and make nests in its branches.

Matthew 13:31-32

From this short passage, we can gather two main paths of meaning. The first is of small beginnings becoming robust and large over time. It is not just the fast growth of a startup but a movement that takes time to take hold yet eventually becomes an undisputed reality that no one can deny.

The other path of meaning is one of function. Once grown, the tree is not there simply to be admired and revered. Instead, it is there to provide shelter for other beings who do not have a home. It is a picture of hospitality, inclusion, and invitation. The small seed becomes a big tree that can now help small animals. It can provide shade from the sun and a safe place for rest.

A Contemporary Story from the Margins

Jesus was not talking directly about technology. We can scarcely claim to know the original meaning of the text. That is not the task here. It is instead an attempt to transpose the rich avenues of meaning from the text into our current age and in turn, build a new narrative about the development of technology in our time. A story about how technology is emerging from the margins and solving problems in a life-giving way, rather than a flashy but profitable manner. That is what I would define as mustard seed technology.

What does that look like in concrete examples? From the great continent of Africa, I can tell of at least two. One is the story of a boy who built a wind generator to pump water to his village. With limited access to books and parts, and no one to teach him, he organized an effort to build the generator using an old bike motor. The Netflix movie The Boy Who Harnessed the Wind tells this story and is worth your time. Another example is how Data Science Nigeria is training a million data scientists in Africa. Through hackathons, boot camps, and online courses, the organization is at the forefront of AI skills democratization efforts.

Beyond these two examples, consider the power unleashed through the creative economy. As billions get access to free content on YouTube and other video platforms, knowledge can be transferred a lot faster than before. Many can learn new skills from the comfort of their home. Others can share their art and crafts and sell them in a global cyber marketplace. Entrepreneurship is flourishing at the margins as the world is becoming more connected.

Conclusion

These examples of mustard seed technology tell a different story. They speak of a subversive undercurrent of goodness in history that will not quiet down even in the midst of despair, growing inequality, and polarization. It is the story of mustard seed technology springing up at the margins of our global home and growing into robust trees of creativity and economic empowerment.

Do you have the eyes to see and the courage to testify to their truth? When you consider technology, I invite you to keep the narratives of despair at bay. For a moment, start looking for the mustard seeds happening all around you. As you find them, they will grow into trees of hope and encouragement in your heart.

How is AI Hiring Impacting Minorities? Evidence Points to Bias

Thousands of resumes, few positions, and limited time. The story repeats itself in companies globally. Growing economies and open labor markets, re-shaped by platforms like LinkedIn and Indeed, have opened the labor market wide. While this has expanded opportunity, it has left employers with the daunting task of sifting through the barrage of applications, cover letters, and resumes thrown their way. Enter AI, with its promise to optimize and smooth out the pre-selection process. That sounds like a sensible solution, right? Yet, how is AI hiring impacting minorities?

Not so fast – a 2020 paper summarizing data from multiple studies found that using AI for both selection and recruiting shows evidence of bias. As in the case of facial recognition, AI for employment is showing disturbing signs of discrimination. This is a concerning trend that requires attention from employers, job applicants, citizens, and government entities.

Photo by Cytonn Photography on Unsplash

Using AI for Hiring

The MIT podcast In Machines We Trust goes under the hood of AI hiring. What they found is surprising and concerning. First, it is important to highlight how widespread algorithms are at every step of hiring decisions. One of the most common applications is initial screening games that narrow the applicant pool for interviews. These games come in many forms, varying by vendor and job type. What they share in common is that, unlike traditional interview questions, they do not directly relate to skills relevant to the job at hand.

AI game creators claim that this indirect method is intentional. This way, the candidate is unaware of how the employer is testing them and therefore cannot “fake” a suitable answer. Instead, many of these tools try to see whether the candidate exhibits traits of past successful employees in that job. Therefore, employers claim they get a better measurement of the candidate's fit for the job than they would otherwise.

How about job applicants? How do they fare when AI decides who gets hired? More specifically, how does AI hiring impact minorities’ prospects of getting a job? On the other side of the interview table, job applicants do not share the vendors’ enthusiasm. Many report an uneasiness from not knowing the tests’ criteria. This unease in itself can severely impact their interview performance, creating additional unnecessary anxiety. More concerning is how these tests affect applicants with disabilities. Today, thanks to legal protections, job applicants do not have to report disabilities in the interviewing process. Now, some of these tests may force them to disclose them earlier.

What about Bias?

Unfortunately, bias does not affect only applicants with disabilities. Other minority groups are also feeling the pinch. The MIT podcast tells the story of an African-American woman who, despite having the prerequisite qualifications, did not get a single callback after applying to hundreds of positions. She eventually found a job the old-fashioned way – getting an interview through a network acquaintance.

The problem of bias is not entirely surprising. If machine learning models are trained on past data from job functions that are already fairly homogeneous, they will only reinforce and duplicate this reality. Without examining the initial data or applying intentional weights, the process will continue to perpetuate this problem. Hence, when AI is trained on majority-dominated datasets, the algorithms will tend to look for majority traits at the expense of minorities.
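To make this mechanism concrete, here is a toy sketch of how a model trained on homogeneous past hires simply reproduces that homogeneity. The data and the frequency-based "model" are illustrative assumptions, not any vendor's actual system:

```python
from collections import Counter

# Hypothetical historical hires: 90% share trait "A" (a majority-group proxy)
past_hires = ["A"] * 90 + ["B"] * 10

# "Training": the model merely learns how often each trait appears among past hires
trait_scores = {t: c / len(past_hires) for t, c in Counter(past_hires).items()}

def score(candidate_trait):
    """Score a candidate by the frequency of their trait among past hires."""
    return trait_scores.get(candidate_trait, 0.0)

# Two equally qualified candidates receive very different scores
print(score("A"))  # 0.9 - majority trait is rewarded
print(score("B"))  # 0.1 - minority trait is penalized
```

Nothing in this sketch measures job skill; the "model" only rewards resemblance to the past, which is exactly how homogeneous training data perpetuates itself.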

This becomes a bigger problem when AI applications go beyond resume filtering and selection games. They are also part of the interviewing process itself. AI hiring companies like Hirevue claim that their algorithms can predict the success of a candidate by their tone of voice in an interview. Other applications summarize taped interviews to select the most promising candidates. While these tools can clearly help speed up the hiring process, their bias tendencies can severely exclude minorities from the process.

The Growing Need for Regulation

AI in hiring is here to stay, and it can be very useful. In fact, the majority of hiring managers state that AI tools are saving them time in the hiring process. Yet, the biggest concern is how these tools are bending power dynamics toward employers by shortening selection and interview time, when both sides should benefit from their application.

If AI for employment is to work for human flourishing, then it cannot simply be a time-saving tool for employers. It must also expand opportunity for under-represented groups while also meeting the constant need for a qualified labor force. Above all, it cannot claim to be a silver bullet for hiring but instead an informative tool that adds a data point for the hiring manager.

There is growing consensus that AI in hiring cannot go on unregulated. Innovation in this area is welcome but expecting vendors and employers to self-police against disparate impact is naive. Hence, we need intelligent regulation that ensures workers get a fair representation in the process. As algorithms become more pervasive in the interviewing process, we must monitor their activity for adverse impact.

Job selection is not a trivial activity but is foundational for social mobility. We cannot afford to get this wrong. Unlike the psychometric evaluations used in the past, which have scientific and empirical support, these new tools are mostly untested. When AI vendors claim they can predict job success by tone of voice or facial expression, the burden is on them to prove the fairness of their methods. Should AI decide who gets hired? Given the evidence so far, the answer is no.

AI for Good in the Majority World: Data Science Nigeria

Data Science Nigeria has an ambitious goal: to train 1 million Nigerian data scientists by the end of the decade. Yet, it does not end there: the non-profit aims to make Africa's most populous nation a leading player in the growing global AI industry. Hence, DSN is a shining example of the growing trend of AI for good in the majority world.

AI holds great potential to solve intractable socioeconomic problems. It is not a silver-bullet solution, but a great enabler to speed up, optimize, and greatly improve decision making. Hence, it is not surprising to see the burgeoning AI for good trend emerging in the majority world. Yet, what makes DSN stand apart is that it goes a step further. It seeks not only to solve social problems but also to create economic opportunity that would not exist otherwise.

It is this abundance mentality that will best align AI with the flourishing of life.

Re-framing Who Are AI’s Customers

I learned about DSN while attending Pew Research's recent webinar on AI ethics. One of its panelists was Dr. Uyi Stewart, DSN board member and IBM distinguished engineer, whose perspective stood out. While others discussed AI ethics in abstract terms, he proposed that AI should be about solving problems for 75% of the world population. That is, AI is not limited to solving complex business problems for the world’s largest corporations. Instead, it can and should be part of the daily life of those living in remote villages and cramped urban centers in the Southern Hemisphere.

Photo by Nqobile Vundla on Unsplash

How so? He went further to provide an example. The world’s poor today face life-and-death choices around the scarcity of resources. Farmers must contend with the fluctuations of a warming climate. Urban dwellers must make key decisions with very limited financial resources. Most of them already own a phone. Hence, he believes the industry should develop decision-support solutions for these devices so people can make better choices. These are not ways to optimize profit; for some, they can represent the difference between life and death.

Where most see a social problem, Dr. Stewart envisions a potent market opportunity.

From Scarcity to Abundance

Our economic system is mostly based on the concept of scarcity. That is, the idea that resources are finite and therefore must be allocated efficiently. It is the scarcity mentality that drives the market to increase prices for commodities even when they are abundant. Moreover, companies and governments may limit production of a product simply to simulate this effect and achieve higher profit margins.

The digital economy has turned the concept of scarcity on its head. When knowledge is digitized and storage is cheap, we move from finite resources to limitless solutions. Even so, these solutions must first be optimized, which is why AI becomes crucial in the digital economy. The promise of AI for good in the majority world is unleashing this wealth of opportunity in places where physical resources are scarce. DSN is leading the way by empowering young Nigerians to become data scientists. With this knowledge, they can unlock hidden opportunities in the communities where they live.

By investing in the Nigerian youth, this organization is tapping into the majority world’s greatest resource. This is what AI for good is all about: technology for the flourishing of humanity in places of scarcity.

4 Surprising Ways Emotional AI is Making Life Better

It’s been a long night and you have driven for over 12 hours. The exhaustion is such that you are starting to black out. As your eyes close and your head drops, the car slows down, moves to the shoulder, and stops. You wake up and realize your car saved your life. This is just one of many examples of how emotional AI can do good.

It doesn’t take much to see the ethical challenges of computer emotion recognition. Worst-case scenarios of control and abuse quickly come to mind. In this blog, I explore the potential of emotional AI for human flourishing. We need to examine these technologies with a holistic view that weighs their benefits against their risks. Hence, here are 4 examples of how affective computing could make life better.

1. Alert distracted drivers

Detecting signs of fatigue or alcohol intoxication early enough can be the difference between life and death. This applies not only to the driver but also to passengers and occupants of nearby vehicles. Emotional AI can detect bleary eyes, excessive blinking, and other facial signs that the driver is losing focus. When this mental state is detected early, the system can intervene in many ways.

For example, it could alert the driver that they are too tired to drive. It could lower the windows or turn on loud music to jolt the driver into focus. More extreme interventions would include shocking the driver’s hands through the steering wheel, or slowing and stopping the car in a safe area.

As an additional benefit, this technology could also detect other volatile mental states such as anger, mania, and euphoria. This could lead to interventions like changing temperature, music, or even locking the car to keep the driver inside. In effect, this would not only reduce car accidents but could also diminish episodes of road rage.
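The escalation described above can be pictured as a simple policy that maps a fatigue estimate to an intervention. This is a hypothetical sketch: the thresholds, the 0-1 fatigue score, and the intervention names are all illustrative assumptions, not any carmaker's actual system:

```python
def choose_intervention(fatigue_score):
    """Map a 0-1 fatigue estimate to an escalating intervention.
    Thresholds are illustrative; a real system would tune them carefully."""
    if fatigue_score < 0.3:
        return "none"                 # driver appears alert
    elif fatigue_score < 0.6:
        return "audio_alert"          # warn the driver they seem tired
    elif fatigue_score < 0.85:
        return "windows_and_music"    # jolt the driver back into focus
    else:
        return "slow_and_stop"        # pull over in a safe area

print(choose_intervention(0.9))  # slow_and_stop
```

The design point is graduated response: mild nudges for mild drowsiness, with the car taking over only when the risk estimate is highest.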

2. Identify Depression in Patients

As those who suffer from depression would attest, the symptoms are not always clear to patients themselves. In fact, some of us can go years suffering the debilitating effects of mental illness and think it is just part of life. This is especially true for those who live alone and therefore do not have the feedback of another close person to rely on.

Emotional AI trained to detect signs of depression in the face could therefore play an important role in moving unaware patients into awareness. While protecting privacy, in this case, is paramount, adding this capability to smartphones or AI companions could greatly help improve mental health.

Our faces let out a lot more than we realize. In this case, they may be alerting those around us that we are suffering in silence.

3. Detect emotional stress in workplaces

Workplaces can be toxic environments. In such cases, the fear of retaliation may keep workers from being honest with their peers or supervisors. A narrow focus on production and performance can easily make employees feel like machines. Emotional AI systems embedded through cameras and computer screens could detect a generalized increase in stress by collecting facial data from multiple employees. This in turn could be sent over to responsible leaders or regulators for appropriate intervention.

Is this too invasive? Well, it depends on how it is implemented. Many tracking systems are already present in workplaces where employee activity on computers and phones is monitored 24/7. Certainly, this could only work in places where there is trust, transparency, and consent. It also depends on who has access to the data. An employee may not be comfortable with their bosses having this data but may agree to cede it to an independent group of peers.

4. Help autistic children socialize in schools

The last example shows how emotional AI can play a role in education. Autistic children process and respond to social cues differently. In this case, emotional AI in devices or a robot could gently teach the child to both interpret and respond to interactions with less anxiety.

This is not an attempt to put therapists or special-needs workers out of a job. It is instead an important enhancement to their essential work. The systems can be there to augment, expand, and inform their work with each individual child. They can also provide a consistency that humans sometimes fail to provide. This is especially important for kids who tend to thrive in structured environments. As in the cases above, privacy and consent must be at the forefront.

These are just a few examples of the promise of emotional AI. As industries start discovering and perfecting emotional AI technology, more use cases will emerge.

How does reading these examples make you feel? Do they sound promising or threatening? What other examples can you think of?

How AI and Faith Communities Can Empower Climate Resilience in Cities

AI technologies continue to empower humanity for good. In a previous blog, we explored how AI was empowering government agencies to fight deforestation in the Amazon. In this blog, we discuss the role AI is playing in building climate resilience in cities. We will also look at how faith communities can use AI-enabled microgrids to serve communities hit by climate disasters.

A Changing Climate Puts Cities in Harm’s Way

I recently listened to an insightful Technopolis podcast on how cities are preparing for an increased incidence of natural disasters. The episode discussed manifold ways city leaders are using technology to prepare, predict and mitigate the impact of climate events. This is a complex challenge that requires a combination of good governance, technological tools, and planning to tackle.

Climate resilience is not just about decreasing carbon footprints; it is also about preparing for the increased incidence of extreme weather. Whether it is fires in California, typhoons in East Asia, or severe droughts in Northern Africa, the planet is in for a bumpy ride in the coming decades. These events will also exacerbate existing problems such as air pollution, water scarcity, and heat-related illnesses in urban areas. Governments and civil society groups need to start bracing for this reality by taking bold preventive steps in the present.

Cities illustrate the costs of delaying action on climate change by enshrining resource-intensive infrastructure and behaviors. The choices cities make today will determine their ability to handle climate change and reap the benefits of resource-efficient growth. Currently, 51% of the world’s population lives in cities and within a generation, an estimated two-thirds of the world’s population will live in cities. Hence, addressing cities’ vulnerabilities will be crucial for human life on the planet.

Photo by Karim MANJRA on Unsplash

AI and Climate Resilience

AI is a powerful tool to build climate resilience. We can use it to understand our current reality better, predict future weather events, create new products and services, and minimize human impact. By doing so, we can not only save and improve lives but also create a healthier world while also making the economy more efficient.

Deep learning, for example, enables better predictions and estimates of climate change than ever before. This information can be used to identify major vulnerabilities and risk zones. In the case of fires, for example, better prediction can not only identify risk areas but also help understand how a fire will spread through them. As you can imagine, predicting the trajectory of a fire is a complex task involving a plethora of variables related to wind, vegetation, humidity, and other factors.

The Gifts of Satellite Imagery

Another crucial area in which AI is becoming essential is satellite imagery. Research led by Google, the Mila Institute, and the German Aerospace Center harnesses AI to develop and make sense of extensive datasets on Earth. This in turn empowers us to better understand climate change from a global perspective and to act accordingly.

Combining integrated global imagery with sophisticated modeling capabilities gives communities at risk precious advance warning to prepare. Governments can work with citizens living in these areas to strengthen their ability to mitigate extreme climate impacts. This will become particularly salient in coastal communities that should see their shores recede in the coming decades.

This is just one example of how AI can play a prominent role in climate resilience. A recent paper titled “Tackling Climate Change with Machine Learning” identified 13 areas where ML can be applied. They include but are not limited to energy consumption, CO2 removal, education, solar energy, engineering, and finance. Opportunities in these areas include the creation of new low-carbon materials, better monitoring of deforestation, and cleaner transport.

Photo by Biel Morro on Unsplash

Microgrids and Faith Communities

If climate change is the defining test of our generation, then technology alone will not be enough. As much as AI can help find solutions, the threat calls for collective action at unprecedented levels. This is both a challenge and an opportunity for faith communities seeking to re-imagine a future where their relevance surpasses the confines of their pews.

Thankfully, faith communities already play a crucial role in disaster relief. Their buildings often double as shelters and service centers when calamity strikes. Yet, if climate-related events become more frequent, these institutions must expand the range of services they offer to affected populations.

An example of that is the creation of AI-managed microgrids. These are small, easily controllable electricity systems consisting of one or more generating units connected to nearby users and operated locally. Microgrids contain all the elements of a complex energy system, but because they maintain a balance between production and consumption, they can operate independently of the grid. These systems work well with renewable energy sources, further decreasing our reliance on fossil fuels.
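The balance a microgrid controller must maintain can be sketched in a few lines. This is a toy model under stated assumptions: the battery capacity, energy figures, and single-battery design are illustrative, not a real controller:

```python
def step(battery_kwh, produced_kwh, consumed_kwh, capacity_kwh=50.0):
    """Advance the microgrid one period: store surplus production in the
    battery, draw from it to cover deficits, and report any unmet demand.
    Returns (new_battery_level, unmet_demand), both in kWh."""
    battery_kwh += produced_kwh - consumed_kwh
    unmet = max(0.0, -battery_kwh)            # demand the battery could not cover
    battery_kwh = min(max(battery_kwh, 0.0), capacity_kwh)  # clamp to physical limits
    return battery_kwh, unmet

# A sunny morning charges the battery; a cloudy evening peak would draw it down
level, unmet = step(10.0, produced_kwh=8.0, consumed_kwh=3.0)
print(level, unmet)  # 15.0 0.0
```

Keeping that running balance non-negative is what lets the system operate off-grid; when unmet demand appears, a real controller would shed load or reconnect to the main grid.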

When climate disaster strikes, one of the first things to go is electricity. What if houses of worship, equipped with microgrids, became the places to go for those without power? When the grid fails, houses of worship could become a lifeline for a neighborhood, helping impacted populations communicate with family, charge their phones, and find shelter from cold nights. Furthermore, they could sell their excess energy units on the market, finding new sources of funding for their spiritual mission.

Microgrids in churches, synagogues, and mosques – that’s an idea the world can believe in. It is also a great step towards climate resilience.