Let’s Talk About the Future of AI and Faith

AI Theology is embarking on an exciting new project about the future. In the next months, we’ll be exploring how faith and AI will interact decades from now. Through research, dialogue, speculation, and prayerful reflection, we hope to paint plausible pictures of what may be ahead of us. More to come on that.

You might wonder why even engage in such a pursuit.

Here is one reason:

“Those who think about the future can shape it, while those who do not can only react to it.” (from an Instagram post)

Now that I have your attention, let me flesh this out a bit further. Over the last few months, I have been discussing with the advisory board what AIT is about. We wanted to take time to regroup and rethink priorities. Like any endeavor, we have limited time and resources. Hence, choosing what to focus on is crucial if we are to have any chance of being effective.

What we realized is that any discussion about emerging technologies is inevitably a dialogue about the future. While AI technologies present daunting challenges for our time, these are small compared to where they are taking us as a society. By playing out current possibilities over the decades to come, we can offer a clearer picture of the dangers and opportunities of AI and instigate action in the present. In a digital environment where attention is measured in milliseconds, it is time we recover the long view.

Looking into the Past for Clues

If you think this is a daunting task, you are not alone. Often we can barely plan what to eat for dinner. Imagine looking 20 years out! At first it seems overwhelming, if not a pointless exercise. Yet stay with me for a bit.

A good starting point, albeit an imperfect one, is projecting your memory back. Do you remember what life was like 20 years ago? Some of you weren’t even born or were too young to remember. Others may feel a rush of nostalgia for a time long gone.

Photo by Alexander Schimmeck on Unsplash

I, for one (age alert ahead), remember very well where I was and what I was doing in 2002. It was a pivotal year when I graduated from college, got married, and started working for the same bank I still work for today (2 mergers later). What a year! One that would define the course of my life for decades to come.

As I reflect on my younger self, I wonder what would surprise that 22-year-old if he could join me in 2022. Which technology would wow him the most, and which one would horrify him? Certainly, he would be excited to learn that video calls on a handheld device are now possible. For somebody who dated long-distance for 4 years, that would have been a game changer! While cell phones already existed then, they were as dumb as they come, barely allowing you to make phone calls.

This is not just a trip down memory lane but a way to calibrate our perception of change over time. Considering how our younger selves would experience our present not only helps us understand what has changed but, more importantly, how fast things have changed. Getting a sense of the rate of change is probably the best takeaway from this exercise, and it prepares us for the forward projection we originally set out to do.

Peering into the future

Older version of me created by FaceApp

Who is the wise older man on the right? Well, this is AI’s prediction of what I will look like in the future. Not bad, huh? I am not sure I can pull off the beard, since the best I can manage today is scattered patches of hair across my face. With that said, I am impressed with how realistic the picture looks.

If AI can draw such a compelling picture of my future looks, what if I asked it to tell me more about the world in 2042? Well, that’s what I did. Using OpenAI’s GPT-3 language model, I typed in the prompt: “What will the world look like in 2042?”

Here is its response:

In 2042, the world will be a very different place. Technology will have advanced significantly, and humanity will have made great strides in solving many of the problems that plague us today. Poverty, hunger, and disease will be largely eradicated, and global warming will be under control. People will be living longer, healthier lives, and we will have colonized other planets. The internet will be even more omnipresent than it is today, and we will be able to communicate with each other instantaneously no matter where we are in the world. We will have flying cars, and robots will be a part of everyday life. The world will be a much more peaceful and prosperous place, and it will be an exciting time to be alive.

Ok, AI – dream on! That was both too general and too optimistic for my taste. No doubt, AI has a long way to go before it can give us a comprehensive picture of the future.

How about you?

What would your 20-years-younger self notice about today? What would surprise him or her?

When you project yourself forward 20 years, considering the rate of change of the last 20 and keeping it mostly constant, what do you see?

What world do you visit?

Drop your answers in the “send us a message” section on the right or in our FB community.

We would love to hear from you.

Painting a Global View of AI for Good: Part 2

This blog continues the summary of our AITAB meeting in November. Given our diverse group of voices, we were able to cover a lot of ground in the area of AI for good. In the first part, I introduced the 3 main trends of AI for good: democratization of AI skills, green AI, and AI justice. In this blog, we cover examples of AI for good in industry and academia, along with a global perspective from Eastern Europe. Our board members spoke from experience and also listed some great resources for anyone interested in going deeper into the field.

AI for Good in Industry

Davi: Another way AI and machine learning are helping with sustainability is by improving companies’ consumption of non-renewables. For example, one of the largest expenses of a cruise line is fuel. Mega-ships require untold amounts of fuel to move across the oceans of the globe. And in maritime shipping, the exact same route may require differing amounts of fuel due to the many variables that impact consumption, such as the weight of the ship, seasonal and unexpected currents, and different weather patterns.

AI and machine learning have expanded the capacity to calculate, with never-before-seen precision, the amounts of fuel needed for such mega-ships to safely complete each of their routes in real time. This newfound capability is not only good for these companies’ bottom lines but also helps preserve the environment by diminishing emissions.

Elias: That’s a great point. A recent study by PricewaterhouseCoopers estimates that AI applications in transportation can reduce greenhouse gas emissions by as much as 1.5% globally, so this is definitely an important trend to track.

Photo by Christian Lue on Unsplash

A Report from Eastern Europe   

Frantisek: I tried to investigate and revise my knowledge in the three areas Elias proposed. Regarding the first topic, the democratization of AI skills: from the perspective of Prague and the Czech Republic, we are at the crossroads between Eastern and Western Europe. There are initiatives that focus on AI education and popularization and related issues. I would like to point out one in particular, Prague AI, a cooperation between academic institutions and private companies.

This kind of initiative is more technological, and they are just beginning to grasp the idea that they need ethical input from the likes of philosophers. Yet they are not inviting theologians to the table. I guess we need to prove our value before we are “invited to the club.”

With that said, Prague AI wants to transform the city into a European center for AI. They have good resources for that, both human talent and institutional support. So I wouldn’t be surprised if they achieve this goal, and I wish them all the best. My research group aims to connect with them too, but we first need to establish ourselves a bit better within the context of our university.

On another front, we recently established contact with the Ukrainian Catholic University, which aims to open an interdisciplinary program for technology and theology. However, we do not yet know how far along they are with this plan. We intend to learn more, as I am in the process of scheduling an in-person meeting with the dean of their Theological Faculty. It has not yet been possible due to the pandemic. We are very much interested in this cooperation.

We also aspire to establish a conversation with the Dicastery for Promoting Integral Human Development and other Vatican-related organizations in Rome, where issues of new technologies and AI receive great attention, especially in relation to ethics and Catholic Social Teaching. In December 2021, one of my team members went to Rome to start conversations toward that aim.

Photo by Guillaume Périgois on Unsplash

In summary, here in Central and Eastern Europe the democratization of AI is focused more on education and popularization. People are still getting acquainted with the issue.

Regarding sustainable AI, we are following in the footsteps of the European Commission. One of the European commissioners who shaped this agenda is from the Czech Republic, and maybe because of that, the European Commission sponsored a big conference in September which was in large part focused on green AI. The contribution on the role of AI in processing plastic materials was especially interesting because it has great potential for green AI implementation.

The European Commission introduced a plan for the third decade of this century, called the Digital Decade. It includes objectives like the digitalization of public services, the digital economy, and the growth of digital literacy among citizens, with large support for the field of AI.

In Europe, AI justice is a big issue. There is hope and a lot of potential for AI to contribute to the effectiveness and quality of judicial procedures. Yet there is an equivalent concern about the fundamental rights of individuals. I’m not very well acquainted with these issues, but it is an important topic here in Europe.

AI for Good in Academia

Photo by Vadim Sherbakov on Unsplash

Scott: I’m a professor of physics at Belmont University. I started working with machine learning around 2013/2014, developing audio signal processing applications. I got involved in an AI ethics grant in 2017 and went to Oxford for a couple of summers.

My background, a long time ago, was astrophysics, but I eventually became focused on machine learning. I split my time between technical work and philosophical and ethical thinking. I recently taught a general-education undergraduate class that integrated machine learning and ethics. We talked about how machine learning and algorithms work while also discussing various ethical principles and players in the field.

Then the university requested I teach a more advanced course focused on coding for upper-level students. This fall I’m teaching AI, deep learning, and ethics, and it’s kicking my butt because I am writing a lot of the lessons from scratch. One of the things I’m doing in this course is integrating a lot of material from the open-source community and from free public machine learning and deep learning education. There’s Google, Facebook, and then there’s everybody else.

So I’ve taken a lot of classes online. I’m pretty involved with the fast.ai community of developers and their ancillary groups, like Hugging Face, for example. It’s a startup but also a community. This makes me think that, in terms of democratization, in addition to proliferation around the world, there’s also a proliferation of education among everybody that’s not at one of these big tech firms.

Democratization of AI and Open Source

I think a couple of big things come to mind, such as open-source communities doing their own work, like EleutherAI. They released their own GPT model that they trained. It’s a grassroots community group that is loosely affiliated but managed to pull this off through people donating their time and expertise.

Photo by Shahadat Rahman on Unsplash

One of the things they teach in fast.ai is transfer learning. Instead of training a giant model from scratch, you take an existing model and fine-tune it. That relates to sustainability as well: there are ecological concerns about the power consumption needed to train large language models. An example would be Megatron-Turing Natural Language Generation (MT-NLG) from Microsoft and NVIDIA, a gigantic language model.

With transfer learning, we can start from an existing model’s weights as an initialization, which doesn’t require much power. This allows people all over the globe to run these models with few computational resources. The idea is to take ivory-tower deep learning research and apply it to other things. Of course, one of the questions people think about is what we are inheriting when we grab a big model and fine-tune it. Nobody really knows how much of that learned structure stays intact after fine-tuning.
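
To make the freeze-and-fine-tune idea concrete, here is a minimal, framework-free sketch. Every number and name here is invented for illustration; real fine-tuning would use a library like PyTorch or fastai with a genuinely pretrained model.

```python
import random

random.seed(0)

# Frozen "pretrained" base layer: maps a 2-feature input to 2 hidden features.
# In real transfer learning, these weights come from an expensive training run.
BASE_WEIGHTS = [[1.0, -0.5], [0.3, 0.8]]  # never updated below

def base_forward(x):
    return [sum(w * xi for w, xi in zip(row, x)) for row in BASE_WEIGHTS]

# Trainable head: a single linear unit on top of the frozen features.
head = [0.0, 0.0]

def predict(x):
    h = base_forward(x)
    return sum(w * hi for w, hi in zip(head, h))

# Tiny synthetic task: the target is an exact combination of the frozen
# features, so only the cheap head needs to learn anything.
data = []
for _ in range(50):
    x = [random.uniform(-1, 1), random.uniform(-1, 1)]
    h = base_forward(x)
    data.append((x, 2.0 * h[0] - 1.0 * h[1]))

# Fine-tune the head only: stochastic gradient descent on squared error.
lr = 0.1
for _ in range(200):
    for x, y in data:
        h = base_forward(x)
        err = predict(x) - y
        for i in range(len(head)):
            head[i] -= lr * err * h[i]  # base weights stay frozen

print([round(w, 2) for w in head])  # head converges to approximately [2.0, -1.0]
```

The point of the sketch is the asymmetry: the frozen base stands in for the millions of expensive pretrained parameters, while the two head weights are all that gradient descent touches, which is why fine-tuning is so much cheaper than training from scratch.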

It’s an interesting and accessible area, considering how many people, myself included, post free educational content online. You can take free courses and read free blog posts to learn about machine learning, developer tools, and ethics as well. The open-source movement is a nice microcosm of the democratization of content that relates to both AI ethics and sustainable AI.

Photo by niko photos on Unsplash

Elias: Thank you, Scott. I want to seize on that to make a point. Open source in the tech world is a great example of the mustard seed technology idea. It starts through grassroots efforts where many donate their time to create amazing things. That is one area where technology culture is teaching theology something: it actualizes the gift economy. In the everyday economy, we pay companies and they focus on profit; it is highly transactional and calculating. Here you have an alternative economy where an army of volunteers creates things for free and invites anyone to take them as needed. They build it simply for the pleasure of building it. It’s a great example.

Scott: I’m also starting to do some work on algorithmic auditing, and I just came across a group called Data Science for Social Good. Other people may find it as interesting as I do.

Painting a Global View of AI for Good: Part I

In early November, AITAB (AI Theology Advisory Board) met for another fruitful conversation. As in our previous meeting, the dynamic interaction between our illustrious members took the dialogue to places I did not anticipate. In this first part, we set up the dialogue by framing the key issues in AI for good. We then move to a brief report on how this is playing out in East Asia. In a conversation often dominated by Silicon Valley perspectives, it is refreshing to take a glimpse at less-known stories of how AI technologies are reshaping our world.

Defining AI for Good

Elias: Let me say a few words to start the discussion. This is different from our other discussions, where we focused on ethics. In those instances, we were reflecting on what was “wrong” with technology and AI. Today, I want to flip the script and focus more on the positive, what I call “AI for good.” Good theology starts with imagination, so today is going to be more of an exercise in imagination, noticing what’s happening that doesn’t necessarily make the news.

More specifically, there are three main areas where I see global AI for good starting to take shape. The first is the democratization of AI skills: spreading technological knowledge to underrepresented communities. This is a very important area since, as we discussed before, people make AI in their own image. If we don’t widen the group, we will keep getting the same type of AI. A great example is Data Science Nigeria. Just yesterday, I was a speaker at one of their bootcamps. It’s very encouraging to see young men and women from Nigeria and other African countries getting involved in data science. It started as the vision of two data scientists who want to train 10 million data scientists in Nigeria over the next 10 years. It’s a bold goal, and I sure pray they achieve it.

The second topic is Green AI, or Sustainable AI: how AI can help us become more sustainable. One example is using computer vision to identify illegal fires in the Amazon, using AI to effect change with an eye on sustainability. The last one is AI justice. In the same way that AI can create bias, AI tools can be used to identify and call out that bias. That is the work of organizations like the Algorithmic Justice League, led by Joy Buolamwini. That’s also an area that is growing. These three areas cover the main themes of global AI for good.

Global AI for Good
Photo by Bruno Melo on Unsplash

Re-Framing Technology

Let me frame these within a biblical context. When we say technology, we usually mean Big Tech coming out of Silicon Valley. As an alternative, I want to introduce a different concept: mustard seed technology. In the gospels, Jesus talks of the kingdom of God being like a mustard seed. Though it’s one of the smallest seeds, it becomes a big tree where birds can come and rest in its shade.

I love this idea of grassroots technology, either being developed or being deployed to provide for others. Just think of farmers in Kenya using their phones to make payments and do other things they haven’t been able to do before. Those are the stories I wanted to think about today, starting geographically: what does global AI for good look like in different parts of the world?

Photo by Akson on Unsplash

AI for Good in East Asia

Levi: Here in East Asia, the turning point came in 2016 when DeepMind’s AlphaGo (a Google-backed AI program) beat Lee Sedol in a game of Go. It created a very interesting push in South Korea and China to rapidly advance and develop AI infrastructure. I’m involved with a group on AI social and ethical concerns focused on Asia. The group has nine scholars from 6 different Asian countries. One of the things we are going to discuss soon is an MIT report based on interviews with several Asian business owners about the direction of AI. The report is 2 years old, but it’s interesting to see how limited the state of AI in China was then. Now they are one of the world leaders in AI development.

There is a major push in this part of the world. Asia across the board was late to the industrial game, except for Japan. As countries like South Korea and China have massively industrialized in recent decades, they see AI as a way to push into the future. This opens a lot of questions, like the ones about democratization and justice, that need to be addressed. But one of the interesting things is that Asian countries are keen on pushing toward AI regulation compared to the USA or European countries. There is also a recognition of wanting to be not only the best in advanced technology but also the best at “getting it right.”

Where that’s going to go is hard to say. We know that in China, the government directs most AI development, so the question of democratization may not be the question at hand. South Korea allocated billions of won to developing AI around the same time and will likely engage in more democratization than China.

It is also interesting to see how justice issues play out, like facial recognition failing to recognize people who aren’t white men. When you train this tech on Chinese data sets, you have a much larger pool, one and a half billion people rather than 350 million (in the US), which opens the possibility of getting rid of some of these biases and offers great potential for global AI for good.

There is also the problem of natural language processing. GPT-3 recently came out and, just like GPT-2, is trained mostly on English web pages. This means the biases of the English-speaking world are coded into those AI systems. But if you start training those same systems on Chinese, Korean, Japanese, or Hindi language pages, you are going to end up with different frameworks. The bigger question will be: is there a way to put these in dialogue? I think this is a much more complicated question. Because there is so much development going on in this part of the world, it opens up the recognition that many of the biases encoded in Western development of AI will not be the same in the rest of the world.

Conclusion

In this first part, we introduced the discussion of a global view of AI for good. It includes three main categories: democratizing AI skills, sustainable AI, and AI justice. We then framed them within a mustard seed technology perspective. That is, we focus on the margins as opposed to the geo-centers of technological power. We are less interested in Silicon Valley and more in what is happening on the street corners of global cities.

Citizens Unite: Global Efforts to Stand Up to Digital Monopolies

Politicians lack the knowledge to regulate technology. This was comically demonstrated in 2018 when Senator Hatch asked Zuckerberg how Facebook could remain free for users. Zuckerberg’s response became a viral meme:

Taken from Tenor.com

Zuckerberg’s creepy smile aside, the meme drives home the point that politicians know little about emerging technologies. 

What can be done about this? Lawmakers cannot be experts on everything; they need good counsel. Imagine, for example, how challenging it would have been for governments to contain COVID with no help from microbiologists or researchers. The way we get to good policy is by having expert regulators who act as referees, weighing the pros and cons of different strategies so that lawmakers can deliberate with at least some knowledge.

A Global Push to Fight Digital Monopolies

When we take a look at monopolies around the world, it’s clear that digital monopolies are everywhere, and alongside them are the finance companies and banks. We live in a capitalist world; technology walks hand in hand with the urge to profit. That is why it is so hard to go against these monopolies.

But not all hope is lost. If we look across the globe, we can find different countries regulating big tech companies. Australia has been working for more than a year on legislation that would force tech platforms like Google and Facebook to pay news publishers for content. The tension was so great that Facebook took the extreme measure of blocking all news in Australia. The government believes Facebook’s news ban was too aggressive and will only push the community even further from the platform.

The Australian Prime Minister, Scott Morrison, shared his concerns on his Facebook page, saying that this behavior only shows how these Big Tech companies think they are bigger than the government itself and that rules should not apply to them. He also said that he recognizes how big tech companies are changing the world, but that does not mean they run it.

Discussions on how to stop big companies from using content for free are also happening in other countries like France, Canada, and even the United States. Governments around the world are considering new laws to keep these companies in check. The question is how far they can go against the biggest digital monopolies in the world.

Fortunately, there are many examples of governments working with tech companies to help consumers. Early this year, the French government approved a new tech repairability index. The index shows how repairable an electronic device is, covering smartphones, laptops, TVs, and even lawnmowers. This will help consumers buy more durable goods and push companies to make repairs possible. It is not only a consumer-friendly measure but also an environmentally friendly one, as it helps reduce electronic waste.

Another example of big tech companies having to answer to a government comes from Brazil. On February 16, a Brazilian congressman was arrested for making and sharing videos that broke the law by glorifying a very dark moment in Brazilian history, the military dictatorship the country went through starting in the 60s. A few days later, Facebook, Twitter, and Instagram had to ban his accounts because of a court order, since he was still updating them from inside prison.

Brazil still doesn’t know how this congressman’s story will end, but we can at least hope that cooperation between big companies and governments will continue to increase. These laws and actions are long overdue. We have to fight for our rights and always remember that no one is above the law.

From Consumers to Citizens

Technological monopolies can make us feel like they rule the world. But the truth is that we are the consumers, so we need to have our voices heard and our rights respected.

I believe the most efficient way to deal with tech monopolies is by creating committees that assist the government in making antitrust laws. These committees should include experts and ordinary citizens who have no ties to big tech companies. Antitrust laws are statutes developed by governments to protect consumers from predatory business practices and ensure fair competition. They essentially prevent companies from engaging in questionable activities like market allocation, bid rigging, price-fixing, and the creation of monopolies.

Protecting consumer privacy and deterring artificially high prices should be a priority. But can these committees really be impartial? Can we trust the government to make these laws?

The only way is for consumers to act as citizens. That is, we need to vote for representatives who are not tied to Big Tech lobbies. We need to make smarter choices with our purchases. We need to organize interest groups that put humanity back at the center of technology.

How are you standing up to digital monopolies today? 

Surveillance Capitalism: Exposing the Power of Digital Monopolies

On January 28, I attended the online forum Medium in Conversation: How to Destroy Surveillance Capitalism. In this blog, I summarize the main points from the discussion along with some reflections on how we can respond.

At first glance, we may not see what surveillance capitalism has to do with AI, but the two walk side by side. Surveillance capitalism is sustained by digital monopolies that rely on massive amounts of personal data (hence the surveillance part). This deluge of data is fed into powerful AI algorithms that drive content curation. Each depends on the other to thrive.

The Current State of Affairs

It’s a new era for Big Tech. Weeks after the de-platforming of Donald Trump—and with a new administration in the White House—the time is ripe to reexamine the power wielded by the giants of surveillance capitalism. How did corporations like Facebook, Google, and Amazon amass such power? How do we build a more open Web?

According to Cory Doctorow, if we’re going to break Big Tech’s dominance over our digital lives, we will have to fight monopolies. That may sound pretty mundane and old-fashioned, something out of the New Deal era. Yet breaking up monopolies is something we have forgotten how to do. The trust-busting era cannot begin until we find the political will, until politicians prove that they have the average citizen’s back against the richest, most powerful men in the world.

For politicians to take notice, citizens must first speak up.  

What is the problem with Monopolies?

In case we need a refresher, a monopoly is a bad deal for consumers. It means the market has only one seller, with the ability to set prices and tell people what a service costs. People line up to buy the product even if it costs too much, simply because they have no choice.

Facebook is a monopoly if you consider the prices it sets for its ad platform. The ad buyer has very little choice, allowing Zuckerberg’s empire to dictate the terms. In addition, the platform behemoth retains its monopoly by impeding the growth of other apps.

Anticompetitive conduct in big tech has been rampant. Mark Zuckerberg bought competing apps (Instagram and WhatsApp, for example), leaving little room for competitors. Apple pursued it on the hardware side by shutting down “right to repair” bills so that people are forced to buy new phones. In effect, they dictate when your phone can be repaired and when it has to be thrown away.

These actions led to an unprecedented concentration of power where a small group of people can make decisions of global consequence.

People of the World, Unite

Is it realistic to create an open web, or are we too far gone? Although these forces seem impenetrable and timeless, they are actually relatively new and have weaknesses. If it were just about changing our individual relationships with technology, it would be a hard lift.

Yet, according to Cory Doctorow, there is a wave of anger about monopolies in every domain sweeping the world. This discontent seeks to return power to communities so they can decide their own futures.

It has been done before. At the beginning of the 20th century, popular discontent drove politicians to rein in powerful monopolies such as Andrew Carnegie’s control of the steel industry and Rockefeller’s Standard Oil. Their efforts culminated in the passage of sweeping antitrust legislation.

Are we reaching a similar tipping point with big tech at the beginning of the 21st century?

Conclusion

Surveillance capitalism affects the entire world and can be scary at times. There is a need to seek freedom from the dominance of digital monopolies. Once again, it is necessary to find the political will to fight for change. While legislation will not solve this problem completely, it is an important first step.

Certainly, this is not just a North American problem. Some countries are already pressing these big companies to answer for their actions, paving the way for a future where power is more evenly distributed.

In the next blog, I’ll provide an overview of anti-trust efforts around the world.

Intelligence for Leadership: AI in Decision Making

Kings have advisors, presidents have cabinets, CEOs have boards, and TV show hosts have writers; every public figure relies on a cadre of trusted advisors for making decisions. Whenever crucial decisions are made, an army of astute specialists has spent countless hours researching, studying, and preparing to communicate the most essential information to the decision maker. Without them, leaders would lead by instinct and would often get it wrong. What if these advisors were not only human but also AI-enabled decision systems?

This is what the Modeling Religion Project is doing. Developed by a group of scientists, philosophers, and religion scholars, the project consists of a computer simulation populated by “virtual agents” mimicking the characteristics and beliefs of a country’s population. The model is then fed evidence-based social-science tendencies of human behavior under certain conditions. For example, a sudden influx of foreigners may increase the probability of hostility from native groups.

Using this initial state as a baseline, the researchers experiment with different scenarios to evaluate the effects of changes in the environment. Levers for change include adding newcomers, investing in education, and changing economic policy, among other factors. The model then simulates outcomes from these changes, allowing scholars and policy makers to understand the effects of decisions or trends in a nation. While the work focuses on religion, its findings have broad implications for other social sciences such as psychology, sociology, and political science. One of their primary goals is to better understand what factors can impact the level of religious violence. The government of Norway is about to put the models to the test, hoping to use the model’s insights to better integrate refugees into the nation.
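
For readers curious what “virtual agents plus policy levers” can look like in miniature, here is a purely illustrative sketch. The agents, probabilities, and lever effects below are invented for this example; the actual Modeling Religion Project encodes evidence-based social-science findings at far greater fidelity.

```python
import random

def run_scenario(newcomer_influx, education_investment, seed=0):
    """Simulate 1,000 'native' agents and return the share that turn hostile.

    All parameters are toy values: influx raises perceived pressure,
    education investment dampens it. Nothing here is calibrated to data.
    """
    rng = random.Random(seed)                  # fixed seed for reproducible runs
    base_hostility = 0.05                      # baseline probability of hostility
    pressure = 0.3 * newcomer_influx           # influx raises perceived pressure
    mitigation = 0.5 * education_investment    # education dampens the effect
    p = max(0.0, base_hostility + pressure * (1.0 - mitigation))
    return sum(rng.random() < p for _ in range(1000)) / 1000

# Compare two policy levers under a sudden 20% influx of newcomers.
no_education = run_scenario(newcomer_influx=0.2, education_investment=0.0)
with_education = run_scenario(newcomer_influx=0.2, education_investment=0.8)
print(no_education, with_education)  # the education scenario yields a lower hostile share
```

Even this toy version shows the workflow the project follows at scale: fix a baseline population, pull one policy lever at a time, and compare simulated outcomes across scenarios rather than predicting a single future.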

Certainly, a project of such ambition is not without difficulties. For one, there are ethical questions around who gets to decide what is a good outcome and what is not. For example, one of the models provides recommendations on how to speed up secularization in a nation. Is secularization a good path for every nation? Clearly, while the model yields interesting insights, using them in the real world may prove much harder than the complex math involved in building them. Furthermore, irresponsible use can quickly lead to social engineering.

While hesitation is welcome, the demand for effective decision making will only increase. Leaders from the household to the national level face increasingly complex scenarios. Consider the dilemma parents face when planning for their children’s education, knowing that the future job market will be different from today’s. Consider organizational leaders working on 5-10 year plans when markets can change in minutes, demand can change in days, and societies in the course of a few years. Hence, the need for AI-generated insights will only increase with time.

What are we to make of AI-enabled advice for public policy? First, it is important to note that this is already a reality in large multinational corporations. In recent years, companies have developed intelligent systems that seek to extract insights from the reams of customer data available to these organizations. These systems may not rise to the sophistication of the project above, but soon they will. Harnessing the power of data can provide an invaluable perspective to the decision-making process. As complexity increases, intelligent systems can distill large amounts of data into digestible information that can make the difference between becoming a market leader and descending into irrelevancy. This dilemma will be true for governments as well. Missing data insights can be the difference between staying in power and losing the next election.

With this said, it is important to highlight that AI-enabled simulations will never be a replacement for wise decision making. The best models can only perform as well as the data they contain. They represent a picture of the past but are not suitable for anticipating black swan events. Moreover, leaders may pick up signals of change that have not yet been detected by data collection systems. In such cases, human intuition should not be discarded in favor of computer calculations. There is also an issue of trust. Even if computers perform better than humans in making decisions, can humans trust them beyond their own capabilities? Would we trust an AI judge to sentence a person to death?

Here, as in other situations, it is worth drawing the contrast between automation and augmentation. Using AI systems to enhance human decision making is much better than using them to replace it altogether. While they will become increasingly necessary in public policy, they should never replace popularly elected human decision-makers.

The Future of Service: How Google, Apple and Facebook are Using AI to Simplify Our Lives

Companies want satisfied customers who will come back for more and recommend their brands to others, and AI can help them achieve this goal. Customers benefit in many ways, for instance by getting a quick reply to their questions.

Artificial intelligence is becoming “humanized” as it helps people in several ways. Whether it’s face recognition, dictating directly to a mobile phone, shopping online, ordering food, or riding in a self-driving car, AI applications are making our lives easier.

Let’s take a look at three major enterprises and ways they use artificial intelligence to “make life easier” for their customers.

  1. Google

 

  • Google spent between $20 and $30 billion on artificial intelligence in 2016.

  • Google’s self-driving cars use AI to map and navigate the roads.

  • Google claims 98% accuracy for its voice recognition technology.

  • Youtube increased watch time by 50% by tuning its video recommendations using AI.

  • Google Photos can recognize faces, create animations, or suggest a photo filter.

 

  2. Facebook

  • Facebook’s DeepText understands text with near-human accuracy.

  • Artificial intelligence is used to stop fake news from going viral.

  • Facebook uses deep neural networks for ad placement.

  • It has AI embedded into its messenger.

  • In 2017 it rolled out an AI project that could spot people with suicidal tendencies.

 

  3. Apple

  • Apple uses a neural engine for face recognition to unlock the phone and to transfer facial expressions onto animated emoji.

  • It uses deep learning to detect fraud on the Apple Store and for face detection.

  • Machine learning helps Apple choose news stories and recognize faces and locations in photos.

  • It is building an autonomous driving system that could be implemented in existing cars.

  • Apple’s Siri is a virtual personal assistant that communicates using a text-to-speech system.

These companies are just the tip of the iceberg; many others, such as Sephora and Nordstrom, are also jumping on the AI bandwagon as they realize how beneficial it can be for their business. In the next five years, many people will turn to artificial intelligence: 47% of people will start using a home or family assistant, 46% a health coach, and 41% financial advisers.

The following statistics, along with the projection that worldwide spending on cognitive and AI systems will reach an astonishing $57.6 billion in 2021, show just how bright the future of artificial intelligence is.

  • 60% of retail and ecommerce brands will implement AI in 2018.

  • 100% of IoT initiatives will be supported by AI capabilities by 2019.

  • 20% of business content will be authored by machines in 2018.

  • 85% of customer interactions with the enterprise will be managed without human intervention by 2020.

The use of artificial intelligence is only going to expand in the coming years as more and more companies decide to use it. At this pace, chatbots will be indistinguishable from humans by 2029, at least according to famed futurist Ray Kurzweil.

While this is welcome news for the customer, the question is how these companies will steward customer data. As AI takes a more prominent role, the need for data collection will only increase. Ensuring that this is done appropriately can be the difference between stellar customer service and costly lawsuits. The company that successfully balances privacy concerns while harnessing data through effective AI algorithms is poised to become a market leader.

Karthik Reddy, Community Manager at www.16best.net, is the author of India’s Number 1 travel blog. Boasting an MBA in computer science, he once decided to get away from the office desk life and take a breathtaking journey around the world. He is eager to use the power of the global network to inspire others. A passionate traveler and photography enthusiast, he aspires to share his experiences and help people see the world through his lens.

 

Hybrid Intelligence: When Machines and Humans Work Together

In a previous blog, I argued that the best way to look at AI was not from a machine versus human perspective but from a human PLUS machine paradigm. That is, the goal of AI should not be replacement but augmentation. Artificial intelligence should be about enhancing human flourishing rather than simply automating human activities. Hence, I was intrigued to learn about the concept of Hybrid Intelligence (HI): a manifestation of augmentation in which human intelligence works together with machine intelligence toward a common goal.

As usual, the business world leads in innovation, and this case is no different. Cindicator is a startup that combines the collective intelligence of human analysts with machine learning models to make investment decisions. Colin Harper puts it this way:

Cindicator fuses together machine learning and market analysis for asset management and financial analytics. The Cindicator team dubs this human/machine predictive model Hybrid Intelligence, as it combines artificial intelligence with the opinions of human analysts “for the efficient management of investors’ capital in traditional financial and cryptomarkets.”

This is probably the first enterprise to approach investment management in an explicitly hybrid fashion. You may find other examples in which investment decisions are driven by analysts, and others that rely mostly on algorithms. Cindicator seeks to combine the two for improved results.

How Does Hybrid Intelligence Work?

One could argue that any example of machine learning is at its core hybrid intelligence. There is some truth to that: every exercise in machine learning requires human intelligence to set it up and tune the parameters. Even as some of these tasks are now being automated, one could still argue that the human imprint of intelligence remains.

Yet, this is different. In the Cindicator example, I see a deliberate effort to harness the best of both machines and humans.

On the human side, the company harnesses the wisdom of crowds by aggregating analysts’ insights. This is important because machine learning can only learn from data, and not all information is data. Analysts may have information that is not visible in the data and can therefore bridge that gap. Moreover, human intuition is not (yet) present in machine learning systems. Certain signals require a sixth sense that only humans have. For example, a human analyst may catch deceptive comments from company executives that would pass unnoticed by algorithms.

On the machine side, the company developed multiple models to uncover predictive patterns in the available data. This is important because humans can only consider a limited number of scenarios. That is one reason AI has beaten humans in games: it can consider millions of scenarios in seconds, while its human counterparts must rely on experience and hunches. Moreover, machine learning models are superior tools for finding significant trends in vast data that humans would often overlook.
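A minimal sketch of the hybrid idea – not Cindicator’s actual method; the function and its weighting scheme are invented placeholders – is to blend the crowd’s consensus with a model’s forecast:

```python
def hybrid_forecast(analyst_forecasts, model_forecast, model_weight=0.5):
    """Blend the average of human analysts' forecasts (wisdom of the
    crowd) with a machine-learning model's forecast. Illustrative only."""
    crowd = sum(analyst_forecasts) / len(analyst_forecasts)
    return model_weight * model_forecast + (1 - model_weight) * crowd

# Three analysts predict an asset price; the model predicts 110.
print(hybrid_forecast([100, 104, 96], model_forecast=110))  # → 105.0
```

The weight becomes a dial between pure human judgment and pure algorithm; a real system would tune it against each source’s track record.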

Image by Gerd Altmann from Pixabay

Can Hybrid Intelligence Lead to Human Flourishing?

HI holds much promise for augmenting rather than replacing human intelligence. At its core, it starts from the principle that humans can work harmoniously with intelligent machines. Its potential uses are limitless. An AI-aided approach can supercharge research for cures to diseases, offer innovative solutions to environmental problems, and even tackle intractable social ills with humane solutions.

This is the future of work: collective human intelligence partnering with high-performing Artificial Intelligence to solve difficult problems, create new possibilities and beautify the world.

Much is said about how many jobs AI will replace. What is less discussed is the emergence of new industries made possible by the partnership between intelligent machines and collective human wisdom. A focus on job losses assumes an economy of scarcity where a fixed amount of work is available to be filled by either humans or machines. An abundance perspective looks at the same situation and sees the empowerment of humans to reach new heights. Think about how many problems remain to be solved, how many endeavors are yet to be pursued, and how much innovation is yet to be unleashed.

Is this optimistic future scenario inevitable? Not by a long shot. The move from AI to HI will take time, effort, and many failures. Yet looking at AI as an enabler rather than a threat is a good start. In fact, I would say that the best response to the AI threat is not returning to a past of dumb machines but forging a partnership between machine and human intelligence that steers innovation toward the flourishing of our planet. Only HI can steer AI toward sustainable flourishing.

There is work to do, folks. Let’s get on with the business of creating HI for a better world!

4 Reasons Why We Should be Teaching AI to Kids

In a previous blog, I talked about a multi-disciplinary approach to STEM education. In this blog, I want to explore how teaching AI to kids can accomplish those goals while also introducing youngsters to an emerging technology that will greatly impact their future. If you are a parent, you may be asking: why should my child learn about AI? Many stakeholders have recently emphasized the importance of STEM education. Yet what makes learning AI different from other STEM subjects?

First, it is important to define what learning AI means. Lately, the term AI has been used for any instance of a computer acting like a human. This ranges from the automation of tasks all the way to humanoids like Sophia. Are we talking about educating children to build sentient machines? No, at least not at first. The underlying technology that enables AI is machine learning. Simply put, as hinted by its name, these are algorithms that allow computers to learn directly from data or from interaction with an environment rather than through explicit programming. This is not a completely automated process, as the data scientist and/or developer must still manage the learning process. Yet, at its essence, it is a new paradigm for how we use computers. We go from programming, in which we instruct the computer to carry out tasks, to machine learning, in which we feed the computer data so it can discover patterns and learn tasks on its own. The question, then, is why should we teach AI (machine learning) to kids?
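The shift from programming to learning fits in a small illustration. Rather than hard-coding the Fahrenheit-to-Celsius formula, we hand the computer example pairs and let it discover the rule by ordinary least squares (a deliberately simple stand-in for machine learning):

```python
def learn_linear_rule(examples):
    """Learn y = a*x + b from (x, y) examples by ordinary least squares,
    instead of being programmed with the formula directly."""
    n = len(examples)
    sx = sum(x for x, _ in examples)
    sy = sum(y for _, y in examples)
    sxx = sum(x * x for x, _ in examples)
    sxy = sum(x * y for x, y in examples)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

# Fahrenheit-to-Celsius pairs: the computer discovers the conversion rule.
data = [(32, 0), (212, 100), (98.6, 37), (68, 20)]
a, b = learn_linear_rule(data)
print(round(a, 3), round(b, 1))  # ≈ 0.556 and -17.8, i.e. C = (F - 32) * 5/9
```

Nowhere did we tell the computer the conversion formula; it recovered the slope 5/9 and intercept -160/9 from the examples alone. That is the paradigm shift in miniature.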

Exposes Them to Coding

Teaching AI to kids starts with coding. While we’ll soon have advanced interfaces for machine learning, some allowing a “drag-and-drop” experience, for now doing machine learning requires coding. That is good news for educational purposes. I don’t need to rehash here the benefits of coding education. In recent years, there has been a tremendous push to get children coding early. Learning to code introduces them to a type of thinking that will help them later in life even if they do not become programmers. It requires logic and mathematical reasoning that can be applied to many endeavors.

Furthermore, Generation Z grew up with computers, tablets, and smartphones. They are very comfortable using them and incorporating them into their world. Yet, while large tech companies have excelled at ensuring no child is left without a device, we have done a poor job of helping them understand what is under the hood of all this technology. Learning to code is a way to do exactly that: lift the hood so they can see how these things work. Doing so empowers them to become creators with technology rather than mere consumers.

Works Well With Gaming

The reality is that AI started with games. One of the first experiments in AI was making a computer learn to play checkers. Hence, AI and gaming are natural complements. While there are now courses that teach children to build games, teaching AI goes a step further: children actually get to teach the computer to play games. This is important because games are a common part of their world. Teaching AI with games helps them engage with the topic by bringing it into territory familiar to their imagination.
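What does “teaching the computer to play” look like in practice? Here is a sketch using tabular Q-learning, one of the simplest reinforcement learning methods, on an invented number-line game; all names and parameters are illustrative:

```python
import random

def train_line_walker(episodes=500, seed=0):
    """Tabular Q-learning on a toy game: the agent starts at position 0
    on a line and must reach position 4. It learns from trial and error
    which action (-1 = left, +1 = right) is best in each state."""
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(5) for a in (-1, 1)}
    alpha, gamma, epsilon = 0.5, 0.9, 0.2  # learning rate, discount, exploration
    for _ in range(episodes):
        s = 0
        while s != 4:
            # Epsilon-greedy: usually exploit the best-known action, sometimes explore.
            if rng.random() < epsilon:
                a = rng.choice((-1, 1))
            else:
                a = max((-1, 1), key=lambda act: q[(s, act)])
            s2 = max(0, min(4, s + a))
            reward = 1.0 if s2 == 4 else 0.0
            best_next = 0.0 if s2 == 4 else max(q[(s2, -1)], q[(s2, 1)])
            q[(s, a)] += alpha * (reward + gamma * best_next - q[(s, a)])
            s = s2
    return q

q = train_line_walker()
# After training, moving right scores higher than moving left in every state.
```

No one told the agent which way to walk; it discovered the winning strategy purely from rewards, which is exactly the kind of “teach the computer” loop that captivates kids.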

I suspect that gaming will increasingly become part of education in the near future. What was once the scourge of educators is turning out to be an effective tool for engaging children in the learning process. There are clear objectives, instant rewards, and challenges to overcome. Teaching machine learning with games rides this wave and enhances it, giving children an opportunity to fine-tune learning algorithms with objectives that captivate their imagination.

Promotes Data Fluency

Data is the electricity of the 21st century. Helping children understand how to collect, examine, and analyze data sets them up for success in the world of big data. We are moving toward a society where data-driven methods increasingly shape our future. Consider, for example, how data is transforming fields like education, criminal justice, and healthcare. This trend shows no signs of slowing down.

This trend will not be limited to IT jobs. As sensors become more advanced, data collection will happen in many forms. Soon fitness programs will be informed, shaped, and measured by body sensors that provide more precise information about our bodies’ metabolism. Sports like baseball and football are already being transformed by the use of data. Thus, it is not far-fetched to assume that today’s children will eventually work in jobs or build businesses that live on data. They may not all become data scientists or analysts, but they will likely need to be familiar with data processes.

Opens up Discussions About Our Humanity

Because AI looms large in science fiction, the topic opens the way for discussions in literature, ethics, philosophy, and social studies. The development of AI forces us to reconsider what it means to be human. Hence, I believe it provides a great platform for adding the humanities to an otherwise robust STEM subject. AI education can and should include a strong component of reading and writing.

Doing so develops critical thinking and helps them connect the “how” with the “why.” It is not enough to learn how to build AI applications; we must foremost ask why we should build them. What does it mean to outsource reasoning and decision-making to machines? How much automation can happen without compromising human flourishing? You may think these are adult questions, but we underestimate our children’s ability to reflect deeply on the destiny of humanity. They, more than us, need to think about these issues, for they will inherit this world.

If we can start with them early, maybe they can make better choices and clean up the mess we have made. Also, teaching AI to kids can be a lot easier than we think.

Education with a Purpose: A New Approach To Teach STEM

After years of denial, we are finally acknowledging that we have a problem with STEM (Science, Technology, Engineering, and Math) education in this country. It is not an issue of access or supply of opportunities but one of perception: most students think it is simply too hard. That is unfortunate, since research shows that students who pursue STEM degrees are more employable and earn more than their non-STEM counterparts. Yet only a third of college students currently pursue these degrees.

I can speak from experience. When finishing high school, I really enjoyed physics and thought I would major in it in college. After one college-level physics class, I quickly realized that pursuing it would be a difficult path. The social sciences seemed an easier and more natural fit. I eventually declared a major in Political Science, staying away from science and math as much as I could in my liberal arts degree. Years later, I would regret this decision as my career took a decidedly more technical path. I certainly could have used some of those math and science classes.

While difficulty is definitely a factor, now that I reflect on it, the issue was deeper than that. What drew me to the social sciences was that they told me stories of human struggle, tragedy, and triumph. In my degree, I got to learn how nations were formed, regimes were taken down, and societies changed. That was something I could eat up. While I wanted to prepare for a career, to me education was about expanding my horizons and discovering new worlds. STEM subjects, while fascinating in their own right, lacked the human connection I found in the social sciences.

Later in my career, I was drawn to data science because of what it could do. I could take data and create insights that were previously hidden. In some instances, I could even “predict” something before it happened. That got me hooked, and it is how I learned on the job and through professional training to become a data scientist. As I started reflecting on the potential of the field I was working in, I finally caught that human connection I had missed in college. I realized that data science was not just reams of data being processed through algorithms; it could literally change people’s lives. Consider the example of algorithms being used to predict who is most likely to commit suicide. In this case, data science is literally saving lives.

I understand that my story is one data point, but I believe there is a theme embedded in it worth exploring. Maybe the issue is not just that STEM is hard but that STEM education is often disconnected from a higher purpose. Some of us will study science for its own sake. Others are naturally fascinated by how things work and want to learn to make things. Yet there is a whole group of students who would pursue STEM subjects if educational programs helped them make the connection with a humanitarian purpose.

After studying a social science and then becoming a technology worker, I noticed a curious paradox. The social sciences are deeply concerned about social problems. They go to great lengths to describe the causes, factors, and catalysts that worsen or alleviate them. Yet technology, that cold application of science, has shown the greatest potential for actually solving them. Just consider the impact of mobile phones in Sub-Saharan Africa. While living in places that lack electricity and sanitation, many people in these countries can afford a mobile phone. This technology, along with micro-finance, is empowering the poor to make financial transactions and create businesses, forging a way out of poverty.

What is missing is the connection between purpose and know-how: taking people who care deeply about their communities and teaching them the technical skills to do something about it – doing technology with a humanitarian purpose. People with a passion to serve and the technical know-how to leverage emerging technologies can change the world in ways not previously seen. Unfortunately, most technological innovation happens in for-profit institutions more interested in meeting a quarterly goal for stockholders than in making a positive impact on the communities that surround them.

Hence, I want to propose a STEM education with a telos. Telos is a Greek word that can be roughly translated as an “end goal.” Yet, it is not a goal like our new year’s resolution. Instead, it is a long term, guiding ideal that directs everything we do. It is akin to a higher purpose.

What if STEM education were not just about teaching technical skills but actually connected them to a humanitarian purpose? In other words, teaching students not just the “how” but also the “why.” Such an education would raise a tech-literate generation less concerned with acquiring the latest gadgets and more with using technology to enhance human flourishing. It would not only expand STEM knowledge to under-represented groups but also unleash future innovation for the common good.