How can Machine Learning Empower Human Flourishing?

As a practicing Software Product Manager currently working on my third integration of a Machine Learning (ML) enabled product, my understanding of and interaction with models is much more quotidian and, at times, downright boring. But it is precisely this form of ML that needs more attention, because ML is the primary building block of Artificial Intelligence (AI). In other words, in order to get AI right, we first need to focus on how to get ML right. To do so, we need to take a step back and reflect on the question: how can machine learning work for human flourishing?

First, we’ll take some cues from liberation theology to properly orient ourselves. Second, we need to understand how ML models are already impacting our lives. Last, I will provide a pragmatic list of questions for those of us in the technology field that can help move us towards better ML models, which will hopefully lead to better AI in the future. 

Gloria Dei, Vivens Homo

Let’s consider Elizabeth Johnson’s recap of Latin American liberation theology. To its stock-standard elements (the preferential option for the poor, the Exodus narrative, and the Sermon on the Mount), she adds a consideration drawn from St. Irenaeus’s phrase Gloria Dei, vivens homo. Translated as “the glory of God is the human being fully alive,” the phrase means that human flourishing is God’s glory manifesting in the common good. The common good here is not simply an economic factor. Instead, it is an intentional move towards the good of others by seeking to dismantle the structural issues that prevent flourishing.

Now, let’s dig into this a bit deeper: what prevents human flourishing? Johnson points to two things: 1) inflicting violence on people or 2) neglecting their good. Both of these translate “into an insult to the Holy One” (82). Not only must we refrain from inflicting violence on others (which we can all agree is important), but we must also be attentive to their good. Now, let’s turn to the current state of ML.

Big Tech and Machine Learning

We’ll look at two recent works to understand the current impact of ML models and hold them to the test. Do they inflict violence? Do they neglect the good? The 2020 investigative documentary The Social Dilemma (Netflix), which comes with a side of narrative drama, and Cathy O’Neil’s Weapons of Math Destruction are both popular and accessible introductions to how actual ML models touch our daily lives.


The Social Dilemma takes us into the fast-paced world of the largest tech companies (Google, Facebook, Instagram, etc.) that touch our daily lives. The primary use case for machine learning in these companies is to drive engagement by scientifically focusing on methods of persuasion. More clicks, more likes, more interactions; more is better. Except, of course, when it isn’t.

The film sheds light on how the desire to increase activity and to monetize their products has led to social media addiction and manipulation, and it even presents data on increased rates of suicide among pre-teen girls. Going further, the movie points out that, for these big tech companies, the applications themselves are not the product; instead, humans are. That is, the gradual but imperceptible change in our behavior is itself the product.

These gradual changes are fueled and intensified by hundreds of small randomized A/B tests run each day, which tweak minor variables to influence behavior. For example, do more people click on this button when it’s purple or green? With copious amounts of data flowing into the system, the models become increasingly accurate, to the point where the model knows (better than humans do) who is going to click on a particular ad or react to a post.
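
For readers who want to see the mechanics, here is a minimal sketch of the statistics behind such a test. The numbers and the function are hypothetical; real platforms run these comparisons automatically and at enormous scale.

```python
# A minimal sketch of the kind of A/B test described above:
# did the "purple" button get a significantly higher click rate than the "green" one?
from math import sqrt, erf

def two_proportion_z_test(clicks_a, views_a, clicks_b, views_b):
    """Return the z statistic and two-sided p-value for the difference in click rates."""
    p_a = clicks_a / views_a
    p_b = clicks_b / views_b
    p_pool = (clicks_a + clicks_b) / (views_a + views_b)             # pooled click rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / views_a + 1 / views_b))   # standard error
    z = (p_a - p_b) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))            # two-sided
    return z, p_value

# Hypothetical numbers: 1,200 of 50,000 users clicked the purple button,
# 1,050 of 50,000 clicked the green one.
z, p = two_proportion_z_test(1200, 50000, 1050, 50000)
print(f"z = {z:.2f}, p = {p:.4f}")  # a small p-value would justify shipping purple
```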

This is how they generate revenue: they target ads at people who are extremely likely to click on them. These small manipulations and nudges to elicit behavior have become such a part of our daily lives that we are no longer aware of their pervasiveness. Humans thus become commodities that must be continuously persuaded. Liberation theology would look to this documentary as a concrete demonstration of how ML is currently inflicting violence and neglecting the good.


Machine Learning Outside the Valley

Perhaps ‘normal’ companies fare better? Non-tech companies are getting in on the ML game as well. Unlike tech companies that focus on influencing user behavior for ad revenue, these companies focus on ML as a means to reduce the workload of individual workers or reduce headcount and make more profitable decisions. Here are a few types of questions they would ask: “Need to order stock and determine which store it goes to? Use Machine Learning. Need to find a way to match candidates to jobs for your staffing agency? Use ML. Need to find a way to flag customers that are going to close their accounts? ML.” And the list goes on. 

Cathy O’Neil’s work gives us insight into this technocratic world by sharing examples from credit card companies, recidivism prediction, and for-profit colleges, and even by challenging the US News & World Report college rankings. O’Neil coins the term “WMD,” Weapon of Math Destruction, for models that inflict violence and neglect the good. WMDs meet three criteria: they lack transparency, they grow exponentially, and they cause a pernicious feedback loop. It is the third criterion that needs the most unpacking.

The pernicious feedback loop is fed by selection biases in the original data set. The example she gives in chapter 5 is PredPol, a big-data startup whose software police departments use to predict crime. The model learns from historical data in order to predict where crime is likely to happen, using geography as its key input. The difficulty is that when police departments choose to include nuisance offenses in the model (panhandling, jaywalking, etc.), the model becomes more likely to predict new crime in those locations, which in turn prompts the department to send more patrols to those areas. More patrols mean a greater likelihood of seeing and ticketing minor offenses, which in turn feeds more data into the model. In other words, the model becomes a self-fulfilling prophecy.
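
The dynamic is easy to see in a toy simulation. The sketch below is not PredPol’s actual model; it is a deliberately simplified illustration, with made-up numbers, of how allocating patrols based on recorded crime inflates the record in whichever neighborhood happened to start with more entries.

```python
# A toy simulation of the pernicious feedback loop (not PredPol's actual model):
# patrols are allocated in proportion to *recorded* crime, and recorded crime
# grows where patrols are sent, because more minor offenses get noticed there.
import random

random.seed(0)
true_crime_rate = [0.05, 0.05, 0.05]   # three neighborhoods with identical underlying rates
recorded_crime = [10, 12, 10]          # neighborhood B starts with slightly more records
patrols_per_day = 30

for day in range(200):
    total = sum(recorded_crime)
    # send patrols where the model "predicts" crime, i.e. where records already exist
    patrols = [round(patrols_per_day * c / total) for c in recorded_crime]
    for i, n_patrols in enumerate(patrols):
        # each patrol observes (and records) a minor offense with the same true probability
        recorded_crime[i] += sum(random.random() < true_crime_rate[i] for _ in range(n_patrols))

print(recorded_crime)  # the initial gap widens even though the underlying rates are equal
```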

A Starting Point for Improvement

As these two works show, we are far from the goal of human flourishing. Both point to many instances where ML models are not only neglecting the good of others but also inflicting violence. Before we can reach the ideal of Gloria Dei, vivens homo, we need to make a Liberationist move within our technology to dismantle the structural issues that prevent flourishing. This starts at the design phase of these ML models. At that point, we can ask key questions to address egregious issues from the start. This would be a first step toward making ML models (and later AI) work for human flourishing and God’s glory.

Here are a few questions that will start us on that journey:

  1. Is this data indicative of anything else (can it be used to prove another line of thought)? 
  2. If everything went perfectly (everyone took this recommendation, took this action), then what? Is this a desirable state? Are there any downsides to this? 
  3. How much proxy data am I using? (Proxy data is data that ‘stands in’ for other data that is harder to measure directly.)
  4. Is the data balanced (age, gender, socio-economic)? What does this data tell us about our customers? 
  5. What does this data say about our assumptions? This is a slightly different cut from the question above; it is aimed at the presuppositions of whoever is selecting the data set. 
  6. Last but not least: zip codes. As zip codes are often a proxy for race, use them with caution. Consider using state-level data or three-digit zip code prefixes to average out the results, and monitor outcomes by testing for bias (see the sketch after this list). 
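
Here is a minimal sketch of what the checks behind questions 4 and 6 might look like in practice. It assumes a pandas DataFrame with hypothetical column names (“age_group,” “gender,” “zip,” “approved”); adapt it to your own data.

```python
# A minimal sketch of the checks behind questions 4 and 6, assuming a pandas
# DataFrame with hypothetical columns "age_group", "gender", "zip", "approved".
import pandas as pd

def balance_report(df, columns):
    """Print the share of rows in each category so skews are visible before training."""
    for col in columns:
        print(f"\n--- {col} ---")
        print(df[col].value_counts(normalize=True).round(3))

def coarsen_zip(df, zip_col="zip"):
    """Replace full zip codes with their 3-digit prefix to blunt their use as a proxy for race."""
    df = df.copy()
    df[zip_col] = df[zip_col].astype(str).str.zfill(5).str[:3]
    return df

def outcome_gap(df, group_col, outcome_col="approved"):
    """Compare outcome rates across groups: a first, crude test for bias."""
    return df.groupby(group_col)[outcome_col].mean().round(3)

# Example usage with a toy frame:
df = pd.DataFrame({
    "age_group": ["18-29", "30-44", "30-44", "45+"],
    "gender": ["f", "m", "f", "m"],
    "zip": ["53202", "53211", "60614", "60614"],
    "approved": [1, 0, 1, 1],
})
balance_report(df, ["age_group", "gender"])
df = coarsen_zip(df)
print(outcome_gap(df, "gender"))
```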

Maggie Bender is a Senior Product Manager at Bain & Company within their software solutions division. She has an M.A. in Theology from Marquette University with a specialization in biblical studies, where her thesis explored the implications of historical narratives on group cohesion. She lives in Milwaukee, Wisconsin, and enjoys gardening, dog walking, and horseback riding.

Sources:

Johnson, Elizabeth A. Quest for the Living God: Mapping Frontiers in the Theology of God (New York: Continuum, 2008), 82-83.

O’Neil, Cathy. Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy (New York: Broadway Books, 2017), 85-87.

Orlowski, Jeff. The Social Dilemma (Netflix, 2020), 1 hr. 57 min., https://www.netflix.com/title/81254224.

How Does AI Compare with Human Intelligence? A Critical Look

In the previous post I argued that AI can be of tremendous help in our theological attempt to better understand what makes humans distinctive and in the image of God. But before jumping to theological conclusions, it is worth spending some time trying to understand what kind of intelligence machines are currently developing, and how much similarity there is between human and artificial intelligence.

The short answer is, not much. The current game in AI seems to be the following: try to replicate human capabilities as well as possible, regardless of how you do it. As long as an AI program produces the desired output, it does not matter how humanlike its methods are. The end result is much more important than what goes on ‘on the inside,’ even more so in an industry driven by enormous financial stakes.

Good Old Fashioned AI

This approach was already at work in the first wave of AI, also known as symbolic AI or GOFAI (good old-fashioned AI). Starting in the 1950s, the AI pioneers struggled to replicate our ability to do math and play chess, considered the epitome of human intelligence, without any real concern for how such results were achieved. They simply assumed that this must be how the human mind operates at the most fundamental level, through the logical manipulation of a finite number of symbols.

GOFAI ultimately managed to reach human level in chess. In 1997, an IBM program defeated the human world champion, Garry Kasparov, but it did so via brute force, by simply calculating millions of variations in advance. That is obviously not how humans play chess.

Although GOFAI worked well for ‘high’ cognitive tasks, it was completely incompetent in more ‘mundane’ tasks, such as vision or kinesthetic coordination. As roboticist Hans Moravec famously observed, it is paradoxically easier to replicate the higher functions of human cognition than to endow a machine with the perceptive and mobility skills of a one-year-old. What this means is that symbolic thinking is not how human intelligence really works.

The Advent of Machine Learning


What has replaced symbolic AI since roughly the turn of the millennium is the approach known as machine learning (ML). One subset of ML that has proved wildly successful is deep learning, which uses layers of artificial neural networks. Loosely inspired by the brain’s anatomy, these algorithms aim to be a better approximation of human cognition. Unlike previous AI versions, they are not instructed on how to think. Instead, the programs are fed huge sets of selected data in order to develop their own rules for how the data should be interpreted.

For example, instead of teaching an ML algorithm that a cat is a furry mammal with four paws, pointed ears, and so forth, the program is trained on hundreds of thousands of pictures of cats and non-cats, by being ‘rewarded’ or ‘punished’ every time it makes a guess about what’s in the picture. After extensive training, some neural pathways become strengthened, while others are weakened or discarded. The end result is that the algorithm does learn to recognize cats. The flip side, however, is that its human programmers no longer necessarily understand how the conclusions are reached. It is a sort of mathematical magic.
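
Here is a toy version of that training loop, with a heavy caveat: real systems train deep convolutional networks on raw pixels, whereas this sketch uses synthetic numeric features standing in for images, just to show the guess-and-adjust cycle end to end.

```python
# A toy version of the training loop described above. Real systems train deep
# convolutional networks on raw pixels; here, synthetic numeric "image features"
# stand in for pictures so the sketch stays self-contained.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# 2,000 fake "images", each reduced to 20 numeric features; cats and non-cats
# differ only in the statistical pattern of those features.
n, d = 2000, 20
X_cat = rng.normal(loc=0.5, scale=1.0, size=(n // 2, d))
X_not = rng.normal(loc=-0.5, scale=1.0, size=(n // 2, d))
X = np.vstack([X_cat, X_not])
y = np.array([1] * (n // 2) + [0] * (n // 2))   # 1 = cat, 0 = not a cat

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# A small neural network: layers of weighted connections that get nudged
# ("rewarded" or "punished") after every batch of guesses.
model = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
model.fit(X_train, y_train)

print("accuracy on unseen examples:", model.score(X_test, y_test))
```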

ML algorithms of this kind are behind the impressive successes of contemporary AI. They can recognize objects and faces, spot cancer better than human pathologists, translate text instantly from one language to another, produce coherent prose, or simply converse with us as smart assistants. Does this mean that AI is finally starting to think like us? Not really.


Even when machines manage to achieve human or super-human level in certain cognitive tasks, they do it in a very different fashion. Humans don’t need millions of examples to learn something; they sometimes do just fine with as little as one example. Humans can also usually provide explanations for their conclusions, whereas ML programs are often ‘black boxes’ that are too complex to interrogate.

More importantly, the notion of common sense is completely lacking in AI algorithms. Even when their average performance is better than that of human experts, the few mistakes that they do make reveal a very disturbing lack of understanding on their part. Images that are intentionally perturbed so slightly that the adjustment is imperceptible to humans can still cause algorithms to misclassify them completely. It has been shown, for example, that sticking minuscule white stickers, almost imperceptible to the human eye, on a Stop sign causes the AI algorithms used in self-driving vehicles to misclassify it as a Speed Limit 45 sign. When machines fail, they fail badly, and for different reasons than us.
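
The underlying mechanics can be shown on a deliberately simple model. The sketch below is a hypothetical toy, not the stop-sign attack (which targets deep vision networks): it computes the small, uniform per-feature nudge that flips a plain linear classifier’s decision.

```python
# A stripped-down illustration of the adversarial idea: nudge every input feature
# a tiny amount in the direction that most hurts the classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy data: two classes separated along a hidden direction.
X = rng.normal(size=(500, 50))
true_w = rng.normal(size=50)
y = (X @ true_w > 0).astype(int)

clf = LogisticRegression(max_iter=1000).fit(X, y)
w, b = clf.coef_[0], clf.intercept_[0]

x = X[0]
margin = w @ x + b          # how far this example sits from the decision boundary

# Smallest uniform per-feature nudge (in the worst direction for the model)
# that pushes the example across the boundary -- the gradient-sign idea.
epsilon = abs(margin) / np.abs(w).sum() * 1.01
x_adv = x - epsilon * np.sign(margin) * np.sign(w)

print("original prediction: ", clf.predict(x.reshape(1, -1))[0])
print("perturbed prediction:", clf.predict(x_adv.reshape(1, -1))[0])
print(f"each feature changed by only {epsilon:.3f}, a fraction of its natural variation")
```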

Machine Learning vs Human Intelligence

Perhaps the most important difference between artificial and human intelligence is the former’s complete lack of any form of consciousness. In the words of philosophers Thomas Nagel and David Chalmers, “it feels like something” to be a human or a bat, although it is very difficult to pinpoint exactly what that feeling is and how it arises. However, we can intuitively say that very likely it doesn’t feel like anything to be a computer program or a robot, or at least not yet. So far, AI has made significant progress in problem-solving, but it has made zero progress in developing any form of consciousness or ‘inside-out-ness.’

Current AI is therefore very different from human intelligence. Although we might notice a growing functional overlap between the two, they differ strikingly in terms of structure, methodology, and some might even say ontology. Artificial and human intelligence might be capable of similar things, but that does not make them similar phenomena. Machines have in many respects already reached human level, but in a very non-humanlike fashion.

For Christian anthropology, such observations are particularly important, because they can inform how we think of the developments in AI and how we understand our own distinctiveness as intelligent beings, created in the image of God. In the next post, we look into the future, imagining what kind of creature an intelligent robot might be, and how humanlike we might expect human-level AI to become.

AI Impact on Jobs: How can Workers Prepare?

In a previous blog, I explored the main findings from a recent MIT paper on AI’s impact on work. In this blog, I want to offer practical advice for workers worried about the future of their jobs. There is a lot of automation anxiety surrounding the topic, often amplified by sensationalist click-bait articles. Fortunately, the research from the MIT-IBM Watson paper offers sensible and detailed enough information to help workers take charge of their careers. Here are the main highlights.

From Jobs to Tasks

The first important lesson from the report is to think of your job as a group of tasks rather than a homogeneous unit. The average worker performs a wide range of tasks, from communicating issues and solving problems to selling ideas and evaluating others. If you have never thought of your job this way, here is a suggestion: track what you do in one work day. Pay attention to the different tasks you perform and write down the time it takes to complete them. Be specific; write descriptions that go beyond “checking emails.” When you read and write emails, you are trying to accomplish something. What is it?

Once you do that for a few days, you start getting a clearer picture of your job as a collection of tasks. The next step is to evaluate each task by asking the following questions:

  • Which tasks bring the most value to the organization you work for?
  • Which tasks are repetitive enough to be automated?
  • Which tasks can be delegated or passed on to others on your team?
  • Which tasks do you do best, and which ones do you struggle with the most?
  • Which tasks do you enjoy the most?

As you evaluate your job through these questions, you can better understand not just how good a fit it is for you as an individual but also how automation may transform your work in the coming years. As machine learning becomes more prevalent, the repetitive parts of your job are the most likely to disappear.

Tasks on the rise

The MIT-IBM Watson report analyzed job listings over a period of ten years and identified groups of tasks that were in higher demand than others. That is, as jobs change, certain tasks become more valuable, either because they cannot be replaced by machine learning or because there is a growing need for them.

According to the research, tasks in ascendance are:

  • Administrative
  • Design
  • Industry Knowledge
  • Personal care
  • Service

Note that the last two tend to be part of lower-wage jobs. Personal care is an interesting one (e.g., hair stylists, in-home nurses). Even with the growing trend in automation, we still cannot teach a robot to cut hair. That soft but precise touch of the human hand is very difficult to replicate, at least for now.

How much of your job consists of any of the tasks above?

Tasks at risk

On the flip side, some tasks are in decline. Some of this decline is particular to more mature economies like the US, while some of it is more general, driven by the widespread adoption of technologies. The tasks highlighted in the report are:

  • Media
  • Writing
  • Manufacturing
  • Production

The last two are no surprise, as the trend of offshoring or mechanizing these tasks has been underway for decades. The first two, however, are new. As technologies and platforms abound, these tasks become accessible to a wider pool of workers, which makes them less valuable in the workplace. Just think about what it took to broadcast a video in the past and what it takes to do it now. In the era of YouTube, garage productions abound, sometimes with almost as much quality as studio productions.

If your job consists mostly of these tasks, beware.

Occupational Shifts

While looking at tasks is important, whole occupations are also being impacted. As AI adoption increases, these occupations either disappear or get folded into other occupations. Of those, it is worth noting that production and clerical jobs are in decline. Just as an anecdote, I have noticed how my workplace relies less and less on administrative assistants. The main result is that everybody now does the scheduling that used to be the domain of administrative jobs.

Occupations in ascendance are those in IT, healthcare, and education/training. The last of these is interesting and indicative of a larger trend. As new applications emerge, there is a constant need for training and education. This benefits traditional educational institutions as well as entrepreneurial start-ups. Just consider the rise of micro-degrees and coding schools emerging in cities all over the country.

Learning as a Skill

In short, learning is imperative. What that means is that every worker, regardless of occupation or wage level, will be required to learn new tasks or skills. Long gone are the days when someone would learn all their professional knowledge in college and then use it for a lifetime career. Continual training is the order of the day for anyone hoping to stay competitive in the workplace.

I am not talking just about pursuing formal training paths through academic degrees or training courses. I am talking about learning as a skill and discipline in your day-to-day job. Whether from successes or mistakes, we must always look for learning opportunities. Sometimes the learning can come through research on an emerging topic. Other times it can happen through observing others do something well. There are many avenues for learning new skills or information for those willing to look for them.

Do you have a training plan for your career? Maybe it is time to consider one.

AI Impact on Work: Latest Research Spells Both Hope and Concern

In a recent blog I explored McKinsey’s report on AI’s impact on women in the workplace. As the hype around AI subsides, a clearer picture emerges. The “robots coming to replace humans” picture fades. Instead, the more realistic picture is one where AI automates distinct tasks, changing the nature of occupations rather than replacing them entirely. Failure to understand this important distinction will continue to fuel misinformation on this topic.

A Novel Approach

In this blog, I want to highlight another source that paints this more nuanced picture. The MIT-IBM Watson AI Lab released a paper last week entitled “The Future of Work: How New Technologies Are Transforming Tasks.” The paper is significant because of its innovative methodology. It is the first research to use natural language processing (NLP) to extract and analyze information on tasks from 170 million online job postings posted between 2010 and 2017 in the US market. In doing so, it is able to detect changes not only in job volume but also in job descriptions themselves. This allows a view of how aspects of the same job may change over time.
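
To give a flavor of what task extraction from postings can look like (this is not the paper’s actual pipeline, just a toy illustration with invented postings), here is a short sketch that counts frequent two-word phrases in two batches of listings:

```python
# Not the paper's actual pipeline -- just a toy illustration of the idea:
# pull frequent task-like phrases out of job-posting text and compare years.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer

postings_2010 = [
    "answer phones, schedule meetings, file paperwork, prepare reports",
    "manage calendars, schedule meetings, order office supplies",
]
postings_2017 = [
    "analyze customer data, build dashboards, schedule meetings",
    "maintain CRM records, analyze customer data, coordinate vendors",
]

def top_phrases(postings, n=5):
    """Count two-word phrases across a batch of postings and return the most common."""
    vec = CountVectorizer(ngram_range=(2, 2))
    counts = np.asarray(vec.fit_transform(postings).sum(axis=0)).ravel()
    phrases = vec.get_feature_names_out()
    return sorted(zip(phrases, counts), key=lambda p: -p[1])[:n]

print("2010:", top_phrases(postings_2010))
print("2017:", top_phrases(postings_2017))
```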

The research also sheds light on how these changes translate into dollars. By looking at compensation, the paper can analyze how job tasks are valued in the labor market and how this will impact workers for years to come. Hence, they can test whether changes are eroding or increasing income for workers.

That said, this approach also carries some limitations. Because the researchers look only at job postings, they have no visibility into jobs whose holders stayed put for the entire period analyzed. The study also relies on proposed job descriptions, which often do not materialize in reality. A job posting represents a manager’s idea of the job at that time, yet circumstances around the position can change significantly, making the actual job look very different. Even so, some data is better than none, and this research opens new avenues of understanding into a complex phenomenon.

Good News: Change is Gradual

For the period analyzed, the researchers conclude that the shift in jobs has been gradual. Machine learning is not reshaping jobs at breakneck speed, as some may have believed. Instead, it is slowly replacing tasks within occupations over time. On average, a worker was asked to perform 3.7 fewer tasks in 2017 than in 2010. Digging further, the researchers also found a correlation between suitability for machine learning and faster replacement. Tasks more suitable for machine learning show a larger average replacement, around 4.3 tasks, while those not suited for machine learning show an average of 2.9. In general, jobs are becoming leaner, and machine learning is making the process go faster.

This is good news but not necessarily reassuring. As more industries adopt AI strategies, the rate of task replacement should increase. There is little reason to believe that what we saw in 2010-2017 will simply repeat itself in the next ten years. What the data demonstrate is that the replacement of tasks has indeed started. What is not clear is how fast it will accelerate in the coming years. The issue is not the change itself but the speed at which it happens. Fast change can be destabilizing for workers, and that is something that requires monitoring.

Bad News: Job Inequality Increased

If the pace is gradual, its impact has been uneven. Mid-income jobs are the worst hit by task replacement. As machine learning automates tasks, top-tier middle-income jobs move toward the top income bracket, while jobs at the bottom of the middle-income bracket move toward low-income jobs. That is, occupations in the lower tier of the middle become more accessible to workers with less education or technical training. At the top, machine learning replaces the simpler tasks, and those jobs now require more specialized skills.

This movement is translating into changes in income. Middle-income jobs have seen an overall erosion in compensation, while both high- and low-income jobs have experienced an increase in compensation. This polarizing trend is concerning and worthy of further study and action.

For now, the impact of AI on the job market is further widening the gap in the monetary value of different tasks. The aggregate effect is that jobs built on the more valued tasks will see increases, while those built on less valued tasks will either become scarcer or pay less. Business and government leaders must heed these warnings, as they spell future trouble for businesses and political unrest for societies.

What about workers? How can these findings help them navigate the emerging changes in the workplace? That is the topic of my next blog.

AI for Scholarship: How Machine Learning can Transform the Humanities

In a previous blog, I explored how AI will speed up scientific research. In this blog, I will examine the overlooked potential that AI has to transform the Humanities. The connection may not be clear at first, since most of these fields do not include an element of science or math. They are more preoccupied with developing theories than with testing hypotheses through experimentation. Subjects like Literature, Philosophy, History, Languages, and Religious Studies (and Theology) rely heavily on the interpretation and qualitative analysis of texts. In such an environment, how could mathematical algorithms be of any use?

Before addressing that question, we must first look at the field of Digital Humanities, which created a bridge from ancient texts to modern computation. The field dates back to the 1930s, before the emergence of Artificial Intelligence. Ironically, and interestingly relevant to this blog, the first project in this area was a collaboration between an English professor, a Jesuit priest, and IBM to create a concordance for Thomas Aquinas’ writings. As digital technology advanced and texts became digitized, the field continued to grow in importance. Its primary purpose is both to apply digital methods to the Humanities and to reflect on their use. That is, its practitioners are not only interested in digitizing books but also in evaluating how the use of the digital medium affects human understanding of these texts.
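
A concordance is a simple enough artifact that a minimal sketch fits in a few lines. The function below is a toy written for any plain-text passage, in the spirit of that first Aquinas project rather than a reproduction of it; it prints each occurrence of a keyword with its surrounding context.

```python
# A minimal keyword-in-context concordance for any plain-text passage.
import re

def concordance(text, keyword, width=40):
    """Print every occurrence of `keyword` with `width` characters of context on each side."""
    for match in re.finditer(rf"\b{re.escape(keyword)}\b", text, flags=re.IGNORECASE):
        start, end = match.start(), match.end()
        left = text[max(0, start - width):start].rjust(width)
        right = text[end:end + width]
        print(f"{left} [{match.group()}] {right}")

sample = ("Grace does not destroy nature but perfects it. "
          "Nature is presupposed by grace as its foundation.")
concordance(sample, "grace")
```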

Building on the foundation of Digital Humanities, the connection with AI becomes clear. Once computers can ingest these texts, text mining and natural language processing become possible. With recent advances in machine learning algorithms, cheaper computing power, and the availability of open-source tools, the conditions are ripe for an AI revolution in the Humanities.

How can that happen? The use of machine learning in combination with natural language processing can open avenues of meaning that were not possible before. For centuries, these academic subjects have relied on the accumulated analysis of texts performed by humans. Yet human capacity to interpret, analyze, and absorb texts is finite. Humans do a great job of capturing meaning and nuance in texts of hundreds or even a few thousand pages. As the volume increases, however, machine learning can detect patterns that are not apparent to a human reader. This can be especially valuable in applications such as author attribution (determining who the writer was when that information is unclear or in question), the analysis of cultural trends, semantics, tone, and the relationships between disparate texts.
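
As one hypothetical example of author attribution, the sketch below represents passages by their word frequencies (TF-IDF) and lets a simple classifier guess who wrote an unseen line. The passages and “authors” are invented; a real study would use far more text and careful validation.

```python
# A hypothetical sketch of author attribution: represent texts by word frequencies
# (TF-IDF) and let a simple classifier guess who wrote an unseen passage.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Tiny made-up training set: labeled passages by two "authors".
passages = [
    "the soul ascends through contemplation toward the divine light",
    "contemplation of the divine draws the soul upward into light",
    "the market allocates scarce resources through the price mechanism",
    "prices signal scarcity and coordinate the allocation of resources",
]
authors = ["mystic", "mystic", "economist", "economist"]

model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(passages, authors)

disputed = ["the light of the divine is grasped only in silent contemplation"]
print(model.predict(disputed))   # a real study would use far more text and cross-validation
```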

Theology is a field particularly poised to benefit from this combination. For those unfamiliar with theological studies, it is a long and lonely road. Brave souls aiming to master the field must undergo more schooling than physicians: in most cases, aspiring scholars must complete a five-year doctoral program on top of two to four years of master’s-level studies. Part of the reason is that the field has accumulated an inordinate amount of primary sources and countless interpretations of those texts, written in multiple ancient and modern languages and spanning thousands of years. In short, when reams of texts become Big Data, machine learning can do wonders to synthesize, analyze, and correlate large bodies of texts.

To be clear, that does not mean machine learning will replace painstaking scholarly work. Quite the opposite: it has the potential to speed up and automate some tasks so scholars can focus on the high-level abstract thinking where humans still hold a vast advantage over machines. If anything, it should make their lives easier and possibly shorten the time it takes to master the field.

Along these lines of augmentation, I am thinking about a possible project. What if we could apply machine learning algorithms to a theologian’s body of work and compare the results to the scholarship that interprets it? Could we find new avenues of meaning that could complement or challenge prevailing scholarship on the topic?

I am curious to see what such experiment could uncover. 

The Machine Learning Paradigm: How AI Can Teach Us About God

It is no secret that AI is becoming a growing part of our lives and institutions. There is no shortage of articles touting the dangers (and, a few times, the benefits) of this development. What is less publicized is the very technology that enables the growing adoption of AI, namely Machine Learning (ML). While ML has been around for decades, its flourishing depended on advanced hardware capabilities that have only recently become available. While we tend to focus on sci-fi-like scenarios of AI, it is Machine Learning that is most likely to revolutionize how we do computing, by enabling computers to act more like partners than mere servants in the discovery of new knowledge. In this blog, I explain how Machine Learning is a new paradigm for computing and use it as a metaphor to suggest how it can change our view of the divine. Who says technology has nothing to teach religion? Let the skeptics read on.

What is Machine Learning?

Before explaining ML, it is important to understand how computer programming works. At its most basic level, a program (or code) is a set of instructions that tells the computer what to do given certain conditions or inputs from a user. For example, in the WordPress code for this website, there is an instruction to publish this blog to the World Wide Web once I click the “Publish” button on my dashboard. All the complexity of putting this text onto a platform that can be seen by people all over the world is reduced to lines of code that tell the computer and the server how to do it. The user, in this case me, knows nothing of that except that when I click “Publish,” I expect my text to show up at a web address. That is the magic of computer programs.

Continuing with this example, it is important to realize that this program was once written by a human programmer. He or she had to think about the user and their goals, and about the complexity of making it all happen using a computer language. The hardware, in this scenario, was simply a blind servant that followed the instructions given to it. While we may think of computers as smart machines, they are only as smart as they are programmed to be. Remove the instructions contained in the code and the computer is just a box of circuits.

Let’s contrast that with the technique of Machine Learning. Suppose you want to write a program for your computer to play, and consistently win, an Atari game of Pong (I know, not the best example, but when you are preparing a camp for middle schoolers, that is the only example that comes to mind). The programming approach would be to play the game yourself many times to learn strategies for winning. You would then write those strategies down, codify them in a language the computer can understand, and spend countless hours writing out the multiple scenarios and what the computer is supposed to do in each one of them. Just writing about it seems exhausting.
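
In code, that programming approach looks like a pile of hand-written rules. Here is a minimal, hypothetical fragment (a made-up paddle controller, not an actual Atari program):

```python
# The "programming" approach, sketched: a human spells out the strategy as explicit rules.
def handcoded_paddle_move(ball_y, paddle_y):
    """Follow the ball: a rule the programmer figured out and wrote down."""
    if ball_y > paddle_y:
        return +1   # move paddle down
    if ball_y < paddle_y:
        return -1   # move paddle up
    return 0        # stay put
```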

Now compare that with an alternative approach, in which the computer actually plays the game and tries to maximize its score based on past playing experience. After some initial coding, the rest of the work falls to the computer, which plays the game millions of times until it reaches a level of competency where it wins consistently. In this case, the human outsources the game playing to the computer and only monitors the machine’s progress. Voila, there is the magic of Machine Learning.
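
Here is a toy version of that learning loop. The real Atari results used deep reinforcement learning; this sketch is plain tabular Q-learning on a made-up miniature game (a ball drops down one of five columns and a paddle tries to be underneath it when it lands), but the pattern is the same: play, get rewarded or penalized, adjust, repeat.

```python
# A toy version of the "let the machine play and learn" approach:
# tabular Q-learning on a tiny catch-the-ball game.
import random

random.seed(1)
WIDTH, HEIGHT = 5, 4
actions = [-1, 0, +1]                     # move paddle left, stay, move right
Q = {}                                     # Q[(ball_col, ball_row, paddle)][action] -> value
alpha, gamma, epsilon = 0.5, 0.9, 0.1

def get_q(state):
    return Q.setdefault(state, {a: 0.0 for a in actions})

for episode in range(5000):
    ball_col, paddle = random.randrange(WIDTH), random.randrange(WIDTH)
    for ball_row in range(HEIGHT):
        state = (ball_col, ball_row, paddle)
        # explore occasionally, otherwise pick the best-known action
        if random.random() < epsilon:
            action = random.choice(actions)
        else:
            action = max(get_q(state), key=get_q(state).get)
        paddle = min(WIDTH - 1, max(0, paddle + action))
        if ball_row == HEIGHT - 1:        # ball reaches the bottom: reward or penalize
            reward, future = (1.0 if paddle == ball_col else -1.0), 0.0
        else:
            reward, future = 0.0, max(get_q((ball_col, ball_row + 1, paddle)).values())
        # the learning step: nudge the value of (state, action) toward what actually happened
        get_q(state)[action] += alpha * (reward + gamma * future - get_q(state)[action])

# After training, play greedily and measure how often the ball is caught.
caught = 0
for trial in range(1000):
    ball_col, paddle = random.randrange(WIDTH), random.randrange(WIDTH)
    for ball_row in range(HEIGHT):
        state = (ball_col, ball_row, paddle)
        action = max(get_q(state), key=get_q(state).get)
        paddle = min(WIDTH - 1, max(0, paddle + action))
    caught += (paddle == ball_col)
print("catch rate after training:", caught / 1000)
```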

A New Paradigm for Computing

As the example above illustrates, Machine Learning changes the way we do computing. In the programming paradigm, the computer follows detailed instructions from the programmer. In the ML paradigm, the learning and discovery are done by the algorithm itself. The programmer (or data scientist) is there primarily to set the parameters for how the learning will occur, rather than to give instructions for what the computer is to do. In the first paradigm, the computer is a blind servant following orders. In the second, the computer is a partner in the process.

There are great advantages to this paradigm. Probably the most impactful one is that the computer can now learn patterns that would be impossible for the human mind to discern. This opens up space for discoveries that were previously inaccessible when the learning was restricted to the human programmer.

The downside is also obvious. Since the learning is done by the algorithm, it is not always possible to understand why the computer arrived at a certain conclusion. For example, last week I watched the Netflix documentary on the recent triumph of a computer against a human player in the game of Go. It is fascinating and worth watching in its own right. Yet I found it striking that the creators of AlphaGo could not always tell why the computer was making a certain move. At times, the computer seemed delusional to human eyes. There lies the danger: as we transfer the learning process to the machine, we may be at the mercy of the algorithm.

A New Paradigm for Religion

How does this relate to religion? Interestingly enough, these contrasting paradigms in computing shed light on a religious way of describing the relationship between humans and God. As the foremost AI pastor, Christopher Benek, once said: “We are God’s AI.” Following this logic, we can see how a shift from a paradigm of blind obedience to one of partnership can have revolutionary implications for understanding our relationship with the divine. For centuries, the tendency was to see God as the absolute Monarch demanding unquestioning loyalty and unswerving obedience from humans. This paradigm, unfortunately, has also been at the root of many abusive practices by religious leaders. It is especially dangerous when the line between God and the human leader is blurry. In that case, unswerving obedience to God can easily be mistaken for blind obedience to a religious leader.

What if, instead, our relationship with God could be described as a partnership? Note that this does not imply an equal partnership. It does, however, suggest an interaction between two intelligent beings with separate wills. What would it be like for humanity to take on responsibility for its part in this partnership? What if God is waiting for humanity to do so? The consequences of this shift could be transformative.

Why Is Artificial Intelligence All Over The News Lately?

AI hype has come and gone in the past. Why is it back in the spotlight now? I will answer this question by describing the three main trends that are driving the AI revolution.

Artificial Intelligence has been around since the 1950s. Yet after much promise and fanfare, AI entered a winter period in the 1980s, when investment, attention, and enthusiasm greatly diminished. Why has this technology re-emerged in the last few years? What has changed? In this blog, I will answer these questions by describing the three main trends driving the AI revolution: breakthroughs in computing power, the emergence of big data, and advances in machine learning algorithms. These three trends converged to catapult AI into the spotlight.

Computing Power Multiplies

When neural networks (one of the earliest algorithms to be considered Artificial Intelligence) were theorized and developed, the computers of the time did not have the processing power to run them effectively. The science was far ahead of the technology, delaying their testing and improvement to later years.

Thanks to Moore’s law, we are now at a point where computing power is affordable, available, and effective enough for some of these algorithms to be tested. My first computer in the early ’90s had 128K of RAM. Today, we have thumb drives with more than 100,000 times that capacity! Even so, there is still a way to go, as these algorithms can be resource-hungry on existing hardware. Yet, as system architects leverage distributed computing and chip manufacturers experiment with quantum computing, AI will become even more viable. The main point is that some of these algorithms can now be tested, even if that takes hours or days, when before it was inconceivable.

Data Gets Big

With smartphones, tablets, and digital sensors becoming common in our lives, the amount of data available has grown exponentially. Just think about how much data you generate in one day any time you use your phone or computer or enter a retail store. And these are just a few examples of data being collected on an individual. For another example, consider the amount of data collected and stored by large corporations on customers’ transactions every day.

Why is this relevant? AI is only as good as the data fed into it for learning. A great example is the data available to Google from searches and catalogued websites. That is why Google can use Artificial Intelligence to translate texts. It does so by comparing translations of large bodies of text. This way, it can transcend word-by-word translation rules to understand colloquialisms and the probable meaning of words based on context. It is not as good as human translation, but it is fast becoming comparable to it.
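
As a toy illustration of the “compare large bodies of parallel text” idea (real systems are vastly more sophisticated), even simple co-occurrence counts over a handful of made-up aligned sentence pairs start to surface word translations.

```python
# A toy illustration of learning translations from aligned sentence pairs:
# guess word translations purely from how often words co-occur.
from collections import Counter, defaultdict

parallel = [
    ("the house is red", "la casa es roja"),
    ("my house is big", "mi casa es grande"),
    ("a big house", "una casa grande"),
    ("the car is red", "el coche es rojo"),
    ("my car is big", "mi coche es grande"),
    ("a red car", "un coche rojo"),
]

cooccur = defaultdict(Counter)
for english, spanish in parallel:
    for e_word in english.split():
        cooccur[e_word].update(spanish.split())

for e_word in ["house", "car", "big"]:
    guess, count = cooccur[e_word].most_common(1)[0]
    print(f"{e_word} -> {guess} (co-occurred {count} times)")
```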

There is more. Big data is about to get bigger because of the Internet of Things (IoT). This new technology expands data capture beyond phones and tablets to all types of appliances. Think about a fridge that tells you when the milk is about to expire. As sensors and processors spread to all electronics, the amount of data available for AI applications will grow exponentially.

Machine Learning Comes of Age            

The third trend comes from recent breakthroughs proving the effectiveness of machine learning algorithms. This is the very foundation of AI technology, because it enables computers to detect patterns in data without being programmed to do so. Even as computing power improved and data became abundant, the technology was mostly untested on real-life examples, breeding skepticism among scientists and investors. In 2012, a computer was able to identify cats accurately from watching YouTube videos using deep learning algorithms. The experiment was hailed as a major breakthrough for computer vision. Success stories like this brought machine learning into the spotlight. These algorithms started getting attention not just from the academic community but also from investors and CEOs. Investment in Artificial Intelligence has significantly increased since then and is now projected to reach $47 billion by 2020. With both an abundance of data and enough computing power to process it, machine learning could finally be used effectively. These trends paved the way for Artificial Intelligence to become a viable possibility again.

Pulling It All Together

These trends have turned Artificial Intelligence from a fixture of science fiction into a present reality we must contend with. This has not happened overnight but emerged through a convergence of technological advances that created a ripe environment for AI to flourish. As they came together, the media, politicians, and industry titans started to notice. That is why your Twitter feed is exploding with AI-related articles.

Because the trends driving the emergence of AI show no sign of slowing down, this is probably only the beginning of an AI springtime. While there are events that could derail this virtuous cycle, the forecast is for continuous advancement in the years and possibly decades to come. So, for now, the attention and enthusiasm are bound to stay steady for the foreseeable future.

As AI applications are being tested by large companies and start-ups alike, this is the time to start asking the right questions about how it will impact our future. The good news is that there is still time to steer the advance of AI towards human flourishing. Hence, let the conversation around it continue. I hope the attention engendered by the media will keep us engaged, active and curious on this topic.