Artificial Intelligence

 

Definition and History

Simply put, artificial intelligence refers to the ability to program machines so they mimic human-like characteristics such as reasoning, moving, grasping, problem solving, decision making and planning. This branch of computer science has a history of over 60 years, dating back to the 1950s, when the term and a number of the applications we use today were introduced.

The beginning of AI is often associated with the 1956 Dartmouth conference, where the term was coined by John McCarthy. McCarthy had invited a number of researchers to discuss an interdisciplinary approach to studying and simulating human intelligence. Three years later, an AI lab was established at MIT to further study the possibility of creating artificial intelligence. From there the field gathered steam, quickly gaining prominence as the US government awarded $2.2 million to develop machine-aided cognition. As the US was fighting a Cold War and eager to develop new weapons, this promising technology received lavish funding. The country that owned and mastered such a technological advance could gain a significant military advantage.

Initially, AI researchers sought to develop a general problem solver. If intelligence was a matter of logic, then the challenge was a matter of codifying logic for machines to execute. Yet, what became clear by the early 1970s was that emulating human capabilities such as common sense and intuition was too difficult given the computing power available at the time. The human brain had too many connections to be reproduced with the existing technology. This initiated what came to be known as an AI winter, a period in which investment and interest in the topic waned. Researchers then shifted from trying to develop general intelligence to applying principles of machine learning to specific problems. That is, one could develop an algorithm to recognize handwriting as opposed to looking for a general breakthrough that would apply to all cases of human vision. Thus, AI became divided into specialized tasks. This shift allowed researchers to turn the broad promise of AI into narrow problems, yielding more concrete results for government and industry applications.

To better understand this shift, let's look at the human body and its different abilities. What if we divided our five senses into different tasks? Take the ability to see as an example. Seeing is something most of us do with ease. What we often don't realize is that the images we see are formed in our brains based on input coming from our surroundings and information already residing in our brain. Without that, the input we receive in our eyes would be meaningless. Even within the visual sense, what if we broke it down into more specialized abilities such as face recognition? Our brains have an acute ability to recognize faces. Oftentimes I will spot somebody's face and recognize that it is someone familiar even if I can't remember their name or where I know the person from. Our face recognition ability is faster than our ability to recall the context associated with that face. These are two different abilities: one to recognize a familiar face and another to connect it with other known information. This simple illustration demonstrates how what we consider intelligence is in essence a large group of abilities that can be treated in isolation.

This breaking down of intelligence into tasks marks the difference between general AI and special AI. General AI has to do with machines being able not only to act like humans but to be aware of what they are doing. That is different from special AI, in which computer scientists break down different human abilities and work on creating AI to mimic each one. The first is the familiar picture of fictional AI, where machines become so indistinguishable from humanity that they actually turn into a threat. The second is what you see more of in reality: computer applications, automated processes and robots. By most estimates, we are still a few decades away from the first, with some even questioning its feasibility. The second is an emerging reality that could revolutionize our lives in the coming years. From self-driving cars to digital assistants, predictive and risk models, decision engines and recommenders (see specific applications here), this narrower type of AI is fast becoming a source of competitive advantage for businesses and may soon become a deciding factor in politics as well.

The technical view focuses on the present technologies that allow for these specific AI applications. See the chart below for a big-picture view of what they are.

The chart shows a rich, though not exhaustive, list of the different special AI applications in use in our world today. Every time you speak to Siri or Alexa, the device uses speech recognition to understand your request. It then applies data analytics and reasoning to come up with an answer. Finally, it communicates that answer to you in natural language. As you can see, in one simple interaction we have at least four different AI applications, all of which had to be developed and perfected separately so you can get an experience that mimics a personal assistant.
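To make that division of labor concrete, here is a minimal sketch of the assistant pipeline in Python. The function names are hypothetical placeholders standing in for separately built capabilities, not any vendor's actual API.

```python
# Illustrative sketch only: each function is a hypothetical placeholder for a
# separately built AI capability, not any vendor's real API.

def transcribe(audio_clip: bytes) -> str:
    """Speech recognition: turn the spoken request into text."""
    ...

def interpret(utterance: str) -> dict:
    """Language understanding: extract the user's intent and its parameters."""
    ...

def answer(intent: dict) -> str:
    """Analytics and reasoning: look up or compute a response."""
    ...

def speak(text: str) -> bytes:
    """Speech synthesis: render the answer as natural-sounding voice."""
    ...

def handle_request(audio_clip: bytes) -> bytes:
    # Four distinct AI capabilities chained into one "assistant" experience.
    return speak(answer(interpret(transcribe(audio_clip))))
```

Each stub here would be a large system in its own right; the point is simply how many separate capabilities one casual voice query touches.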


Philosophical View

The previous section described how AI came into being and its limitations. While the technology of general AI is still elusive, the idea that we could create intelligent beings continues to be a source of fear, inspiration and intense debate. In this section I want to explore AI from a philosophical perspective, allowing us to go deeper into what it means to co-exist with machines that are able to mimic human intelligence. Here is a good place to introduce the concept of consciousness. While intelligence can be defined around the external expressions of our brains, namely reasoning, speaking, listening and interpreting, consciousness focuses on the inward processes. Consciousness has to do with awareness of self and others. It entails the ability to feel, respond to and absorb the world around us and to project oneself into it. Furthermore, it refers to our human ability to construct meaning through narrative. The world is not just a collection of things but the scenery of a compelling story. This is precisely the part that general AI aims to achieve, while special AI can exist without it. Machines can display the external actions of communicating an answer without being aware of what it means. When Watson answers a question such as "Who started WWII?", it may find the right answer without having any feelings or awareness of the horrors the war entailed.

Whether artificial consciousness is possible or not, the fact that computers can now perform some mental tasks faster and more accurately than we can should cause us to pause and reflect. In other words, even if machines never acquire true consciousness, the mere fact that we can interact with them as if they were conscious beings is already significant. A few times, I have caught myself saying "thank you" to Google Home after she (it is a female voice after all) answered my question. Then I had to remind myself that Google was not expecting appreciation.

Before going further in this reflection, it is important to ask: what is intelligence? Historically, intelligence was regarded as the ability to reason or to think logically. More broadly, intelligence refers to the human ability to absorb knowledge and apply it in different situations. It entails an ability to learn, assimilate new information, adjust assumptions and come to conclusions. It also speaks of the creative ability to solve problems and formulate new ideas. Yet even that would not be enough. I would also add the skill demonstrated by athletes in accomplishing the perfect shot, a beautiful goal, the highest jump or the perfect dive.

How would artificial intelligence differ from human intelligence? It is artificial in that it mimics the outcomes of intelligence rather than trying to re-create the human brain. Computers process information through silicon chips, while the human brain processes it through tens of billions of neurons with trillions of connections. A helpful analogy here is the difference between birds and airplanes. Airplanes do not fly the way birds do: the design of their wings and their speed allow them to stay suspended and glide through the air. Birds, by contrast, flap their wings to take off in flight. Both accomplish the same goal through different means. One could say that planes mimic the natural phenomenon of bird flight, even if by a different mechanism.

These questions are important because no technology strikes a chord with our humanity like AI does. Understanding the technical part is only half of the story. The other half is how AI diminishes or enhances our humanity. If we lose sight of this aspect, we miss an important part of the AI challenge.


Fictional Perspectives

The last decades have witnessed no shortage of fictional depictions of AI. Wikipedia dedicates a whole page just to AI movies. The idea of intelligent machines fuels our imagination. The very idea that one of our creations could have a will of its own is both a fearful and an exhilarating speculation. A philosophical view helps open up this paradigm, but to fully explore it requires the imagination of science fiction. Only there can we explore the many possibilities afforded by AI technologies.

While some may trace this fictional genre to the books of Isaac Asimov, the idea goes back in literature to the early nineteenth century with Frankenstein. At the risk of being reductionist, sci-fi about AI tends to follow either a narrative of fear or a narrative of hope. The first is well known from movies like 2001: A Space Odyssey, Blade Runner and, more recently, Ex Machina. Deeply embedded in these narratives is the fear that developing artificial intelligence represents crossing a boundary that was meant to be left untouched. It is not necessarily a fear of robots or machines but really a fear of ourselves. Have we tinkered with nature to the point of threatening the continuation of our species? We fear that by replicating ourselves in inanimate bodies, we will uncover the worst of ourselves in them rather than our best.

With their own particularities, these movies tell stories of technological experiments gone wrong. They show the initial promise of technological advancement only to lead to a tragic ending where the AI turns against its creators. If technology is a mirror of ourselves, these narratives express the fear that once inanimate objects acquire consciousness they will rebel against their makers. This echoes the age-old story of Genesis, where humans turn against their creator. Yet, unlike Genesis, in this case the creation overtakes and destroys the creator. Thus, if androids are to become human, they will emulate the worst of us.

Less frequent are the sci-fi stories that follow a narrative of hope. In this category I would put the recent movie Her and Steven Spielberg's A.I. Artificial Intelligence. These stories follow a similar initial trajectory, showing the wonders of technological advancement. They enter a place of conflict as machines acquire consciousness and start asking existential questions. In the end, these inanimate objects emulate the best of human traits, such as love and sacrifice. Instead of destroying and usurping, they diminish themselves or even die for the sake of humanity. The creation chooses to honor the creator in ultimate sacrifice. This optimistic view wagers that consciousness will lead machines to emulate the best in us, not the worst.

As we ponder the advances of AI in the coming years, it is important to balance these two narratives. They are both valid and plausible. We should hope that the development of AI will reflect the best of our humanity without ever losing sight of our capacity for evil. It is in this fearful hope that we can start looking to the horizon and making choices today that will impact us for years to come. As stated in the technical view, general AI as depicted in these movies is still far off. Yet, narrower AI capabilities are fast becoming a reality. To understand what is behind these technological advances, we must first attend to the field of machine learning.


How do machines learn?

At the core of artificial intelligence is the field of machine learning. This is what moves computers from simply organizing information to actually uncovering insights, or even creating new knowledge. It is also the foundation for mimicking human abilities such as processing language, recognizing objects, performing analytics and making decisions. Traditional computer science would approach these challenges by trying to program them directly. While this works for some applications, mimicking the ability to see or listen would be a tedious task of figuring out every instance and exception. Think about writing rules to recognize a face in an image. The advantage of machine learning is that it enables computers to learn from examples instead of being programmed with every possibility. In other words, computers are "taught" by being fed massive amounts of data.

So how do machines learn? Machine learning denotes algorithms that allow computers to find patterns in data without being explicitly programmed to do so. It divides into two main groups: supervised learning and unsupervised learning. In the first group, algorithms train a computer using past data with known outcomes. An example would be deciphering text from handwriting samples. In this case, the computer is fed thousands of examples of handwritten letters. As it processes this data, the algorithm tries to decipher which letter each one is. Then it compares its result to the actual letter. Over time, it adjusts itself based on its trials and errors so that eventually it can recognize letters in a new handwriting sample. In essence, the computer "learns" to read handwriting by going through thousands of examples and verifying the accuracy of each guess.
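As a hedged illustration of supervised learning, here is a minimal sketch using scikit-learn's small bundled dataset of handwritten digits, standing in for the handwriting example above rather than depicting any particular production system.

```python
# A minimal sketch of supervised learning with scikit-learn's bundled
# 8x8 handwritten-digit images (a stand-in for a full handwriting corpus).
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

digits = load_digits()  # images flattened to 64 pixel features, labels 0-9
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

# Training is the trial-and-error adjustment described above: the model
# compares its guesses against the known labels and corrects itself.
model = LogisticRegression(max_iter=5000)
model.fit(X_train, y_train)

# The real test: how well does it read digits it has never seen before?
print("accuracy on unseen samples:", model.score(X_test, y_test))
```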

Unsupervised learning is different in that there are no known outcomes. Instead of trying to get the computer to "guess" the right answer, you allow it to discover patterns on its own. As an example, let's say you want to classify a thousand types of flowers into groups. You could first try to program rules based on size, color, number of petals and so on. Yet, with such a large group, this would quickly become a tedious process of finding the right criteria. Instead, you can run the 1,000 flowers with their respective properties through an algorithm that finds the most significant commonalities among them and thereby creates homogeneous groups.
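Here is a similarly minimal sketch of unsupervised learning with k-means clustering. The iris flower dataset (150 flowers, four measurements each) stands in for the hypothetical 1,000 flowers, and the choice of three groups is an assumption made purely for illustration.

```python
# A minimal sketch of unsupervised learning: grouping flowers by their measured
# properties without telling the algorithm what the groups should be. The iris
# dataset (150 flowers, 4 measurements each) stands in for the 1,000 flowers.
from sklearn.cluster import KMeans
from sklearn.datasets import load_iris

flowers = load_iris()

# Ask for three groups; k-means gathers the most similar flowers together.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0)
groups = kmeans.fit_predict(flowers.data)

print(groups[:10])  # the cluster label assigned to the first ten flowers
```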

In many ways, machines learn like humans do. Consider how unsupervised learning happens for us. Many of us played pick-up games as children. Say it is our birthday, we get to pick a team, and we look at our ten friends in front of us wanting to pick the best additions. Without realizing it, we are testing different attributes of each player. We consider height, weight, reputation and, of course, relational factors (whom we like and whom we would rather leave for another team). We start the process without a fixed final outcome in mind. Instead, we arrive at the best team we can assemble given the choices available to us. Computers do the same, except on a much larger scale, considering many more attributes and weighing the importance of each one.

The same goes for supervised learning. As children we learn words by first hearing them in a certain context. Eventually we hear a word enough that we start using it. At that point, we may receive feedback, directly or indirectly, on whether our use of that word is accurate. As the process continues, we eventually reach a point at which we master enough vocabulary to carry on conversations with others effectively. We can understand them as well as make ourselves understood by them.

Thus, by emulating its human counterpart, machine learning allows computers to absorb new information in a meaningful way. In fact, because computers can process some types of data faster than we can, they can uncover patterns that we could not on our own. This gave rise to the burgeoning field of predictive analytics. Assuming that past patterns will continue into the future, we can now build models that anticipate future behavior. This started with simple questions like: what will revenue be next year? Then it moved to more specific ones like: who is most likely to buy a product? As data increased exponentially, we can now model questions like what the next product a customer will need is and what time of day is best to reach them.
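As a sketch of the predictive analytics idea, the toy example below scores customers on their likelihood to buy. The feature columns and the numbers in it are invented purely for illustration, not drawn from any real business data.

```python
# A toy sketch of predictive analytics: scoring customers on their likelihood
# to buy. The feature columns and numbers are invented for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Past behavior: [store visits last month, dollars spent last year] and whether
# the customer ended up buying the product (1) or not (0).
history = np.array([[1, 20], [2, 35], [8, 310], [5, 150], [9, 400], [0, 10]])
bought = np.array([0, 0, 1, 1, 1, 0])

model = LogisticRegression().fit(history, bought)

# Assuming past patterns hold, estimate purchase probabilities for new customers.
new_customers = np.array([[7, 280], [1, 15]])
print(model.predict_proba(new_customers)[:, 1])
```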


AI Applications

As you can imagine, the potential for this technology is immense. While some of these algorithms, such as neural networks, have existed for decades, it is only recently that they have become viable at scale. The combination of big data, increased computing power, cloud storage and open-source software transformed machine learning from a clunky, time-consuming and expensive process into a reality available not only to large corporations but to individuals like you and me. The enormity of this shift is only now starting to be realized as businesses, governments and academia invest heavily in budding AI capabilities. In this section I want to showcase a few notable examples from different industries and sectors of society.

Technology Firms

When IBM's Deep Blue beat chess world champion Garry Kasparov in 1997, it marked a milestone in the history of artificial intelligence. For the first time, a machine beat a reigning world champion in a match. This is significant because, while you can feed a vast number of chess positions into a computer, it still has to deal with the uncertainty of the human player's next move. Chess is a highly strategic game requiring intuition and planning that are difficult to duplicate. So, when a computer was able to beat the best of us, it signaled the beginning of a new era between humans and machines. It demonstrated that, at least for one specific application, the machine, though unaware of its accomplishment, could outdo the best human in that specific game.

IBM was a forerunner in the commercial AI arena when it unveiled Watson in 2010. What is Watson? Basically, it is a system that can understand questions and respond in natural language, also known as a QAM (Question and Answer Machine). It made headlines in 2011 when it beat previous Jeopardy winners on national TV. The event, besides being an important milestone, also became a great marketing tool for IBM. Soon the company was selling Watson's capabilities to a number of different industries.

More recently, technology giants Google, Amazon and Microsoft have launched their own versions of artificial intelligence in virtual assistants. These systems can carry on simple conversations with humans, answering questions and connecting users to services. The small devices rely on the cloud to do the heavy lifting of data processing: deciphering what is being said, searching for an answer and replying by voice.

These gadgets point to a future of user interfaces in which consumers migrate from keyboards and screens to voice conversations with computers. Alexa, Amazon's version, has already sold over 18 million units in the last three years, a remarkable success for a technology still in its infancy.


Financial Services

The financial services industry is slowly being transformed by AI technologies. Areas like fraud detection, credit scoring, check image capture and loan underwriting are just a few examples of how these algorithms are changing the way these companies do business. Late last year, Bank of America unveiled Erica, a virtual assistant able to answer questions and provide advice on financial decisions.

Another area impacted by AI is loan underwriting. Underwriting is a complex task in which bankers evaluate the risk of giving a loan to an individual or a business. It encompasses a number of factors, including an individual's financial history, current income, the type of loan, the amount and others. A number of banks are now experimenting with machine learning models to make decisions on loans. This is far from a fool-proof process. In one notable instance, a bank denied a mortgage to someone who was between jobs despite a good financial history. That person happened to be Henry Paulson, as he was leaving his post as US Treasury Secretary. The algorithm judged the fact that he was between jobs too risky and denied him the loan. You can imagine the embarrassment the bank had to endure. Thankfully, a human came to the rescue and eventually overrode the model's decision.

Visa was a pioneer in using neural networks to detect fraud at the point of sale (the cash register). Before that, there was a significant time lag between the moment of purchase and the identification of fraud. Visa developed a model that could predict fraudulent transactions based on a myriad of factors such as transaction type, amount and location, assigning a probability of fraud at the point of sale. While the prediction was not always correct (in fact it was over-cautious, sometimes flagging legitimate transactions as fraud), it greatly shortened the time needed to detect fraud, allowing vendors to deny suspicious transactions on the spot.
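To give a flavor of how such a score might be produced, here is an illustrative sketch with a small neural network classifier. It is not Visa's actual model, and the transaction features and training data are invented for the example.

```python
# An illustrative sketch only, not Visa's actual model: a small neural network
# scores a transaction's fraud probability so suspicious ones can be declined
# on the spot. The features and training data are invented for illustration.
import numpy as np
from sklearn.neural_network import MLPClassifier

# Past transactions: [amount, hour of day, distance from home in km]; 1 = fraud.
X = np.array([[25, 14, 2], [900, 3, 800], [40, 19, 5],
              [1200, 2, 1500], [15, 9, 1], [700, 4, 600]])
y = np.array([0, 1, 0, 1, 0, 1])

net = MLPClassifier(hidden_layer_sizes=(8,), solver="lbfgs",
                    max_iter=2000, random_state=0).fit(X, y)

def decide(transaction, threshold=0.5):
    # Score at the point of sale and decline if the fraud probability is high.
    p_fraud = net.predict_proba([transaction])[0, 1]
    return ("decline" if p_fraud > threshold else "approve"), round(p_fraud, 2)

print(decide([850, 3, 700]))
```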


Healthcare

The healthcare industry was one of the first to experiment with Watson's capabilities in a real-world application. Watson provides diagnosis suggestions to doctors based on information given by patients. It takes the patient's input and searches through troves of medical literature to look for diagnosis options. It then returns the top matches it finds as input for the doctor, who can choose to take Watson's advice or rely on his or her own judgment.

Using Watson to help with diagnosis is only the beginning. Apixio, an analytics start-up, is aiming to extract meaning from millions of patient records. Even with all the efforts to digitize health records, a large number still exist only as paper copies with handwritten notes. Apixio plans to use machine learning to digitize this information in a way that it can be mined and explored. By enriching medical record data, the quality of diagnosis should greatly improve.

In a recent test, computers outperformed doctors in correctly diagnosing lung cancer type and severity. Using a machine learning approach, the computer model assessed thousands of images of the kind normally analyzed by pathologists. In the past, humans always had an advantage in interpreting abstract inputs such as images. It wasn't until the advent of computer vision, enabled by machine learning, that computers could detect patterns in images without being programmed to do so. It seems that advantage is no longer a given.

Education

The education sector has been slow to adopt AI technologies, yet even there we see signs of this emerging shift. Georgia State, here in Atlanta, is now using predictive models to score students on their probability of graduating. While many enter college, GSU still sees a large number of students quitting without completing their degrees. The idea is that, if students at risk of dropping out are spotted early, counselors can take action to get them the help they need. This is a long way from virtual assistants, but it relies on the same machine learning techniques that enable its more advanced counterparts.

McGraw-Hill, originally a publisher, recently purchased the start-up ALEKS, a web-based intelligent assessment system for learners. It breaks a knowledge domain down into a network of concepts so that it can evaluate how much someone has learned across them. Once the student masters one concept, they are directed to the next. This allows students to learn at different speeds, which is very difficult to do in a traditional classroom setting. Such a capability allows for personalized learning, tailored around the student's needs rather than the teacher's curriculum.
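The underlying idea can be sketched with a simple prerequisite graph; this is a deliberately simplified illustration, not ALEKS's actual knowledge-space model.

```python
# A simplified sketch of the idea, not ALEKS's actual knowledge-space model:
# a knowledge domain as a network of concepts with prerequisites, where a
# student is only offered a concept once everything it builds on is mastered.
prerequisites = {
    "counting": [],
    "addition": ["counting"],
    "subtraction": ["counting"],
    "multiplication": ["addition"],
    "division": ["multiplication", "subtraction"],
}

def ready_to_learn(mastered):
    """Concepts not yet mastered whose prerequisites are all satisfied."""
    return [concept for concept, needs in prerequisites.items()
            if concept not in mastered and all(n in mastered for n in needs)]

# Two students progressing at different speeds get different next steps.
print(ready_to_learn({"counting"}))                             # ['addition', 'subtraction']
print(ready_to_learn({"counting", "addition", "subtraction"}))  # ['multiplication']
```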

Given the myriad of pressures and disruptions on the education sector, we should expect AI to become a more visible force in our schools and universities in the coming years.


Defense

This is the part that gets really scary. What if governments were to use AI technology for military purposes? What if machine learning could be used to optimize destruction in missile strikes? What if, instead of human-controlled drones, we had fast-learning autonomous drones whose main purpose was to take out targets? Given the applications described so far, such capabilities are not far from becoming reality.

The US, China, the UK, Russia, Israel and South Korea are currently researching autonomous weapons. While it is not clear what these would look like, one can imagine the combination of self-driving vehicles and game-winning AI systems producing killer machines. There is already widespread concern about what this could mean for the balance of power in modern warfare.

One of the first concerns is placing the decision to take a life outside of human hands. These machines could be programmed to kill specific targets with little regard for the means. While some ethical rules could be programmed, situational ethics are very difficult to replicate. Germany has already spoken up, defending the position that no robot should have sole authority over lethal force, thereby opposing fully autonomous systems.

Last year, over 3,000 AI experts called for an international ban on autonomous weapons. They included a diverse mix of scientists and industry leaders, all recognizing the Pandora's box that could be opened if this technology were used by a nation-state or, even worse, by criminal or terrorist groups. The debate will continue and expand as the technology required for such weapons fast becomes a reality.


Socio-Economic Impacts


The emergence of cloud computing and AI is fueling talk of a fourth industrial revolution. The first industrial revolution was driven by the steam engine in the 18th century. The second came with combustion engines and electrification. The third and most recent was the digital revolution, brought about by the widespread adoption of personal computers and the Internet. We are now entering a new industrial phase in which AI, IoT, robotics and 3D printing will usher in cyber-physical systems, namely the smart factory. Combined, these new technologies will not only accelerate automation but also reinvent the workplace.

Hence, the impact of AI goes way beyond blue-collar jobs, threatening to render even the most sophisticated knowledge workers unemployed. The basic rule of automation is: if your job can be easily codified into predictable instructions, it is liable to be taken over by algorithms. If a job follows specific procedures, regardless of how sophisticated they are, it can be done by an intelligent machine. Carl Frey and Michael Osborne estimate that as much as 47% of US jobs are at risk of being automated by emerging technologies such as AI. While this number is disputed, the message is clear: advances in machine learning will disrupt the job market in ways we have not experienced before. An imminent example is truck drivers. As self-driving vehicles become a reality, truck drivers could be out of a job in the next 5 to 10 years.

With that said, not all is doom and gloom for workers in a future with AI. In the past, new technologies have often created as many jobs as they have destroyed, and whole industries will emerge that currently do not exist. That's why it is important to distinguish between an automation perspective and an augmentation perspective. From an automation view, machines are cost-saving devices deployed in lieu of human workers. Augmentation sees machines as ways to enhance and expand current human capabilities. In some scenarios, computers can automate the tedious parts of a job so the person can focus on the more creative parts. When I worked in finance, my incentive to learn Excel and macros stemmed from my acute distaste for repetitive work. I can see how automating parts of a job can actually benefit the worker. Consider also the example above from GSU, where algorithms began to be used to predict drop-out rates. Instead of diminishing counseling jobs, the university actually expanded its staff: because it now had a better way to identify struggling students, it needed more counselors to intervene before a drop-out happened.

If AI is to become as pervasive as suggested, even a conservative estimate would mean a significant shift in the number and types of jobs available. While it is true that jobs will be created, just as in the recent automation of manufacturing, the new jobs will either require a different set of skills or will most likely pay less than the existing ones. Such a shift is bound to create a whole class of workers who lose their livelihood permanently. If AI truly makes the economy more productive with less human work, societies need to start thinking about how to address the unrest caused by significant job losses.

One idea being circulated is a universal basic income. This would be welfare on steroids, guaranteeing that all adult citizens have a minimal income from the government. The idea is that the only way to effectively share the wealth created by AI's efficiencies is to spread it throughout society. In essence, such an arrangement could decouple income from employment. At its best, citizens would be free to pursue occupations irrespective of the financial compensation. At its worst, it could plunge a large group of people into idleness. This radical idea would most likely not work in our current political system, but the discussion about jobs in an AI-saturated world needs to continue. It must address changes in our education system, job training and even societal views on work and full-time employment. Hopefully, by discussing these scenarios now, we can better address them when they come. This only highlights the need for theological and ethical thinking around AI. This is not just about new and cool gadgets but about a disruptive force with the potential for much good or much damage.
