Blog

Who Will Win The Global AI Race? Part I: China vs USA

While the latest outrageous tweet by Kanye West fills up the news, a major development goes unnoticed: the global race for AI supremacy. Currently, many national governments are drafting plans for boosting AI research and development within their borders. The opportunities are vast and the payoff significant. From a military perspective alone, AI supremacy could be the deciding factor in which country becomes the next superpower. Furthermore, an economy driven by a thriving AI industry can spur innovation across multiple industries while also boosting economic growth. On the flip side, a lack of planning in this area could lead to increasing social unrest as automation destroys existing jobs and workers find themselves excluded from AI-created wealth. There is just too much at stake to ignore. In this two-part blog, I'll look at how the top players in the AI race are planning to harness technology to their advantage while also mitigating its potential dangers.

 

China’s Moonshot Effort for Dominance

China has bold plans for AI development in the coming years. The country aims to be the undisputed AI leader by 2030. It holds a distinctive advantage in its ability to collect data from a vast population, yet it is still behind Western countries in algorithms and research. China does not have the privacy requirements that are standard in the West, which allows it almost unfettered access to data. If data is the raw material for AI, then China is rich in supply. However, China is a latecomer to the race and therefore lacks the accumulated knowledge held by leading nations. The US, for example, started tinkering with AI technology as early as the 1950s. While the gap is not insurmountable, it will take a herculean effort to match and then overtake the current leadership held by Western countries.

Is China up to the challenge? Judging by its current plan, the country has a shot. The ambitious strategy both acknowledges the areas where China needs to improve and outlines a plan to address them. At its center is the plan to develop a complete ecosystem of research, development and commercialization connecting government, academia and business. Within that, it includes plans to use AI to make the country's infrastructure "smarter" and safer. Furthermore, it anticipates heavy AI involvement in key industries like manufacturing, healthcare, agriculture and national defense. The last item clearly raises concern for neighboring countries that fear a rapid change in the Asian balance of power. Japan and South Korea will be following these developments closely.

China seeks to accomplish these goals through a partnership between government and large corporations. In this arrangement, the government has greater ability to control both the data and the process by which these technologies develop. This may or may not play to China's advantage; only time will tell. Of all national plans, China's has the longest range and, assuming the Communist Party remains in power, the advantage of continuity often missing from liberal democracies.

While portions of China's strategy are concerning, the world has much to learn from the country's moonshot effort in this area. Clearly, the Chinese government has realized the importance of this technology and its potential for the future of humanity. It is now working to ensure that this technology leads to a prosperous Chinese future. Developing countries would do well to learn from the Chinese example or see themselves once again politically subjugated by the nations that master these capabilities first. Unlike China, most of these nations cannot count on a vast population and a favored geopolitical position. The key for them will be to find niche areas where they can excel and focus their efforts there.

US’ Decentralized Entrepreneurship

Uncle Sam sits in a paradoxical position in this race. While the undisputed leader, with an advantage in patents and an established ecosystem for research and development, the country lacks a clear plan from its government. This was not always the case. In 2016, the Obama administration was one of the first to spell out principles to ensure public investment in the technology. The plan recognized that the private sector would lead innovation, yet it aimed to establish a role for the government in stewarding the development and application of AI. With the election of Donald Trump in 2016, this plan is now uncertain. No decision has been announced on the matter, so it is difficult to say what role the government will play in the future development of AI in the United States. While the current administration has kept investment levels untouched, there is no resolution on a future direction.

Given that many breakthroughs are happening in large American corporations like Google, Facebook and Microsoft, the US will undoubtedly play a role in the development of AI for years to come. However, a lack of government involvement could mean a lopsided focus on commercial applications. The danger of such a path is that common-good applications that do not yield a profit will be displaced by those that do. For example, the US could become the country with the most advanced gadgets while the majority of its population does not have access to AI-enabled healthcare solutions.

Another downside of a corporate-focused AI strategy is that these large conglomerates are becoming less and less tied to their nation of origin. Their headquarters may still be in the US, but much of the work and even the research is now starting to be done in other countries. Even in the US offices, the workforce is oftentimes foreign-born. We can debate the merits and downsides of this development, but for a president who was elected to put "America first," his administration's disinterest in AI is quite ironic. This is even more pressing as other nations put together their strategies for harnessing the benefits and stewarding the dangers of AI. For a president who loves to tweet, his silence on this matter is rather disturbing.

The Bottom Line

China and the US are currently pursuing very different paths in the AI race. Without a clear direction from the government, the US is relying on private enterprise to lead progress in this field. Given the US' current lead, such a strategy can work, at least in the short run. China is coming from the opposite side, where the government is leading the effort to coordinate and optimize the nation's resources for AI development. China's wealth of centralized data also gives it a competitive advantage, one that it must leverage in order to make up for being a latecomer.

Will this be a battle between entrepreneurship and central planning? Both approaches have their advantages. The first counts on the ingenuity of companies to lead innovation. The competitive advantage awaiting AI leaders has huge upsides in profit and prestige. It is this entrepreneurial culture that has driven the US to lead the world in technology and research. Hence, such a decentralized effort can still yield great results. On the flip side, a centralized effort, while possibly stifling innovation, has the advantage of focusing efforts across companies and industries. Given AI's potential to transform numerous industries, this approach can also succeed and yield tremendous returns.

What is missing from both strategies is a holistic view of how AI will impact society. While there are institutions in the US working on this issue, the lack of coordination with other sectors can undermine even the best efforts. Somewhere on the spectrum between centralized planning and decentralized entrepreneurship, there must be a middle ground. This is the topic of the next blog, where I'll talk about Europe's AI strategy.

 

Travelers Theology: Wrestling With a Powerful AI God

Recently, I was browsing through new shows on Netflix when I stumbled upon Travelers. The premise seemed interesting enough to make me want to check it out. From the very first episode, I was hooked. Soon after, my wife watched the first episode and it became a family affair. Starring Eric McCormack (Will & Grace) and created by Brad Wright (Stargate), Travelers is a show about people from a distant future who come to the 21st century in an effort to change history and re-write their present.

You may wonder, "Nothing new here; many shows and movies have explored this premise." That is true. What makes Travelers unique is how its characters arrive in the present and how the show explores emerging technologies in a thoughtful and plausible way. The travelers come back in time by sending the consciousness of people from the future into the bodies of those who are about to die in the 21st century. Having the benefit of knowing history allows them to pinpoint the exact time of arrival, which makes for some pretty interesting situations (a wife about to be killed by her abusive husband, a mentally challenged woman about to be attacked by robbers, a heroin addict about to overdose). The travelers then continue the life of their "host," making those around them believe they are still the same person.

Spoiler alert: the next paragraphs openly discuss plot points from the show.

By the end of the first season, we learn about the pivotal role AI plays in the plot. Throughout the first episodes, the travelers keep talking about "the Director," who has a "grand plan." That becomes their explanation for carrying out missions even when they cannot understand why they are doing what they are told. They also follow six rules meant to limit their interference in the past. At first, viewers assume they are talking about a person leading the effort. In the last episode of season one, we learn that "the Director" is actually a supercomputer (a Quantum Frame) able to consider millions of possible scenarios and therefore direct travelers to their assigned missions. We are really dealing with an AI god, one that is quasi-omniscient and demands humans' trust and devotion.

Exploring Rich Religious Imagery

While the show explores religious imagery throughout, this aspect comes to the forefront in episode eight of season two. In it, one of the travelers (aptly and ironically named Grace) is to be judged by three judges (programmers). The setting for that: a church. As they gather in the sanctuary, the "Trinity" of programmers initiates proceedings under the watchful eye of the Director (through a tiny camera that records the event). Grace, an obnoxious traveler devoid of social skills, is charged with treason for taking action on her own initiative in direct challenge to the grand plan.

As the judgment unfolds, scenes that juxtapose the programmer judges with an empty cross in the background reinforce the explicit religious connection the writers are making here. Throughout the hearings, Grace insists that her actions, even if unorthodox, were only meant to save the Director. Yet she is surprised to learn that the Director itself had summoned her judgment. She seems disappointed at that, wondering how the Director could judge her if it knew her intentions. This is an interesting assertion because it implies that the Director actually knew her thoughts, raising it to the level of a god.

Grace is found guilty by the programmer trinity and is handed over to the Director for sentencing. The others speculate that she will be overwritten. That is the worst punishment: she would not only die in the 21st century but her consciousness would cease to exist. It is the theological equivalent of eternal death or annihilation.

The next scene is probably one of the most profound and provocative of the whole show so far. Grace goes to a small room where she faces three large screens from which the Director will speak directly to her. This is the first time in the show that the audience gets to see the Director act by itself rather than through messengers.

While she is no longer in the sanctuary, the room still has an empty cross in the background and evokes the idea of a confessional booth. At that point, I was really curious to know how they would portray the Director. What kind of images would she see? Would it be the machine itself or something else?

No machine appears; instead, human faces show up on the screens. They are all older and seem to be on some type of life support. At times, they seem to represent Grace's parents, but that was not clear. In this climactic scene, Grace finds forgiveness from the Director and is not overwritten. The machine communicates divine qualities through human faces. Grace finds peace and absolution and reaffirms her trust in and devotion to the Director. In short, she experiences a theophany: a watershed personal moment that reveals a new facet of the divine being to a human receiver.

Photo by Bruno van der Kraan on Unsplash

A New Perspective on Omniscience

What to make of this? I must say that when I first learned of Levandowski's efforts to create an AI religion, I discounted it as sensational journalism. Surely there is a fringe of techno-enthusiasts who would follow that path, yet I could not see how such an idea could appeal to a wider audience. Seeing Travelers' religious treatment of AI has made me re-think. Maybe an AI religion is not as far-fetched as I originally thought. An advanced AI bolstered by powerful hardware and connected to a vast digital history of information could indeed do a great job of optimizing timelines. That is, it could consider a vast number of scenarios in ways that are unfathomable to the human mind. This could make it quasi-omniscient in a way that could elicit a god-like trust from humans. One could say such an arrangement would be the triumph of secular science, replacing a mythical god with a technological one.

From a Judeo-Christian perspective, an AI god would be the epitome of human idolatry: people worshiping idols, except that the calf images are replaced by silicon superstructures that can actually hear, speak and think faster than any human. This would be an example of idols on steroids. As a firm believer in the benefits of AI, I do worry about the human inclination to idolize tools. As a Christian, I owe my allegiance to a transcendent God. AI can be a formidable tool but nothing more.

Yet the prospect of an AI god is still interesting in that it may help us understand a transcendent God better. How so, you may ask? Religion is often defined by powerful metaphors. For some monotheistic faiths, God is a father. Such a metaphor has obvious benefits as it elicits images of authority, provision and comfort. I wonder if using a powerful AI as a metaphor could reveal a facet of divinity that we have not explored before.

In a previous blog, I suggested that AI offered a paradigm of partnership for religion as opposed to blind obedience. Reflecting on Travelers' portrayal of an AI god sheds light on the aspects of God's omniscience and wisdom. A timeless being with infinite "processing capacity" could very well consider all the possible alternatives and choose the one that leads to the best outcome (however that best is defined). In computer science terms, the best is defined by an objective function, basically the goal you are trying to achieve.
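To make that concrete, here is a minimal sketch in Python with entirely made-up numbers (the scenario names and scores are hypothetical); it simply shows what an objective function is and how a program can scan many scenarios and keep the one that scores best:

def objective(scenario):
    # The goal being optimized: a made-up score that rewards wellbeing and penalizes harm.
    return scenario["wellbeing"] - 2 * scenario["harm"]

scenarios = [
    {"name": "timeline A", "wellbeing": 80, "harm": 30},
    {"name": "timeline B", "wellbeing": 65, "harm": 5},
    {"name": "timeline C", "wellbeing": 90, "harm": 50},
]

# Consider every scenario and keep the one that maximizes the objective.
best = max(scenarios, key=objective)
print(best["name"])  # prints "timeline B"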

How is that different from previous views of omniscience and wisdom? In the past, omniscience was seen as the idea that God knows what decisions we will make and therefore ultimately knows the future. In some traditions, this idea was amplified into the concept of predestination. The problem with such an approach is that it limits God to one outcome and makes humans "automatons." In other words, there is really no choice or risk; everything is pre-determined from the beginning. I suspect this view of God was heavily based on our own human mind, which cannot consider more than one scenario for the future at a time.

What if God's omniscience were more like the super AI's knowledge, able to simultaneously consider multiple outcomes and then guide us toward the better one, or correct course when that path is undermined? Wouldn't that be a fuller view of omniscience? This scenario allows for human choice while still attributing superior knowledge and control to God. Furthermore, this metaphor reveals a "smarter" God who is not bound by the one-track linear thinking of humans. Humanity realizes that its choices matter and can create alternative futures. Even so, it still has the comfort of a God who can see through all this and guide it from a perspective that can consider manifold outcomes.

Such God would certainly be worthy of human obedience, awe and praise.

“Do You Trust This Computer?”: A Wake Up Call on Superintelligence

It is not every day that Elon Musk endorses and partially funds a documentary. Needless to say, when that happens, anyone tracking the tech industry takes notice. In "Do You Trust This Computer?", Chris Paine brings together experts, journalists and CEOs from the tech industry and academia to make a compelling point about the dangers of Superintelligence for humanity's future. In this blog, I will review the documentary and offer some thoughts on how we might respond to the issues it raises.

 

 

In an age of misguided attention, I welcome any effort to raise awareness of the social impacts of AI. While AI has gained notoriety recently, there has been little thoughtful discussion of its impacts. I believe this documentary does exactly that and for that reason alone, I encourage everyone to watch it.

Surprisingly, the documentary did not uncover any new information. Most of the examples cited have been mentioned in other media discussing AI. The documentary contributes to the discussion not because of its content per se but because of how it frames the issue of Superintelligence. Many of us have heard of the singularity, the rise of killer AI, the death of privacy through Big Data and the dangers of automated weapons. Chris Paine's genius was to bring those issues together into a cohesive argument that shows the plausibility and the danger of the rise of Superintelligence. The viewer comes away with greater clarity and awareness on the subject.

Compelling but Incomplete

In short, Paine argues that if we develop AI without proper safeguards, it could literally destroy us as a species. It wouldn't do that intentionally but as a side effect of maximizing its goal. The example he gives is how we humans have no qualms about removing an ant mound that stands in the way of a path we are building. Superintelligent entities would look at us with the same regard we give ants and would therefore lack any human-centered ethical norms. Beyond that, he also touched on other topics: impending job elimination, Big Data's impact on our lives and the danger of automated weapons. While the documentary was not overly alarmist, it does challenge us to take these matters seriously, encouraging conversation at multiple levels of society.

In spite of its compelling argument, I found the treatment of the topic lacking in some respects. For one, the film could have explored further how AI can lead to human flourishing and economic advancement. While at times it touched on the potential of AI, these bits were overshadowed by the parts that focused on its dangers. I wish it had discussed how, just like previous emerging technologies, AI will not only eliminate jobs but also create new industries and economic ecosystems. Surely its impact is bound to create winners and losers. However, to overlook its potential for job creation does a disservice to the goal of an honest dialogue about our AI future.

Moreover, the rise of artificial Superintelligence, though likely, is far from a certainty. At one point, one of the experts talked about how we have become numb to the dangers of AI primarily because of Hollywood's exhaustive exploitation of this theme. That was a great point; however, that skepticism may not be completely unfounded. AI hype has happened before and so has an AI winter. In the early 60s, many were already predicting a robot takeover when AI technology had just entered the scene. It turned out that technical challenges and hardware limitations slowed AI development enough that government and business leaders lost interest in it. This was the first AI winter, stretching from the mid-70s to the mid-90s. This historical lesson is worth remembering because AI is not the only emerging technology competing for funding and attention at this moment.

Exposing The Subtle Impact of AI

I certainly hope that leaders in business and politics are heeding Chris Paine's warnings. My critique above does not diminish the importance of the threat posed by Superintelligence. However, most of us will not be involved in this decision process. We may be involved in choosing who will be at the table but not in the decision-making itself. So, while this issue is very important, we as individual citizens will have little agency in setting the direction of Superintelligence development.

With that said, the documentary did a good job of discussing the more subtle impacts of AI on our daily lives. That, to me, turned out to be its best contribution to the AI dialogue because it helped expose how many of us are unwilling participants in the process. Because AI lives and dies on data, data collection practices are deeply consequential to the future of its development. China is leaping ahead in the AI race primarily because of its government's ability to collect personal data with little to no restriction. More recently, the Facebook and Cambridge Analytica scandal exposed how data collection done by large corporations can also be unethical and harmful to our democracy.

Both examples show that centralized data collection efforts are ripe for abuse. The most consequential act we can take in the development of AI is to be more selective about how, and to whom, we give our personal data. Moreover, as consumers and citizens, we must ensure we share in the benefits our data creates. This process of data democratization is the only way to keep effective controls on how data is collected and used. As data collection decentralizes, the risk of an intelligence monopoly decreases and the benefits of AI can be more equitably shared among humanity.

Moreover, it is time we start questioning the imperative of digitization. Should everything be tracked through electronic devices? Some aspects of our analog world are not meant to be digitized and processed by machines. The challenge is to define these boundaries and ensure they are kept out of reach of intelligent machines. This is an important question to ask as we increasingly use our smartphones to record every aspect of our lives. In this environment, writing a journal by hand, having unrecorded face-to-face conversations and taking a technology sabbatical can all be effective acts of resistance.

Hybrid Intelligence: When Machines and Humans Work Together

In a previous blog, I argued that the best way to look at AI was not from a machine-versus-human perspective but from a human PLUS machine paradigm. That is, the goal of AI should not be replacement but augmentation. Artificial Intelligence should be about enhancing human flourishing rather than simply automating human activities. Hence, I was intrigued to learn about the concept of HI (Hybrid Intelligence). HI is basically augmentation made manifest: human intelligence working together with machine intelligence toward a common goal.

As usual, the business world leads in innovation, and this case is no different. Cindicator is a startup that combines the collective intelligence of human analysts with machine learning models to make investment decisions. Colin Harper puts it this way:

Cindicator fuses together machine learning and market analysis for asset management and financial analytics. The Cindicator team dubs this human/machine predictive model Hybrid Intelligence, as it combines artificial intelligence with the opinions of human analysts “for the efficient management of investors’ capital in traditional financial and cryptomarkets.”

This is probably the first enterprise to take an explicitly hybrid approach to investment management. You may find other examples in which investment decisions are driven by analysts, and others that rely mostly on algorithms. This approach seeks to combine the two for improved results.

How Does Hybrid Intelligence Work?

One could argue that any example of machine learning is at its core hybrid intelligence. There is some truth to that. Every exercise in machine learning requires human intelligence to set it up and tune the parameters. Even as some of these tasks are now being automated, one could still argue that the human imprint of intelligence is still there.

Yet, this is different. In the Cindicator example, I see a deliberate effort to harness the best of both machines and humans.

On the human side, the company is harnessing the wisdom of crowds by aggregating analysts’ insights. The reason why this is important is that machine learning can only learn from data and not all information is data. Analysts may have inside information that is not visible in the data world and can therefore bridge that gap. Moreover, human intuition is not (yet) present in machine learning systems. Certain signals require a sixth sense that only humans have. For example, a human analyst may catch deceptive comments from company executives that would pass unnoticed by algorithms.

On the machine side, the company developed multiple models to uncover predictive patterns from the data available. This is important because humans can only consider a limited amount of scenarios. That is one reason why AI has beaten humans in games where it could consider millions of scenarios in seconds. Their human counterparts had to rely on experience and hunches. Moreover, machine learning models are superior tools for finding significant trends in vast data, which humans would often overlook.
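To make the idea tangible, here is a small conceptual sketch in Python. It is not Cindicator's actual method, just one simple way a hybrid forecast could blend the crowd's judgment with a model's output; the numbers and the 50/50 weighting are assumptions for illustration:

import statistics

def hybrid_forecast(analyst_estimates, model_estimate, model_weight=0.5):
    # Blend the crowd's median estimate (wisdom of crowds) with the machine's prediction.
    crowd_estimate = statistics.median(analyst_estimates)
    return model_weight * model_estimate + (1 - model_weight) * crowd_estimate

analysts = [102.0, 98.5, 105.0, 99.0]    # hypothetical human forecasts of an asset price
model = 101.2                            # hypothetical machine learning prediction
print(hybrid_forecast(analysts, model))  # prints 100.85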

Image by Gerd Altmann from Pixabay

Can Hybrid Intelligence Lead to Human Flourishing?

HI holds much promise in augmenting rather than replacing human intelligence. At its core, it starts from the principle that humans can work harmoniously with intelligent machines. The potential uses are limitless. An AI-aided approach can supercharge research for the cure of diseases, offer innovative solutions to environmental problems and even tackle intractable social ills with humane solutions.

This is the future of work: collective human intelligence partnering with high-performing Artificial Intelligence to solve difficult problems, create new possibilities and beautify the world.

Much is said about how many jobs AI will replace. What is less discussed is the emergence of new industries made possible by the partnership between intelligent machines and collective human wisdom. A focus on job losses assumes an economy of scarcity where a fixed amount of work is available to be filled by either humans or machines. An abundance perspective looks at the same situation and sees the empowerment of humans to reach new heights. Think about how many problems remain to be solved, how many endeavors are yet to be pursued, and how much innovation is yet to be unleashed.

Is this optimistic future scenario inevitable? Not by a long shot. The move from AI to HI will take time, effort and many failures. Yet looking at AI as an enabler rather than a threat is a good start. In fact, I would say that the best response to the AI threat is not a return to a past of dumb machines but a partnership in which machine and human entities steer innovation for the flourishing of our planet. Only HI can steer AI toward sustainable flourishing.

There is work to do, folks. Let’s get on with the business of creating HI for a better world!

Blockchain and AI: Powerful Combination or Concerning Trend?

Bitcoin is all over the news lately. After the meteoric rise of this financial instrument, the financial world is both excited and fearful: excited to get on the bandwagon while it is on the rise, but scared that this could be another bubble. Even more interesting has been the rise of blockchain, the underlying technology that enables Bitcoin to run (for those wondering what this technology is, check out this video). In this blog, I reflect on the combination of AI and blockchain by examining an emerging startup in the field. Can AI and blockchain work together? If so, what kinds of applications could this combination be used for?

Recently, I came across this article from Coincentral that starts answering the questions above. In it, Colin Harper interviews the CEO of Deep Brain Chain, one of the first startups attempting to bring AI and blockchain technology together. DBC's CEO He Yong puts it this way:

DBC will allow AI companies to acquire data more easily.  Because a lot of data are private data.  They have heavy security, such as on health and on finance.  For AI companies, it’s almost next to impossible to acquire these data.  The reason being, these data are easy to copy.  After the data is copied, they’re easy to leak.  One of DBC’s core features is trying to solve this data leakage issue, and this will help AI companies’ willingness to share data, thus reducing the cost you spend on data, to increase the flow of data in society. This will expand and really show the value of data.

As somebody who works within a large company using reams of private data, I can definitely see the potential of this combination. Blockchain can provide the privacy, through encryption, that could facilitate the exchange of data between private companies. Not that this does not happen already, but it is certainly discouraged given the issues the CEO raises above. Certainly, the upside of aggregating this data for predictive modeling is significant. Companies would have complete datasets that let them see sides of the customer that would otherwise be invisible to them.

However, as a citizen, such a development also makes me ponder. Who will get access to this shared data? Will this be done in a transparent way so that regulators and the general population can monitor the process? Who will really benefit from this increased exchange of private data? While I agree that costs would fall and efficiency would improve, my main concern is the aims for which this data will be used. Don't get me wrong: targeted marketing that follows privacy guidelines can actually benefit everybody, and richer data can also help a company improve customer service.

With that said, the way He Yong describes it, it looks like this combination will primarily benefit large private companies that will use the data for commercial aims. Is this really the promise of an AI and blockchain combination: allowing large companies to know even more about us?

Further in the interview, He Yong suggested that blockchain could actually help assuage some of the fears that AI could get out of control:

Some people claim that AI is threatening humanity.  We think that blockchain can stop that, because AI is actually not that intelligent at the moment, so the threat is relatively low.  But in a decade, two decades, AI will be really strong, a lot stronger than it is now.  When AI is running on blockchain, on smart contracts, we can refrain AI.  For example, we can write a blockchain algorithm to restrain the computational power of AI and keep it from acting on its own.

Given my limited knowledge of blockchain, it is difficult to evaluate whether this is indeed the case. I still believe that the biggest threat of AI is not the algorithms themselves but how they are used. Blockchain, as described here, can help make the process more robust, giving human controllers more tools to halt algorithms gone haywire. Yet beyond that, I am not sure how much it can act as a true "check and balance" on AI.

I'll be monitoring this trend in the next few months to see how it develops. Certainly, we'll see more and more businesses emerge seeking to marry blockchain and AI. These two technologies will disrupt many industries by themselves; combining them could be even more impactful. Whether they can be combined in service of human flourishing remains to be seen.

The Machine Learning Paradigm: How AI Can Teach Us About God

It is no secret that AI is becoming a growing part of our lives and institutions. There is no shortage of articles touting the dangers (and, occasionally, the benefits) of this development. What is less publicized is the very technology that enables the growing adoption of AI, namely Machine Learning (ML). While ML has been around for decades, its flourishing depended on advanced hardware capabilities that have only become available recently. While we tend to focus on sci-fi-like scenarios of AI, it is Machine Learning that is most likely to revolutionize how we do computing, by enabling computers to act as partners rather than mere servants in the discovery of new knowledge. In this blog, I explain how Machine Learning is a new paradigm for computing and use it as a metaphor to suggest how it can change our view of the divine. Who says technology has nothing to teach religion? Let the skeptics read on.

What is Machine Learning?

Before explaining ML, it is important to understand how computer programming works. At its most basic level, programs (or code) are sets of instructions that tell the computer what to do given certain conditions or inputs from a user. For example, in the WordPress code for this website, there is an instruction to publish this blog to the World Wide Web once I click the button "Publish" in my dashboard. All the complexities of putting this text onto a platform that can be seen by people all over the world are reduced to lines of code that tell the computer and the server how to do that. The user, in this case me, knows nothing of that except that when I click "Publish," I expect my text to show up at a web address. That is the magic of computer programs.

Continuing with this example, it is important to realize that this program was once written by a human programmer. He or she had to think about the user and their goals, and about the complexity of making that happen in a computer language. The hardware, in this scenario, was simply a blind servant that followed the instructions given to it. While we may think of computers as smart machines, they are only as smart as they are programmed to be. Remove the instructions contained in the code and the computer is just a box of circuits.

Let's contrast that with the technique of Machine Learning. Consider now that you want to write a program for your computer to play and consistently win an Atari game of Pong (I know, not the best example, but when you are preparing a camp for middle schoolers that is the only example that comes to mind). The programming approach would be to play the game yourself many times to learn winning strategies. Then you would write them down and codify these strategies in a language the computer can understand. You would then spend countless hours writing code that spells out multiple scenarios and what the computer is supposed to do in each one of them. Just writing about it seems exhausting.
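Sketched in Python with a hypothetical game state (the coordinates here are made up), the programming approach might look like this: a human spells out the strategy as explicit rules, and the computer simply follows them.

def move_paddle(ball_y, paddle_y):
    # A hand-written rule: chase the ball's vertical position.
    if ball_y > paddle_y:
        return "down"
    if ball_y < paddle_y:
        return "up"
    return "stay"

print(move_paddle(ball_y=0.7, paddle_y=0.4))  # prints "down"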

Now compare that with an alternative approach in which the computer actually plays the game and learns to maximize its score based on past playing experience. After some initial coding, the rest of the work falls on the computer, which plays the game millions of times until it reaches a level of competency where it wins consistently. In this case, the human outsources the game playing to the computer and only monitors the machine's progress. Voila, there is the magic of Machine Learning.
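Here is a toy version of that learning approach, again a sketch rather than the actual Atari setup: the program is never told the strategy; it plays a simplified "game" thousands of times and keeps a running estimate of which action pays off.

import random

def play(action):
    # A hypothetical game: action 1 wins 70% of the time, action 0 only 30%.
    return 1 if random.random() < (0.7 if action == 1 else 0.3) else 0

value = [0.0, 0.0]   # the program's running estimate of each action's worth
counts = [0, 0]

for episode in range(10000):
    # Mostly exploit what has been learned so far, but sometimes explore.
    action = random.randrange(2) if random.random() < 0.1 else value.index(max(value))
    reward = play(action)
    counts[action] += 1
    value[action] += (reward - value[action]) / counts[action]  # update the estimate

print(value)  # the estimates approach [0.3, 0.7]: the program discovered the better action on its own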

A New Paradigm for Computing

As the example above illustrates, Machine Learning changes the way we do computing. In a programming paradigm, the computer is following detailed instructions from the programmer. In the ML paradigm, the learning and discovery is done by the algorithm itself. The programmer (or data scientist) is there primarily to set the parameters for how the learning will occur as opposed to giving instructions for what the computer is to do. In the first paradigm, the computer is a blind servant following orders. In the second one, the computer is a partner in the process.

There are great advantages to this paradigm. Probably the most impactful is that the computer can now learn patterns that would be impossible for the human mind to discern. This opens space for new discoveries that were previously inaccessible when the learning was restricted to the human programmer.

The downside is also obvious. Since the learning is done through the algorithm, it is not always possible to understand why the computer arrived at a certain conclusion. For example, last week I watched the Netflix documentary on the recent triumph of a computer against a human player in the game of Go. It is fascinating and worth watching in its own right. Yet I found it striking that the creators of AlphaGo could not always tell why the computer was making a certain move. At times, the computer seemed delusional to human eyes. There lies the danger: as we transfer the learning process to the machine, we may be at the mercy of the algorithm.

A New Paradigm for Religion

How does this relate to religion? Interestingly enough, these contrasting paradigms in computing shed light on how we describe the relationship between humans and God. As the foremost AI pastor Christopher Benek once said: "We are God's AI." Following this logic, we can see how moving from a paradigm of blind obedience to one of partnership can have revolutionary implications for understanding our relationship with the divine. For centuries, the tendency was to see God as the absolute Monarch demanding unquestioning loyalty and unswerving obedience from humans. This paradigm, unfortunately, has also been at the root of many abusive practices by religious leaders. It is especially dangerous when the line between God and the human leader is blurry: unswerving obedience to God can easily be mistaken for blind obedience to a religious leader.

What if, instead, our relationship with God could be described as a partnership? Note that this does not imply an equal partnership. However, it does suggest an interaction between two intelligent beings who have separate wills. What would it be like for humanity to take responsibility for its part in this partnership? What if God is waiting for humanity to do so? The consequences of this shift could be transformative.

4 Reasons Why We Should be Teaching AI to Kids

In a previous blog, I talked about a multi-disciplinary approach to STEM education. In this blog I want to explore how teaching AI to kids can accomplish those goals while also introducing youngsters to an emerging technology that will greatly impact their future. If you are a parent, you may be asking: why should my child learn about AI? Recently, the importance of STEM education has been emphasized by many stakeholders. Yet what is it about learning AI that makes it different from other STEM subjects?

First, it is important to define what learning AI means. Lately, the term AI has been used for any instance in which a computer acts like a human. This varies from the automation of tasks all the way to humanoids like Sophia. Are we talking about educating children to build sentient machines? No, at least not at first. The underlying technology that enables AI is machine learning. Simply put, as hinted by its name, these are algorithms that allow computers to learn directly from data or from interaction with an environment rather than through programming. This is not a completely automated process, as the data scientist and/or developer must still manage the learning process. Yet, at its essence, it is a new paradigm for how to use computers. We go from programming, in which we instruct the computer to carry out tasks, to machine learning, where we feed the computer data so it can discover patterns and learn tasks on its own. The question then is: why should we teach AI (machine learning) to kids?
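For readers curious about what "feeding the computer data" looks like in practice, here is a minimal sketch using the scikit-learn library (assuming it is installed); the tiny dataset is invented purely for illustration:

from sklearn.tree import DecisionTreeClassifier

# Each example: [hours playing games, hours studying]; label 1 means the student passed a quiz.
X = [[5, 1], [4, 2], [1, 6], [0, 5], [6, 0], [2, 7]]
y = [0, 0, 1, 1, 0, 1]

model = DecisionTreeClassifier().fit(X, y)  # the computer discovers the pattern from the data
print(model.predict([[1, 5]]))              # prints [1]: a rule learned from examples, not written by hand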

Exposes Them to Coding

Teaching AI to kids starts with coding. While we'll soon have advanced interfaces for machine learning, some allowing a "drag-and-drop" experience, for now doing machine learning requires coding. That is good news for educational purposes. I don't need to rehash here the benefits of coding education. In recent years, there has been a tremendous push to get children to start coding early. Learning to code introduces them to a type of thinking that will help them later in life even if they do not become programmers. It requires logic and mathematical reasoning that can be applied to many endeavors.

Furthermore, Generation Z grew up with computers, tablets and smartphones. They are very comfortable using them and incorporating them into their world. Yet, while large tech companies have excelled in ensuring no child is left without a device, we have done a poor job of helping children understand what is under the hood of all this technology they use. Learning to code is a way to do exactly that: lift up the hood so they can see how these things work. Doing so empowers them to become creators with technology rather than mere consumers.

Works Well With Gaming

The reality is that AI started with games. One of the first experiments with AI was to make a computer learn to play checkers. Hence, the combination of AI and gaming is naturally complementary. While there are now courses that teach children to build games, teaching AI goes a step further: they actually get to teach the computer to play games. This is important because games are a common part of their world. Teaching AI with games helps them engage with the topic by bringing it into territory familiar to their imagination.

I suspect that gaming will increasingly become part of education in the near future. What once was the scourge of educators is turning out to be an effective tool for engaging children in the learning process: there are clear objectives, instant rewards and challenges to overcome. Teaching machine learning with games rides this wave and enhances it by giving children an opportunity to fine-tune learning algorithms with objectives that captivate their imagination.

Promotes Data Fluency

Data is the electricity of the 21st century. Helping children understand how to collect, examine and analyze data sets them up for success in the world of big data. We are moving toward a society where data-driven methods increasingly shape our future. Consider, for example, how data is transforming fields like education, criminal courts and healthcare. This trend shows no signs of slowing down.

This trend will not be limited to IT jobs. As sensors become more advanced, data collection will start happening in many more forms. Soon fitness programs will be informed, shaped and measured by body sensors that can provide more precise information about our bodies' metabolism. Sports like baseball and football are already being transformed by the use of data. Thus, it is not far-fetched to assume that today's children will eventually be working in jobs, or building businesses, that live on data. They may not all become data scientists or analysts, but they will likely need to be familiar with data processes.

Opens up Discussions About Our Humanity

Because AI looms large in science fiction, the topic opens the way for discussions in Literature, Ethics, Philosophy and Social Studies. The development of AI forces us to reconsider what it means to be human. Hence, I believe it provides a great platform for adding the Humanities to an otherwise robust STEM subject. AI education can and should include a strong component of reading and writing.

Doing so develops critical thinking and also helps children connect the "how" with the "why." It is not enough to learn how to build AI applications; we must foremost ask why we should build them. What does it mean to outsource reasoning and decision-making to machines? How much automation can happen without compromising human flourishing? You may think these are adult questions, but we underestimate our children's ability to reflect deeply about the destiny of humanity. They, more than us, need to think about these issues, for they will inherit this world.

If we can start with them early, maybe they can make better choices and clean up the mess we have made. Also, teaching AI to kids can be a lot easier than we think.

Integrated STEM Education: Thoughtful, Experiential and Practical

In a previous blog, I proposed the idea of teaching STEM with a purpose. In this blog, I want to take a step back to evaluate how traditional STEM education fails to prepare students for life and propose an alternative way forward: Integrated STEM education.

One of the cardinal sins of our 19th-century-based education system is its inherent fragmentation. Western academia has compartmentalized the questions of "why" and "how" into separate disciplines.[note] While I am speaking based on my experience in the US, I suspect these issues are also prevalent in the Majority World.[/note] Let STEM students focus on the "how" (skills) and leave the questions of "why" (critical thinking) to philosophers, ethicists and theologians. This way, students are left to make the connection between these questions on their own.

I understand that this will vary for different subjects. The technical rigors and complexity of some disciplines may leave little space for reflection. Yet, if STEM education is all about raising detached observers of nature or obsessed makers of new gadgets, then we have failed. GDP may grow and the economy may benefit from them, yet have we really enriched the world?

One could argue that Liberal Arts colleges already do that. As one who graduated from a Liberal Arts program, I can attest there is some truth to this statement. Students are required to take a variety of courses that span Science, Literature, Social Studies, Art and Math. Even so, students take these classes separately with little to no help in integrating them. Rarely do they have opportunities to engage in multi-disciplinary projects that challenge them to bring together what they learned. The professors themselves are specialists in a small subset of their discipline, often having little experience interacting outside their disciplinary guild. Furthermore, while a Liberal Arts education does a good job of exposing students to a variety of academic disciplines, it does a rather poor job of teaching practical skills. Some students come out of it with the taste and confidence to continue learning. Yet many leave these degrees confused and end up having to pursue professional degrees in order to pick a career.

Professional training does the opposite. It is precisely what a Liberal Arts education is not: highly practical, short, focused learning of a specific skill. As one who has taken countless professional training courses, I certainly see their value. They do bring together different disciplines and tend to be project-based. The downside is that very few people can efficiently learn anything in a week of six-hour class days. The student is exposed to the contours of a skill, but the learning really happens later, when and if that student tries to apply the skill to a real-world work problem. Students also never have time to reflect on the implications of what they are doing. They are often paid by their companies to acquire the skill quickly so they can increase productivity for the firm. Such a focus on efficiency greatly degrades the quality of the learning, and students are likely to forget what they learned in the long run.

Finally there is learning through experiences. Most colleges recognize that and offer study abroad semesters for students wanting to take their learning to the world. I had the opportunity to spend a summer in South Korea and it truly changed me in enduring ways. The same can be said for less structured experiences such as parenting, doing community service, being involved with a community of faith and work experiences. A word of caution here is that just going through an experience does not ensure the individual actually learns. While some of the learning is assimilated, a lot of it is lost if the individual does not digest the experience through reflection, writing and talking about it to others.

Clearly these approaches, in and of themselves, are incomplete forms of education. A Liberal Arts education alone will only fill one's head with knowledge (and a bit of pride too). Professional training will help workers get the job done, but they will not develop as individuals. Experiences apart from reflection will only produce pleasant memories. What is needed is an approach that combines the strengths of all three.

I believe a hands-on, project-based, ethically reflective STEM education draws from the strengths of all three. It is broad like the Liberal Arts, skill-building like professional training and experience-rich through its hands-on projects. Above all, it should occur in a nurturing environment where young students are encouraged to take risks while still receiving guidance so they can learn from their mistakes. Creating a neatly controlled environment for learning is akin to the world of movies, where main characters come up with plans on a whim and execute them flawlessly. Real life never happens that way. It is full of failures, setbacks, disappointments and, occasionally, some glorious successes. The more our educational experience mimics that, the better it will prepare students for the real world.

Black Panther: A Powerful Postcolonial, African-Futurist Manifesto

Black Panther is more than a movie, it is a manifesto of possibilities and a vivid expression of Postcolonial imagination. Much has been said about the importance of having an African super-hero. I want to discuss why Black Panther matters to all of us, Western white people included. I never thought I would be able to address Postcolonialism, Theology and Technology in one blog. Black Panther allows me to do just that. I encourage everyone to see it and will do my best to keep this piece free of spoilers.

Back in seminary, I did an independent study on Theology and Postcolonialism (you can check one of my papers from that class here). In the middle of the last century, as most colonies gained their independence, Majority World scholars realized that political freedom was not enough to undo the shackles of Colonialism. They realized that colonial paradigms still persisted in the very sources of knowledge of Modernity. Therefore, what was needed was a full deconstruction of knowledge as it was handed to them by Euro-centric scholars. Inspired by Foucault's idea that speech is power, this movement started first in Literature and then moved to the Social Sciences. This project of deconstruction continues to this day. In my view, Black Panther represents the next step in this progression. If the first Postcolonial authors were there to identify and deconstruct Western biases embedded in literature, the writers of Black Panther begin the reconstruction by creating a Postcolonial imagination.

How so? First, it is important to say what Black Panther is not. It is not a depiction of African suffering under the White oppressor, like 12 Years a Slave. As necessary as that type of movie is, it is still enclosed in a Colonial paradigm that, albeit critically, still puts the White man at the center of the story. It is also not a depiction of harsh social realities, like Moonlight and City of God. While such narratives are also important and represent progress over the previous category (here minorities are at the center of the story), they lack a prophetic imagination of how things could change.

Black Panther represents a new category of its own. It paints an alternative, hopeful image, grounded in the sci-fi genre, of what these societies could be if they were to realize their God-given potential independent of Western Colonialism. What impresses me most is that the writers went to great lengths to imagine a future that remained authentically African even as it became technologically advanced. This African Futurism not only portrays a possible future for the Majority World but also challenges our current Western ideals of technology. It portrays a technology that is not there to replace nature but to merge with it. This sustainable picture is perhaps the best gift of African Futurism to the world.

Moreover, I thought it was important that not only the hero but also the anti-hero was of African descent. Here there is some controversy and pushback, as Christopher Lebron's essay brings up. Fair enough, yet a movie that depicted an African hero against a White villain would have missed the opportunity to re-imagine a postcolonial future by re-enforcing the colonial past. I cannot speak for those of African descent. Yet, as one born in the Majority World and inevitably linked to its story of struggle, I can say that true postcolonial imagination happens when we are able to see that our main problems are the ones coming from within. This is a very difficult task given the burden of oppressive structures that persist to the present day. Yet it is only when the problem becomes our own, and not the Colonizer's, that we can recover the power taken from us.

It is encouraging to see how this movie has become a catalyst for the African diaspora all over the world to re-think and re-imagine their identity. It is not just a fantasy that imagines a perfect world without problems but one in which good redeems a hopeless present. Here is where Theology comes in. Wakanda is a great picture of what the Jewish writers envisioned as the Kingdom of God coming to Earth. It does not happen through power or violence but through invitation and outreach. This is the type of Christ-like, upside-down power that the white Evangelical church in this country has forgotten. When we align ourselves with those who protect guns and stand against refugees, we have failed to understand the very heart of the gospel. I could say much more but for now, let those who have ears hear.

Black Panther is an invitation for new Postcolonial imaginations to emerge. I call on Latin Americans, Asians and Pacific Islanders to give us their version of a hopeful future. Our world will be richer for it. Let the forgotten find their voice, not only of pain but also of creativity, joy and transformation.

___________________________________//______________________________________

A year after I posted this, I wrote a blog going more into the actual architecture of Wakanda. To read that blog, click here.

Reflections on the 2018 LAMP Symposium on the Future of Life

Last Saturday I attended the 2018 LAMP (Leadership and Multi-faith Program) symposium, a collaborative endeavor between Emory University and Georgia Tech. The topic for this year was "Religious and Scientific Perspectives on the Future of Life." The event was subdivided into three parts, starting with life in the body and mind (religion meets science in deciphering the soul), moving to life on our planet (warnings about Global Warming inaction) and ending with life in outer space (an introduction to Astrobiology). Over lunch, we also learned about an AAAS (American Association for the Advancement of Science) initiative to build bridges between seminary students and scientists.

Unfortunately, I was not able to stay for the last session and therefore cannot speak to it in detail. Yet the very idea that there is an academic discipline studying the possibility of life on other planets is fascinating. I am encouraging my 8-year-old daughter to look into it as a college major. It sounds like a truly exciting field.

The symposium opened with Dr. Arri Eisen describing his experience of teaching science to Tibetan Buddhist monks. Apart from some entertaining stories, the main gist followed the "we have much to learn from each other if we are open" theme. While this is not earth-shattering, it was refreshing to see a scientist affirm that his craft is not immune to personal and/or cultural biases. Of the speakers that followed, the most interesting was Dr. Mascaro's description of her work testing the health impacts of meditation. She first showed the overwhelming evidence for the correlation between social connection and health: the lonelier we are, the more physically sick we become. Hence, any activity that can increase our sense of connection with others should also have health benefits, which proved to be the case. This is an important finding that hopefully, with time, will move us to look at physical health from a more holistic perspective.

I was particularly unimpressed by the contribution of the speakers from the religious side of the dialogue. To be fair, each of them had little time to fully state their case, but their observations added little to the debate. For example, the Muslim scholar's main point was to question the reliability of the mind without fully describing how that really differs from the soul. I think what he meant was a suspicion of the Western cult of objectivity and rationality, yet that was not clearly stated. The Jewish scholar spoke of her research on ritual baths without making clear connections to how it contributes to the dialogue between Religion and Science or to the connection between mind and soul.

The lunch talk was informative and hopeful, as I learned how Columbia Seminary students were being exposed to scientific knowledge through a speaker series. The hope here is that as they become pastors, they will become more engaged with science and this engagement will make its way to the pulpit and Sunday school classes. Such an initiative would be even more consequential in conservative evangelical seminaries, where science is often seen as the enemy of faith. It is an encouraging beginning nevertheless.

The after-lunch session turned out to be a call to action for engaging religious communities with Global Warming activism. Of the speakers in this session, I was impressed by Rabbi Kornblau's holistic approach to the Torah, which included a commitment to caring for the environment. I was disappointed by the Christian theologian's exploration of Eschatology and Ecology. While he made a valid point that his generation was less concerned with shifting worldviews than with moving to action, he missed an opportunity to develop the many connections between these two topics. Moreover, while I second their concern with Global Warming, I was looking for a discussion of current scientific developments in life extension. I was also hoping for an acknowledgement of the role of technology in their research.

I realize that the tone of my review is rather negative. I was expecting much more from a discussion on the future of life. As someone keenly interested in the dialogue between Technology and Religion, I am rather impatient with the slow pace of the dialogue between Religion and Science in academic circles. The latter lays the groundwork for the former. Yet, given its slow pace, we may be years away from a robust dialogue on the role of Religion in emerging technologies. I see a lot of preliminary discussion but very little in the way of actionable insights. I understand that this stems in part from the academic focus on research and theory. Even so, I find it unacceptable given the pace of change brought forth by emerging technologies (AI, VR and CRISPR, to name a few) on our humanity. While there are some institutions at the forefront of this dialogue (i.e.: Pittsburgh Seminary and the University of Durham), I was hoping the leading academic institutions of a growing metropolis like Atlanta would be making inroads in this area.

This leads me to believe that most insights and breakthroughs in this area will not come from Academia but from practitioners (pastors and technologists). Academic institutions will find themselves having to catch up with the new knowledge being uncovered by innovators in the field. This is unfortunate given academic institutions’ wealth of resources for research. I hope that changes but if what I saw on Saturday is any indication, Academia is a long way from leading in this dialogue.