Tech Community Centers: The Cure for Automation Anxiety

Automation anxiety is real. In a recent Pew survey, 72 percent of Americans worry about the impact of automation on their jobs. Meanwhile, automation is steadily becoming part of our lives: self-service checkouts in grocery stores, smartphone banking, robocalls, chatbots and many other examples. As a basic rule of thumb, any task that is simple and repetitive can be automated. The benefits for us as consumers are clear: convenience and lower prices. For workers at all levels, the story is altogether different, as many now worry about their livelihoods. AI-enabled applications could do the jobs of accountants, lawyers and managers. Automated robotic arms can destroy manufacturing jobs, and automated cars can make professional drivers obsolete.

How can this be addressed? Robert E. Litan of the Brookings Institution offers four practical suggestions for how governments and leaders can prepare their communities for automation:

  1. Ensure the economy is at full employment – This means keeping unemployment at around 4% or lower. Economies where everyone who wants to work is working will be better prepared to absorb the shocks of automation.
  2. Insure wages – Develop an insurance system for displaced workers so they have time to transition into new careers. Workers need time to adapt to new circumstances, and it is difficult to do so when they have no safety net to rely on.
  3. Finance lifetime learning – Fund loans that let workers attend educational institutions offering practical training for jobs in high-demand fields. This is not about pursuing new two- to four-year degrees but six-month to one-year certificates that can prepare workers for a new career.
  4. Target distressed places – Automation's impact will be uneven, so governments should focus their efforts on the areas of greatest need rather than enacting one-size-fits-all policies.

While the suggestions above are intended for governments, much of this advice can be applied by individuals. In short, individuals seeking to shield themselves from automation's impacts should save up, train and keep learning. They should also work with local organizations to help their neighbors. Strong communities are more likely to weather automation shocks, just as they weathered past disruptions. Here is how this could happen again.

Makerspaces

For centuries, communities were formed and held together by central spaces. Whether it was a place of worship, the town plaza, the mall or even the soccer pitch, communities were built and nourished as people gathered around common activities. Fast forward to our time, and communities small and large find themselves pulled apart by many forces. One of the main culprits is technology-enabled experiences that drive local populations to replace physical interactions with those mediated by machines. While online connections can at times translate into actual face-to-face interactions (apps like Meetup allow locals with shared interests to quickly assemble), the overall trend is local isolation even as global connections flourish. We are more likely to share commonalities with people across the globe than with those just across the street.

What if technology education could become a catalyst for strengthening local communities? Recently, makerspaces have been popping up in many US metro areas. They are non-profit, community-run spaces where people gather to build, learn and experiment with technology. Visiting one here in the Atlanta area, I discovered I could learn new skills ranging from knitting and welding to programming Arduinos. While classes are free, the spaces also offer paid memberships so members can get 24/7 access to the facility and storage space. I would describe it as a place that attracts people who are already tinkering with technology in their garages to do so in a group setting. This allows them to share ideas and also to pool their resources. In some cases, these centers have equipment that would be too expensive for an individual to buy and maintain at home. In this way, the makerspace enhances each member’s ability to learn, tinker and experiment with new types of technology such as 3D printers and laser cutters.

Tech Community Centers

In order to become a catalyst for community renewal, makerspaces need to expand their vision into becoming tech community centers: places where tinkerers, inventors, scientists and tech enthusiasts can come together and work for the betterment of their surrounding community. Basically, it is tinkering in community for a purpose: channeling the energy and knowledge of technical professionals into projects of community transformation. This starts with education and professional training but can go much further than that. What if these community centers could also be incubators for start-ups that create jobs in the community? What if they could also run projects creating apps, databases and predictive models for non-profits? The sky is the limit; all that is required is a desire to transform the community and a vivid imagination to dream up possibilities.

Are you up to the challenge? If you know of any local initiative like this or are considering starting one, please write in the comment section or reach out through our contact form. I would love to hear your story.

The Future of Service: How Google, Apple and Facebook are Using AI to Simplify Our Lives

Companies want satisfied customers who will come back for more and recommend their brands to others, and AI can help them achieve this goal. There are many ways in which people benefit from this, for instance getting quick replies to their questions.

Artificial intelligence is becoming “humanized” as it helps people in several ways. Whether it’s face recognition, dictating directly to our phones, shopping online, ordering food or self-driving cars, AI applications are making our lives easier.

Let’s take a look at three major enterprises and ways they use artificial intelligence to “make life easier” for their customers.

  1. Google

 

  • Google spent between $20 and $30 billion on artificial intelligence in 2016.

  • Google’s self-driving cars use AI to map roads and navigate them.

  • Google claims its voice recognition technology is 98% accurate.

  • YouTube increased watch time by 50% by tuning its video recommendations using AI.

  • Google Photos can recognize faces, create animations, or suggest a photo filter.

 

  2. Facebook

  • Facebook’s DeepText understands text with near-human accuracy.

  • Artificial intelligence is used to stop fake news from going viral.

  • Facebook uses deep neural networks for ad placement.

  • It has AI embedded into its Messenger app.

  • In 2017 it rolled out an AI project that could spot people with suicidal tendencies.

 

  3. Apple

  • Apple uses its Neural Engine for face recognition to unlock the phone and to transfer facial expressions onto animated emoji.

  • It uses deep learning to detect fraud on the App Store and for face detection.

  • Machine learning helps Apple choose news stories and recognize faces and locations in photos.

  • It is building an autonomous driving system that could be implemented in existing cars.

  • Apple’s Siri is a virtual personal assistant that communicates using a text-to-speech system.

These companies are just the tip of the iceberg, and many others, such as Sephora and Nordstrom, are also jumping on the AI bandwagon as they realize how beneficial it can be for their businesses. In the next five years, many more people will turn to artificial intelligence: 47% are expected to start using a home or family assistant, 46% a health coach, and 41% an AI-powered financial adviser.

The following statistics, together with the projection that worldwide spending on cognitive and AI systems will reach an astonishing $57.6 billion in 2021, show just how bright the future of artificial intelligence is.

  • 60% of retail and ecommerce brands will implement AI in 2018.

  • 100% of IoT initiatives will be supported by AI capabilities by 2019.

  • 20% of business content will be authored by machines in 2018.

  • 85% of customer interactions with the enterprise will be managed without human intervention by 2020.

The use of artificial intelligence is only going to expand in the coming years as more and more companies decide to adopt it. At this pace, chatbots will be indistinguishable from humans by 2029, at least according to famed futurist Ray Kurzweil.

While this is welcome news for customers, the question is how these companies will steward customer data. As AI takes a more prominent role, the need for data collection will only increase. Ensuring that this is done in an appropriate manner can be the difference between stellar customer service and costly lawsuits. The company that successfully balances privacy concerns while harnessing data through effective AI algorithms is poised to become a market leader.

Karthik Reddy, Community Manager at www.16best.net, is the author of India’s Number 1 travel blog. Boasting an MBA in computer science, he once decided to get away from the office desk life and take a breathtaking journey around the world. He is eager to use the power of the global network to inspire others. A passionate traveler and photography enthusiast, he aspires to share his experiences and help people see the world through his lens.

 

Who Will Win the AI Race? Part II: The European Way

In a previous blog, I compared the approaches of China and the US as they compete in the global AI race. In short, China has adopted a government-led approach while the US leans on a business-led approach. The European approach represents an attempt to balance business and government efforts in directing AI innovation, offering a third way to compete in the global AI race.

Great Britain recently announced a national AI strategy. Drawing on a mixture of private, academic and government resources, the country is pledging $1.4 billion in investment. A remarkable piece of the plan is the funding allocated for a Center for Data Ethics, which will develop codes for the safe and ethical use of machine learning. Another noteworthy part of the plan is the initiative to fund 1,000 new PhDs and 8,000 teachers for UK secondary schools. This move will not only spur further innovation but also ensure the British workforce is prepared to absorb the changes brought by AI developments. It is essential that governments plan ahead to prepare the next generation for the challenges and opportunities of emerging technologies like AI. In this area, the UK’s plan sets a good precedent for other countries to follow as they look for ways to prepare their workforces for future AI disruptions. Such moral leadership will not only guide European institutions but also help companies worldwide make better choices with their AI technologies. This perspective is essential to ensure AI development does not descend into an uncontrolled arms race.

 

In the European Union, France has also announced a national plan following a similar approach to the UK’s. Beyond a mix of private and government investment totaling 1.5 billion euros, the country is also setting up an open data approach that helps both businesses and customers. On one hand, businesses can look to a centralized place for data; on the other, customers get centralized transparency into how their data is collected and used. If executed well, this central data repository can provide quality data for AI models while ensuring privacy concerns are mitigated. The strategy also includes innovative ideas such as harnessing the power of AI to solve environmental challenges and a narrow focus on industries in which the country can compete. Similar to the British approach, the French plan also includes funding for an ethics center.

While Germany has not announced a comprehensive plan to date, the country already leads in AI within the automotive industry. Berlin is considered the fourth-largest hub for AI startups, and an area in southern Germany known as Cyber Valley is becoming a center of collaboration between academia and industry on AI. Even without a stated national strategy, the country is well positioned to be a hub of AI innovation for years to come.

These countries’ individual strategies are further bolstered by a regional strategy that aims to foster collaboration between countries. Earlier this year, the European Commission pledged 20 billion euros over the next two years for the 25-country bloc. It proposed a three-pronged approach: 1) increase investment in AI; 2) prepare for socio-economic changes; and 3) devise an appropriate ethical and legal framework. This holistic approach may not win the race, but it will certainly keep Europe as the moral leader in the field.

Conclusion

This short survey across these two blogs gives us a glimpse of the unfolding global AI race. The list here is not complete, but it represents three different types of approaches. On an axis of government involvement, China sits at one extreme (most) and the US at the other (least), with European countries somewhere in the middle. In all cases, advances in AI will come from education, government and private enterprise. Yet a nation’s ability to coordinate, focus and control the development of AI can be the difference between harnessing the coming technological revolution for the prosperity of its people and struggling to survive its disruptions. Unlike previous races, this one is not just about military supremacy. It touches every aspect of society and could become the dividing line between thriving and struggling nations.

Furthermore, how countries pursue this race can also have global impacts on the application of AI. This is where I believe the European model holds the most promise. The plans put forth by France and the UK could not only secure these countries’ geopolitical positions but could also benefit all nations. The regional approach and focus can yield significant fruit for the future. Tying AI development efforts to ethical principles and sound policy is the best way to ensure that AI will be used for human flourishing. I hope other countries follow their lead and start anticipating how they want AI to be used inside their borders. The true winner of the global AI race should not be any nation or region but humanity as a whole. Here is where France’s intention to use AI innovation to address environmental challenges is most welcome. When humanity wins, all countries benefit and the planet is better for it.

Who Will Win The Global AI Race? Part I: China vs USA

While the latest outrageous tweet by Kanye West fills up the news, a major development goes unnoticed: the global race for AI supremacy. Currently, many national governments are drafting plans to boost AI research and development within their borders. The opportunities are vast and the payoff significant. From a military perspective alone, AI supremacy could be the deciding factor in which country becomes the next superpower. Furthermore, an economy driven by a thriving AI industry can spur innovation across multiple industries while also boosting economic growth. On the flip side, a lack of planning in this area could lead to increasing social unrest as automation destroys existing jobs and workers find themselves excluded from AI-created wealth. There is simply too much at stake to ignore. In this two-part blog, I’ll look at how the top players in the AI race are planning to harness the technology to their advantage while also mitigating its potential dangers.

 

China’s Moonshot Effort for Dominance

China has bold plans for AI development in the coming years. The country aims to be the undisputed AI leader by 2030. It holds a distinct advantage in its ability to collect data from its vast population, yet it is still behind Western countries in algorithms and research. China does not have the privacy requirements that are standard in the West, which allows almost unfettered access to data. If data is the raw material of AI, then China is rich in supply. However, China is a latecomer to the race and therefore lacks the accumulated knowledge held by leading nations. The US, for example, started tinkering with AI technology as early as the 1950s. While the gap is not insurmountable, it will take a herculean effort to match and then overtake the current leaders among Western countries.

Is China up to the challenge? Judging by its current plan, the country has a shot. The ambitious strategy both acknowledges the areas where China needs to improve and outlines a plan to address them. At its center is a plan to develop a complete ecosystem of research, development and commercialization connecting government, academia and business. Within that, it includes plans to use AI to make the country’s infrastructure “smarter” and safer. Furthermore, it anticipates heavy AI involvement in key industries like manufacturing, healthcare, agriculture and national defense. The last one clearly raises concerns for neighboring countries that fear a rapid change in the Asian balance of power. Japan and South Korea will be following these developments closely.

China seeks to accomplish these goals through a partnership between government and large corporations. In this arrangement, the government has greater ability to control both the data and the process by which these technologies develop. This may or may not play to China’s advantage; only time will tell. Of all the plans, China’s has the longest range and, assuming the Communist Party remains in power, the advantage of continuity often missing from liberal democracies.

While portions of China’s strategy are concerning, the world has much to learn from the country’s moonshot effort in this area. Clearly, the Chinese government has realized the importance of this technology and its potential for the future of humanity, and it is now ensuring that the technology leads to a prosperous Chinese future. Developing countries would do well to learn from the Chinese example or see themselves once again politically subjugated by the nations that master these capabilities first. Unlike China, most of these nations cannot count on a vast population or a favored geopolitical position. The key for them will be to find niche areas where they can excel and to focus their efforts there.

US’ Decentralized Entrepreneurship

Uncle Sam sits in a paradoxical position in this race. While the undisputed leader, with an advantage in patents and an established ecosystem for research and development, the country lacks a clear plan from its government. This was not always the case. In 2016, the Obama administration was one of the first to spell out principles for public investment in the technology. The plan recognized that the private sector would lead innovation, yet it aimed to establish a role for the government in stewarding the development and application of AI. With the election of Donald Trump in 2016, that plan is now uncertain. No decision has been announced on the matter, so it is difficult to say what role the government will play in the future development of AI in the United States. While the current administration has kept investment levels untouched, there is no resolution on a future direction.

Given that many breakthroughs are happening in large American corporations like Google, Facebook and Microsoft, the US will undoubtedly play a role in the development of AI for years to come. However, a lack of government involvement could mean a lopsided focus on commercial applications. The danger in such a path is that common-good applications that do not yield a profit will be displaced by those that do. For example, the US could become the country with the most advanced gadgets while the majority of its population lacks access to AI-enabled healthcare solutions.

Another downside of a corporate-focused AI strategy is that these large conglomerates are becoming less and less tied to their nation of origin. Their headquarters may still be in the US, but much of the work and even the research is now being done in other countries. Even in the US offices, the workforce is oftentimes foreign-born. We can debate the merits and downsides of this development, but for a president who was elected to put “America first,” his administration’s disinterest in AI is quite ironic. This is even more pressing as other nations put together their strategies for harnessing the benefits and stewarding the dangers of AI. For a president who loves to tweet, his silence on this matter is rather disturbing.

The Bottom Line

China and the US are currently pursuing very different paths in the AI race. Without a clear direction from the government, the US is relying on private enterprise to lead progress in this field. Given the US’ current lead, such a strategy can work, at least in the short run. China is coming from the opposite side, where the government is leading the effort to coordinate and optimize the nation’s resources for AI development. China’s wealth of centralized data also gives it a competitive advantage, one it must leverage to make up for being a latecomer.

Will this be a battle between entrepreneurship and central planning? Both approaches have their advantages. The first counts on the ingenuity of companies to lead innovation. The competitive advantage conferred on AI leaders has huge upsides in profit and prestige, and it is this entrepreneurial culture that has driven the US to lead the world in technology and research. Hence, such a decentralized effort can still yield great results. On the flip side, a centralized effort, while possibly stifling innovation, has the advantage of focusing efforts across companies and industries. Given AI’s potential to transform numerous industries, this approach can also succeed and yield tremendous returns.

What is missing from both strategies is a holistic view of how AI will impact society. While there are institutions in the US working on this issue, the lack of coordination with other sectors can undermine even the best efforts. Somewhere between centralized planning and decentralized entrepreneurship, there must be a middle ground. That is the topic of the next blog, where I’ll talk about Europe’s AI strategy.

 

Travelers Theology: Wrestling With a Powerful AI God

Recently, I was browsing through new shows on Netflix when I stumbled upon Travelers. The premise seemed interesting enough to make me want to check it out. From the very first episode, I was hooked. Soon after, my wife watched the first episode and it became a family affair. Starring Eric McCormack (Will & Grace) and created by Brad Wright (Stargate), Travelers is a show about people from a distant future who come to the 21st century in an effort to change history and rewrite their present.

You may wonder, “Nothing new here; many shows and movies have explored this premise.” That is true. What makes Travelers unique is how the travelers arrive in the present and how the show explores emerging technologies in a thoughtful and plausible way. They travel back in time by sending the consciousness of people from the future into the bodies of those who are about to die in the 21st century. Having the benefit of knowing history allows them to pinpoint the exact time of arrival, which makes for some pretty interesting situations (a wife about to be killed by her abusive husband, a mentally challenged woman about to be attacked by robbers and a heroin addict about to overdose). The travelers then continue the lives of their “hosts,” making those around them believe they are still the same person who died.

Spoiler alert – the next paragraphs will openly discuss plots from the show

By the end of the first season, we learn about the pivotal role AI plays in the plot. Throughout the first episodes, the travelers keep talking about “the Director,” who has a “grand plan.” That becomes their explanation for carrying out missions even when they cannot understand why they are doing what they are told. They also follow six rules to ensure their behavior limits their interference in the past. At first, viewers assume they are talking about a person leading the effort. In the last episode of season one, we learn that “the Director” is actually a supercomputer (a quantum frame) able to consider millions of possible scenarios and therefore direct travelers to their assigned missions. We are really dealing with an AI god, one that is quasi-omniscient and demands humans’ trust and devotion.

Exploring Rich Religious Imagery

While the show explores religious imagery throughout, this aspect comes to the forefront in episode 8 of season two. In it, one of the travelers (aptly and ironically named Grace) is to be judged by three judges (programmers). The setting: a church. As they gather in the sanctuary, the “trinity” of programmers initiates proceedings under the watchful eye of the Director (through a tiny camera that records the event). Grace, an obnoxious traveler devoid of social skills, is charged with treason for taking action on her own initiative in direct challenge to the grand plan.

As the judgment unfolds, scenes that juxtapose the programmer judges with an empty cross in the background reinforce the explicit religious connection the writers are making. Throughout the hearings, Grace insists that her actions, even if unorthodox, were only to save the Director. Yet she is surprised to learn that the Director itself had summoned her judgment. She seems disappointed, wondering how the Director could judge her if it knew her intentions. This is an interesting assertion because it implies that the Director actually knew her thoughts, raising it to the level of a god.

Grace is found guilty by the programmer trinity and handed over to the Director for sentencing. The programmers speculate that she will be overwritten. That is the worst punishment: she would not only die in the 21st century, but her consciousness would cease to exist. It is the theological equivalent of eternal death or annihilation.

The next scene is probably one of the most profound and provocative of the whole show so far. Grace goes to a small room where she faces three large screens from which the Director will speak directly to her. This is the first time in the show that the audience gets to see the Director act by itself rather than through messengers.

While she is no longer in the sanctuary, the room still has an empty cross in the background and evokes the idea of a confessional booth. At that point, I was really curious to see how they would portray the Director. What kind of images would she see? Would it be the machine itself or something else?

What shows up on the screens is not a machine but human faces. They are all older and seem to be on some type of life support. At times they seem to represent Grace’s parents, though that is not made clear. In this climactic scene, Grace finds forgiveness from the Director and is not overwritten. The machine communicates divine qualities through human faces. Grace finds peace and absolution and reaffirms her trust in and devotion to the Director. In short, she experiences a theophany: a watershed personal moment that reveals a new facet of the divine being to a human receiver.

Photo by Bruno van der Kraan on Unsplash

A New Perspective on Omniscience

What to make of this? I must say that when I first learned of Levandowski’s efforts to create an AI religion, I discounted it as sensational journalism. Surely there is a fringe of techno-enthusiasts who would follow that path, but I could not see how such an idea could appeal to a wider audience. Seeing Travelers’ religious treatment of AI has made me re-think. Maybe an AI religion is not as far-fetched as I originally thought. An advanced AI bolstered by powerful hardware and connected to a vast digital history of information could indeed do a great job of optimizing timelines. That is, it could consider a vast number of scenarios in ways that are unfathomable to the human mind. This could make it quasi-omniscient in a way that could elicit a god-like trust from humans. One could say such an arrangement would be the triumph of secular science: replacing a mythical god with a technological one.

From a Judeo-Christian perspective, an AI god would be the epitome of human idolatry: people worshiping idols, except that the calf images are replaced by silicon superstructures that can actually hear, speak and think faster than any human. These would be idols on steroids. As a firm believer in the benefits of AI, I do worry about the human inclination toward idolizing tools. As a Christian, I owe my allegiance to a transcendent God. AI can be a formidable tool but nothing more.

Yet the prospect of an AI god is still interesting in that it may help us understand a transcendent God better. How so, you may ask? Religion is often defined by powerful metaphors. For some monotheistic faiths, God is a father. Such a metaphor has obvious benefits, as it elicits images of authority, provision and comfort. I wonder if using a powerful AI as a metaphor could reveal aspects of divinity that we have not explored before.

In a previous blog, I suggested that AI offered religion a paradigm of partnership as opposed to blind obedience. Reflecting on Travelers’ portrayal of an AI god sheds light on the aspects of God’s omniscience and wisdom. A timeless being with infinite “processing capacity” could very well consider all the possible alternatives and choose the one that leads to the best outcome (however that best is defined). In computer science terms, the best is defined by an objective function: basically, the goal you are trying to achieve.
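
For readers unfamiliar with the term, here is a minimal sketch of an objective function at work. The scenarios and the scoring rule are invented purely for illustration; the point is only that “best” means whatever maximizes the objective we chose:

```python
# Toy illustration: an objective function scores each candidate outcome,
# and the "best" scenario is simply the one with the highest score.
def objective(scenario):
    # Here "best" is defined as benefit minus harm. Defining the objective
    # is a choice made outside the search; the search merely optimizes it.
    return scenario["benefit"] - scenario["harm"]

scenarios = [
    {"name": "A", "benefit": 8, "harm": 5},
    {"name": "B", "benefit": 6, "harm": 1},
    {"name": "C", "benefit": 9, "harm": 7},
]

best = max(scenarios, key=objective)
print(best["name"])  # -> B, the outcome the objective function ranks highest
```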

How is that different from previous views of omniscience and wisdom? In the past, omniscience was seen as the idea that God knows what decisions we will make and therefore ultimately knows the future. In some traditions, this idea was amplified into the concept of predestination. The problem with such an approach is that it limits God to one outcome and makes humans automatons. In other words, there is really no choice or risk; everything is pre-determined from the beginning. I suspect this view of God was heavily based on our own human minds, which cannot consider more than one scenario for the future at a time.

What if God’s omniscience were more like the super AI’s knowledge: able to simultaneously consider multiple outcomes and then guide events toward the better one, or correct course when that path is undermined? Wouldn’t that be a fuller view of omniscience? This scenario allows for human choice while still attributing superior knowledge and control to God. Furthermore, this metaphor reveals a “smarter” God who is not bound by the one-track linear thinking of humans. Humanity realizes that its choices matter and can create alternative futures. Even so, people still have the comfort of a God who can see through all of this and guide it from a perspective that considers manifold outcomes.

Such God would certainly be worthy of human obedience, awe and praise.

“Do You Trust This Computer?”: A Wake Up Call on Superintelligence

It is not every day that Elon Musk endorses and partially funds a documentary. Needless to say, when that happens, anyone tracking the tech industry takes notice. In “Do You Trust This Computer?”, Chris Paine brings together experts, journalists and CEOs from the tech industry and academia to make a compelling point about the dangers of superintelligence for humanity’s future. In this blog, I will review the documentary and offer some thoughts on how we might respond to the issues it raises.

 

 

In an age of misguided attention, I welcome any effort to raise awareness of the social impacts of AI. While AI has gained notoriety recently, there has been little thoughtful discussion of its impacts. I believe this documentary does exactly that and for that reason alone, I encourage everyone to watch it.

Surprisingly, the documentary did not uncover any new information. Most of the examples cited have been mentioned in other media discussing AI. The documentary contributes to the discussion not because of its content per se but because of how it frames the issue of superintelligence. Many of us have heard of the singularity, the rise of killer AI, the death of privacy through Big Data and the dangers of automated weapons. Chris Paine’s genius was to bring those issues together into a cohesive argument that shows both the plausibility and the danger of the rise of superintelligence. The viewer comes away with greater clarity and awareness on the subject.

Compelling but Incomplete

In short, Paine argues that if we develop AI without proper safeguards, it could literally destroy us as a species. It wouldn’t do that intentionally but in the course of maximizing its goal. The example he gives is how we humans have no qualms about removing an ant mound that is in the way of building a path. Superintelligent entities would look at us with the same regard we give ants and would therefore lack any human-centered ethical norms. Beyond that, he also touches on other topics: impending job elimination, Big Data’s impact on our lives and the danger of automated weapons. While the documentary is not overly alarmist, it does challenge us to take these matters seriously and encourages conversation at multiple levels of society.

In spite of its compelling argument, I found the treatment of the topic lacking in some respects. For one, the film could have explored further how AI can lead to human flourishing and economic advancement. While at times it touched on the potential of AI, those bits were overshadowed by the parts that focused on its dangers. I wish it had discussed how, just like previous emerging technologies, AI will not only eliminate jobs but also create new industries and economic ecosystems. Surely its impact is bound to create winners and losers. However, to overlook its potential for job creation does a disservice to the goal of an honest dialogue about our AI future.

Moreover, the rise of artificial superintelligence, though likely, is far from a certainty. At one point, one of the experts talked about how we have become numb to the dangers of AI, primarily because of Hollywood’s exhaustive exploitation of this theme. That was a great point; however, that skepticism may not be completely unfounded. AI hype has happened before, and so has an AI winter. In the early ’60s, many already predicted a takeover by robots as AI technology had just entered the scene. It turned out that technical challenges and hardware limitations slowed AI development enough that government and business leaders lost interest in it. This was the first AI winter, lasting from the mid-’70s to the mid-’90s. This historical lesson is worth remembering because AI is not the only emerging technology competing for funding and attention at this moment.

Exposing The Subtle Impact of AI

I certainly hope that leaders in business and politics are heeding Chris Paine’s warnings. My critique above does not diminish the importance of the threat posed by superintelligence. However, most of us will not be involved in this decision process. We may be involved in choosing who will be at the table, but not in the decision-making itself. So, while this issue is very important, we as individual citizens will have little agency in setting the direction of superintelligence development.

With that said, the documentary does a good job of discussing the more subtle impacts of AI on our daily lives. That, to me, turned out to be its best contribution to the AI dialogue because it helped expose how many of us are unwilling participants in the process. Because AI lives and dies on data, data collection practices are highly consequential to the future of its development. China is leaping ahead in the AI race primarily because of its government’s ability to collect personal data with little to no restriction. More recently, the Facebook-Cambridge Analytica scandal exposed how data collection done by large corporations can also be unethical and harmful to our democracy.

Both examples show that centralized data collection efforts are ripe for abuse. The most consequential act we can take in the development of AI is to be more selective about how, and to whom, we give personal data. Moreover, as consumers and citizens, we must ensure we share in the benefits our data creates. This process of data democratization is the only way to keep effective controls on how data is collected and used. As data collection decentralizes, the risk of an intelligence monopoly decreases and the benefits of AI can be more equitably shared among humanity.

Moreover, it is time we start questioning the imperative of digitization. Should everything be tracked through electronic devices? Some aspects of our analog world are not meant to be digitized and processed by machines. The challenge is to define these boundaries and ensure they are kept out of reach of intelligent machines. This is an important question to ask as we increasingly use our smartphones to record every aspect of our lives. In this environment, writing a journal by hand, having unrecorded face-to-face conversations and taking a technology sabbatical can all be effective acts of resistance.

Hybrid Intelligence: When Machines and Humans Work Together

In a previous blog, I argued that the best way to look at AI was not from a machine-versus-human perspective but from a human PLUS machine paradigm. That is, the goal of AI should not be replacement but augmentation. Artificial intelligence should be about enhancing human flourishing rather than simply automating human activities. Hence, I was intrigued to learn about the concept of HI (Hybrid Intelligence). HI is basically a manifestation of augmentation in which human intelligence works together with machine intelligence toward a common goal.

As usual, the business world leads in innovation, and this case is no different. Cindicator is a startup that combines the collective intelligence of human analysts with machine learning models to make investment decisions. Colin Harper puts it this way:

Cindicator fuses together machine learning and market analysis for asset management and financial analytics. The Cindicator team dubs this human/machine predictive model Hybrid Intelligence, as it combines artificial intelligence with the opinions of human analysts “for the efficient management of investors’ capital in traditional financial and cryptomarkets.”

This is probably the first enterprise to tackle investment management with an explicitly hybrid approach. You can find other examples in which investment decisions are driven mostly by analysts, and others that rely mostly on algorithms. This approach seeks to combine the two for improved results.

How Does Hybrid Intelligence Work?

One could argue that any example of machine learning is, at its core, hybrid intelligence. There is some truth to that. Every exercise in machine learning requires human intelligence to set it up and tune the parameters. Even as some of these tasks are now being automated, one could argue that the human imprint of intelligence is still there.

Yet, this is different. In the Cindicator example, I see a deliberate effort to harness the best of both machines and humans.

On the human side, the company is harnessing the wisdom of crowds by aggregating analysts’ insights. This matters because machine learning can only learn from data, and not all information is data. Analysts may have inside information that is not visible in the data world and can therefore bridge that gap. Moreover, human intuition is not (yet) present in machine learning systems. Certain signals require a sixth sense that only humans have. For example, a human analyst may catch deceptive comments from company executives that would go unnoticed by algorithms.

On the machine side, the company developed multiple models to uncover predictive patterns in the available data. This is important because humans can only consider a limited number of scenarios. That is one reason AI has beaten humans in games: it can consider millions of scenarios in seconds, while its human counterparts have to rely on experience and hunches. Moreover, machine learning models are superior tools for finding significant trends in vast datasets, trends that humans would often overlook.
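
As a toy illustration of the idea (and not Cindicator’s actual system), a hybrid forecast might simply blend the analysts’ average with the model’s output. The function, weights and numbers below are invented for the example:

```python
# Minimal sketch of hybrid intelligence: blend crowd wisdom with a model forecast.
def hybrid_forecast(analyst_forecasts, model_forecast, analyst_weight=0.5):
    """Combine the average of human analysts' forecasts with a machine model's forecast."""
    crowd = sum(analyst_forecasts) / len(analyst_forecasts)  # wisdom of the crowd
    return analyst_weight * crowd + (1 - analyst_weight) * model_forecast

# Example: three analysts expect a 4-6% price move; the model expects 2%.
print(hybrid_forecast([0.04, 0.05, 0.06], 0.02))  # -> 0.035
```

In a real system, the blend itself would likely be learned, weighting each analyst and model by its track record.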

Image by Gerd Altmann from Pixabay

Can Hybrid Intelligence Lead to Human Flourishing?

HI holds much promise for augmenting rather than replacing human intelligence. At its core, it starts from the principle that humans can work harmoniously with intelligent machines. The potential uses are limitless. An AI-aided approach can supercharge research into cures for diseases, offer innovative solutions to environmental problems and even tackle intractable social ills with humane solutions.

This is the future of work: collective human intelligence partnering with high-performing Artificial Intelligence to solve difficult problems, create new possibilities and beautify the world.

Much is said about how many jobs AI will replace. What is less discussed is the emergence of new industries made possible by the partnership between intelligent machines and collective human wisdom. A focus on job losses assumes an economy of scarcity where a fixed amount of work is available to be filled by either humans or machines. An abundance perspective looks at the same situation and sees the empowerment of humans to reach new heights. Think about how many problems remain to be solved, how many endeavors are yet to be pursued, and how much innovation is yet to be unleashed.

Is this optimistic future scenario inevitable? Not by a long shot. The move from AI to HI will take time, effort and many failures. Yet looking at AI as an enabler rather than a threat is a good start. In fact, I would say that the best response to the AI threat is not a return to a past of dumb machines but a partnership between machines and humans steering innovation toward the flourishing of our planet. Only HI can steer AI towards sustainable flourishing.

There is work to do, folks. Let’s get on with the business of creating HI for a better world!

Blockchain and AI: Powerful Combination or Concerning Trend?

Bitcoin is all over the news lately. After the meteoric rise of these financial instruments, the financial world is both excited and fearful: excited to get on the bandwagon while it is on the rise, but scared that this could be another bubble. Even more interesting has been the rise of blockchain, the underlying technology that enables Bitcoin to run (for those wondering what this technology is, check out this video). In this blog, I reflect on the combination of AI and blockchain by examining an emerging startup in the field. Can AI and blockchain work together? If so, what types of applications could this combination be used for?

Recently, I came across this article from Coincentral, which starts to answer the questions above. In it, Colin Harper interviews the CEO of Deep Brain Chain, one of the first startups attempting to bring AI and blockchain technology together. DBC’s CEO He Yong puts it this way:

DBC will allow AI companies to acquire data more easily.  Because a lot of data are private data.  They have heavy security, such as on health and on finance.  For AI companies, it’s almost next to impossible to acquire these data.  The reason being, these data are easy to copy.  After the data is copied, they’re easy to leak.  One of DBC’s core features is trying to solve this data leakage issue, and this will help AI companies’ willingness to share data, thus reducing the cost you spend on data, to increase the flow of data in society. This will expand and really show the value of data.

As somebody who works within a large company using reams of private data, I can definitely see the potential of this combination. Blockchain can provide, through encryption, the privacy that could facilitate the exchange of data between private companies. Not that this does not happen already, but it is certainly discouraged given the issues the CEO raises above. Certainly, the upside of aggregating this data for predictive modeling is significant: companies would have more complete datasets, allowing them to see sides of the customer that are otherwise invisible to them.
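
As a toy illustration of one property such a system relies on (and not Deep Brain Chain’s actual design), chaining record hashes makes any later tampering with shared data detectable. Every name and field below is invented for the example:

```python
# Minimal sketch of a hash chain: each block's hash depends on its record
# and on the previous block's hash, so altering any record breaks the chain.
import hashlib
import json

def add_block(chain, record):
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    payload = json.dumps(record, sort_keys=True) + prev_hash
    chain.append({
        "record": record,
        "prev_hash": prev_hash,
        "hash": hashlib.sha256(payload.encode()).hexdigest(),
    })

chain = []
add_block(chain, {"dataset": "claims_2017", "owner": "hospital_a"})
add_block(chain, {"dataset": "claims_2018", "owner": "hospital_b"})
# Recomputing the hashes later reveals whether any shared record was altered.
```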

However, as a citizen, such a development also makes me ponder. Who will get access to this shared data? Will this be done in a transparent way so that regulators and the general population can monitor the process? Who will really benefit from this increased exchange of private data? While I agree that efficiency would improve and costs would decrease, my main concern is the aims for which this data will be used. Don’t get me wrong: targeted marketing that follows privacy guidelines can actually benefit everybody, and richer data can also help a company improve customer service.

With that said, the way He Yong describes it, this combination looks set to primarily benefit large private companies that will use the data for commercial aims. Is this really the promise of an AI and blockchain combination: allowing large companies to know even more about us?

Further in the interview, He Yong suggests that blockchain could actually help assuage fears that AI could get out of control:

Some people claim that AI is threatening humanity.  We think that blockchain can stop that, because AI is actually not that intelligent at the moment, so the threat is relatively low.  But in a decade, two decades, AI will be really strong, a lot stronger than it is now.  When AI is running on blockchain, on smart contracts, we can refrain AI.  For example, we can write a blockchain algorithm to restrain the computational power of AI and keep it from acting on its own.

Given my limited knowledge of blockchain, it is difficult to evaluate whether this is indeed the case. I still believe that the biggest threat of AI is not the algorithms themselves but how they are used. Blockchain, as described here, can help make the process more robust, giving human controllers more tools to halt algorithms gone haywire. Yet beyond that, I am not sure how much it can act as a true “check and balance” on AI.

I’ll be monitoring this trend over the next few months to see how it develops. Certainly, we’ll see more and more businesses emerge seeking to marry blockchain and AI. These two technologies will disrupt many industries on their own; combining them could be even more impactful. I am interested to see whether they can be combined for human flourishing. That remains to be seen.

The Machine Learning Paradigm: How AI Can Teach Us About God

It is no secret that AI is becoming a growing part of our lives and institutions. There is no shortage of articles touting the dangers (and, occasionally, the benefits) of this development. What is less publicized is the very technology that enables the growing adoption of AI, namely machine learning (ML). While ML has been around for decades, its flourishing depended on advanced hardware capabilities that have only recently become available. While we tend to focus on sci-fi-like scenarios of AI, it is machine learning that is most likely to revolutionize how we do computing by enabling computers to act more like partners than mere servants in the discovery of new knowledge. In this blog, I explain how machine learning is a new paradigm for computing and use it as a metaphor to suggest how it can change our view of the divine. Who says technology has nothing to teach religion? Let the skeptics read on.

What is Machine Learning?

Before explaining ML, it is important to understand how computer programming works. At its most basic level, programs (or code) are sets of instructions that tell the computer what to do given certain conditions or inputs from a user. For example, in the WordPress code for this website, there is an instruction to publish this blog to the World Wide Web once I click the “Publish” button in my dashboard. All the complexity of putting this text onto a platform that can be seen by people all over the world is reduced to lines of code that tell the computer and the server how to do that. The user, in this case me, knows nothing of that except that when I click “Publish,” I expect my text to show up at a web address. That is the magic of computer programs.

Continuing with this example, it is important to realize that this program was once written by a human programmer. He or she had to think about the user and their goals and about the complexity of making that happen in a computer language. The hardware, in this scenario, was simply a blind servant that followed the instructions given to it. While we may think of computers as smart machines, they are only as smart as they are programmed to be. Remove the instructions contained in the code and the computer is just a box of circuits.

Let’s contrast that with the technique of machine learning. Consider that you want to write a program for your computer to play, and consistently win, an Atari game of Pong (I know, not the best example, but when you are preparing a camp for middle schoolers, that is the only example that comes to mind). The programming approach would be to play the game yourself many times to learn strategies for winning. Then you would write those strategies down and codify them in a language the computer can understand, spending countless hours spelling out the possible scenarios and what the computer is supposed to do in each one of them. Just writing about it sounds exhausting.
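
To make the contrast concrete, here is a minimal sketch of what that hand-programmed approach might look like. The function and its rules are purely illustrative, not code from any real Pong implementation:

```python
# Rule-based play: every behavior is a rule a human wrote out explicitly.
def rule_based_move(ball_x, paddle_x):
    """Return -1 (move left), 0 (stay) or +1 (move right)."""
    if ball_x < paddle_x:   # ball is to the left, so chase it left
        return -1
    if ball_x > paddle_x:   # ball is to the right, so chase it right
        return 1
    return 0                # already lined up, stay put
```

Every situation the programmer did not anticipate requires yet another hand-written rule.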

Now compare that with an alternative approach in which the computer actually plays the game and maximizes its score based on past playing experience. After some initial coding, the rest of the work falls to the computer, which plays the game millions of times until it reaches a level of competency where it wins consistently. In this case, the human outsources the game playing to the computer and only monitors the machine’s progress. Voilà: that is the magic of machine learning.
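
Below is a minimal sketch of that learning approach, using tabular Q-learning on a toy “catch the falling ball” game standing in for Pong. The environment, state encoding and hyperparameters are all illustrative assumptions, not production code:

```python
import random

class CatchGame:
    """Toy stand-in for Pong: a ball falls down a 5x5 grid; the paddle moves left or right."""
    WIDTH, HEIGHT = 5, 5

    def reset(self):
        self.ball_x = random.randrange(self.WIDTH)
        self.ball_y = 0
        self.paddle_x = self.WIDTH // 2
        return (self.ball_x, self.ball_y, self.paddle_x)

    def step(self, action):  # action: -1 left, 0 stay, +1 right
        self.paddle_x = min(self.WIDTH - 1, max(0, self.paddle_x + action))
        self.ball_y += 1
        done = self.ball_y == self.HEIGHT - 1
        reward = (1 if self.ball_x == self.paddle_x else -1) if done else 0
        return (self.ball_x, self.ball_y, self.paddle_x), reward, done

ACTIONS = [-1, 0, 1]
q_table = {}  # (state, action) -> estimated value, learned from play

def q(state, action):
    return q_table.get((state, action), 0.0)

def train(episodes=20000, alpha=0.1, gamma=0.9, epsilon=0.1):
    env = CatchGame()
    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            # Explore occasionally; otherwise exploit the best known action.
            if random.random() < epsilon:
                action = random.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: q(state, a))
            next_state, reward, done = env.step(action)
            # Q-learning update: nudge the estimate toward reward + discounted future value.
            best_next = max(q(next_state, a) for a in ACTIONS)
            q_table[(state, action)] = q(state, action) + alpha * (
                reward + gamma * best_next - q(state, action)
            )
            state = next_state

train()
```

After enough episodes, the table of action values encodes a winning strategy that no human wrote down explicitly; the programmer only set up the game and the learning rule.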

A New Paradigm for Computing

As the example above illustrates, Machine Learning changes the way we do computing. In a programming paradigm, the computer is following detailed instructions from the programmer. In the ML paradigm, the learning and discovery is done by the algorithm itself. The programmer (or data scientist) is there primarily to set the parameters for how the learning will occur as opposed to giving instructions for what the computer is to do. In the first paradigm, the computer is a blind servant following orders. In the second one, the computer is a partner in the process.

There are great advantages to this paradigm. Probably the most impactful is that the computer can now learn patterns that would be impossible for the human mind to detect. This opens space for new discoveries that were previously inaccessible when the learning was restricted to the human programmer.

The downside is also obvious. Since the learning is done by the algorithm, it is not always possible to understand why the computer arrived at a certain conclusion. For example, last week I watched the Netflix documentary on the recent triumph of a computer against a human player in the game of Go. It is fascinating and worth watching in its own right. Yet I found it striking that the creators of AlphaGo could not always tell why the computer was making a certain move. At times, the computer seemed delusional to human eyes. Therein lies the danger: as we transfer the learning process to the machine, we may be at the mercy of the algorithm.

A New Paradigm for Religion

How does this relate to religion? Interestingly enough, these contrasting paradigms in computing shed light on how, in a religious context, we describe the relationship between humans and God. As the foremost AI pastor Christopher Benek once said: “We are God’s AI.” Following this logic, we can see how moving from a paradigm of blind obedience to one of partnership can have revolutionary implications for understanding our relationship with the divine. For centuries, the tendency was to see God as the absolute monarch demanding unquestioning loyalty and unswerving obedience from humans. This paradigm, unfortunately, has also been at the root of many abusive practices by religious leaders. It is especially dangerous when the line between God and the human leader is blurry; in that case, unswerving obedience to God can easily be mistaken for blind obedience to a religious leader.

What if, instead, our relationship with God could be described as a partnership? Note that this does not imply an equal partnership. However, it does suggest an interaction between two intelligent beings who have separate wills. What would it be like for humanity to take responsibility for its part in this partnership? What if God is waiting for humanity to do so? The consequences of this shift could be transformative.

4 Reasons Why We Should be Teaching AI to Kids

In a previous blog, I talked about a multi-disciplinary approach to STEM education. In this blog, I want to explore how teaching AI to kids can accomplish those goals while also introducing youngsters to an emerging technology that will greatly impact their future. If you are a parent, you may be asking: why should my child learn about AI? Recently, the importance of STEM education has been emphasized by many stakeholders. Yet what about learning AI makes it different from other STEM subjects?

First, it is important to define what learning AI means. Lately, the term AI has been used for any instance in which a computer acts like a human, ranging from the automation of tasks all the way to humanoids like Sophia. Are we talking about educating children to build sentient machines? No, at least not at first. The underlying technology that enables AI is machine learning. Simply put, as hinted by its name, these are algorithms that allow computers to learn directly from data or from interaction with an environment rather than through programming. This is not a completely automated process, as the data scientist and/or developer must still manage the learning process. Yet, at its essence, it is a new paradigm for how to use computers. We go from programming, in which we instruct the computer to carry out tasks, to machine learning, in which we feed the computer data so it can discover patterns and learn tasks on its own. The question, then, is why should we teach AI (machine learning) to kids?
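
As a small illustration of that shift (assuming Python with scikit-learn installed; the tiny fruit dataset is invented for the example), the “rules” below are discovered from data rather than written by hand:

```python
from sklearn.tree import DecisionTreeClassifier

# Each example: [weight in grams, surface (0 = smooth, 1 = bumpy)]
features = [[140, 0], [130, 0], [150, 1], [170, 1]]
labels = ["apple", "apple", "orange", "orange"]  # what we want the computer to learn

model = DecisionTreeClassifier()
model.fit(features, labels)       # the computer finds the pattern itself

print(model.predict([[160, 1]]))  # -> ['orange'], learned rather than programmed
```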

Exposes Them to Coding

Teaching AI to kids starts with coding. While we’ll soon have advanced interfaces for machine learning, some of which will allow a “drag-and-drop” experience, for now doing machine learning requires coding. That is good news for educational purposes. I don’t need to rehash here the benefits of coding education. In recent years, there has been a tremendous push to get children to start coding early. Learning to code introduces them to a type of thinking that will help them later in life even if they do not become programmers. It requires logic and mathematical reasoning that can be applied to many endeavors.

Furthermore, Generation Z grew up with computers, tablets and smartphones. They are very comfortable using them and incorporating them into their world. Yet, while large tech companies have excelled in ensuring no child is left without a device, we have done a poor job of helping kids understand what is under the hood of all this technology they use. Learning to code is a way to do exactly that: lift the hood so they can see how these things work. Doing so empowers them to become creators with technology rather than mere consumers.

Works Well With Gaming

The reality is that AI started with games. One of the first experiments in AI was making a computer learn to play checkers. Hence, AI and gaming are natural complements. While there are now courses that teach children to build games, teaching AI goes a step further: kids actually get to teach the computer to play games. This is important because games are a familiar part of their world. Teaching AI with games helps them engage with the topic by bringing it into territory their imagination already knows.

I suspect that gaming will increasingly become part of education in the near future. What was once the scourge of educators is turning out to be an effective tool for engaging children in the learning process: there are clear objectives, instant rewards and challenges to overcome. Teaching machine learning with games rides this wave and enhances it by giving kids an opportunity to fine-tune learning algorithms with objectives that captivate their imagination.

Promotes Data Fluency

Data is the electricity of the 21st century. Helping children understand how to collect, examine and analyze data sets them up for success in the world of big data. We are moving toward a society where data-driven methods increasingly shape our future. Consider, for example, how data is transforming fields like education, criminal courts and healthcare. This trend shows no signs of slowing down in the near future.

Nor will this trend be limited to IT jobs. As sensors become more advanced, data collection will happen in ever more forms. Soon fitness programs will be informed, shaped and measured by body sensors that can provide more precise information about our bodies’ metabolism. Sports like baseball and football are already being transformed by the use of data. Thus, it is not far-fetched to assume that today’s children will eventually work in jobs, or build businesses, that live on data. They may not all become data scientists or analysts, but they will likely need to be familiar with data processes.

Opens up Discussions About Our Humanity

Because AI looms large in science fiction, the topic opens the way for discussions in literature, ethics, philosophy and social studies. The development of AI forces us to reconsider what it means to be human. Hence, I believe it provides a great platform for adding the humanities to an otherwise robust STEM subject. AI education can and should include a strong component of reading and writing.

Doing so develops critical thinking and also helps kids connect the “how” with the “why.” It is not enough to learn how to build AI applications; the more important question is why we should do it. What does it mean to outsource reasoning and decision-making to machines? How much automation can happen without compromising human flourishing? You may think these are adult questions, but we underestimate our children’s ability to reflect deeply on the destiny of humanity. They, more than us, need to think about these issues, for they will inherit this world.

If we can start with them early, maybe they can make better choices and clean up the mess we have made. Also, teaching AI to kids can be a lot easier than we think.