AI for Scholarship: How Machine Learning can Transform the Humanities

In a previous blog, I explored how AI will speed up scientific research. In this blog, I will examine the overlooked potential that AI has to transform the Humanities. The connection may not be obvious at first, since most of these fields do not include an element of science or math. They are more preoccupied with developing theories than with testing hypotheses through experimentation. Subjects like Literature, Philosophy, History, Languages and Religious Studies (and Theology) rely heavily on the interpretation and qualitative analysis of texts. In such an environment, how could mathematical algorithms be of any use?

Before addressing that question, we must first look at the field of Digital Humanities, which created a bridge from ancient texts to modern computation. The field dates back to the 1930s, before the emergence of Artificial Intelligence. Ironically, and fittingly for this blog, the first project in the area was a collaboration between an English professor, a Jesuit priest and IBM to create a concordance of Thomas Aquinas’ writings. As digital technology advanced and texts became digitized, the field has continued to grow in importance. Its purpose is both to apply digital methods to the Humanities and to reflect on their use. That is, its practitioners are not only interested in digitizing books but also in evaluating how the digital medium affects human understanding of these texts.

Building on the foundation of Digital Humanities, the connection with AI becomes clear. Once computers can ingest these texts, text mining and natural language processing become possible. With recent advances in machine learning algorithms, cheaper computing power and the availability of open-source tools, the conditions are ripe for an AI revolution in the Humanities.

How can that happen? Machine learning combined with natural language processing can open avenues of meaning that were not accessible before. For centuries, these academic subjects have relied on the accumulated analysis of texts performed by humans. Yet human capacity to interpret, analyze and absorb texts is finite. Humans do a great job of capturing meaning and nuance in texts of hundreds or even a few thousand pages. As the volume grows beyond that, machine learning can detect patterns that are not apparent to a human reader. This can be especially valuable in applications such as author attribution (determining who the writer was when that information is unclear or in question), analysis of cultural trends, semantics, tone and the relationships between disparate texts.
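To make the author-attribution idea concrete, here is a minimal, purely illustrative sketch in Python. It uses a classic stylometric signal – the relative frequency of common function words – to guess which known author a disputed text most resembles. All the texts and author names below are invented toys; real studies would use far longer texts and much richer features:

```python
from collections import Counter
import math

# Hypothetical mini-corpus: two "authors" with known texts and one disputed text.
known = {
    "author_a": "the whale and the sea and the ship sailed on the sea",
    "author_b": "to be or not to be that is the question to ponder",
}
disputed = "the ship and the whale sailed the sea"

# Stylometric features: relative frequencies of common function words.
FUNCTION_WORDS = ["the", "and", "to", "of", "a", "in", "that", "is", "on", "or"]

def profile(text):
    """Return the function-word frequency vector of a text."""
    counts = Counter(text.lower().split())
    total = len(text.split())
    return [counts[w] / total for w in FUNCTION_WORDS]

def cosine(u, v):
    """Cosine similarity between two frequency vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

target = profile(disputed)
scores = {name: cosine(profile(text), target) for name, text in known.items()}
best = max(scores, key=scores.get)
print(best)
```

The same basic move – turn each text into a feature vector, then compare vectors – underlies much of the pattern detection described above, just at far greater scale.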

Theology is a field particularly well positioned to benefit from this combination. For those unfamiliar with theological studies, it is a long and lonely road. Brave souls aiming to master the field must undergo more schooling than physicians: in most cases, aspiring scholars must complete a five-year doctorate program on top of two to four years of master’s-level studies. Part of the reason is that the field has accumulated an inordinate amount of primary sources and countless interpretations of those texts, written in multiple ancient and modern languages and spanning thousands of years. In short, when reams of texts become Big Data, machine learning can do wonders to synthesize, analyze and correlate large bodies of writing.

To be clear, this does not mean that machine learning will replace painstaking scholarly work. Quite the opposite: it has the potential to speed up and automate some tasks so scholars can focus on the high-level abstract thinking where humans still hold a vast advantage over machines. If anything, it should make their lives easier and possibly shorten the time it takes to master the field.

Along these lines of augmentation, I am thinking about a possible project. What if we could apply machine learning algorithms to a theologian’s body of work and compare the results with the scholarship that interprets it? Could we find new avenues of meaning that complement or challenge the prevailing scholarship on the topic?

I am curious to see what such an experiment could uncover.

Intelligence for Leadership: AI in Decision Making

Kings have advisors, presidents have cabinets, CEOs have boards and TV show hosts have writers – every public figure relies on a cadre of trusted advisors for making decisions. Whenever a crucial decision is made, an army of astute specialists has spent countless hours researching, studying and preparing to communicate the most essential information to the decision maker. Without them, leaders would lead by instinct alone and would often get it wrong. What if these advisors were not only human but also AI-enabled decision systems?

This is what the Modeling Religion Project is doing. Developed by a group of scientists, philosophers, and religion scholars, the project consists of a computer simulation populated by “virtual agents” mimicking the characteristics and beliefs of a country’s population. The model is then fed evidence-based social science tendencies of human behavior under certain conditions. For example, a sudden influx of foreigners may increase the probability of hostility from native groups.

Using this initial state as a baseline, the researchers experiment with different scenarios to evaluate the effects of changes in the environment. Levers for change include adding newcomers, investing in education and changing economic policy, among other factors. The model then simulates the outcomes of those changes, allowing scholars and policy makers to understand the effects of decisions or trends in a nation. While the work focuses on religion, its findings have broad implications for other social sciences such as Psychology, Sociology and Political Science. One of their primary goals is to better understand what factors affect the level of religious violence. The government of Norway is about to put the models to the test, hoping to use the model’s insights to better integrate refugees into the nation.
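As a rough illustration of how such agent-based “what if” experiments work, here is a toy model I made up (it is not the project’s actual code, and every number in it is invented). Each virtual agent gets an education level, an evidence-based tendency is encoded as a simple rule – here, that hostility rises with a newcomer influx and falls with education – and two scenarios are compared:

```python
import random

# Toy agent-based simulation (illustrative only, not the Modeling Religion
# Project's actual model). Each agent has an education level; the chance an
# agent turns hostile rises with the size of a newcomer influx and falls
# with education.

def run_scenario(n_agents=1000, influx=0.2, education_boost=0.0, seed=42):
    rng = random.Random(seed)  # fixed seed so scenarios are comparable
    education = [rng.random() + education_boost for _ in range(n_agents)]
    # An agent turns hostile with probability influx / (1 + education level).
    hostile = sum(1 for edu in education if rng.random() < influx / (1.0 + edu))
    return hostile / n_agents  # fraction of the population turned hostile

baseline = run_scenario()                     # no policy intervention
educated = run_scenario(education_boost=0.5)  # scenario: invest in education
print(f"hostility: baseline={baseline:.3f}, with education={educated:.3f}")
```

Pulling the education lever lowers the simulated hostility rate, which is the kind of scenario comparison the project offers policy makers, only with vastly more realistic agents and empirically grounded behavioral rules.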

Certainly, a project of such ambition is not without difficulties. For one, there are ethical questions around who gets to decide what counts as a good outcome. For example, one of the models provides recommendations on how to speed up secularization in a nation. Is secularization a good path for every nation? Clearly, while the model yields interesting insights, using them in the real world may prove much harder than the complex math involved in building them. Furthermore, irresponsible use can quickly lead to social engineering.

While hesitation is welcome, the demand for effective decision making will only increase. Leaders from the household to the national level face increasingly complex scenarios. Consider the dilemma parents face when planning for their children’s education, knowing that the future job market will be different from today’s. Consider organizational leaders working on five-to-ten-year plans when markets can change in minutes, demand in days and societies in the course of a few years. The need for AI-generated insights will only grow with time.

What are we to make of AI-enabled advice for public policy? First, it is important to note that this is already a reality in large multinational corporations. In recent years, companies have developed intelligent systems that seek to extract insights from the reams of customer data available to them. These systems may not yet rise to the sophistication of the project above, but soon they will. Harnessing the power of data can provide an invaluable perspective to the decision-making process. As complexity increases, intelligent systems can distill large amounts of data into digestible information that can make the difference between becoming a market leader and descending into irrelevance. The same will be true for governments: missing data insights can be the difference between staying in power and losing the next election.

With this said, it is important to highlight that AI-enabled simulations will never be a replacement for wise decision making. The best models can only perform as well as the data they contain. They represent a picture of the past but are not suited to anticipating black swan events. Moreover, leaders may have picked up signals of change that have not yet been detected by data collection systems. In such cases, human intuition should not be discarded in favor of computer calculations. There is also an issue of trust. Even if computers perform better than humans at making decisions, can humans trust them beyond their own capabilities? Would we trust an AI judge to sentence a person to death?

Here, as in other situations, it is worth drawing the contrast between automation and augmentation. Using AI systems to enhance human decision making is much better than using them to replace it altogether. While these systems will become increasingly necessary in public policy, they should never replace popularly elected human decision makers.

Tech Community Centers: The Cure for Automation Anxiety

Automation anxiety is real. In a recent Pew survey, 72 percent of Americans said they worry about the impact of automation on their jobs. Meanwhile, automation is steadily becoming part of our lives: self-service cashiers in grocery stores, smartphone-enabled banking, robocalls, chatbots and many other examples. As a basic rule of thumb, any task that is simple and repetitive can be automated. The benefits for us as consumers are clear – convenience and lower prices. For workers at all levels, the story is altogether different, as many now worry about their livelihoods. AI-enabled applications could do the jobs of accountants, lawyers and managers; robotic arms can destroy manufacturing jobs; and automated cars can make professional drivers obsolete.

How can this be addressed? Robert E. Litan of the Brookings Institution offers four practical suggestions so governments and leaders can prepare their communities for automation:

  1. Ensure the economy is at full employment – This means keeping unemployment at around 4% or lower. Economies where people who want to work are currently working will be better prepared to absorb the shocks of automation.
  2. Insure wages – Develop an insurance system for displaced workers so they have time to transition into new careers. Workers need time to adapt to new circumstances, and it is difficult to do so when they have no safety net to rely on.
  3. Finance lifetime learning – Fund worker loans to educational institutions that offer practical training for jobs in high-demand fields. This is not about pursuing new two-to-four-year degrees but six-month to one-year certificates that can prepare workers for a new career.
  4. Target distressed places – Automation’s impact will be uneven, so governments should focus their efforts on the areas of greatest need rather than enacting one-size-fits-all policies.

While the suggestions above are intended for governments, much of this advice can be applied by individuals. In short, individuals seeking to shield themselves from the impacts of automation should save up, train and learn often. They should also work with local organizations to help their neighbors. Strong communities are more likely to weather automation shocks, just as they did past disruptions. Here is how that could happen again.

Makerspaces

For centuries, communities were formed and held together by central spaces. Whether it was a place of worship, the town plaza, the mall or even the soccer pitch, communities were formed and nourished as people gathered around common activities. Fast-forward to our time, and communities small and large find themselves pulled apart by many forces. One of the main culprits is technology-enabled experiences that drive local populations to replace physical interactions with those mediated by machines. While online connections can at times translate into actual face-to-face interactions (apps like Meetup allow locals with shared interests to quickly assemble), the overall trend is local isolation even as global connections flourish. We are more likely to share commonalities with people across the globe than with those just across the street.

What if technology education could become a catalyst for strengthening local communities? Makerspaces have recently been popping up in many US metro areas. They are non-profit, community-run spaces where people gather to build, learn and experiment with technology. Visiting one here in the Atlanta area, I discovered I could learn new skills ranging from knitting and welding to programming Arduinos. While classes are free, the spaces also offer paid memberships that give members 24/7 access to the facility and storage space. I would describe a makerspace as a place that attracts people who are already tinkering with technology in their garages to do so in a group setting. This allows them to share ideas and pool their resources. In some cases, these centers have equipment that would be too expensive for an individual to buy and maintain at home. In this way, the makerspace enhances each member’s ability to learn, tinker and experiment with new types of technology such as 3D printers and laser cutters.

Tech Community Centers

In order to become catalysts of community renewal, makerspaces need to expand their vision to become community tech centers: places where tinkerers, inventors, scientists and tech enthusiasts can come together and work for the betterment of their surrounding community. Basically, it is tinkering in community for a purpose: channeling the energy and knowledge of technical professionals into projects of community transformation. This starts with education and professional training but can go much further. What if these community centers could also be incubators for start-ups that create jobs in the community? What if they could also run projects creating apps, databases and predictive models for non-profits? The sky is the limit; all that is required is a desire to transform the community and a vivid imagination to dream up the possibilities.

Are you up to the challenge? If you know of any local initiative like this or are considering starting one, please write in the comment section or reach out through our contact form. I would love to hear your story.

The Future of Service: How Google, Apple and Facebook are Using AI to Simplify Our Lives

Companies want satisfied customers who will come back for more and recommend their brands to others, and AI can help them achieve this goal. Customers benefit in many ways, for instance by getting quick replies to their questions.

Artificial intelligence is becoming “humanized” as it helps people in several ways. Whether it is face recognition, dictating directly to a mobile phone, shopping online, ordering food or riding in a self-driving car, these applications are making our lives easier.

Let’s take a look at three major enterprises and ways they use artificial intelligence to “make life easier” for their customers.

  1. Google

 

  • Google spent between $20 and $30 billion on artificial intelligence in 2016.

  • Google’s self-driving cars use AI to map roads and navigate them.

  • Google claims its voice recognition technology achieves 98% accuracy.

  • YouTube increased watch time by 50% by tuning its video recommendations with AI.

  • Google Photos can recognize faces, create animations, or suggest a photo filter.

 

  2. Facebook

  • Facebook’s DeepText understands text with near-human accuracy.

  • Artificial intelligence is used to stop fake news from going viral.

  • Facebook uses deep neural networks for ad placement.

  • It has AI embedded into its Messenger app.

  • In 2017 it rolled out an AI project that could spot people with suicidal tendencies.

 

  3. Apple

  • Apple uses a neural engine for face recognition to unlock the phone and to transfer facial expressions onto animated emoji.

  • It uses deep learning to detect fraud on the App Store and for face detection.

  • Machine learning helps Apple choose news stories and recognizes faces and locations in photos.

  • It is building an autonomous driving system that could be implemented in existing cars.

  • Apple’s Siri is a virtual personal assistant that communicates using a text-to-speech system.

These companies are just the tip of the iceberg; many others, such as Sephora and Nordstrom, are also jumping on the AI bandwagon as they realize how beneficial it can be for their businesses. In the next five years, many more people will turn to artificial intelligence: 47% of people say they will start using a home or family assistant, 46% a health coach and 41% a financial adviser.

The following statistics, along with the projection that worldwide spending on cognitive and AI systems will reach an astonishing $57.6 billion in 2021, show just how bright the future of artificial intelligence is.

  • 60% of retail and ecommerce brands will implement AI in 2018.

  • 100% of IoT initiatives will be supported by AI capabilities by 2019.

  • 20% of business content will be authored by machines in 2018.

  • 85% of customer interactions with the enterprise will be managed without human intervention by 2020.

The use of artificial intelligence is only going to expand in the following years, as more and more companies decide to use it. With this pace, chatbots will be indistinguishable from humans by 2029, at least according to famous futurist Ray Kurzweil.

While this is welcome news for customers, the question is how these companies will steward customer data. As AI takes a more prominent role, the need for data collection will only increase. Ensuring that it is done appropriately can be the difference between stellar customer service and costly lawsuits. The company that successfully balances privacy concerns while harnessing data through effective AI algorithms is poised to become a market leader.

Karthik Reddy, Community Manager at www.16best.net, is the author of India’s Number 1 travel blog. Boasting an MBA in computer science, he once decided to get away from the office desk life and take a breathtaking journey around the world. He is eager to use the power of the global network to inspire others. A passionate traveler and photography enthusiast, he aspires to share his experiences and help people see the world through his lens.

 

Who Will Win the AI Race? Part II: The European Way

In a previous blog, I compared the approaches of China and the US as they compete in the global AI race. In short, China has proposed a government-led approach while the US is leaning on a business-led one. The European approach attempts to balance business and government efforts in directing AI innovation, offering a third way to compete in the global AI race.

Great Britain recently announced a national AI strategy. Through a mixture of private, academic and government resources, the country is pledging $1.4 billion in investment. A remarkable piece of the plan is the funding allocated for a Center for Data Ethics, which will develop codes for the safe and ethical use of machine learning. Another noteworthy part of the plan is the initiative to fund 1,000 new PhDs and 8,000 teachers for UK secondary schools. This move will not only spur further innovation but also ensure the British workforce is prepared to absorb the changes brought by AI. It is essential that governments plan ahead to prepare the next generation for the challenges and opportunities of emerging technologies like AI. In this area, the UK’s plan sets a good precedent for other countries to follow as they look for ways to prepare their workforces for future AI disruptions. Such moral leadership will not only guide European institutions but also help companies worldwide make better choices with their AI technologies. This perspective is essential to ensure AI development does not descend into an uncontrolled arms race.

 

In the European Union, France has also announced a national plan along lines similar to the UK’s. Beyond a mix of private and government investment totaling 1.5 billion euros, the country is also setting up an open-data approach that helps both businesses and customers. On the one hand, businesses get a centralized source of data; on the other, customers get centralized transparency into how their data is collected and used. If executed well, this central data repository can provide quality data for AI models while ensuring privacy concerns are mitigated. The strategy also includes innovative ideas such as harnessing the power of AI to solve environmental challenges and a narrow focus on the industries in which the country can compete. Like the British approach, the French plan also includes funding for an ethics center.

While Germany has not announced a comprehensive plan to date, the country already leads in AI within the automotive industry. Berlin is considered the fourth-largest hub for AI startups, and an area in southern Germany known as Cyber Valley is becoming a center of collaboration between academia and industry on AI. Even without a stated national strategy, the country is well positioned to remain a hub of AI innovation for years to come.

These countries’ individual strategies are further bolstered by a regional strategy that aims to foster collaboration among countries. Earlier this year, the European Commission pledged 20 billion euros over the next two years for the 25-country bloc. It proposed a three-pronged approach: 1) increase investment in AI; 2) prepare for socio-economic changes; 3) devise an appropriate ethical and legal framework. This holistic approach may not win the race, but it will certainly keep Europe as the moral leader in the field.

Conclusion

This short survey across these two blogs gives us a glimpse of the unfolding global AI race. The list here is not complete, but it represents three different approaches. On an axis of government involvement, China sits at one extreme (most) and the US at the other (least), with European countries somewhere in the middle. In all cases, advances in AI will come from education, government and private enterprise. Yet a nation’s ability to coordinate, focus and control the development of AI can be the difference between harnessing the coming technological revolution for the prosperity of its people and struggling to survive its disruptions. Unlike previous races, this one is not just about military supremacy. It touches every aspect of society and could become the dividing line between thriving and struggling nations.

Furthermore, how countries pursue this race can also have global impacts on the application of AI. This is where I believe the European model holds the most promise. The plans put forth by France and the UK could not only secure these countries’ geo-political position but also benefit all nations. The regional approach and focus can also yield significant fruit in the future. Tying AI development efforts to ethical principles and sound policy is the best way to ensure that AI is used for human flourishing. I hope other countries follow their lead and start anticipating how they want AI to be used inside their borders. The true winner of the global AI race should not be any nation or region but humanity as a whole. Here is where France’s intention to use AI innovation to address environmental challenges is most welcome. When humanity wins, all countries benefit and the planet is better for it.

Who Will Win The Global AI Race? Part I: China vs USA

While the latest outrageous tweet by Kanye West fills up the news, a major development goes unnoticed: the global race for AI supremacy. Many national governments are currently drafting plans to boost AI research and development within their borders. The opportunities are vast and the payoff significant. From a military perspective alone, AI supremacy can be the deciding factor in which country becomes the next superpower. Furthermore, an economy driven by a thriving AI industry can spur innovation across multiple industries while boosting economic growth. On the flip side, a lack of planning in this area could lead to increasing social unrest as automation destroys existing jobs and workers find themselves excluded from AI-created wealth. There is simply too much at stake to ignore. In this two-part blog, I’ll look at how the top players in the AI race are planning to harness the technology to their advantage while mitigating its potential dangers.

 

China’s Moonshot Effort for Dominance

China has bold plans for AI development in the coming years: the country aims to be the undisputed AI leader by 2030. It holds a distinct advantage in its ability to collect data from its vast population, yet it is still behind Western countries in algorithms and research. China does not have the privacy requirements that are standard in the West, which allows almost unfettered access to data. If data is the raw material of AI, then China is rich in supply. However, China is a latecomer to the race and therefore lacks the accumulated knowledge held by leading nations. The US, for example, started tinkering with AI technology as early as the 1950s. While the gap is not insurmountable, it will take a herculean effort to match and then overtake the current leaders in the West.

Is China up to the challenge? Judging by its current plan, the country has a shot. The ambitious strategy both acknowledges the areas where China needs to improve and outlines a plan to address them. At its center is a plan to develop a complete ecosystem of research, development and commercialization connecting government, academia and business. Within that, it includes plans to use AI to make the country’s infrastructure “smarter” and safer. Furthermore, it anticipates heavy AI involvement in key industries like manufacturing, healthcare, agriculture and national defense. The last of these clearly raises concern among neighboring countries that fear a rapid change in the Asian balance of power; Japan and South Korea will be following these developments closely.

China seeks to accomplish these goals through a partnership between the government and large corporations, which gives the government greater ability to control both the data and the process by which these technologies develop. This may or may not play to China’s advantage; only time will tell. Of all the plans surveyed here, China’s has the longest range and, assuming the Communist Party remains in power, the advantage of continuity often missing from liberal democracies.

While portions of China’s strategy are concerning, the world has much to learn from the country’s moonshot effort in this area. Clearly, the Chinese government has realized the technology’s importance and potential for the future of humanity, and it is now working to ensure that this technology leads to a prosperous Chinese future. Developing countries would do well to learn from the Chinese example or see themselves once again politically subjugated by the nations that master these capabilities first. Unlike China, most of these nations cannot count on a vast population or a favored geo-political position. The key for them will be to find niche areas where they can excel and to focus their efforts there.

US’ Decentralized Entrepreneurship

Uncle Sam sits in a paradoxical position in this race. While the undisputed leader, with an advantage in patents and an established ecosystem for research and development, the country lacks a clear plan from its government. This was not always the case. In 2016, the Obama administration was one of the first to spell out principles to guide public investment in the technology. The plan recognized that the private sector would lead innovation, yet it aimed to establish a role for the government in stewarding the development and application of AI. With the election of Donald Trump in 2016, this plan is now uncertain. No decision has been announced on the matter, so it is difficult to say what role the government will play in the future development of AI in the United States. While the current administration has kept investment levels untouched, there is no resolution on a future direction.

Given that many breakthroughs are happening in large American corporations like Google, Facebook and Microsoft, the US will undoubtedly play a role in the development of AI for years to come. However, a lack of government involvement could mean a lopsided focus on commercial applications. The danger in such a path is that common-good applications that do not yield a profit will be crowded out by those that do. For example, the US could become the country with the most advanced gadgets while the majority of its population lacks access to AI-enabled healthcare solutions.

Another downside of a corporate-focused AI strategy is that these large conglomerates are becoming less and less tied to their nation of origin. Their headquarters may still be in the US, but much of the work and even the research is now being done in other countries. Even in the US offices, the workforce is oftentimes foreign-born. We can debate the merits and downsides of this development, but for a president who was elected to put “America first,” his administration’s disinterest in AI is quite ironic. This is even more pressing as other nations put together their strategies for harnessing the benefits and stewarding the dangers of AI. For a president who loves to tweet, his silence on this matter is rather disturbing.

The Bottom Line

China and the US are currently pursuing very different paths in the AI race. Without a clear direction from its government, the US is relying on private enterprise to lead progress in the field. Given the US’ current lead, that strategy can work, at least in the short run. China is coming from the opposite side, with the government leading the effort to coordinate and optimize the nation’s AI development. China’s wealth of centralized data also gives it a competitive advantage, one it must leverage to make up for being a latecomer.

Will this be a battle between entrepreneurship and central planning? Both approaches have their advantages. The first counts on the ingenuity of companies to lead innovation: the competitive advantage that awaits AI leaders carries huge upsides in profit and prestige, and it is this entrepreneurial culture that has driven the US to lead the world in technology and research. Hence, such a decentralized effort can still yield great results. On the flip side, a centralized effort, while possibly stifling innovation, has the advantage of focusing efforts across companies and industries. Given AI’s potential to transform numerous industries, this approach can also succeed and yield tremendous returns.

What is missing from both strategies is a holistic view of how AI will impact society. While there are institutions in the US working on this issue, the lack of coordination with other sectors can undermine even the best efforts. Somewhere on this spectrum between centralized planning and decentralized entrepreneurship, there must be a middle ground. That is the topic of the next blog, where I’ll discuss Europe’s AI strategy.

 

Hybrid Intelligence: When Machines and Humans Work Together

In a previous blog, I argued that the best way to look at AI is not from a machine-versus-human perspective but from a human-plus-machine paradigm. That is, the goal of AI should not be replacement but augmentation: Artificial Intelligence should be about enhancing human flourishing rather than simply automating human activities. Hence, I was intrigued to learn about the concept of Hybrid Intelligence (HI). HI is basically a manifestation of augmentation in which human intelligence works together with machine intelligence toward a common goal.

As usual, the business world leads in innovation, and this case is no different. One example is Cindicator, a startup that combines the collective intelligence of human analysts with machine learning models to make investment decisions. Colin Harper puts it this way:

Cindicator fuses together machine learning and market analysis for asset management and financial analytics. The Cindicator team dubs this human/machine predictive model Hybrid Intelligence, as it combines artificial intelligence with the opinions of human analysts “for the efficient management of investors’ capital in traditional financial and cryptomarkets.”

This is probably the first enterprise to approach investment management in an explicitly hybrid way. There are firms in which investment decisions are driven by analysts and others that rely mostly on algorithms; this approach seeks to combine the two for improved results.

How Does Hybrid Intelligence Work?

One could argue that any example of machine learning is at its core hybrid intelligence. There is some truth to that. Every machine learning exercise requires human intelligence to set it up and tune its parameters. Even as some of these tasks are being automated, one could still argue that the human imprint of intelligence remains.

Yet, this is different. In the Cindicator example, I see a deliberate effort to harness the best of both machines and humans.

On the human side, the company harnesses the wisdom of crowds by aggregating analysts’ insights. This matters because machine learning can only learn from data, and not all information exists as data. Analysts may have information that is not captured in any dataset and can therefore bridge that gap. Moreover, human intuition is not (yet) present in machine learning systems; certain signals require a sixth sense that only humans have. For example, a human analyst may catch deceptive comments from company executives that would pass unnoticed by algorithms.

On the machine side, the company developed multiple models to uncover predictive patterns in the available data. This is important because humans can only consider a limited number of scenarios. That is one reason AI has beaten humans in games: it could consider millions of scenarios in seconds, while its human counterparts had to rely on experience and hunches. Moreover, machine learning models are superior tools for finding significant trends in vast datasets, trends humans would often overlook.
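To make the idea concrete, here is a minimal sketch of a hybrid signal. This is my own illustration, not Cindicator’s actual method: it assumes, hypothetically, that each analyst and the machine model each output a probability that an asset will rise, and blends the crowd’s average with the model’s forecast.

```python
from statistics import mean

def hybrid_signal(analyst_probs, model_prob, crowd_weight=0.5):
    """Blend the crowd's average forecast with a model's forecast.

    analyst_probs: probabilities (0-1) from human analysts that an asset rises.
    model_prob:    the machine learning model's probability for the same event.
    crowd_weight:  how much trust to place in the human crowd vs. the model.
    """
    crowd_prob = mean(analyst_probs)  # "wisdom of crowds" aggregation
    return crowd_weight * crowd_prob + (1 - crowd_weight) * model_prob

# Three analysts lean bullish; the model is more cautious.
signal = hybrid_signal([0.7, 0.8, 0.6], model_prob=0.4, crowd_weight=0.5)
print(round(signal, 2))  # 0.55
```

A real system would of course go much further, weighting individual analysts by their track record and retraining the model continuously, but the core move is the same: neither signal is discarded; they are fused.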


Can Hybrid Intelligence Lead to Human Flourishing?

HI holds much promise for augmenting rather than replacing human intelligence. At its core, it starts from the principle that humans can work harmoniously with intelligent machines, and its potential uses are vast. An AI-aided approach can supercharge research into cures for diseases, offer innovative solutions to environmental problems and even tackle intractable social ills with humane solutions.

This is the future of work: collective human intelligence partnering with high-performing Artificial Intelligence to solve difficult problems, create new possibilities and beautify the world.

Much is said about how many jobs AI will replace. What is less discussed is the emergence of new industries made possible by the partnership between intelligent machines and collective human wisdom. A focus on job losses assumes an economy of scarcity where a fixed amount of work is available to be filled by either humans or machines. An abundance perspective looks at the same situation and sees the empowerment of humans to reach new heights. Think about how many problems remain to be solved, how many endeavors are yet to be pursued, and how much innovation is yet to be unleashed.

Is this optimistic future scenario inevitable? Not by a long shot. The move from AI to HI will take time, effort and many failures. Yet looking at AI as an enabler rather than a threat is a good start. In fact, I would say the best response to the AI threat is not a return to a past of dumb machines but a partnership in which machines and humans steer innovation toward the flourishing of our planet. Only HI can steer AI towards sustainable flourishing.

There is work to do, folks. Let’s get on with the business of creating HI for a better world!

Blockchain and AI: Powerful Combination or Concerning Trend?

Bitcoin is all over the news lately. After its meteoric rise, the financial world is both excited and fearful: excited to get on the bandwagon while it is on the rise, but scared that this could be another bubble. Even more interesting has been the rise of blockchain, the underlying technology that enables Bitcoin to run (for those wondering what this technology is, check out this video). In this blog, I reflect on the combination of AI and blockchain by examining an emerging startup in the field. Can AI and blockchain work together? If so, what types of applications could this combination serve?

Recently, I came across this article from Coincentral that starts answering the questions above. In it, Colin Harper interviews the CEO of Deep Brain Chain (DBC), one of the first startups attempting to bring AI and blockchain technology together. DBC’s CEO He Yong puts it this way:

DBC will allow AI companies to acquire data more easily.  Because a lot of data are private data.  They have heavy security, such as on health and on finance.  For AI companies, it’s almost next to impossible to acquire these data.  The reason being, these data are easy to copy.  After the data is copied, they’re easy to leak.  One of DBC’s core features is trying to solve this data leakage issue, and this will help AI companies’ willingness to share data, thus reducing the cost you spend on data, to increase the flow of data in society. This will expand and really show the value of data.

As somebody who works within a large company using reams of private data, I can definitely see the potential of this combination. Blockchain can provide, through encryption, the privacy that could facilitate the exchange of data between private companies. Not that this does not happen already, but it is certainly discouraged given the issues the CEO raises above. The upside of aggregating this data for predictive modeling is significant: companies would have more complete datasets, revealing sides of the customer that no single company can see on its own.
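Without wading into DBC’s actual design, the property blockchain contributes here, namely that shared records cannot be quietly altered after the fact, comes from chaining cryptographic hashes. This toy sketch (my own illustration, not any product’s implementation) shows why tampering with one shared record invalidates everything after it:

```python
import hashlib
import json

def block_hash(record, prev_hash):
    """Hash a record together with the previous block's hash."""
    payload = json.dumps({"record": record, "prev": prev_hash}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def build_chain(records):
    """Chain records so altering any one breaks every later hash."""
    chain, prev = [], "0" * 64  # genesis value
    for record in records:
        h = block_hash(record, prev)
        chain.append({"record": record, "prev": prev, "hash": h})
        prev = h
    return chain

def verify(chain):
    """Recompute every hash; any edited record fails the check."""
    prev = "0" * 64
    for block in chain:
        if block["prev"] != prev or block_hash(block["record"], prev) != block["hash"]:
            return False
        prev = block["hash"]
    return True

chain = build_chain(["company A: shared dataset v1", "company B: shared dataset v1"])
print(verify(chain))           # True
chain[0]["record"] = "tampered"
print(verify(chain))           # False: the tampering is detected
```

Real blockchains add distributed consensus and, in DBC’s pitch, encryption of the data itself; but this tamper-evidence is the base ingredient that makes companies more willing to share.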

However, as a citizen, such a development also gives me pause. Who will get access to this shared data? Will it be done transparently, so that regulators and the general population can monitor the process? Who will really benefit from this increased exchange of private data? While I agree that costs would fall and efficiency improve, my main concern is the aims for which this data will be used. Don’t get me wrong: targeted marketing that follows privacy guidelines can actually benefit everybody, and richer data can help a company improve customer service.

With that said, the way He Yong describes it, this combination looks set to primarily benefit large private companies that will use the data for commercial aims. Is this really the promise of an AI and blockchain combination: allowing large companies to know even more about us?

Further into the interview, He Yong suggested that blockchain could actually help assuage fears that AI could get out of control:

Some people claim that AI is threatening humanity.  We think that blockchain can stop that, because AI is actually not that intelligent at the moment, so the threat is relatively low.  But in a decade, two decades, AI will be really strong, a lot stronger than it is now.  When AI is running on blockchain, on smart contracts, we can refrain AI.  For example, we can write a blockchain algorithm to restrain the computational power of AI and keep it from acting on its own.

Given my limited knowledge of blockchain, it is difficult to evaluate whether this is indeed the case. I still believe the biggest threat of AI is not the algorithms themselves but how they are used. Blockchain, as described here, can help make the process more robust, giving human controllers more tools to halt algorithms gone haywire. Yet beyond that, I am not sure how much it can act as a true “check and balance” for AI.

I’ll be monitoring this trend over the next few months to see how it develops. Certainly, we’ll see more and more businesses emerge seeking to marry blockchain and AI. Each of these technologies will disrupt many industries by itself; combining them could be even more impactful. Whether they can be combined in service of human flourishing remains to be seen.

4 Reasons Why We Should be Teaching AI to Kids

In a previous blog, I talked about a multi-disciplinary approach to STEM education. In this blog, I want to explore how teaching AI to kids can accomplish those goals while also introducing youngsters to an emerging technology that will greatly impact their future. If you are a parent, you may be asking: why should my child learn about AI? Recently, many stakeholders have emphasized the importance of STEM education. Yet what makes learning AI different from other STEM subjects?

First, it is important to define what learning AI means. Lately, the term AI has been used for any instance in which a computer acts like a human, from the automation of tasks all the way to humanoids like Sophia. Are we talking about educating children to build sentient machines? No, at least not at first. The underlying technology that enables AI is machine learning. Simply put, as its name hints, these are algorithms that allow computers to learn directly from data or from interaction with an environment rather than through explicit programming. This is not a completely automated process, as the data scientist and/or developer must still manage the learning process. Yet, at its essence, it is a new paradigm for how we use computers: we go from programming, in which we instruct the computer to carry out tasks, to machine learning, in which we feed the computer data so it can discover patterns and learn tasks on its own. The question, then, is why we should teach AI (machine learning) to kids.
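The paradigm shift is easier to see in a toy example. Instead of writing the rule for logical OR ourselves, we hand the computer labeled examples and let a tiny perceptron (one of the oldest learning algorithms, and my own minimal sketch here) discover the rule from the data:

```python
# Labeled examples of logical OR: inputs and the answer we want learned.
examples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

w = [0.0, 0.0]   # weights, learned from data rather than hand-coded
b = 0.0          # bias term

def predict(x):
    # Fire (output 1) when the weighted sum of inputs crosses zero.
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

for _ in range(10):                 # a few passes over the data
    for x, target in examples:
        error = target - predict(x)
        w[0] += 0.1 * error * x[0]  # nudge weights toward the right answer
        w[1] += 0.1 * error * x[1]
        b += 0.1 * error

print([predict(x) for x, _ in examples])  # [0, 1, 1, 1] — OR, learned, not programmed
```

Nothing in the loop mentions OR; the rule emerges from the examples. That, in miniature, is what children are being introduced to when they learn machine learning.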

Exposes Them to Coding

Teaching AI to kids starts with coding. While we’ll soon have advanced interfaces for machine learning, some allowing a “drag-and-drop” experience, for now doing machine learning requires coding. That is good news for educational purposes. I don’t need to rehash here the benefits of coding education. In recent years, there has been a tremendous push to get children coding early. Learning to code introduces them to a type of thinking that will help them later in life even if they never become programmers: it requires logic and mathematical reasoning that can be applied to many endeavors.

Furthermore, Generation Z grew up with computers, tablets and smartphones. They are very comfortable using them and incorporating them into their world. Yet, while large tech companies have excelled at ensuring no child is left without a device, we have done a poor job of helping children understand what is under the hood of all this technology. Learning to code does exactly that: it lifts up the hood so they can see how these things work. Doing so empowers them to become creators with technology rather than mere consumers.

Works Well With Gaming

The reality is that AI really started with games. One of the first experiments in AI was making a computer learn to play checkers. Hence, AI and gaming are natural complements. While there are now courses that teach children to build games, teaching AI goes a step further: they actually get to teach the computer to play games. This is important because games are a common part of their world; teaching AI through games engages them by bringing the topic into territory familiar to their imagination.

I suspect that gaming will increasingly become part of education in the near future. What was once the scourge of educators is turning out to be an effective tool for engaging children in learning: there are clear objectives, instant rewards and challenges to overcome. Teaching machine learning with games rides this wave and enhances it, giving children the opportunity to fine-tune learning algorithms toward objectives that captivate their imagination.
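Here is a taste of what “teaching the computer to play” looks like in practice. This is my own toy sketch, not any particular curriculum: a tiny Q-learning agent that learns, purely from rewards, to walk down a five-cell corridor to a goal.

```python
import random

random.seed(0)

# A minimal game: the agent starts at cell 0 of a 5-cell corridor and
# must discover that walking right reaches the goal at cell 4.
GOAL, ACTIONS = 4, (-1, +1)          # move left or move right
Q = {(s, a): 0.0 for s in range(5) for a in ACTIONS}  # learned action values

def step(state, action):
    nxt = min(max(state + action, 0), GOAL)
    return nxt, (1.0 if nxt == GOAL else 0.0)  # reward only at the goal

for episode in range(200):
    state = 0
    while state != GOAL:
        # Explore sometimes; otherwise act greedily on what was learned so far.
        if random.random() < 0.2:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        nxt, reward = step(state, action)
        best_next = max(Q[(nxt, a)] for a in ACTIONS)
        # Standard Q-learning update: nudge the value toward reward + discounted future.
        Q[(state, action)] += 0.5 * (reward + 0.9 * best_next - Q[(state, action)])
        state = nxt

# The move the trained agent picks at each non-goal cell (expect +1, "go right").
policy = [max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(4)]
print(policy)
```

The child’s role is exactly the educator’s role: choosing the rewards, the exploration rate and the game itself, then watching the agent improve, which is a far more vivid lesson than a lecture on algorithms.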

Promotes Data Fluency

Data is the electricity of the 21st century. Helping children understand how to collect, examine and analyze data sets them up for success in the world of big data. We are moving towards a society where data-driven methods increasingly shape our future. Consider, for example, how data is transforming fields like education, criminal justice and healthcare. This trend shows no signs of slowing down.

This trend will not be limited to IT jobs. As sensors become more advanced, data collection will happen in many forms. Soon, fitness programs will be informed, shaped and measured by body sensors that provide precise information about our bodies’ metabolism. Sports like baseball and football are already being transformed by the use of data. Thus, it is not far-fetched to assume that today’s children will eventually work in jobs, or build businesses, that live on data. They may not all become data scientists or analysts, but they will likely need to be familiar with data processes.

Opens up Discussions About Our Humanity

Because AI looms large in science fiction, the topic opens the way for discussions in Literature, Ethics, Philosophy and Social Studies. The development of AI forces us to reconsider what it means to be human. Hence, I believe it provides a great platform for adding the Humanities to an otherwise robust STEM subject. AI education can and should include a strong component of reading and writing.

Doing so develops critical thinking and helps children connect the “how” with the “why”. It is not enough to learn how to build AI applications; we must foremost ask why we should build them. What does it mean to outsource reasoning and decision-making to machines? How much automation can happen without compromising human flourishing? You may think these are adult questions, but we underestimate our children’s ability to reflect deeply on the destiny of humanity. They, more than us, need to think about these issues, for they will inherit this world.

If we can start with them early, maybe they can make better choices and clean up the mess we have made. Also, teaching AI to kids can be a lot easier than we think.

Integrated STEM Education: Thoughtful, Experiential and Practical

In a previous blog, I proposed the idea of teaching STEM with a purpose. In this blog, I want to take a step back to evaluate how traditional STEM education fails to prepare students for life and propose an alternative way forward: Integrated STEM education.

One of the cardinal sins of our 19th-century-based education system is its inherent fragmentation. Western academia has compartmentalized the questions of “why” and “how” into separate disciplines.[note] While I am speaking from my experience in the US, I suspect these issues are prevalent in the Majority World as well.[/note] STEM students focus on the “how” (skills), while the questions of “why” (critical thinking) are left to philosophers, ethicists and theologians. Students are thus left to make the connection between these questions on their own.

I understand that this will vary for different subjects. The technical rigors and complexity of some disciplines may leave little space for reflection. Yet, if STEM education is all about raising detached observers of nature or obsessed makers of new gadgets, then we have failed. GDP may grow and the economy may benefit from them, yet have we really enriched the world?

One could argue that Liberal Arts colleges already do that. As one who graduated from a Liberal Arts program, I can attest there is some truth to this. Students are required to take a variety of courses spanning Science, Literature, Social Studies, Art and Math. Even so, students take these classes separately with little to no help in integrating them. Rarely do they have opportunities to engage in multi-disciplinary projects that challenge them to bring together what they have learned. The professors themselves are specialists in a small subset of their discipline, often with little experience interacting outside their disciplinary guild. Furthermore, while a Liberal Arts education does a good job of exposing students to a variety of academic disciplines, it does a rather poor job of teaching practical skills. Some students come out of it with the taste and confidence to continue learning; many others leave confused and end up pursuing professional degrees in order to pick a career.

Professional training does the opposite. It is precisely what a Liberal Arts education is not: highly practical, short, focused learning of a specific skill. As one who has taken countless professional training courses, I certainly see their value. They also bring together different disciplines and tend to be project-based. The downside is that very few people can efficiently learn anything in a week of six-hour class days. The student is exposed to the contours of a skill, but the learning really happens later, when and if that student tries to apply the skill to a real-world work problem. Students also never have time to reflect on the implications of what they are doing; they are often paid by their companies to acquire the skill quickly so they can increase the firm’s productivity. Such a focus on efficiency greatly degrades the quality of the learning, and students are likely to forget what they learned in the long run.

Finally, there is learning through experience. Most colleges recognize this and offer study-abroad semesters for students wanting to take their learning into the world. I had the opportunity to spend a summer in South Korea, and it truly changed me in enduring ways. The same can be said for less structured experiences such as parenting, community service, involvement in a community of faith and work experience. A word of caution: merely going through an experience does not ensure the individual actually learns. While some of the learning is assimilated, much of it is lost if the individual does not digest the experience through reflection, writing and talking about it with others.

Clearly, each of these approaches is, in and of itself, an incomplete form of education. A Liberal Arts education alone will only fill one’s head with knowledge (and a bit of pride too). Professional training will help workers get the job done, but they will not develop as individuals. Experience apart from reflection will only produce pleasant memories. What is needed is an approach that combines the strengths of all three.

I believe a hands-on, project-based, ethically reflective STEM education draws from the strengths of all of these. It is broad like the Liberal Arts, skill-building like professional training and experience-rich through its hands-on projects. Above all, it should occur in a nurturing environment where young students are encouraged to take risks while still receiving guidance so they can learn from their mistakes. Creating a neatly controlled environment for learning is akin to the world of movies, where main characters come up with plans on a whim and execute them flawlessly. Real life never happens that way. It is full of failures, setbacks, disappointments and occasionally some glorious successes. The more our educational experience mimics that, the better it will prepare students for the real world.