AI and Women at the Workplace: A Sensible Guide for 2030

Even a few years in, the media craze over AI shows no sign of subsiding. The topic continues to fascinate, scare and befuddle the public. In this environment, the McKinsey report on AI and women at the workplace is a refreshing exception. Instead of relying on hyperbole, it projects a meaningful but realistic impact of AI on jobs. Instead of a robot apocalypse, it speaks of a gradual shifting of tasks to AI-enabled applications. This is not to say that the impact will be negligible. McKinsey still projects that between 40 and 160 million women worldwide may need to transition into new careers by 2030. That is not a small number when the low end roughly matches the population of California! Yet it is still far below other predictions.

Impact on Women

So why write a report focused on one gender? Simply put, AI-driven automation will affect men and women differently in the workplace because they tend to cluster in different occupations. For example, women are over-represented in clerical and service-oriented occupations, all of which are bound to be greatly impacted by automation. Conversely, women are well represented in healthcare-related occupations, which are bound to grow over the forecast period. These facts alone ensure that the genders will experience AI's impact differently.

There are, however, other factors impacting women beyond occupation clusters. Social norms often make it harder for women to make transitions. They have less time to pursue training or search for employment because they spend much more time than men on housework and childcare. They also have lower access to digital technology and lower participation in STEM fields than men. That is why initiatives that empower girls to pursue study in these areas are so important and needed in our time.

The main point of the report is not that automation will simply destroy jobs but that AI will move opportunity between occupations and geographies. The issue is less an inevitable trend that will wipe out sources of livelihood than one that will require either geographic mobility or skill training. Those willing to make these changes are more likely to survive and thrive in this shifting workplace environment.

What Can You Do?

For women, it is important to keep your career prospects open. Are you currently working in an occupation that could face automation? How can you know? Well, think about the tasks you perform each day. Could they be easily learned and repeated by a machine? While all of our jobs have portions we wish were automated, if that applies to 60-80% of your job description, then you need to re-think your line of work. Look for careers that are bound to grow. That may not mean simply learning to code; also consider professions that require a human touch and cannot be easily replaced by machines. Finally, an openness to moving geographically can greatly improve job prospects.
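As a rough illustration of that back-of-the-envelope test, here is a minimal sketch in Python. The task list and hours are purely hypothetical; the point is simply to tally how much of a typical week goes to tasks a machine could plausibly learn and repeat:

```python
# Hypothetical weekly task list: (task, hours per week, easily learned and repeated by a machine?)
tasks = [
    ("data entry", 12, True),
    ("scheduling appointments", 6, True),
    ("generating routine reports", 8, True),
    ("negotiating with clients", 6, False),
    ("mentoring junior staff", 4, False),
    ("handling unusual complaints", 4, False),
]

total_hours = sum(hours for _, hours, _ in tasks)
automatable_hours = sum(hours for _, hours, repetitive in tasks if repetitive)
share = automatable_hours / total_hours

print(f"Automatable share of the week: {share:.0%}")
# If the share lands in the 60-80% range the report warns about,
# it may be time to re-think that line of work.
if share >= 0.6:
    print("High exposure: consider retraining toward growing, human-touch occupations.")
```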

For parents of young girls, it is important to expose them to STEM subjects early on. A parent's encouragement can go a long way in helping them consider those areas as future career options. That does not mean they will become computer programmers. However, early positive experiences with these subjects will give them the confidence later in life to pursue technical occupations if they so choose. A big challenge with STEM is the impression that it is hard, intimidating and exclusive to boys. The earlier we break these damaging paradigms, the more we expand job opportunities for the women of the future.

Finally, for the men who are concerned about the future job prospects of their female loved ones, the best advice is to get more involved in housework and child rearing. In short, if you care about the future of women in the workplace, change a diaper today and go wash those dishes. The more men participate in unpaid housework and child rearing, the more women will be empowered to pursue more promising career paths.

Will AI Deliver? 4 Factors That Can Derail the AI Revolution

2019 is well under way and the attention on Artificial Intelligence persists. An AI revolution is already underway (for a very informative deep dive on this topic, check this infographic). Businesses are investing heavily in the field and staffing up their data science departments, governments are releasing AI strategy plans, and the media continues to churn out fantastic stories about the possibilities of AI. Beyond that, discussions about ethics and appropriate uses are starting to emerge. Even the Vatican is paying attention. What could go wrong?

If history is any guide, we have been through an AI spring before only to see it fall into an AI winter. In the mid-1980s, funding dried up, government programs were shut down and attention moved on to other emerging technologies. While we now live in a different, more globalized and connected reality, there is no guarantee that the promise of AI will come to pass.

In this blog, as an industry insider and diligent observer, I describe factors that could derail the AI revolution. In short, here are the things that could turn our AI spring into another bitter winter.

#1: Business Projects Fail to Deliver

Honestly, this is a reality I face every day at work. As a professional deeply involved in a massive AI project, I am often confronted with the thought: what if it fails? Just like me, hundreds of professionals are currently paving the way for an AI future that promises intelligent processes, better customer service and increased profits. So far, Wall Street has believed the claim that AI can unlock business value. Investors and C-level executives have poured in money to staff up, upgrade systems and, many times, reconfigure organizations to usher in an AI revolution in their businesses.

What is rarely talked about are the enormous challenges project teams face in transforming these AI promises into reality. Most organizations are simply not ready for these changes. Furthermore, as the public becomes more aware of privacy breaches, the pressure to be innovative while also addressing ethical concerns is daunting. Even when those issues are resolved, there is the challenge of buy-in from internal lines of business, which can perceive these solutions as an existential threat.

The significant technical, political and operational challenges of innovation all conspire to undermine or dilute the benefits promised by the AI revolution. Wall Street may be buying into the promise now, but its patience is short. If AI projects fail to deliver concrete results in a timely manner, investment could dry up and progress in this area could stall. If it fails in the private sector, I can easily see this cascading into the public sector as well.

#2: Consumers Reject AI-enabled Solutions

Now let’s say the many AI projects happening across industries are technical and organizational successes. Let’s say they translate into compelling products and services that are then offered to consumers all over the globe. What if not enough of them adopt these new products or services? Just think about the Segway that was going to revolutionize mobility years ago but never really took off as a mass product. Adoption always carries the risk inherent in the unpredictable human factor.

Furthermore, accidents and business scandals can have a compounding effect on public opinion of these products. One cannot deny that the driverless-car pedestrian fatality last year in Arizona is already impacting development, possibly delaying launches by months if not years. Concerns with privacy threaten to erode the public's confidence in business use of data, which could in turn further hamper AI innovation.

Technology is advancing at breakneck speed. Can humans keep up and, even more importantly, do they care to? For the techno-capitalist, the human need for devices is endless. They spread this message through clever marketing campaigns. Yet, is everyone really buying it? AI-enabled products and services can only succeed if they demonstrate true value in the eyes of the consumer. Otherwise, even technical marvels are destined to fail.

#3: Governments Restrict AI Innovation Through Regulation

Another factor that could derail the AI revolution is government regulation. It is important to note that not all regulation is harmful to innovation. Yet ill-devised, politically motivated, reactive regulation often is. This could come from both sides of the political spectrum. Progressive politicians could enact burdensome taxes on the use of AI technology, discouraging its development. Conservatives could create laws siding with large business interests that choke innovation at the start-up level.

Emerging technologies like AI are currently not a front-and-center topic in elections. This can be a blessing in disguise, as it is probably too early to create a regulatory apparatus for these technologies. Yet that does not mean government should not be involved. Virtuous policy should bring different stakeholders to the table by creating an open process of discussion and learning.

With that said, governments all over the world face the challenge of striking a delicate balance between intervention and neglect. Doing this well is highly context-dependent and does not lend itself to sweeping generalizations. Yet it must start with engagement. It was shocking to see US lawmakers' ignorance of social media business models demonstrated in recent hearings. That gives me little hope they would be able to grasp the complexities of AI technologies. Hopefully, a new batch of more tech-savvy lawmakers will help.

#4: Nationalism Hampers Global Collaboration

The development of AI thrives on an ecosystem where researchers from different countries can freely share ideas and best practices. A free Internet, a relatively peaceful global order and a willingness to share knowledge have so far ensured the flourishing of research through collaboration. As nationalist movements rise, this ecosystem is in danger of collapsing.

Another concerning scenario is a geopolitical AI race for dominance. While this can incentivize individual nations to focus their efforts on research, it can also undermine the spread and enhancement of AI applications. A true AI revolution should not be limited to one nation or even one region. Instead, it must benefit the whole planet, lest it become another tool for colonialism.

Regional initiatives like the European Union's AI strategy are a good start. The ambitious Chinese AI strategy is concerning. The jury is still out on the recently released US strategy. What is missing is an overall vision of global collaboration in the field. This will most likely come from intergovernmental organizations like the UN. Until then, nationalist pursuits in AI will continue to challenge global collaboration.

Conclusion

This is all I could come up with: a robust but by no means exhaustive list of what could go wrong. Can you think of other factors? Above all, the deeper question is: if these factors derail the AI revolution, would that necessarily be tragic? In some ways, it could delay important discoveries and breakthroughs. However, slowing down AI development may not necessarily be bad while conversations on ethics and public awareness are still in their beginning stages.

In the history of technology, we often overestimate a technology's impact in the short run but underestimate it in the long run. What if AI ushers in no revolution but instead a long process of gradual improvements? Maybe that is a better scenario than the fast change promised to business investors by ambitious entrepreneurs.

AI Ethics: Evaluating Google’s Social Impact

I have noticed a shift in corporate America recently. Moving away from the unapologetic defense of profit-making of the late 20th century, corporations are now asking deeper questions about the purpose of their enterprises. Consider how businesses presented themselves in the Super Bowl broadcast this year. Verizon focused on first responders' life-saving work, Microsoft touted its video-game platform for children with disabilities and the Washington Post paid tribute to recently killed journalists. Big business wants to convince us it also has a big heart.

This does not mean that profit is secondary. As long as there is a stock market and earnings expectations drive corporate goals, short-term profit will continue to be king. Yet it is important to acknowledge the change. Companies realize that customers want more than a good bargain; they want to do business with organizations that are doing meaningful work. Moreover, companies are realizing they are not just autonomous entities but social actors that must contribute to the common good.

Google AI Research Review of 2018

Following this trend, Google's AI review of 2018 focused on how its research is impacting the world for good. The story is impressive, as its reach encompasses philanthropy, the environment and technological breakthroughs. I encourage you to look at it for yourself.

Let me just highlight a few developments that are worth mentioning here. The first is the development of AI ethical principles. In them, Google promises to develop technologies that are beneficial to society, tested for safety and accountable to people. The company also promises to keep privacy embedded in design and to uphold the highest levels of scientific excellence while limiting potentially harmful uses of its technology. For the latter, it promises to apply a cost-benefit analysis to ensure the risks of harmful uses do not outweigh the benefits.

In the last section, the company explicitly states applications it will not pursue. These include weapons, surveillance and applications that violate accepted international law and human rights. That last point, I must admit, is quite vague and open to interpretation. With that said, the fact that Google published these principles shows that it recognizes its responsibility to uphold the common good.

Furthermore, the company showcases some interesting examples of using AI for social good. These include work on flood and earthquake prediction, identifying whales and diseased cassava plants, and even detecting exoplanets. The company has also allocated over $25M in funds for external social impact work through its foundation.

A Good Start, But Is That Enough?

In a previous blog, I mentioned how the private sector drives the US AI strategy. This approach raises concerns, as for-profit ventures may not always align with the public good in their research goals. However, it is encouraging to see a leader in the industry doing serious ethical reflection and engaging in social work.

Yet Google must do more to fully recognize the role its technologies play in our global society. For one, Google must do a better job of understanding its impact on local economies. While its technologies empower small businesses and individual actors in remote areas, they also upend existing industries and established enterprises. Is Google paying attention to those on the losing side of its technologies? If so, how is it planning to help them re-invent themselves?

Furthermore, if Google is to exemplify a business with a social conscience, does it have appropriate feedback channels for its billions of customers? Given its size and dominance of the search engine industry, can it really be held accountable on its own? The company should not only strive for transparency in its practices but also listen to its customers more attentively.

Technology, Business and Society

The relationship between business and society is being revolutionized by the advance of emerging technologies such as AI. In the case of Google, being the search engine leader makes it the primary knowledge gatekeeper for the Internet. As humans come to rely more on the Internet as an extension of their brains, this places Google in a role equivalent to the one religious, educational and political leaders played in the past. This is too important a function to be centralized in one profit-making organization.

To be fair, this was not a compulsory process. It is not that Google took over our brains by force; we willingly gave it this power. Therefore, change is contingent not only on the corporation but also on its customers. From a practical standpoint, that may mean resisting the urge to "google things". We might try different search engines or even crack open a book to seek the information we need. We should also seek alternative ways of finding things on the Internet, such as resource sites, social platforms and other alternatives. These efforts may at first make life more complicated, but over the long run they will safeguard us from an inordinate dependence on a single company.

The technologies developed by Google are a blessing (albeit one that we pay for) to the world. We should leverage them for human flourishing regardless of the company's intended focus. For that to happen, we the people must take stock of our own interaction with them. The more responsibly we use them, the more we ensure that they remain what they are really meant to be: gifts to humanity.

Intelligence for Leadership: AI in Decision Making

Kings have advisors, presidents have cabinets, CEOs have boards and TV show hosts have writers: every public figure relies on a cadre of trusted advisors for making decisions. Whenever crucial decisions are made, an army of astute specialists has spent countless hours researching, studying and preparing to communicate the most essential information to the decision maker on that issue. Without them, leaders would lead by instinct and would most likely get it wrong more often than not. What if these advisors were not only human but also AI-enabled decision systems?

This is what the Modeling Religion Project is doing. Developed by a group of scientists, philosophers and religion scholars, the project consists of a computer simulation populated by "virtual agents" mimicking the characteristics and beliefs of a country's population. The model is then fed evidence-based social-science tendencies of human behavior under certain conditions. For example, a sudden influx of foreigners may increase the probability of hostility from native groups.

Using this initial state as a baseline, the researchers experiment with different scenarios to evaluate the effects of changes in the environment. Levers for change include adding newcomers, investing in education and changing economic policy, among other factors. The model then simulates outcomes from these changes, allowing scholars and policymakers to understand the effects of decisions or trends in a nation. While the work focuses on religion, its findings have broad implications for other social sciences such as psychology, sociology and political science. One of their primary goals is to better understand what factors can affect the level of religious violence. The government of Norway is about to put the models to the test, hoping to use their insights to better integrate refugees into the country.
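The project's real models are far more sophisticated, but a toy agent-based sketch in Python shows the basic pattern described above: seed a virtual population, encode a behavioral tendency, then pull a policy lever and compare outcomes. Everything below, from the agent attributes to the probabilities, is invented purely for illustration.

```python
import random

random.seed(42)

def make_population(size, share_newcomers):
    """Create virtual agents with a group label and an education level (0 = low, 1 = high)."""
    return [
        {"group": "newcomer" if random.random() < share_newcomers else "native",
         "education": random.random()}
        for _ in range(size)
    ]

def hostility_rate(population, education_boost=0.0):
    """Toy tendency: natives with low education are more likely to turn hostile
    when the share of newcomers is high. All probabilities are made up."""
    share_newcomers = sum(a["group"] == "newcomer" for a in population) / len(population)
    hostile = 0
    for agent in population:
        if agent["group"] != "native":
            continue
        risk = 0.4 * share_newcomers * (1.0 - min(1.0, agent["education"] + education_boost))
        if random.random() < risk:
            hostile += 1
    return hostile / len(population)

baseline = make_population(10_000, share_newcomers=0.05)
influx = make_population(10_000, share_newcomers=0.20)

print("baseline hostility:", round(hostility_rate(baseline), 3))
print("after influx:      ", round(hostility_rate(influx), 3))
print("influx + education:", round(hostility_rate(influx, education_boost=0.3), 3))
```

The value of even a crude simulation like this is that it lets you compare levers (here, an education investment) side by side before acting on them, which is exactly the kind of question the project puts to its far richer models.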

Certainly, a project of such ambition is not without difficulties. For one, there are ethical questions around who gets to decide what is a good outcome and what is not. For example, one of the models provides recommendations on how to speed up secularization in a nation. Is secularization a good path for every nation? Clearly, while the model yields interesting insights, using them in the real world may prove much harder than the complex math involved in building them. Furthermore, irresponsible use can quickly lead to social engineering.

While hesitation is welcome, the demand for effective decision making will only increase. Leaders from the household to the national level face increasingly complex scenarios. Consider the dilemma parents face when planning for their children's education, knowing that the future job market will be different from today's. Consider organizational leaders working on 5-10 year plans when markets can change in minutes, demand can change in days and societies in the course of a few years. Hence, the need for AI-generated insights will only increase with time.

What are we to make of AI-enabled advice for public policy? First, it is important to note that this is already a reality in large multinational corporations. In recent years, companies have developed intelligent systems that seek to extract insights from the reams of customer data available to these organizations. These systems may not rise to the sophistication of the project above, but soon they will. Harnessing the power of data can provide an invaluable perspective in the decision-making process. As complexity increases, intelligent systems can distill large amounts of data into digestible information that can make the difference between becoming a market leader and descending into irrelevance. This dilemma will be true for governments as well. Missing data insights can be the difference between staying in power and losing the next election.

With this said, it is important to highlight that AI-enabled simulations will never be a replacement for wise decision making. The best models can only perform as well as the data they contain. They represent a picture of the past but are not suitable for anticipating black swan events. Moreover, leaders may have picked up signals of change that have not yet been detected by data collection systems. In such cases, human intuition should not be discarded in favor of computer calculations. There is also an issue of trust. Even if computers perform better than humans in making decisions, can humans trust them over their own judgment? Would we trust an AI judge to sentence a person to death?

Here, as in other situations, it is worth drawing the contrast between automation and augmentation. Using AI systems to enhance human decision making is much better than using them to replace it altogether. While such systems will become increasingly necessary in public policy, they should never replace popularly elected human decision-makers.

Tech Community Centers: The Cure for Automation Anxiety

Automation anxiety is real. In a recent Pew survey, 72 percent of Americans said they worry about the impact of automation on jobs. Meanwhile, automation is slowly becoming part of our lives: self-service cashiers in grocery stores, smartphone-enabled banking, robocalls, chatbots and many other examples. As a basic rule of thumb, any task that is simple and repetitive can be automated. The benefits for us as consumers are clear: convenience and lower prices. For workers at all levels, the story is altogether different, as many now worry about their livelihoods. AI-enabled applications could do the jobs of accountants, lawyers and managers. Automated robotic arms can destroy manufacturing jobs, and automated cars can make professional drivers obsolete.

How can this be addressed? Robert E. Litan of the Brookings Institution offers four practical suggestions so governments and leaders can prepare their communities for automation:

  1. Ensure the economy is at full employment – This means keeping unemployment at around 4% or lower. Economies where people who want to work are working will be better prepared to absorb the shocks of automation.
  2. Insure wages – Develop an insurance system for displaced workers so they have time to transition into new careers. Workers need time to adapt to new circumstances, and it is difficult to do so when they have no safety net to rely on.
  3. Finance lifetime learning – Fund worker loans to educational institutions that offer practical training for jobs in high-demand fields. This is not about pursuing new 2-4 year degrees but 6-month to 1-year certificates that can prepare workers for a new career.
  4. Target distressed places – Automation's impact will be uneven, so governments should focus their efforts on areas of greatest need rather than enacting one-size-fits-all policies.

While the suggestions above are intended for governments, much of this advice can be applied to individuals. In short, individuals seeking to shield themselves from automation's impact should save up, train and learn often. They should also work with local organizations to help their neighbors. Strong communities are more likely to weather automation shocks, just as they did past disruptions. Here is how this could happen again.

Makerspaces

For centuries, communities were formed and held together by central spaces. Whether it was a place of worship, the town plaza, the mall or even the soccer pitch, communities were formed and nourished as people gathered around common activities. Fast forward to our time, and communities small and large find themselves pulled apart by many forces. One of the main culprits is technology-enabled experiences that drive local populations to replace physical interactions with those mediated by machines. While online connections can at times translate into actual face-to-face interactions (apps like Meetup allow locals with shared interests to quickly assemble), the overall trend is local isolation even as global connections flourish. We are more likely to share commonalities with people across the globe than with those just across the street.

What if technology education could become a catalyst for strengthening local communities? Recently, makerspaces have been popping up in many US metro areas. They are non-profit, community-run spaces where people gather to build, learn and experiment with technology. Visiting one here in the Atlanta area, I discovered I could learn new skills ranging from knitting and welding to programming Arduinos. While classes are free, the spaces also offer paid memberships so members can get 24/7 access to the facility and storage space. I would describe it as a place that attracts people who are already tinkering with technology in their garages to do so in a group setting. This allows them to share ideas and pool their resources. In some cases, these centers have equipment that would be too expensive for an individual to buy and maintain at home. In this way, the makerspace enhances each member's ability to learn, tinker and experiment with new types of technology such as 3D printers and laser cutters.

Tech Community Centers

In order to become a catalyst of community renewal, makerspaces need to expand their vision into becoming tech community centers: places where tinkerers, inventors, scientists and tech enthusiasts can come together and work for the betterment of their surrounding community. Basically, it is tinkering in community for a purpose: channeling the energy and knowledge of technical professionals into projects of community transformation. This starts with education and professional training but can go much further. What if these community centers could also be incubators for start-ups that create jobs in the community? What if they could also run projects creating apps, databases and predictive models for non-profits? The sky is the limit; all that is required is a desire to transform the community and a vivid imagination to dream up the possibilities.

Are you up to the challenge? If you know of any local initiative like this or are considering starting one, please write in the comment section or reach out through our contact form. I would love to hear your story.

Who Will Win the AI Race? Part II: The European Way

In a previous blog, I compared the approaches of China and the US as they compete in the global AI race. In short, China proposes a government-led approach while the US is leaning on a business-led approach. The European approach represents an attempt to balance business and government efforts in directing AI innovation, showing a third way to compete in the global AI race.

Great Britain recently announced a national AI strategy. In a mixture of private, academic and government resources, the country is pledging $1.4 billion in investment. A remarkable piece of the plan is the funding allocated for a Center for Data Ethics, which will develop codes for the safe and ethical use of machine learning. Another noteworthy part of the plan is the initiative to fund 1,000 new PhDs and 8,000 teachers for UK secondary schools. This move will not only spur further innovation but also ensure the British workforce is prepared to absorb the changes brought by AI developments. It is essential that governments plan ahead to prepare the next generation for the challenges and opportunities of emerging technologies like AI. In this area, the UK's plan sets a good precedent for other countries to follow as they look for ways to prepare their workforces for future AI disruptions. Such moral leadership will not only guide European institutions but also help companies worldwide make better choices with their AI technologies. This perspective is essential to ensure AI development does not descend into an uncontrolled arms race.


In the European Union, France has also announced a national plan following a similar approach to the UK's. Beyond a mix of private and government investment totaling 1.5 billion euros, the country is also setting up an open data approach that helps both businesses and customers. On the one hand, businesses can look to a centralized place for data; on the other, customers get centralized transparency into how their data is being collected and used. If executed well, this central data repository can provide quality data for AI models while still mitigating privacy concerns. The strategy also includes innovative ideas such as harnessing the power of AI to solve environmental challenges and a narrow focus on industries in which the country can compete. Similar to the British approach, the French plan also includes funding for an ethics center.

While Germany has not announced a comprehensive plan to date, the country already leads in AI within the automotive industry. Berlin is considered the fourth-largest hub for AI startups, and an area in southern Germany known as Cyber Valley is becoming a hub for collaboration between academia and industry on AI. Even without a stated national strategy, the country is well positioned to remain a center of AI innovation for years to come.

These countries' individual strategies are further bolstered by a regional strategy that aims to foster collaboration among countries. Earlier this year, the European Commission pledged 20 billion euros over the next two years for the bloc. It proposed a three-pronged approach: 1) increase investment in AI; 2) prepare for socio-economic changes; 3) devise an appropriate ethical and legal framework. This holistic approach may not win the race, but it will certainly keep Europe as the moral leader in the field.

Conclusion

This short survey across these two blogs gives us a glimpse of the unfolding global AI race. The list here is not complete but represents three different types of approaches. On an axis of government involvement, China sits at one extreme (most) and the US at the other (least), with European countries somewhere in the middle. In all cases, advances in AI will come from education, government and private enterprise. Yet a nation's ability to coordinate, focus and steer the development of AI can be the difference between harnessing the coming technological revolution for the prosperity of its people and struggling to survive its disruptions. Unlike previous races, this is not just about military supremacy. It touches every aspect of society and could become the dividing line between thriving and struggling nations.

Furthermore, how countries pursue this race can also have global impacts on the application of AI. This is where I believe the European model holds the most promise. The plans put forth by France and the UK could not only secure these countries' geopolitical position but could also benefit all nations. The regional approach and focus can also yield significant fruits for the future. Tying AI development efforts to ethical principles and sound policy is the best way to ensure that AI will be used towards human flourishing. I hope other countries follow their lead and start anticipating how they want AI to be used inside their borders. The true winner of the global AI race should not be any nation or region but humanity as a whole. Here is where France's intention to use AI innovation to address environmental challenges is most welcome. When humanity wins, all countries benefit and the planet is better for it.

Who Will Win The Global AI Race? Part I: China vs USA

While the latest outrageous tweet by Kanye West fills up the news, a major development goes unnoticed: the global AI race for supremacy. Currently, many national governments are drafting plans for boosting AI research and development within their borders. The opportunities are vast and the payoff fairly significant. From a military perspective alone, AI supremacy can be the deciding factor for which country will be the next superpower. Furthermore, an economy driven by a thriving AI industry can spur innovation in multiple industries while also boosting economic growth. On the flip side, a lack of planning in this area could lead to increasing social unrest as automation destroys existing jobs and workers find themselves excluded from AI-created wealth. There is just too much at stake to ignore. In this two-part blog, I'll look at how the top players in the AI race are planning to harness the technology to their advantage while also mitigating its potential dangers.


China’s Moonshot Effort for Dominance

China has bold plans for AI development in the coming years. The country aims to be the undisputed AI leader by 2030. It holds a distinct advantage in its ability to collect data from its vast population, yet it is still behind Western countries in algorithms and research. China does not have the privacy requirements that are standard in the West, which allows almost unfettered access to data. If data is the raw material for AI, then China is rich in supply. However, China is a latecomer to the race and therefore lacks the accumulated knowledge held by leading nations. The US, for example, started tinkering with AI technology as early as the 1950s. While the gap is not insurmountable, it will take a herculean effort to match and then overtake the current leadership held by Western countries.

Is China up to the challenge? Judging by its current plan, the country has a shot. The ambitious strategy both acknowledges the areas where China needs to improve and outlines a plan to address them. At its center is a plan to develop a complete ecosystem of research, development and commercialization connecting government, academia and businesses. Within that, it includes plans to use AI to make the country's infrastructure "smarter" and safer. Furthermore, it anticipates heavy AI involvement in key industries like manufacturing, healthcare, agriculture and national defense. The last one clearly raises concern among neighboring countries that fear a rapid change in the Asian balance of power. Japan and South Korea will be following these developments closely.

China seeks to accomplish these goals through a partnership between government and large corporations. In this arrangement, the government has greater ability to control both the data and the process by which these technologies develop. This may or may not play to China's advantage; only time will tell. Of all the plans, China's has the longest range and, assuming the Communist Party remains in power, the advantage of continuity often missing from liberal democracies.

While portions of China's strategy are concerning, the world has much to learn from the country's moonshot effort in this area. Clearly, the Chinese government has realized the importance of this technology and its potential for the future of humanity, and it is now ensuring that the technology leads to a prosperous Chinese future. Developing countries would do well to learn from the Chinese example or see themselves once again politically subjugated by the nations that master these capabilities first. Unlike China, most of these nations cannot count on a vast population or a favored geopolitical position. The key for them will be to focus their efforts on niche areas where they can excel.

US’ Decentralized Entrepreneurship

Uncle Sam sits in a paradoxical position in this race. While the undisputed leader, with an advantage in patents and an established ecosystem for research and development, the country lacks a clear plan from the government. This was not always the case. In 2016, the Obama administration was one of the first to spell out principles to ensure public investment in the technology. The plan recognized that the private sector would lead innovation, yet it aimed to establish a role for the government in stewarding the development and application of AI. With the election of Donald Trump in 2016, this plan is now uncertain. No decision has been announced on the matter, so it is difficult to say what role the government will play in the future development of AI in the United States. While the current administration has kept investment levels untouched, there is no resolution on a future direction.

Given that many breakthroughs are happening in large American corporations like Google, Facebook and Microsoft, the US will undoubtedly play a role in the development of AI for years to come. However, a lack of government involvement could mean a lopsided focus on commercial applications. The danger in such a path is that common-good applications that do not yield a profit will be displaced by those that do. For example, the US could become the country with the most advanced gadgets while the majority of its population lacks access to AI-enabled healthcare solutions.

Another downside of a corporate-focused AI strategy is that these large conglomerates are becoming less and less tied to their nation of origin. Their headquarters may still be in the US, but much of the work and even research is now starting to be done in other countries. Even in the domestic offices, the workforce is often foreign-born. We can discuss the merits and downsides of this development, but for a president who was elected to put "America first", his administration's disinterest in AI is quite ironic. This is even more pressing as other nations put together their strategies for harnessing the benefits and stewarding the dangers of AI. For a president who loves to tweet, his silence on this matter is rather disturbing.

The Bottom Line

China and the US are currently pursuing very different paths in the AI race. Without a clear direction from the government, the US is relying on private enterprise to lead progress in this field. Given the US' current lead, such a strategy can work, at least in the short run. China is coming from the opposite side, where the government is leading the effort to coordinate and optimize the nation's efforts for AI development. China's wealth of centralized data also gives it a competitive advantage, one it must leverage in order to make up for being a latecomer.

Will this be a battle between entrepreneurship and central planning? Both approaches have their advantages. The first counts on the ingenuity of companies to lead innovation. The competitive advantage awaiting AI business leaders has huge upsides in profit and prestige, and it is this entrepreneurial culture that has driven the US to lead the world in technology and research. Hence, such a decentralized effort can still yield great results. On the flip side, a centralized effort, while possibly stifling innovation, has the advantage of focusing efforts across companies and industries. Given AI's potential to transform numerous industries, this approach can also succeed and yield tremendous returns.

What is missing from both strategies is a holistic view of how AI will impact society. While there are institutions in the US working on this issue, the lack of coordination with other sectors can undermine even the best efforts. Between centralized planning and decentralized entrepreneurship, there must be a middle ground. This is the topic of the next blog, where I'll discuss Europe's AI strategy.


Blockchain and AI: Powerful Combination or Concerning Trend?

Bitcoin is all over the news lately. After the meteoric rise of these financial instruments, the financial world is both excited and fearful: excited to get on the bandwagon while it is on the rise but scared that this could be another bubble. Even more interesting has been the rise of blockchain, the underlying technology that enables Bitcoin to run (for those wondering what this technology is, check out this video). In this blog, I reflect on the combination of AI and blockchain by examining an emerging startup in the field. Can AI and blockchain work together? If so, what type of applications can this combination be used for?

Recently, I came across this article from Coincentral that starts answering the questions above. In it, Colin Harper interviews the CEO of Deep Brain Chain (DBC), one of the first startups attempting to bring AI and blockchain technology together. DBC's CEO He Yong puts it this way:

DBC will allow AI companies to acquire data more easily.  Because a lot of data are private data.  They have heavy security, such as on health and on finance.  For AI companies, it’s almost next to impossible to acquire these data.  The reason being, these data are easy to copy.  After the data is copied, they’re easy to leak.  One of DBC’s core features is trying to solve this data leakage issue, and this will help AI companies’ willingness to share data, thus reducing the cost you spend on data, to increase the flow of data in society. This will expand and really show the value of data.

As somebody who works within a large company using reams of private data, I can definitely see the potential of this combination. Blockchain can provide privacy through encryption, which could facilitate the exchange of data between private companies. Not that this does not happen already, but it is certainly discouraged given the issues the CEO raises above. Certainly, the upside of aggregating this data for predictive modeling is significant. Companies would have more complete datasets, revealing sides of the customer that are not visible to any single company.
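To make the idea more concrete, here is a minimal sketch of the kind of tamper-evident, hash-chained record a blockchain provides. This is not DBC's actual protocol, just a toy illustration of how data-access grants between companies could be logged so that any later alteration becomes detectable; the company names and fields are invented.

```python
import hashlib
import json
import time

def add_block(chain, record):
    """Append a record whose hash covers the previous block, making tampering detectable."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"record": record, "prev_hash": prev_hash, "timestamp": time.time()}
    body["hash"] = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append(body)

def verify(chain):
    """Recompute every hash; any edited block breaks the chain."""
    for i, block in enumerate(chain):
        expected_prev = chain[i - 1]["hash"] if i else "0" * 64
        body = {k: v for k, v in block.items() if k != "hash"}
        recomputed = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if block["prev_hash"] != expected_prev or recomputed != block["hash"]:
            return False
    return True

ledger = []
add_block(ledger, {"grantor": "hospital_A", "grantee": "ai_firm_B",
                   "dataset": "anonymized_claims_2018", "purpose": "readmission model"})
add_block(ledger, {"grantor": "bank_C", "grantee": "ai_firm_B",
                   "dataset": "txn_sample", "purpose": "fraud detection"})

print("ledger valid:", verify(ledger))
ledger[0]["record"]["dataset"] = "full_claims_2018"   # simulate someone quietly changing the grant
print("after tampering:", verify(ledger))
```

A real system would add encryption, access control and distributed consensus on top of this, but the hash-chaining alone shows why such a ledger could make companies more willing to share data: misuse leaves a visible trace.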

However, as a citizen, such a development also makes me ponder. Who will get access to this shared data? Will this be done in a transparent way so that regulators and the general population can monitor the process? Who will really benefit from this increased exchange of private data? While I agree that efficiency would improve and costs would decrease, my main concern is the aims for which this data will be used. Don't get me wrong: targeted marketing that follows privacy guidelines can actually be beneficial to everybody, and richer data can also help a company improve customer service.

With that said, the way He Yong describes it, it looks like this combination will primarily benefit large private companies that will use the data for commercial aims. Is this really the promise of an AI and blockchain combination: allowing large companies to know even more about us?

Further in the interview, He Yong suggested that blockchain could actually help assuage fears that AI could get out of control:

Some people claim that AI is threatening humanity.  We think that blockchain can stop that, because AI is actually not that intelligent at the moment, so the threat is relatively low.  But in a decade, two decades, AI will be really strong, a lot stronger than it is now.  When AI is running on blockchain, on smart contracts, we can refrain AI.  For example, we can write a blockchain algorithm to restrain the computational power of AI and keep it from acting on its own.

Given my limited knowledge of blockchain, it is difficult to evaluate whether this is indeed the case. I still believe that the biggest threat of AI is not the algorithms themselves but how they are used. Blockchain, as described here, can help make the process more robust, giving human controllers more tools to halt algorithms gone haywire. Yet beyond that, I am not sure how much it can act as a true "check and balance" for AI.

I'll be monitoring this trend over the next few months to see how it develops. Certainly, we'll see more and more businesses emerge seeking to marry blockchain and AI. These two technologies will disrupt many industries on their own; combining them could be even more impactful. Whether they can be combined for human flourishing remains to be seen.

Debunking AI Myths: Specialized versus General AI

The noise around AI has been deafening lately. From tales of doom and fears of automation to promises of a new humanity, there is no limit to the speculation around this technology. For someone tracking the news and articles on this topic, keeping up has become impossible. Not one day goes by without multiple articles, blogs, podcasts and TV shows coming out to explore the topic. Just this week, technology icons Elon Musk and Mark Zuckerberg traded barbs on whether we should fear AI or not.

Hence, it is a good time to take a step back and separate the hype from reality. It is time to expose some AI myths and look at these challenges with a cautious but informed perspective. The biggest challenge in a time when information flows freely is knowing what to ignore and what to pay attention to, lest we fall into a perpetual sense of confusion. In this blog, I want to home in on the differences between general and specialized AI while also briefly reflecting on their impact on our near future.

The Promise and Limitations of Specialized AI

Many readers of this blog may know this already, but it is important to reinforce the difference between specialized and general AI. The first is the driver of the revolution in industry and of most of the buzz in the news. It is specialized because it is intelligence optimized around one specific task. That can be predicting whether someone will take an action, recognizing whose face is in a picture or transcribing what someone said. In the baseline section, I show a picture that illustrates well the different types of specialized AI that exist. With improving hardware, a lot of data and the right algorithms, specialized AI will most likely disrupt entire industries, from banking to healthcare, transportation to entertainment.
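To make "optimized around one specific task" concrete, here is a minimal sketch using common open-source tools (it assumes scikit-learn is installed): a model trained to do exactly one thing, flag a message as urgent or routine, and nothing else. The tiny dataset is invented purely for illustration.

```python
# A minimal sketch of "specialized" AI: one model, one narrow task.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

messages = [
    "server is down, customers cannot log in",
    "please escalate, payment system failing",
    "outage reported in the east region",
    "lunch menu for next week attached",
    "reminder: team photo on friday",
    "newsletter draft for your review",
]
labels = [1, 1, 1, 0, 0, 0]  # 1 = urgent, 0 = routine

# Turn words into numeric features, then fit a simple classifier on this one task.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(messages, labels)

print(model.predict(["the payment system is failing again"]))  # should lean urgent (1)
print(model.predict(["friday team lunch reminder"]))           # should lean routine (0)
```

The same model knows nothing about faces, speech or any other task; that narrowness is exactly what makes specialized AI tractable today and general AI a different problem altogether.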

Now, before we panic, a few caveats are in order. Just because a technology exists does not mean it will actually create disruption. For example, many thought that the advent of the Internet would end book publishing. While the publishing industry has gone through tremendous change, we still buy books today. So it is fair to say that even the advent of self-driving cars does not mean the end of driving.

For a technology to change industry and culture, it must first prove to be commercially viable. It is only when the smartphone becomes the iPhone that change starts happening. Disruption is not just dependent on the technology but also on how it is used. It is wonderful that computers can now learn like humans, but if this does not solve real problems, it is useless. Specialized AI is not a trouble-free proposition. It takes a considerable amount of time, testing, investment and many failures to get to successful applications. At this point, only large corporations or savvy entrepreneurs have the time, energy and resources it takes to transform this technology into viable solutions. It is true that cheaper hardware and open-source software have significantly lowered the barriers to entry into this field. However, people with the right skill set and experience in this area are still scarce. Thus, many AI efforts will fail while few will become breakthroughs. This reality leads me to believe that the forecasts of massive job elimination are overblown.

The Challenges Around General AI

General AI is still the stuff of science fiction: the idea that machines could be sentient, able to think, walk and feel. We are still decades away from that reality. Certainly we could get there earlier, but before we do, we have some formidable obstacles to overcome.

A big one is hardware. In spite of the fact that computer processing speeds have grown greatly in recent years, they are still no match for the brain, which contains on the order of a hundred trillion synaptic connections. Basically, there is no hardware today that can fully mimic the capacity of the brain. Some believe there never will be, while others are spending billions trying to build exactly that. Only time will tell who is right, but until then general AI will remain elusive.
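A rough back-of-the-envelope comparison makes the gap concrete. The figures below are commonly cited order-of-magnitude estimates, not precise measurements:

```python
# Rough order-of-magnitude estimates, not precise measurements.
brain_synapses = 1e14          # human brain: on the order of 100 trillion synaptic connections
large_model_parameters = 1e9   # a large neural network of this era: on the order of a billion weights

print(f"brain / model ratio: ~{brain_synapses / large_model_parameters:,.0f}x")
# ~100,000x: even generously treating model parameters as "connections",
# today's systems remain several orders of magnitude short of the brain.
```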

We often forget that an essential difference between AI and human intelligence is life itself. Artificial intelligence is not artificial life but only a well-constructed machine made to look, see and think like humans. For all the advances in AI, there are still fundamental differences between the biological functions of our bodies and the processing activity of machines. So it looks like, at least for the near future, robots will not have a soul, even if talking about them as if they did can be a helpful exercise in speculative reflection.

What Does This Mean?

Given the points described above, what are we to make of the current fears surrounding AI? Outlining the limits of AI does not mean ignoring its potential dangers or minimizing its promise. The difference is informed engagement versus exasperated over-reaction. Specialized AI is bound to eliminate some jobs and there is very little that can change that. Yet this will not be a smooth, overnight transition. It will be filled with advances and setbacks until we reach a new normal. Even as the technology progresses, socio-political and economic factors are bound to shape the future of AI. It is not just about the technology but about the people who use it.

Maybe the best advice I can give anyone concerned about AI is "don't believe everything you read on the Internet." Check your sources, compare them with others and retain the best. My hope is that the attention around AI will invite us all into a conversation about how technology is shaping our lives and how it can help us flourish. To dwell on fear is to miss the opportunity of discovering how AI can make us better humans. That, to me, is the ultimate question we must be most concerned about.


What Would Open AI Look Like?

In a previous blog I talked about how big government and big business were racing to get a piece of the AI revolution. In this blog, I want to explore the parallel grass-roots movement of open AI and its possibilities.

Open-Source Movement: The Democratization of Technology

There was a time when competing in technology required a hefty upfront investment. This is no longer the case. For one, consumers and businesses now have the ability to buy hardware as a service, which greatly diminishes initial costs. Along with that, most expensive software now has a free, open-source alternative. So today, open-source solutions and hardware services like the cloud allow even small players to compete alongside Fortune 500 companies.

I can speak from experience. When I entered the field of data science eight years ago, I remember wondering what it would take for me to do the things I did in my corporate job at home. First, I would have to purchase a server to get computing power. Then I would have to buy very expensive software to run the algorithms. At that point, open-source options were emerging in academic circles, but services like the cloud did not exist. Today, the scenario could not be more different. I can now perform the same tasks by downloading open-source software to my laptop and, if necessary, renting some space in the cloud for more computing power. Needless to say, the environment is ripe for start-ups to flourish as the barriers to entry are low. The main barrier to entry now is not technology but humans with the know-how to run these widely available tools.

This democratization trend is not limited to technology-related fields but is disrupting other industries like web development, education and the non-profit sector. Web development can now be accomplished through open-source platforms like WordPress (which I use for this blog). Large universities are offering open online courses to students all over the world, promising the same level of quality as their on-campus classes. Social entrepreneurs can now raise funds through crowdsourcing, greatly expanding their donor base. The "open" phenomenon is obliterating setup costs, empowering individuals and small organizations to do more with less.

What About Open AI?

Because the barriers to entry in data science are low, I don't see why we should not see a vigorous grass-roots movement to democratize AI. The hardware and software are available and affordable. The biggest challenge is one of skills and know-how. The skills required for running and understanding AI algorithms are very scarce at the moment. Only a small group of professionals and academics have experience working with the advanced algorithms needed to develop AI applications.

Yet even this bottleneck is not bound to last long. Numerous coding-school start-ups are offering data science camps, enabling data veterans and even new entrants to learn how these algorithms work. Moreover, soon enough entrepreneurs will develop solutions that enable AI development without having to code. Of course, AI is not limited to machine learning but encompasses robotics and engineering, among other technical fields. While I cannot speak from experience in these areas, the rise of high-school robotics competitions and engineering camps for kids tells me that efforts already exist to democratize these skills as well.

Clearly, the seeds are in place for an open AI movement to flourish. It is in this context that I plan to invest my time and creative energies over the next few years. As I mentioned in the previous blog, preparing the next generation for an AI future is not about training them for jobs but empowering them with tools that can harness their creativity. What would happen if at-risk children today had a place to learn and "do" AI? What if the unemployed and young adults could become part of learning communities experimenting with the latest machine learning technologies?

What kind of problems would they solve and what kind of world would they build?