Will AI Deliver? 4 Factors That Can Derail the AI Revolution

2019 is well under way and the attention on Artificial Intelligence persists. An AI revolution is already underway (for a very informative deep dive on this topic, check out this infographic). Businesses are investing heavily in the field and staffing up their data science departments, governments are releasing AI strategy plans and the media continues to churn out fantastic stories about the possibilities of AI. Beyond that, discussions about ethics and appropriate uses are starting to emerge. Even the Vatican is paying attention. What could go wrong?

If history is any guide, we have been through an AI spring before only to see it fall into an AI winter. In the mid-1980s, funding dried up, government programs were shut down and attention moved on to other emerging technologies. While we live in a different reality, a more globalized and connected world, there is no guarantee that the promise of AI will come to pass.

In this blog, as an industry insider and diligent observer, I describe factors that could derail the AI revolution. In short, here are the things that could turn our AI spring into another bitter winter.

#1: Business Projects Fail to Deliver

Honestly, this is a reality I face every day at work. As a professional deeply involved in a massive AI project, I am often confronted with the thought: what if it fails? Just like me, hundreds of professionals are currently paving the way for an AI future that promises intelligent processes, better customer service and increased profits. So far, Wall Street has believed the claim that AI can unlock business value. Investors and C-level executives have poured in money to staff up, upgrade systems and many times re-configure organizations to usher in an AI revolution in their businesses.

What is rarely talked about are the enormous challenges project teams face in transforming these AI promises into reality. Most organizations are simply not ready for these changes. Furthermore, as the public becomes more aware of privacy breaches, the pressure to be innovative while also addressing ethical concerns is daunting. Even when those issues are resolved, there is the challenge of winning buy-in from internal lines of business that may perceive these solutions as an existential threat.

The significant technical, political and operational challenges of innovation all conspire to undermine or dilute the benefits promised by the AI revolution. Wall Street may be buying into the promise now, but its patience is short. If AI projects fail to deliver concrete results in a timely manner, investment could dry up and progress in this area could be significantly slowed, if not halted. If AI fails in the private sector, I can easily see this cascading into the public sector as well.

#2: Consumers Reject AI-enabled Solutions

Now let’s say the many AI projects happening across industries are technical and organizational successes. Let’s say they translate into compelling products and services that are then offered to consumers all over the globe. What if not enough of them adopt these new products or services? Just think about the Segway, which was going to revolutionize mobility years ago but never really took off as a mass-market product. Adoption always carries the risk inherent in the unpredictable human factor.

Furthermore, accidents and business scandals can have a compounding effect on public opinion of these products. One cannot deny that last year’s driverless car pedestrian fatality in Arizona is already impacting the technology’s development, possibly delaying launches by months if not years. Concerns with privacy threaten to erode the public’s confidence in businesses’ use of data, which could in turn further hamper AI innovation.

Technology is advancing at breakneck speed. Can humans keep up and, even more importantly, do they care to? For the techno-capitalist, the human need for devices is endless, a message spread through clever marketing campaigns. Yet is everyone really buying it? AI-enabled products and services can only succeed if they demonstrate true value in the eyes of the consumer. Otherwise, even technical marvels are destined to fail.

#3: Governments Restrict AI Innovation Through Regulation

Another factor that could derail the AI revolution is government regulation. It is important to note that not all regulation harms innovation. Yet ill-devised, politically motivated, reactive regulation often does. This could come from both sides of the political spectrum. Progressive politicians could enact burdensome taxes on the use of AI technology, discouraging its development. Conservatives could create laws siding with large business interests that choke innovation at the start-up level.

Emerging technologies like AI are currently not a front-and-center topic in elections. This can be a blessing in disguise, as it is probably too early to create a regulatory apparatus for these technologies. Yet that does not mean government should not be involved. Virtuous policy should bring different stakeholders to the table by creating an open process of discussion and learning.

With that said, governments all over the world face the challenge of striking a delicate balance between intervention and neglect. Doing this well is very context-dependent and does not lend itself to sweeping generalizations. Yet it must start with engagement. It was shocking to see US lawmakers’ ignorance of social media business models demonstrated in recent hearings. That gives me little hope they would be able to grasp the complexities of AI technologies. Hopefully, a new batch of more tech-savvy lawmakers will help.

#4: Nationalism Hampers Global Collaboration

The development of AI thrives on an ecosystem where researchers from different countries can freely share ideas and best practices. A free Internet, a relatively peaceful global order and a willingness to share knowledge have so far ensured the flourishing of research through collaboration. As nationalist movements rise, this ecosystem is in danger of collapsing.

Another concerning scenario is a geopolitical AI race for dominance. While this can incentivize individual nations to focus their efforts on research, it can also undermine the spread and refinement of AI applications. A true AI revolution should not be limited to one nation or even one region. Instead, it must benefit the whole planet, lest it become another tool for colonialism.

Regional initiatives like the European Union’s AI strategy are a good start. The ambitious Chinese AI strategy, by contrast, is concerning, and the jury is still out on the recently released US strategy. What is missing is an overall vision of global collaboration in the field. This will most likely come from intergovernmental organizations like the UN. Until then, nationalist pursuits in AI will continue to challenge global collaboration.

Conclusion

This is all I could come up with, a robust but by no means exhaustive list of what could go wrong. Can you think of other factors? Above all, the deeper question is: if these factors derail the AI revolution, would that necessarily be tragic? In some ways, it could delay important discoveries and breakthroughs. However, slowing down AI development may not be bad at a moment when conversations on ethics and public awareness are in their beginning stages.

In the history of technology, we often overestimate a technology’s impact in the short run but underestimate it in the long run. What if AI ushers in no revolution but instead a long process of gradual improvements? Maybe that is a better scenario than the fast change promised to business investors by ambitious entrepreneurs.

ERLC Statement on AI: An Annotated Christian Response

Recently, the ERLC (Ethics and Religious Liberty Commission) released a statement on AI. This was a laudable step, as the Southern Baptists became the first large Christian denomination to address this issue directly. While this is a start, the document fell short on many fronts. From the start, the list of signers had very few technologists and scientists.

In this blog, I show both the original statement and my comments in red. Judge for yourself but my first impression is that we have a lot of work ahead of us.

Article 1: Image of God

We affirm that God created each human being in His image with intrinsic and equal worth, dignity, and moral agency, distinct from all creation, and that humanity’s creativity is intended to reflect God’s creative pattern.

Ok, that’s a good start, locating creativity as God’s gift and affirming the dignity of all humanity. Yet the statement exalts human dignity at the expense of creation. Because AI, and technology in general, is about humanity’s relationship to creation, setting the foundation right is important. It is not enough to highlight human primacy; one must clearly state our relationship with the rest of creation.

We deny that any part of creation, including any form of technology, should ever be used to usurp or subvert the dominion and stewardship which has been entrusted solely to humanity by God; nor should technology be assigned a level of human identity, worth, dignity, or moral agency.

Are we afraid of a robot takeover of humanity? Here it would have been helpful to start distinguishing between general and narrow AI. The former is still decades away, while the latter is already here and poised to change every facet of our lives. The challenge of narrow AI is not one of usurping our dominion and stewardship but of possibly leading us to forget our humanity. The drafters seem to be addressing general AI. Maybe including more technologists in the mix would have helped.

Genesis 1:26-28; 5:1-2; Isaiah 43:6-7; Jeremiah 1:5; John 13:34; Colossians 1:16; 3:10; Ephesians 4:24

Article 2: AI as Technology

We affirm that the development of AI is a demonstration of the unique creative abilities of human beings. When AI is employed in accordance with God’s moral will, it is an example of man’s obedience to the divine command to steward creation and to honor Him. We believe in innovation for the glory of God, the sake of human flourishing, and the love of neighbor. While we acknowledge the reality of the Fall and its consequences on human nature and human innovation, technology can be used in society to uphold human dignity. As a part of our God-given creative nature, human beings should develop and harness technology in ways that lead to greater flourishing and the alleviation of human suffering. 

Yes, well done! This affirmation is where Christianity needs to be. We are for human flourishing and the alleviation of suffering. We celebrate and support technology’s role in these God-given missions.

We deny that the use of AI is morally neutral. It is not worthy of man’s hope, worship, or love. Since the Lord Jesus alone can atone for sin and reconcile humanity to its Creator, technology such as AI cannot fulfill humanity’s ultimate needs. We further deny the goodness and benefit of any application of AI that devalues or degrades the dignity and worth of another human being.

I guess what they mean here is that technology is a limited means and cannot ultimately be our salvation. I see here a veiled critique of Transhumanism. Fair enough; the Christian message should both celebrate AI’s potential and warn of its limitations, lest we start giving it undue worth.

Genesis 2:25; Exodus 20:3; 31:1-11; Proverbs 16:4; Matthew 22:37-40; Romans 3:23

Article 3: Relationship of AI & Humanity

We affirm the use of AI to inform and aid human reasoning and moral decision-making because it is a tool that excels at processing data and making determinations, which often mimics or exceeds human ability. While AI excels in data-based computation, technology is incapable of possessing the capacity for moral agency or responsibility.

This statement seems to suggest the positive role AI can play in augmentation rather than replacement. I am just not sure that was ever in question.

We deny that humans can or should cede our moral accountability or responsibilities to any form of AI that will ever be created. Only humanity will be judged by God on the basis of our actions and that of the tools we create. While technology can be created with a moral use in view, it is not a moral agent. Humans alone bear the responsibility for moral decision making.

While it is hard to argue against this statement at face value, it overlooks the complexities of a world that is becoming increasingly reliant on algorithms. The issue is not that we are offloading moral decisions to algorithms but that algorithms are capturing the moral decisions of many humans at once. This reality is not addressed by simply stating human moral responsibility. This needs improvement.

Romans 2:6-8; Galatians 5:19-21; 2 Peter 1:5-8; 1 John 2:1

Article 4: Medicine

We affirm that AI-related advances in medical technologies are expressions of God’s common grace through and for people created in His image and that these advances will increase our capacity to provide enhanced medical diagnostics and therapeutic interventions as we seek to care for all people. These advances should be guided by basic principles of medical ethics, including beneficence, nonmaleficence, autonomy, and justice, which are all consistent with the biblical principle of loving our neighbor.

Yes, tying AI-related medical advances to the great commandment is a great start.

We deny that death and disease—effects of the Fall—can ultimately be eradicated apart from Jesus Christ. Utilitarian applications regarding healthcare distribution should not override the dignity of human life. Furthermore, we reject the materialist and consequentialist worldview that understands medical applications of AI as a means of improving, changing, or completing human beings.

Similar to my comment on Article 3, this one misses the complexity of the issue. How do you draw the line between enhancement and cure? Also, isn’t the effort to extend life an effective form of alleviating suffering? These issues do not lend themselves to simple propositions but instead require more nuanced analysis and prayerful consideration.

Matthew 5:45; John 11:25-26; 1 Corinthians 15:55-57; Galatians 6:2; Philippians 2:4​

Article 5: Bias

We affirm that, as a tool created by humans, AI will be inherently subject to bias and that these biases must be accounted for, minimized, or removed through continual human oversight and discretion. AI should be designed and used in such ways that treat all human beings as having equal worth and dignity. AI should be utilized as a tool to identify and eliminate bias inherent in human decision-making.

Bias is inherent in the data fed into machine learning models. Curate the data, monitor the outputs and evaluate the results, and you can diminish bias. Directing AI to promote equal worth is a good first step.

We deny that AI should be designed or used in ways that violate the fundamental principle of human dignity for all people. Neither should AI be used in ways that reinforce or further any ideology or agenda, seeking to subjugate human autonomy under the power of the state.

What about being used by large corporations? This was a glaring absence here.

Micah 6:8; John 13:34; Galatians 3:28-29; 5:13-14; Philippians 2:3-4; Romans 12:10

Article 6: Sexuality

We affirm the goodness of God’s design for human sexuality which prescribes the sexual union to be an exclusive relationship between a man and a woman in the lifelong covenant of marriage.

This seems like a roundabout way to use the topic of AI for fighting culture wars. Why include this here? Or, why not talk about how AI can help people find their mates and even help marriages? Please revise or remove!

We deny that the pursuit of sexual pleasure is a justification for the development or use of AI, and we condemn the objectification of humans that results from employing AI for sexual purposes. AI should not intrude upon or substitute for the biblical expression of sexuality between a husband and wife according to God’s design for human marriage. 

Ok, I guess this is a condemnation of AI porn. Again, it seems misplaced on this list and could have been treated in other ways. Yes, AI can further increase the objectification of humans, and that is a problem. I am just not sure it is such a key issue that it belongs in a statement on AI. Again, more nuance and technical insight would have helped.

Genesis 1:26-29; 2:18-25; Matthew 5:27-30; 1 Thess 4:3-4

Article 7: Work

We affirm that work is part of God’s plan for human beings participating in the cultivation and stewardship of creation. The divine pattern is one of labor and rest in healthy proportion to each other. Our view of work should not be confined to commercial activity; it must also include the many ways that human beings serve each other through their efforts. AI can be used in ways that aid our work or allow us to make fuller use of our gifts. The church has a Spirit-empowered responsibility to help care for those who lose jobs and to encourage individuals, communities, employers, and governments to find ways to invest in the development of human beings and continue making vocational contributions to our lives together.

This is a long, confusing and unhelpful statement. It seems to be addressing the challenge of job loss that AI can bring without doing so directly. It gives a vague description of the church’s role in helping individuals find work but does not address the economic structures that create job loss. It simply misses the point and does not add much to the conversation. Please revise!

We deny that human worth and dignity is reducible to an individual’s economic contributions to society alone. Humanity should not use AI and other technological innovations as a reason to move toward lives of pure leisure even if greater social wealth creates such possibilities.

Another confusing and unhelpful statement. Are we making work holy? What does “lives of pure leisure” mean? Is this a veiled attack on Universal Basic Income? I am confused. Throw it out and start over!

Genesis 1:27; 2:5; 2:15; Isaiah 65:21-24; Romans 12:6-8; Ephesians 4:11-16

Article 8: Data & Privacy

We affirm that privacy and personal property are intertwined individual rights and choices that should not be violated by governments, corporations, nation-states, and other groups, even in the pursuit of the common good. While God knows all things, it is neither wise nor obligatory to have every detail of one’s life open to society.

Another statement that needs clarification. Treating personal data as private property is a start. However, people are giving data away willingly. What is privacy in a digital world? This statement suggests the drafters’ unfamiliarity with the issues at hand. Again, technical support is needed.

We deny the manipulative and coercive uses of data and AI in ways that are inconsistent with the love of God and love of neighbor. Data collection practices should conform to ethical guidelines that uphold the dignity of all people. We further deny that consent, even informed consent, although requisite, is the only necessary ethical standard for the collection, manipulation, or exploitation of personal data—individually or in the aggregate. AI should not be employed in ways that distort truth through the use of generative applications. Data should not be mishandled, misused, or abused for sinful purposes to reinforce bias, strengthen the powerful, or demean the weak.

The intention here is good and it points in the right direction. It is also progress to state that consent is not a sufficient guideline on its own and to condemn abusive uses. I would like it to be more specific in its call to corporations, governments and even the church.

Exodus 20:15, Psalm 147:5; Isaiah 40:13-14; Matthew 10:16 Galatians 6:2; Hebrews 4:12-13; 1 John 1:7

Article 9: Security

We affirm that AI has legitimate applications in policing, intelligence, surveillance, investigation, and other uses supporting the government’s responsibility to respect human rights, to protect and preserve human life, and to pursue justice in a flourishing society.

We deny that AI should be employed for safety and security applications in ways that seek to dehumanize, depersonalize, or harm our fellow human beings. We condemn the use of AI to suppress free expression or other basic human rights granted by God to all human beings.

Good intentions with poor execution. The affirmation and denial are contradictory. If you affirm that AI can be used for policing, you have to concede that it will be used to harm some. Is using AI to suppress hate speech acceptable? I am not sure how this adds any insight to the conversation. Please revise!

Romans 13:1-7; 1 Peter 2:13-14

Article 10: War

We affirm that the use of AI in warfare should be governed by love of neighbor and the principles of just war. The use of AI may mitigate the loss of human life, provide greater protection of non-combatants, and inform better policymaking. Any lethal action conducted or substantially enabled by AI must employ human oversight or review. All defense-related AI applications, such as underlying data and decision-making processes, must be subject to continual review by legitimate authorities. When these systems are deployed, human agents bear full moral responsibility for any actions taken by the system.

Surprisingly, this was better than the statement above. It upholds human responsibility but recognizes that AI, even in war, can have life-preserving aims. I would have liked a better definition of acceptable defense uses, yet that is somewhat implied in the principles of just war. I must say this is an area that needs more discussion and further consideration, but this is a good start.

We deny that human agency or moral culpability in war can be delegated to AI. No nation or group has the right to use AI to carry out genocide, terrorism, torture, or other war crimes.

I am glad to see the condemnation of torture here. Lately, I am not sure where evangelicals stand on this issue.

Genesis 4:10; Isaiah 1:16-17; Psalm 37:28; Matthew 5:44; 22:37-39; Romans 13:4​

Article 11: Public Policy

We affirm that the fundamental purposes of government are to protect human beings from harm, punish those who do evil, uphold civil liberties, and to commend those who do good. The public has a role in shaping and crafting policies concerning the use of AI in society, and these decisions should not be left to those who develop these technologies or to governments to set norms.

The statement points in the right direction of public oversight. I would have liked it to be bolder and clearer about the role of the church. It should have also addressed corporations more directly; that seems to be a blind spot in a few articles.

We deny that AI should be used by governments, corporations, or any entity to infringe upon God-given human rights. AI, even in a highly advanced state, should never be delegated the governing authority that has been granted by an all-sovereign God to human beings alone.

Glad to see corporations finally mentioned in this document, making this a good start.

Romans 13:1-7; Acts 10:35; 1 Peter 2:13-14

Article 12: The Future of AI

We affirm that AI will continue to be developed in ways that we cannot currently imagine or understand, including AI that will far surpass many human abilities. God alone has the power to create life, and no future advancements in AI will usurp Him as the Creator of life. The church has a unique role in proclaiming human dignity for all and calling for the humane use of AI in all aspects of society.

Again, the distinction between narrow and general AI would have been helpful here. The statement seems to be addressing general AI. It also gives the impression that AI is threatening God. Where is that coming from? A more nuanced view of biology and technology would have been helpful here too; they seem to be jumbled together. Please revise!

We deny that AI will make us more or less human, or that AI will ever obtain a coequal level of worth, dignity, or value to image-bearers. Future advancements in AI will not ultimately fulfill our longings for a perfect world. While we are not able to comprehend or know the future, we do not fear what is to come because we know that God is omniscient and that nothing we create will be able to thwart His redemptive plan for creation or to supplant humanity as His image-bearers.

I disagree with the first sentence. There are ways in which AI can affirm and/or diminish our humanity. The issue here seems to be a perceived threat that AI will replace humans or be considered equal to them. I like the hopeful confidence in God for the future, but the previous statement suggests there is already fear about this. The ambiguity in the statements is unsettling; it suggests that AI is a dangerous unknown. Yes, it is true that we cannot know what it will become, but why not call Christians to seize this opportunity for the kingdom? Why not proclaim that AI can help us co-create with God? Let me reiterate one of the verses mentioned below:

For God has not given us a spirit of fear and timidity, but of power, love, and self-discipline

Genesis 1; Isaiah 42:8; Romans 1:20-21; 5:2; Ephesians 1:4-6; 2 Timothy 1:7-9; Revelation 5:9-10

For an alternative but still evolving Christian position on this matter please check out the Christian Transhumanist Association affirmation.

AI for Scholarship: How Machine Learning can Transform the Humanities

In a previous blog, I explored how AI will speed up scientific research. In this blog, I will examine the overlooked potential that AI has to transform the Humanities. This connection may not be clear at first, since most of these fields do not include an element of science or math. They are more preoccupied with developing theories than with testing hypotheses through experimentation. Subjects like Literature, Philosophy, History, Languages and Religious Studies (and Theology) rely heavily on the interpretation and qualitative analysis of texts. In such an environment, how could mathematical algorithms be of any use?

Before addressing the question above, we must first look at the field of Digital Humanities, which created a bridge from ancient texts to modern computation. The field dates back to the 1930s, before the emergence of Artificial Intelligence. Ironically, and interestingly relevant to this blog, the first project in this area was a collaboration between an English professor, a Jesuit priest and IBM to create a concordance for Thomas Aquinas’ writings. As digital technology advanced and texts became digitized, the field has continued to grow in importance. Its purpose is both to apply digital methods to the Humanities and to reflect on their use. That is, its practitioners are not only interested in digitizing books but also in evaluating how the digital medium affects human understanding of these texts.
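A concordance, by the way, is essentially a keyword-in-context (KWIC) index: every occurrence of a word listed alongside its surrounding text. As a minimal sketch of the idea (the sample sentence and window width are invented for illustration, and real concordance projects handle lemmatization and ancient-language morphology), here is how one might build such an index in Python:

```python
import re

def concordance(text, keyword, width=30):
    """Keyword-in-context (KWIC): list each occurrence of `keyword`
    with `width` characters of surrounding context on either side."""
    hits = []
    for m in re.finditer(r'\b{}\b'.format(re.escape(keyword)), text, re.IGNORECASE):
        start = max(m.start() - width, 0)
        end = min(m.end() + width, len(text))
        hits.append(text[start:m.start()] + '[' + m.group() + ']' + text[m.end():end])
    return hits

# Invented sample text for illustration.
sample = ("Grace is the gift of God. By grace we are saved, "
          "and grace upholds the weary soul.")
for hit in concordance(sample, "grace", width=15):
    print(hit)
```

Scaled up from one sentence to the complete works of Aquinas, the same pattern yields the kind of index that once took a team decades to compile by hand.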

Building on the foundation of Digital Humanities, the connection with AI becomes clear. Once computers can ingest these texts, text mining and natural language processing become possible. With the recent advances in machine learning algorithms, cheaper computing power and the availability of open-source tools, the conditions are ripe for an AI revolution in the Humanities.

How can that happen? The use of machine learning in combination with Natural Language Processing can open avenues of meaning that were not possible before. For centuries, these academic subjects have relied on the accumulated analysis of texts performed by humans. Yet human capacity to interpret, analyze and absorb texts is finite. Humans do a great job of capturing meaning and nuance in texts of hundreds or even a few thousand pages. Yet, as the volume increases, machine learning can detect patterns that are not apparent to a human reader. This can be especially critical in applications such as author attribution (determining who the writer was when that information is unclear or in question), analysis of cultural trends, semantics, tone and relationships between disparate texts.
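To make author attribution concrete: one classic family of techniques (stylometry) fingerprints an author by the relative frequencies of common function words, which writers use unconsciously and find hard to fake. Below is a deliberately simplified sketch in plain Python; the word list, toy corpus and distance measure are invented for illustration, and real studies use far larger word lists and statistical models such as Burrows’ Delta:

```python
from collections import Counter

# A small list of function words -- real stylometric studies use hundreds.
FUNCTION_WORDS = ["the", "of", "and", "to", "in", "that", "is", "it", "for", "not"]

def profile(text):
    """Relative frequency of each function word: a crude stylistic fingerprint."""
    words = text.lower().split()
    counts = Counter(words)
    total = max(len(words), 1)
    return [counts[w] / total for w in FUNCTION_WORDS]

def distance(p, q):
    """Manhattan distance between two fingerprints."""
    return sum(abs(a - b) for a, b in zip(p, q))

def attribute(unknown_text, candidates):
    """Pick the candidate author whose known writing is stylistically closest."""
    target = profile(unknown_text)
    return min(candidates, key=lambda name: distance(target, profile(candidates[name])))

# Invented toy corpus: each "author" has a distinct function-word habit.
corpus = {
    "Author A": "the light of the world is the hope of the people of the city",
    "Author B": "and we go to the hills and to the sea and to the plains",
}
unknown = "the voice of the king of the land"
print(attribute(unknown, corpus))
```

The same nearest-profile logic, fed with thousands of passages instead of single sentences, is what lets scholars weigh disputed attributions in large corpora.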

Theology is a field particularly poised to benefit from this combination. For those unfamiliar with theological studies, it is a long and lonely road. Brave souls aiming to master the field must undergo more schooling than physicians. In most cases, aspiring scholars must complete a five-year doctoral program on top of 2-4 years of master’s-level studies. Part of the reason is that the field has accumulated an inordinate amount of primary sources and countless interpretations of these texts, written in multiple ancient and modern languages and spanning thousands of years. In short, when reams of texts become Big Data, machine learning can do wonders to synthesize, analyze and correlate large bodies of texts.

To be clear, that does not mean machine learning will replace painstaking scholarly work. Quite the opposite: it has the potential to speed up and automate some tasks so scholars can focus on the high-level abstract thinking where humans still hold a vast advantage over machines. If anything, it should make their lives easier and possibly shorten the time it takes to master the field.

Along these lines of augmentation, I am thinking about a possible project. What if we could apply machine learning algorithms to a theologian’s body of work and compare it to the scholarship that interprets it? Could we find new avenues of meaning that complement or challenge prevailing scholarship on the topic?
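One simple starting point for such a comparison would be measuring how close two bodies of text are in vocabulary, for example with cosine similarity over bag-of-words vectors. The snippet below is only a toy sketch (the sample passages are invented, and a real project would add lemmatization, TF-IDF weighting or modern embeddings), but it shows the core computation:

```python
import math
from collections import Counter

def vectorize(text):
    """Bag-of-words term-frequency vector for a text."""
    return Counter(text.lower().split())

def cosine_similarity(a, b):
    """Cosine of the angle between two bag-of-words vectors (0 to 1).
    Counter returns 0 for missing terms, so non-shared words add nothing."""
    dot = sum(count * b[term] for term, count in a.items())
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    if norm_a == 0 or norm_b == 0:
        return 0.0
    return dot / (norm_a * norm_b)

# Invented sample passages for illustration.
primary = vectorize("grace alone justifies the sinner before God")
commentary = vectorize("the doctrine that grace alone justifies shaped the Reformation")
unrelated = vectorize("the harvest moon rose over quiet autumn fields")

print(cosine_similarity(primary, commentary))
print(cosine_similarity(primary, unrelated))
```

Run over a theologian’s corpus and the secondary literature, scores like these could flag which interpretive works track the primary sources closely and which strike out in new directions.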

I am curious to see what such an experiment could uncover.

Who Will Win the AI Race? Part II: The European Way

In a previous blog, I compared the approaches of China and the US as they compete in the global AI race. In short, China proposed a government-led approach while the US is leaning on a business-led approach. The European approach represents an attempt to balance business and government efforts in directing AI innovation, therefore showing a third way to compete in the global AI race.

Great Britain recently announced a national AI strategy. In a mixture of private, academic and government resources, the country is pledging $1.4 billion in investment. A remarkable piece of the plan is the funding allocated for a Center for Data Ethics, which will develop codes for the safe and ethical use of machine learning. Another noteworthy part of the plan is the initiative to fund 1,000 new PhDs and 8,000 teachers for UK secondary schools. This move will not only spur further innovation but also ensure the British workforce is prepared to absorb the changes brought by AI developments. It is essential that governments plan ahead to prepare the next generation for the challenges and opportunities of emerging technologies like AI. In this area, the UK’s plan sets a good precedent for other countries to follow as they look for ways to prepare their workforces for future AI disruptions. Such moral leadership will not only guide European institutions but also help companies worldwide make better choices with their AI technologies. This perspective is essential to ensure AI development does not descend into an uncontrolled arms race.

 

In the European Union, France has also announced a national plan following a similar approach to the UK’s. Beyond the mix of private and government investment totaling 1.5 billion euros, the country is also setting up an open data approach that helps both businesses and customers. On the one hand, businesses can look to a centralized place for data; on the other, customers get centralized transparency into how their data is being collected and used. If executed well, this central data repository can provide quality data for AI models while ensuring privacy concerns are mitigated. The strategy also includes innovative ideas such as harnessing the power of AI to solve environmental challenges and a narrow focus on industries in which the country can compete. Similar to the British approach, the French plan also includes funding for an ethics center.

While Germany has not announced a comprehensive plan to date, the country already leads in AI within the automotive industry. Berlin is considered the fourth-largest hub for AI startups, and an area in Southern Germany known as Cyber Valley is becoming a center of collaboration between academia and industry on AI. Even without a stated national strategy, the country is well-positioned to become a hub of AI innovation for years to come.

These countries' individual strategies are further bolstered by a regional strategy that aims to foster collaboration between countries. Earlier this year, the European Commission pledged 20 billion euros over the next two years for the 25-country bloc. It proposed a three-pronged approach: 1) increase investment in AI; 2) prepare for socio-economic changes; 3) devise an appropriate ethical and legal framework. This holistic approach may not win the race but will certainly keep Europe as the moral leader in the field.

Conclusion

This short survey from these two blogs gives us a glimpse of the unfolding global AI race. The list here is not complete, but it represents three different types of approaches. On an axis of government involvement, China sits at one extreme (most) and the US at the other (least), with European countries somewhere in the middle. In all cases, advances in AI will come from education, government and private enterprise. Yet a nation's ability to coordinate, focus and control the development of AI can be the difference between those that harness the coming technological revolution for the prosperity of their people and those that struggle to survive its disruptions. Unlike previous races, this is not just about military supremacy. It touches every aspect of society and could become the dividing line between thriving and struggling nations.

Furthermore, how countries pursue this race can also have global impacts on the application of AI. This is where I believe the European model holds the most promise. The plans put forth by France and the UK could not only secure these countries' geo-political positions but could also benefit all nations. The regional approach and focus can also yield significant fruit in the future. Tying AI development efforts to ethical principles and sound policy is the best way to ensure that AI will be used for human flourishing. I hope other countries follow their lead and start anticipating how they want AI to be used inside their borders. The true winner of the global AI race should not be any nation or region but humanity as a whole. Here is where France's intention to use AI innovation to address environmental challenges is most welcome. When humanity wins, all countries benefit and the planet is better for it.

Who Will Win The Global AI Race? Part I: China vs USA

While the latest outrageous tweet by Kanye West fills up the news, a major development goes unnoticed: the global race for AI supremacy. Currently, many national governments are drafting plans for boosting AI research and development within their borders. The opportunities are vast and the payoff fairly significant. From a military perspective alone, AI supremacy can be the deciding factor in which country will be the next superpower. Furthermore, an economy driven by a thriving AI industry can spur innovation across multiple industries while also boosting economic growth. On the flip side, a lack of planning in this area could lead to increasing social unrest as automation destroys existing jobs and workers find themselves excluded from AI-created wealth. There is just too much at stake to ignore. In this two-part blog, I'll look at how the top players in the AI race are planning to harness the technology to their advantage while also mitigating its potential dangers.

China’s Moonshot Effort for Dominance

China has bold plans for AI development in the coming years. The country aims to be the undisputed AI leader by 2030. It holds a distinctive advantage in its ability to collect data from its vast population, yet it is still behind Western countries in algorithms and research. China does not have the privacy requirements that are standard in the West, which allows it almost unfettered access to data. If data is the raw material of AI, then China is rich in supply. However, China is a latecomer to the race and therefore lacks the accumulated knowledge held by leading nations. The US, for example, started tinkering with AI technology as early as the 1950s. While the gap is not insurmountable, it will take a herculean effort to match and then overtake the current leadership held by Western countries.

Is China up to the challenge? Judging by its current plan, the country has a shot. The ambitious strategy both acknowledges the areas where China needs to improve and outlines a plan to address them. At its center is the plan to develop a complete ecosystem of research, development and commercialization connecting government, academia and businesses. Within that, it includes plans to use AI to make the country's infrastructure "smarter" and safer. Furthermore, it anticipates heavy AI involvement in key industries like manufacturing, healthcare, agriculture and national defense. The last one clearly raises concern among neighboring countries that fear a rapid change in the Asian balance of power. Japan and South Korea will be following these developments closely.

China seeks to accomplish these goals through a partnership between government and large corporations. In this arrangement, the government has greater ability to control both the data and the process by which these technologies develop. This may or may not play to China's advantage; only time will tell. Of all the plans surveyed here, China's has the longest range and, assuming the Communist Party remains in power, the advantage of continuity often missing from liberal democracies.

While portions of China's strategy are concerning, the world has much to learn from the country's moonshot effort in this area. Clearly, the Chinese government has realized the technology's importance and potential for the future of humanity. It is now working to ensure that this technology leads to a prosperous Chinese future. Developing countries would do well to learn from the Chinese example or see themselves once again politically subjugated by the nations that master these capabilities first. Unlike China, most of these nations cannot count on a vast population and a favored geo-political position. The key for them will be to find niche areas where they can focus their efforts and excel.

US’ Decentralized Entrepreneurship

Uncle Sam sits in a paradoxical position in this race. While it is the undisputed leader, with an advantage in patents and an established ecosystem for research and development, the country lacks a clear plan from the government. This was not always the case. In 2016, the Obama administration was one of the first to spell out principles to guide public investment in the technology. The plan recognized that the private sector would lead innovation, yet it aimed at establishing a role for the government to steward the development and application of AI. With the election of Donald Trump in 2016, this plan is now uncertain. No decision has been announced on the matter, so it is difficult to say what role the government will play in the future development of AI in the United States. While the current administration has kept investment levels untouched, there is no resolution on a future direction.

Given that many breakthroughs are happening in large American corporations like Google, Facebook and Microsoft, the US will undoubtedly play a role in the development of AI for years to come. However, a lack of government involvement could mean a lopsided focus on commercial applications. The danger in such a path is that common-good applications that do not yield a profit will be crowded out by those that do. For example, the US could become the country with the most advanced gadgets while the majority of its population lacks access to AI-enabled healthcare solutions.

Another downside of a corporate-focused AI strategy is that these large conglomerates are becoming less and less tied to their nation of origin. Their headquarters may still be in the US, but much of the work and even research is now being done in other countries. Even in their US offices, the workforce is oftentimes foreign-born. We can debate the merits and downsides of this development, but for a president who was elected to put "America first," his administration's disinterest in AI is quite ironic. This is even more pressing as other nations put together their strategies for harnessing the benefits and stewarding the dangers of AI. For a president who loves to tweet, his silence on this matter is rather disturbing.

The Bottom Line

China and the US are currently pursuing very different paths in the AI race. Without a clear direction from the government, the US is relying on private enterprise to lead progress in this field. Given the US' current lead, such a strategy can work, at least in the short run. China is coming from the opposite side, where the government is leading the effort to coordinate and optimize the nation's resources for AI development. China's wealth of centralized data also gives it a competitive advantage, one it must leverage in order to make up for being a latecomer.

Will this be a battle between entrepreneurship and central planning? Both approaches have their advantages. The first counts on the ingenuity of companies to lead innovation. The competitive advantage awaiting AI leaders has huge upsides in profit and prestige, and it is this entrepreneurial culture that has driven the US to lead the world in technology and research. Hence, such a decentralized effort can still yield great results. On the flip side, a centralized effort, while possibly stifling innovation, has the advantage of focusing efforts across companies and industries. Given AI's potential to transform numerous industries, this approach can also succeed and yield tremendous returns.

What is missing from both strategies is a holistic view of how AI will impact society. While there are institutions in the US working on this issue, the lack of coordination with other sectors can undermine even the best efforts. On this spectrum between centralized planning and decentralized entrepreneurship, there must be a middle ground. That is the topic of the next blog, where I'll discuss Europe's AI strategy.

Blockchain and AI: Powerful Combination or Concerning Trend?

Bitcoin is all over the news lately. After the meteoric rise of these financial instruments, the financial world is both excited and fearful: excited to get on the bandwagon while it is on the rise, but scared that this could be another bubble. Even more interesting has been the rise of blockchain, the underlying technology that enables Bitcoin to run (for those wondering what this technology is, check out this video). In this blog, I reflect on the combination of AI and blockchain by examining an emerging startup in the field. Can AI and blockchain work together? If so, what types of applications could this combination be used for?

Recently, I came across this article from Coincentral that starts answering the questions above. In it, Colin Harper interviews the CEO of Deep Brain Chain (DBC), one of the first startups attempting to bring AI and blockchain technology together. DBC's CEO He Yong puts it this way:

DBC will allow AI companies to acquire data more easily.  Because a lot of data are private data.  They have heavy security, such as on health and on finance.  For AI companies, it’s almost next to impossible to acquire these data.  The reason being, these data are easy to copy.  After the data is copied, they’re easy to leak.  One of DBC’s core features is trying to solve this data leakage issue, and this will help AI companies’ willingness to share data, thus reducing the cost you spend on data, to increase the flow of data in society. This will expand and really show the value of data.

As somebody who works within a large company using reams of private data, I can definitely see the potential of this combination. Blockchain can provide privacy through encryption, which could facilitate the exchange of data between private companies. Not that this does not happen already, but it is certainly discouraged given the issues the CEO raises above. Certainly, the upside of aggregating this data for predictive modeling is fairly significant. Companies would have complete datasets that reveal sides of the customer not visible to any single company.

However, as a citizen, such a development also makes me ponder. Who will get access to this shared data? Will this be done in a transparent way so that regulators and the general population can monitor the process? Who will really benefit from this increased exchange of private data? While I agree that efficiency would improve and costs would decrease, my main concern is the aims for which this data will be used. Don't get me wrong: targeted marketing that follows privacy guidelines can actually be beneficial to everybody, and richer data can also help a company improve customer service.

With that said, the way He Yong describes it, this combination looks poised to primarily benefit large private companies that will use the data for commercial aims. Is this really the promise of an AI and blockchain combination: allowing large companies to know even more about us?

Further in the interview, He Yong suggested that blockchain could actually help assuage some of the fears that AI could get out of control:

Some people claim that AI is threatening humanity.  We think that blockchain can stop that, because AI is actually not that intelligent at the moment, so the threat is relatively low.  But in a decade, two decades, AI will be really strong, a lot stronger than it is now.  When AI is running on blockchain, on smart contracts, we can refrain AI.  For example, we can write a blockchain algorithm to restrain the computational power of AI and keep it from acting on its own.

Given my limited knowledge of blockchain, it is difficult to evaluate whether this is indeed the case. I still believe that the biggest threat of AI is not the algorithms themselves but how they are used. Blockchain, as described here, can help make the process more robust, giving human controllers more tools to halt algorithms gone haywire. Yet beyond that, I am not sure how much more it can act as a true "check and balance" for AI.

I'll be monitoring this trend in the next few months to see how it develops. Certainly, we'll see more and more businesses emerge seeking to marry blockchain and AI. These two technologies will disrupt many industries by themselves; combining them could be even more impactful. I am interested to see whether they can be combined for human flourishing. That remains to be seen.

Automated Research: How AI Will Speed Up Scientific Discovery

The potential of AI is boundless. Currently, there is a lot of buzz around how it will change industries like transportation, entertainment and healthcare. Less known but even more revolutionary is how AI could change science itself. In a previous blog, I speculated about the impact of AI on academic research through text mining. The implications of automated research described here are even more far-reaching.

Recently, I came upon an article in Aeon that described exactly that. In it, biologist Ahmed Alkhateeb eloquently makes his argument in the excerpt below:

Human minds simply cannot reconstruct highly complex natural phenomena efficiently enough in the age of big data. A modern Baconian method that incorporates reductionist ideas through data-mining, but then analyses this information through inductive computational models, could transform our understanding of the natural world. Such an approach would enable us to generate novel hypotheses that have higher chances of turning out to be true, to test those hypotheses, and to fill gaps in our knowledge.

As a good academic, the author says a lot with a few words in the paragraph above. Let me unpack his statement a bit.

His first point is that in the age of big data, individual human minds are incapable of effectively analyzing, processing and making meaning of all the information available. There was a time when all the knowledge of a discipline was in books that could be read, or at least summarized, by one person. Furthermore, traditional ways of doing research (lab experimentation, sampling, controlling for externalities, hypothesis testing) take a long time and give only a narrow view of reality. Hence, in a time when big data is available, such an approach will not be sufficient to harness all the knowledge that could be discovered.

His second point is to suggest a new approach that incorporates Artificial Intelligence through pattern-seeking algorithms that can effectively and efficiently mine data. The Baconian method simply means discovering knowledge through the disciplined collection and analysis of observations. He proposes an algorithmic approach that would mine data, come up with hypotheses through computer models, then collect new data to test those hypotheses. Furthermore, this process would not be limited to an individual but would draw from the knowledge of a vast scientific community. In short, he proposes including AI in every step of scientific research as a way to improve quality and accuracy. The idea is that an algorithmic approach would produce better hypotheses and also test them more efficiently than humans.
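That mine-hypothesize-test loop can be pictured with a deliberately tiny sketch. Everything below is hypothetical and uses synthetic data: the "observations" are random numbers (variable b genuinely tracks a; c is pure noise), the "hypotheses" are strongly correlated variable pairs mined from a training split, and the "test" checks whether each correlation holds up on held-out data. Real automated-research systems would be vastly more sophisticated, but the shape of the loop is the same.

```python
import random

def pearson(xs, ys):
    """Plain Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

random.seed(0)
# Synthetic "observations": b tracks a with a little noise; c is unrelated.
a = [random.gauss(0, 1) for _ in range(200)]
b = [x + random.gauss(0, 0.3) for x in a]
c = [random.gauss(0, 1) for _ in range(200)]
data = {"a": a, "b": b, "c": c}

# Step 1: mine the first half of the data for candidate hypotheses
# ("these two variables are related"), keeping pairs with |r| > 0.5.
half = 100
names = list(data)
candidates = [(p, q) for i, p in enumerate(names) for q in names[i + 1:]
              if abs(pearson(data[p][:half], data[q][:half])) > 0.5]

# Step 2: test each surviving hypothesis on the held-out second half.
confirmed = [(p, q) for p, q in candidates
             if abs(pearson(data[p][half:], data[q][half:])) > 0.5]
print(confirmed)  # only the genuine a-b relationship should survive
```

The point of the two-step split is exactly the author's: generating a hypothesis and testing it must use separate observations, otherwise the "discovery" may just be noise.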

As the author concedes, current algorithms and approaches are not fully adequate for the task. While AI can already mine numeric data well, text mining is a more recent development. Computers think in numbers, so getting them to make sense of text requires time-consuming processes that translate text into numeric values. Relevant to this topic, the Washington Post just put out an article about how computers have now, for the first time, beaten human performance on a reading comprehension test. This is an important step if we want to see AI more involved in scientific research and discovery.

How will automated research impact our world?

The promise of AI-assisted scientific discovery is remarkable. It could lead to cures for diseases, the discovery of new energy sources and unprecedented breakthroughs in technology. Another outcome would be the democratization of scientific research. As research gets automated, it becomes easier for others to do it, just as Windows made the computer accessible to people who do not code.

In spite of all this potential, such a development should cause us to pause for reflection. It is impressive how much of our mental capacity is being outsourced to machines. How comfortable are we with this inevitable meshing of bodies and electronics? Who will lead, fund and direct automated research? Will it lead to enriching corporations or to improving quality of life for all? I disagree with the author's statement that automated research would make science "limitlessly free." Even as machines do the work, humans still control the direction and scope of the research. As we ship more human activity to machines, ensuring they reflect our ethical standards remains a human mandate.

AI Reformation: How Tech can Empower Biblical Scholarship

In a past blog, I talked about how an AI-enabled Internet was bound to bring a new Reformation to the church. In this blog, I want to talk about how AI can revolutionize biblical scholarship. Just as the printing press brought the Bible into homes, AI-enabled technologies can bring advanced study tools to the individual. This shift alone could change the face of Christianity for decades to come.

The Challenges of Biblical Scholarship

First, it is important to define what biblical scholarship is. For those of you not familiar with it, this field is probably one of the oldest academic disciplines in Western academia. The study of Scripture was one of the primary goals for the creation of universities in the Middle Ages, and the field hence boasts an arsenal of literature unparalleled by most other academic endeavors. Keep in mind this is not your average Bible study you may find in a church. Becoming a Bible scholar is an arduous and long journey. Students desiring to enter the field must learn at least three ancient languages (Hebrew, Greek and usually Aramaic or Akkadian), German, English (for non-native speakers) and usually a third modern language. It takes about 10 years of graduate-level work just to get started. To top that off, those who complete these initial requirements face dismal career options, as both seminaries and research interest in the Bible have declined in recent decades. Needless to say, if you know a Bible scholar, pat them on the back and thank them. The work they do is very important not only for the church but also for society in general, as the Bible has deeply influenced other fields of knowledge like Philosophy, Law, Ethics and History.

Because of the barriers to entry described above, it is not surprising that many who considered this path (including the writer of this blog) have opted for alternatives. You may be wondering what that has to do with AI. The reality is that while the supply of Bible scholars is dwindling, the demand for work is increasing. The Bible is by far the most copied text of antiquity. The New Testament alone survives in over 5,000 manuscripts found across different geographies and time periods, many discovered in the last 50 years. On top of that, because the field has been around for centuries, there are countless commentaries and other works interpreting, disputing, dissecting and adding to the original texts. Needless to say, this looks like a great candidate for machine-enhanced human work. No human being could possibly research, analyze and distill all this information effectively.

AI to the Rescue

As you may know, computers do not see the world in pictures or words. Instead, all they see is numbers (0s and 1s, to be more exact). Natural Language Processing (NLP) is the set of techniques that translates words into numbers so the computer can work with them. One simple way to do that is to count how many times each word shows up in a text and list the counts in a table. This simple word-count exercise can already shed light on what the text is about. More advanced techniques account not only for word incidence but also for how close words are to each other in meaning. I could go on, but for now suffice it to say that NLP starts "telling the story" of a text, albeit in numeric form, to the computer.
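The word-count idea above can be sketched in a few lines of Python. This is a toy illustration, not the implementation of any particular Bible software; the sample passage is Psalm 117 in public-domain KJV wording.

```python
from collections import Counter
import re

def term_frequencies(text):
    """Lowercase the text, split it into words, and count occurrences."""
    words = re.findall(r"[a-z']+", text.lower())
    return Counter(words)

# A short sample passage (Psalm 117, KJV).
sample = ("O praise the LORD, all ye nations: praise him, all ye people. "
          "For his merciful kindness is great toward us: and the truth of "
          "the LORD endureth for ever. Praise ye the LORD.")

counts = term_frequencies(sample)
print(counts.most_common(5))  # the most frequent words already hint at the theme
```

Even this crude table surfaces "praise" and "LORD" as the dominant words, which is exactly the kind of numeric "story" that more advanced NLP builds on.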

What I describe above is already present in leading Bible software, where one can study word counts till Kingdom come (no pun intended). Yet this is only the first step in enabling computers to mine the text for meaning and insight. When you add AI to NLP, that is when things start getting interesting. Think of a Watson-type algorithm that you can ask a question and it finds the answer in the text. One can now analyze sentiment, genre and text structure, to name a few, in a more efficient way. With AI, computers are able to make connections between texts that were previously possible only for the human mind, except that they can do it a lot faster and, when well-trained, with greater precision.

One example is sentiment analysis, where the algorithm is not looking at the text itself so much as the subjective notions of tone it expresses. This technique is currently used to analyze customer reviews in order to understand whether a review is positive or negative. I manually attempted this for an Old Testament class assignment in which I mapped out the "sentiment" of Isaiah. I categorized each verse with a color to indicate whether it was positive (blessing or worship) or negative (condemnation or lament), then zoomed out to see how the book's sentiment oscillated across the chapters. This laborious analysis made me look at the book through a whole different lens. As AI applications become more common, these analyses and visuals could be created in a matter of seconds.
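Here is a minimal sketch of what that verse-by-verse exercise might look like automated. The lexicon is a tiny hand-made stand-in I invented for illustration; real sentiment analysis relies on much larger lexicons or trained models, but the scoring idea is the same.

```python
# Toy lexicons standing in for a real sentiment resource (illustrative only).
POSITIVE = {"bless", "blessed", "praise", "rejoice", "comfort", "salvation"}
NEGATIVE = {"woe", "wrath", "destroy", "mourn", "lament", "judgment"}

def verse_sentiment(verse):
    """Score a verse as +1 (positive), -1 (negative), or 0 (neutral)."""
    words = verse.lower().replace(",", " ").replace(".", " ").split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return (score > 0) - (score < 0)  # sign of the score

verses = [
    "Comfort ye, comfort ye my people.",
    "Woe unto them that call evil good.",
    "Sing, O heavens, and rejoice, O earth.",
]
print([verse_sentiment(v) for v in verses])  # -> [1, -1, 1]
```

Mapping those signed scores to colors across all 66 chapters of Isaiah would reproduce, in seconds, the oscillating picture that took me a whole assignment to draw by hand.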

A Future for Biblical Scholarship

Now, by showing these examples I don't mean to say that AI will replace scholars. Algorithms still need to be trained by humans who understand the text's original languages and its intricacies. Yet I do see a future where biblical scholarship is no longer hampered by the barriers to entry I described above. Imagine a future where scholars collaborate with data scientists to uncover new meaning in an ancient text. I also see an opportunity for professionals who know enough about both Biblical studies and technology to become valuable additions to research teams. (Are you listening, Fuller Seminary? How about a new MA in Biblical Studies and Text Mining?) The hope is that with these tools, more people can be involved in the process and collaboration between researchers can increase. The task of Biblical research is too large to be limited to a select group of highly educated scholars. Instead, AI can facilitate crowdsourcing the work of analyzing and making meaning of the countless texts currently available.

With all that said, it is difficult to imagine a time when the Bible is just a book to be analyzed. Instead, it is to be experienced, wrestled with and discussed. New technologies will not supplant that. Yet could they open new avenues of meaning never before conceived by the human mind? What if AI-enabled biblical scholarship could not just uncover new knowledge but also facilitate revelation?

How Will AI Accelerate Cyborgization?

In a previous blog, I described how current technology adoption is transforming us into cyborgs. In this blog, I want to show how AI will accelerate and reinforce cyborgization and why this matters. Much has been said about the emergence of AI as the fearsome “other” that is coming to challenge humanity. A much more pressing conversation is how AI will redefine humanity. It is not the robot against us but the robot in us that we need to think about. That is why it is important to understand how AI will accelerate the march toward cyborgization.

Enabling Better Interfaces

AI technologies will speed up the march towards cyborgization by enabling more human-friendly interfaces. This is difficult to imagine now, when our user experience is tied to a "QWERTY" keyboard and voice recognition is still in its beginning stages (hence why I keep having arguments with Siri because she does not understand what I am saying). Yet it is not a leap to imagine a world in which we do not type but speak to devices. This is only possible if these machines can hear and understand us, and only safe if they can recognize who we are by voice or vision. Last week, Apple launched the iPhone X with a visual recognition feature that allows you to unlock the phone by simply looking at the screen. This is a breakthrough step in face recognition. Also, as Amazon, Google and Apple race to develop the leading voice assistant, speaking to our devices will become more common. As AI improves, so will our ability to interact with machines in more human-like ways. This will transform how we use these devices, allowing them to play a larger role in our daily lives.

Bringing Order to Digital Chaos

Interfaces are not the only area enhanced by AI. Another notable one is AI's ability to bring order to our current digital experience. To illustrate this concept, just think about what it would be like to connect your brain directly to your Facebook or Twitter feed. It would result in a major headache driven by a jumble of unrelated topics, pictures and comments flashing through your mind. Needless to say, it would be disorienting. This is the state of social media delivery and why extended use of it can be harmful to our mental health. Now imagine if our digital experience could be organized and filtered by an intelligent agent. Let's say there was an AI-enabled app that could automatically filter and organize what you see based on who you are, your mood and the time of day. Wouldn't that transform your digital experience? This way, AI would not only allow easier interfaces with devices but also enhance the experience by learning about us as individuals. It is daunting to imagine that devices could know our thoughts and feelings, but this is no longer a far-fetched idea. We are surely giving away enough data about ourselves for them to do just that. As these devices "know" us better, we will also be more willing to use, wear or embed them in our bodies.

Facilitating Life Extension

The third area in which AI will impact the march is life extension. This is possibly the most controversial and promising way AI technologies can impact us. While on paternity leave over the last five weeks, I had the chance to observe our current healthcare system. I am grateful for the care we received through medical interventions and advice as we welcomed our son into the world. Yet it was also clear how rudimentary our healthcare is. We currently rely on painfully invasive tests, disconnected systems, fragmented knowledge and healthcare workers' memories for our medical data. No wonder we are running into so many problems in this area. If we could just improve how we collect, store and analyze health data, we could advance the quality of care significantly. Doing so would require wearing or embedding sensors in our bodies for live monitoring and data collection. AI models could analyze the data coming from these sensors and translate it into individualized care plans; that is, medical care tailored to your body's specific genetic and real-time conditions. Moreover, it would allow for building predictive models to estimate lifespan. The hope is that those models would not only tell us how long we will live but also uncover ways to prolong that lifespan.

How Will These Advances Move Us Toward Cyborgization?

If it becomes easier to communicate with devices, it also becomes easier to involve them in all aspects of our lives. The movie Her explores this trend by imagining a world where humans develop romantic relationships with their digital assistants. Here, I want to suggest it could also make these devices indispensable to our bodies. In short, they would culminate in full-blown auxiliary brains. From what we currently know, our brain has no "hard drive" that stores memories; the brain structure itself is the "memories." However, if the interface advances from hearing our voice to actually hearing our thoughts, then the device could become an embedded hard drive, storing all the information coming from our senses. With the technology already being developed in intelligent agents, these auxiliary brains would not only store data but also organize, filter and surface it based on what they learn from us. Moreover, they would benefit from body sensors and connection to advanced medical data to possibly extend our lifespans. Sci-fi literature already explores scenarios with these possibilities. For now, suffice it to say that our mobile phones are the first generation of our auxiliary brains.

Are we ready to move into enhanced humanity? With the acceleration made possible by Artificial Intelligence, this reality may be here sooner than you think. It would be naive to simply embrace it uncritically or reject it outright. The main issue is not IF but HOW this will happen.

Are we willing to embrace the opportunity while also recognizing the dangers of an enhanced humanity?

Are we ready to become responsible cyborgs?

Confronting The March Towards Cyborgization

Science fiction cyborgs are scary. They make me wonder whether we could ever co-exist with such beings in real life. Well, if you are looking for cyborgs, you might as well start by looking in the mirror. Our race to adopt the latest technologies is slowly but surely turning all of us into these fearsome creatures. Cyborgization is upon us whether we like it or not.

Is this good or bad? Let me describe the trends first, and then we can discuss it. We can never wrestle with a reality we do not first name. As the adage goes: “the first step to healing is admitting we have a problem.”

Don’t believe me? Consider the following:

Last week Apple announced that it is making the Apple Watch less dependent on the iPhone. That means you’ll soon get most of the iPhone’s functionality on your watch. Elon Musk talks about sending nano transmitters into our bloodstream. In Wisconsin, a company is experimenting with just that, paying employees to implant chips in their hands. These are just a few examples of “body hacking,” where people are pushing the envelope of fusing technology with our bodies. If technology conglomerates have their way, we are moving from buying devices we use to adding them to our bodies. The trend can be depicted as such:

desktop>>laptop>>tablet>>smart phone>>wearable device>>implant

Certainly, not everybody will sign up for implants. Yet the fact that people are already willing to experiment with implants shows how far we have progressed along the spectrum above. Thirty years ago, only a few of us owned desktop computers in our homes. Last year, the number of smartphones in the world surpassed 2 billion, just shy of 30% of the world’s population. The march towards cyborgization is at full speed.

Cyborgs in Action: Sousveillance

So what would a world with cyborgs look like? What would we do with our extended bodies?

The events in Charlottesville this weekend reminded us of the evil undercurrents of racism that still pervade our culture.

[Here I must stop to make a few comments. Racism goes against everything Christianity stands for. To say otherwise, or to pretend there is equivalency on both sides, betrays who we are. My thoughts and prayers are with the victims and their families. I also pray that we are able to come together to confront this evil in our midst.]

Curiously, the march and its aftermath also became a notable experiment in sousveillance. Don’t know what that means? No worries; I just learned about it this week. Sousveillance is the fancy name for the growing phenomenon of people using their phones to record real-life events.

While this has been happening for a while, the novelty was the crowdsourced attempt to identify members of the march through social media. Basically, somebody recorded the faces of white supremacist marchers and posted them on Twitter, asking users to recognize and identify the individuals. As a result, one of the marchers lost his job after being outed in this social-media-driven act of sousveillance. Such a development would not have been possible without the advent of devices that allow us to film and share images with ease.

This is an example of things to come: ordinary individuals leveraging their body-extension tools to do things that were not possible otherwise. On the one hand, this could lead to quicker apprehension of criminals, both by identifying them and by providing physical evidence of their crimes. On the other hand, it could quickly lead to an augmented version of mob mentality, where people are branded guilty and made to pay for crimes they did not commit. While many are wary of government surveillance, citizen sousveillance can offer a welcome check. This is just one application of how cyborgization can change our world.

Framing the Conversation

So maybe becoming cyborgs is not such a bad thing. However, before dismissing or embracing this trend, it is important to ask a few questions. Here are some that come to mind.

Who is driving the march and who benefits most from it?

How does it help the most vulnerable?

How does it affect human relationships?

How do these devices enhance or diminish our humanity?

My biggest concern is not how fast these technological extensions are being adopted but how it is done. At this point, most of it is driven by marketing from large technology conglomerates telling us that we must adopt the latest gadget or become irrelevant. The subtle message is not just that our current gadgets are outdated but that we ourselves are becoming useless.

Certainly, the marketing of artificial needs should not be the main driver for adopting these technologies. Instead, their adoption should undergo a deliberate process in which the questions listed above are at the forefront. Technology should never be an end in itself but a means to life-enriching goals. We need to evolve from technology consumers into thoughtful agents who leverage technology for human flourishing.

At a personal level, a simple question would be: does this device improve my quality of life? If it does not, then it may be time to rethink its usage.