5 Changes the Biden-Harris Administration will Bring to AI Policy

As a new administration takes the reins of the federal government, there is much speculation about how it will steer policy in the area of technology and innovation. The question is all the more relevant as social media giants grapple with free speech on their platforms, Google struggles with AI ethics, and concerns over video surveillance grow. On the global stage, China moves forward with its ambitions of AI dominance while Europe continues to grapple with issues of data governance and privacy.

In this scenario, what will a Biden-Harris administration mean for AI in the US and on the global stage? In a previous blog, I described the decentralized US AI strategy, driven mainly by large corporations in Silicon Valley. Will a Biden administration bring continuity to this trend, or will it change direction? While it is too early to say for sure, we should expect the five shifts outlined below:

(1) Increased investment in non-military AI applications: In contrast to the $2 billion promised by the Trump White House, Biden plans to ramp up public investment in R&D for AI and other emerging technologies. Official campaign statements promise a whopping $300 billion of investment. This is a significant change, since public research funds tend to aim at socially conscious applications rather than the profit-seeking ventures preferred by private investment. These investments should steer innovation toward social goals such as fighting climate change, revitalizing the economy, and expanding opportunity. On the education front, $5 billion is earmarked for graduate programs in STEM teaching. These are important steps as nations across the globe seek the upper hand in this crucial technology.

(2) Stricter bans on facial recognition: While this is mostly speculation at this point, industry observers cite Kamala Harris's recent statements and actions as an indication of forthcoming stricter rules. In her plan to reform the justice system, she cites concerns with law enforcement's use of facial recognition and surveillance. In 2018, she sent letters to federal agencies urging them to take a closer look at the use of facial recognition in their own practices as well as in the industries they oversee. This keen interest could eventually translate into strong legislation to regulate, curtail, or even ban the use of facial recognition. The outcome will probably fall somewhere between Europe's proposed five-year ban and China's pervasive use of the technology to keep its population in check.


(3) Renewed antitrust push on Big Tech: The recent move by the Trump administration to challenge the Big Tech oligopoly should intensify under the new administration. Considering that the "FAMG" group (Facebook, Amazon, Microsoft, and Google) is at the avant-garde of AI innovation, any disruption to their business structures could impact advances in this area. Yet a more competitive tech industry could also mean an increase in innovation. It is hard to determine how this will ultimately impact AI development in the US, but it is a trend to watch in the next few years.

(4) Increased regulation: This is likely but not certain at this point. Every time a Democratic administration takes power, Wall Street's underlying assumption is that regulation will increase. Compared to the previous administration's appetite for dismantling regulation, the Biden presidency will certainly be a change. Yet it remains to be seen how it will proceed in the area of technology. Will the administration listen to experts and put science ahead of politics? AI will definitely be a test of that. The administration will certainly see government as a strong partner to private industry. It will also likely walk back Trump's business tax cuts, which could hamper innovation for some players.

(5) Greater involvement on the global stage: The Biden administration is likely to work more closely with allies, especially in Europe. Obama's AI principles released in 2012 became a starting point for the vigorous regulatory efforts that arose in Europe over the last five years. It would be great to see increased collaboration that helps the US establish strong privacy safeguards such as those outlined in the GDPR. With regard to China, Biden will probably be more assertive than Obama but less belligerent than Trump. This could translate into restricting access to key technologies and holding China's feet to the fire on surveillance abuses.

The challenges in this area are immense, requiring careful analysis and deliberation. Brash decisions based on ideological shortcuts can both hamper innovation and fail to safeguard privacy. It is also important to build a nimble regulatory apparatus that can respond to the evolving nature of this technology. While not as urgent as COVID and the economy, the federal government cannot afford to delay reforming AI regulation. Ethical concerns and privacy protection should be at the forefront, followed by incentives for social innovation.

Union Tech: How AI is Empowering Workers


Is technology empowering or hindering human flourishing?

This week, I found a promising illustration of empowerment. While driving back from South Carolina, I listened to an episode of the Technopolis podcast, which explores how technology is altering urban landscapes. Just as in a previous post, the podcast did not disappoint. In this episode, the hosts talk to Palak Shah from the National Domestic Workers Alliance digital lab. The advocacy group seeks innovative ways to empower the 2.5 million nannies, house cleaners, and care workers in the United States. Because its workforce is highly distributed (most domestic workers work for one or a few households, making it difficult to organize the way auto workers could), the group quickly saw that technology was the best way to reach and engage these workers.

The lab developed two main products: the Alia platform and the La Alianza chatbot. The platform aggregates small contributions from clients to offer benefits to workers. One of the biggest challenges domestic workers face is that they have no safety net: most get paid only when they work and do not have health insurance. By pooling workers and collecting a small additional contribution from clients with little overhead, the platform is able to give workers some of these benefits. The chatbot offers news and resources to over 200,000 domestic worker subscribers.

When the pandemic hit, the lab team, with some help from Google, was able to pivot fully to address newly emerging problems. The Alia platform became a cash-transfer tool to help workers who were not getting any income; note that most of them did not receive unemployment benefits or government stimulus checks. Furthermore, the chatbot surveyed domestic workers to better understand the pandemic's impact on their livelihoods so the alliance could respond adequately to their needs.

The NDWA lab story illustrates well the power of harnessing technology for human flourishing.

As a technology worker myself, I wonder how my work is expanding or hindering human flourishing. Some of us may not be doing work that is directly aligned with a noble cause. Yet there are many small steps we can take to redirect technology toward a more human future.

Last week, in a history-making move, a group of Google employees formed the first union at a major technology company. Before that, tech employees had played crucial roles as whistleblowers exposing abuses and excesses at their companies. Beyond that, numerous tech workers have contributed their valuable skills to non-profit efforts in what is often called the "tech for good" movement. These efforts range from hackathons to long-term projects organized by foundations embedded within large multinational companies.

These are just a few examples of how technology workers are taking steps to hold large corporations accountable and contribute to their communities. There are many other ways in which one can work toward human flourishing.

How is your work contributing to human flourishing today?

Preparing for a Post-COVID-19 AI-driven Workplace

Are we ready for the change this pandemic will bring? Are we ready to face workplace threats, once envisioned as years away, that are now accelerating toward us? What can this pandemic teach us about staying useful in a future where AI will continue to rearrange the workplace?

Sign of Things to Come

As the coronavirus spread rapidly through Japan in March, workers in Sugito faced a sudden spike in demand for hygiene products such as masks, hand sanitizers, gloves, and medical protective supplies. To reduce the danger of contamination, Paltac, the company that operates the center, is pursuing a bold idea: it is not just considering but already beginning to deploy robots to replace human workers, at least until social distancing is no longer needed.

“Robots are just one tool for adapting to the new normal,” says Will Knight, senior writer for WIRED, in an article evaluating Japan's pandemic situation and how Japanese manufacturing companies are dealing with social distancing.

Some in the AI community see this as an unmatched opportunity to adapt and deliver, especially in medical robotics; had these technologies been pursued more thoroughly beforehand, perhaps the present outcome would not have been so catastrophic. Science journalist Matt Simon illustrates this in his article, reassuring us that “ever more sophisticated robots and AI are augmenting human workers.”

The greater question is: will AI replace or augment workers? Our future may depend on the answer.

A Bigger Threat than a Virus?

In 2016, Harvard scientists released a study on “12 risks that threaten human civilization.” In it, they not only outline the risks but also show ways we can prepare for them. Prophetically, the study places a global pandemic at the top of the list, classifying it as “more likely than assumed.” They could not have been more correct; we now wish global leaders had heeded their warnings.

What other risks does the study warn us about? The scientists consider artificial intelligence one of the greatest, yet least understood, global risks. In spite of its limitless potential, there is a grave risk of such intelligence developing into something uncontrollable.

The question is not so much whether this will happen, but when. The study predicts that AI could replicate and surpass human proficiency in speed and performance, bringing significant economic disruption. While current technology is nowhere near this scenario, the mere possibility of this predicament should give us pause for reflection.

Yet, as this pandemic has shown, the greatest threats are also the biggest opportunities for doing good in the world.

Learning to Face the Unknown

Our very survival depends on our ability to stay awake, to adjust to new ideas, to remain vigilant and to face the challenge of change.

Martin Luther King Jr.

Change is inevitable. Whether it comes through exquisite new technology or a deadly virus, it will eventually disrupt our settled routines. The difference lies in how we position ourselves to face these adversities alongside those we love and are responsible for. If humans can correctly predict tragedies, how much more can we do to avoid them!

The key to the future is the ability to adapt in the face of change. People who only react to what is “predictable” will be replaced by robots or algorithms. For example, as a teacher, I studied many things but never thought I would have to become a YouTuber. No one ever taught me about the systems for delivering lessons over the internet; I was not trained for this! Yet, because of this pandemic, I now have to teach by creating videos and uploading them online. I am learning to become a worker of the future.

May we use this quarantined year as an incubating opportunity to prepare ourselves for a world that will not be the same.  May we train ourselves to endure challenges, and also to see the opportunities that lie in plain sight. This is my hope and prayer for all of you.

STAY HOME, STAY SAFE, STAY SANE


AI Impact on Jobs: How can Workers Prepare?

In a previous blog, I explored the main findings of a recent MIT paper on AI's impact on work. In this blog, I want to offer practical advice for workers worried about the future of their jobs. There is a lot of automation anxiety surrounding the topic, often amplified by sensationalist click-bait articles. Fortunately, the research from the MIT-IBM Watson paper offers information that is sensible and detailed enough to help workers take charge of their careers. Here are the main highlights.

From Jobs to Tasks

The first important lesson from the report is to think of your job as a group of tasks rather than a homogeneous unit. The average worker performs a wide range of tasks, from communicating issues and solving problems to selling ideas and evaluating others. If you have never thought of your job this way, here is a suggestion: track what you do in one work day. Pay attention to the different tasks you perform and write down the time it takes to complete them. Be specific; write descriptions that go beyond "checking emails." When you read and write emails, you are trying to accomplish something. What is it?

Once you do that for a few days, you start getting a clearer picture of your job as a collection of tasks. The next step then is to evaluate each task asking the following questions:

  • Which tasks bring the most value to the organization you work for?
  • Which tasks are repetitive enough to be automated?
  • Which tasks can be delegated or passed on to others on your team?
  • Which tasks do you do best, and which do you struggle with the most?
  • Which tasks do you enjoy the most?

As you evaluate your job through these questions, you can better understand not just how good a fit it is for you as an individual, but also how automation may transform your work in the coming years. As machine learning becomes more prevalent, the repetitive parts of your job are the most likely to disappear.
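The tracking exercise above can be sketched as a tiny script. Everything here is hypothetical, including the task names, the durations, and the `task_mix` helper; it is just one way to turn a few days of notes into a picture of your job's task mix:

```python
# A minimal sketch, assuming a plain list of (task, minutes) entries
# logged over a few work days, aggregated into each task's share of time.
from collections import defaultdict

def task_mix(log):
    """Aggregate logged minutes per task and return each task's share of the total."""
    totals = defaultdict(int)
    for task, minutes in log:
        totals[task] += minutes
    grand_total = sum(totals.values())
    return {task: minutes / grand_total for task, minutes in totals.items()}

# Hypothetical entries from one tracked work day
log = [
    ("triage customer issues", 90),
    ("write status report", 30),
    ("triage customer issues", 60),
    ("present proposal to team", 60),
]

# Print tasks from largest to smallest share of the day
for task, share in sorted(task_mix(log).items(), key=lambda kv: -kv[1]):
    print(f"{share:5.0%}  {task}")
```

Once the shares are visible, the evaluation questions above become much easier to answer task by task.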

Tasks on the rise

The MIT-IBM Watson report analyzed job listings over a period of ten years and identified groups of tasks that were in higher demand than others. That is, as jobs change, certain tasks become more valuable, either because they cannot be replaced by machine learning or because there is a growing need for them.

According to the research, tasks in ascendance are:

  • Administrative
  • Design
  • Industry Knowledge
  • Personal care
  • Service

Note that the last two tend to be part of lower-wage jobs. Personal care is an interesting one (e.g., hair stylists, in-home nurses). Even with the growing trend of automation, we still cannot teach a robot to cut hair. That soft but precise touch of the human hand is very difficult to replicate, at least for now.

How much of your job consists of any of the tasks above?

Tasks at risk

On the flip side, some tasks are in decline. Some of this is particular to more mature economies like the US, while other declines stem from the widespread adoption of technology. The tasks highlighted in the report are:

  • Media
  • Writing
  • Manufacturing
  • Production

The last two are no surprise, as the trend of offshoring or mechanizing these tasks has been underway for decades. The first two, however, are new. As technologies and platforms abound, these tasks become accessible to a wider pool of workers, which makes them less valuable in the workplace. Just think about what it took to broadcast a video in the past and what it takes now. In the era of YouTube, garage productions abound, sometimes with almost as much quality as studio productions.

If your job consists mostly of these tasks, beware.

Occupational Shifts

While looking at tasks is important, whole occupations are also being impacted. As AI adoption increases, these occupations either disappear or get absorbed into other occupations. Of those, it is worth noting that production and clerical jobs are in decline. Just as an anecdote, I have noticed how my workplace relies less and less on administrative assistants. The main result is that everybody now does their own scheduling, which used to be the domain of administrative jobs.

Occupations in ascendance are those in IT, health care, and education/training. The latter is interesting and indicative of a larger trend: as new applications emerge, there is a constant need for training and education. This benefits traditional educational institutions as well as entrepreneurial startups. Just consider the rise of micro-degrees and coding schools emerging in cities all over this country.

Learning as a Skill

In short, learning is imperative. Every worker, regardless of occupation or wage level, will be required to learn new tasks or skills. Long gone are the days when someone would acquire all their professional knowledge in college and then use it for a lifetime career. Continual training is the order of the day for anyone hoping to stay competitive in the workplace.

I am not talking just about formal training paths such as academic degrees or even training courses. I am talking about learning as a skill and a discipline for your day-to-day job. Whether from successes or mistakes, we must always look for learning opportunities. Sometimes the learning comes through research on an emerging topic; other times, it happens through observing others do something well. There are many avenues for learning new skills or information for those willing to look.

Do you have a training plan for your career? Maybe it is time to consider one.

AI and Women at the Workplace: A Sensible Guide for 2030

Even a few years in, the media craze over AI shows no sign of subsiding. The topic continues to fascinate, scare, and befuddle the public. In this environment, the McKinsey report on AI and women at the workplace is a refreshing exception. Instead of relying on hyperbole, it projects meaningful but realistic impacts of AI on jobs. Instead of a robot apocalypse, it speaks of a gradual shifting of tasks to AI-enabled applications. This is not to say the impact will be negligible: McKinsey still projects that between 40 million and 160 million women worldwide may need to transition into new careers by 2030. That is no small number when the low end is roughly the population of California! Yet it is still much lower than other predictions.

Impact on Women

So why do a report based on one gender? Simply put, AI-driven automation will affect men and women differently in the workplace because they tend to cluster in different occupations. For example, women are overrepresented in clerical and service-oriented occupations, all of which are bound to be greatly impacted by automation. Conversely, women are well represented in health-care-related occupations, which are bound to grow over the forecast period. These facts alone ensure that the genders will experience AI's impact differently.

There are, however, other factors impacting women beyond occupational clusters. Social norms often make it harder for women to make transitions. They have less time to pursue training or search for employment because they spend much more time than men on housework and child care. They also have lower access to digital technology and lower participation in STEM fields than men. That is why initiatives that empower girls to pursue study in these areas are so important and needed in our time.

The main point of the report is not that automation will simply destroy jobs but that AI will move opportunity between occupations and geographies. The issue is less an inevitable trend that will wipe out sources of livelihood than one that will require either geographic mobility or skill training. Those willing to make these changes are more likely to survive and thrive in this shifting workplace environment.

What Can You Do?

For women, it is important to keep your career prospects open. Are you currently working in an occupation that could face automation? How can you know? Well, think about the tasks you perform each day. Could they be easily learned and repeated by a machine? While all of our jobs have portions we wish were automated, if that applies to 60-80% of your job description, then you need to rethink your line of work. Look for careers that are bound to grow. That may not mean simply learning to code; also consider professions that require a human touch and cannot be easily replaced by machines. Finally, openness to moving geographically can greatly improve job prospects.

For parents of young girls, it is important to expose them to STEM subjects early on. A parent's encouragement can go a long way in helping them consider those areas as future career options. That does not mean they will all become computer programmers. However, early positive experiences with these subjects will give them the confidence later in life to pursue technical occupations if they so choose. A big challenge with STEM is the impression that it is hard, intimidating, and exclusive to boys. The earlier we break these damaging paradigms, the more we expand job opportunities for the women of the future.

Finally, for men concerned about the future job prospects of the women they love, the best advice is to get more involved in housework and child rearing. In short, if you care about the future of women in the workplace, change a diaper today and go wash those dishes. The more men participate in unpaid housework and child rearing, the more women will be empowered to pursue promising career paths.

ERLC Statement on AI: An Annotated Christian Response

Recently, the ERLC (Ethics and Religious Liberty Commission) released a statement on AI. This was a laudable step, as the Southern Baptists became the first large Christian denomination to address this issue directly. While this is a start, the document falls short on many fronts. From the start, the list of signers included very few technologists and scientists.

In this blog, I reproduce the original statement with my comments after each article. Judge for yourself, but my first impression is that we have a lot of work ahead of us.

Article 1: Image of God

We affirm that God created each human being in His image with intrinsic and equal worth, dignity, and moral agency, distinct from all creation, and that humanity’s creativity is intended to reflect God’s creative pattern.

Ok, that’s a good start, locating creativity as God’s gift and affirming the dignity of all humanity. Yet the statement exalts human dignity at the expense of creation. Because AI, and technology in general, is about humanity’s relationship to creation, getting the foundation right is important. It is not enough to highlight human primacy; one must clearly state our relationship with the rest of creation.

We deny that any part of creation, including any form of technology, should ever be used to usurp or subvert the dominion and stewardship which has been entrusted solely to humanity by God; nor should technology be assigned a level of human identity, worth, dignity, or moral agency.

Are we afraid of a robot takeover of humanity? Here it would have been helpful to distinguish between general and narrow AI. The first is still decades away, while the latter is already here and poised to change every facet of our lives. The challenge of narrow AI is not that it will usurp our dominion and stewardship but that it may lead us to forget our humanity. The drafters seem to be addressing general AI; maybe including more technologists in the mix would have helped.

Genesis 1:26-28; 5:1-2; Isaiah 43:6-7; Jeremiah 1:5; John 13:34; Colossians 1:16; 3:10; Ephesians 4:24

Article 2: AI as Technology

We affirm that the development of AI is a demonstration of the unique creative abilities of human beings. When AI is employed in accordance with God’s moral will, it is an example of man’s obedience to the divine command to steward creation and to honor Him. We believe in innovation for the glory of God, the sake of human flourishing, and the love of neighbor. While we acknowledge the reality of the Fall and its consequences on human nature and human innovation, technology can be used in society to uphold human dignity. As a part of our God-given creative nature, human beings should develop and harness technology in ways that lead to greater flourishing and the alleviation of human suffering. 

Yes, well done! This affirmation is where Christianity needs to be. We are for human flourishing and the alleviation of suffering. We celebrate and support Technology’s role in these God-given missions.

We deny that the use of AI is morally neutral. It is not worthy of man’s hope, worship, or love. Since the Lord Jesus alone can atone for sin and reconcile humanity to its Creator, technology such as AI cannot fulfill humanity’s ultimate needs. We further deny the goodness and benefit of any application of AI that devalues or degrades the dignity and worth of another human being.

I take it they mean that technology is a limited means and cannot ultimately be our salvation. I see here a veiled critique of transhumanism. Fair enough; the Christian message should celebrate AI’s potential but also warn of its limitations, lest we start giving it undue worth.

Genesis 2:25; Exodus 20:3; 31:1-11; Proverbs 16:4; Matthew 22:37-40; Romans 3:23

Article 3: Relationship of AI & Humanity

We affirm the use of AI to inform and aid human reasoning and moral decision-making because it is a tool that excels at processing data and making determinations, which often mimics or exceeds human ability. While AI excels in data-based computation, technology is incapable of possessing the capacity for moral agency or responsibility.

This statement seems to suggest the positive role AI can play in augmentation rather than replacement. I am just not sure that was ever in question.

We deny that humans can or should cede our moral accountability or responsibilities to any form of AI that will ever be created. Only humanity will be judged by God on the basis of our actions and that of the tools we create. While technology can be created with a moral use in view, it is not a moral agent. Humans alone bear the responsibility for moral decision making.

While it is hard to argue with this statement at face value, it overlooks the complexities of a world that is becoming increasingly reliant on algorithms. The issue is not that we are offloading moral decisions to algorithms but that algorithms are encoding the moral decisions of many humans at once. This reality is not addressed by simply restating human moral responsibility. This needs improvement.

Romans 2:6-8; Galatians 5:19-21; 2 Peter 1:5-8; 1 John 2:1

Article 4: Medicine

We affirm that AI-related advances in medical technologies are expressions of God’s common grace through and for people created in His image and that these advances will increase our capacity to provide enhanced medical diagnostics and therapeutic interventions as we seek to care for all people. These advances should be guided by basic principles of medical ethics, including beneficence, nonmaleficence, autonomy, and justice, which are all consistent with the biblical principle of loving our neighbor.

Yes, tying AI-related medical advances with the great commandment is a great start.

We deny that death and disease—effects of the Fall—can ultimately be eradicated apart from Jesus Christ. Utilitarian applications regarding healthcare distribution should not override the dignity of human life. Furthermore, we reject the materialist and consequentialist worldview that understands medical applications of AI as a means of improving, changing, or completing human beings.

As with my comment on Article 3, this one misses the complexity of the issue. How do you draw the line between enhancement and cure? Also, isn’t the effort to extend life an effective form of alleviating suffering? These issues do not lend themselves to simple propositions; they require more nuanced analysis and prayerful consideration.

Matthew 5:45; John 11:25-26; 1 Corinthians 15:55-57; Galatians 6:2; Philippians 2:4​

Article 5: Bias

We affirm that, as a tool created by humans, AI will be inherently subject to bias and that these biases must be accounted for, minimized, or removed through continual human oversight and discretion. AI should be designed and used in such ways that treat all human beings as having equal worth and dignity. AI should be utilized as a tool to identify and eliminate bias inherent in human decision-making.

Bias is inherent in the data fed into machine learning models. Work on the data, monitor the outputs, and evaluate the results, and you can diminish bias. Directing AI to promote equal worth is a good first step.
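As a rough illustration of "monitor the outputs," here is a minimal sketch of one common fairness check, the demographic parity gap. The classifier outputs, the group labels, and the `demographic_parity_gap` helper are all hypothetical:

```python
# A minimal sketch of monitoring model outputs for bias, assuming a
# hypothetical binary classifier and a protected-group label per person.
# The demographic parity gap is the difference in positive-outcome
# rates between the best- and worst-treated groups.

def demographic_parity_gap(predictions, groups):
    """Difference between the highest and lowest positive-prediction rate across groups."""
    rates = {}
    for group in set(groups):
        selected = [p for p, g in zip(predictions, groups) if g == group]
        rates[group] = sum(selected) / len(selected)
    values = sorted(rates.values())
    return values[-1] - values[0]

# Hypothetical model outputs for loan approvals (1 = approved)
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

gap = demographic_parity_gap(preds, groups)
print(f"Demographic parity gap: {gap:.2f}")  # group a rate 0.75, group b rate 0.25
```

A gap near zero suggests the groups receive positive outcomes at similar rates; in practice, libraries such as fairlearn provide more complete implementations of metrics like this, alongside the data work and output evaluation mentioned above.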

We deny that AI should be designed or used in ways that violate the fundamental principle of human dignity for all people. Neither should AI be used in ways that reinforce or further any ideology or agenda, seeking to subjugate human autonomy under the power of the state.

What about AI being used this way by large corporations? That was a glaring absence here.

Micah 6:8; John 13:34; Galatians 3:28-29; 5:13-14; Philippians 2:3-4; Romans 12:10

Article 6: Sexuality

We affirm the goodness of God’s design for human sexuality which prescribes the sexual union to be an exclusive relationship between a man and a woman in the lifelong covenant of marriage.

This seems like a roundabout way to use the topic of AI to fight culture wars. Why include this here? Or why not talk about how AI can help people find their mates and even help marriages? Please revise or remove!

We deny that the pursuit of sexual pleasure is a justification for the development or use of AI, and we condemn the objectification of humans that results from employing AI for sexual purposes. AI should not intrude upon or substitute for the biblical expression of sexuality between a husband and wife according to God’s design for human marriage. 

Ok, I take this as a condemnation of AI porn. Again, it seems misplaced on this list and could have been treated in other ways. Yes, AI can further increase the objectification of humans, and that is a problem. I am just not sure it is such a key issue that it belongs in a statement on AI. Again, more nuance and technical insight would have helped.

Genesis 1:26-29; 2:18-25; Matthew 5:27-30; 1 Thess 4:3-4

Article 7: Work

We affirm that work is part of God’s plan for human beings participating in the cultivation and stewardship of creation. The divine pattern is one of labor and rest in healthy proportion to each other. Our view of work should not be confined to commercial activity; it must also include the many ways that human beings serve each other through their efforts. AI can be used in ways that aid our work or allow us to make fuller use of our gifts. The church has a Spirit-empowered responsibility to help care for those who lose jobs and to encourage individuals, communities, employers, and governments to find ways to invest in the development of human beings and continue making vocational contributions to our lives together.

This is a long, confusing, and unhelpful statement. It seems to be addressing the job losses AI can bring without doing so directly. It gives a vague description of the church's role in helping individuals find work but does not address the economic structures that create job loss. It simply misses the point and does not add much to the conversation. Please revise!

We deny that human worth and dignity is reducible to an individual’s economic contributions to society alone. Humanity should not use AI and other technological innovations as a reason to move toward lives of pure leisure even if greater social wealth creates such possibilities.

Another confusing and unhelpful statement. Are we making work holy? What does “lives of pure leisure” mean? Is this a veiled attack on Universal Basic Income? I am confused. Throw it out and start over!

Genesis 1:27; 2:5; 2:15; Isaiah 65:21-24; Romans 12:6-8; Ephesians 4:11-16

Article 8: Data & Privacy

We affirm that privacy and personal property are intertwined individual rights and choices that should not be violated by governments, corporations, nation-states, and other groups, even in the pursuit of the common good. While God knows all things, it is neither wise nor obligatory to have every detail of one’s life open to society.

Another statement that needs more clarification. Treating personal data as private property is a start. However, people are giving their data away willingly. What is privacy in a digital world? This statement suggests the drafters’ unfamiliarity with the issues at hand. Again, technical support is needed.

We deny the manipulative and coercive uses of data and AI in ways that are inconsistent with the love of God and love of neighbor. Data collection practices should conform to ethical guidelines that uphold the dignity of all people. We further deny that consent, even informed consent, although requisite, is the only necessary ethical standard for the collection, manipulation, or exploitation of personal data—individually or in the aggregate. AI should not be employed in ways that distort truth through the use of generative applications. Data should not be mishandled, misused, or abused for sinful purposes to reinforce bias, strengthen the powerful, or demean the weak.

The intention here is good, and it points in the right direction. It also makes progress in denying that consent is the only necessary guideline and in condemning abusive uses. I would like it to be more specific in its call to corporations, governments, and even the church.

Exodus 20:15; Psalm 147:5; Isaiah 40:13-14; Matthew 10:16; Galatians 6:2; Hebrews 4:12-13; 1 John 1:7

Article 9: Security

We affirm that AI has legitimate applications in policing, intelligence, surveillance, investigation, and other uses supporting the government’s responsibility to respect human rights, to protect and preserve human life, and to pursue justice in a flourishing society.

We deny that AI should be employed for safety and security applications in ways that seek to dehumanize, depersonalize, or harm our fellow human beings. We condemn the use of AI to suppress free expression or other basic human rights granted by God to all human beings.

Good intentions with poor execution. The affirmation and denial are contradictory. If you affirm that AI can be used for policing, you have to concede that it will be used to harm some. Is using AI to suppress hate speech acceptable? I am not sure how this adds any insight to the conversation. Please revise!

Romans 13:1-7; 1 Peter 2:13-14

Article 10: War

We affirm that the use of AI in warfare should be governed by love of neighbor and the principles of just war. The use of AI may mitigate the loss of human life, provide greater protection of non-combatants, and inform better policymaking. Any lethal action conducted or substantially enabled by AI must employ human oversight or review. All defense-related AI applications, such as underlying data and decision-making processes, must be subject to continual review by legitimate authorities. When these systems are deployed, human agents bear full moral responsibility for any actions taken by the system.

Surprisingly, this was better than the statement above. It upholds human responsibility while recognizing that AI, even in war, can have life-preserving aims. I would have liked a better definition of defense uses, yet that is somewhat implied in the principles of just war. I must say this is an area that needs more discussion and further consideration, but this is a good start.

We deny that human agency or moral culpability in war can be delegated to AI. No nation or group has the right to use AI to carry out genocide, terrorism, torture, or other war crimes.

I am glad to see the condemnation of torture here. Lately, I am not sure where evangelicals stand on this issue.

Genesis 4:10; Isaiah 1:16-17; Psalm 37:28; Matthew 5:44; 22:37-39; Romans 13:4​

Article 11: Public Policy

We affirm that the fundamental purposes of government are to protect human beings from harm, punish those who do evil, uphold civil liberties, and to commend those who do good. The public has a role in shaping and crafting policies concerning the use of AI in society, and these decisions should not be left to those who develop these technologies or to governments to set norms.

The statement points in the right direction of public oversight. I would have liked it to be bolder and clearer about the role of the church. It should have also addressed corporations more directly. That seems to be a blind spot in a few articles.

We deny that AI should be used by governments, corporations, or any entity to infringe upon God-given human rights. AI, even in a highly advanced state, should never be delegated the governing authority that has been granted by an all-sovereign God to human beings alone.

Glad to see corporations finally mentioned in this document, making this a good start.

Romans 13:1-7; Acts 10:35; 1 Peter 2:13-14

Article 12: The Future of AI

We affirm that AI will continue to be developed in ways that we cannot currently imagine or understand, including AI that will far surpass many human abilities. God alone has the power to create life, and no future advancements in AI will usurp Him as the Creator of life. The church has a unique role in proclaiming human dignity for all and calling for the humane use of AI in all aspects of society.

Again, the distinction between narrow and general AI would have been helpful here. The statement seems to be addressing general AI. It also gives the impression that AI is threatening God. Where is that coming from? A more nuanced view of biology and technology would have been helpful here too; they seem to be jumbled together. Please revise!

We deny that AI will make us more or less human, or that AI will ever obtain a coequal level of worth, dignity, or value to image-bearers. Future advancements in AI will not ultimately fulfill our longings for a perfect world. While we are not able to comprehend or know the future, we do not fear what is to come because we know that God is omniscient and that nothing we create will be able to thwart His redemptive plan for creation or to supplant humanity as His image-bearers.

I disagree with the first sentence. There are ways in which AI can affirm and/or diminish our humanity. The issue here seems to be a perceived threat that AI will replace humans or be considered equal to them. I like the hopeful confidence in God for the future, but the previous statement suggests that there is already fear about this. The ambiguity in the statements is unsettling; it suggests that AI is a dangerous unknown. Yes, it is true that we cannot know what AI will become, but why not call on Christians to seize this opportunity for the kingdom? Why not proclaim that AI can help us co-create with God? Let me reiterate one of the verses mentioned below:

For God has not given us a spirit of fear and timidity, but of power, love, and self-discipline.

Genesis 1; Isaiah 42:8; Romans 1:20-21; 5:2; Ephesians 1:4-6; 2 Timothy 1:7-9; Revelation 5:9-10

For an alternative but still evolving Christian position on this matter please check out the Christian Transhumanist Association affirmation.

AI Ethics: Evaluating Google’s Social Impact

I have noticed a shift in corporate America recently. Moving away from the unapologetic defense of profit-making of the late 20th century, corporations are now asking deeper questions about the purpose of their enterprises. Consider how businesses presented themselves in this year’s Super Bowl broadcast. Verizon focused on first responders’ life-saving work, Microsoft touted its video-game platform for children with disabilities, and the Washington Post paid tribute to recently killed journalists. Big business wants to convince us it also has a big heart.

This does not mean that profit is secondary. As long as there is a stock market and earnings expectations drive corporate goals, short-term profit will continue to be king. Yet it is important to acknowledge the change. Companies realize that customers want more than a good bargain; they want to do business with organizations that are doing meaningful work. Moreover, companies are realizing they are not just autonomous entities but social actors that must contribute to the common good.

Google AI Research Review of 2018

Following this trend, Google’s AI Review of 2018 focused on how its research is impacting the world for good. The story is impressive, as its reach encompasses philanthropy, the environment, and technological breakthroughs. I encourage you to look at it for yourself.

Let me just highlight a few developments worth mentioning here. The first is the development of AI ethical principles, in which Google promises to develop technologies that are beneficial to society, tested for safety, and accountable to people. The company also promises to keep privacy embedded in design and to uphold the highest levels of scientific excellence while limiting harmful potential uses of its technology. For the latter, it promises to apply a cost-benefit analysis to ensure the risks of harmful uses do not outweigh the benefits.

In the last section, the company explicitly states applications it will not pursue. These include weapons, surveillance, and applications that violate accepted international law and human rights. That last point, I must admit, is quite vague and open to interpretation. With that said, the fact that Google published these principles shows that it recognizes its responsibility to uphold the common good.

Furthermore, the company showcases some interesting examples of using AI for social good. These include work on flood and earthquake prediction, identifying whales and diseased cassava plants, and even detecting exoplanets. The company has also allocated over $25M for external social-impact work through its foundation.

A Good Start But is that Enough?

In a previous blog, I mentioned how the private sector drives the US AI strategy. This approach raises concerns, as profit-seeking ventures may not always align their research goals with the public good. However, it is encouraging to see a leader in the industry doing serious ethical reflection and engaging in social work.

Yet Google must do more to fully recognize the role its technologies play in our global society. For one, Google must do a better job of understanding its impact on local economies. While its technologies empower small businesses and individual actors in remote areas, they also upend existing industries and established enterprises. Is Google paying attention to those on the losing side of its technologies? If so, how does it plan to help them reinvent themselves?

Furthermore, if Google is to exemplify a business with a social conscience, does it have appropriate feedback channels for its billions of customers? Given its size and near-monopoly of the search engine industry, can it really hold itself accountable? The company should not only strive for transparency in its practices but also listen to its customers more attentively.

Technology, Business and Society

The relationship between business and society is being revolutionized by the advance of emerging technologies such as AI. In the example of Google, being the search engine leader makes it the primary knowledge gatekeeper for the Internet. As humans come to rely more on the Internet as an extension of their brains, this places Google in a role equivalent to the one religious, educational, and political leaders played in the past. This is too important a function to be centralized in one profit-making organization.

To be fair, this was not a compulsory process. Google did not take over our brains by force; we willingly gave it this power. Therefore, change is contingent not only on the corporation but on its customers. From a practical standpoint, that may mean resisting the urge to “google things.” We might try different search engines or even crack open a book to find the information we need. We should also seek alternative ways of finding things on the Internet, such as resource sites, social platforms, and other options. These efforts may at first make life more complicated, but over the long run they will safeguard us from an inordinate dependence on a single company.

The technologies developed by Google are a blessing (albeit one that we pay for) to the world. We should leverage them for human flourishing regardless of the company’s intended focus. For that to happen, we the people must take stock of our own interaction with them. The more responsibly we use them, the more we ensure that they remain what they are really meant to be: gifts to humanity.

Abraham and the Sacrifice of Isaac: How Travelers Re-visits the Biblical Story Through AI Theology

Abraham took the wood of the burnt offering and laid it on his son Isaac, and he himself carried the fire and the knife. So the two of them walked on together. Isaac said to his father Abraham, “Father!” And he said, “Here I am, my son.” He said, “The fire and the wood are here, but where is the lamb for a burnt offering?” Abraham said, “God himself will provide the lamb for a burnt offering, my son.” So the two of them walked on together. Gen 22:6-8.

One of the most powerful narratives of the Hebrew Bible is the story of Abraham’s near-sacrifice of his son Isaac. The book of Genesis tells us that God, after promising and delivering a son to Abraham in his old age, one day asked him to sacrifice the boy as an offering back to God. The absurdity of the request is matched by Abraham’s unquestioning obedience.

As he takes Isaac to the place of sacrifice, the young boy asks where the animal to be sacrificed is. In a prophetic statement, the father of the Hebrew faith simply answers: “God will provide.” The agony and suspense continue as Abraham binds his son and raises the knife to end the young boy’s life. That is when God intervenes, relieving Abraham of the unbearable task of killing his own son. It was a gruesome test, but Abraham passed. Thinking of my 16-month-old boy, I cannot imagine ever coming this close.

Episode 3 of the third season of Travelers tells a story with too many parallels to the Old Testament narrative to ignore. For those not familiar with the show, let me give you a quick overview of its plot. Travelers are people from a distant apocalyptic future whose consciousnesses travel to the present and take over the bodies of those who are about to die. They work in teams to complete missions meant to change the course of history. They take their orders from an advanced AI that can work out the best alternatives to improve the future. They refer to it simply as the Director.

In episode three, Mack (Eric McCormack), the team leader, tries to retrace his team’s last mission. Waking up with a gap in his memory, he suspects that his team altered his memory for some unknown reason. The episode unfolds as Mack pieces together the events of the previous day.

Misguided Good Deeds Lead to Unintended Consequences

In season one, we met an adopted boy named Aleksander. As the team executes its mission, the historian (the team member who knows the future) throws a curveball by sending them to save this little boy. He knew the boy was in an abusive situation and therefore created the intervention to save him. It seems like a noble action, except that this was not in the Director’s plan. Travelers are trained to never deviate from the plan. Therefore, even though they are able to rescue the boy, the implications of this deviation are unknown.

Fast-forward to episode three of season three, where we eventually find out the team’s mission from the previous day. The Director, knowing that Aleksander was destined to become a psychopath, tasks the team with eliminating him. A reckless good deed, operating outside the Director’s plan, had created bigger problems for the future. It was time to course-correct.

Mack, the team leader, takes the responsibility on himself. They pick up the boy at his current foster home, and their fears are confirmed. The boy was growing reclusive and disturbingly violent toward animals, early signs of a troubled adulthood to come.

Mack takes the boy to deserted woods with the intention of killing him. While walking in the woods, they find a struggling coyote facing a painful end of life. Mack ends its misery with a shot.

Next, they share a meal around the fire, cooking a rabbit the boy had caught earlier. There, they have a heart-to-heart conversation in which Mack shows the boy that he is seen, known, and understood. Mack becomes the father Aleksander never had. All of this only heightens the tension, as these tender moments contrast with Mack’s dreadful mission. Just like Abraham, Mack agonizes over his assignment while also showing love to the troubled boy.

As the climactic scene begins, they dig a hole to bury the dead coyote. The altar is ready for the sacrifice. Once they place the dead animal in the designated spot, Aleksander asks to say a prayer. As the boy prays in memory of the dead animal, Mack steps back reluctantly. He pulls out his gun as he sees the designated time of the boy’s death approaching. He points the gun and prepares to pull the trigger. At that moment, just like Yahweh in Genesis, the Director intervenes. Instead of an angel, the AI speaks through the boy: “mission abort.” Just like Isaac, the boy is spared.

Later in the episode, Mack’s teammates inform him that the Director had a change of plans. Apparently, Mack’s heart-to-heart conversation with the boy changed his future. The assurance of love from a father figure was enough to halt a future of serial murders.

New Avenues of Meaning

There is so much to unpack in this episode that I can hardly do it justice in a few paragraphs. As stated above, the episode draws some clear parallels with the biblical story but does not retell it outright. I honestly wonder whether the writers had the biblical story in mind when formulating the episode. Yet using the biblical story as a backdrop allows us to reflect more deeply on the many themes addressed here.

One underlying theme throughout the show is the conflict between the AI’s plan and human action. Oftentimes, travelers struggle to follow through with the mission as conditions on the ground change. At its core, the show explores the philosophical debate between free will and determinism.

Classical theism resolves this tension on the side of determinism, often referred to as “God’s will.” In its extreme forms, this thinking paints the picture of a detached God whose plans and will cannot be altered. Hebrew Scripture does not always support this script, as it contains examples where Yahweh changed his mind. Yet this idea of God’s immutability made its way into Western Christian thought early on and has persisted to our time. For many, God is the absolute ruler who controls every aspect of the universe while also demanding blind loyalty from humans.

For the most part, the same is true of the relationship between the travelers and the Director. Mack, especially, is often the one who claims and demands unquestioning loyalty to the Director’s mission. This episode illustrates this well, as Mack showed complete willingness to carry out the unthinkable mission of killing the young boy.

Yet the emphasis of the episode is not on Mack’s loyalty but on how, by showing love to the boy, he altered the boy’s future. Mack’s actions changed the Director’s plans. It suggests that human action can bend the will of a greater being (or, in this case, a technology).

Sacrificial love can alter divine plans.

Hence, this well-written science-fiction series challenges us to rethink our relationship with the divine. Is it possible to move the heart of God, or is our job simply to accept his will? Do humans have real power to shape their future, or is it all predetermined by a higher power?

What do you think?

What Do Beyoncé, AI and Democracy All Have in Common?

Answer: they are all mentioned in Superposition magazine.

As some of you may not know, besides keeping this blog I am also writing for the newly launched Superposition magazine. I am excited to be part of this new endeavor that aims to broaden the conversation on faith and technology. Superposition is the world’s first theologically informed digital tech magazine. We create new content and aggregate content from around the web to create reality-changing observations. Beyond technology, the magazine also explores society, culture, the environment, and many other topics.

Here is a short list of what you can find there:

Is AI a Threat to Democracy? This article explores author Yuval Harari’s recent essay explaining why he believes so.

Your Ancestors May Define You More Than You Think. Genes may be passing on more than hair color, height, and nose size. Could they also be transmitting memories? The research on this topic could have ground-breaking implications for how we understand humanity.

Robots Must Learn to Think Like Little Children. Here we learn about Toco, a robot built to learn as a child.

God uses Technology to Redeem the World. In this five-part article, Rev Chris Benek shows us how the God who created our natural world can also use technology to fulfill His plans.

How Alexa and Siri May Be Making You A Bigger Jerk. Did you ask your digital assistant nicely for directions today? How about saying “please” and “thank you”? Some thoughts to ponder here.

Beyoncé Uses AI to Teach Compassionate Eating. I didn’t think I would see the words “Beyoncé” and “AI” in the same sentence, but here it is. The singer is using machine learning to help fans come up with vegan diet plans.

These are just a few examples of the content being created there. Please be sure to sign up and comment on the site. Let’s get the word out about this new tool that helps us make sense of this ever-changing world.

Intelligence for Leadership: AI in Decision Making

Kings have advisors, presidents have cabinets, CEOs have boards, and TV show hosts have writers; every public figure relies on a cadre of trusted advisors for making decisions. Whenever crucial decisions are made, an army of astute specialists has spent countless hours researching, studying, and preparing to communicate the most essential information to the decision maker. Without them, leaders would lead by instinct and would likely often get it wrong. What if these advisors were not only human but also AI-enabled decision systems?

This is what the Modeling Religion Project is doing. Developed by a group of scientists, philosophers, and religion scholars, the project consists of a computer simulation populated by “virtual agents” that mimic the characteristics and beliefs of a country’s population. The model is then fed evidence-based social-science tendencies of human behavior under certain conditions. For example, a sudden influx of foreigners may increase the probability of hostility from native groups.

Using this initial state as a baseline, the researchers experiment with different scenarios to evaluate the effects of changes in the environment. Levers for change include adding newcomers, investing in education, and changing economic policy, among other factors. The model then simulates outcomes from the changes, allowing scholars and policymakers to understand the effects of decisions or trends in a nation. While the work focuses on religion, its findings have broad implications for other social sciences such as psychology, sociology, and political science. Among other goals, one of their primary aims is to better understand what factors can affect the level of religious violence. The government of Norway is about to put the models to the test, hoping to use their insights to better integrate refugees into the nation.
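To make the idea concrete, the scenario-comparison loop described above can be sketched as a toy agent-based simulation. This is purely illustrative: the `Agent` class, the hostility rule, and the `simulate` function are my own hypothetical simplifications for this blog, not the project’s actual model.

```python
import random

class Agent:
    """A virtual agent with a native/newcomer flag and an education level."""
    def __init__(self, native: bool, education: float):
        self.native = native
        self.education = education  # 0.0-1.0; higher education dampens hostility

    def hostility_probability(self, foreign_share: float) -> float:
        # Hypothetical rule: natives grow more hostile as the share of
        # newcomers rises, moderated by their education level.
        if not self.native:
            return 0.0
        return max(0.0, foreign_share * (1.0 - self.education))

def simulate(n_agents=1000, foreign_share=0.1, education=0.5, seed=42):
    """Return the fraction of hostile agents under one scenario."""
    rng = random.Random(seed)  # fixed seed so scenarios are comparable
    agents = [Agent(native=rng.random() > foreign_share, education=education)
              for _ in range(n_agents)]
    hostile = sum(1 for a in agents
                  if rng.random() < a.hostility_probability(foreign_share))
    return hostile / n_agents

# Pull one policy lever: compare baseline education investment with a higher one.
baseline = simulate(education=0.3)
with_education = simulate(education=0.8)
print(baseline, with_education)
```

Running both scenarios against the same baseline population is what lets a policymaker compare levers; the real models simply do this with far richer agents and empirically grounded behavioral rules.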

Certainly, a project of such ambition is not without difficulties. For one, there are ethical questions about who gets to decide what is a good outcome and what is not. For example, one of the models provides recommendations on how to speed up secularization in a nation. Is secularization a good path for every nation? Clearly, while the models raise interesting insights, using them in the real world may prove much harder than the complex math involved in building them. Furthermore, irresponsible use can quickly lead to social engineering.

While hesitation is welcome, the demand for effective decision making will only increase. Leaders from the household to the national level face increasingly complex scenarios. Consider the dilemma parents face when planning for their children’s education, knowing that the future job market will be different from today’s. Consider organizational leaders working on 5-10 year plans when markets can change in minutes, demand can change in days, and societies in the course of a few years. Hence, the need for AI-generated insights will only grow with time.

What are we to make of AI-enabled advice for public policy? First, it is important to note that this is already a reality in large multinational corporations. In recent years, companies have developed intelligent systems that seek to extract insights from the reams of customer data available to these organizations. These systems may not rise to the sophistication of the project above, but soon they will. Harnessing the power of data can provide an invaluable perspective in the decision-making process. As complexity increases, intelligent systems can distill large amounts of data into digestible information that can make the difference between becoming a market leader and descending into irrelevance. This will be true for governments as well: missing data insights can be the difference between staying in power and losing the next election.

With this said, it is important to highlight that AI-enabled simulations will never be a replacement for wise decision making. The best models can only perform as well as the data they contain. They represent a picture of the past but are not suitable for anticipating black-swan events. Moreover, leaders may have picked up signals of change that have not yet been detected by data collection systems. In such cases, human intuition should not be discarded in favor of computer calculations. There is also an issue of trust. Even if computers perform better than humans in making decisions, can humans trust them beyond their own capabilities? Would we trust an AI judge to sentence a person to death?

Here, as in other situations, it helps to draw the contrast between automation and augmentation. Using AI systems to enhance human decision making is much better than using them to replace it altogether. While such systems will become increasingly necessary in public policy, they should never replace popularly elected human decision-makers.