Preparing for a Post-COVID-19 AI-driven Workplace

Are we ready for the change this pandemic will bring? Are we ready to face the accelerating threats to the workplace that were once thought to be years away? What can this pandemic teach us about staying useful in a future where AI will continue to re-arrange the workplace?

Sign of Things to Come

As the coronavirus spread rapidly through Japan in March, workers at a distribution center in Sugito faced a sudden spike in demand for hygiene products such as masks, hand sanitizer, gloves, and medical protective supplies. To reduce the danger of contamination, Paltac, the company that operates the center, is pursuing a revolutionary idea. It is not just considering but already beginning to deploy robots in place of human workers, at least until social distancing is no longer needed.

“Robots are just one tool for adapting to the new normal,” says Will Knight, a senior writer for WIRED, in an article evaluating the pandemic situation in Japan and how Japanese manufacturers are dealing with social distancing.

Some in the AI community see this as an unmatched opportunity to adapt and deliver, especially in medical robotics; had these technologies been pursued more thoroughly beforehand, maybe the present outcome wouldn’t have been so catastrophic. Science journalist Matt Simon illustrates this in his article, reassuring us that “evermore sophisticated robots and AI are augmenting human workers.”

The greater question is whether AI will replace or augment workers. Our future may depend on the answer.

A Bigger Threat than a Virus?

In 2016 Harvard scientists released a study on “12 risks that threaten human civilization.” In it, they not only outline the risks but also show ways we can prepare for them. Prophetically, the study cites a global pandemic at the top of the list, correctly classifying it as “more likely than assumed.” They could not have been more correct, and we now wish global leaders had heeded their warnings.

What other risks does the study warn us about? The scientists consider Artificial Intelligence one of the major, yet least understood, global risks. In spite of its limitless potential, there is a grave risk of such intelligence developing into something uncontrollable.

For the authors, it is not just a matter of probability but a question of when. The study predicts that AI could copy and then surpass human proficiency in speed and performance, bringing significant economic disruption. While current technology is nowhere near this scenario, the mere possibility of this predicament should cause us to pause for reflection.

Yet, as this pandemic has shown, the greatest threats are also the biggest opportunities for doing good in the world.

Learning to Face the Unknown

Our very survival depends on our ability to stay awake, to adjust to new ideas, to remain vigilant and to face the challenge of change.

Martin Luther King Jr.

Change is inevitable. Whether it comes through exquisite new technology or a deadly virus, it will eventually disrupt our well-ordered routines. The difference is in how we position ourselves to face these adversities alongside those we love and are responsible for. If humans can correctly predict tragedies, how much more can we do to avoid them!

The key to the future is the ability to adapt in the face of change. People who only react to what is “predictable” will be replaced by robots or algorithms. For example, as a teacher, I studied many things but never thought that I would have to become a YouTuber. No one ever taught me about the systems needed to reach students over the internet. I was not trained for this! Yet, because of this pandemic, I now have to teach by creating videos and uploading them online. I am learning to become a worker of the future.

May we use this quarantined year as an incubating opportunity to prepare ourselves for a world that will not be the same.  May we train ourselves to endure challenges, and also to see the opportunities that lie in plain sight. This is my hope and prayer for all of you.

STAY HOME, STAY SAFE, STAY SANE


Prophetic Models: Why Are Governments Telling Us to Stay Home?


In a recent blog, I talked about the surprising upside of this crisis. In this one, I explore the prophetic role of models in advising governments on how to respond to the Covid-19 virus. While predictive modeling is already a vital part of decision making in both the private and public sectors, this crisis revealed just how impactful these models can be. They are no longer just predictive but prophetic models that can alter the future of a nation.

Don’t believe it? A few weeks ago, the British government was considering an alternative approach to lead the nation through this pandemic. The idea was to allow the virus to spread, instructing only the 70+ population and those with symptoms to isolate themselves. In this scenario, there would be no school closures, no working from home, and no cancellation of mass gatherings.

The rationale was that by allowing the virus to spread, enough people would recover from it to develop herd immunity. That is, when enough people have either been vaccinated or contracted and recovered from the virus, they would protect those who had not, breaking the chain of transmission.
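
For readers who want to attach a number to “enough people,” the usual back-of-the-envelope figure is the herd immunity threshold, 1 - 1/R0, where R0 is the virus’s basic reproduction number. The tiny sketch below uses an illustrative R0 value of my own choosing, not a figure from any report.

```python
# Rough herd immunity threshold: the share of people who must be immune
# before each infected person passes the virus to fewer than one other.
R0 = 2.5                      # illustrative reproduction number (assumed)
threshold = 1 - 1 / R0
print(f"Roughly {threshold:.0%} of the population would need immunity")
```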

Yet, on March 16th, in a stunning reversal, Boris Johnson had a change of heart. He quickly joined other world leaders in calling for a suppression strategy, instructing all citizens to practice social distancing. Why? In short, the government learned that as much as 24% of the population would need hospitalization, which would quickly overwhelm the nation’s healthcare system. That insight came from a revealing report by Imperial College London, which would later find its way across the ocean to inform American policy on the virus response as well.

Prophetic Models that Changed it All

Intrigued by this news and having built predictive models myself for the last 6 years, I decided to go to the source for further investigation. I was interested not only in the findings but also in examining the researchers’ methodology and other insights overlooked by articles reporting on it. In the next few paragraphs, I summarize my investigation, paying particular attention to the forecasting model on which the report was based.

The model analyzed the predicted outcomes of two strategies: suppression and mitigation. The first is the more aggressive strategy adopted by places like China and many European countries, suppressing virus transmission through rigorous social distancing in order to reverse epidemic growth. The second aims only to slow growth, mitigating its worst effects by quarantining only at-risk populations and those presenting symptoms.

The model went on to analyze the impact of combinations of NPIs (non-pharmaceutical interventions) by governments. Mitigation focused on applying case isolation, home quarantine, and social distancing only for those who are 70+. This strategy would cut fatalities in half but still result in over 1 million deaths in the US and exceed ICU capacity roughly eight times over at peak demand! This option was therefore deemed unacceptable.

Estimating the Impact of Suppression


While saving lives remains at the forefront, the focus turned to a scenario in which the country’s healthcare system could withstand the increase in cases during the peak infection phase. The model simulations found that a combination of 1) general population social distancing; 2) school and university closures; 3) home quarantine; and 4) case isolation of those infected was the best alternative to achieve this goal. These measures would have to be in place for a sustained period of time.

How long? The scientists ran a few scenarios, and the most feasible one triggered social distancing and school and university closures by threshold. That is, the policies go into effect once the number of new ICU cases reaches a set level, such as 60, 100, or 200 per week. This scenario assumes the triggering would stay in place for a period of two years or until a vaccine is developed. The numbers below for the suppression scenario assume a trigger of 400 ICU cases per week.

Strategy       Estimated Deaths (GB)   Estimated Deaths (US)
Do Nothing     510K                    2.2M
Mitigation     255K                    1.1M
Suppression    39K                     168K

Estimated fatalities based on the report “Impact of non-pharmaceutical interventions (NPIs) to reduce COVID-19 mortality and healthcare demand.”

As shown in the table above, the model predicts significant decreases in fatalities. In doing so, it makes a clear case as to why these governments should apply these drastic measures.

Certainly, the model’s scope is limited. It does not look into the economic impact of these shutdowns or the indirect fatalities among those who cannot access an overwhelmed healthcare system. It also does not take into account every mitigating factor that could accelerate or hinder the spread of the virus. With that said, it is robust enough to make a compelling case for action. That is all we can expect from a good prophetic model.
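
To make the idea of threshold-triggered suppression concrete, here is a minimal sketch of the kind of simulation involved. To be clear, this is not the Imperial College model, which is a far richer individual-based simulation; it is a toy SIR model where every parameter, including the population size, transmission rates, severity rate, and trigger, is an assumption I made up for illustration. Social distancing simply switches on whenever new severe cases cross a weekly threshold.

```python
# Toy SIR model with threshold-triggered social distancing.
# Illustrative only: all parameters below are invented for this sketch
# and are not taken from the Imperial College report.

N = 66_000_000          # population, roughly Great Britain
beta_free = 0.30        # daily transmission rate with no interventions (assumed)
beta_distanced = 0.12   # reduced rate while suppression is active (assumed)
gamma = 0.10            # daily recovery rate (assumed)
severe_rate = 0.05      # fraction of new cases needing ICU-level care (assumed)
trigger = 400           # weekly severe cases that switch suppression on

def simulate(days=730, use_trigger=True):
    S, I, R = N - 100.0, 100.0, 0.0
    distancing = False
    weekly_severe = 0.0
    for day in range(days):
        beta = beta_distanced if (use_trigger and distancing) else beta_free
        new_infections = beta * S * I / N
        recoveries = gamma * I
        S -= new_infections
        I += new_infections - recoveries
        R += recoveries
        weekly_severe += severe_rate * new_infections
        if day % 7 == 6:              # re-evaluate the trigger once a week
            distancing = weekly_severe >= trigger
            weekly_severe = 0.0
    return R                          # total ever infected, a rough proxy for burden

print(f"Do nothing : {simulate(use_trigger=False):,.0f} total infections")
print(f"Triggered  : {simulate(use_trigger=True):,.0f} total infections")
```

Even a toy like this captures the qualitative point: the same population, facing the same virus, ends up on very different trajectories depending on whether and when interventions are triggered.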

Models as Modern Prophets

The Hebrew scriptures tell us of prophets who warned their communities of impending doom. One good example is the short book of Jonah. In the story, God summons Jonah to speak to the city of Nineveh. After a few detours, one of which involved spending some time in a fish’s belly, the prophet arrives at the city. There he delivers a simple message: “Change your ways or face destruction.”

Just like a modern forecasting model, the prophet was showing the people of Nineveh a picture of the future if they remained in their ways. He was giving them the “do nothing” or “keep the status quo” scenario. He also offered an alternative scenario, in which they changed their minds and opted for righteous living. In this scenario, the city would save lives and retain its prosperity.

To the prophet’s own chagrin, the city actually listened. They changed their ways and therefore altered their future. They weighed the consequence of doing nothing against that of changing and opted for the latter. Hence, the story tells us that God spared the city that heeded the prophet’s forecast of impending doom.

The model described above played a similar role of warning about the cost of doing nothing. Yet, instead of fiery sermons, it used the mighty power of numbers. As modern prophets, the scientists from Imperial College warned leaders in Britain and the US of a collapsed healthcare system and mounting casualties. The prophetic model vividly described the cost of doing nothing and also painted a picture of an altered future. In the model’s assessment, action was imperative and thankfully, these political leaders, like those of Nineveh, listened.

What if the Model is wrong?

Just like in the case of Nineveh, the risk of listening is that the initial prediction may end up looking wrong all along. In fact, the good prophet does their job best when they challenge decision makers to prove their numbers wrong. The point is not to forecast outcomes accurately, even though that is an important part of a rigorous model. The main point is to paint a picture of an undesirable future believable enough to move people to action.

Successful prophetic models are not the ones that predict accurately but the ones that lead the community towards a better future. Furthermore, the mounting casualties of the last weeks give proof that this pandemic is not just your average cold. I can’t even imagine how much worse they would have been without the concerted global effort of social distancing. Yet, when this crisis is over, many will look at the diminished numbers and wonder if it was all worth it.

This is where I can point to this imperfect but rigorous model to say that the policies put in place will likely save 2 million lives in the US and 500 thousand in Britain!

If that is not a good outcome, I don’t know what is.

Political Intelligence: Empowering Voters With Data

In this blog, I introduce political intelligence. The 2016 presidential election demonstrated the power of data to influence elections in unexpected ways. The Cambridge Analytica scandal revealed the growing trend of data-driven campaign strategies for targeting voters with customized political ads. This practice, which started with marketing in the corporate world, has made its way well into political campaigning. Now politicians can optimize their hard-won campaign dollars into efforts that are more likely to turn out the vote for them.

It would be great if such optimization also diminished the need for money in politics, but that has not been the case. I digress. No, this blog is not about using data for better campaign targeting. Quite the opposite: it is about flipping the table and delivering intelligence not to candidates but to the electorate. What if elections were less about self-promotion and frivolous attacks and more about a true debate on ideas and practices informed by data? What if our electoral process were more intelligent?

Defining Intelligence

Let me take a step back. It is important to better define intelligence here. For the purposes of this blog, I do not seek a general definition of intelligence that could apply to animals, humans and computers. Instead, I want to borrow the term from the business world. In business analytics, we often use the term business intelligence (BI), which encompasses all the ways in which you present data to business audiences. That is, it includes graphs, dashboards, reports and tables. If Data Science is focused on analyzing and producing insights, BI focuses on delivering those insights in a way that non-technical audiences can understand and act on – hence the term actionable insights.

Developing actionable insights has become the goal of every analytics department. Data is not good unless it can guide the direction of a business. It requires a fair amount of distilling, curating, formatting and presenting in a way that helps executives make better decisions.

BI also requires triaging and prioritization. With the abundance of data flowing into a business, it is crucial to decide what to track and what not to track. Picking the right measures is an important step in the process of developing actionable insights. What does not get measured, does not get the attention of management.

Learning from this parallel, intelligence here means the process of selecting, tracking and analyzing key measures that give an overall picture of the challenges and opportunities a business faces. Could the same be applied to government?

Introducing Political Intelligence

Clearly the government sector could learn a lot from the business sector about how to better run its operations. Yet, what I propose here is not about simply producing dashboards for lawmakers. Instead, I propose we leverage this practice to help voters vote more intelligently.

Of the many ills of political campaigning, and there are many, one of the worst is the absence of reliable data. It is not that this data does not exist; it is simply not part of the political discourse. This is a tragic outcome, as data is the best way to provide evidence-based feedback on policies.

Let me break this down a bit. When candidates are not busy attacking each other’s characters, the best we get today is a debate of ideas. Political parties put out platforms based on principles and ideologies. This is helpful but falls short for at least two reasons. First, untested ideas do very little against real-world, entrenched, long-lasting social problems. They may sound appealing at face value but often produce many unintended consequences. Second, a debate that lives on abstract discussions will most often lose voter engagement. Voters are looking for solutions to pocketbook issues and have little patience for drawn-out discussions of political philosophy.

I believe data remediates both of these problems: 1) it grounds political proposals against real-world results; 2) it quantifies complex issues into an easily digestible set of metrics. For example, for those concerned with poverty in their community, measures such as the unemployment rate, the share of households living under the poverty line, and the percentage of the population on government assistance are good starting points for an informed discussion.
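
To illustrate what such a community scorecard could look like in practice, here is a minimal sketch. The metric names and every figure in it are invented placeholders; in a real application they would come from public sources such as census and labor statistics.

```python
# Minimal sketch of a local "political intelligence" scorecard.
# All figures below are invented placeholders, not real statistics.

community_metrics = {
    "unemployment_rate_pct":        {"baseline": 6.1, "current": 5.4, "goal": 4.5},
    "households_below_poverty_pct": {"baseline": 14.2, "current": 13.8, "goal": 11.0},
    "pop_on_assistance_pct":        {"baseline": 18.0, "current": 17.1, "goal": 15.0},
}

def progress_report(metrics):
    """Print how far each metric has moved from its baseline toward its goal."""
    for name, m in metrics.items():
        total_gap = m["baseline"] - m["goal"]
        closed = m["baseline"] - m["current"]
        pct_closed = 100 * closed / total_gap if total_gap else 0
        print(f"{name:32s} {m['current']:5.1f}  ({pct_closed:4.0f}% of the way to goal)")

progress_report(community_metrics)
```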

A Good Start

Will this settle all political disputes? Absolutely not. Political intelligence does not mend political divides, but it does improve political discourse. When we force those running for office to explain their proposals with data, we gain a way to check on their progress. This is much better than what we have now, where elected officials spin the results of their efforts to maximize their accomplishments and minimize their shortcomings. On the other side, opponents inflate government failures while overstating their own ability to solve them. Both sides portray a distorted view of reality. Agreeing on a set of metrics up front gives us a tool against political propaganda that keeps both major parties in the US accountable to the same standard.

Finally, a robust discussion on metrics can further localize a conversation that is often dominated by national issues. Most of politics is local, but that rarely comes through in the prevailing media sources. A vigorous debate on the metrics that matter for a particular community allows it to focus on local issues rather than transposing national ones onto their community. This improves the political discourse of local elections and allows voters to engage more deeply in the government decisions that directly affect their lives.

Empowering voters with political intelligence: that is what I call a good start to reform our democracy.

AI Impact on Jobs: How can Workers Prepare?

In a previous blog, I explored the main findings from a recent MIT paper on AI’s impact on work. In this blog, I want to offer practical advice for workers worried about the future of their jobs. There is a lot of automation anxiety surrounding the topic, which often gets amplified by sensationalist click-bait articles. Fortunately, the research from the MIT-IBM Watson paper offers sensible and detailed enough information to help workers take charge of their careers. Here are the main highlights.

From Jobs to Tasks

The first important learning from the report is to think of your job as a group of tasks rather than a homogenous unit. The average worker performs a wide range of tasks, from communicating issues and solving problems to selling ideas and evaluating others. If you have never thought of your job this way, here is a suggestion: track what you do in one work day. Pay attention to the different tasks you perform and write down the time it takes to complete them. Be specific, with descriptions that go beyond “checking emails.” When you read and write emails, you are trying to accomplish something. What is it?

Once you do that for a few days, you start getting a clearer picture of your job as a collection of tasks. The next step, then, is to evaluate each task by asking the following questions:

  • Which tasks bring the most value to the organization you are working for?
  • Which tasks are repetitive enough to be automated?
  • Which tasks can be delegated or passed on to others on your team?
  • Which tasks do you do best, and with which ones do you struggle the most?
  • Which tasks do you enjoy the most?

As you evaluate your job through these questions, you can better understand not just how good a fit it is for you as an individual but also how automation may transform your work in the coming years. As machine learning becomes more prevalent, the repetitive parts of your job are the most likely to disappear.
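
For those who prefer to keep this log in a structured form, here is a minimal sketch of a task inventory. The tasks, times, and ratings are purely illustrative; adapt the fields to your own job.

```python
# A tiny task inventory for one work day. Entries are illustrative examples,
# not prescriptions; adjust the fields to match your own work.

from dataclasses import dataclass

@dataclass
class Task:
    name: str
    minutes: int          # time spent today
    value: int            # 1 (low) to 5 (high) value to the organization
    repetitive: bool      # candidate for automation?
    enjoy: bool           # do you enjoy doing it?

day_log = [
    Task("Resolve customer escalation", 90, 5, repetitive=False, enjoy=True),
    Task("Compile weekly status report", 60, 2, repetitive=True,  enjoy=False),
    Task("Review teammate's proposal",   45, 4, repetitive=False, enjoy=True),
    Task("Re-key data between systems",  75, 1, repetitive=True,  enjoy=False),
]

automatable = [t for t in day_log if t.repetitive]
share = sum(t.minutes for t in automatable) / sum(t.minutes for t in day_log)
print(f"Repetitive (automatable) share of the day: {share:.0%}")
for t in sorted(day_log, key=lambda t: t.value, reverse=True):
    print(f"{t.name:32s} value={t.value}  minutes={t.minutes}")
```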

Tasks on the rise

The MIT-IBM Watson report analyzed job listings over a period of ten years and identified groups of tasks that were in higher demand than others. That is, as jobs change, certain tasks become more valuable, either because they cannot be replaced by machine learning or because there is a growing need for them.

According to the research, tasks in ascendance are:

  • Administrative
  • Design
  • Industry Knowledge
  • Personal care
  • Service

Note that the last two tend to be part of lower-wage jobs. Personal care is an interesting one (e.g., hair stylists, in-home nurses). Even with the growing trend in automation, we still cannot teach a robot to cut hair. That soft but precise touch of the human hand is very difficult to replicate, at least for now.

How much of your job consists of any of the tasks above?

Tasks at risk

On the flip side, some tasks are in decline. Some of this decline is particular to more mature economies like the US, while some of it is more general, driven by the widespread adoption of technology. The tasks highlighted in the report are:

  • Media
  • Writing
  • Manufacturing
  • Production

The last two are no surprise, as the trend of offshoring or mechanizing these tasks has been underway for decades. The first two, however, are new. As technologies and platforms abound, these tasks become accessible to a wider pool of workers, which makes them less valuable in the workplace. Just think about what it took to broadcast a video in the past and what it takes to do it now. In the era of YouTube, garage productions abound, sometimes with almost as much quality as studio productions.

If your job consists mostly of these tasks, beware.

Occupational Shifts

While looking at tasks is important, overall occupations are also being impacted. As AI adoption increases, some occupations either disappear or get absorbed into others. Of those, it is worth noting that production and clerical jobs are in decline. Just as an anecdote, I have noticed how my workplace is relying less and less on administrative assistants. The main result is that everybody now does their own scheduling, which used to be the domain of administrative jobs.

Occupations in ascendance are those in IT, health care and education/training. The latter is interesting and indicative of a larger trend. As new applications emerge, there is a constant need for training and education. This benefits both traditional educational institutions and entrepreneurial start-ups. Just consider the rise of micro-degrees and coding schools emerging in cities all over the country.

Learning as a Skill

In short, learning is imperative. What that means is that every worker, regardless of occupation or wage level will be required to learn new tasks or skills. Long gone are the days where someone would learn all their professional knowledge in college and then use it for a lifetime career. Continual training is the order of the day for anyone hoping to stay competitive in the workplace.

I am not talking just about pursuing formal training paths through academic degrees or even training courses. I am talking about learning as a skill and discipline for your day-to-day job. Whether from successes or mistakes, we must always look for learning opportunities. Sometimes, the learning can come through research on an emerging topic. Other times, it can happen through observing others do something well. There are many avenues for learning new skills or information for those who are willing to look for them.

Do you have a training plan for your career? Maybe it is time to consider one.

AI Impact on Work: Latest Research Spells Both Hope and Concern

In a recent blog I explored McKinsey’s report on AI’s impact on women in the workplace. As the hype around AI subsides, a clearer picture emerges. The “robots coming to replace humans” picture fades. Instead, the more realistic picture is one where AI automates distinct tasks, changing the nature of occupations rather than replacing them entirely. Failure to understand this important distinction will continue to fuel misinformation on this topic.

A Novel Approach

In this blog, I want to highlight another source that paints this more nuanced picture. The MIT-IBM Watson AI Lab released a paper last week entitled “The Future of Work: How New Technologies Are Transforming Tasks.” The paper is significant because of its innovative methodology. It is the first research to use NLP to extract and analyze information on tasks from 170 million online job postings published from 2010-2017 in the US market. In doing so, it is able to detect changes not only in volume but also in the job descriptions themselves. This allows a view of how aspects of the same job change over time.
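
The paper’s exact pipeline is not reproduced here, but the basic idea of mining tasks from posting text can be sketched with a simple keyword approach. The task vocabulary and sample postings below are invented for illustration; a real system would rely on a much richer task taxonomy and far more sophisticated NLP.

```python
# Toy illustration of mining task mentions from job-posting text.
# The task vocabulary and sample postings are made up for this example.

from collections import Counter

TASK_VOCAB = {
    "data analysis":    ["analyze data", "data analysis", "build reports"],
    "scheduling":       ["schedule meetings", "manage calendars"],
    "customer service": ["support customers", "handle inquiries"],
    "design":           ["design mockups", "create wireframes"],
}

postings = [
    "Analyst needed to analyze data, build reports and support customers.",
    "Assistant to schedule meetings, manage calendars and handle inquiries.",
    "Designer to create wireframes and design mockups for our mobile app.",
]

def extract_tasks(text):
    """Return the set of task labels whose phrases appear in the posting."""
    text = text.lower()
    return {label for label, phrases in TASK_VOCAB.items()
            if any(p in text for p in phrases)}

counts = Counter(task for post in postings for task in extract_tasks(post))
for task, n in counts.most_common():
    print(f"{task:18s} appears in {n} posting(s)")
```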

The research also sheds light on how these changes translate into dollars. By looking at compensation, the paper can analyze how job tasks are valued in the labor market and how this will impact workers for years to come. Hence, they can test whether changes are eroding or increasing income for workers.

With that said, this approach also carries some limitations. Because they look only at job postings, the researchers have no visibility into jobs whose holders stayed put for the period analyzed. The study also relies on proposed job descriptions, which often do not materialize in reality. A job posting represents a manager’s idea of the job at that time, yet circumstances around the position can change significantly, making the actual job look very different. Still, imperfect data is better than no data, and this research opens new avenues of understanding into this complex phenomenon.

Good News: Change is Gradual

For the period analyzed, the researchers conclude that the shift in jobs has been gradual. Machine learning is not re-shaping jobs at breakneck speed as some may have believed. Instead, it is slowly replacing tasks within occupations over time. On average, the worker was asked to perform 3.7 fewer tasks in 2017 as compared to 2010. As the researchers dug further, they also found a correlation between suitability to machine learning and faster replacement. Tasks more suitable to machine learning show a larger average replacement, at around 4.3 tasks, while those not suited for machine learning show an average of 2.9. In general, jobs are becoming leaner and machine learning is making the process go faster.

This is good news but not necessarily reassuring. As more industries adopt AI strategies, the rate of task replacement should increase. There is little reason to believe what we saw in 2010-2017 will repeat itself in the next 10 years. What the data demonstrate is that the replacement of tasks has indeed started. What is not clear is how fast it will accelerate in the coming years. The issue is not the change itself but the speed at which it happens. Fast change can be destabilizing for workers and it is something that requires monitoring.

Bad News: Job Inequality Increased

If the pace is gradual, its impact has been uneven. Mid-income jobs are the worst hit by task replacement. As machine learning automates tasks, top-tier middle-income jobs move toward the top income bracket while jobs at the bottom of the middle move toward the low-income bracket. That is, occupations in the lower tier of the middle become more accessible to workers with less education or technical training. At the top, machine learning replaces simpler tasks and those jobs now require more specialized skills.

This movement is translating into changes in income. Middle-income jobs have seen an overall erosion in compensation while both high- and low-income jobs have experienced an increase. This polarizing trend is concerning and worthy of further study and action.

For now, the impact of AI on the job market is widening the gap in the monetary value of different tasks. The aggregate effect is that jobs with more valued tasks will see increases while those with less valued tasks will either become more scarce or pay less. Business and government leaders must heed these warnings as they spell future trouble for businesses and political unrest for societies.

What about workers? How can these findings help workers navigate the emerging changes in the workplace? That is the topic for my next blog.

AI and Women at the Workplace: A Sensible Guide for 2030

Even a few years in, the media craze over AI shows no sign of subsiding. The topic continues to fascinate, scare and befuddle the public. In this environment, the McKinsey report on AI and women at the workplace is a refreshing exception. Instead of relying on hyperbole, it projects meaningful but realistic impacts of AI on jobs. Instead of a robot apocalypse, it speaks of a gradual shifting of tasks to AI-enabled applications. This is not to say that the impact will be negligible. McKinsey still projects that between 40 and 160 million women worldwide may need to transition into new careers by 2030. This is not a small number when the low end roughly equals the population of California! Yet, it is still much less than other predictions.

Impact on Women

So why do a report based on one gender? Simply put, AI-driven automation will affect men and women differently in the workplace as they tend to cluster in different occupations. For example, women are overly represented in clerical and service-oriented occupations, all of which are bound to be greatly impacted by automation. Conversely, women are well-represented in health-care related occupations which are bound to grow in the period forecasted. These facts alone will assure that genders will experience AI impact differently.

There are however, other factors impacting women beyond occupation clusters. Social norms often make it harder for women to make transitions. They have less time to pursue training or search for employment because they spend much more time than men on house work and child care. They also have lower access to digital technology and participation in STEM fields than men. That is why initiatives that empower girls to pursue study in these areas are so important and needed in our time.

The main point of the report is not that automation will simply destroy jobs but that AI will move opportunity between occupations and geographies. The issue is less an inevitable trend that will wipe out sources of livelihood than one that will require either geographic mobility or skill training. Those willing to make these changes are more likely to survive and thrive in this shifting workplace environment.

What Can You Do?

For women, it is important to keep your career prospects open. Are you currently working in an occupation that could face automation? How can you know? Well, think about the tasks you perform each day. Could they be easily learned and repeated by a machine? While all of our jobs have portions we wish were automated, if that applies to 60-80% of your job description, then you need to re-think your line of work. Look for careers that are bound to grow. That may not mean simply learning to code; also consider professions that require a human touch and cannot easily be replaced by machines. Also, an openness to moving geographically can greatly improve job prospects.

For parents of young girls, it is important to expose them to STEM subjects early on. A parent’s encouragement can go a long way in helping them consider those areas as future career options. That does not mean they will become computer programmers. However, early positive experiences with these subjects will give them the confidence later in life to pursue technical occupations if they so choose. A big challenge with STEM is the impression that it is hard, intimidating and exclusive to boys. The earlier we break these damaging paradigms, the more we expand job opportunities for the women of the future.

Finally, for the men who are concerned about the future job prospects of their female loved ones, the best advice is to get more involved in housework and child rearing. In short, if you care about the future of women in the workplace, change a diaper today and go wash those dishes. The more men participate in unpaid housework and child rearing, the more women will be empowered to pursue more promising career paths.

AI Theology Goes to Brazil – Part 3: Holistic Mission and Technology

In Part 1 and Part 2, I discussed my first two talks in Brazil. In this blog, I will describe my third talk, given to a group of local pastors. Expecting to give them some new ideas, I left the meeting with new avenues of reflection. Hoping to teach, I ended up becoming the student.

The talk happened at a monthly local pastors’ breakfast. I was elated to learn that they were already meeting regularly to discuss local ministry needs and coordinate actions. It is uncommon to see local religious leaders cooperating on anything. In this case, I could see evidence of joint projects and fruitful dialogue between church leaders in spite of the many different denominations represented.

I gave my opening remarks challenging them to see technology more as an enabler than a threat to their efforts. I spoke of ways in which the churches could participate in furthering the democratization of technology through education, awareness and political involvement. In other words, I wanted them to think of their work beyond the traditional bounds of preaching and Bible teaching. I then opened the floor for questions and comments.

The discussion inevitably steered towards the impact of social media. In that vein, I encouraged them to both model and provide guidance to their communities on healthy ways to use those technologies. I was also surprised to learn about the prevalence of smartphone ownership in Brazil and other areas of the world. There, I learned that there are now more smartphones in Brazil than people! Also, one of the pastors, who had recently returned from India, spoke of a village that lacked indoor plumbing and electricity but where people could still connect to a common solar panel to charge their phones! This discussion only confirmed my belief that technology, now more than ever, can be an enabler for human development.

I was also glad to hear about local efforts to improve computer literacy in poor areas of the city. Pr Marco Antonio dos Santos, a Methodist pastor and seminary coordinator, told me about his church’s community center. It offered classes in music, homework tutoring and computers. I was so impressed that I asked to visit the center the next day. The two-story building already reflected the vision I was proposing to the pastors: it hosted a community center on the first floor, open on weekdays, and a church sanctuary on the second floor for weekend services. The building was located in a poor neighborhood of the city. In my short visit, I downloaded the Scratch software to enable them to start teaching code to the children.

Here is a picture from my visit to the community center. All the way to the right is the Methodist pastor right beside my dad. To my left are two mentors and one of the children served by the center.

What would happen if more pastors had a holistic approach to ministry like Sombra e Agua Fresca (the community center’s name, which means “shade and fresh water”)? I left my visit convinced that, even with all their shortcomings, churches continue to be a tremendous force for good in the world. For those interested in learning more, click here. The site is all in Portuguese but it gives you a good idea of the diverse work this church is doing in the city.

Reflecting on what I learned brought me back to my time at Fuller, where I learned about Holistic Mission. While many have heard about Latin American liberation theology, few know about the evangelical variant called missao integral. This theology and ministry philosophy transcended the traditional North American divide between evangelism and social action. Instead of taking sides in this useless binary discussion, Christian leaders in Latin America decided it was about “both and.” That is, Christian mission should always happen in a context of social action. There is no point in sharing the gospel with the hungry without feeding them first. Also, there is no point in building charities that never empower the poor to break out of their cycle of poverty. Pastor Marco Antonio’s work is a vivid example of this theology. On weekdays, the center fleshes out what is preached upstairs on Sunday. In this way, the church runs a holistic mission in a place of tremendous need.

It is unfortunate how in the US, mainline churches will focus on social action while evangelical churches focus more on evangelism. Of course, there are a lot of exceptions but that tends to be the case for the most part. Maybe this is where we can learn from the Latin American church. As this relates to technology, Holistic Mission means teaching the poor to code while sharing the gospel with them. These two go hand in hand.

What if more churches had computer labs in their buildings?

Will AI Deliver? 4 Factors That Can Derail the AI Revolution

2019 is well under way and the attention on Artificial Intelligence persists. An AI revolution is already underway (for a very informative deep dive on this topic, check this infographic). Businesses are investing heavily in the field and staffing up their data science departments, governments are releasing AI strategy plans, and the media continues to churn out fantastic stories about the possibilities of AI. Beyond that, discussions about ethics and appropriate uses are starting to emerge. Even the Vatican is paying attention. What could go wrong?

If history is any guide, we have been through an AI spring before only to see it fall into an AI winter. In the mid-80s, the funding dried up, government programs were shut down and the attention moved on to other emerging technologies. While we live in a different reality, a more globalized and connected world, there is no guarantee that the promise of AI will come to pass.

In this blog, as an industry insider and diligent observer, I describe factors that could derail the AI revolution. In short, here are the things that could turn our AI spring into another bitter winter.

#1: Business Projects Fail to Deliver

Honestly, this is a reality I face every day at work. As a professional deeply involved in a massive AI project, I am often confronted with the thought: what if it fails? Just like me, hundreds of professionals are currently paving the way for an AI future that promises intelligent processes, better customer service and increased profits. So far, Wall Street has believed the claim that AI can unlock business value. Investors and C-level executives have poured in money to staff up, upgrade systems and, many times, re-configure organizations to usher in an AI revolution in their businesses.

What is rarely talked about are the enormous challenges project teams face in transforming these AI promises into reality. Most organizations are simply not ready for these changes. Furthermore, as the public becomes more aware of privacy breaches, the pressure to be innovative while also addressing ethical concerns is daunting. Even when those are resolved, there is the challenge of buy-in from internal lines of business that can perceive these solutions as an existential threat.

The significant technical, political and operational challenges of innovation all conspire to undermine or dilute the benefits promised by the AI revolution. Wall Street may be buying into the promise now, but its patience is short. If AI projects fail to deliver concrete results in a timely manner, investment could dry up and progress in this area could stall significantly. If it fails in the private sector, I can easily see this cascading into the public sector as well.

#2: Consumers Reject AI-enabled Solutions

Now let’s say the many AI projects happening across industries are technical and organizational successes. Let’s say they translate into compelling products and services that are then offered to consumers all over the globe. What if not enough of them adopt these new products or services? Just think about the Segway that was going to revolutionize mobility years ago but never really took off as a mass product. Adoption always carries the risk inherent in the unpredictable human factor.

Furthermore, accidents and business scandals can have a compounding effect on the public opinion of these products. One cannot deny that the driverless car pedestrian fatality last year in Arizona is already impacting the technology’s development, possibly delaying launches by months if not years. Concerns with privacy threaten to erode the public’s confidence in business usage of data, which could in turn further hamper AI innovation.

Technology is advancing at breakneck speed. Can humans keep up and, even more importantly, do they care to? For the techno-capitalist, the human need for devices is endless. They spread this message through clever marketing campaigns. Yet, is everyone really buying it? AI-enabled products and services can only succeed if they are able to demonstrate true value in the eyes of the consumer. Otherwise, even technical marvels are destined to fail.

#3: Governments Restrict AI Innovation Through Regulation

Another factor that could derail the AI revolution is government regulation. It is important to note that not all regulation is harmful to innovation. Yet ill-devised, politically motivated, reactive regulation often is. This could come from both sides of the political spectrum. Progressive politicians could enact burdensome taxes on the use of AI technology, discouraging its development. Conservatives could create laws siding with large business interests that choke innovation at the start-up level.

Emerging technologies like AI are currently not a front-and-center topic in elections. This can be a blessing in disguise, as it is probably too early to create a regulatory apparatus for these technologies. Yet, that does not mean government should not be involved. Virtuous policy should bring different stakeholders to the table by creating an open process of discussion and learning.

With that said, governments all over the world face the challenge of striking the delicate balance between intervention and neglect. Doing this well is very context-dependent and does not lend itself to sweeping generalizations. Yet, it must start with engagement. It was shocking to see US lawmakers’ ignorance of social media business models demonstrated in recent hearings. That gives me little hope they would be able to grasp the complexities of AI technologies. Hopefully, a new batch of more tech-savvy lawmakers will help.

#4: Nationalism Hampers Global Collaboration

The development of AI thrives on an ecosystem where researchers from different countries can freely share ideas and best practices. A free Internet, a relatively peaceful global order and a willingness to share knowledge have so far ensured the flourishing of research through collaboration. As Nationalist movements rise, this ecosystem is in danger of collapsing.

Another concerning scenario is a geopolitical AI race for dominance. While this can incentivize individual nations to focus their efforts on research, it can also undermine the spread and enhancement of AI technology applications. A true AI revolution should not be limited to one nation or even one region. Instead, it must benefit the whole planet, lest it become another tool for colonialism.

Regional initiatives like the European Union’s AI strategy are a good start, while the ambitious Chinese AI strategy is concerning. The jury is still out on the recently released US strategy. What is missing is an overall vision of global collaboration in the field. This will most likely come from inter-governmental organizations like the UN. Until then, nationalist pursuits in AI will continue to challenge global collaboration.

Conclusion

This is all I could come up with: a robust but by no means exhaustive list of what could go wrong. Can you think of other factors? Above all, the deeper question is, if these factors derail the AI revolution, would that necessarily be tragic? In some ways, it could delay important discoveries and breakthroughs. However, slowing down AI development may not necessarily be bad while conversations on ethics and public awareness are still in their beginning stages.

In the history of technology, we often overestimate a technology’s impact in the short run but underestimate it in the long run. What if AI ushers in no revolution but instead a long process of gradual improvements? Maybe that’s a better scenario than the fast change promised to business investors by ambitious entrepreneurs.

ERLC Statement on AI: An Annotated Christian Response

Recently, the ERLC (Ethics and Religious Liberty Commission) released a statement on AI. This was a laudable step, as the Southern Baptists became the first large Christian denomination to address this issue directly. While this is a start, the document fell short on many fronts. From the start, the list of signers had very few technologists and scientists.

In this blog, I show both the original statement and my comments in red. Judge for yourself but my first impression is that we have a lot of work ahead of us.

Article 1: Image of God

We affirm that God created each human being in His image with intrinsic and equal worth, dignity, and moral agency, distinct from all creation, and that humanity’s creativity is intended to reflect God’s creative pattern.

Ok, that’s a good start by locating creativity as God’s gift and affirming the dignity of all humanity. Yet, the statement exalts human dignity at the expense of creation. Because AI, and technology in general, is about humanity’s relationship to creation, setting the foundation right is important. It is not enough to highlight human primacy; one must clearly state our relationship with the rest of creation.

We deny that any part of creation, including any form of technology, should ever be used to usurp or subvert the dominion and stewardship which has been entrusted solely to humanity by God; nor should technology be assigned a level of human identity, worth, dignity, or moral agency.

Are we afraid of a robot takeover of humanity? Here it would have been helpful to start distinguishing between general and narrow AI. The former is still decades away while the latter is already here and poised to change every facet of our lives. The challenge of narrow AI is not one of usurping our dominion and stewardship but of possibly leading us to forget our humanity. They seem to be addressing general AI. Maybe including more technologists in the mix would have helped.

Genesis 1:26-28; 5:1-2; Isaiah 43:6-7; Jeremiah 1:5; John 13:34; Colossians 1:16; 3:10; Ephesians 4:24

Article 2: AI as Technology

We affirm that the development of AI is a demonstration of the unique creative abilities of human beings. When AI is employed in accordance with God’s moral will, it is an example of man’s obedience to the divine command to steward creation and to honor Him. We believe in innovation for the glory of God, the sake of human flourishing, and the love of neighbor. While we acknowledge the reality of the Fall and its consequences on human nature and human innovation, technology can be used in society to uphold human dignity. As a part of our God-given creative nature, human beings should develop and harness technology in ways that lead to greater flourishing and the alleviation of human suffering. 

Yes, well done! This affirmation is where Christianity needs to be. We are for human flourishing and the alleviation of suffering. We celebrate and support Technology’s role in these God-given missions.

We deny that the use of AI is morally neutral. It is not worthy of man’s hope, worship, or love. Since the Lord Jesus alone can atone for sin and reconcile humanity to its Creator, technology such as AI cannot fulfill humanity’s ultimate needs. We further deny the goodness and benefit of any application of AI that devalues or degrades the dignity and worth of another human being.

I guess what they mean here is that technology is a limited means and cannot ultimately be our salvation. I see here a veiled critique of Transhumanism. Fair enough; the Christian message should both celebrate AI’s potential and warn of its limitations, lest we start giving it undue worth.

Genesis 2:25; Exodus 20:3; 31:1-11; Proverbs 16:4; Matthew 22:37-40; Romans 3:23

Article 3: Relationship of AI & Humanity

We affirm the use of AI to inform and aid human reasoning and moral decision-making because it is a tool that excels at processing data and making determinations, which often mimics or exceeds human ability. While AI excels in data-based computation, technology is incapable of possessing the capacity for moral agency or responsibility.

This statement seems to suggest the positive role AI can play in augmentation rather than replacement. I am just not sure that was ever in question.

We deny that humans can or should cede our moral accountability or responsibilities to any form of AI that will ever be created. Only humanity will be judged by God on the basis of our actions and that of the tools we create. While technology can be created with a moral use in view, it is not a moral agent. Humans alone bear the responsibility for moral decision making.

While hard to argue against this statement at face value, it overlooks the complexities of a world that is becoming increasingly reliant on algorithms. The issue is not that we are offloading moral decisions to algorithms but that they are capturing moral decisions of many humans at once. This reality is not addressed by simply stating human moral responsibility. This needs improvement.

Romans 2:6-8; Galatians 5:19-21; 2 Peter 1:5-8; 1 John 2:1

Article 4: Medicine

We affirm that AI-related advances in medical technologies are expressions of God’s common grace through and for people created in His image and that these advances will increase our capacity to provide enhanced medical diagnostics and therapeutic interventions as we seek to care for all people. These advances should be guided by basic principles of medical ethics, including beneficence, nonmaleficence, autonomy, and justice, which are all consistent with the biblical principle of loving our neighbor.

Yes, tying AI-related medical advances with the great commandment is a great start.

We deny that death and disease—effects of the Fall—can ultimately be eradicated apart from Jesus Christ. Utilitarian applications regarding healthcare distribution should not override the dignity of human life. Furthermore, we reject the materialist and consequentialist worldview that understands medical applications of AI as a means of improving, changing, or completing human beings.

Similar to my comment on article 3, this one misses the complexity of the issue. How do you draw the line between enhancement and cure? Also, isn’t the effort to extend life an effective form of alleviating suffering? These issues do not lend themselves to simple propositions but instead require more nuanced analysis and prayerful consideration.

Matthew 5:45; John 11:25-26; 1 Corinthians 15:55-57; Galatians 6:2; Philippians 2:4​

Article 5: Bias

We affirm that, as a tool created by humans, AI will be inherently subject to bias and that these biases must be accounted for, minimized, or removed through continual human oversight and discretion. AI should be designed and used in such ways that treat all human beings as having equal worth and dignity. AI should be utilized as a tool to identify and eliminate bias inherent in human decision-making.

Bias is inherent in the data fed into machine learning models. Work on the data, monitor the outputs, and evaluate the results, and you can diminish bias. Directing AI to promote equal worth is a good first step.
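
As one concrete illustration of what “monitor the outputs” can mean, here is a minimal sketch that checks whether a model’s positive predictions are spread evenly across groups. The records and group labels are invented; a real bias audit would involve many more metrics and a good deal of domain judgment.

```python
# Toy bias check: compare a model's positive-prediction rate across groups.
# The records below are invented; a real audit would use actual model output.

records = [
    {"group": "A", "predicted_approved": True},
    {"group": "A", "predicted_approved": True},
    {"group": "A", "predicted_approved": False},
    {"group": "B", "predicted_approved": True},
    {"group": "B", "predicted_approved": False},
    {"group": "B", "predicted_approved": False},
]

def approval_rate_by_group(rows):
    """Return the fraction of positive predictions for each group."""
    rates = {}
    for g in {r["group"] for r in rows}:
        subset = [r for r in rows if r["group"] == g]
        rates[g] = sum(r["predicted_approved"] for r in subset) / len(subset)
    return rates

rates = approval_rate_by_group(records)
for group, rate in sorted(rates.items()):
    print(f"Group {group}: {rate:.0%} approved")
# A large gap between groups is a signal to revisit the data and the model.
print(f"Gap: {max(rates.values()) - min(rates.values()):.0%}")
```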

We deny that AI should be designed or used in ways that violate the fundamental principle of human dignity for all people. Neither should AI be used in ways that reinforce or further any ideology or agenda, seeking to subjugate human autonomy under the power of the state.

What about being used by large corporations? This was a glaring absence here.

Micah 6:8; John 13:34; Galatians 3:28-29; 5:13-14; Philippians 2:3-4; Romans 12:10

Article 6: Sexuality

We affirm the goodness of God’s design for human sexuality which prescribes the sexual union to be an exclusive relationship between a man and a woman in the lifelong covenant of marriage.

This seems like a round-about way to use the topic of AI for fighting culture wars. Why include this here? Or, why not talk about how AI can help people find their mates and even help marriages? Please revise or remove!

We deny that the pursuit of sexual pleasure is a justification for the development or use of AI, and we condemn the objectification of humans that results from employing AI for sexual purposes. AI should not intrude upon or substitute for the biblical expression of sexuality between a husband and wife according to God’s design for human marriage. 

Ok, I guess this is a condemnation of AI porn. Again, it seems misplaced on this list and could have been treated in alternative ways. Yes, AI can further increase objectification of humans and that is a problem. I am just not sure that this is such a key issue to be in a statement of AI. Again, more nuance and technical insight would have helped.

Genesis 1:26-29; 2:18-25; Matthew 5:27-30; 1 Thess 4:3-4

Article 7: Work

We affirm that work is part of God’s plan for human beings participating in the cultivation and stewardship of creation. The divine pattern is one of labor and rest in healthy proportion to each other. Our view of work should not be confined to commercial activity; it must also include the many ways that human beings serve each other through their efforts. AI can be used in ways that aid our work or allow us to make fuller use of our gifts. The church has a Spirit-empowered responsibility to help care for those who lose jobs and to encourage individuals, communities, employers, and governments to find ways to invest in the development of human beings and continue making vocational contributions to our lives together.

This is a long, confusing and unhelpful statement. It seems to be addressing the challenge of job loss that AI can bring without really doing it directly. It gives a vague description of the church’s role in helping individuals find work but does not address the economic structures that create job loss. It simply misses the point and does not add much to the conversation. Please revise!

We deny that human worth and dignity is reducible to an individual’s economic contributions to society alone. Humanity should not use AI and other technological innovations as a reason to move toward lives of pure leisure even if greater social wealth creates such possibilities.

Another confusing and unhelpful statement. Are we making work holy? What does “lives of pure leisure” mean? Is this a veiled attack against Universal Basic Income? I am confused. Throw it out and start it over!

Genesis 1:27; 2:5; 2:15; Isaiah 65:21-24; Romans 12:6-8; Ephesians 4:11-16

Article 8: Data & Privacy

We affirm that privacy and personal property are intertwined individual rights and choices that should not be violated by governments, corporations, nation-states, and other groups, even in the pursuit of the common good. While God knows all things, it is neither wise nor obligatory to have every detail of one’s life open to society.

Another statement that needs more clarification. Treating personal data as private property is a start. However, people are giving their data away willingly. What does privacy mean in a digital world? This statement suggests the drafters’ unfamiliarity with the issues at hand. Again, technical support is needed.

We deny the manipulative and coercive uses of data and AI in ways that are inconsistent with the love of God and love of neighbor. Data collection practices should conform to ethical guidelines that uphold the dignity of all people. We further deny that consent, even informed consent, although requisite, is the only necessary ethical standard for the collection, manipulation, or exploitation of personal data—individually or in the aggregate. AI should not be employed in ways that distort truth through the use of generative applications. Data should not be mishandled, misused, or abused for sinful purposes to reinforce bias, strengthen the powerful, or demean the weak.

The intention here is good and it is in the right direction. It also makes progress in pointing out that consent is not the only necessary guideline and in its condemnation of abusive uses. I would like it to be more specific in its call to corporations, governments and even the church.

Exodus 20:15; Psalm 147:5; Isaiah 40:13-14; Matthew 10:16; Galatians 6:2; Hebrews 4:12-13; 1 John 1:7

Article 9: Security

We affirm that AI has legitimate applications in policing, intelligence, surveillance, investigation, and other uses supporting the government’s responsibility to respect human rights, to protect and preserve human life, and to pursue justice in a flourishing society.

We deny that AI should be employed for safety and security applications in ways that seek to dehumanize, depersonalize, or harm our fellow human beings. We condemn the use of AI to suppress free expression or other basic human rights granted by God to all human beings.

Good intentions with poor execution. The affirmation and denial are contradictory. If you affirm that AI can be used for policing, you have to concede that it will be used to harm some. Is using AI to suppress hate speech acceptable? I am not sure how this adds any insight to the conversation. Please revise!

Romans 13:1-7; 1 Peter 2:13-14

Article 10: War

We affirm that the use of AI in warfare should be governed by love of neighbor and the principles of just war. The use of AI may mitigate the loss of human life, provide greater protection of non-combatants, and inform better policymaking. Any lethal action conducted or substantially enabled by AI must employ human oversight or review. All defense-related AI applications, such as underlying data and decision-making processes, must be subject to continual review by legitimate authorities. When these systems are deployed, human agents bear full moral responsibility for any actions taken by the system.

Surprisingly, this was better than the statement above. It upholds human responsibility but recognizes that AI, even in war, can have life-preserving aims. I would have liked a better definition of defense uses, yet that is somewhat implied in the principles of just war. I must say this is an area that needs more discussion and further consideration, but this is a good start.

We deny that human agency or moral culpability in war can be delegated to AI. No nation or group has the right to use AI to carry out genocide, terrorism, torture, or other war crimes.

I am glad to see the condemnation of torture here. Lately, I am not sure where evangelicals stand on this issue.

Genesis 4:10; Isaiah 1:16-17; Psalm 37:28; Matthew 5:44; 22:37-39; Romans 13:4

Article 11: Public Policy

We affirm that the fundamental purposes of government are to protect human beings from harm, punish those who do evil, uphold civil liberties, and to commend those who do good. The public has a role in shaping and crafting policies concerning the use of AI in society, and these decisions should not be left to those who develop these technologies or to governments to set norms.

The statement points in the right direction toward public oversight. I would have liked it to be bolder and clearer about the role of the church. It should also have addressed corporations more directly; that seems to be a blind spot in a few articles.

We deny that AI should be used by governments, corporations, or any entity to infringe upon God-given human rights. AI, even in a highly advanced state, should never be delegated the governing authority that has been granted by an all-sovereign God to human beings alone.

I am glad to see corporations finally mentioned in this document, which makes this a good start.

Romans 13:1-7; Acts 10:35; 1 Peter 2:13-14

Article 12: The Future of AI

We affirm that AI will continue to be developed in ways that we cannot currently imagine or understand, including AI that will far surpass many human abilities. God alone has the power to create life, and no future advancements in AI will usurp Him as the Creator of life. The church has a unique role in proclaiming human dignity for all and calling for the humane use of AI in all aspects of society.

Again, the distinction between narrow and general AI would have been helpful here. The statement seems to be addressing general AI. It also seems to give the impression that AI is threatening God. Where is that coming from? A more nuanced view of biology and technology would have been helpful here too; they seem to be jumbled together. Please revise!

We deny that AI will make us more or less human, or that AI will ever obtain a coequal level of worth, dignity, or value to image-bearers. Future advancements in AI will not ultimately fulfill our longings for a perfect world. While we are not able to comprehend or know the future, we do not fear what is to come because we know that God is omniscient and that nothing we create will be able to thwart His redemptive plan for creation or to supplant humanity as His image-bearers.

I disagree with the first sentence. There are ways in which AI can affirm and/or diminish our humanity. The issue here seems to be a perceived threat that AI will replace humans or be considered their equal. I like the hopeful confidence in God for the future, but the previous statement suggests that there is already fear about this. The ambiguity in the statements is unsettling; it suggests that AI is a dangerous unknown. Yes, it is true that we cannot know what it will become, but why not call Christians to seize this opportunity for the kingdom? Why not proclaim that AI can help us co-create with God? Let me reiterate one of the verses mentioned below:

For God has not given us a spirit of fear and timidity, but of power, love, and self-discipline.

Genesis 1; Isaiah 42:8; Romans 1:20-21; 5:2; Ephesians 1:4-6; 2 Timothy 1:7-9; Revelation 5:9-10

For an alternative but still evolving Christian position on this matter, please check out the Christian Transhumanist Association affirmation.

AI Ethics: Evaluating Google’s Social Impact

I have noticed a shift in corporate America recently. Moving away from the unapologetic defense of profit-making of the late 20th century, corporations are now asking deeper questions about the purpose of their enterprises. Consider how businesses presented themselves in this year’s Super Bowl broadcast: Verizon focused on first responders’ life-saving work, Microsoft touted its video-game platform for children with disabilities, and the Washington Post paid tribute to recently killed journalists. Big business wants to convince us it also has a big heart.

This does not mean that profit is secondary. As long as there is a stock market and earnings expectations drive corporate goals, short-term profit will continue to be king. Yet it is important to acknowledge the change. Companies realize that customers want more than a good bargain; they want to do business with organizations that are doing meaningful work. Moreover, companies are realizing they are not just autonomous entities but social actors that must contribute to the common good.

Google AI Research Review of 2018

Following this trend, Google’s AI research review of 2018 focused on how its research is impacting the world for good. The story is impressive, as its reach encompasses philanthropy, the environment, and technological breakthroughs. I encourage you to look at it for yourself.

Let me highlight a few developments that are worth mentioning here. The first is the publication of AI ethical principles. In them, Google promises to develop technologies that are beneficial to society, tested for safety, and accountable to people. The company also promises to keep privacy embedded in design and to uphold the highest levels of scientific excellence while limiting potentially harmful uses of its technology. For the latter, it promises to apply a cost-benefit analysis to ensure that the risks of harmful uses do not outweigh the benefits.

In the last section, the company explicitly states the applications it will not pursue. These include weapons, surveillance, and those that violate accepted international law and human rights. That last point, I must admit, is quite vague and open to interpretation. With that said, the fact that Google made these principles public shows that it recognizes its responsibility to uphold the common good.

Furthermore, the company showcases some interesting examples of using AI for social good. The examples include work on flood and earthquake prediction, identifying whales and diseased cassava plants, and even detecting exoplanets. The company has also allocated over $25M for external social impact work through its foundation.

A Good Start, but Is That Enough?

In a previous blog post, I mentioned how the private sector drives the US AI strategy. This approach definitely raises concerns, as for-profit ventures may not always align their research goals with the public good. However, it is encouraging to see a leader in the industry doing serious ethical reflection and engaging in social impact work.

Yet Google must do more to fully recognize the role its technologies play in our global society. For one, Google must do a better job of understanding its impact on local economies. While its technologies empower small businesses and individual actors in remote areas, they also upend existing industries and established enterprises. Is Google paying attention to those on the losing side of its technologies? If so, how is it planning to help them reinvent themselves?

Furthermore, if Google is to exemplify a business with a social conscience, does it have appropriate feedback channels for its billions of customers? Given its size and its monopoly of the search engine industry, can it really be held accountable on its own? The company should not only strive for transparency in its practices but also listen to its customers more attentively.

Technology, Business and Society

The relationship between business and society is being revolutionized by the advance of emerging technologies such as AI. In Google’s case, being the search engine leader makes it the primary knowledge gatekeeper for the Internet. As humans come to rely more on the Internet as an extension of their brains, this places Google in a role equivalent to the one religious, educational, and political leaders played in the past. This is too important a function to be centralized in one profit-making organization.

To be fair, this was not a compulsory process. Google did not take over our brains by force; we willingly gave it this power. Therefore, change is contingent not only on the corporation but also on its customers. From a practical standpoint, that may mean resisting the urge to “google things.” We might try different search engines or even crack open a book to find the information we need. We should also seek alternative ways of finding things on the Internet, such as resource sites, social platforms, and other options. These efforts may at first make life more complicated, but over the long run they will safeguard us from an inordinate dependence on a single company.

The technologies developed by Google are a blessing (albeit one that we pay for) to the world. We should leverage them for human flourishing regardless of the company’s intended focus. For that to happen, we, the people, must take stock of our own interactions with them. The more responsibly we use them, the more we ensure that they remain what they are really meant to be: gifts to humanity.