How Big Companies Can Be Hypocrites About Diversity

Can we trust that big companies are telling the truth, or are they being hypocrites? The human race is, in some ways, evolving and leaving discriminatory practices behind. Or at least some of us are trying to. And this is reflected in the market. More and more, companies around the world are being called out on big problems involving racism, social injustice, gender inequality, and even false advertising. But how can we know which changes are real and which are fake? From Ford Motors to Facebook, many companies talk the talk but do not walk the walk.

The rise of the Black Lives Matter protests is exposing society’s crooked and oppressive ways, bringing discussions about systemic and structural racism out into the open. It’s a problem that can’t be fixed with empty promises and window dressing. Solving deep problems isn’t easy; it is an “all hands on deck” situation. But ignoring these issues is no longer an option for companies around the world. That’s where the hypocrisy comes in.

Facebook, Amazon, Ford Motor Company, Spotify, and Google are a few examples of big companies that took a stand against racial inequality on social media. Most of them also donated money to the cause, publicly acknowledging that change has to happen. It is a start. But it means nothing if that change doesn’t happen inside the company itself.

Today I intend to take a closer look at Facebook’s and Amazon’s diversity policies and actions. You can draw your own conclusions.

“We stand against racism and in support of the Black community and all those fighting for equality and justice every single day.”

Facebook

Mark Zuckerberg wrote on his personal Facebook page: “To help in this fight, I know Facebook needs to do more to support equality and safety.” 

On its business page, Facebook lists some of the actions it is taking to fight inequality. But these mostly revolve around funding. Money is important, of course, but changes to the company’s own structure are ignored. Facebook also promised to build a more inclusive workforce by 2023, aiming for 50% of the workforce to come from underrepresented communities and working to double the number of Black and Latino employees in the same timeframe.

But in reality, according to the most recent Facebook Diversity Report, White people hold 41% of all roles, followed by Asians with 44%, Hispanics with 6.3%, Black people with 3.9%, and Native Americans with 0.4%. And even though it may seem that Asians benefit from this situation, White people hold 63% of leadership roles at Facebook, a reduction of only 10 percentage points since 2014. Can you see the difference between the promises and the ACTUAL reality?

Another problem Facebook employees point to is leadership opportunities. Even though the company has started hiring more people of color, it still doesn’t give them the opportunity to grow into more important roles. Former Facebook employees filed a complaint with the Equal Employment Opportunity Commission trying to bring justice to the community. Read more about this case here.

Another big company: Amazon.

Facial recognition technology and police. Hypocrisy or not?

Amazon is also investing in this type of messaging, having created a “Diversity and Inclusion” page on its website. It also tweeted about police abuse and the brutal treatment Black Americans are forced to live with. What Amazon didn’t expect is that this would backfire.

Amazon built and sold technology that supports police abuse of the Black population. In a 2018 study of Amazon’s Rekognition technology, the American Civil Liberties Union (ACLU) found that people of color were falsely matched at a high rate. Matt Cagle, an attorney for the ACLU of Northern California, called Amazon’s support for racial justice “utterly hypocritical.” Only in June 2020 did Amazon halt sales of this technology to police, for one year. And in May 2021, it extended the pause until further notice.

The ACLU acknowledges that Amazon’s halt is a start, but insists the US government has to “end its use by law enforcement entirely, regardless which company is selling it.” In previous posts, AI Theology has talked about bias in facial recognition and algorithmic injustice.

What about Amazon’s workforce?

Another problem Amazon faces is in its workforce. At first sight, White people occupy only 32% of the entire workforce. But that means little, since the best-paid jobs belong to them. Corporate employees are 47% White, 34% Asian, 7% Black, 7% Latino, 3% Multiracial, and 0.5% Native American. The numbers drop drastically among senior leaders, who are 70% White, 20% Asian, 3.8% Black, 3.9% Latino, 1.4% Multiracial, and 0.2% Native American. You can find this data at this link.

What these numbers show us is that minorities are underrepresented in Amazon’s leadership ranks, especially in the best-paid and most influential roles. We need to be alert when big companies say their roles are equally distributed. Sometimes the hypocrisy is there: the roles may be equal, but the pay isn’t.
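To make the gap concrete, here is a minimal Python sketch, using only the percentages quoted above, that compares each group’s share of Amazon’s corporate workforce with its share of senior leadership:

```python
# Illustrative only: ratios computed from the percentages quoted above.
# A ratio above 1.0 means a group is overrepresented in senior leadership
# relative to its share of corporate roles; below 1.0, underrepresented.
corporate = {"White": 47.0, "Asian": 34.0, "Black": 7.0,
             "Latino": 7.0, "Multiracial": 3.0, "Native American": 0.5}
leadership = {"White": 70.0, "Asian": 20.0, "Black": 3.8,
              "Latino": 3.9, "Multiracial": 1.4, "Native American": 0.2}

for group, share in corporate.items():
    ratio = leadership[group] / share
    print(f"{group}: {ratio:.2f}x corporate share in senior leadership")
```

Running it shows White employees at roughly 1.5x their corporate share in leadership, while Black and Latino employees sit near 0.55x. That is exactly the pattern the numbers above describe.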

What can you do about these big companies’ actions?

So if the companies aren’t practicing what they preach, how can we change that?

Numbers show that public pressure can spark change. We should learn not just to applaud well-crafted statements but to demand concrete actions, exposing hypocrisy. We need to call on large companies to address the structural racism that denies opportunities to capable and innovative people of color.

Consultant Monica Hawkins believes that executives struggle to raise diversity in senior management mostly because they don’t understand minorities. She believes leaders need to expand their social and business circles, since referrals are a key source of important hires, as she told Reuters.

Another approach companies could consider: instead of only making generic affirmations, they could put out campaigns recognizing their own flaws and challenges and what they are doing to change that reality. This type of action can not only improve a company’s reputation but also press other companies to change as well.

It’s also important that companies keep showing their workforce diversity numbers publicly. That way, we can keep track of changes and see whether they are actually working to improve from the inside.

In other words, does the company talk openly about inequalities? That’s nice. Does it donate to social justice organizations? Great. But it’s not enough, not anymore. Inequalities don’t exist just because of financial problems. For companies to thrive and stay alive in the future, they need to start creating an effective plan for changing their own reality.

3 Concerning Ways Israel Used AI Warfare in Gaza

We now have evidence that Israel used AI warfare in the conflict with Hamas. In this post, we’ll cover briefly the three main ways Israel used warfare AI and the ethical concerns they raise. Shifting from facial recognition posts, we now turn to this controversial topic.

Semi-Autonomous Machine Gun Robot

Photos surfaced on the Internet showing a machine-gun-mounted robot patrolling the Gaza wall. The intent is to deter Hamas militants from crossing the border and to deal with civil unrest. Just to be clear, these robots are not fully autonomous: a remote human controller must still make the ultimate striking decision.

Yet, they are more autonomous than drones and other remote-controlled weapons. They can respond to fire or use non-lethal force if challenged by enemy forces. They are also able to maneuver independently around obstacles.

The Israel Defense Forces (IDF) seeks to replace soldier patrols with semi-autonomous technologies like this. From its side, the benefits are clear: less risk to soldiers, cheaper patrolling power, and more efficient control of the border. It is part of a broader strategy of creating a smart border wall.

Less clear is how these Jaguars (as they are called) will do a better job of distinguishing enemy combatants from civilians.

US military Robot similar to the one used in Gaza (from Wikipedia)

Anti-Artillery Systems

Hamas’s primary tactic for attacking Israel is short-range rockets – lots of them. In the last conflict alone, Hamas launched 4,000 rockets at Israeli targets. Destroying them mid-air was a crucial but extremely difficult and costly task.

For decades, the IDF has improved its anti-ballistic capability in order to neutralize this threat. The most recent addition to this defense arsenal is using AI to better predict the trajectories of incoming missiles. By collecting a wealth of data from actual launches, the IDF can train better models. This allows it to use anti-missile projectiles sparingly, leaving alone those headed for uninhabited areas.

This strategy not only improves accuracy, which now stands at 90%, but also saves the IDF money. At $50K a pop, anti-missile projectiles must be used wisely. AI warfare is helping Israel save resources and, hopefully, some lives as well.
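A back-of-the-envelope calculation shows why selectivity matters. The rocket count and per-interceptor cost come from the figures above; the share of rockets headed for uninhabited areas is an assumed number, used purely for illustration:

```python
# Rough sketch: cost of intercepting every rocket vs. intercepting only
# those predicted to hit populated areas.
rockets = 4000                  # launched in the last conflict (from the text)
cost_per_interceptor = 50_000   # USD "a pop" (from the text)
threatening_share = 0.5         # ASSUMED share headed for populated areas

intercept_all = rockets * cost_per_interceptor
intercept_selective = int(rockets * threatening_share) * cost_per_interceptor
print(f"Intercept everything: ${intercept_all:,}")
print(f"Intercept selectively: ${intercept_selective:,}")
```

Under this assumption, trajectory prediction halves the interceptor bill from $200 million to $100 million, which is the economic logic behind training better models on launch data.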

Target Intelligence

The wealth of data was useful in other areas as well. IDF used intelligence to improve its targeted strikes in Gaza. Using 3-D models of the Gaza territory, they could focus on hitting weapons depots and missile bases. Furthermore, AI technology was employed for other facets of warfare. For example, military strategists used AI to ascertain the best routes for ground force invasions.

This was only possible because of the rich data coming from satellite imaging, surveillance cameras, and intercepted communications. As this plethora of information flowed in, pattern-finding algorithms were essential in translating it into actionable intelligence.

Employing AI technology clearly gave Israel a considerable military advantage. The casualty numbers on each side speak for themselves: 253 in Palestine versus 12 on the Israeli side. AI warfare was a winner for Israel.

With that said, wars are no longer won on the battlefield alone. Can AI do more than give one side an advantage? Could it actually diminish the human cost of war?

Photo by Clay Banks on Unsplash

Ethical Concerns with AI warfare

As I was writing this blog, I reached out to Christian ethicist and author of Robotic Persons, Dr. Josh Smith, for reactions. He sent me the following statement (edited for length):

“The greatest concern I have is that these defensive systems, which most AI weapons systems are, is that they become a necessary evil. Because every nation is a ‘threat’ we must have weapon systems to defend our liberty, and so on. The economic cost is high. Many children in the world die from a lack of clean water and pneumonia. Yet, we invest billions into AI for the sake of security.”

Dr Josh Smith

I could not agree more. As the case of the IDF illustrates, AI can do a lot to give a military advantage in a conflict. AI warfare is not just about killer robots but encompasses an ecosystem of applications that can improve effectiveness and efficiency. Yet, is that really the best use of AI?

Finally, as we have stated before, AI is best when employed for the flourishing of life. Can that happen in warfare? The jury is still out, but it is hard to reconcile the flourishing of life with an activity focused on death and destruction.

ERLC Statement on AI: An Annotated Christian Response

Recently, the ERLC (Ethics and Religious Liberty Commission) released a statement on AI. This was a laudable step, as the Southern Baptists became the first large Christian denomination to address this issue directly. While this is a start, the document fell short on many fronts. From the start, the list of signers had very few technologists and scientists.

In this blog, I show both the original statement and my comments in red. Judge for yourself, but my first impression is that we have a lot of work ahead of us.

Article 1: Image of God

We affirm that God created each human being in His image with intrinsic and equal worth, dignity, and moral agency, distinct from all creation, and that humanity’s creativity is intended to reflect God’s creative pattern.

Ok, that’s a good start, locating creativity as God’s gift and affirming the dignity of all humanity. Yet, the statement exalts human dignity at the expense of creation. Because AI, and technology in general, is about humanity’s relationship to creation, setting this foundation right is important. It is not enough to highlight human primacy; one must clearly state our relationship with the rest of creation.

We deny that any part of creation, including any form of technology, should ever be used to usurp or subvert the dominion and stewardship which has been entrusted solely to humanity by God; nor should technology be assigned a level of human identity, worth, dignity, or moral agency.

Are we afraid of a robot takeover of humanity? Here it would have been helpful to start distinguishing between general and narrow AI. The former is still decades away, while the latter is already here and poised to change every facet of our lives. The challenge of narrow AI is not one of usurping our dominion and stewardship but of possibly leading us to forget our humanity. The drafters seem to be addressing general AI. Maybe including more technologists in the mix would have helped.

Genesis 1:26-28; 5:1-2; Isaiah 43:6-7; Jeremiah 1:5; John 13:34; Colossians 1:16; 3:10; Ephesians 4:24

Article 2: AI as Technology

We affirm that the development of AI is a demonstration of the unique creative abilities of human beings. When AI is employed in accordance with God’s moral will, it is an example of man’s obedience to the divine command to steward creation and to honor Him. We believe in innovation for the glory of God, the sake of human flourishing, and the love of neighbor. While we acknowledge the reality of the Fall and its consequences on human nature and human innovation, technology can be used in society to uphold human dignity. As a part of our God-given creative nature, human beings should develop and harness technology in ways that lead to greater flourishing and the alleviation of human suffering. 

Yes, well done! This affirmation is where Christianity needs to be. We are for human flourishing and the alleviation of suffering. We celebrate and support Technology’s role in these God-given missions.

We deny that the use of AI is morally neutral. It is not worthy of man’s hope, worship, or love. Since the Lord Jesus alone can atone for sin and reconcile humanity to its Creator, technology such as AI cannot fulfill humanity’s ultimate needs. We further deny the goodness and benefit of any application of AI that devalues or degrades the dignity and worth of another human being.

I guess what they mean here is that technology is a limited means and cannot ultimately be our salvation. I see here a veiled critique of Transhumanism. Fair enough; the Christian message should celebrate AI’s potential but also warn of its limitations, lest we start giving it undue worth.

Genesis 2:25; Exodus 20:3; 31:1-11; Proverbs 16:4; Matthew 22:37-40; Romans 3:23

Article 3: Relationship of AI & Humanity

We affirm the use of AI to inform and aid human reasoning and moral decision-making because it is a tool that excels at processing data and making determinations, which often mimics or exceeds human ability. While AI excels in data-based computation, technology is incapable of possessing the capacity for moral agency or responsibility.

This statement seems to suggest the positive role AI can play in augmentation rather than replacement. I am just not sure that was ever in question.

We deny that humans can or should cede our moral accountability or responsibilities to any form of AI that will ever be created. Only humanity will be judged by God on the basis of our actions and that of the tools we create. While technology can be created with a moral use in view, it is not a moral agent. Humans alone bear the responsibility for moral decision making.

While hard to argue against this statement at face value, it overlooks the complexities of a world that is becoming increasingly reliant on algorithms. The issue is not that we are offloading moral decisions to algorithms but that they are capturing moral decisions of many humans at once. This reality is not addressed by simply stating human moral responsibility. This needs improvement.

Romans 2:6-8; Galatians 5:19-21; 2 Peter 1:5-8; 1 John 2:1

Article 4: Medicine

We affirm that AI-related advances in medical technologies are expressions of God’s common grace through and for people created in His image and that these advances will increase our capacity to provide enhanced medical diagnostics and therapeutic interventions as we seek to care for all people. These advances should be guided by basic principles of medical ethics, including beneficence, nonmaleficence, autonomy, and justice, which are all consistent with the biblical principle of loving our neighbor.

Yes, tying AI-related medical advances with the great commandment is a great start.

We deny that death and disease—effects of the Fall—can ultimately be eradicated apart from Jesus Christ. Utilitarian applications regarding healthcare distribution should not override the dignity of human life. Furthermore, we reject the materialist and consequentialist worldview that understands medical applications of AI as a means of improving, changing, or completing human beings.

Similar to my comment on article 3, this one misses the complexity of the issue. How do you draw the line between enhancement and cure? Also, isn’t the effort to extend life an effective form of alleviating suffering? These issues do not lend themselves to simple propositions but instead require more nuanced analysis and prayerful consideration.

Matthew 5:45; John 11:25-26; 1 Corinthians 15:55-57; Galatians 6:2; Philippians 2:4​

Article 5: Bias

We affirm that, as a tool created by humans, AI will be inherently subject to bias and that these biases must be accounted for, minimized, or removed through continual human oversight and discretion. AI should be designed and used in such ways that treat all human beings as having equal worth and dignity. AI should be utilized as a tool to identify and eliminate bias inherent in human decision-making.

Bias is inherent in the data fed into machine learning models. Work on the data, monitor the outputs, and evaluate the results, and you can diminish bias. Directing AI to promote equal worth is a good first step.
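The “monitor the outputs” step can be made concrete. Below is a generic Python sketch, not tied to any company’s actual system, of demographic parity difference: one common way to check whether a model’s positive predictions are skewed across groups.

```python
# Illustrative sketch: demographic parity difference, a simple way to
# "monitor the outputs" of a model for bias across groups.
def demographic_parity_diff(predictions, groups):
    """Difference between the highest and lowest positive-prediction
    rate across groups. 0.0 means equal rates; larger means more skew."""
    rates = {}
    for pred, group in zip(predictions, groups):
        n, pos = rates.get(group, (0, 0))
        rates[group] = (n + 1, pos + (1 if pred == 1 else 0))
    shares = [pos / n for n, pos in rates.values()]
    return max(shares) - min(shares)

# Toy example: group "a" gets a positive prediction 3/4 of the time,
# group "b" only 1/4 of the time.
preds  = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_diff(preds, groups))  # 0.5
```

A result of 0.0 means every group receives positive predictions at the same rate; in the toy example, the 0.5 gap would flag the model for a closer look at its data and design.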

We deny that AI should be designed or used in ways that violate the fundamental principle of human dignity for all people. Neither should AI be used in ways that reinforce or further any ideology or agenda, seeking to subjugate human autonomy under the power of the state.

What about its use by large corporations? That was a glaring omission here.

Micah 6:8; John 13:34; Galatians 3:28-29; 5:13-14; Philippians 2:3-4; Romans 12:10

Article 6: Sexuality

We affirm the goodness of God’s design for human sexuality which prescribes the sexual union to be an exclusive relationship between a man and a woman in the lifelong covenant of marriage.

This seems like a round-about way to use the topic of AI for fighting culture wars. Why include this here? Or, why not talk about how AI can help people find their mates and even help marriages? Please revise or remove!

We deny that the pursuit of sexual pleasure is a justification for the development or use of AI, and we condemn the objectification of humans that results from employing AI for sexual purposes. AI should not intrude upon or substitute for the biblical expression of sexuality between a husband and wife according to God’s design for human marriage. 

Ok, I guess this is a condemnation of AI porn. Again, it seems misplaced on this list and could have been treated in other ways. Yes, AI can further increase the objectification of humans, and that is a problem. I am just not sure this is such a key issue that it belongs in a statement on AI. Again, more nuance and technical insight would have helped.

Genesis 1:26-29; 2:18-25; Matthew 5:27-30; 1 Thess 4:3-4

Article 7: Work

We affirm that work is part of God’s plan for human beings participating in the cultivation and stewardship of creation. The divine pattern is one of labor and rest in healthy proportion to each other. Our view of work should not be confined to commercial activity; it must also include the many ways that human beings serve each other through their efforts. AI can be used in ways that aid our work or allow us to make fuller use of our gifts. The church has a Spirit-empowered responsibility to help care for those who lose jobs and to encourage individuals, communities, employers, and governments to find ways to invest in the development of human beings and continue making vocational contributions to our lives together.

This is a long, confusing and unhelpful statement. It seems to be addressing the challenge of job loss that AI can bring without really doing it directly. It gives a vague description of the church’s role in helping individuals find work but does not address the economic structures that create job loss. It simply misses the point and does not add much to the conversation. Please revise!

We deny that human worth and dignity is reducible to an individual’s economic contributions to society alone. Humanity should not use AI and other technological innovations as a reason to move toward lives of pure leisure even if greater social wealth creates such possibilities.

Another confusing and unhelpful statement. Are we making work holy? What does “lives of pure leisure” mean? Is this a veiled attack on Universal Basic Income? I am confused. Throw it out and start over!

Genesis 1:27; 2:5; 2:15; Isaiah 65:21-24; Romans 12:6-8; Ephesians 4:11-16

Article 8: Data & Privacy

We affirm that privacy and personal property are intertwined individual rights and choices that should not be violated by governments, corporations, nation-states, and other groups, even in the pursuit of the common good. While God knows all things, it is neither wise nor obligatory to have every detail of one’s life open to society.

Another statement that needs more clarification. Treating personal data as private property is a start. However, people are giving their data away willingly. What is privacy in a digital world? This statement suggests the drafters’ unfamiliarity with the issues at hand. Again, technical support is needed.

We deny the manipulative and coercive uses of data and AI in ways that are inconsistent with the love of God and love of neighbor. Data collection practices should conform to ethical guidelines that uphold the dignity of all people. We further deny that consent, even informed consent, although requisite, is the only necessary ethical standard for the collection, manipulation, or exploitation of personal data—individually or in the aggregate. AI should not be employed in ways that distort truth through the use of generative applications. Data should not be mishandled, misused, or abused for sinful purposes to reinforce bias, strengthen the powerful, or demean the weak.

The intention here is good, and it points in the right direction. It is also progress to note that consent is not the only necessary guideline, and to condemn abusive uses. I would like it to be more specific in its call to corporations, governments, and even the church.

Exodus 20:15, Psalm 147:5; Isaiah 40:13-14; Matthew 10:16 Galatians 6:2; Hebrews 4:12-13; 1 John 1:7

Article 9: Security

We affirm that AI has legitimate applications in policing, intelligence, surveillance, investigation, and other uses supporting the government’s responsibility to respect human rights, to protect and preserve human life, and to pursue justice in a flourishing society.

We deny that AI should be employed for safety and security applications in ways that seek to dehumanize, depersonalize, or harm our fellow human beings. We condemn the use of AI to suppress free expression or other basic human rights granted by God to all human beings.

Good intentions with poor execution. The affirmation and denial are contradictory. If you affirm that AI can be used for policing, you have to concede that it will be used to harm some. Is using AI to suppress hate speech acceptable? I am not sure how this adds any insight to the conversation. Please revise!

Romans 13:1-7; 1 Peter 2:13-14

Article 10: War

We affirm that the use of AI in warfare should be governed by love of neighbor and the principles of just war. The use of AI may mitigate the loss of human life, provide greater protection of non-combatants, and inform better policymaking. Any lethal action conducted or substantially enabled by AI must employ human oversight or review. All defense-related AI applications, such as underlying data and decision-making processes, must be subject to continual review by legitimate authorities. When these systems are deployed, human agents bear full moral responsibility for any actions taken by the system.

Surprisingly, this was better than the statement above. It upholds human responsibility but recognizes that AI, even in war, can have life-preserving aims. I would have liked a better definition of defensive uses, yet that is somewhat implied in the principles of just war. I must say this is an area that needs more discussion and further consideration, but this is a good start.

We deny that human agency or moral culpability in war can be delegated to AI. No nation or group has the right to use AI to carry out genocide, terrorism, torture, or other war crimes.

I am glad to see the condemnation of torture here. Lately, I am not sure where evangelicals stand on this issue.

Genesis 4:10; Isaiah 1:16-17; Psalm 37:28; Matthew 5:44; 22:37-39; Romans 13:4​

Article 11: Public Policy

We affirm that the fundamental purposes of government are to protect human beings from harm, punish those who do evil, uphold civil liberties, and to commend those who do good. The public has a role in shaping and crafting policies concerning the use of AI in society, and these decisions should not be left to those who develop these technologies or to governments to set norms.

The statement points in the right direction of public oversight. I would have liked it to be bolder and clearer about the role of the church. It should have also addressed corporations more directly; that seems to be a blind spot in a few articles.

We deny that AI should be used by governments, corporations, or any entity to infringe upon God-given human rights. AI, even in a highly advanced state, should never be delegated the governing authority that has been granted by an all-sovereign God to human beings alone.

I am glad to see corporations finally mentioned in this document, making this a good start.

Romans 13:1-7; Acts 10:35; 1 Peter 2:13-14

Article 12: The Future of AI

We affirm that AI will continue to be developed in ways that we cannot currently imagine or understand, including AI that will far surpass many human abilities. God alone has the power to create life, and no future advancements in AI will usurp Him as the Creator of life. The church has a unique role in proclaiming human dignity for all and calling for the humane use of AI in all aspects of society.

Again, the distinction between narrow and general AI would have been helpful here. The statement seems to be addressing general AI. It also gives the impression that AI is threatening God. Where is that coming from? A more nuanced view of biology and technology would have been helpful here too; they seem to be jumbled together. Please revise!

We deny that AI will make us more or less human, or that AI will ever obtain a coequal level of worth, dignity, or value to image-bearers. Future advancements in AI will not ultimately fulfill our longings for a perfect world. While we are not able to comprehend or know the future, we do not fear what is to come because we know that God is omniscient and that nothing we create will be able to thwart His redemptive plan for creation or to supplant humanity as His image-bearers.

I disagree with the first sentence. There are ways in which AI can affirm and/or diminish our humanity. The issue here seems to be a perceived threat that AI will replace humans or be considered equal to them. I like the hopeful confidence in God for the future, but the previous statement suggests that there is already fear about this. The ambiguity in these statements is unsettling; it suggests that AI is a dangerous unknown. Yes, it is true that we cannot know what it will become, but why not call Christians to seize this opportunity for the kingdom? Why not proclaim that AI can help us co-create with God? Let me reiterate one of the verses mentioned below:

“For God has not given us a spirit of fear and timidity, but of power, love, and self-discipline.”

Genesis 1; Isaiah 42:8; Romans 1:20-21; 5:2; Ephesians 1:4-6; 2 Timothy 1:7-9; Revelation 5:9-10

For an alternative but still evolving Christian position on this matter please check out the Christian Transhumanist Association affirmation.

Blockchain and AI: Powerful Combination or Concerning Trend?

Bitcoin is all over the news lately. After the meteoric rise of these financial instruments, the financial world is both excited and fearful: excited to get on the bandwagon while it is on the rise, but scared that this could be another bubble. Even more interesting has been the rise of blockchain, the underlying technology that enables Bitcoin to run (for those wondering what this technology is, check out this video). In this blog, I reflect on the combination of AI and blockchain by examining an emerging startup in the field. Can AI and blockchain work together? If so, what types of applications could this combination be used for?

Recently, I came across this article from Coincentral that starts answering the questions above. In it, Colin Harper interviews He Yong, CEO of Deep Brain Chain (DBC), one of the first startups attempting to bring AI and blockchain technology together. He puts it this way:

DBC will allow AI companies to acquire data more easily.  Because a lot of data are private data.  They have heavy security, such as on health and on finance.  For AI companies, it’s almost next to impossible to acquire these data.  The reason being, these data are easy to copy.  After the data is copied, they’re easy to leak.  One of DBC’s core features is trying to solve this data leakage issue, and this will help AI companies’ willingness to share data, thus reducing the cost you spend on data, to increase the flow of data in society. This will expand and really show the value of data.

As somebody who works within a large company using reams of private data, I can definitely see the potential of this combination. Blockchain could provide the privacy, through encryption, that would facilitate the exchange of data between private companies. Not that this does not happen already, but it is certainly discouraged given the issues the CEO raises above. Certainly, the upside of aggregating this data for predictive modeling is fairly significant. Companies would have complete datasets revealing sides of the customer that no single company can see on its own.

However, as a citizen, such a development also makes me ponder. Who will get access to this shared data? Will this be done in a transparent way so that regulators and the general population can monitor the process? Who will really benefit from this increased exchange of private data? While I agree that costs would decrease and efficiency improve, my main concern is the aims for which this data will be used. Don’t get me wrong: targeted marketing that follows privacy guidelines can actually benefit everybody. Richer data can also help a company improve customer service.

With that said, the way He Yong describes it, this combination looks set to primarily benefit large private companies that will use the data for commercial aims. Is this really the promise of an AI and blockchain combination: allowing large companies to know even more about us?

Further in the interview, He Yong suggested that blockchain could actually help assuage some of the fears that AI could get out of control:

Some people claim that AI is threatening humanity.  We think that blockchain can stop that, because AI is actually not that intelligent at the moment, so the threat is relatively low.  But in a decade, two decades, AI will be really strong, a lot stronger than it is now.  When AI is running on blockchain, on smart contracts, we can refrain AI.  For example, we can write a blockchain algorithm to restrain the computational power of AI and keep it from acting on its own.

Given my limited knowledge of blockchain, it is difficult to evaluate whether this is indeed the case. I still believe that the biggest threat of AI is not the algorithms themselves but how they are used. Blockchain, as described here, can help make the process more robust, giving human controllers more tools to halt algorithms gone haywire. Yet beyond that, I am not sure how much it can act as a true check and balance for AI.

I’ll be monitoring this trend in the next few months to see how it develops. Certainly, we’ll see more and more businesses emerge seeking to marry blockchain and AI. Each of these technologies will disrupt many industries on its own; combining them could be even more impactful. Whether they can be combined in service of human flourishing remains to be seen.

Integrated STEM Education: Thoughtful, Experiential and Practical

In a previous blog, I proposed the idea of teaching STEM with a purpose. In this blog, I want to take a step back to evaluate how traditional STEM education fails to prepare students for life and propose an alternative way forward: Integrated STEM education.

One of the cardinal sins of our 19th-century education system is its inherent fragmentation. Western academia has compartmentalized the questions of “why” and “how” into separate disciplines.[note] While I am speaking from my experience in the US, I suspect these issues are prevalent in the Majority World as well.[/note] Let STEM students focus on the “how” (skills) and leave the questions of “why” (critical thinking) to philosophers, ethicists and theologians. This way, students are left to make the connection between these questions on their own.

I understand that this will vary across subjects. The technical rigors and complexity of some disciplines may leave little space for reflection. Yet if STEM education is only about raising detached observers of nature or obsessed makers of new gadgets, then we have failed. GDP may grow and the economy may benefit, yet have we really enriched the world?

One could argue that Liberal Arts colleges already do this. As a graduate of a Liberal Arts program myself, I can attest there is some truth to that. Students are required to take a variety of courses spanning Science, Literature, Social Studies, Art and Math. Even so, students take these classes separately, with little to no help in integrating them. Rarely do they have opportunities to engage in multi-disciplinary projects that challenge them to bring together what they have learned. The professors themselves are specialists in a small subset of their discipline, often with little experience interacting outside their disciplinary guild. Furthermore, while a Liberal Arts education does a good job of exposing students to a variety of academic disciplines, it does a rather poor job of teaching practical skills. Some students come out of it with the taste and confidence to continue learning. Yet many leave these degrees confused and end up having to pursue professional degrees in order to pick a career.

Professional training does the opposite. It is precisely what a Liberal Arts education is not: highly practical, short, focused learning of a specific skill. As one who has taken countless professional training courses, I certainly see their value. They also bring together different disciplines and tend to be project-based. The downside is that very few people can efficiently learn anything in a week of six-hour class days. The student is exposed to the contours of a skill, but the learning really happens later, when and if that student tries to apply the skill to a real-world work problem. Students also never have time to reflect on the implications of what they are doing; they are often paid by their companies to pick up the skill quickly so they can increase the firm's productivity. Such a focus on efficiency greatly degrades the quality of the learning, and students are likely to forget most of what they learned in the long run.

Finally, there is learning through experience. Most colleges recognize this and offer study-abroad semesters for students wanting to take their learning into the world. I had the opportunity to spend a summer in South Korea, and it truly changed me in enduring ways. The same can be said of less structured experiences such as parenting, doing community service, being involved with a community of faith, and work itself. A word of caution: simply going through an experience does not ensure that the individual actually learns. While some of the learning is assimilated, much of it is lost if the individual does not digest the experience through reflection, writing and talking about it with others.

Clearly, each of these approaches in and of itself is an incomplete form of education. A Liberal Arts education alone will only fill one's head with knowledge (and a bit of pride too). Professional training will help workers get the job done, but they will not develop as individuals. Experience apart from reflection will only produce salutary memories. What is needed is an approach that combines the strengths of all three.

I believe a hands-on, project-based, ethically reflective STEM education draws from the strengths of all of these. It is broad like the Liberal Arts, skill-building like professional training, and experience-rich through its hands-on projects. Above all, it should occur in a nurturing environment where young students are encouraged to take risks while still receiving the guidance they need to learn from their mistakes. A neatly controlled learning environment is akin to the world of movies, where main characters come up with plans on a whim and execute them flawlessly. Real life never happens that way. It is full of failures, setbacks, disappointments and occasionally some glorious successes. The more our educational experience mimics that, the better it will prepare students for the real world.

Automated Research: How AI Will Speed Up Scientific Discovery

The potential of AI is boundless. Currently, there is a lot of buzz around how it will change industries like transportation, entertainment and healthcare. Less known but even more revolutionary is how AI could change science itself. In a previous blog, I speculated about the impact of AI on academic research through text mining. The implications of the automated research described here are even more far-reaching.

Recently, I came upon an article in Aeon that described exactly that. In it, biologist Ahmed Alkhateeb eloquently makes his argument in the excerpt below:

Human minds simply cannot reconstruct highly complex natural phenomena efficiently enough in the age of big data. A modern Baconian method that incorporates reductionist ideas through data-mining, but then analyses this information through inductive computational models, could transform our understanding of the natural world. Such an approach would enable us to generate novel hypotheses that have higher chances of turning out to be true, to test those hypotheses, and to fill gaps in our knowledge.

As a good academic, the author says a lot in a few words in the paragraph above. Let me unpack his statement a bit.

His first point is that in the age of big data, individual human minds are incapable of effectively analyzing, processing and making meaning of all the information available. There was a time when all the knowledge of a discipline fit in books that could be read, or at least summarized, by one person. Furthermore, traditional ways of doing research, whether through lab experimentation, sampling, controlling for externalities, or hypothesis testing, take a long time and give only a narrow view of reality. Hence, in a time when big data is available, such an approach will not be sufficient to harness all the knowledge that could be discovered.

His second point is to suggest a new approach that incorporates Artificial Intelligence through pattern-seeking algorithms that can effectively and efficiently mine data. The Baconian method simply means discovering knowledge through the disciplined collection and analysis of observations. He proposes an algorithmic approach that would mine data, come up with hypotheses through computer models, then collect new data to test those hypotheses. Furthermore, this process would not be limited to an individual but would draw from the knowledge of a vast scientific community. In short, he proposes including AI in every step of scientific research as a way to improve quality and accuracy. The idea is that an algorithmic approach would produce better hypotheses and also test them more efficiently than humans can.
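The mine-hypothesize-test loop can be sketched in miniature. The Python toy below is entirely illustrative (synthetic data and a simple correlation-based hypothesis generator, not Alkhateeb's actual tooling): it proposes candidate relationships from a "discovery" half of the data, then confirms them on a held-out half.

```python
import random
import statistics

def pearson(xs, ys):
    """Pearson correlation between two equal-length lists."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

random.seed(42)

# Synthetic "observations": b truly depends on a; c is unrelated noise.
n = 200
a = [random.gauss(0, 1) for _ in range(n)]
b = [2 * x + random.gauss(0, 0.5) for x in a]
c = [random.gauss(0, 1) for _ in range(n)]
data = {"a": a, "b": b, "c": c}

# Step 1: mine a "discovery" half of the data for candidate patterns.
half = n // 2
names = list(data)
hypotheses = [
    (u, v)
    for i, u in enumerate(names)
    for v in names[i + 1:]
    if abs(pearson(data[u][:half], data[v][:half])) > 0.5
]

# Step 2: test each candidate hypothesis against the held-out half.
confirmed = [
    (u, v) for u, v in hypotheses
    if abs(pearson(data[u][half:], data[v][half:])) > 0.5
]
print(confirmed)
```

On this data the loop confirms the real relationship between a and b and discards the noise variable; a production system would replace the correlation threshold with proper statistical tests and far richer model classes, but the generate-then-validate shape is the same.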

As the author concedes, current algorithms and approaches are not fully adequate for the task. While AI can already mine numeric data well, text mining is a more recent development. Computers think in numbers, so getting them to make sense of text requires time-consuming processes that translate text into numeric values. Relevant to this topic, the Washington Post just published an article about how computers have, for the first time, beaten human performance on a reading-comprehension test. This is an important step if we want to see AI more involved in scientific research and discovery.
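To make the "computers think in numbers" point concrete, here is a minimal bag-of-words sketch: each sentence becomes a vector of word counts that a model can compute with. Real text-mining pipelines are far more sophisticated, but the translation step is the same in spirit.

```python
# Minimal bag-of-words: turn sentences into numeric vectors a model can use.
docs = [
    "ai will change science",
    "big data will change research",
]

# Build a vocabulary mapping each distinct word to a column index.
vocab = sorted({word for doc in docs for word in doc.split()})
index = {word: i for i, word in enumerate(vocab)}

def vectorize(doc):
    """Count how often each vocabulary word appears in the document."""
    vec = [0] * len(vocab)
    for word in doc.split():
        if word in index:
            vec[index[word]] += 1
    return vec

vectors = [vectorize(doc) for doc in docs]
print(vocab)
print(vectors)
```

Once text is in this numeric form, the same pattern-mining machinery used for numbers applies to it directly, which is why text representation was such a bottleneck for AI-assisted research.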

How will automated research impact our world?

The promise of AI-assisted scientific discovery is remarkable. It could lead to cures for diseases, the discovery of new energy sources and unprecedented breakthroughs in technology. Another outcome would be the democratization of scientific research: as research gets automated, it becomes easier for others to do, just as Windows made the computer accessible to people who do not code.

In spite of all this potential, such a development should give us pause. It is impressive how much of our mental capacity is being outsourced to machines. How comfortable are we with this inevitable meshing of bodies and electronics? Who will lead, fund and direct automated research? Will it end up enriching corporations or improving quality of life for all? I disagree with the author's statement that automated research would make science "limitlessly free." Even as machines do the work, humans still control the direction and scope of the research. As we ship more human activity to machines, ensuring they reflect our ethical standards remains a human mandate.

Why Is Artificial Intelligence All Over The News Lately?

AI hype has come and gone in the past. Why is it back in the spotlight now? I will answer this question by describing the three main trends driving the AI revolution.

Artificial Intelligence has been around since the 1950s. Yet after much promise and fanfare, AI entered a winter period in the 1980s, when investment, attention and enthusiasm greatly diminished. Why has this technology re-emerged in the last few years? What has changed? In this blog, I will answer these questions by describing the three main trends driving the AI revolution: breakthroughs in computing power, the emergence of big data and advances in machine learning algorithms. These three trends converged to catapult AI into the spotlight.

Computing Power Multiplies

When neural networks (among the earliest algorithms to be considered Artificial Intelligence) were theorized and developed, the computers of the time did not have the processing power to run them effectively. The science was far ahead of the technology, delaying their testing and improvement for decades.

Thanks to Moore’s law, we are now at a point where computing power is affordable, available and effective enough for some of these algorithms to be tested. My first computer in the early 90s had 128KB of RAM; today, we have thumb drives with hundreds of thousands of times that capacity! Even so, there is still a way to go, as these algorithms can be resource-expensive on existing hardware. Yet as system architects leverage distributed computing and chip manufacturers experiment with quantum computing, AI will become even more viable. The main point is that some of these algorithms can now be tested, even if a run takes hours or days, where before that was inconceivable.

Data Gets Big

With smartphones, tablets and digital sensors becoming common in our lives, the amount of data available has grown exponentially. Just think about how much data you generate in one day, every time you use your phone or computer or enter a retail store. And that is just the data collected on an individual; consider also the amount of data large corporations collect and store on customers’ transactions every day.

Why is this relevant? AI is only as good as the data it is fed for learning. A great example is the data available to Google from searches and catalogued websites. That is why Google can use Artificial Intelligence to translate texts: it compares translations of large bodies of text. This way, it can transcend word-by-word translation rules to capture colloquialisms and the probable meaning of words based on context. It is not as good as human translation, but it is fast becoming comparable.
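A drastically simplified illustration of the corpus-comparison idea (a toy, not Google's actual system): given counts of how often a source word was observed with each translation in a parallel corpus, pick the most frequent one. Real systems also weigh surrounding context, which is what lets them get past word-by-word rules.

```python
from collections import Counter

# Toy "parallel corpus" evidence: (source word, observed translation) pairs.
# In a real system these counts come from aligning millions of sentence pairs.
observations = [
    ("bank", "banco"), ("bank", "banco"), ("bank", "orilla"),
    ("river", "rio"), ("river", "rio"),
]

# Count how often each translation was observed per source word.
table = {}
for src, tgt in observations:
    table.setdefault(src, Counter())[tgt] += 1

def translate(word):
    """Return the most frequently observed translation of a word."""
    return table[word].most_common(1)[0][0]

print(translate("bank"))   # "banco" wins 2-to-1 over "orilla"
```

The key point is that no translation rule is hand-written here: the behavior falls out of the data, and more data means better estimates, which is exactly why big data matters so much to AI.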

There is more. Big data is about to get bigger because of the Internet of Things (IoT). This new technology expands data capture beyond phones and tablets to all types of appliances. Think about a fridge that tells you when the milk is about to expire. As sensors and processors spread to all electronics, the amount of data available for AI applications will grow exponentially.

Machine Learning Comes of Age            

The third trend comes from recent breakthroughs proving the effectiveness of machine learning algorithms. Machine learning is the very foundation of AI technology because it enables computers to detect patterns in data without being explicitly programmed to do so. Even as computing power improved and data became abundant, the technology was mostly untested on real-life problems, breeding skepticism among scientists and investors. In 2012, a computer learned to identify cats by watching YouTube videos using deep learning algorithms, an experiment hailed as a major breakthrough for computer vision. Success stories like this brought machine learning into the spotlight, and these algorithms started getting attention not just from the academic community but also from investors and CEOs. Investment in Artificial Intelligence has significantly increased since then and is now projected to reach $47 billion by 2020. With both an abundance of data and enough computing power to process it, machine learning could finally be used effectively. These trends paved the way for Artificial Intelligence to become a viable possibility again.
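A minimal illustration of "detecting patterns in data without being explicitly programmed": in the nearest-centroid classifier below, the decision rule is never written by hand; it is derived from labeled examples. (The data and labels are made up for the sketch.)

```python
# Nearest-centroid classifier: the "rule" is learned from examples,
# not hand-coded.
training = {
    "cat": [(4.0, 30.0), (4.5, 25.0), (3.8, 35.0)],   # (weight kg, length cm)
    "dog": [(20.0, 60.0), (25.0, 70.0), (18.0, 55.0)],
}

# Learn: summarize each class by the mean of its examples.
centroids = {
    label: tuple(sum(vals) / len(vals) for vals in zip(*points))
    for label, points in training.items()
}

def classify(point):
    """Predict the label whose centroid is closest to the point."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist2(point, centroids[label]))

print(classify((5.0, 32.0)))
```

Deep learning systems like the one in the cat experiment learn vastly richer representations from raw pixels, but the principle is the same: the model's behavior comes from the data, not from a programmer's explicit rules.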

Pulling It All Together

These trends have turned Artificial Intelligence from a fixture of science fiction into a present reality we must contend with. This did not happen overnight but emerged through a convergence of technological advances that created a ripe environment for AI to flourish. As they came together, the media, politicians and industry titans started to notice. That is why your Twitter feed is exploding with AI-related articles.

Because the trends driving the emergence of AI show no sign of slowing down, this is probably only the beginning of an AI springtime. While there are events that could derail this virtuous cycle, the forecast is for continuous advancement in the years, and possibly decades, to come. So for now, the attention and enthusiasm are bound to hold steady.

As AI applications are tested by large companies and start-ups alike, this is the time to start asking the right questions about how they will impact our future. The good news is that there is still time to steer the advance of AI toward human flourishing. So let the conversation continue; I hope the attention engendered by the media will keep us engaged, active and curious on this topic.

Can Companion Robots Heal Our Loneliness?

Can companion robots improve the social life of the elderly? That is what Intuition Robotics wants you to believe with its new product, ElliQ. This sociable robot interacts with its users, reminding them to take their meds, call friends and even play games. The rationale is compelling: with an aging population and longer life spans, using AI to prevent social isolation is a clever idea. The question, of course, is how much it remains incumbent on the user to seek out these interactions.

Not in ElliQ's demographic? No worries, there are plenty of other sociable robot options for you. Meet Buddy, the companion robot for everyone. He will remind you of Rosie from The Jetsons: he can protect your home, play your music, display facial expressions and more. This project also has a social component in that it proposes to democratize robotics through an open-source platform. That point caught my attention, since making robotics technology accessible could be a game-changer for developing countries. Using technology for human advancement is always an attractive proposition.

Now, for the future of companion robots, going from cute to human-like, check out Nadine. This human-looking bot goes right past the uncanny valley; that is, she looks human enough not to give us the creeps. She also stands out for her advanced emotional intelligence, able to detect emotions through our facial expressions and to recall past conversations. Her creators also believe her to be a good companion for those with dementia or autism.

These are glimpses of a coming future where robots will increasingly become part of our lives. Given the acute social isolation many suffer in our time, social robots offer a promising solution. Yet can they really provide the relational warmth mostly found in human relationships? That remains to be seen. If, as in the first example, the robot is a conduit that strengthens existing relationships, then this could be a form of enhancement rather than replacement. However, judging by the last example, the line gets blurry. My hope is that we start reflecting on these issues now rather than once these technologies come to commercial fruition. The best interaction with technology is one shaped by human wisdom.

What are your thoughts? Would you consider acquiring a social robot? If so, why?

Who Gets to Decide Our Future?

Business Insider published a provocative article suggesting a transition to come in which our devices progress from detached to wearable, and eventually to implanted. Elon Musk just launched Neuralink, a company seeking to develop a neural lace that could upload our thoughts. Swedish startup Epicenter is now implanting microchips in its employees that act as keycards, letting them open doors and pay for a smoothie. Could this be the beginning of a whole new industry that wants to shape us into cyborgs?

This is not science fiction anymore but part of a near future. Technology is essentially the outgrowth of humanity's desire to create tools, and tools are extensions of our bodies that let us perform tasks more proficiently. Yet these technologies take tools to a different level: they would no longer be extensions but actual parts of our bodies. This is clearly a new frontier we have scarcely considered.

As business titans imagine this cyborg scenario, the question I want to ask is: who gets to decide our future? Just look at how our lives are already shaped by the gadgets that surround us. Are we ready to accept them as parts of our bodies? In a vacuum of vision, the future belongs to the few who dare imagine it now. Maybe it is time we step into these conversations and start imagining alternative futures.

Are you ready to imagine?