Thousands of resumes, few positions, and limited time: the story repeats itself in companies globally. Growing economies and open labor markets, re-shaped by platforms like LinkedIn and Indeed and by a growing recruiting industry, have opened the labor market wide. While this has expanded opportunity, it has left employers with the daunting task of sifting through the barrage of applications, cover letters, and resumes thrown their way. Enter AI, with its promise to optimize and smooth out the pre-selection process. That sounds like a sensible solution, right? Yet how is AI hiring impacting minorities?
Not so fast – a 2020 paper summarizing data from multiple studies found evidence of bias in AI used for both recruiting and selection. As with facial recognition, AI for employment is showing disturbing signs of bias. This is a concerning trend that requires attention from employers, job applicants, citizens, and government entities.
Using AI for Hiring
The MIT podcast In Machines We Trust goes under the hood of AI hiring. What they found was surprising and concerning. Firstly, it is important to highlight how widespread algorithms are in every step of hiring decisions. One of the most common ways is through initial screening games that narrow the applicant pool for interviews. These games come in many forms that vary by vendor and job type. What they share in common is that, unlike traditional interview questions, they do not directly relate to the skills relevant to the job at hand.
AI game creators claim that this indirect method is intentional. Because the candidate is unaware of what the employer is testing, they cannot “fake” a suitable answer. Instead, many of these tools try to see whether the candidate exhibits the traits of past successful employees in that job. Employers therefore claim they get a better measurement of a candidate’s fit for the job than they would otherwise.
How about job applicants? How do they fare when AI decides who gets hired? More specifically, how does AI hiring impact minorities’ prospects of getting a job? On the other side of the interview table, job applicants do not share the vendors’ enthusiasm. Many report unease at not knowing the tests’ criteria. This unease can itself severely impact interview performance, creating additional unnecessary anxiety. More concerning is how these tests affect applicants with disabilities. Today, thanks to legal protections, job applicants do not have to disclose disabilities during the interviewing process. Some of these tests may now force them to disclose earlier.
What about Bias?
Unfortunately, bias does not stop at applicants with disabilities. Other minority groups are also feeling the pinch. The MIT podcast tells the story of an African-American woman who, despite having the prerequisite qualifications, did not get a single callback after applying to hundreds of positions. She eventually found a job the old-fashioned way – getting an interview through a network acquaintance.
The problem of bias is not entirely surprising. If machine learning models are trained on past data from job functions that are already fairly homogeneous, they will only reinforce and duplicate this reality. Without examining the initial data or applying intentional weights, the process will continue to perpetuate this problem. Hence, when AI is trained on majority-dominated datasets, the algorithms will tend to look for majority traits at the expense of minorities.
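The feedback loop described above can be sketched in a few lines. This is a deliberately toy model – the features, group profiles, and 9-to-1 split are all hypothetical – but it shows how a “hire people like our past hires” scorer, trained on a majority-dominated history, ranks a majority-profile applicant above a minority-profile one without ever seeing a group label:

```python
# Toy sketch (hypothetical data): a model trained to prefer candidates
# who resemble past hires reproduces the skew of those past hires.

# Past successful hires: 9 with a "group A" profile, 1 with a "group B"
# profile. The model never sees group membership, only proxy features.
past_hires = [{"hobby_sailing": 1, "state_school": 0}] * 9 \
           + [{"hobby_sailing": 0, "state_school": 1}] * 1

# "Training" step: compute the centroid (average feature vector) of past hires.
features = list(past_hires[0])
centroid = {f: sum(h[f] for h in past_hires) / len(past_hires) for f in features}

def score(candidate):
    # Negative squared distance to the centroid: higher means
    # "more like our historical hires".
    return -sum((candidate[f] - centroid[f]) ** 2 for f in features)

group_a_applicant = {"hobby_sailing": 1, "state_school": 0}
group_b_applicant = {"hobby_sailing": 0, "state_school": 1}

# The group-A profile lands much closer to the centroid, so it wins
# the ranking even though neither feature measures job skill.
print(score(group_a_applicant), score(group_b_applicant))
```

Nothing here audits whether `hobby_sailing` predicts job performance; the model simply encodes who was hired before, which is the core of the concern.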
This becomes a bigger problem when AI applications go beyond resume filtering and selection games to become part of the interviewing process itself. AI hiring companies like HireVue claim that their algorithms can predict a candidate’s success from their tone of voice in an interview. Other applications summarize taped interviews to select the most promising candidates. While these tools can clearly speed up the hiring process, their biases can severely exclude minorities from it.
The Growing Need for Regulation
AI in hiring is here to stay, and it can be very useful. In fact, the majority of hiring managers state that AI tools save them time in the hiring process. Yet the biggest concern is how these tools, by shortening selection and interview time, tip the balance of power toward employers when both sides should benefit from their application.
If AI for employment is to work for human flourishing, then it cannot simply be a time-saving tool for employers. It must also expand opportunity for under-represented groups while meeting the constant need for a qualified labor force. Above all, it cannot claim to be a silver bullet for hiring; it should instead be an informative tool that adds one data point for the hiring manager.
There is growing consensus that AI in hiring cannot go on unregulated. Innovation in this area is welcome, but expecting vendors and employers to self-police against disparate impact is naive. Hence, we need intelligent regulation that ensures workers are fairly represented in the process. As algorithms become more pervasive in interviewing, we must monitor their outcomes for adverse impact.
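Monitoring for adverse impact need not be mysterious. One long-standing benchmark is the EEOC’s “four-fifths rule”: if a group’s selection rate falls below 80% of the highest group’s rate, the process warrants scrutiny. The rule is real; the audit numbers and function names below are hypothetical, a minimal sketch of what such a check could look like:

```python
# Sketch of a four-fifths-rule audit over hiring-funnel outcomes.
# outcomes maps each group to (number selected, number who applied);
# the data below is invented for illustration.

def adverse_impact_ratios(outcomes):
    """Return each group's selection rate divided by the highest group's rate."""
    rates = {g: selected / applied for g, (selected, applied) in outcomes.items()}
    top = max(rates.values())
    return {g: r / top for g, r in rates.items()}

outcomes = {
    "group_a": (50, 100),  # 50% selection rate (hypothetical)
    "group_b": (20, 80),   # 25% selection rate (hypothetical)
}

ratios = adverse_impact_ratios(outcomes)
# Groups below the 0.8 threshold are flagged for review.
flagged = [g for g, r in ratios.items() if r < 0.8]
print(ratios)   # group_b's ratio is 0.25 / 0.50 = 0.5
print(flagged)
```

A check like this is a screening heuristic, not proof of discrimination, but it is exactly the kind of routine measurement that regulation could require vendors and employers to report.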
Job selection is not a trivial activity; it is foundational for social mobility. We cannot afford to get this wrong. Unlike the psychometric evaluations of the past, which rest on scientific and empirical evidence, these new tools are mostly untested. When AI vendors claim they can predict job success from tone of voice or facial expression, the burden is on them to prove the fairness of their methods. Should AI decide who gets hired? Given the evidence so far, the answer is no.