The Great Equalizer?
As we rebuild the workforce, we can avoid repeating past mistakes.
A diverse workforce begins with a diverse candidate pool. Ensuring that the hiring process isn’t shutting out qualified, racially diverse candidates is more important now than ever. Recruiters and HR specialists must be intentional about mitigating bias in the hiring process, and this includes the use of artificial intelligence (AI). Although often thought of as an objective way to streamline the hiring workflow, AI hiring tools can automate rejections and narrow a candidate pool through a series of decisions that close the door to a wider range of qualified applicants.1
The Nature of AI: Understanding How Bias Happens
Busy hiring managers sometimes use AI in the form of hiring algorithms that can quickly screen job applicants’ résumés, analyze large datasets, and reduce the amount of time it takes to find qualified candidates.1 Although AI algorithms may appear to be free from human bias, AI is only as unbiased as those who design, oversee, and input data into these systems.2
The very nature of AI means that these systems are always learning and changing. As an algorithm grows more sophisticated, it draws on broader datasets, ones that were not directly introduced by a human.3 It can also give outsized emphasis to the patterns it identifies.4 Without the benefit of human judgment or a moral compass, this more complex machine reasoning can incorporate irrelevant data, leading to biased results.
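To see the core problem in the simplest possible case, consider a minimal sketch (the school labels and hire counts below are invented for illustration, not drawn from any real data): a scoring model that learns only from past hiring decisions will faithfully reproduce whatever skew those decisions contain.

```python
from collections import defaultdict

# Hypothetical historical decisions: (school_type, hired).
# The skew in these counts is invented purely for illustration.
history = [("school_a", 1)] * 90 + [("school_a", 0)] * 10 \
        + [("school_b", 1)] * 30 + [("school_b", 0)] * 70

counts = defaultdict(lambda: [0, 0])  # school -> [hires, applicants]
for school, hired in history:
    counts[school][0] += hired
    counts[school][1] += 1

def predicted_score(school):
    hires, total = counts[school]
    return hires / total  # the "objective" output is just past behavior

print(predicted_score("school_a"), predicted_score("school_b"))  # 0.9 0.3
```

The model looks neutral, but its ranking is nothing more than the historical skew fed back as a prediction.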
From Sourcing to Sorting: Bias Can Creep in at Every Step
Posting
Where a job posting appears determines who learns about the job and who applies. Algorithmic ad platforms and job boards use predictive technology to target the demographics deemed most likely to click on specific ads. These methods can exclude candidates by limiting who sees particular job advertisements. In one study of Facebook’s ad-delivery system, job advertisements for taxi drivers were delivered to an audience that was approximately 75% Black, while advertisements for lumber-industry jobs reached an audience that was 72% white.5
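A rough sketch of how optimized delivery skews exposure (the click-rate estimates and group labels below are invented; this is not any platform’s actual system): when an ad is served only to the users predicted most likely to click, any demographic pattern in those predictions becomes a demographic pattern in who ever sees the job.

```python
import random

random.seed(0)

# Hypothetical users: each has a demographic group and a click-through
# rate the platform has predicted from past behavior on similar ads.
users = (
    [{"group": "A", "pred_ctr": random.gauss(0.06, 0.01)} for _ in range(500)]
    + [{"group": "B", "pred_ctr": random.gauss(0.04, 0.01)} for _ in range(500)]
)

# "Optimized" delivery: spend the ad budget on the top 20% by predicted CTR.
audience = sorted(users, key=lambda u: u["pred_ctr"], reverse=True)[:200]

share_a = sum(u["group"] == "A" for u in audience) / len(audience)
print(f"Group A is 50% of users but {share_a:.0%} of the ad's audience")
```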
Searching
Job candidates commonly learn about open positions through collaborative filtering, a process that predicts someone’s interests by comparing them to seemingly similar job searchers. This filtering risks stereotyping when the actions of a few appear to represent an entire group.6 For example, a person of color looking for a management position may be shown only junior positions if an AI-powered system has recorded other people of color clicking on junior-level positions. Based on this incomplete data, the AI may make an incorrect assumption about the management-position seeker and filter out management jobs. The result is a qualified candidate not seeing—and therefore not applying for—certain open positions through no fault of their own.1
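A minimal sketch of the mechanism, with invented users and click data: a collaborative filter surfaces only what a seeker’s nearest “neighbors” clicked, so a sparse or skewed click history can make management postings invisible.

```python
# Toy user-user collaborative filtering. All names and clicks are hypothetical.
clicks = {
    "seeker": {"junior_analyst"},
    "user_1": {"junior_analyst", "junior_dev"},
    "user_2": {"junior_analyst", "junior_dev", "coordinator"},
    "user_3": {"director_ops", "vp_engineering"},  # never overlaps with seeker
}

def jaccard(a, b):
    # Similarity = overlap of clicked jobs relative to all jobs either clicked.
    return len(a & b) / len(a | b)

def recommend(target, k=2):
    others = [(u, jaccard(clicks[target], items))
              for u, items in clicks.items() if u != target]
    others.sort(key=lambda t: t[1], reverse=True)
    recs = set()
    for user, sim in others[:k]:
        if sim > 0:
            recs |= clicks[user] - clicks[target]
    return recs

print(recommend("seeker"))
# {'junior_dev', 'coordinator'} -- the management postings never surface,
# because the seeker's only "neighbors" clicked junior roles.
```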
Screening
Predictive screening systems often use an algorithm that relies on data from previous successful hires to evaluate prospective candidates responding to a job posting.7 This model of screening is likely to include and perpetuate interpersonal, institutional, and systemic social biases.1
Biases can enter the process when an AI hiring system screens application materials using proxies to narrow the candidate pool. At a company in which the majority of employees are white males, a hiring algorithm using a proxy based on the names of previous hires might advance a candidate named Connor to the next round of hiring while rejecting a candidate named Jamal.8 Recently, a major tech company’s recruitment AI taught itself who the company’s “ideal” candidate was by reviewing patterns in résumés submitted over a 10-year period. Because the majority of applicants during this time had been male, the system learned from those résumés and downgraded applications that used the word “women’s” when referring to teams or clubs, and even downgraded graduates of two women’s colleges. The result was a predominantly male applicant pool, as the hiring AI screened out a significant number of women applicants.9
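The proxy problem can be shown in a few lines. In this sketch, the weights are invented to mimic the reported behavior and do not come from any real system’s parameters: the model never sees gender as a field, yet a single word that correlates with gender changes the score.

```python
# Hypothetical keyword weights a model might learn from past-hire data.
learned_weights = {
    "software": +2.0,
    "captain":  +1.0,
    "women's":  -2.5,  # proxy: correlates with résumés the old data rejected
}

def score(resume_text):
    tokens = resume_text.lower().split()
    return sum(learned_weights.get(t, 0.0) for t in tokens)

a = score("software engineer, captain of chess club")
b = score("software engineer, captain of women's chess club")
print(a, b)  # 3.0 0.5 -- identical qualifications, lower score for résumé B
```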
Interviewing
The speed and analytical power of AI are a tempting combination for hiring managers, especially at large companies. One algorithm can cross-reference up to 500,000 data points from a 30-minute assessment and score applicants against reference data from current employees.10 It is important for recruiters to recognize automation bias: an overdependence on the predictions or rankings an AI provides, paired with the assumption that such calculations are objective.1 Some organizations use AI that assesses interviewees by analyzing their facial expressions, word choices, and tone of voice. Because these algorithms judge new candidates against data collected from previous hires, existing biases and prejudices can be perpetuated and amplified.11
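One way such scoring can work, sketched here with invented numbers and without reference to any vendor’s actual method: if candidates are rated by how closely their interview features resemble those of current employees, the algorithm rewards resemblance to the existing workforce, whatever its composition.

```python
import math

def cosine(u, v):
    # Standard cosine similarity between two feature vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Hypothetical interview features (e.g., speech pace, word choice, affect)
# for a small, homogeneous sample of current employees.
incumbents = [[0.9, 0.8, 0.7], [0.8, 0.9, 0.6], [0.85, 0.75, 0.65]]

def automated_score(candidate):
    # Score = average similarity to the current-employee sample.
    return sum(cosine(candidate, e) for e in incumbents) / len(incumbents)

print(round(automated_score([0.9, 0.8, 0.7]), 3))  # resembles incumbents
print(round(automated_score([0.2, 0.9, 0.9]), 3))  # equally able, scored lower
```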
Looking to the Future
The Need for Transparency
AI researchers, legislators, professors, and applicants have expressed concerns that a lack of transparency about which data points are collected, and about how algorithms extrapolate from their initial data sets, can lead to hiring decisions that discriminate against non-native English speakers and people of color.10 Without transparency, it is difficult to determine how an AI decides which potential employee is qualified12 and to hold an organization accountable for discriminatory hiring practices that may have their roots in AI. When used by companies that are already racially homogeneous, AI will learn from historical data and set hiring criteria that match the dominant employee group, perpetuating a lack of racial diversity in the company’s workforce.13
Alternative approaches, such as building specific constraints into an algorithm from the start or using additional algorithms to “audit” the initial AI, are being explored, but they need further refinement and testing.14
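One widely used heuristic that such an audit might encode is the EEOC’s “four-fifths rule,” which flags a selection step when any group’s pass rate falls below 80% of the highest group’s rate. A minimal sketch, with invented pass-through counts:

```python
def selection_rates(outcomes):
    """outcomes: {group: (selected, applied)} -> {group: rate}"""
    return {g: sel / app for g, (sel, app) in outcomes.items()}

def four_fifths_flags(outcomes, threshold=0.8):
    # Flag any group whose selection rate is under 80% of the best rate.
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: rate / best < threshold for g, rate in rates.items()}

# Hypothetical counts from an AI résumé screen.
screen = {"group_a": (120, 400), "group_b": (45, 300)}
print(selection_rates(screen))    # {'group_a': 0.3, 'group_b': 0.15}
print(four_fifths_flags(screen))  # group_b flagged: 0.15/0.30 = 0.5 < 0.8
```

An audit like this is only a starting point; it detects a disparity but says nothing about which upstream proxy produced it.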
The Post-Covid-19 Workplace
Recent data on the impact of Covid-19 on the US economy indicates that women—and women of color in particular—are experiencing higher job-loss rates than other groups.15
Black women were the only US population subset whose unemployment rate rose in May 2020,16 and despite small gains in June, Black women and Latinas still face the worst unemployment rates.17
As workplaces reopen and begin hiring again, it is critical to restore the diversity that the pandemic has stripped from our workforce, especially as more companies verbally commit to supporting the #BlackLivesMatter movement. The first step to rebuilding a diverse workforce is equitable hiring, and organizations must not let AI be a barrier to access. Artificial intelligence is a tool, and one that can learn and replicate longstanding institutional barriers and biases. Intentional recruitment and hiring processes are necessary to ensure that AI doesn’t replicate existing biases, screen out talent, or continue to have a disproportionately negative effect on underrepresented groups such as people of color, women, and people with disabilities.
1. Miranda Bogen, “Help Wanted: An Examination of Hiring Algorithms, Equity, and Bias,” Upturn, 2018.
2. Megan Farokhmanesh, “The Next Frontier in Hiring Is AI-Driven: Can an AI Ease the Stress of Recruiting?” The Verge, January 30, 2019.
3. John Villasenor, “Artificial Intelligence and Bias: Four Key Challenges,” The Brookings Institution, January 3, 2019.
4. Ayanna Howard and Jason Borenstein, “The Ugly Truth About Ourselves and Our Robot Creations: The Problem of Bias and Social Inequity,” Science and Engineering Ethics, vol. 24 (2018): 1521–1536.
5. Muhammad Ali, Piotr Sapiezynski, Miranda Bogen, Aleksandra Korolova, Alan Mislove, and Aaron Rieke, “Discrimination through Optimization: How Facebook’s Ad Delivery Can Lead to Skewed Outcomes,” Proceedings of the ACM on Human-Computer Interaction, vol. 3, CSCW (2019): 1–30.
6. Catherine Stinson, “Algorithms Are Not Neutral: Bias in Recommendation Systems,” Center for Science and Thought, University of Bonn, 2018.
7. Miranda Bogen, “All the Ways Hiring Algorithms Can Introduce Bias,” Harvard Business Review, May 6, 2019.
8. Anya Prince and Daniel Schwarcz, “Proxy Discrimination in the Age of Artificial Intelligence and Big Data,” Iowa Law Review, vol. 105 (2020).
9. Jeffrey Dastin, “Amazon Scraps Secret AI Recruiting Tool That Showed Bias Against Women,” Reuters, October 9, 2018.
10. Drew Harwell, “A Face-Scanning Algorithm Increasingly Decides Whether You Deserve the Job,” Washington Post, November 6, 2019.
11. Ivan Manokha, “Facial Analysis AI Is Being Used in Job Interviews—It Will Probably Reinforce Inequality,” The Conversation, October 7, 2019.
12. AON, “Be Aware of ‘Black Box’ Problems When Using AI for Recruiting,” October 30, 2018.
13. Liz Webber, “These Entrepreneurs Are Taking On Bias in Artificial Intelligence,” Entrepreneur, September 5, 2018; Bahar Gholipour, “We Need to Open the AI Black Box Before It’s Too Late,” Futurism, January 18, 2018.
14. James Zou and Londa Schiebinger, “AI Can Be Sexist and Racist — It’s Time to Make It Fair,” Nature, July 18, 2018.
15. Danielle Kurtzleben, “Job Losses Higher Among People of Color During Coronavirus Pandemic,” NPR, April 22, 2020.
16. Claire Ewing-Nelson, “Despite Slight Gains in May, Women Have Still Been Hit Hardest by Pandemic-Related Job Losses,” National Women’s Law Center, June 2020.
17. Claire Ewing-Nelson, “June Brings 2.9 Million Women’s Jobs Back, Many of Which Are at Risk of Being Lost Again,” National Women’s Law Center, July 2020.