Attracting a diverse range of candidates is key to hiring the talent your organization needs to be successful in a competitive landscape. Inclusive organizations embed equity throughout an employee’s journey—even before they’re hired. Using AI to create more equitable entry points opens the door to a much broader candidate pool.
How and Why to Review Job Descriptions With AI
- Remember that a job seeker’s first impression of organizational culture is often the job description.
- AI can help you look out for words that may unintentionally discourage well-qualified candidates from applying.
- For example, adjectives like “competitive,” “dominant,” and “determined” are often understood as describing masculine traits and could lead women to believe that they would be less welcomed in an organization that uses those words.1
- Job descriptions with these types of words may also lead interviewers to evaluate candidates on these characteristics, straying from objective criteria and disadvantaging some people.
- Harness a large language model (LLM) such as ChatGPT, Microsoft Copilot, or Google Gemini to review a job posting against criteria you specify and flag language that may reflect implicit bias.
- Check if your hiring software has this feature built in.
- Review and incorporate the suggested changes to remove bias and avoid potential stereotypes, ensuring that the job description appeals to candidates from a variety of backgrounds.
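Before sending a posting to an LLM, a quick local scan can surface the kinds of masculine-coded adjectives mentioned above. A minimal sketch in Python; the word list is illustrative, not exhaustive, and a real tool would draw on a research-backed lexicon:

```python
import re

# Illustrative (NOT exhaustive) set of adjectives that research links
# to masculine-coded job-advertisement language.
MASCULINE_CODED = {"competitive", "dominant", "determined", "aggressive", "driven"}

def flag_coded_words(job_description: str) -> list[str]:
    """Return the masculine-coded words found in a job description."""
    words = re.findall(r"[a-z]+", job_description.lower())
    return sorted(set(words) & MASCULINE_CODED)

posting = "We seek a dominant, competitive engineer who is determined to win."
print(flag_coded_words(posting))  # ['competitive', 'determined', 'dominant']
```

A scan like this is a cheap first pass; the LLM review described above can then explain *why* a phrase reads as biased and propose alternatives.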
AI in Action
We used the following prompt on two different LLMs to assess whether a sample job description for a mechanical engineer contains potentially biased language and offer suggestions for improvement.
Prompt: For the job description below, identify any issues with gendered or stereotypical language and suggest alternatives.
The AI responded with several suggestions for revising the job description to remove gendered language and potential stereotypes.
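A review like this can also be run programmatically rather than in a chat window. A sketch using the OpenAI Python SDK; the model name and the idea of batching postings are assumptions, and other providers expose similar chat APIs:

```python
PROMPT = ("For the job description below, identify any issues with "
          "gendered or stereotypical language and suggest alternatives.")

def build_messages(job_description: str) -> list[dict]:
    """Combine the review prompt with the job description text."""
    return [{"role": "user", "content": f"{PROMPT}\n\n{job_description}"}]

def review_job_description(job_description: str, model: str = "gpt-4o-mini") -> str:
    """Send one posting for review. Requires `pip install openai`
    and an OPENAI_API_KEY in the environment."""
    from openai import OpenAI  # imported lazily so the sketch loads without the SDK
    client = OpenAI()
    response = client.chat.completions.create(
        model=model,
        messages=build_messages(job_description),
    )
    return response.choices[0].message.content
```

Running the same prompt through two models, as described above, is a useful cross-check: where their suggestions agree, you can act with more confidence.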
Brainstorm Other Ways AI Can Help Diversify the Candidate Pool
Think big about the additional possibilities for using AI—either on its own or through your hiring software—especially as it continues to improve.
Some ideas include:
- Assessing whether job descriptions align with necessary skills, rather than ideas about cultural fit or unnecessary “legacy” skills (e.g., lifting requirements for a remote job).
- Cross-referencing job descriptions against the skill sets most associated with that job function to surface genuinely distinguishing qualifications and eliminate requirements that needlessly discourage applicants.
- Scanning résumés for high-skill matches, rather than relying on human assumptions.
- Removing information from résumés such as names (may prevent gender or ethnicity bias), college/university names (may prevent elitism or affinity bias), and graduation dates (may prevent ageism).
- Assessing interview questions for bias.
- Scanning interview transcripts or interviewer notes for skill matches and signs of bias.
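The résumé-redaction idea above can be sketched simply once résumés are parsed into structured fields. The field names here are assumptions for illustration; a production system would map them to whatever schema your applicant-tracking software uses:

```python
# Fields that may trigger gender/ethnicity bias, elitism or affinity
# bias, and ageism, respectively (names are illustrative).
SENSITIVE_FIELDS = {"name", "university", "graduation_year"}

def redact(resume: dict) -> dict:
    """Return a copy of a parsed résumé with bias-prone fields removed."""
    return {k: v for k, v in resume.items() if k not in SENSITIVE_FIELDS}

candidate = {
    "name": "Jordan Lee",
    "university": "State University",
    "graduation_year": 1998,
    "skills": ["CAD", "FEA", "GD&T"],
}
print(redact(candidate))  # {'skills': ['CAD', 'FEA', 'GD&T']}
```

Redacting before reviewers see the résumé, rather than asking reviewers to ignore the fields, is the design choice that actually prevents the bias.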
Watch out for hazards that may arise if AI is not monitored appropriately.
- Even with the best intentions, implementing AI systems without careful oversight and planning can lead to the amplification of existing human biases at scale.2
- LLMs have been known to “hallucinate,” or present inaccurate information as fact.3 Double-checking output and installing guardrails (e.g., feedback mechanisms, internal policies) can help mitigate possible errors.
Next Steps
De-bias the interview process with structured interviews.
Endnotes
- Gaucher, D., Friesen, J., & Kay, A. C. (2011). Evidence that gendered wording in job advertisements exists and sustains gender inequality. Journal of Personality and Social Psychology, 101(1), 109–128; Stille, L., Sikström, S., Lindqvist, A., Renström, E. A., & Gustafsson Sendén, M. (2023). Language and gender: Computerized text analyses predict gender ratios from organizational descriptions. Frontiers in Psychology, 13.
- Manyika, J., Silberg, J., & Presten, B. (2019, October 25). What do we do about the biases in AI? Harvard Business Review; Shedding light on AI bias with real world examples. (2023, October 16). IBM.
- Glover, E. (2023, October 2). What is an AI hallucination? Built In.