AI is What We Design It to Be
Artificial intelligence (AI) has been heralded as a tool that can enhance human capacities and improve services,1 influence the future of work,2 create jobs,3 and serve as an equalizer by reducing bias in decision-making through algorithmic predictions grounded in data.4 Yet AI, ultimately, is what humans design it to be, to learn, and to do. This means that AI, by definition, is not neutral. Rather, it reflects the biases held by those who build it, reinforcing stereotypes based on those biases.5
Stereotypes are widely held, oversimplified generalizations about a group of people.6 This sort of “shorthand” categorization is based on the assumption that all members of a particular group are the same. Whether explicitly or implicitly, when stereotypes influence our perceptions and decision-making, members of stereotyped groups can be disadvantaged, and damage can be done.7
Women Take Care and Men Take Charge
Gendered stereotypes result in sexism and can create structural barriers that perpetuate workplace gender inequality.8 One example of a gendered stereotype is that women are more nurturing than men.9 Over time, the pervasive stereotype that "women take care, men take charge" embeds itself in organizational cultures and norms. Whether at home or in the workplace, women are viewed as the likely caretakers, a perception that often damages their careers.10
As we interact with AI in our daily lives, it has the power to unintentionally reinforce gendered stereotypes.11 Catalyst research finds that women leaders perceived as nurturing or emotional are liked but not considered competent, a "double bind" that can lead to occupational segregation and fewer advancement opportunities for women.12 A study on human-robot interactions found that AI reproduces this double bind. Participants rated robots that were assigned an explicit gender along with matching stereotypical personality traits: male (confident and assertive) or female (agreeable and warm). Participants rated the male-identified robot as more trustworthy, reliable, and competent than the female-identified robot; the female-identified robot was rated as more likeable.13
While users do not necessarily prefer robots of a certain gender, they do prefer robots whose “occupations” and “personalities” match stereotypical gender roles.14 For example, people respond better to healthcare service robots identified as female and security service robots identified as male.15
Digital voice assistants, such as Siri and Alexa, are often designed with female names and gendered voices. Their role is to perform tasks that have traditionally been assigned to women, such as scheduling appointments and setting reminders.16 Designing these assistants consistently with a female voice can reinforce traditional gender roles17 and may even lead to biased hiring of women in service or assistant-type jobs.18
Additionally, how we speak to our digital assistants can shape societal norms. Abusive, insulting, or sexualized language directed at these assistants can normalize similar speech toward people, particularly women, while the tolerant or passive responses of feminized digital assistants can reinforce the stereotype of the compliant, forgiving woman.19
AI Reinforces Gendered Roles and Occupations
Word embedding is an example of how machine learning can reinforce gender stereotypes. An embedding model maps each word to a point in space, placing words that appear near each other in its training text close together and using those neighbors as a frame of reference. When the model repeatedly finds "CEO" near the word "man," it learns this association and links the words going forward; this is why Apple's iOS recently offered an emoji of a businessman when users typed in the word "CEO."20
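To make the mechanism concrete, here is a minimal sketch of querying a pretrained embedding for a word's nearest neighbors. The choice of the open-source gensim library and public GloVe vectors is ours, purely for illustration; the systems described above were trained on their own data.

```python
# Minimal sketch: inspect which words a pretrained embedding places
# closest to "ceo". The model (public GloVe vectors via gensim's
# downloader) is an illustrative stand-in, not the system described above.
import gensim.downloader as api

model = api.load("glove-wiki-gigaword-100")  # downloads ~130 MB on first use

# Words that co-occurred with "ceo" in the training text end up nearby
# in vector space; these neighbors become the model's frame of reference.
for word, similarity in model.most_similar("ceo", topn=5):
    print(f"{word}\t{similarity:.2f}")
```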
Researchers at Princeton University found that AI's word associations can reinforce stereotypes in everything from the internet search results we receive to the hiring decisions we make. When they measured AI's word associations, they found gender stereotypes embedded in the word choices.21 The word "nurse," for instance, was highly associated with the words "women" and "nurturing," while the word "doctor" was more often associated with "men." AI learns these contextual associations through the data provided to it by programmers who are predominantly white and male. Gender bias could follow if, for example, an AI recruiting system used these word associations to accept nurse candidates with female names at a higher rate.22
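The Princeton team's measurement, the Word Embedding Association Test (WEAT), aggregates associations over many word sets; a pared-down version of the same idea simply compares cosine similarities, as in this sketch (the occupation list is our own, for illustration):

```python
# Pared-down version of measuring gendered word associations: compare
# each occupation's similarity to "woman" versus "man". The full test
# (WEAT) aggregates over many attribute and target word sets.
import gensim.downloader as api

model = api.load("glove-wiki-gigaword-100")

def gender_lean(word: str) -> float:
    """Positive: closer to 'woman'; negative: closer to 'man'."""
    return float(model.similarity(word, "woman") - model.similarity(word, "man"))

for occupation in ("nurse", "teacher", "doctor", "engineer"):
    print(f"{occupation}\t{gender_lean(occupation):+.3f}")
```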
Even AI translation services reveal gender-occupation stereotypes when translating from languages without gender-specific pronouns, such as Chinese and Turkish. Translating such sentences into English forces the AI to choose a pronoun, and researchers found that it assumed "nurse," "nanny," and "teacher" all referred to women.23
Image-recognition models trained on labeled photos also quickly learn gender bias. In a recent study at the University of Virginia, images depicting activities such as cooking, shopping, and washing were more likely to be linked to women, while images of shooting or coaching were more likely to be linked to men. When the researchers probed the trained models further, they discovered that the AI not only reflected the unconscious stereotypes in its training data but actually amplified them.24
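The amplification effect can be expressed as a simple before-and-after comparison: how skewed an activity's gender ratio is in the training data versus in the model's predictions. The sketch below uses invented counts purely to illustrate the arithmetic.

```python
# Toy illustration of bias amplification: a model amplifies bias when its
# predictions are more gender-skewed than the training data it learned
# from. All counts below are hypothetical.

def share_women(woman_count: int, man_count: int) -> float:
    """Fraction of an activity's images labeled as depicting a woman."""
    return woman_count / (woman_count + man_count)

# Hypothetical counts for the activity "cooking".
training_skew = share_women(woman_count=660, man_count=340)    # 66% in the data
predicted_skew = share_women(woman_count=840, man_count=160)   # 84% in predictions

print(f"training: {training_skew:.0%}  predicted: {predicted_skew:.0%}  "
      f"amplification: {predicted_skew - training_skew:+.0%}")
# Positive amplification: the model exaggerates the dataset's existing
# imbalance instead of merely reproducing it.
```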
Where Do We Go from Here?
- AI Industry Diversification: As of 2018, women comprised only 22% of AI professionals globally.25 This lack of gender diversity hinders the industry's ability to catch gender bias and stereotyping during machine learning and dataset design.26 An important first step toward mitigating the impact of AI-reinforced bias and stereotypes is for the AI industry to increase the representation of women and other underrepresented groups in its workforce.
- Business Policies, Procedures, and Practices: The number of businesses using AI increased by 60% between 2017 and 2018, but "only half of businesses across the [United States] and Europe have policies and procedures in place to identify and address ethical considerations—either in the initial design of AI applications or in their behavior after the system is launched."27 While AI provides clear benefits, it is not a technological utopia. Organizations should develop policies and procedures to address the ethical concerns that arise from applying AI in their business models, and they should identify and use tools to audit their data sets and machine-learning models for unwanted bias28 (a minimal sketch of one such check follows this list).
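As one concrete possibility, IBM's open-source AI Fairness 360 toolkit (cited below) computes standard group-fairness metrics over a labeled data set. This minimal sketch, with hypothetical data and column names, checks hiring outcomes for disparate impact:

```python
# Minimal sketch of auditing a data set for unwanted bias with IBM's
# AI Fairness 360 toolkit. The data and column names are hypothetical.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

# Toy hiring records: sex (1 = male, 0 = female), hired (1 = favorable).
df = pd.DataFrame({
    "sex":   [1, 1, 1, 1, 0, 0, 0, 0],
    "hired": [1, 1, 1, 0, 1, 0, 0, 0],
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["hired"],
    protected_attribute_names=["sex"],
    favorable_label=1,
    unfavorable_label=0,
)

metric = BinaryLabelDatasetMetric(
    dataset,
    privileged_groups=[{"sex": 1}],
    unprivileged_groups=[{"sex": 0}],
)

# Disparate impact: favorable-outcome rate of the unprivileged group divided
# by that of the privileged group; values below ~0.8 are a common red flag.
print("disparate impact:", metric.disparate_impact())
print("statistical parity difference:", metric.statistical_parity_difference())
```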
Endnotes
1. Janna Anderson and Lee Rainie, "Artificial Intelligence and the Future of Humans," Pew Research Center, December 10, 2018.
2. World Economic Forum, The Future of Jobs Report 2018 (2018).
3. Alison DeNisco Rayome, "AI Created 3x As Many Jobs As It Killed Last Year," TechRepublic, June 27, 2019; ZipRecruiter, The Future of Work Report (2019).
4. Lauren Pasquarella Daley, Trend Brief: AI and Gender Bias (Catalyst, 2019).
5. Suzana Dalul, "AI Is Not Neutral, It Is Just as Biased as Humans," AndroidPIT, January 30, 2019.
6. Oxford Online Dictionary, "Stereotype."
7. Sigal Samuel, "Alexa, Are You Making Me Sexist?" Vox, June 12, 2019.
8. International Labour Organization, Breaking Barriers: Unconscious Gender Bias in the Workplace (2017); The Ohio State University, "Understanding Implicit Bias."
9. Planned Parenthood, "What Are Gender Roles and Stereotypes?"
10. Catalyst, The Double-Bind Dilemma for Women in Leadership (August 2, 2018).
11. Tom Simonite, "AI Is the Future—But Where Are the Women?" Wired, August 17, 2018; Jessica Guynn, "The Problem with AI? Study Says It's Too White and Male, Calls for More Women, Minorities," IMDiversity, April 16, 2019.
12. Catalyst, The Double-Bind Dilemma for Women in Leadership (August 2, 2018).
13. Matthias Kraus, Johannes Kraus, Martin Baumann, and Wolfgang Minker, "Effects of Gender Stereotypes on Trust and Likability in Spoken Human-Robot Interaction" (European Language Resources Association, 2018).
14. Matthias Kraus, Johannes Kraus, Martin Baumann, and Wolfgang Minker, "Effects of Gender Stereotypes on Trust and Likability in Spoken Human-Robot Interaction" (European Language Resources Association, 2018).
15. Matthias Kraus, Johannes Kraus, Martin Baumann, and Wolfgang Minker, "Effects of Gender Stereotypes on Trust and Likability in Spoken Human-Robot Interaction" (European Language Resources Association, 2018).
16. Jordan Muller, "Why We Really Need to Be Thinking About AI and Gender," Towards Data Science, April 23, 2019.
17. Sigal Samuel, "Alexa, Are You Making Me Sexist?" Vox, June 12, 2019; Amy C. Chambers, "There's a Reason Siri, Alexa, and AI Are Imagined as Female—Sexism," The Conversation, August 13, 2018.
18. Jordan Muller, "Why We Really Need to Be Thinking About AI and Gender," Towards Data Science, April 23, 2019.
19. UNESCO, I'd Blush If I Could: Closing Gender Divides in Digital Skills Through Education (2019).
20. John Murray, "Racist Data? Human Bias Is Infecting AI Development," Towards Data Science, April 24, 2019.
21. Bennett McIntosh, "Bias in the Machine: Internet Algorithms Reinforce Harmful Stereotypes," Princeton University, November 22, 2016.
22. Aylin Caliskan, Joanna J. Bryson, and Arvind Narayanan, "Semantics Derived Automatically from Language Corpora Contain Human-like Biases," Science, vol. 356, no. 6334 (2017): pp. 183-186; Bennett McIntosh, "Bias in the Machine: Internet Algorithms Reinforce Harmful Stereotypes," Princeton University, November 22, 2016.
23. Nikhil Sonnad, "Google Translate's Gender Bias Pairs 'He' With 'Hardworking' and 'She' With 'Lazy,' and Other Examples," Quartz, November 29, 2017.
24. Tom Simonite, "Machines Taught by Photos Learn a Sexist View of Women," Wired, August 21, 2017.
25. World Economic Forum, "Assessing Gender Gaps in Artificial Intelligence," The Global Gender Gap Report 2018 (2018).
26. Ryan Daws, "Lack of STEM Diversity Is Causing AI to Have a 'White Male' Bias," Artificial Intelligence News, April 18, 2019.
27. David Ingham, "What Can Businesses Do to Help Reduce AI Bias?" Tech Native, September 12, 2019.
28. Kush R. Varshney, "Introducing AI Fairness 360," IBM Research Blog, September 19, 2018.