AI and gender bias

By Lauren Pasquarella Daley

Executive summary

Can We Program an Equitable Future?

The field of artificial intelligence (AI) is growing at a rapid pace, developing algorithms and automated machines that show promise in making the workplace more efficient and less biased. Many of us already interact with artificial intelligence in our daily lives, often without even realizing it—it’s responsible for everything from credit score calculators to search engine results to what we see on social media.1

Likewise, organizations have introduced AI into many work processes, especially recruiting and talent-management functions. In many cases, algorithms sort through numerous factors to profile people and make predictions about them. AI hiring and talent-management systems have the potential to move the needle on gender equality in workplaces by using more objective criteria in recruiting and promoting talent.2 But what happens if the algorithm is actually relying on biased input to make predictions? Can machines and artificial intelligence develop unintentional biases, creating the same inequities as people with unconscious biases?

A growing body of evidence indicates that humans are programming their own biases, including biases around gender and race, into the algorithms behind AI. How is this happening? And what can be done to prevent bias in AI in workplaces?
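The mechanism behind this risk can be made concrete with a toy sketch. The data and model below are entirely synthetic and hypothetical, not drawn from this report: a naive predictor trained on historically biased hiring records simply reproduces the bias it was given.

```python
# Hypothetical, synthetic hiring history: each record is
# (qualification, gender, hired). In this invented past, equally
# qualified men were hired far more often than women.
history = (
    [("qualified", "man", True)] * 90
    + [("qualified", "man", False)] * 10
    + [("qualified", "woman", True)] * 40
    + [("qualified", "woman", False)] * 60
)

def predict_hire(qualification, gender):
    """Naive model: predict the majority outcome among past records
    with the same qualification and gender."""
    outcomes = [hired for q, g, hired in history
                if q == qualification and g == gender]
    return sum(outcomes) > len(outcomes) / 2

# Identical qualifications, different predictions, because the
# training data itself was biased.
print(predict_hire("qualified", "man"))    # True
print(predict_hire("qualified", "woman"))  # False
```

No one wrote an explicitly sexist rule here; the model "learned" the disparity from its input, which is the core problem the questions above describe.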
