Employment and You
It’s easy to imagine that artificial intelligence is an abstract, automatic process, something happening behind the scenes and acting of its own accord. The truth is anything but. Artificial intelligence has to be invented and programmed, and the people who do the programming are just that: people. When a person programs an artificial intelligence (say, an algorithm that automatically sorts thousands of job applications), their own unconscious biases and prejudices can be written into the code.
The bias in AI systems comes directly from the human behavior they are programmed to emulate. Research has shown that resumes with English-sounding names are selected for interviews more often than identical resumes with Chinese, Indian, or Pakistani names. And because artificial intelligence learns from data and adapts, we have also seen algorithms that favor male candidates over female ones simply because, over time, most applicants for a particular job have been men.
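The feedback loop described above can be sketched in a few lines of Python. Everything here is hypothetical (the records, the features, the "model"), but it shows how a system that scores candidates by resemblance to past hires ends up treating gender itself as a signal when the historical applicant pool was skewed:

```python
from collections import Counter

# Hypothetical historical hiring records: (gender, years_experience, hired).
# The pool skews male, so "male" correlates with "hired" in the data
# even though experience is the only real qualification.
history = [
    ("M", 5, True), ("M", 4, True), ("M", 6, True), ("M", 3, True),
    ("M", 2, False), ("F", 5, True), ("F", 4, False), ("F", 2, False),
]

def hire_rate(records, key):
    """P(hired) for each value of a feature, as 'learned' from the data."""
    totals, hires = Counter(), Counter()
    for record in records:
        value = key(record)
        totals[value] += 1
        hires[value] += record[2]
    return {v: hires[v] / totals[v] for v in totals}

# A naive model that scores candidates by the historical hire rate of
# their group now ranks men higher, purely as an artifact of the data.
by_gender = hire_rate(history, key=lambda r: r[0])
print(by_gender)
```

No one wrote "prefer men" anywhere in this code; the preference falls out of the imbalanced history the model is asked to imitate, which is exactly the trap described above.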
The implications of biased artificial intelligence for recruiting and employment are severe. Companies like Delta, Dunkin', and IKEA now use AI to assess job applicants. While this has greatly reduced the time and work the hiring process demands, it has also resulted in discrimination against women and people of color.
So what do we do? For starters, we program AI to counteract unconscious bias rather than reproduce it. AI can be built to ignore a candidate’s demographic information and to make decisions based on skills and qualifications rather than subjective criteria. But most of all, it needs human oversight.
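The first two safeguards can be sketched as a simple pre-processing step. The field names and the toy scoring rule below are assumptions, not any real vendor's system, but they show the idea: redact demographic fields before the model ever sees a record, so only qualifications can influence the score.

```python
# Hypothetical applicant record; "name" and "gender" are demographic
# fields a screening model should never see.
applicant = {
    "name": "Jane Doe",
    "gender": "F",
    "years_experience": 6,
    "skills": {"python", "sql"},
}

DEMOGRAPHIC_FIELDS = {"name", "gender", "age", "nationality"}

def redact(record):
    """Strip demographic fields so only qualifications reach the model."""
    return {k: v for k, v in record.items() if k not in DEMOGRAPHIC_FIELDS}

def score(record, required_skills=frozenset({"python", "sql"})):
    """Toy scoring rule based solely on skills and experience."""
    skill_match = len(record["skills"] & required_skills) / len(required_skills)
    return skill_match + min(record["years_experience"], 10) / 10

print(score(redact(applicant)))  # the score never touches name or gender
```

Redaction alone is not a cure-all (proxies for demographics can remain in the data), which is why the human oversight called for above still matters.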