For some employment algorithms, disability discrimination by default

Visitors check their phones behind a screen advertising facial recognition software during the Global Mobile Internet Conference (GMIC) at the National Convention Center in Beijing, China, April 27, 2018. REUTERS/Damir Sagolj

Last week, the Washington Post’s Drew Harwell reported that HireVue’s artificial intelligence (AI) software has assessed over a million video job interviews. Its autonomous interview system asks candidates questions, films their responses, and then uses the resulting video and audio to assess them for jobs in fields such as investment banking and accounting. The AI attempts to predict how a candidate will perform on the job based on how they act in an interview—their gestures, pose, and lean, as well as their tone and cadence—and the content of their responses. This process produces an employability score, which employers use to decide who advances in the application process.
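
HireVue’s actual model is proprietary, so its details are not public. As a rough sketch of the general shape of such a system, consider an overall score assembled as a weighted combination of per-channel scores; the channels and weights below are invented for illustration.

```python
# Hypothetical illustration only; HireVue's real scoring model is proprietary.
# Sketch: an overall "employability score" as a weighted combination of
# scores derived from separate channels of the interview video.

# Assumed channels and weights -- invented for illustration.
WEIGHTS = {"facial": 0.3, "voice": 0.3, "content": 0.4}

def employability_score(channel_scores: dict[str, float]) -> float:
    """Combine per-channel scores (each in [0, 1]) into a single score."""
    return sum(WEIGHTS[c] * channel_scores[c] for c in WEIGHTS)

candidate = {"facial": 0.55, "voice": 0.70, "content": 0.85}
print(f"{employability_score(candidate):.2f}")  # 0.72
```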

Yet a number of ethical AI observers have been sharply critical of HireVue. In the Washington Post story, Meredith Whittaker, co-founder of the AI Now Institute, calls this development “profoundly disturbing” and the underlying methodology “pseudoscience.” Arvind Narayanan, a computer science professor at Princeton, says this is “AI whose only conceivable purpose is to perpetuate societal biases.” Scientific evidence suggests that accurately inferring emotions from facial expressions is very difficult, and it stands to reason that inferring personality traits is even harder, if it is possible at all.

What has not been noted, however, is the way in which these systems likely discriminate against people with disabilities. The problem people with disabilities face with this kind of AI is that, even if they have qualities that make them well suited to a given job, the system is unlikely to recognize those qualities and may give them low scores.

Consider characteristics such as typical enunciation and speaking at a particular pace, qualities that might correlate with effective salespeople. Further, perhaps leaning forward with one arm on the table signals an interpersonal comfort that prior high-performing salespeople often displayed. The AI system would have identified these relationships from the “training data”—the video interviews and the sales outcomes collected from current employees. However, people with disabilities will not benefit if their qualities manifest physically in a way the algorithm has not seen in that training data. If their facial attributes or mannerisms differ from the norm, they get no credit, even if their traits would be just as beneficial on the job.
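
A deliberately toy sketch of this failure mode, with all numbers invented: a model trained on behavioral features of past salespeople assigns a low score to an applicant whose features fall outside anything in the training data, regardless of actual ability.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Invented training data: [lean, pace] measured from past employees'
# interview videos, labeled 1 if the employee later performed well.
X_train = np.array([[0.9, 1.0], [0.8, 0.9], [0.7, 1.1],
                    [0.2, 1.0], [0.1, 0.4], [0.2, 0.5]])
y_train = np.array([1, 1, 1, 0, 0, 0])

model = LogisticRegression().fit(X_train, y_train)

# An applicant with a motor or speech disability may not lean forward and
# may speak at an atypical pace -- a region the model has never seen.
applicant = np.array([[0.0, 0.3]])
print(model.predict_proba(applicant)[0, 1])  # low predicted "success," by default
```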

Advocates argue that this is a serious problem. Sheri Byrne-Haber, head of accessibility at VMware, has argued that “the range of characteristics of disability is very, very broad,” contributing to this algorithmic discrimination problem. Shari Trewin, an accessibility researcher at IBM, agrees: “The way that AI judges people is with who it thinks they’re similar to—even when it may never have seen anybody similar to them—is a fundamental limitation in terms of fair treatment for people with disabilities.”

To address this problem, AI training data would have to include many people with diverse disabilities. Since each job type has a distinct model, this would have to be true across many different models (for context, HireVue has over 200). While it is possible that AI vendors could assemble such a range of individuals, it would take a tremendous effort. Without diverse training data, an AI system cannot learn the characteristics demonstrated by people with disabilities who went on to succeed. With some of their qualities ignored, these candidates would be pushed toward the middle of the distribution. And since most applicants for any specific job do not get hired, applicants who resemble no high-performing past employees do not stand a chance.
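
To make the scale concrete, here is a sketch of the kind of coverage check that would have to pass for every one of those models; the threshold and group labels are illustrative assumptions, not anything HireVue has described.

```python
from collections import Counter

MIN_EXAMPLES = 50  # assumed minimum for a subgroup to meaningfully shape a model

def coverage_report(training_groups: list[str]) -> dict[str, bool]:
    """Flag whether each subgroup has enough training examples to be learned from."""
    counts = Counter(training_groups)
    return {group: count >= MIN_EXAMPLES for group, count in counts.items()}

# Invented composition of one job model's training data.
groups = ["no_disability"] * 900 + ["mobility_disability"] * 30 + ["speech_disability"] * 12
print(coverage_report(groups))
# {'no_disability': True, 'mobility_disability': False, 'speech_disability': False}
```

A check like this would have to pass separately for each of the 200-plus models.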

On its ethical AI page, HireVue says it actively promotes equal opportunity “regardless of gender, ethnicity, age, or disability status.” Further, HireVue does allow people with disabilities to request more time for questions, and it implements other accommodations as requested by the employer. However, the core problem of making inferences from videos of people with disabilities remains. In the Post’s reporting, Nathan Mondragon, the chief industrial-organizational psychologist at HireVue, says that facial actions can make up 29% of a person’s employability score.

Broadly, this issue of coverage (whether the training data contain enough relevant examples) is a genuine concern when applying AI systems to people with disabilities. Potentially relevant to this software, research shows that speech recognition works poorly for people with atypical speech patterns, such as a deaf accent. Google researchers demonstrated that some AI systems treat language about having a disability as inherently negative. As another problematic example, imagine how driverless cars that have learned typical human movements in order to avoid pedestrians might fail to anticipate a pedestrian who moves differently, such as a wheelchair user. This is a type of situation in which humans still dramatically outperform AI: choosing not to narrowly interpret a situation based only on what we have seen before.
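
The speech recognition gap, at least, is straightforward to measure. Below is a sketch of comparing word error rates across speakers; the transcripts are invented stand-ins, and a real audit would use recorded interview audio from both groups.

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER via word-level edit distance (substitutions, insertions, deletions)."""
    ref, hyp = reference.split(), hypothesis.split()
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1, d[i - 1][j - 1] + cost)
    return d[-1][-1] / len(ref)

spoken = "i led the regional sales team for three years"
asr_typical = "i led the regional sales team for three years"   # invented output
asr_atypical = "i let the regional cells team four three ears"  # invented output
print(word_error_rate(spoken, asr_typical))   # 0.0
print(word_error_rate(spoken, asr_atypical))  # ~0.44
```

A system scoring the content of answers would be grading the bottom transcript, not what the candidate actually said.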

About 13% of Americans have a disability of some kind, and they already suffer worse employment outcomes. Their unemployment rate stands at 6.1%, twice that of people without disabilities. Americans without disabilities also out-earn their peers with disabilities: $35,000 to $23,000 in median earnings over the past year, according to the Census Bureau. Specifically relevant to facial analysis, estimates suggest that around 500,000 to 600,000 people in the United States have been diagnosed with a craniofacial condition, meaning an abnormality of the face or head. Additionally, millions of Americans have autism spectrum disorder, one of many conditions that can manifest in atypical facial expressions or speech patterns.

While systems like these may embolden recent calls for facial recognition bans, there are other policy implications as well. For algorithms that play a crucial role in hiring, companies should publicly release bias audit reports—summaries of the predictions made across subgroups, especially protected classes—rather than simply claiming their models have been evaluated and are bias-free. Further, the Equal Employment Opportunity Commission (EEOC) should review these systems and issue guidance on whether they violate the Americans with Disabilities Act. While there are many positive applications of AI for people with disabilities, we need to be especially careful that AI for video and audio analysis treats all people fairly.
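
As one concrete illustration of what a bias audit report could contain, the sketch below computes pass rates per subgroup and the adverse impact ratio that the EEOC’s “four-fifths rule” uses as a screening threshold; the counts are invented.

```python
def audit(outcomes: dict[str, tuple[int, int]]) -> None:
    """outcomes maps subgroup -> (applicants advanced, total applicants)."""
    rates = {group: advanced / total for group, (advanced, total) in outcomes.items()}
    best = max(rates.values())
    for group, rate in rates.items():
        ratio = rate / best
        flag = "POTENTIAL ADVERSE IMPACT" if ratio < 0.8 else "ok"
        print(f"{group}: pass rate {rate:.0%}, impact ratio {ratio:.2f} ({flag})")

audit({
    "no disability": (300, 1000),  # invented counts
    "disability": (30, 200),
})
# no disability: pass rate 30%, impact ratio 1.00 (ok)
# disability: pass rate 15%, impact ratio 0.50 (POTENTIAL ADVERSE IMPACT)
```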