DEANNA BOLISLAVSKY
The research, writing, and editing of this post were part of an undergraduate project undertaken for a Rutgers Honors College Seminar in Fall 2021, “Fictions of Artificial Intelligence.” The author’s bio follows the post.
Imagine this: you have spent months job searching when you finally get an invitation to interview with your dream company. You have been prepping for this interview for a long time, and when you finally join the online meeting, you realize the recruiter you are trying to impress isn’t human. You spend the entire interview speaking with a robot, an automated form of artificial intelligence (AI), that asks you questions and analyzes your every word, your every move. Would you trust this AI to make the ultimate decision of whether you should be hired?
This isn’t too far from reality right now. Consider HireVue, a pioneer in video interviews and assessments, which sells its software to hundreds of companies to streamline and speed up their hiring processes. The program analyzes characteristics of job applicants who are asked to submit a recorded video, tracking their language, speech, behavior, and (until recently) facial expressions.* Such machine-learning algorithms are beginning to rule our lives more and more each day: determining university admissions, promotions, loan offerings, prison sentencing, and more (Martin). Companies like HireVue are taking humans out of the hiring process and inserting their “data-driven” models that, according to the company’s media, deliver neutral decision-making. According to HireVue’s website, the company’s software reduces hiring bias and increases diversity. Despite these claims, the automated decision-making may be having the opposite effect.
“Data-driven” machine-learning systems like the one HireVue offers are built on the premise that analyzing training data will enable a program to isolate the key criteria associated with perceived success. Such models work by predicting future outcomes from past trends. But as business ethics professor Kirsten Martin argues, these systems “could be discriminatory by design or the algorithm can be trained on data with historical biases” (Martin 841). Moreover, as data journalist Meredith Broussard points out in a chapter titled “People Problems,” “computer systems are proxies for the people that created them” (Broussard 67). Because software reflects the perceptions of its designers and programmers, the expectation that machines eliminate human bias is misguided. The training data itself can also fail to represent a wide range of people or characteristics: hiring programs usually learn from data collected about a company’s existing employees, and if that workforce is homogeneous, the resulting model can disadvantage candidates of other cultures, accents, abilities, ages, or skin colors. In this way a supposedly objective model becomes a vehicle for discrimination, reinforcing and recreating the inequities already found in our world. When bringing AI into hiring, Martin writes, it is important to remember that “replacing the discriminatory human with a biased technology does not erase the discrimination” and may actually amplify it (841).
Take, for example, the financial services industry, which has historically been dominated by white men (“Diversity and Inclusion: Holding America’s Large Banks Accountable”). Hiring data drawn from this industry will register that trend and, in turn, identify white men as the model of a successful employee, putting women and candidates of color at a disadvantage. Data that merely replicates historical patterns of discrimination will not bring us “towards a more just society” (Broussard 115).
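This feedback loop is easy to reproduce. The following minimal sketch in Python is purely illustrative (it is not HireVue’s system, and every name and number in it is invented): a toy model is trained on synthetic “historical” hiring data that skews toward one group, and it then rates two equally skilled candidates differently based on group membership alone.

```python
# Illustrative sketch only: a toy hiring model trained on synthetic
# "historical" data in which past hires skewed toward one group.
# All variables and numbers are invented for demonstration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Features: a genuine skill score and a protected attribute
# (1 = historically favored group, 0 = everyone else).
skill = rng.normal(0, 1, n)
group = rng.integers(0, 2, n)

# "Historical" hiring labels: past decisions weighted group membership
# almost as heavily as skill -- the bias we want to expose.
hired = (skill + 1.5 * group + rng.normal(0, 0.5, n)) > 1.0

# A naive model trained on this record learns the bias, even though
# group membership says nothing about job performance.
model = LogisticRegression().fit(np.column_stack([skill, group]), hired)

# Two equally skilled candidates, differing only in group:
for g in (0, 1):
    p = model.predict_proba([[0.5, g]])[0, 1]
    print(f"group={g}, skill=0.5 -> predicted hire probability {p:.2f}")
```

Nothing in this code tells the model to discriminate; it simply learns whatever pattern the historical labels contain.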
Because HireVue’s algorithm is secret and proprietary, the company does not disclose how it scores a candidate or which characteristics it weighs most heavily. It is not transparent what data HireVue trains its system on or what its “gold standard” for job applicants is, making it difficult to judge whether the software rectifies bias or perpetuates it. To make matters worse, companies using HireVue’s software can choose to let the “system reject candidates without having a human double-check” (Murad). It is therefore possible that qualified candidates are rejected because of a superficial feature such as an unusual tone of voice. Without “human judgement, reinforcement and interpretation” to monitor the potential failures of a machine system (Broussard 119), we are blindly allowing technology to control our future. When evaluating the qualities of a successful hire, we cannot assume that a non-transparent system has extracted the relevant data, or that data alone is sufficient to evaluate an interview. The present stage of AI technology requires keeping humans in the loop, to make certain that the outcome is fair from a human point of view. Though hiring managers may well have (un)conscious biases of their own, the answer is not to replace them with “black box” algorithms that conceal their decision-making criteria (Powers 7:49).
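What might “keeping humans in the loop” look like in practice? One possibility, sketched below with invented names and thresholds, is to treat the model’s score as purely advisory: the software may fast-track strong candidates, but it routes every would-be rejection to a human reviewer rather than discarding the candidate automatically.

```python
# Hypothetical human-in-the-loop gate (names and thresholds invented):
# the model's score is advisory, and no candidate is rejected without
# a human reviewer seeing the case.

FAST_TRACK = 0.85  # scores above this advance directly to an interview

def route_candidate(candidate_id: str, model_score: float) -> str:
    if model_score >= FAST_TRACK:
        return f"{candidate_id}: advance to interview"
    # Everything else, including the lowest scores, goes to a person,
    # so a quiet voice or unusual accent cannot trigger auto-rejection.
    return f"{candidate_id}: queue for human review"

print(route_candidate("A-102", 0.91))
print(route_candidate("A-103", 0.22))
```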
In response to these concerns, lawmakers around the United States are passing legislation in hopes of regulating the use of AI in the hiring process. Illinois passed the Artificial Intelligence Video Interview Act (AIVIA), effective January 2020. The first law of its kind, it requires companies to make applicants aware that AI will be used to consider them for positions, as well as to explain how the technology works and what types of characteristics it considers during the evaluation. Maryland, California, Texas, Washington, New York City, Baltimore, and others have followed suit. However, most of these laws may not be doing enough to increase transparency and limit discrimination.
The New York City bill passed in November 2021 requires employers to conduct a yearly bias audit showing that their tools do not discriminate based on race or gender, but it leaves “out the trickier-to-detect bias against disabilities or age.” Laws like these barely scratch the surface of what needs to be done to ensure fair practices within AI hiring, setting vague, surface-level standards that companies can easily meet without changing their ways. In response to the Illinois bill, Kevin Parker, the CEO of HireVue, commented in a blog post, “For [our] customers, the AIVIA entails very little, if any, change,” as they are already in compliance with past and current guidelines. Although government regulations have the power to hinder the negative consequences of AI in hiring, they have so far avoided the more challenging question of how to control this technology.
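For a sense of what a bias audit can (and cannot) measure, consider the “four-fifths rule,” a common disparate-impact test associated with the U.S. Equal Employment Opportunity Commission; the sketch below applies it to invented numbers. An audit like this can flag a gap in selection rates between the groups it reports on, but it says nothing about any characteristic, such as disability, that the audit does not cover.

```python
# Illustrative disparate-impact check in the spirit of the EEOC's
# four-fifths rule. All counts below are invented for illustration.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants from a group who were advanced or hired."""
    return selected / applicants

# Hypothetical audit data: (selected, total applicants) per group.
groups = {
    "group_a": (80, 200),   # 40% selection rate
    "group_b": (30, 150),   # 20% selection rate
}

rates = {g: selection_rate(s, n) for g, (s, n) in groups.items()}
best = max(rates.values())

for g, r in rates.items():
    ratio = r / best  # each group's rate relative to the highest rate
    flag = "OK" if ratio >= 0.8 else "potential adverse impact"
    print(f"{g}: rate={r:.0%}, impact ratio={ratio:.2f} -> {flag}")
```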
In her famous novel Frankenstein (1818), Mary Shelley tells the story of Victor Frankenstein, a scientist who feverishly experiments with the boundaries of life and death and creates a living being. Frankenstein did not anticipate the negative effects of his creation. If he had, he could have mitigated the risks and consequences, just as we can when knowingly bringing AI into the hiring process. A clear divide exists between HireVue’s claims about its technology and that technology’s effects. Sasha Costanza-Chock, a theorist of design justice, argues that we must prioritize a product’s community impact over its creator’s intentions. Such reprioritization will critically support those “who are normally marginalized by design” (Costanza-Chock 6). If we naively ignore the unintended consequences and limitations of our own technology, we run the risk of doing more harm than good.
* Due to concerns about transparency and bias in facial analysis, the company has since dropped this part of its screening process (Kahn).
Deanna Bolislavsky is a senior studying Business Analytics and Information Technology at Rutgers University. She is interested in the intersection of AI and business and will be an investment technology business analyst after graduation.