Confronting the Biased Algorithm: The Danger of Admitting Facial Recognition Technology Results in the Courtroom

Posted on Tuesday, May 25, 2021 in Notes, Volume 23, Issue 4.

Gabrielle M. Haddad | 23 Vand. J. Ent. & Tech. L. 891 (2021)

From unlocking an iPhone to Facebook “tags,” facial recognition technology has become increasingly commonplace in modern society. In the wake of the Black Lives Matter movement and calls for police reform in the United States, it is more important now than ever to consider the implications of law enforcement’s use of facial recognition technology. A study from the National Institute of Standards and Technology found that facial recognition algorithms generated higher rates of false positives for Black faces than for white faces, in some cases up to one hundred times more false identifications. Given this technology’s embedded bias and increasing prevalence, the lack of federal regulation of facial recognition technology and its uses by law enforcement is alarming. This Note explores the issues that arise from law enforcement’s use of facial recognition technology and how results from the technology should be treated in the criminal justice system.

This Note cautions against admitting results from facial recognition technology into evidence in criminal trials given the current state of the industry and the technology. Further, if facial recognition evidence is admitted, this Note argues that defendants should have access to the software’s source code in order to meaningfully challenge the evidence presented against them under the Confrontation Clause of the US Constitution. While this Note recognizes developers’ interest in protecting trade secrets, it nevertheless recommends that judges balance those interests against defendants’ rights and decide on a case-by-case basis how to protect developers’ information without blocking defendants’ access to the software.
