Artificial Intelligence Referees: Offsides and Out of Bounds
By Benjamin Dias
Popcorn spills as the group collectively leaps from the couch to scream at the TV. You hear a cacophony of expletives and derisive language. “Use your eyes!” “How do you call that?!” “Someone paid the ref!” This scene could easily describe fans across the world as they watch football, baseball, basketball, or even tennis. Inextricably linked to any sports competition is the notion of fairness. In any sport, athletes are constrained by the defined rules of their game—no matter how broad or narrow those rules are. Referees and umpires (collectively referred to here as referees) adjudicate these rules in real time to facilitate game management, diagnose rule infractions, and promote player safety.
Referees are human and therefore subject to human error. While a minor mistake—an errant call, a missed call, or a misinterpretation of an unimportant rule—has a nominal effect on a match, blown calls sometimes alter games and entire seasons. The margin of error for referees seems to shrink over time. For example, recent advances in replay technology enable referees to confirm whether a football player scores a touchdown or secures a turnover, a basketball player shoots before the shot clock expires, or a baseball player beats a throw to first base. Although replay review produces more accurate calls, it also fuels controversy. Referees make on-the-field calls and apply a pseudo-legal standard, “substantial evidence,” to decide whether to overturn them. With widespread access to slow-motion replay, fans can scrutinize the most minute details of quick interactions and debate whether the video provides sufficient evidence to overturn the referee’s visceral judgment. Often, fans come to a different conclusion than the officials.
Technological improvement has bred greater social scrutiny of referee performance. Beyond replay review, Sports Governing Organizations (“SGOs”) like the National Football League (“NFL”), National Basketball Association (“NBA”), and Major League Baseball (“MLB”) have implemented additional camera angles, dedicated replay officials, and trained rules analysts to ensure each game complies with the official rulebook. Still, errors occur. Sports fans and players alike call upon SGOs to replace core referee functions with Artificial Intelligence (“AI”) to improve officiating accuracy. Sports like soccer and tennis have already applied some degree of AI technology—in VAR and Hawk-Eye, respectively—to enhance refereeing. SGOs and engineers have begun developing AI tools to police whether a pitcher throws a ball or a strike, or whether a basketball player commits a foul. Similar technology could feasibly be adapted for automatic ball-spotting in football.
Although SGOs could develop and apply AI technology to supplant core refereeing functions, this blog post cautions against rapid integration out of concern for three distinct categories of legal ramifications.
First, SGOs’ adoption of AI officiating would feed the existing social narrative that SGOs fix matches in violation of anti-racketeering legislation. Congress has long sought to inhibit racketeering in professional sports, but the federal ban on sports gambling has recently been struck down, allowing sportsbooks to operate and advertise on television. SGOs derive pecuniary benefits from these advertisements, increasing their incentive to engage in match-fixing. Human referees serve as an inherent barrier to match-fixing because involving more people in a match-fixing scheme increases the chance of leaks. Supplanting human referees with AI that SGOs can manipulate increases the likelihood that SGOs would engage in match-fixing. At a minimum, replacing humans with AI referees would supercharge the social narratives around SGOs and match-fixing, likely culminating in stringent government regulation of AI in the sports industry.
Second, emerging studies indicate a rational fear of algorithmic discrimination, in which AI predicts biased outcomes by relying on biased data. Algorithms like AI require data to generate realistic predictions, but that data is sometimes biased. Algorithmic discrimination—whether intentional or inadvertent—appears across the industries that adopted AI tools early, such as criminal law, health care, and employment hiring. If historical biases, like racial disparities in penalty rates, exist in human refereeing, those errors would compound under AI and result in discriminatory officiating. Discriminatory applications of AI referees would presumably expose SGOs to employment discrimination lawsuits for violating Equal Employment Opportunity Commission regulations on discharge and discipline.
Third, referees play a significant role in mitigating on-field health and safety issues. Replacing referees with AI could cause SGOs to fail in their (often contractual) duties to protect player safety. By nature, sports are competitions that pit athletes against each other in a zero-sum, winner-takes-all game. Athletes and fans sometimes succumb to their passions, resulting in “chippy” game atmospheres. SGOs expect referees both to enforce rules and to prevent athlete and viewer passion from boiling over into all-out brawls. SGOs equip referees with various mechanisms, like penalties, warnings, and ejections, to quell aggressive behavior. If SGOs replace human referees with an AI alternative, they knowingly remove an important entity responsible for preventing fights. Presumably, AI referees would be unable to distinguish “unnecessary roughness,” “excessive celebration,” or “intentional hit by pitch” penalties from typical in-game actions. Without shifting the fight-prevention responsibility to other human entities, SGOs risk breaching their duty to safeguard the health and safety of their athletes.
Ultimately, although AI technology would likely improve the accuracy and quality of officiating, SGOs should pump the brakes on rapid integration. Legal concerns surrounding racketeering, discrimination, and player safety highlight the need for more research before adopting available technology to wholly replace human referees. If SGOs remain keen on developing AI tools to improve officiating accuracy, the best solution would be a hybrid system in which human referees partner with AI referees to enhance penalty accuracy without sacrificing a core human element in sports. Considering these legal liabilities, AI referees should remain a technology of the future until sufficient measures are taken to address the racketeering stigma, algorithmic discrimination, and health and safety concerns involved with full adoption.
Ben Dias, from Voorhees, New Jersey, is a second-year JD-MBA candidate at Vanderbilt University and an avid sports fan. He hopes to merge his passions for law and economics to best serve his clientele in the transactional business law space.
 Michael J. Madison, Fair Play: Notes on the Algorithmic Soccer Referee, 23 Vand. J. Ent. & Tech. L. 341, 358-59, 363-65 (2021).
Ben Morse, ‘Sorry but I don’t like that call!’: The Controversial Penalty Call Which Played a Decisive Role in Super Bowl LVII Outcome, CNN (Feb. 2023), https://www.cnn.com/2023/02/13/sport/holding-call-super-bowl-lvii-chiefs-eagles-spt-intl/index.html (demonstrating that errant calls can skew the results of pivotal sports games, altering that sport’s very history); Shanna McCarriston, Twitter Reacts to Saints’ Overtime Loss to Vikings, Controversial Non-OPI Call, CBS Sports (2020), https://www.cbssports.com/nfl/news/twitter-reacts-to-saints-overtime-loss-to-vikings-controversial-non-opi-call/ (demonstrating that missed calls can be just as detrimental to referees’ credibility as errant calls).
 Joe Pompliano, Why the NFL Puts Computer Chips in Each Football, Substack (Jan 25, 2023), https://huddleup.substack.com/p/why-the-nfl-puts-computer-chips-in.
Anthony Castrovince, Futures Game a Perfect Showcase for Ball-Strike Challenge System, Major League Baseball (July 2023), https://www.mlb.com/news/abs-system-gets-rigorous-test-at-futures-game.
Spencer Garfield, Robot Referees in Basketball, Portland State Vanguard (2020), https://psuvanguard.com/robot-referees-in-basketball/.
 Madison, supra note 1, at 349-50.
 Castrovince, supra note 5.
 Pai, supra note 4.
Pompliano, supra note 3; see generally Moeen Mostafavi, Fateme Nikseresht, Jacob Earl Resch, Laura Barnes & Mehdi Boukhechba, Collision Prediction and Prevention in Contact Sports Using RFID Tags and Haptic Feedback (2021), https://arxiv.org/ftp/arxiv/papers/2102/2102.03453.pdf (explaining that RFID chips can be used for both officiating and injury prevention purposes).
Timothy L. O’Brien, Match-Fixing Has Existed for Centuries. Gambling Apps Are Making It Worse, Wash. Post (Feb. 7, 2023), https://www.washingtonpost.com/business/match-fixing-has-existed-for-centuries-gambling-apps-are-making-it-worse/2023/02/07/340821a6-a6a7-11ed-b2a3-edb05ee0e313_story.html.
 John Holden & Marc Edelman, A Short Treatise on Sports Gambling and the Law: How America Regulates Its Most Lucrative Vice, 2020 Wis. L. Rev. 907, 917-18 (2020).
 See Sandra Mayson, Bias In, Bias Out, 128 Yale L.J. 2218, 2224 (2019).
See generally Julia Angwin, Jeff Larson, Surya Mattu & Lauren Kirchner, Machine Bias, ProPublica (May 23, 2016), https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing.
Sharona Hoffman & Andy Podgurski, Artificial Intelligence and Discrimination in Health Care, 19 Yale J. Health Pol’y L. & Ethics 1, 4 (2020).
Keith E. Sonderling, The Promise and the Peril: Artificial Intelligence and Employment Discrimination, 77 U. Miami L. Rev. 1, 5 (2022).
 See NFL, Safety Rules and Regulations, Nat’l Football League (Aug 9, 2013), https://www.nfl.com/news/safety-rules-regulations-0ap1000000228345 (describing player safety as a top priority of the NFL’s).