
Regulation of Algorithm Use in Public Sector Application Systems

Posted on Monday, February 5, 2024 in Blog Posts.

By Molly Devereaux

As algorithms have increasingly been used in the public sector to streamline service provision,[1] it is important to understand that algorithmic output is not always accurate.[2] One example of government use is streamlining applicant processes, such as detecting fraudulent claims among unemployment applicants.[3] Algorithms are helpful, efficient, and already ingrained in nearly every part of professional life.[4] However, time saved on the front end can carry a cost when an algorithm produces a wrong output and later remedies are required, imposing burdens on government entities and their citizens. The question then becomes what balance of accuracy and efficiency justifies the unchecked use of algorithms in government processes.

A case from Michigan highlights how using an inaccurate algorithm to streamline processes can cause real-world harm to citizens.[5] A Michigan unemployment agency was accused of using a faulty algorithmic system to detect fraudulent unemployment claims.[6] Allegedly, when the algorithm flagged a suspected fraudulent claim, the system initiated an automated process that ceased payment of unemployment benefits and sent letters on behalf of the agency demanding repayment of benefits accrued during the supposedly fraudulent term.[7] Plaintiffs in the class action suit received notice of their obligation to repay thousands of dollars in overpaid benefits, penalties, and interest.[8] Eventually, the agency seized their tax refunds and garnished their wages.[9] In deciding whether the plaintiffs would be permitted to recover monetary damages, the Michigan appellate court commented that, if proven true, the agency's sole reliance on an algorithm to trigger an automated process that could revoke unemployment benefits, accuse plaintiffs of fraudulently receiving benefits, and impose penalties and interest would mean the state had violated the plaintiffs' right to due process.[10] The case recently settled for $20 million to be paid by the agency, so it is unclear whether the agency in fact relied solely on the system.[11] However, in a subsequent announcement, the agency admitted it had used a computerized system developed in 2010 that was no longer performing acceptably.[12] By using a flawed algorithm, an agency caused actual harm to Michigan citizens, raised due process concerns, and led the state to incur a costly settlement.[13]

While the potential for harm from faulty algorithmic output is real, algorithms' benefits are undeniable.[14] A call for a complete cessation of algorithm use might eradicate the harm caused specifically by algorithms.[15] Still, the harm of reverting to entirely human output would remain, and might even be amplified by the increased workload left behind once streamlined processes are erased.[16] For example, the implicit biases of human reviewers of unemployment applications would remain.[17] Additionally, algorithms are more amenable to change than implicit biases in human behavior, which means the ability to protect against harm is potentially greater if algorithms are kept in place and adjusted to give more accurate output.[18] Given how pervasive algorithms already are in public settings, and their potential to further reduce harm compared to fully human output, eradicating them is unlikely.

In the public sector, rather than requiring a certain threshold of accuracy before algorithms may be implemented in the application process, demanding a second check on the output is more appropriate, given the potential for monetary and due process harm caused by faulty outputs. Yet few regulations or laws require a second check, and there is little incentive to perform one, since it is not always clear who can be held liable for an algorithm's mistake under traditional tort law remedies.[19] Under a risk-benefit analysis, the time cost of a double check outweighs the unlikely event of liability.[20] Some regulatory or statutory solutions have been proposed to incentivize double-checking algorithmic output, such as embracing strict liability in this particular setting or creating regulations for public sector algorithm use.[21] A solution that places the burden on government entities to double-check algorithmic output seems especially appropriate given public reliance on these entities.
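The "second check" proposed above can be pictured as a simple gate: the algorithm's fraud flag is treated as a recommendation, and no enforcement action fires until a human reviewer confirms it. The following sketch is purely illustrative; all names and structures are hypothetical and are not drawn from the Michigan system or any real agency software.

```python
from dataclasses import dataclass

@dataclass
class Claim:
    """Hypothetical record of an unemployment claim under fraud review."""
    claimant: str
    flagged_by_algorithm: bool       # output of the (possibly faulty) model
    human_reviewed: bool = False     # the proposed second check
    human_confirmed_fraud: bool = False

def may_take_enforcement_action(claim: Claim) -> bool:
    """Enforcement (ceasing payments, demanding repayment, imposing
    penalties) is permitted only when the algorithmic flag has been
    independently confirmed by a human reviewer."""
    return (claim.flagged_by_algorithm
            and claim.human_reviewed
            and claim.human_confirmed_fraud)

# An algorithmic flag alone -- the alleged scenario in Michigan -- does
# not clear the gate:
flagged_only = Claim("A. Smith", flagged_by_algorithm=True)
assert not may_take_enforcement_action(flagged_only)

# A flag confirmed on human review does:
confirmed = Claim("B. Jones", flagged_by_algorithm=True,
                  human_reviewed=True, human_confirmed_fraud=True)
assert may_take_enforcement_action(confirmed)
```

The design point is the conjunction: the model's output is never a sufficient condition for state action against a claimant, which is precisely the sole-reliance concern the Michigan court flagged.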

Molly Devereaux is a 2L at Vanderbilt Law school from Bartlett, Illinois. She plans on working on transactional matters at a law firm after law school.

[1] Rachel Wright, Artificial Intelligence in the States: Harnessing the Power of AI in the Public Sector, CSG (Dec. 5, 2023).

[2] Lee Rainie & Janna Anderson, Code-Dependent: Pros and Cons of the Algorithm Age, Pew Rsch. Ctr. (Feb. 8, 2017).

[3] Michele Gilman, States Increasingly Turn to Machine Learning and Algorithms to Detect Fraud, U.S. News (Feb. 14, 2020).

[4] See Rainie & Anderson, supra note 2.

[5] See generally Bauserman v. Unemployment Ins. Agency, 950 N.W.2d 446 (Mich. Ct. App. 2019), aff’d, 983 N.W.2d 855 (Mich. 2022).

[6] Id. at 459.

[7] Id. at 459.

[8] Id. at 450–54.

[9] Id.

[10] Id. at 459.

[11] Carolyn Muyskens, Mich. Judge OKs $20M for Victims of Faulty Fraud Algorithm, Law360 (Jan. 30, 2024, 1:32 PM).

[12] Id.

[13] See id.

[14] See Rainie & Anderson, supra note 2.

[15] See id.

[16] See id.

[17] See id.

[18] Sendhil Mullainathan, Biased Algorithms Are Easier to Fix Than Biased People, N.Y. Times (Dec. 6, 2019).

[19] See Carrie Kirby, When Algorithms Harm Us, Iowa Coll. L. (Nov. 30, 2022).

[20] See id.

[21] See id.
