State and Regulatory Agency Approaches to Limiting Deepfakes in Political Advertising 

Posted on Thursday, June 19, 2025 in Issue 4, Notes, Volume 27.

Mary Margaret Burniston | 27 Vand. J. Ent. & Tech. L. 797 (2025)

With recent advancements in artificial intelligence (AI), regulators have turned their attention to how—and whether—to regulate the use of AI in political advertisements. While nineteen states have passed legislation regulating AI in political advertising, such regulations may be challenged as violations of the First Amendment. Federal agencies also dispute which of them has jurisdiction to address the problem, with the Federal Election Commission (FEC) and the Federal Communications Commission (FCC) both claiming authority. Beyond issues of jurisdiction, agency action is further limited by the US Supreme Court’s recent decision in Loper Bright Enterprises v. Raimondo.

As deepfakes in political advertisements present the clearest threat of electoral confusion and deception, lawmakers should focus on deepfakes and craft content-neutral regulations governing the manner of speech that can be used in AI-generated political advertisements. Such regulations would advance the strong government interest in preventing misrepresentation and electoral confusion. These regulations should be narrowly tailored to require labeling of deepfakes, while leaving open ample alternative channels of communication. The FCC and FEC should exercise complementary roles, with the FCC focusing on deepfakes in robocalls, television, and radio, and the FEC focusing on prohibiting fraudulent misrepresentation.
