
Safeguarding Democracy in the Age of AI

Posted on Thursday, February 22, 2024 in Blog Posts.

By Faheem Ali

Earlier this month, the FCC issued a ruling that robocalls using voices generated by artificial intelligence (AI) are illegal.[1] The ruling comes at a particularly important time as the 2024 election cycle ramps up. The problem of AI-generated robocalls is already evident: before the New Hampshire Democratic Primary, robocalls using AI to mimic President Joe Biden’s voice were deployed to discourage potential voters from casting their ballots in the primary.[2] The FCC’s ruling is a step toward ensuring that the 2024 election is not marred by the use of AI to spread misinformation for purposes of election interference. In an era where technological advancements continue to reshape the landscape of political discourse and engagement, the FCC’s ruling underscores the necessity of safeguarding the election process from emerging threats.

The rise of AI-generated content poses a significant challenge, not only in the form of robocalls but also in the potential manipulation of digital media and the dissemination of deceptive information.[3] Robocalls are just one of many tactics that may be used to spread misinformation and interfere with the 2024 election. Concerns about AI-generated photos and videos are prompting many companies to take steps to ensure that misinformation about candidates and their campaigns is not spread through their platforms. Recently, tech companies with substantial involvement in AI development and/or popular social media platforms (OpenAI, Google, Meta, TikTok, etc.) signed the “AI Elections Accord,” pledging to develop technology to combat election misinformation generated by AI.[4] While the accord outlines methods the companies will use to detect deceptive AI content, it does not go so far as to mandate a commitment to ban or remove such content.[5] Critics have argued that the language and commitments in the accord are largely symbolic and that enforcement will face significant challenges.[6] Still, the initiative represents a step toward industry-wide cooperation in addressing the potential threats posed by AI-generated content during elections.

As voters increasingly rely on digital platforms for information, the need to address the vulnerabilities that AI introduces into the electoral system becomes more pressing. The FCC’s stance on AI-generated robocalls signals a commitment to maintaining the integrity of elections, yet it also highlights the broader necessity for comprehensive regulations and collaborative efforts to counteract the risks associated with AI in the realm of politics. Striking a balance between technological innovation and safeguarding democratic processes is an ongoing challenge, requiring continuous dialogue among regulatory bodies, technology companies, and the public. As we navigate the evolving landscape of AI in politics, there must be a shared responsibility to develop frameworks that protect the democratic principles on which our elections rely, ensuring that advancements in technology contribute positively to the electoral process rather than posing risks to its integrity.

Faheem Ali is a 2L at Vanderbilt Law School. Prior to law school, Faheem graduated from Case Western Reserve University with a bachelor’s in Biochemistry and Political Science.

[1] Fed. Communications Comm’n, FCC Makes AI-Generated Voices in Robocalls Illegal (2024)

[2] Ali Swenson & Will Weissert, Fake Biden Robocall Being Investigated in New Hampshire, Associated Press (Jan. 22, 2024, 10:32 PM)

[3] Averi Harper, Bobby Gehlen & Ivan Pereira, AI Use in Political Campaigns Raising Red Flags Into 2024 Election, ABC News (Nov. 8, 2023, 7:25 AM)

[4] Matt O’Brien & Ali Swenson, Tech Companies Sign Pact to Combat AI-Generated Election Trickery, Associated Press (Feb. 16, 2024, 1:36 PM)

[5] Id.

[6] Id.
