Hazardous Misinformation: Key Policy Levers


UNIFYING THEME: Information Marketplace: Ensuring the Public Has the Data 

Digital technology allows for the frictionless spread of information, including false and manipulated content. Because ours is a nation that has enshrined freedom of speech in the First Amendment of its Constitution, the policy levers available to U.S. officials to confront the free flow of dangerous misinformation, whether pertaining to COVID-19, elections, or other matters of existential significance to lives and our democratic institutions, are necessarily circumscribed. Thankfully, misinformation scholars have proposed policies that comply with constitutional limitations and have the potential to mitigate the hazards of misinformation. 

RELATED NEWS: The Tennessean/Gannett News Service (Opinion): "How to support journalism in the fight against misinformation" (April 13, 2021)

by Caroline Friedman Levy, Ph.D., Research-to-Policy Collaboration, Penn State University 

and Matthew Facciani, Ph.D., postdoctoral researcher at Vanderbilt University

Rumors and lies have been shared since the origins of speech, and they have pockmarked traditional print and broadcast media since their inception. But with digital technology allowing for the frictionless spread of false and manipulated content, the lightning speed and limitless reach of misinformation have increasingly created threats to the public and to our democratic institutions. Misinformation pertaining to COVID-19, linked to worsened individual outcomes and community outbreaks, and to our elections, tied to instances of voter intimidation, suppression and incitement, has raised the alarm in recent months. While Congress funds an array of federal programs within the Department of Homeland Security, our intelligence services, the State Department and the Department of Defense to confront foreign-originating disinformation (intentionally false communication designed to mislead and disrupt), there is as yet no coordinated federal response to dangerous misinformation that may or may not be of foreign origin, that may or may not be intentionally designed, and that is substantially propagated by domestic social media consumers.  

As a nation that enshrined freedom of speech in the First Amendment of its Constitution, the context for addressing hazardous misinformation through policy is necessarily different here than in other countries. Nevertheless, misinformation scholars have proposed policy levers that would sit comfortably with our deference to free speech by focusing on efforts to promote transparency and accountability rather than engaging the government in the highly awkward and constitutionally prohibited role of "content moderator." The most widely endorsed policy proposals fall into the following four categories:

  1. industry co-designed codes of practice with public oversight; 
  2. redress for opportunistic practices that have weakened legitimate news outlets; 
  3. a safe harbor for data sharing with academic misinformation researchers; and  
  4. broad implementation of validated interventions to boost users' media literacy and capacity to discern misinformation.  

Codes of practice with public and/or third-party oversight

Legal scholars have noted that government regulation of digital platform companies (the most lucrative and powerful businesses in existence) has been shoehorned into existing federal agencies, leading to profound oversight gaps. A recent influential report, co-authored by former top officials at the Federal Communications Commission and the Department of Justice, encouraged Congress to create a Digital Platform Agency, a distinct new federal regulatory body infused with "digital DNA" to hold this far-reaching and protean industry accountable and to promote transparency in its practices. This proposal has been co-signed by a variety of scholars who note the need for comprehensive oversight conducted within an agency designed to be "open-minded and agile enough to keep up with the internet sector."

Given the legislative hurdles that creating a new federal agency will face in a closely divided Congress, the new administration might opt to immediately create a task force or commission focused on updating digital platform policy. Such a task force could, for example, lead an undertaking parallel to the EU Commission's Code of Practice on Disinformation, a living case study of industry self-regulation backed by public oversight. Europe's leading online platforms, networks and digital advertisers, including Facebook, Google, Twitter and Mozilla, published this jointly designed framework in October 2018, with the stated goal of stemming the flow of "verifiably false or misleading information created, presented and disseminated for economic gain or to intentionally deceive the public which may cause public harm via threats to democratic political and policymaking processes as well as public goods such as the protection of EU citizens' health, the environment or security." The code promotes transparency and accountability through commitments, including disclosing political funding in advertising, demonetizing purveyors of disinformation, closing fake and/or bot accounts, and addressing algorithmic biases that reward extremist and/or low-credibility content, with the EU Commission participating in targeted monitoring and feedback on the industry's implementation. A Brookings review of these European efforts concludes, "it's clear that democracies might not be able to halt all disinformation, but they can limit it-and, hearteningly, they can do so within democratic norms." Notably, Facebook's CEO has nominally endorsed the notion of extending similar standards to the United States, with third-party oversight. 

Redress for practices that weaken legitimate journalism

Misinformation thrives in an information landscape in which legitimate journalism, particularly local journalism, is on economically shaky ground. In an extension of a long-term trend, 36,000 journalists in the U.S. have lost their jobs, been furloughed or had their pay cut since the start of the COVID-19 pandemic. While the demand for news and information is arguably stronger than ever, and digital platforms like Google and Facebook earn a robust and ever-growing share of advertising revenue, journalistic outlets compete for crumbs.

Multiple misinformation scholars have argued for a comprehensive re-evaluation of the current media ecosystem in which consumers increasingly rely on social media for news, allowing digital platforms to reap economic rewards while the authors of the news content struggle to survive. To remedy this disconnect, proposals include leveraging antitrust actions against digital platforms to ensure that news sites are reimbursed for journalism shared on social media platforms and, conversely, the creation of a temporary safe harbor in antitrust laws allowing news publishers to collectively negotiate with platforms regarding pricing terms for use of news content. Indeed, there is strong bipartisan support in the House and Senate for a current bill, the Journalism Competition and Preservation Act, which would allow for exactly this latter option to help local "newspapers survive amid massive layoffs."

Additional proposals have included (1) levying a digital services tax on major news-sharing platforms in the interest of creating a vibrant public funding model to support local and public interest journalism, and (2) using tax incentives to encourage struggling news outlets to transition to not-for-profit status. While these proposals raise challenging questions regarding which news outlets are "legitimate" and warrant support, models exist for authenticating news sites dedicated to the public interest from which policymakers can draw.

Supporting research/mapping misinformation

Just as independent academic researchers play a crucial role in monitoring other public health and safety issues, misinformation researchers in the U.S. need access to data that will bolster best practices and inform wise policy. One recent study showed that COVID-19 misinformation proliferates across digital platforms far more rapidly than validated COVID-19 health information, providing researchers with a map detailing where health misinformation spreads online. Despite the utility of this study, other researchers warn they habitually lack access to data that would inform our understanding of the players and dynamics underlying the spread of hazardous misinformation. Indeed, one of the strongest critiques of the current EU Code of Practice on Disinformation is the lack of a pathway for academic researchers to access data in support of meaningful oversight. In response to this concern, the commission recently recommended a "structured model for cooperation between platforms and the research community." 

While privacy concerns (and implementing state regulations) have prompted digital platforms to tighten restrictions on sharing data with third parties, scholars have urged federal policymakers to create a national safe harbor for sharing aggregated, anonymized data with misinformation experts at accredited academic institutions to conduct independent research that can be used to develop and refine internal, industry-wide and governmental policies. Absent federal action or at least a regulatory safe harbor, social media platforms have little incentive to allow access to data surrounding platform use, even in anonymized form.

Civics education/media literacy

Research shows that helping media consumers to discern and downgrade the types of dubious information they will encounter, a practice called "prebunking," is more effective than efforts to "debunk" false (yet often legitimate-looking) news stories following exposure. Prebunking interventions have been likened to psychological inoculations, yet they can be as painless and engaging as the video games recently field-tested by the Department of Homeland Security's Cybersecurity and Infrastructure Security Agency as part of its mission to "strengthen the national immune system for disinformation."

Prebunking exercises are incorporated into media literacy and civics education curricula in other countries. In Finland, for example, prebunking stratagems are not only a component of school-based civics education; they are also required for professional onboarding and continuing education in fields that demand information literacy, including journalism and politics.

The proposed federal Digital Citizenship and Media Literacy Act (H.R. 4688 and S. 2240) would create a Department of Education grant program to support K-12 digital citizenship and media literacy education. Critically, though, older adults are particularly susceptible to digital scams and misinformation. And while validated prebunking exercises have been developed specifically for seniors, only a tiny percentage of this population has benefited from them. To reach these most vulnerable media consumers, the principal researchers who worked with CISA on prebunking field tests have urged digital platforms, governments and educational institutions to develop broad-based inoculation efforts, not only for students within a civics education or media literacy context, but as an expectation for users engaging with a given digital media platform. Indeed, policymakers can leverage digital platform oversight to ensure that major media-sharing platforms incorporate validated prebunking exercises into their practices for all users.

Transparency as free speech

Far from conflicting with our First Amendment rights to free speech, these policy levers aimed at mitigating the potential hazards of misinformation would clarify and expand upon speech by: 1) promoting digital companies' transparency about their internal practices, 2) strengthening local and legitimate journalism, 3) providing access to data for academic researchers working in the public interest, and 4) bolstering consumers' awareness of how information can be distorted. Digital misinformation can spread at the speed of light; oversight of the digital platform industry can no longer proceed at a snail's pace.

Caroline Friedman Levy

Caroline Friedman Levy, Ph.D., is a clinical psychologist, researcher and policy specialist focused on applying behavioral science to the implementation of evidence-based policies. She completed her undergraduate work at Cornell University and doctoral work in clinical psychology at the University of Vermont, earned an M.Sc. in health, community and development at the London School of Economics and Political Science, served as a policy fellow at the Department for Education in the UK and is currently a participating researcher for the Research-to-Policy Collaboration at Pennsylvania State University.

Matthew Facciani

Matthew Facciani, Ph.D., is a postdoctoral researcher in the Department of Medicine, Health, and Society at Vanderbilt University. He received a B.A. in psychology from Westminster College and an M.A. and Ph.D. in sociology from the University of South Carolina. His research areas include LGBTQ health, social networks, political polarization and misinformation. Facciani is also interested in evidence-based policy and works with the Research-to-Policy Collaboration at Pennsylvania State University.