The Human-Shaped Hole in AI Oversight

By Caroline Friedman Levy, Ph.D., and Matthew Facciani, Ph.D. 

Even in these polarized times, policymakers across the ideological spectrum can find common ground on artificial intelligence. Members of Congress from both parties are rooting for AI’s potential to help us cure pediatric cancers and solve some of our most devastating environmental crises, while also calling for oversight to address its contributions to societal hazards—including deepening inequalities, infringements on privacy and a weakening of personal accountability.

Early in President Joe Biden’s administration, the White House mobilized a National AI Research Resource Task Force with broad bipartisan support, charging its members to devise a road map to mitigate risks and improve the chances that the technology benefits us all, not just a privileged few.  

Yet, in truth, AI oversight has largely been left to the goodwill of the companies that are creating and applying these tools.

 With technology advancing so quickly and the federal regulatory process stuck in neutral, is the industry capable of policing itself?   

It would appear not. In fact, we are engaged in a kind of asymmetric warfare regarding AI and human behavior.

Tech platforms are using AI tools to rapidly master our behavior: predicting our preferences and prompting us to buy products we never considered. AI is also helping these companies capture our attention—arguably our most precious behavioral resource.

But as tech companies exploit behavioral strategies to remarkable financial advantage—and toward a new kind of global behavioral dominance—these overseers of some of the most potent behavioral modification tools ever created have shown little interest in applying that expertise to their oft-stated goal of establishing ethical AI standards.

We can see this gulf in the way tech companies apply nudge theory to customers while ignoring it when operationalizing AI ethics.

Nudge theory evolved out of years of social psychology research revealing that our behavior is often swayed more powerfully by our environment and social influences than by purported incentives. Much of the time, the behavior that feels easiest and most appealing to us is, in its way, the most reinforcing. We tend to gravitate toward a path of least resistance, yet we’re frequently oblivious to the nudges that set us on this path.  

One of the most elegant examples of an intentional policy nudge was the introduction of painted lines on roads to divide two-way traffic. Here was a contextual change that had a powerful effect on driving behavior and safety, creating a new social norm—one that spread quickly from the UK in 1918 around the world through the 1920s. Whatever our politics, most of us would likely agree this was a “nudge for good” and not an insidious attempt by the government to manipulate our behavior.

On the other hand, we experience (often inadvertent) “sludges” all around us. These are contextual factors that create friction and impede beneficial behaviors. The complexity of tax forms creates a sludge effect that reduces the likelihood that we file our taxes in a timely manner.  

Tech companies are masterful at intentionally applying nudges and sludges to consumers. The nudges are designed with the help of AI-powered systems that identify our preferences and craft our segmented online worlds—encouraging us to purchase in remarkably targeted ways or turning our attention to highly stratified content. Tech companies also exploit sludges: creating such friction that it’s harder to click away or to “unsubscribe” from a monthly purchase than it was to sign up.

Yet while Big Tech has started paying lip service to “aligning incentives” with ethical outcomes—and is staffing up AI ethics divisions—these companies ignore such strategies as nudges and sludges in their ostensible efforts to develop ethical AI. Rather, in their race to bring applications to market, tech companies build in operational nudges and sludges that promote ethical hazards.

In the past two years, Google’s ethical AI division has experienced a massive exodus of top employees, several of whom protested company policies that they said stifled the publication of critical academic research. Whistleblowers at Meta documented that company leadership was not slowed by internal research showing that the company’s AI-powered algorithms promoted social media engagement based on anger, anxiety and other polarizing emotions—in stark conflict with the company’s original mission “to make the world more open and connected.”

We need to create the tech culture version of painted lines on our roadways to encourage ethical AI development. 

Helpfully, ethicists who spend their days considering AI risks and challenges largely agree on where tech culture must head. They reliably emphasize the need for greater inclusivity among key decision-makers to help root out bias in data sets and models, the importance of comprehensive documentation to allow for the transparency needed for oversight, and the need for ethical considerations to be built in at every phase of a company’s and/or product’s life cycle.

Despite such broad consensus, there have been virtually no efforts to harness decades of behavioral science toward operationalizing and implementing these principles. For ethical AI to dominate—to amount to more than window-dressing—it must become the path of least resistance within the tech industry. If we hope for an AI future that truly benefits us all, behavioral science must be central to shaping this project. 

 

About the authors:

Matthew Facciani is a postdoctoral researcher in the University of Notre Dame’s Department of Computer Science and Engineering; he previously served as a postdoctoral researcher in Medicine, Health, and Society at Vanderbilt University. He received a B.A. in psychology from Westminster College and an M.A. and a Ph.D. in sociology from the University of South Carolina. His research areas include LGBTQ health, social networks, political polarization and misinformation. Facciani is also interested in evidence-based policy and works with the Research-to-Policy Collaboration at Pennsylvania State University.

Caroline Friedman Levy is a clinical psychologist, researcher and policy specialist focused on applying behavioral science to the implementation of evidence-based policies. She completed her undergraduate work at Cornell University and her doctoral work in clinical psychology at the University of Vermont, and she earned an M.Sc. in health, community and development at the London School of Economics and Political Science. She served as a policy fellow at the Department for Education in the UK and is currently a research group member at the Center for AI and Digital Policy.