Saving Icarus 2.0: AI Regulation Requires Extraordinary Partnerships

SUMMARY

Unifying Theme: Information Marketplace: Ensuring the Public has the Data

Artificial intelligence (AI) has the potential to offer humanity enormous benefits, but ensuring that its progress aligns with democratic principles and human rights will require extraordinary coordination: political leaders engaged with technology leaders as partners, not adversaries, to craft a flexible regulatory framework.


By: Caroline Friedman Levy, Ph.D., Research-to-Policy Collaboration, Penn State University
and Matthew Facciani, Ph.D., postdoctoral researcher at Vanderbilt University

While divisive Congressional rhetoric has become commonplace, the U.S. Senate's bipartisan passage of the U.S. Innovation and Competition Act of 2021 struck a hopeful note for policy aimed at boosting our "innovation infrastructure" and particularly an existentially critical sector: artificial intelligence.

AI already permeates our lives, nudging consumer behavior and optimizing markets, transportation routes and energy grids. It contributes to technological advances that might alternately (or even simultaneously) be considered mundane, dubious or miraculous. In contrast to more conventional, mechanistic software, AI systems are designed to adapt and "learn" from images, text and other inputs, with the potential to exceed the bounds of human intelligence. Machine learning played a crucial role in the extraordinarily fast development of effective vaccines against COVID-19. Billions of dollars of smart money bets are being placed on AI's potential to combat climate change and devastating illnesses.

Yet the ancient myth of Daedalus and his son Icarus remains a relevant parable when contemplating this definitively modern innovation. In the Greek tale, Daedalus constructs wings (feathers attached to wooden frames with wax) so he and Icarus can escape their labyrinth prison, and he clearly admonishes his son that they must not fly too close to the sun lest its heat melt the wax. But, intoxicated by his newfound capabilities, Icarus fails to heed his father's warning and crashes to his death. Likewise, soaring on the fathomless benefits AI offers humanity carries equally immeasurable risks. And technology leaders warn that with the advent of deep learning systems, which foreshadow the exponential growth of AI's capabilities, our window to manage these risks is narrowing.

Numerous institutes and working groups are developing ethical best practices, and international policymakers have begun to sketch parameters for AI strategy and regulation within their jurisdictions. However, ensuring that AI's evolving uptake aligns with democratic principles and human-centered values will require political leadership working in tandem with leading technologists to formulate strategic policy. The urgency of this coordinated leadership is particularly acute given the deliberate foresight with which China is investing in AI. Critical observers fear that the nation's implicit goal is applying the technology to "the perfection of dictatorship." No less concerning than authoritarian uses are the profound risks of allowing short-term, market-driven imperatives to shape AI's future. While we mortals are not at imminent risk of subjugation by the robot overlords of science fiction fantasy, inadequate leadership leaves us vulnerable to relentless virtual "paper cuts," with gradual but cumulative infringements on fundamental American rights to privacy, agency and impartial treatment under the law.


AI in 2021: Known Risks

There is a striking degree of consensus among experts about the salient flaws of AI as it is used currently and about known risks for the near future. Foremost among these are:

  • Historical racial, gender, ethnic and other biases that are baked into data sets and algorithms currently influencing decisions as momentous as hiring, bail approval and health care allocation. Left unchecked, the increasing speed, power, capability and permeation of AI is likely to broaden the impact of these biases, expanding and reifying social inequalities.
  • AI advances are enabling metastasizing infringements on personal privacy, with the surveillance taking place in authoritarian countries essentially paralleled by more diffuse corporate surveillance in the United States, without any framework for accountability.
  • Without robust strategic leadership focused on education, training and industrial policy in an era of accelerated AI, we risk sharply increased unemployment and ever-widening income inequality, while the economic benefits of the advanced technologies are conferred primarily upon a fortunate few.
  • Absent policy changes, AI-enhanced offensive capabilities, ranging from highly advanced deepfakes and disinformation to the means to disrupt crucial infrastructure and initiate drone swarms, will be increasingly accessible to terrorists, rogue states and other malefactors.
  • Within just a few years, social media and other digital platforms have built algorithms that have effectively allowed for a startling influence over consumer behavior and mastery of the attention economy. As AI-supported nudges advance, risks to social cohesion and our conception of free will are likely to increase.

AI Policy: Underlying Constructs

Most experts approach questions of AI governance with humility, taking into account that the technology will advance in ways that are difficult to predict, with accompanying regulatory challenges that are sure to be similarly fast-changing. Yet, there is considerable agreement about the principles that should frame a coherent and strategic regulatory response. Until recently, these principles have been reiterated ceaselessly across consortia and within the text of white papers without strong U.S. leadership guiding action. Encouragingly, President Biden recently asserted the United States' commitment to helping set norms and standards for international AI policies at the latest G7 and NATO conferences, and he has appointed an "AI czar" to lead a National Artificial Intelligence Research Resource Task Force. Still, one would be hard-pressed to find an AI expert who thinks the United States is moving at a sufficiently expeditious pace to "upskill" our political leaders to meet the needs of the moment. To create necessary safeguards, an AI regulatory regime will need to accomplish the following:

Given that the foremost expertise in AI development is embedded within the technology business world, a culture of cooperation must be forged between technology leaders and political leaders, committing to AI that prioritizes safety over speed and enshrines fairness, accountability and data privacy as fundamental principles.

The United States has developed models of oversight requiring a delineated degree of transparency for most industries that pose profound social risks, for example in regulating financial firms and pharmaceutical companies. Indeed, in 2020 the FTC published guidelines highlighting the expectation that AI tools be "transparent" and "explainable" to foster accountability. However, perhaps due to competitive pressures and concerns about intellectual property, a regard for transparency has not yet been embedded within the technology industry culture. The time is now. Any regulatory system constructed to ensure that AI applications are safe, trustworthy and equitable will require model explainability and data-use transparency.

Research and development in AI science in the U.S. is currently led by the private sector, with the most prominent exception being research conducted with the ultimate aim of application to our intelligence and defense capabilities, for example via DARPA. Along with the benefits of ingenuity that attend private sector investments comes the risk of applications driven by the need for short-term return on investment, without concern for equity, privacy and other foundational American values. Additionally, economists focused on the balance of power between democratic nations and China have pointed to the need for new-era industrial policies that will ensure domestic supply chains of the hardware that powers AI. The Senate passage of the U.S. Innovation and Competition Act of 2021 marked an important endorsement of such investments, with Minority Leader Mitch McConnell (R-Ky.) among the 18 Republican senators who voted for the bill. With the House response moving forward in piecemeal fashion, the final scope of this legislation remains unclear; bicameral bipartisan leadership will be needed to secure these critical investments in AI research and development.


Caught Between Regulation and Innovation

While the Senate passage of the U.S. Innovation and Competition Act bodes well for increased funding in research and development for AI, the White House and Congress have yet to make substantive progress on oversight of this rapidly advancing, double-edged technology. The European Union has begun to outline its approach to AI regulation. However, many domestic experts have expressed concern that the EU's proposed rules would impede the innovation necessary to ensure that AI technology standards are developed within countries devoted to democratic principles and human rights. In order for the United States to remain a responsible leader in artificial intelligence, the White House and FTC must actively engage Congress, technology experts and business leaders to strike a regulatory balance that places fundamental rights at the forefront of our AI standards. We have too much to gain from this extraordinary, evolving technology to neglect taking a principled stake in its future. Equally, we have too much to lose.


Caroline Friedman Levy, Ph.D.

Caroline Friedman Levy, Ph.D., is a clinical psychologist, researcher, and policy specialist focused on applying behavioral science to the implementation of evidence-based policies. She completed her undergraduate work at Cornell University and doctoral work in clinical psychology at the University of Vermont, earned an M.Sc. in health, community and development at the London School of Economics and Political Science, served as a policy fellow at the Department for Education in the UK and is currently a participating researcher for the Research-to-Policy Collaboration at Pennsylvania State University. 

Matthew Facciani, Ph.D.

Matthew Facciani, Ph.D. is a postdoctoral researcher at Vanderbilt University in the Medicine, Health, and Society Department. He received a B.A. in psychology from Westminster College and M.A. and Ph.D. in sociology from the University of South Carolina. His research areas include LGBTQ health, social networks, political polarization and misinformation. Facciani is also interested in evidence-based policy and works with the Research-to-Policy Collaboration at Pennsylvania State University.