Algorithmic Opacity, Private Accountability, and Corporate Social Disclosure in the Age of Artificial Intelligence
Today, firms deploy machine-learning algorithms to guide human decisions in nearly every industry, creating a structural tension between commercial opacity and democratic transparency. In many commercial applications, advanced algorithms are both technically complex and privately owned, allowing them to elude existing legal regimes and evade public scrutiny. Yet these algorithms may produce negative effects without warning: erosion of democratic norms, damage to financial returns, and harm to stakeholders. Because the inner workings and applications of algorithms are generally incomprehensible to outsiders and protected as trade secrets, they can be almost entirely shielded from public oversight. One solution to this conflict between algorithmic opacity and democratic transparency is an effective mechanism that requires firms to disclose information about their algorithms.
This Article argues that the pressing problem of algorithmic opacity stems from a regulatory void: US disclosure regulations fail to consider the informational needs of stakeholders in the age of artificial intelligence (AI). In a world of privately owned algorithms, advanced algorithms, as a primary source of decision-making power, have produced various perils for the public and for firms themselves, particularly in the capital markets. Because the current disclosure framework does not address the informational needs created by algorithmic opacity, this Article argues that algorithmic disclosure under securities law could be used to promote private accountability and further the public interest in sustainability. In this vein, through the lens of the US Securities and Exchange Commission (SEC) disclosure framework, this Article proposes a new disclosure framework for machine-learning-based AI systems that accounts for the technical traits of advanced algorithms, the potential dangers of AI systems, and regulatory governance regimes in light of increasing AI incidents. Toward this goal, this Article examines numerous disclosure topics, analyzes key disclosure reports, and proposes new principles for reducing algorithmic opacity: attention to stakeholder interests, sustainability considerations, comprehensible disclosure, and minimum necessary disclosure. Together, these principles strike a balance between the democratic value of transparency and private interests in opacity. This Article concludes by discussing the impacts, limitations, and possibilities of using the new disclosure framework to promote private accountability and corporate social responsibility in the AI era.