(Un)fair Use?: Understanding the New York Times’s Lawsuit Against Microsoft and OpenAI

Posted on Monday, January 8, 2024 in Blog Posts.

By Jay Eischen

The nascent generative artificial intelligence (AI) industry—OpenAI in particular—has dominated headlines for more than a year. Boardroom drama, hopes of explosive productivity growth, and cautionary tales of existential risks have intermittently gripped the public consciousness.[1] Most recently, however, the legality of OpenAI’s use of copyrighted materials in training its AI models has come to the fore with the filing of the highest-profile legal challenge yet: the New York Times’s recent lawsuit against Microsoft and OpenAI.[2]

The Times’s complaint alleges that OpenAI not only trained its ChatGPT model on millions of copyrighted Times materials, but also placed particular emphasis on “high-quality content, including … from the Times.”[3] The result is more than an ability to mimic the Times’s reporting style; the complaint alleges that targeted prompting can yield exact copies or near-copies of several Times articles.[4]

The most apt defense available to OpenAI is the doctrine of “fair use,” which permits the use of copyrighted materials “for purposes such as criticism, comment, news reporting, teaching…, scholarship, or research.”[5] For example, Google successfully invoked fair use to defend against claims of copyright infringement when it scanned and uploaded millions of books as part of its Google Books project—a favorable precedent that OpenAI would likely cite extensively.[6] Fair use determinations are guided by four factors:

“(1) the purpose and character of the use, including whether such use is of a commercial nature or is for nonprofit educational purposes;

(2) the nature of the copyrighted work;

(3) the amount and substantiality of the portion used in relation to the copyrighted work as a whole; and

(4) the effect of the use upon the potential market for or value of the copyrighted work.”[7]

With respect to the first factor, it is relevant but not dispositive whether the potential infringer is a for-profit entity.[8] While OpenAI is ostensibly a non-profit, it has ballooned to a $90 billion valuation due to the for-profit subsidiary that develops and monetizes ChatGPT, decreasing the likelihood that OpenAI’s use of the Times’s articles is “fair.”[9] The second factor is generally less important in determining fair use, but it likely favors OpenAI because the nature of the Times’s reporting is factual rather than fictional.[10]

It is too early, however, to determine whether the third and fourth factors are likely to favor the Times or OpenAI. The complaint provides examples of ChatGPT regurgitating significant portions of, or even entire, Times articles. The third factor may still favor OpenAI, however: the Times may have engaged in abnormal, targeted prompting violative of ChatGPT’s terms of use, and OpenAI could potentially tweak the model to reduce or eliminate instances of outright copying while still using copyrighted materials to train its models.[11] With respect to the fourth factor, it is also unclear to what extent ChatGPT may reduce the marketability of the Times’s reporting. OpenAI has only recently integrated real-time information access into its services through its partnership with Microsoft, and evidence has yet to be adduced showing any meaningful relationship between ChatGPT and the profitability of the Times’s reporting.

The next phase of litigation will likely commence after Microsoft and OpenAI either file motions to dismiss or answer the complaint; their responses are due February 26th. If a settlement is not reached, expect a protracted, hard-fought legal battle: the dispute over Google Books raged for a decade without reaching trial. Regardless of the outcome in this case, copyright decisions with profound implications for AI developers and copyright holders alike are inevitable.

Jay Eischen is a second-year student at Vanderbilt University Law School.

[1] See Tripp Mickle et al., Explaining OpenAI’s Board Shake-Up, N.Y. Times (Nov. 22, 2023); Greg Jensen et al., Assessing the Implications of a Productivity Miracle, Bridgewater Assocs. (Nov. 30, 2023); Kevin Roose, A.I. Poses ‘Risk of Extinction,’ Industry Leaders Warn, N.Y. Times (May 30, 2023).

[2] Complaint, New York Times Co. v. Microsoft Corp., 2023 WL 8933610 (Dec. 27, 2023) (hereinafter, the “Complaint”).

[3] Complaint ¶¶ 2, 85-91.

[4] Complaint ¶¶ 98-107.

[5] See 17 U.S.C. § 107.

[6] Authors Guild v. Google, Inc., 804 F.3d 202 (2d Cir. 2015).

[7] 17 U.S.C. § 107.

[8] See Authors Guild v. Google, 804 F.3d at 219.

[9] See id.; Complaint ¶ 6.

[10] See Authors Guild v. Google, 804 F.3d at 219.

[11] See Will Oremus & Elahe Izadi, AI’s Future Could Hinge on One Thorny Legal Question, Wash. Post (Jan. 4, 2024, 7:00 AM).
