New York Times Sues AI Search Startup Perplexity Over Copyright Infringement

The New York Times sued AI search startup Perplexity on Friday for copyright infringement, marking its second lawsuit against an AI company. The Times joins several other media outlets suing Perplexity, including the Chicago Tribune, which filed its own suit this week.

The Times’ complaint alleges that “Perplexity provides commercial products to its own users that substitute” for the outlet, “without permission or remuneration.”

This lawsuit, filed even as numerous publishers, including The Times, negotiate agreements with AI firms, fits a consistent, multi-year strategy. Recognizing that AI is not going away, publishers are using lawsuits as negotiating leverage, hoping to push AI companies to formally license content in ways that compensate creators and sustain the economics of original journalism.

Perplexity attempted to address demands for compensation by launching a Publishers’ Program last year, which offers participating outlets such as Gannett, TIME, Fortune, and the Los Angeles Times a portion of its ad revenue. In August, Perplexity also introduced Comet Plus, allocating 80% of its $5 monthly fee to participating publishers, and recently finalized a multi-year licensing deal with Getty Images.

“While we advocate for the ethical and responsible utilization and evolution of AI, we strongly object to Perplexity’s unauthorized use of our content to develop and promote their offerings,” Graham James, a spokesperson for The Times, stated. “We will persist in our efforts to hold companies accountable that decline to acknowledge the worth of our contributions.”

Like the Tribune’s complaint, The Times’ suit takes issue with how Perplexity answers user queries: its retrieval-augmented generation (RAG) products, such as its chatbots and Comet browser AI assistant, collect information from websites and databases and use it to generate responses.

“Perplexity then reformats the original content into written responses for users,” the lawsuit states. “These responses, or outputs, frequently consist of verbatim or near-verbatim reproductions, summaries, or condensed versions of the initial content, including The Times’s copyrighted works.”

Or, as James put it in his statement, “RAG enables Perplexity to crawl the internet and unlawfully extract content from behind our paywall, delivering it to its customers in real time. That content should exclusively be accessible to our paying subscribers.”
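For context on the mechanism both complaints describe, the sketch below shows what a generic retrieval-augmented generation loop looks like: fetch candidate passages relevant to a query, then prompt a language model to compose an answer from them. It is a minimal illustration only; the toy keyword retriever, the sample corpus, and the stubbed generate_answer function are assumptions for demonstration and say nothing about how Perplexity’s own products are built.

```python
# A minimal, illustrative sketch of a generic retrieval-augmented generation
# (RAG) loop. The retriever is a toy keyword-overlap scorer and
# generate_answer() is a stub standing in for a call to a language model;
# neither reflects any real product's implementation.

def retrieve(query: str, documents: list[str], k: int = 3) -> list[str]:
    """Rank documents by naive keyword overlap with the query and keep the top k."""
    query_terms = set(query.lower().split())

    def score(doc: str) -> int:
        return len(query_terms & set(doc.lower().split()))

    return sorted(documents, key=score, reverse=True)[:k]


def generate_answer(query: str, passages: list[str]) -> str:
    """Assemble a prompt from the retrieved passages; a real system would send
    this prompt to a language model and return its response."""
    sources = "\n\n".join(f"Source {i + 1}: {p}" for i, p in enumerate(passages))
    prompt = (
        "Answer the question using only the sources below.\n\n"
        f"{sources}\n\nQuestion: {query}"
    )
    return f"[model output conditioned on a {len(prompt)}-character prompt]"


if __name__ == "__main__":
    corpus = [
        "Hypothetical passage fetched from one news page.",
        "Another hypothetical passage on the same topic.",
        "An unrelated document that should rank lower.",
    ]
    question = "What do the fetched passages say about the topic?"
    print(generate_answer(question, retrieve(question, corpus)))
```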

The Times further claims that Perplexity’s search engine has fabricated information and falsely attributed it to the publication, which damages its brand reputation.

“Publishers have pursued legal action against new tech companies for a century, commencing with radio, TV, the internet, social media, and now AI,” Jesse Dwyer, Perplexity’s head of communications, told TechCrunch. “Fortunately, it has never succeeded, or we would all be communicating via telegraph.”

(However, publishers have, at times, triumphed in or influenced significant legal battles over new technologies, leading to settlements, licensing frameworks, and court precedents.)

This lawsuit comes just over a year after The Times sent Perplexity a cease-and-desist letter demanding that it stop using Times content for summaries and other generated outputs. The publication says it has contacted Perplexity multiple times over the past 18 months, asking the company to stop using its content unless an agreement could be negotiated.

This is not the first legal conflict The Times has initiated with an AI firm. The Times is also suing OpenAI and its backer Microsoft, claiming the two trained their AI systems with millions of the outlet’s articles without offering compensation. OpenAI has countered by arguing that its use of publicly available data for AI training falls under “fair use,” and has leveled its own accusations at The Times, alleging the outlet manipulated ChatGPT to discover evidence.

That case remains ongoing, but a similar lawsuit against OpenAI competitor Anthropic could set a precedent for how fair use applies to AI training going forward. In that suit, in which authors and publishers sued the AI firm for using pirated books to train its models, the court ruled that training on lawfully acquired books can qualify as fair use, while using pirated copies infringes copyright. Anthropic later agreed to a $1.5 billion settlement.

The Times’ lawsuit intensifies the growing legal pressure on Perplexity. Last year, News Corp — which owns outlets like The Wall Street Journal, Barron’s, and the New York Post — made similar claims against Perplexity. That list expanded in 2025 to also include Encyclopedia Britannica and Merriam-Webster, Nikkei, Asahi Shimbun, and Reddit.

Other outlets, including Wired and Forbes, have accused Perplexity of plagiarism and of improperly crawling and extracting content from websites that have explicitly indicated they do not wish to be scraped. The latter claim was recently corroborated by internet infrastructure provider Cloudflare.

In its suit, The Times asks the court to order Perplexity to pay damages for the harm it has allegedly caused and to bar the startup from continuing to use its content.

The Times has shown it is willing to work with AI firms that pay fairly for its reporters’ work. Earlier this year, the outlet finalized a multi-year agreement with Amazon to license its material for training the tech giant’s AI models. Several other publishers and media companies have entered into licensing deals with AI firms, allowing their content to be used for training and featured in chatbot responses. OpenAI has secured agreements with the Associated Press, Axel Springer, Vox Media, The Atlantic, and more.

This article has been updated to include commentary from Perplexity.
