Feb. 18, 2025, 2:40 p.m.

The New York Times will let reporters use AI tools while its lawyers litigate AI tools

“Generative AI can assist our journalists in uncovering the truth and helping more people understand the world.”

As Walt Whitman once wrote, The New York Times is large. It contains multitudes.

One part of the company is suing OpenAI and Microsoft for training their large language models on Times content. It seeks “billions of dollars in statutory and actual damages” for the companies’ “use of The Times’s uniquely valuable works.”

But as Semafor reported Monday, the newsroom is on board with using AI in the story production process — some AI tools, at least. And the green-lit list includes models from…OpenAI and Microsoft. Max Tani:

The New York Times is greenlighting the use of AI for its product and editorial staff, saying that internal tools could eventually write social copy, SEO headlines, and some code.

In messages to newsroom staff, the company announced that it’s opening up AI training to the newsroom, and debuting a new internal AI tool called Echo to staff, Semafor has learned. The Times also shared documents and videos laying out editorial do’s and don’ts for using AI, and shared a suite of AI products that staff could now use to develop web products and editorial ideas.

The allowed external tools include “GitHub Copilot programming assistant for coding, Google’s Vertex AI for product development, NotebookLM, the NYT’s ChatExplorer, some Amazon AI products, and OpenAI’s non-ChatGPT API through the New York Times’ business account (only with approval from the company’s legal department).”

Swapping the ChatGPT interface for OpenAI’s API doesn’t change what the underlying LLM was trained on — which includes a huge amount of what the legal side of the Times argues is off-limits copyrighted material. Google’s NotebookLM is built on its Gemini models, and Google is facing lawsuits of its own over scraping copyrighted material to train them.
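
For what it’s worth, the API and the ChatGPT interface are two doors to the same family of models; the difference is contractual and administrative, not a different model under the hood. A minimal sketch of a direct API call, assuming the official openai Python client (the model name and prompt are illustrative placeholders, not anything the Times has disclosed):

    # Sketch: calling OpenAI's API directly instead of the ChatGPT app.
    # Assumes the official `openai` Python client; the model name and
    # prompt are illustrative placeholders.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "user", "content": "Suggest three SEO headlines for a story about AI in newsrooms."},
        ],
    )
    print(response.choices[0].message.content)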

And GitHub Copilot is a product of Microsoft, the Times’ other legal opponent. The Times even singled out Copilot for criticism in its lawsuit, saying it seeks “to free-ride on The Times’s massive investment in its journalism by using it to build substitutive products without permission or payment.” (Though to be fair, Microsoft has rebranded its AI tools roughly 384 times since December 2023, and that was a somewhat different product. It would be surprising if GitHub Copilot — a tool for programmers — were trained on viral Times recipes for green pea guacamole. It is facing similar lawsuits from other coders, though.)

Among the things the Times suggests using AI for, according to Tani:

…to generate SEO headlines, summaries, and audience promos; suggest edits; brainstorm questions and ideas and ask questions about reporters’ own documents; engage in research; and analyze the Times’ own documents and images. In a training video shared with staff, the Times suggested using AI to come up with questions to ask the CEO of a startup during an interview. Times guidelines also said it could use AI to develop news quizzes, social copy, quote cards, and FAQs.

For the record, I think these are fine journalistic uses of AI. Current LLMs are nowhere near accurate enough to reliably produce news copy meant for humans. They make stuff up far too often. But they can be extremely useful for analyzing documents, brainstorming ideas, summarizing texts, and a host of other tasks during the reporting and writing process, when a journalist can evaluate and refine the output.1 The new generation of “deep research” models looks much improved for a lot of journalism tasks, though it’s still slow and expensive. And they’ll keep getting better. A smart news organization should be open to using tools where they can help — and avoiding them where they can’t. That’s true no matter what your legal strategy is.
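
To make the document-analysis case concrete, here is a hedged sketch of the kind of workflow those guidelines describe, with the journalist reviewing everything the model returns. The client, file name, model, and prompts are all my assumptions, not the Times’ actual tooling:

    # Sketch: asking an LLM questions about a reporter's own document,
    # with a human evaluating the output before anything is used.
    # Assumes the `openai` Python client; the file path, model name,
    # and prompts are hypothetical.
    from openai import OpenAI

    client = OpenAI()

    with open("interview_transcript.txt") as f:
        transcript = f.read()

    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder
        messages=[
            {"role": "system", "content": "You are a research assistant for a reporter."},
            {"role": "user", "content": "Summarize the key claims in this transcript and flag anything that needs verification:\n\n" + transcript},
        ],
    )
    print(response.choices[0].message.content)  # a human checks this before it goes anywhere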

  1. However, I’d stay away from AI for one of the Times’ listed potential use cases: answering “How many times was AI mentioned in these episodes of Hard Fork?” LLMs are still pretty terrible at counting things, and I would definitely not trust the output on this one.
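
A question like that is better answered with a few lines of deterministic code than with a model. A minimal sketch, assuming the transcripts are plain text files (the file name is hypothetical):

    # Sketch: counting mentions exactly rather than asking an LLM.
    # The transcript file name is hypothetical.
    import re

    with open("hard_fork_transcript.txt") as f:
        text = f.read()

    print(len(re.findall(r"\bAI\b", text)))  # exact count of standalone "AI"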
Joshua Benton is the senior writer and former director of Nieman Lab. You can reach him via email (joshua_benton@harvard.edu) or Twitter DM (@jbenton).