Despite some high-profile cautionary tales, publishers have been announcing experiments with generative AI — many using OpenAI’s ChatGPT or similar tech — left and right.
But Wired is the first news outlet I’ve seen publish an official AI policy that spells out how the publication plans to use the technology.
The ground rules, published by global editorial director Gideon Lichfield last week, lead off with a set of promises about what the newsroom won’t be doing.

“It may seem counterintuitive for a publication like Wired to have a policy of mostly not using AI,” Lichfield told me. “But I think people appreciate both the transparency and the attempt to define clear standards that emphasize what quality journalism is about.”
Lichfield said questions around how journalists may use generative AI had been in the air since ChatGPT was released in November 2022 and that the news of CNET’s AI-written stories — you know, the ones that contained serious factual errors and plagiarized text — was “the accelerant” for developing Wired’s stance.
The policy went through several revisions. The original draft by Lichfield was vetted by senior members of his team, discussed with the entire newsroom during an all-hands meeting, and run by leadership at Condé Nast. I asked Lichfield what those internal conversations looked like. Were there, for example, any points of disagreement or unanimity?

“We had a bit of debate about whether it would be OK to use AI to edit a story, or write headlines, or brainstorm ideas, but on the whole everyone was very supportive of a stance that placed clear limits on what we would use it for,” Lichfield told me. “I think we all recognized that it simply isn’t very good for most of our purposes. I’m sure it will improve in some areas, but I believe a lot of people are overestimating what it can do.”
NO: Publishing editorial text written or edited by AI

“Wired does not publish stories with text generated by AI,” Lichfield wrote in the policy, adding, “This applies not just to whole stories but also to snippets — for example, ordering up a few sentences of boilerplate on how CRISPR works or what quantum computing is.”
The rule extends beyond articles to editorial-side email newsletters, but leaves the door open to using AI for marketing emails. (Lichfield says the marketing emails “are already automated” and that Wired will disclose if they start using AI-generated text.)
The reasons for this ban are “obvious,” according to Lichfield.
“The current AI tools are prone to both errors and bias, and often produce dull, unoriginal writing,” Lichfield writes in the policy. “We think someone who writes for a living needs to constantly be thinking about the best way to express complex ideas in their own words.”
The fact that an AI tool could produce plagiarized text was another factor. And the policy includes a warning to Wired staff and contributors: “If a writer uses it to create text for publication without a disclosure, we’ll treat that as tantamount to plagiarism.”
YES: Using AI to suggest headlines or social media posts

“We currently generate lots of suggestions manually, and an editor has to approve the final choices for accuracy,” Lichfield writes in the policy. “Using an AI tool to speed up idea generation won’t change this process substantively.”
YES: AI-generated story ideas

Wired has already done some “limited testing” to see if AI could help with the process of brainstorming story ideas. Some of the results were false leads or — maybe worse? — straight-up boring.
NO: AI-generated images or video

Art generated with tools like DALL-E, Midjourney, and Stable Diffusion is “already all over the internet,” Wired acknowledges in its policy. But it looks like a legal headache. Lawsuits from artists and image libraries like Getty Images make AI-generated art a no-go for Wired.
Wired stressed that it will specifically avoid using AI-generated images in place of stock photography.
“Selling images to stock archives is how many working photographers make ends meet,” Lichfield explains in the policy. “At least until generative AI companies develop a way to compensate the creators their tools rely on, we won’t use their images this way.”
Add “… for now” to everything. The policy notes that AI is evolving and that Wired “may modify our perspective over time.”
“My initial worry was that I’d been too conservative,” Lichfield told me. “I don’t want to close off experimentation, and we know these tools will evolve.”
But a small experiment gave him confidence in the policy.
Lichfield asked ChatGPT to suggest U.S. cities that a reporter should visit to report on the impact of predictive policing on local communities. He said it gave him “a plausible-looking list.” When he asked ChatGPT to justify the suggestions, he received more “plausible-looking” — if somewhat repetitive — text on the history and tensions around policing in each city.
Finally, Lichfield asked the tool when each city started to use predictive policing and asked ChatGPT to provide sources.
“It gave me a list of plausible-looking URLs from various local media,” Lichfield said. “Every single one 404’d. They were all made up.”
I went to ChatGPT to try to replicate the experiment. I knew it could include glaring errors in otherwise convincing text, but, surely, it hadn’t made up URLs when asked for sources?
Reader, it had. The outlets were real — The Los Angeles Times, The New York Times, NBC Chicago, The Baltimore Sun, Miami New Times, The Verge, The Washington Post, and The Commercial Appeal in Memphis. But the URLs were all broken for me, too.[1]
ChatGPT making up URLs isn’t a brand-new phenomenon. In this case, our best guess is that ChatGPT can’t search for articles live, so it’s doing its best impression of what, say, a link in The Baltimore Sun would look like. Asking the OpenAI-powered version of Bing the same set of questions on Monday generated real links that take you to real outlets.
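If you want to run the same sanity check on a chatbot’s citations, here is a minimal sketch of one way to do it in Python. This is my illustration, not a tool Wired or Lichfield used; the URLs are placeholders and the third-party requests library is assumed to be installed.

```python
# Minimal sketch: spot-check whether URLs supplied by a chatbot actually resolve.
# The URLs below are placeholders, not the ones ChatGPT produced.
import requests

candidate_urls = [
    "https://www.example.com/predictive-policing-story-1",
    "https://www.example.com/predictive-policing-story-2",
]

for url in candidate_urls:
    try:
        # HEAD keeps the check lightweight; some servers reject HEAD,
        # so fall back to GET when that happens.
        response = requests.head(url, allow_redirects=True, timeout=10)
        if response.status_code in (403, 405):
            response = requests.get(url, allow_redirects=True, timeout=10)
        status = response.status_code
    except requests.RequestException as exc:
        status = f"request failed: {exc}"
    print(f"{status}\t{url}")
```

A 404, or a connection error on a domain that doesn’t exist, is a strong hint that the link was invented, which is exactly what Lichfield and I saw.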
[1] “Yes, here are some sources for when each city began using predictive policing: […] These sources should provide more information about the specific dates when each city began using predictive policing technology.”