The Associated Press has updated its standards — and will publish 10 new AP Stylebook entries — to caution journalists about common pitfalls in coverage of artificial intelligence.
When the AP became the first major news organization to strike a deal with OpenAI, the ChatGPT maker agreed to pay to train its models on AP news stories going back to 1985. The joint announcement also said the deal would allow the AP to “examine potential use cases for generative AI in news products and services.”
But while the Associated Press has used AI since 2014 to automate some “rote tasks” — think corporate earnings reports, sporting event recaps and press conference transcriptions — the standards unveiled on Wednesday sound a skeptical note about using generative AI for journalism’s most essential work. As Amanda Barrett, vice president for standards and inclusion at AP, wrote:
Accuracy, fairness and speed are the guiding values for AP’s news report, and we believe the mindful use of artificial intelligence can serve these values and over time improve how we work. However, the central role of the AP journalist — gathering, evaluating and ordering facts into news stories, video, photography and audio for our members and customers — will not change. We do not see AI as a replacement of journalists in any way.
The AP’s standards around AI now include this guidance:
While AP staff may experiment with ChatGPT with caution, they do not use it to create publishable content. Any output from a generative AI tool should be treated as unvetted source material. AP staff must apply their editorial judgment and AP’s sourcing standards when considering any information for publication.
In accordance with our standards, we do not alter any elements of our photos, video or audio. Therefore, we do not allow the use of generative AI to add or subtract any elements.
We will refrain from transmitting any AI-generated images that are suspected or proven to be false depictions of reality. However, if an AI-generated illustration or work of art is the subject of a news story, it may be used as long as it is clearly labeled as such in the caption.
We urge staff to not put confidential or sensitive information into AI tools.
We also encourage journalists to exercise due caution and diligence to ensure material coming into AP from other sources is also free of AI-generated content.
Generative AI makes it even easier for people to intentionally spread mis- and disinformation through altered words, photos, video or audio, including content that may have no signs of alteration, appearing realistic and authentic. To avoid using such content inadvertently, journalists should exercise the same caution and skepticism they would normally, including trying to identify the source of the original content, doing a reverse image search to help verify an image’s origin, and checking for reports with similar content from trusted media.
Ultimately, Barrett writes, “if journalists have any doubt at all about the authenticity of the material, they should not use it.”
I asked how journalists can best exercise due caution, given concerns about the reliability of AI detectors. (OpenAI quietly shut down its own tool, AI Classifier, over its “low rate of accuracy” last month.)
“I would say that a good example comes from the polling and investigative teams,” Barrett said in an email. “They often ask sources where the data they have is coming from, who put it together and to see the full data so they can draw their own conclusions. That kind of diligence is going to be standard for so many other journalists on so many other beats.”
Separate updates to the AP Stylebook — which will be published on Thursday — tell journalists to “beware far-fetched claims from AI developers.” Journalists should “avoid quoting company representatives about the power of their technology without providing a check on their assertions, and avoid focusing entirely on far-off futures over current-day concerns about the tools,” according to the updated guidance.
A new entry for artificial intelligence also cautions journalists to avoid “language that attributes human characteristics to these systems, since they do not have thoughts or feelings but can respond in ways that give the impression that they do.” Reporters should also refrain from referring to artificial intelligence with gendered pronouns. (If New York Times tech columnist Kevin Roose can refer to the Sydney chatbot persona that creepily declared love for him as “it” throughout this classic piece, you can too.)
Hallucinations? Falsehoods? Lies? In a new “generative artificial intelligence” entry, the AP Stylebook also provides guidance on how to refer to the inaccurate — if confidently stated — answers AI can spit out.
Read more at the AP blog.