July 13, 2023, 1:40 p.m.
Audience & Social

Want a better comments section? Graham Media Group thinks AI can help with that

Graham Media Group developed the idea for a “first dibs” comment bot through NYC Media Lab’s AI & Local News Challenge.

If you’ve ever perused the online comment section of a news organization, you’ll know that despite their intended purpose of extending conversations, they can devolve into cesspools pretty quickly. In recent years, many news organizations have shut them down.

But not Graham Media Group.

In fact, the Detroit-based local TV broadcaster — which operates seven stations in four states, as well as digital media and technology development group Graham Digital, marketing company Omne, and Social News Desk — is doubling down where others are retreating. Now, they’re experimenting with an approach that might seem counterintuitive, even ironic, for the goal of sparking productive human conversations: using AI.

The organization’s interest in prioritizing thoughtfully moderated comment sections dates back to 2016, when toxicity in comment sections was skyrocketing, according to Michael Newman, Graham Media Group’s director of transformation. “We’ve seen that comments are really crucial for active communities,” he told me. The organization wanted to keep conversations on its website, not outsource them to social media. Leadership found that commenters were typically more loyal users, and more likely to return to the site daily — so the organization nicknamed them “highly engaged users” and paid attention to which stories drew their interest.

The idea for an AI application for comments came later. The organization already had data to show that when a reporter put a prompt in the comments — a question about an issue at the heart of a given story — “people are more likely to engage,” so this has been a best practice for the organization since 2017 or 2018, Newman said. “You can guide that conversation a little bit into perhaps a more productive space, perhaps a less divisive space,” he added.

When the organization was brainstorming ideas for NYC Media Lab’s AI & Local News Challenge, which provides funding, guidance, and support to teams developing local-news-oriented AI projects, an AI comment prompt generator rose to the surface as a natural extension of the work the organization already does with its comments section. Graham Media Group was accepted into the 12-week program for spring 2023 alongside five other news organizations working on their own AI innovation projects. Four members of the digital team, which is based in Detroit, developed the model: audience development lead Dustin Block, early developer Marcus Brown, data scientist Araz Hashemi, and head of product Kristen Tebo.

This “first dibs” bot, developed using OpenAI’s GPT-3.5 model, can digest a news article and phrase a question based on the information processed, Newman said. “In general…it does find the theme of the story and ask a question that is pretty open-minded and open and positive,” making for consistently good prompts. The organization is interested in eventually developing its own AI model, trained exclusively on its own content, but Newman said the generic model has been “really easy to train.” The team discovered that “the phrasing of the question frames the answer of the question,” he said. For instance, including “please” in a prompt for a question “substantially changed the results,” and explicitly asking the AI to keep questions positive “really helps the conversation.”
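
For readers curious what that workflow can look like in practice, here is a minimal sketch of a prompt-generation call using OpenAI’s Python SDK and gpt-3.5-turbo. The system prompt and function names below are illustrative assumptions that reflect the constraints Newman describes (one open-ended question, positive tone, no asserted facts); they are not Graham Media Group’s actual prompt or code.

```python
# Minimal sketch (not Graham Media Group's actual code): turn an article into one
# open-ended, positively framed discussion question using the OpenAI Python SDK.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Illustrative system prompt reflecting the constraints described in the story:
# a single open-ended question, positive tone, no asserted facts.
SYSTEM_PROMPT = (
    "You write discussion prompts for a local news comment section. "
    "Please respond with one open-ended question about the main theme of the "
    "article. Keep the tone positive and constructive. Ask a question only; "
    "do not assert any facts or opinions."
)

def generate_comment_prompt(article_text: str) -> str:
    """Return a single AI-generated discussion question for the given article."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": article_text},
        ],
        temperature=0.7,
    )
    return response.choices[0].message.content.strip()
```

In a setup like this, the details Newman mentions (adding “please,” explicitly requesting positivity) live entirely in that system prompt, which is why small wording changes can shift the output so noticeably.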

One benefit of this application of AI: Because the bot is instructed to phrase questions, not statements, it’s less prone to hallucination. “It hasn’t asserted any facts, so it hasn’t given any wrong information,” Newman said. The organization hasn’t had issues with hallucinations or inaccuracy based on “thousands of stories and questions” tested so far with the most current series of prompts, he specified.

The team has also been surprised by how consistently the bot generates usable questions on sensitive stories.

During the Graham team’s presentation of its project at the AI & Local News Challenge Demo Day last month, Dustin Block explained that on sensitive stories “that do not lend themselves well to a productive conversation,” Graham typically either closes comments or allows users to fill out a form to send private questions to the newsroom. Yet in challenging the bot with lightning rod stories, “we were very surprised how the bot took the tone of the story in general, and found the point of opinion, and just asked a question about that,” Newman said. He brought up an example of a story about a school lockdown due to a violent threat; the AI-generated question for that story was, “How important do you think it is for authorities to take swift action and thoroughly investigate potential threats to the safety of students and faculty in schools?”

“Kind of generic,” Newman allowed. “Might be rewritten if a human looked at it — but still perfectly appropriate for a comment bot in the story.”

On another sensitive story about a heroin overdose, the bot generated the question: “What steps should law enforcement take to ensure the safety of both officers and individuals who may be experiencing a drug overdose during an arrest?”

“Not bad — probably better than our human editors would do, certainly faster,” Block said during the presentation. “It’s sort of repeatedly been able to demonstrate that it can take these more difficult stories and at least not be wildly out of line with what we’re looking for.”

Based on tests of the bot so far, feedback has led the team to ask the bot for five different prompts instead of just one and let the writer pick the prompt they like best, Newman said. He’s interested to see whether this tweak ultimately helps drive more human engagement.
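
Building on the sketch above (and reusing its hypothetical client and SYSTEM_PROMPT), that five-candidate tweak maps onto a single parameter in the same API call: the n argument requests multiple independent completions, and the producer picks from the returned list. Again, this is an illustration under assumed names, not Graham’s implementation.

```python
def generate_candidate_prompts(article_text: str, n: int = 5) -> list[str]:
    """Return several candidate questions so a producer can choose the best one."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": article_text},
        ],
        n=n,              # request n independent completions in one call
        temperature=0.8,  # slightly higher temperature for variety across candidates
    )
    return [choice.message.content.strip() for choice in response.choices]
```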

Currently, comment moderation is a component of the web producer job at Graham Media Group, but “everyone in the newsroom can moderate,” Block told me in a follow-up email. Approximately 10 people across the company “guide our moderation,” he added, and Graham also uses an outside service through its comment provider, which reviews comments 18 hours per day. The organization gets about 120,000 comments per month.

Newman did not necessarily expect the bot to reduce producer labor, though he thinks it could become a useful brainstorming tool; he believes it can increase engagement in addition to, not instead of, the content moderation work human staff does. He does not want AI responding to comments, for instance, at this point. He also thinks this use of AI could help surface questions readers might have about a story, spots where a reporter could add useful context.

While the content the bot can generate is promising and has the organization excited, leadership and the team want to workshop what to call the tool, because they have found that people have a negative association (surprise) with the word “bot.” Newman said Graham Media Group intends to be very transparent about the comment generator being an AI, whatever name they choose; the team is considering names along the lines of “AI Prompter” or another name that includes AI but does not mention “bot.” Team members also plan to assign a different name to the comment prompter used on each of the company’s six local sites.

Graham Media Group initially planned to roll out use of the AI comment prompter on AP wire articles, which cover national issues. There is currently limited engagement on these stories, and Block noted that “no human in our newsroom is going to prompt conversation…on most of our wire stories,” so the team sees this as a natural starting point to test whether AI can increase the number of users writing in and doing so productively. Newman said leadership thought this would be a good approach to avoid “stepping on existing producers” who do more involved comment moderation work in more active sections, such as local news. However, he said there’s been more enthusiasm for use of the tech across the newsroom than he expected, which might change their approach. “We thought that perhaps this first dib comments would actually create a conflict” among staff, he said. “But I would say that the feedback we’ve gotten now from the news organization is [staff] actually are kind of excited about it on the local side, and want to see the engagement there.” So, the organization might roll it out on local stories right away, too.

The first dibs comment bot will likely go live in August, Newman said. The company plans to “start it off as a small test on specific stories that are being well monitored by multiple stakeholders in the organization, including reporters and journalists, news directors, editors, and obviously the legal department as well.” They may launch this test just on one site, but due to general enthusiasm for the project, they’re more likely to launch it across all six of their sites at once.

Once it’s launched, they’ll monitor progress toward set goals closely, Newman said — specifically, “increasing engagement, increasing positive engagement, and increasing the number of people who are logged in on an individual story.” In the NYC Media Lab presentation, the team set three targets to hit by Oct. 1: grow logged-in users by 10%, grow user registrations by 10% (doubling average revenue per user), and, in support of the first two goals, increase the number of comments posted by 25%.

While the primary goal of engaging more users in the comments is engagement for its own sake, as a social good (and an avenue for newsroom feedback and tips), there’s another benefit Graham Media Group sees from a business perspective. Driving more engagement is a component of its first-party data strategy, especially as the Google-driven “cookiepocalypse” — the loss of third-party data about users, and a significant challenge for newsrooms — approaches in 2024. The company requires users to register and log in to comment on stories. Driving engagement via comments, Newman said, will lead users to register and to return to the site again and again — making them easier to identify, and therefore more valuable to advertisers. “We…know that those users who do comment are actually our highly valuable users,” Newman said. “And that was the case even back when comments would have been considered vulgar and hostile.”

“The connections between the users who commented and the people that the advertisers wanted in the programmatic marketplace were, like, one and the same,” he added.

Newman stressed that Graham Media Group “has a really strict policy” limiting what information they share with advertisers — they don’t share any information such as email addresses, he said. But “what we do do is we say, ‘Hey, this is an audience who’s really interested in local news, and you would be really smart — if you want people in Detroit, hit these people.’” While advertisers could buy geolocation data from anywhere, “I think that you’ll have a much better experience if you target these people who actually are commenting on a story about a local school, or about a local new business that’s opening up,” he said, explaining Graham’s pitch to advertisers.

Having better data about the user, he argued, benefits the user as well, because it allows Graham to better understand what content is most interesting to them. Comments are one of the engagement metrics Graham relies on to understand what content is popular.

Beyond the first dibs comment tool, the organization’s companywide AI task force is working on developing a comprehensive policy for AI use across its newsrooms. “The primary goal of the policy is to communicate to our employees the do’s and don’ts of AI,” Newman said. (And it’s not coming a moment too soon — in a survey completed by more than 40% of employees during development of the policy, they found that over 30% of respondents already use AI for personal or professional purposes.) “What we’re really looking for is just to give people those guardrails, so that way, they don’t put themselves or the company at risk,” Newman said. The comment bot and NYC Media Lab experience, he added, contributed to “creating some real urgency behind the policy side of the conversation.”

“We believe wholeheartedly that our [human] journalists are insanely important. And [they] will never be replaced by an AI bot,” Newman said. While the policy will clarify and codify prohibited uses of AI, “we also want people who are using AI for story prompts [like the first dibs comment generator to be] able to use that and do it safely and confidently without feeling like they’re kind of lost in the corner of the company.” They plan to release their AI policy companywide in the next few weeks.

For Newman, the comment bot’s output must be virtually perfect to be worthwhile. In his view, even two bad prompts in a thousand would degrade trust too much for the tool to be usable. “We don’t want to put this out in the world if it’s going to do something that even offends the person who’s reviewing it before it posts up on the site,” he said.

“One bad question can ruin one user’s trust of our brand, of our site — something that we’ve worked so hard for,” he added. “Building the trust is way harder than losing the trust.”

Sophie Culpepper is a staff writer at Nieman Lab. You can reach her via email (sophie@niemanlab.org) or Twitter DM (@s_peppered).