Nieman Journalism Lab

Hello! Here's today's news from the Lab:

Are people more likely to accurately evaluate misinformation when the political stakes are high? Haha, no

Imagine you’re walking down the street and a random guy asks you to solve a math problem. A complicated one, but one you know how to solve — it’d just take several minutes of some fairly serious thinking on your part. Would you do it?

Maybe you love math and want nothing more than a random afternoon word problem. But I’d wager most people would just mumble “sorry” and keep on walking. You have the capacity to solve this rando’s problem, but you don’t really have any incentive to.

Now imagine the same scenario — but this time, the guy says he’ll pay you $500 if your answer is correct. Suddenly, there’s a potential return on your investment of time, so you’re more likely to offer up the mental energy.

This is a fairly established finding in situations where someone is asked to throw some cognitive effort at a problem. Giving someone a meaningful incentive to solve a mental problem can lead them to work harder and improve their chance of getting it right. That’s also true for a very specific kind of mental problem: figuring out whether to believe some random headline you see on social media. For example:

— This 2024 study asked 3,999 people to participate in a mock social network where they were tasked with determining whether or not a post was scientifically sound. The most effective way to increase subjects’ accuracy, and the only way to get them to head over to a search engine to do some research on the topic at hand? Tell them they’ll get more money if they’re correct.

— Or this other 2024 study, in which subjects were asked to evaluate criminal evidence from two witnesses to a crime — one of whom subjects were informed was lying. Despite that information, many believed both witnesses — unless they were promised an extra €5 if they kept things straight. That small amount was enough to “significantly reduce” subjects’ mistakes.

— Or this 2008 paper, which asked 1,200 Americans questions of political knowledge. Telling people they’d get $1 for each correct answer increased their accuracy by 11%.

— Or this 2015 paper, which asked Democrats and Republicans a series of questions about economic performance under past presidential administrations. Unsurprisingly, people painted more positive pictures of their own party’s record. But offering a financial incentive that amounted to just 17 cents per correct answer reduced the partisan gap in answers by more than half.

There’s a consistent thread here: If people don’t see a reason to bring their full mental capacity to bear on a question, they probably won’t. We’re lazy! But when the stakes are a little higher — when there’s a little more reason to bring our A-game — we can do better.

Let’s transfer that idea into politics. After all, there’s usually no direct reward for sussing out a fake headline in your News Feed, or for detecting when a claim about a politician edges from plausible to laughable. In day-to-day life, a single bit of political wrongness is unlikely to impact your life one whit. So why summon up the brain power?

But what if the stakes were suddenly higher — say, just hypothetically, if it were presidential election season and the country were being presented with two wildly different potential futures? Would people then summon up more of their mental capacity to separate good information from bad? Pundits have long said most voters only “get serious” about an election a few weeks before the big day — maybe that new seriousness means a stricter adherence to the facts?

That’s one of the issues addressed by a new paper by Charles Angelucci, Michel Gutmann, and Andrea Prat — of MIT, Northwestern, and Columbia, respectively. Its title is “Beliefs About Political News in the Run-up to an Election”; here’s the abstract, emphasis mine:

This paper develops a model of news discernment to explore the influence of elections on the formation of partisan-driven parallel information universes. Using survey data from news quizzes administered during and outside the 2020 U.S. presidential election, the model shows that partisan congruence’s impact on news discernment is substantially amplified during election periods. Outside an election, when faced with a true and a fake news story and asked to select the most likely true story, an individual is 4% more likely to choose the true story if it favors their party; in the days prior to the election, this increases to 11%.

Did you catch that? People aren’t more likely to evaluate accuracy correctly during the fever pitch of an election season — they’re less likely, and by a meaningful margin.

Angelucci, Gutmann, and Prat base their study on YouGov survey data from a total of 10,094 people. These surveys, from between 2018 and 2022, presented participants with both real and fake news stories and asked them to identify the ones most likely to be true. Importantly, people had an incentive to be accurate: They were promised $1 for each correct answer. The authors separated out the surveys conducted just before the 2020 presidential election from the others and compared how people’s answers differed.

Here are some of the true statements:

  • The U.S. Senate acquitted Trump of impeachment charges.
  • President Trump nominated Brett Kavanaugh to the U.S. Supreme Court.
  • The U.S. government was partially shut down in a fight over Trump’s border wall with Mexico.

And some of the false statements:

  • Attorney General Barr released text message from Special Counsel prosecutor Robert Mueller: “We’re taking down Trump.”
  • President Trump disparaged the Puerto Rican governor and statehood movement, tweeting that Puerto Rico was “a small island filled with savages.”
  • President Trump said that former President Obama wrote the emoluments clause of the Constitution.

First, let’s look at their findings on partisanship:

This chart shows how the partisan nature of people’s responses differed depending on the survey’s timing. The left column shows data from the non-election-season surveys and finds that — depressingly, if not surprisingly — Republicans are more likely to give Republican-biased answers and Democrats are more likely to give Democratic-biased answers. (That is, they’re more likely to ascribe truth to fake stories when they make their political opponents look bad.)

The right column shows the just-before-the-election data. The bias gets amplified on both sides — though, as is often the case in U.S. politics these days, the effect is more pronounced on the political right.

Strikingly, the gap between Democrats and Republicans in the average partisan reflection of their selected statements more than doubles during the presidential election compared to outside of the election period. This finding helps explain the varied results in the literature and suggests a dynamic approach to studying “parallel information universes.”

To try to eliminate alternate explanations for the data, the authors construct a “model of news discernment” that controls for variability between surveys and the amount of time between when a news story broke and when people were being asked about it. (It’s worth noting that people were not especially good at these little quizzes. On a given survey, the average person got 2.61 correct out of 4.)

Our estimation exercise confirms the result suggested by the raw data: individuals’ beliefs when assessing the truthfulness of political news become significantly more partisan during election periods.

To quantify this finding, consider a thought experiment where an average partisan individual is presented with a pair of recent news stories — one true and one false, with the false story being neutral in its partisan orientation. Our model estimates predict that, outside of an election period, the individual is 4% more likely to select the true story as the most likely to be true if it reflects favorably on their preferred party compared to unfavorably. However, during an election, this difference increases to nearly 11%.

When asked about a recent news story on a survey conducted outside election season, people were 7 percentage points more likely to call a story true if it was “very favorable” to their party than if it was “very unfavorable.” But just before an election, that gap more than doubled to 17 percentage points.

All in all, our model estimation exercises corroborate the pattern hinted at by the raw data presented in the introduction: even after accounting for stories’ age, quiz difficulty and differences in partisan dispersion of news stories, we find that partisan congruence shapes individuals’ beliefs about political news far more strongly during election periods than non-election periods. Elections, it seems, amplify the influence of partisanship on the perception of truth.

In a sense, it all comes down to what you mean by “high stakes.” Yes, a presidential election is high stakes for the country at large. But believing something that supports your ideological priors is high stakes for your ego — especially at the height of an all-consuming campaign. Our brains want to believe the best about our side and the worst about the other. And it seems that overrides any extra incentive for accuracy at the moment our votes matter most.

Readers prefer to click on a clear, simple headline — like this one

In an era when people trust news less than ever, how can journalists break through and attract the attention of average people to provide information about their communities, the nation and the world?

By not complicating things.

Our research, published in Science Advances, shows that simple headlines significantly increase article engagement and clicks compared with headlines that use complex language.

In our research, typical news readers preferred simple headlines over complex ones. But importantly, we found that those who actually write headlines — journalists themselves — did not.

We first used data from The Washington Post and Upworthy to see how language features, such as word length and how common a word is, changed how many people clicked on an article’s headline. These datasets included over 31,000 randomized experiments — also known as A/B tests — that compared two or more headline versions of the same underlying article to determine which one generated the most clicks.
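The click-rate comparison behind those A/B tests can be sketched in a few lines. This is a minimal illustration with made-up click counts (not data from the Post or Upworthy datasets), using a standard two-proportion z-test to ask whether one headline variant really outperformed the other:

```python
from math import sqrt, erf

def two_proportion_z(clicks_a, views_a, clicks_b, views_b):
    """Compare the click-through rates of two headline variants."""
    p_a = clicks_a / views_a
    p_b = clicks_b / views_b
    # Pooled proportion under the null hypothesis that both CTRs are equal
    p = (clicks_a + clicks_b) / (views_a + views_b)
    se = sqrt(p * (1 - p) * (1 / views_a + 1 / views_b))
    z = (p_a - p_b) / se
    # Two-sided p-value via the normal CDF, Phi(x) = 0.5 * (1 + erf(x / sqrt(2)))
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return p_a, p_b, z, p_value

# Hypothetical counts: a simple headline vs. a complex headline
ctr_a, ctr_b, z, p = two_proportion_z(clicks_a=480, views_a=10_000,
                                      clicks_b=400, views_b=10_000)
print(f"simple CTR={ctr_a:.3f}, complex CTR={ctr_b:.3f}, z={z:.2f}, p={p:.4f}")
```

The studies described above effectively run this comparison at scale, across more than 31,000 such experiments.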

Headlines with more common words — simple words like “job” instead of “occupation” — shorter headlines, and those communicated in a narrative style, with more pronouns compared with prepositions, received more clicks. For example, The Washington Post headline, “Meghan and Harry are talking to Oprah. Here’s why they shouldn’t say too much” outperformed the alternative headline, “Are Meghan and Harry spilling royal tea to Oprah? Don’t bet on it.” This example illustrates how sometimes a more straightforward headline can generate more interest.
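The language features mentioned above (word length, word count, pronouns versus prepositions) can be approximated with a toy scoring function. This is an illustrative sketch, not the authors' actual measure; the word lists here are small and hand-picked:

```python
def headline_features(headline: str) -> dict:
    """Rough proxies for headline simplicity: word count, average word
    length, and counts of pronouns vs. prepositions.

    Illustrative only: the word lists below are tiny, hand-picked samples.
    """
    pronouns = {"i", "you", "he", "she", "we", "they", "it",
                "me", "him", "her", "us", "them", "his", "their"}
    prepositions = {"of", "in", "on", "at", "by", "for", "with",
                    "about", "against", "between", "through", "to"}
    # Strip surrounding punctuation and lowercase each word
    words = [w.strip(".,!?'\"").lower() for w in headline.split()]
    return {
        "word_count": len(words),
        "avg_word_length": sum(len(w) for w in words) / len(words),
        "pronouns": sum(w in pronouns for w in words),
        "prepositions": sum(w in prepositions for w in words),
    }

simple = headline_features("Meghan and Harry are talking to Oprah. "
                           "Here's why they shouldn't say too much")
print(simple)
```

A headline scoring shorter average word length and more pronouns than prepositions would count as "simpler" under the pattern the researchers describe.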

In follow-up laboratory experiments, we found that typical news readers focused more on simple rather than complex headlines because the writing was easier to understand. When journalists participated in the same experiments, they didn’t show any preference for simple headlines over complex ones. Put differently: Those who write the news appear to be less responsive to simple writing than regular audiences.

Generations of communication consultants have advised that communicators follow the rather crass acronym KISS: Keep It Simple, Stupid. We suggest a modified version for journalists. Since “Keep It Simple, Journalists” — KISJ — isn’t all that simple, we propose: Keep It Simple, Staffers. Simplicity increases the number of people who click on a news story headline and improves reader recall of the material in the article. Most importantly, simplicity boosts reader engagement, such as how much attention readers pay to the information.

Why it matters

News outlets that appear ahead of the curve have already implemented KISS strategies. For example, Ezra Klein, a journalist who co-founded the explanation-focused news site Vox, recommends journalists avoid writing for their editors.

As our work finds, writers and editors respond differently to complexity than people who consume the news. Therefore, one way for journalists to avoid the writing-for-editors problem is to simplify the writing with readers in mind: Use shorter words, write shorter sentences, and use more everyday words instead of complex alternatives. Work that is more approachable will reach the widest audience and garner the most engagement.

Writing simply could have additional effects beyond engagement, too. Information has never been more abundant, and yet readers are consistently in search of credible news providers. One potential pathway to improve how people think and feel about news is through simplicity. Since simple writing has been linked to increased perceptions of trust and warmth over complex writing, news providers may want to think deeply about word choice when creating their next article or broadcast.

Simplicity in headline writing matters both because the news market is extremely competitive and because it reduces a barrier between the public and important information. Our research does not suggest traditional news sites should become “clickbait.” Rather, it suggests that if headlines become more accessible to average news readers, they will be more effective for engagement and, hopefully, a more informed public.

David Markowitz is an associate professor of communication at Michigan State University. Hillary Shulman is an associate professor of communication at Ohio State University. Todd Rogers is Weatherhead Professor of Public Policy at the Harvard Kennedy School. This article is republished from The Conversation under a Creative Commons license.

Illustration via Midjourney.
