Our inboxes are full of them — press releases, pitches, and other media calling some scientific event “a breakthrough,” “a game-changer,” or “a paradigm-shifter.” Scientists, investors, and analysts flood our Twitter feeds, cheerleading a preprint or singing some company’s praises, even when there is little to no data to back up those claims.
Figuring out whether something is newsworthy can be hard. But as science journalists, we need to examine these statements and decide: Is this worth covering? If so, how do we cover it objectively, without accidentally becoming a mouthpiece for hyperbolic claims?
What’s at stake is significant. Information comes at us like a fire hose on full blast, and social media algorithms have made it easy for lies to spread faster than truth. Claims that run counter to the science, for example, continue to sow doubt about the causes of climate change. And misinformation problems have only worsened during the pandemic: In a recent Kaiser Family Foundation poll on false statements about Covid-19 vaccines, researchers found that 78% of people either believed or weren’t sure about at least one of the claims. For journalists on tight deadlines, sifting fact from fiction can sometimes feel impossible.
But coverage lends credibility, which matters immensely to readers. Some of the things we write can profoundly affect people’s actions, especially in health and medicine, says Rosie Mestel, the executive editor of Knowable Magazine. “There are a lot of people who are desperate and very sick,” she says, “and you have to be very, very careful that you’re not going to be misleading people and overplaying things.”
To cut through the murkiness and hype, science journalists need to vet the information and sources they come across and be on the lookout for red flags. Also essential is understanding our own biases — what we wish to be true, and how that plays into our decision making. Here, both skepticism and self-awareness can be key.
Journalists have the power to tell or not tell a story — and how we dissect claims plays into that power, says Ashley Smart, a physics journalist who is the associate director of the Knight Science Journalism Program at MIT and a senior editor at Undark. “We owe it to our readers and to the general public, and even to our sources, to be thoughtful in what we decide to cover, and to make sure that it’s worthy of the platform that we’re giving it,” Smart says.
Information comes to us in many ways: press releases, videos, reader tips, research papers, conference presentations and posters, and more. But they don’t all package data and information in the same way.
Press releases are just as much about getting attention for the institution, the company, or the researcher as they are about the research, says Janet Stemwedel, a philosophy professor at San José State University who has written about evaluating claims in research. Many nonprofits and institutions use press releases and in-house media to raise funds, so the tone and language will almost always be optimistic and positive. They may even oversell the value of the research.
Jonathan Wosen, a biotech reporter at The San Diego Union-Tribune, laughed when we discussed a recent example: a headline on a press release that gushed about “breakthrough pre-clinical data.”
That’s usually an oxymoron, he says: “Preclinical” means the experiments were performed in animals or even cells. What works in animals doesn’t often work in humans. As Mestel says, scientists have often cured mice of cancer or destroyed a plate of cancerous cells with some treatment, only to have the treatment fail in people.
Press releases are one form of communication that can be rife with bias, but they don’t stand alone. Here are some general tips for evaluating information in your quest to report objectively.
Wosen relies on a few questions to help him parse the many, many health and medicine pitches and press releases in his inbox, but the questions apply to any information you are evaluating.
Many of these questions can be answered by looking directly at the data the information is based on. Smart does this to dig out the quantitative truth behind the qualitative claims in those press releases — for example, the numbers behind phrases such as “vastly improved x, increased y, greater efficiency of z.” It’s not always a straightforward process: Journalists may have to do a little math to figure out exactly how much something changed (converting a touted relative improvement into the absolute difference it represents, for instance), and to determine whether that change is really significant and meaningful.
Sometimes, though, we don’t have a study or paper to look at. During the Covid-19 pandemic, for instance, drug companies and diagnostic test makers, not to mention politicians and nongovernmental organizations, have made statements about products and research without releasing data or providing support for their claims. The data from companies will sometimes come out later, in the form of a peer-reviewed paper or an earnings call, but the news is breaking now, and journalists have to decide whether and how to cover it. Even in the absence of detailed data, Stemwedel says, reporters can still ask study authors how they designed, conducted, and analyzed their experiments. Sometimes, a follow-up story might even be warranted after the full data are released.
As Wosen notes, one of the best ways to confirm the validity of a claim, or the newsworthiness of a piece of information, is to reach out to an expert who wasn’t involved in the work. “Those experts can sometimes save you a lot of trouble and help you identify whether something’s a story or not,” he says.
Smart says that when he was at Physics Today, he and his colleagues routinely reached out to scientists who were not part of the research to ask what amounts to two basic questions: Are people in the field calling this work important? Will whatever claim the scientists are making stand the test of time? “We basically do our own little peer review,” Smart says. This type of vetting is standard practice at other science publications.
Finding these folks can be challenging for a reporter new to a beat, or for a journalist covering something outside their usual expertise. Smart says one place to start is the references at the end of the paper, if one is available. I’ve found that, paper or no, a quick call to my local university can also turn up people who can help. So can searching through older news stories on the same topic. I frequently turn to my colleagues as well. When I worked at Chemical & Engineering News, it was common for us reporters to ping one another for sources. Not all colleagues share, but many do.
Some of these early assessments will turn up disappointing answers: Maybe the story just isn’t there. If that’s the case, Mestel says, “it’s O.K. to abandon ship.”
As journalists, we have to ask tough questions, dig for information, and even question our own belief systems as we interview, research, and write. Our own biases can influence our eagerness to cover something, whether it’s a fascination with new technology and gadgets or a love of animals that makes a dire report on oil spills hard to resist. We need to do gut checks, contextualize the evidence provided, and add caveats and nuance to temper expectations.
For example, journalists love reporting on foods like chocolate, says Alice Lichtenstein, a nutrition professor at Tufts University who helps dispel myths about food. But nutrition reporting is often full of holes, and journalists frequently bring too little skepticism to their coverage of food research. It’s hard to pin down why, but she thinks it has something to do with the idea that because we all eat, we all think we are experts on it.
Another common reporting mistake can occur when journalists don’t fully understand the nature of the study they are covering. Many climate change studies, for instance, rely on models of different events, and the researchers draw their conclusions from those models. Not understanding how the models work, and what their shortcomings are, can lead to overselling or underselling the research. And in biomedical or clinical research, it’s important to distinguish between interventional studies, in which researchers change people’s behaviors or treatments and look for effects, and observational studies, in which they just look for patterns in what people are already doing. “With an intervention study, it’s essentially cause and effect, where with an observational study, you just look at associations,” Lichtenstein says.
Journalists will always benefit from a better understanding of statistics and of study-design issues like sample size. (Bigger studies are generally better.) Outside experts, too, can help when more complex statistical methods crop up; Wosen has developed a relationship with a source who at times weighs in on what he’s considering reporting on. Lichtenstein says she has often served that expert-evaluator role, and sometimes she has told reporters the study isn’t newsworthy.
And of course, it’s important to navigate conflicts of interest (COIs): Ask sources who funds the research. This is especially relevant to climate change and environment reporting, where organizations with a stake in the outcome of climate change mitigation can be prone to hyperbole.
Sometimes, a COI becomes part of a story. I once wrote a story about weight-loss drugs in which one of the academic researchers I spoke to had previously worked for a company discussed in the piece. I chose to keep him in the story and highlight that relationship because he is truly one of the experts in the field, and I made sure to include his comments on the field at large, as well as on other companies.
But sometimes, a COI means a source has to be excluded because they can’t be objective. Stemwedel says excluding a questionable source is about maintaining credibility with your audience.
Yet even with all our due diligence, evaluating claims isn’t foolproof. Gale Sinatra, a psychologist at the University of Southern California who specializes in STEM education, offers this tip: Remember that search engines are built on preferences. Every time you search for something, you tell the algorithm you’re interested. As you keep searching, you get more of the same. That’s how misinformation campaigns take root — a search for a conspiracy theory brings up all kinds of links, which teaches the algorithm that this is what you want. If you want to evaluate a claim or the person making it, she says, do your research in your browser’s private or “incognito” mode.
Perhaps, for example, you want to double-check a source’s potential conflicts for a story about online wellness apps. Because you’ve been using your browser to research apps, when you search for your source, the algorithm may return narrower results related only to wellness. But if you try an incognito search on just that person, you might get different results, including criticism of the source, ties to companies that make apps, or a news story that calls their credibility into question.
It can be discouraging to find yourself navigating baseless claims in the search for news. After all, Mestel says, many people get into journalism because they want to get to the bottom of things. It’s not always easy, but in this climate of misinformation and bad actors trying to stir up trouble, digging a little deeper into superlatives and claims isn’t just about credibility; it’s also about personal integrity and being honest with yourself.
But Mestel can always remind herself of what motivated her to become a science journalist in the first place. “I want to understand what’s going on in the world,” she says. “I want to write about what’s true.”
Megha Satyanarayana is the chief opinion editor at Scientific American, where she works with writers of all scientific stripes to publish their expert thoughts. This story originally appeared on The Open Notebook.