You’re in a WhatsApp or Facebook Messenger group with people you know and interact with regularly. Maybe it’s a family group that also includes some extended, not-so-well-known family. Maybe it’s a local group for parents of kids who attend the same school. And because the Covid-19 pandemic is still very much with us, chatter turns to Covid-19 vaccines, and someone (not someone close to you) shares information that’s not quite correct. Do you respond? Why or why not?
These are the questions behind a recently released report from researchers associated with the Everyday Misinformation Project at Loughborough University in the U.K.
What the researchers found, based on in-depth interviews with 102 people in the U.K., is that conflict avoidance emerged as a major theme in how people chose whether to engage with misinformation in personal messaging groups.
“If you disagreed with a close friend about an important issue, what we found is that these environments of personal messaging mean that people spend a lot of time getting very anxious about avoiding conflict and not provoking conflict in ways that might degrade their social relationships with their family and friends,” said Andrew Chadwick, professor of political communication at Loughborough University and one of the authors of the report. “The extent to which we saw this was really quite surprising to us.”
Chadwick shared that he expected more people in the study would share that they had been enthusiastically intervening when they encountered misinformation on these personal messaging platforms. “Instead, what we found was a lot of people who were reluctant to really get involved at that level.”

One retired woman in north London, Jenny, shared that she used WhatsApp all the time to keep in touch with family. She said her nieces and nephews post conspiracy theories (like the 5G Covid conspiracy theory) and other misinformation into the same family WhatsApp group. Even though she knows the content posted by some of her family members isn’t accurate, she doesn’t engage, in part due to the sheer number of posts from them. “To be honest, I’ve got to the stage now where I don’t, because I just think ‘oh.’ I’ve stopped reading a lot of it,” she said.
One of the main reasons that people gave for wanting to avoid conflict was that they thought speaking up about possible misinformation would undermine “group cohesion by provoking conflict,” according to the study.
People also worried that they didn’t have full command of the facts about Covid-19 vaccine safety and often moved to smaller groups to double-check their thinking with people they were more comfortable with, in a trend the researchers call “gauging and scaling.”
One participant, Bella, said she gauges risk by expressing her thoughts on Covid-19 vaccination not in the larger school WhatsApp group but rather in smaller WhatsApp groups or with one-on-one texts. “It’s really tricky, cause there’s, like, 30 other parents on [the larger school group]…what I actually did was message my other friend from that group in a separate group chat to say ‘Hey, that’s not right—is it!?’ and so I would message the few people I felt comfortable with, and maybe say something about it, but I was too cowardly to call it out in front of 30 other school parents.”
Here’s what else the study found:
- Some people draw boundaries between what they see as the world of public and political communication, where they think the norm is that it is legitimate to challenge misinformation, and the interpersonal world of personal messaging, where the norm is that misinformation should go unchallenged because it is not appropriate to call it out.
- Seeing misinformation leads some people to disengage from vaccine talk on personal messaging. This presents a further paradox: they know the content of the misinformation posts but do not speak up, even if they disagree with it. These signals of tacit acceptance in a family, friend or school group can enhance the legitimacy of misinformation and contribute to its further spread.
The study is especially interesting in light of the recent announcement of WhatsApp Communities. Meta, WhatsApp’s parent company, said that “admins will be able to remove errant or problematic messages from everyone’s chats.” (WhatsApp is also testing making it harder to forward messages more than once.)
“This might be used in some groups as a form of content moderation that was previously absent on the platform,” Chadwick said, and added, “That being said, these features are not likely to be used extensively in the less formal groups that are popular for school parents, extended families, and friends. The norm of conflict avoidance will constrain family or friend group ‘admins’ who consider deleting posts containing misinformation.”
What does this mean for those looking to combat misinformation on personal messaging platforms? Chadwick and his team are undertaking a second part of the study, in an attempt to identify the kinds of interventions that could help people overcome the conflict avoidance they seem to run into.

In the meantime, “We need to find ways of empowering people not only with good quality information but also to instill in them a sense of confidence and efficacy in a way that allows them to say something in these settings,” Chadwick says. “It’s not simple media literacy. It’s a tough nut to crack.”
“What we’re calling for, with this report, is more attention to the qualitative, the dialogical, the empathetic orientation,” Chadwick said. “It’s more about a conversational style rather than just presenting people with information and hoping that they’ll get the point.”