Sept. 4, 2024, 11:04 a.m.
Audience & Social

Want to fight misinformation? Teach people how algorithms work

In the four countries studied, each with its own unique technological, political, and social environment, understanding of algorithms varied across different sociodemographic groups.

In an era dominated by social media, misinformation has become an all too familiar foe, infiltrating our feeds and sowing seeds of doubt and confusion. With more than half of social media users across 40 countries encountering false or misleading information weekly, it’s clear that we’re facing a crisis of misinformation on a global scale.

At the heart of this issue lies social media algorithms — those mysterious computational formulas that determine what content appears on our feeds. These algorithms are designed to show users content that they are most likely to engage with, often leading to the proliferation of misinformation that aligns with our biases and beliefs. A prominent example is Facebook’s profit-driven algorithms, which supported a surge of hate-filled misinformation targeting the Rohingya people, contributing to their genocide by the Myanmar military in 2017.
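
To make that mechanism concrete, here is a minimal, hypothetical sketch of engagement-based ranking. The post fields, weights, and scoring function are illustrative assumptions made for this article, not any platform’s actual code; the point is simply that content is ordered by predicted engagement, with accuracy playing no role.

```python
# Hypothetical sketch of engagement-based feed ranking (illustrative only,
# not any platform's real algorithm). Posts are ordered by how much
# engagement a model predicts they will get from this user.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_likes: float     # model's estimate of likes from this user
    predicted_shares: float    # model's estimate of shares from this user
    predicted_comments: float  # model's estimate of comments from this user

def engagement_score(post: Post) -> float:
    # Illustrative weights; real systems tune these on behavioral data.
    return (1.0 * post.predicted_likes
            + 3.0 * post.predicted_shares
            + 2.0 * post.predicted_comments)

def rank_feed(posts: list[Post]) -> list[Post]:
    # Highest predicted engagement first; whether the content is true never enters the score.
    return sorted(posts, key=engagement_score, reverse=True)
```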

But here’s the kicker: Social media algorithms remain largely opaque to users. The information-feeding mechanism driven by algorithmic decisions is often perceived as a black box; it is almost impossible for users to tell how an algorithm reached its conclusions. It’s like driving a car without knowing how the engine works. Without insight into the algorithmic mechanism, individuals are less able to critically evaluate the information they come across. This has prompted growing calls for algorithmic knowledge: an understanding of how algorithms filter and present information. However, it is still unclear whether having algorithmic knowledge actually helps social media users combat misinformation.

That’s where our recent study, published in the Harvard Kennedy School Misinformation Review, comes in. As a media scholar who has long studied countermeasures to misinformation, I led a study exploring how individuals’ understanding of algorithmic processes shapes their attitudes and actions toward misinformation in four countries: the United States, the United Kingdom, South Korea, and Mexico. The survey of more than 5,000 participants yielded several insights.

First, the study found that algorithmic knowledge made individuals more vigilant about misinformation. That is, when people understand how algorithms filter information, how users’ data are used to build algorithms, and what the consequences are, they can better see the potential pitfalls of feed algorithms and recognize that algorithms may amplify misinformation. That realization led them to step up to counter misinformation: leaving comments to highlight potential biases or risks in social media posts, sharing counter-information or opinions, disseminating information that exposes problems in the inaccurate content, and reporting specific misinformation posts to the platform.

While this finding is encouraging, the study further revealed that algorithmic knowledge isn’t evenly distributed among people. In the four countries studied, each with its own unique technological, political, and social environment, understanding of algorithms varied across different sociodemographic groups. For instance, in the U.S., the U.K., and South Korea, younger people tended to understand algorithms better than older individuals. In South Korea and Mexico, education levels made a difference, with more educated individuals having a better grasp of how social media algorithms work. In the U.S. and the U.K., where political polarization has reached high levels in recent years, political ideology was the key factor in explaining differences in algorithmic knowledge, with liberals having a better understanding of social media algorithms than conservatives.

In addition to the within-country algorithmic knowledge gap, the study also found that the level of algorithmic knowledge differed by country: respondents in the U.S. demonstrated the highest understanding, followed by those in the U.K., Mexico, and then South Korea. Interestingly, even though South Korea has the highest rates of internet use and social media access of the four countries, it showed the lowest level of algorithmic knowledge. These differences highlight a new form of digital divide, one that goes beyond the binary distinction between individuals with access to the internet and those without.

With such an uneven distribution of algorithmic knowledge within and between countries, some people may be able to scrutinize and make informed judgments about the misinformation surfaced by algorithms, while others may be more susceptible to the false or biased narratives embedded in algorithmic outputs. That is, people who don’t understand how algorithms personalize information may overlook the risk of being trapped in filter bubbles, which limit exposure to diverse viewpoints. They may thus wrongly assume that all content on social media is objective and accurate. Consequently, they are more likely to spread misinformation and are more vulnerable to its negative effects.
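
As a rough illustration of how that narrowing can happen, here is a small, hypothetical simulation of a personalization feedback loop. The topic list, weights, and update rule are invented for this sketch; it only shows that when a feed keeps reinforcing whatever a user already engaged with, the mix of topics they see tends to shrink.

```python
# Hypothetical simulation of a filter-bubble feedback loop (illustrative only).
# Recommending more of whatever the user engaged with before gradually narrows
# the range of topics in the feed.
import random

random.seed(42)  # fixed seed so the toy example is reproducible

TOPICS = ["politics", "health", "sports", "science", "entertainment"]

def recommend(preferences: dict[str, float], k: int = 5) -> list[str]:
    # Sample k posts, weighting each topic by the user's current preference score.
    topics, weights = zip(*preferences.items())
    return random.choices(topics, weights=weights, k=k)

preferences = {topic: 1.0 for topic in TOPICS}  # start with no strong leaning

for round_num in range(1, 11):
    feed = recommend(preferences)
    for topic in feed:
        preferences[topic] += 1.0  # engagement reinforces what was just shown
    print(f"round {round_num:2d}: {sorted(set(feed))}")
# Over successive rounds the feed tends to concentrate on a handful of topics,
# crowding out the rest.
```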

These findings have important implications for social media platforms, policymakers, researchers, and educators. Traditionally, efforts to combat misinformation have largely focused on strategies like fact-checking, pre-bunking, or content moderation, but the effectiveness of these methods has often been questioned. Our study suggests that educating people about how algorithms work and how information is selected on social media could be a promising alternative; by understanding algorithms better, people may be more equipped to recognize and respond to misinformation. The benefit of this approach is that it could be broadly applicable across different populations and effective globally. Additionally, our study shows that people from different social and cultural backgrounds may not all understand algorithms equally well. This means it’s important to create algorithm literacy programs that are customized to meet the needs of different groups. By doing this, we can help ensure that everyone can navigate the digital world effectively.

Ever-evolving technologies — from the metaverse to deepfakes to ChatGPT — create a media environment in which the algorithmic curation of unreliable or false information is easier than ever. It is time to prioritize more extensive and in-depth education about algorithms to empower ourselves and protect society from the dangers of misinformation. Ultimately, the fight against misinformation is a battle we must all join.

Myojung Chung is an assistant professor of journalism and media innovation at Northeastern University and a Rebooting Social Media visiting scholar at the Berkman Klein Center for Internet & Society at Harvard University.
