Even though the use of social media as a source of news has seen little growth in recent years, this year’s Digital News Report again highlights the centrality of social media, search engines, news aggregators, and other platforms that use algorithms to select news. Younger groups in particular continue to be exposed to more news chosen by algorithms than by journalists or editors via publishers’ websites and apps, but they also seem increasingly ambivalent about the results.
Despite early enthusiasm about the democratizing potential of search engines and social media, the last decade has mainly been characterized by concern over possible negative effects: not just the spread of misinformation, but also the emergence of “filter bubbles,” where algorithms combine data on past behavior and inferred preferences to show people more of what they like, at the expense of exposure to diverse news and information.
There has been extensive research into whether algorithmically driven platforms really do trap people inside filter bubbles, and at least when it comes to the question of diverse news exposure, this does not appear to be happening. For now, although most people do not have particularly diverse news repertoires, platform use appears to slightly increase the diversity of people’s general news exposure, even as some people use social media to form partisan clusters around specific political issues.
There’s far less research on what people think about algorithmic news selection, and how they feel about news on platforms more broadly. This is a crucial missing piece of the puzzle, as the effects of news exposure depend at least partly on how people receive and process it.
Our survey results reveal that people are skeptical of all these ways of selecting news. Just under one third (30%) think that having stories selected on the basis of their own past behavior is a good way to get the news. Even fewer, just under one fifth (19%), say the same about news selected on the basis of their friends’ past behavior.
Perhaps surprisingly, people are similarly skeptical of news selected by editors and journalists (27%), possibly because many see these traditional sources as laden with their own agendas and biases.
What we find, then, is what we’ve previously called “generalized skepticism,” whereby people are skeptical of all forms of news selection, whether done by humans or by algorithms. What’s more, attitudes about all forms of selection are positively correlated. This means that people do not usually have contrasting views of human and algorithmic selection. If they are skeptical of one, they’re likely to be skeptical of the other.
The 2023 results differ only modestly from when we last measured these attitudes in 2016. Across all markets, the proportion who think their past consumption is a good basis for news selection has dropped by 6 percentage points, with a smaller 3 percentage point fall in approval of editorial selection and social recommendations. Importantly, disapproval has remained stable; instead, there has been a parallel 4–6 percentage point increase in the middle “neither agree nor disagree” category.
Overall, this means that attitudes have remained broadly stable, with a slight increase in ambivalence over time. But in some countries, such as the U.K., the changes since 2016 are relatively large, especially for news selected by algorithms on the basis of past consumption, and especially among users under the age of 35.
Why are we seeing this shift among younger users? Many young people express low levels of interest in the news, and those who are interested often feel overwhelmed by the negative nature of news in their social media feeds. Young people are also more likely to use a wider range of social networks, each using algorithmic news selection in a different way, which could make it harder for some people to form a clear view. And we continue to see reasonably high levels of concern that “overly personalized” news could lead to missing out on important information (48%) or being exposed to fewer challenging viewpoints (46%).
Given these concerns, it is perhaps not surprising that some people have tried to influence story selection by following or unfollowing, muting or blocking, or changing other settings. But for those who do, the key objective is not to make the feed more fun or more interesting, but to make it more reliable, less toxic, or more diverse in its views. Yet, despite these clearly stated preferences, some social media companies competing for attention and advertising continue to optimize for engagement, paying less attention to quality, reliability, or diversity.
Over the next few years, the increased use of artificial intelligence is likely to place even more focus on the role of algorithms in shaping the selection and understanding of news and information. Our research highlights audiences’ generalized skepticism about how content is selected for them today, so there is clearly room for something better.
While news consumers like the convenience of automatically selected content, they also worry about missing out on important news or challenging perspectives. Many news organizations are now exploring whether it is possible to mix algorithmically selected news with editorial approaches in a way that delivers more relevant, reliable, and valuable content for consumers, but given current levels of skepticism, the journey is unlikely to be straightforward.
Richard Fletcher is the director of research at Oxford University’s Reuters Institute for the Study of Journalism. Nic Newman is a senior research associate there.