Posts like yesterday’s by my Nieman Lab colleague Jonathan Stray make my academic heart flutter. Stray’s analysis looked at coverage of the latest Google-China developments and found that only 13 of the 121 unique stories he examined (11 percent) contained original reporting on the issue.
It should join the growing list of reports — from the six-year-old Harvard Business School study of Trent Lott and the bloggers, to my own research on the Francisville Four, to Yochai Benkler’s work in The Wealth of Networks, to “Meme-tracking and the Dynamics of the News Cycle,” to the PEJ study on news diffusion in Baltimore — that help us understand exactly how reporting gets done and how news moves in the new digital ecosystem. Stray’s analysis is data-driven and involves something of a time commitment, but beyond that, it’s the kind of work that could and should be replicated by interested “citizen media scholars” everywhere.
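For anyone who wants to attempt that kind of replication, here is a minimal sketch of the basic tally involved: hand-code each story in a Google News result set, then count the categories. Everything in the snippet (outlet names, category labels, sample entries) is hypothetical, not Stray’s actual coding scheme.

```python
# A minimal sketch of a Stray-style tally for would-be citizen media
# scholars. The entries and category labels are hypothetical; a real
# replication would hand-code every story in an actual Google News
# result set.
from collections import Counter

stories = [
    {"outlet": "Example Wire", "category": "original_reporting"},
    {"outlet": "Example Blog", "category": "aggregation"},
    {"outlet": "Example Daily", "category": "rewrite"},
    # ...one entry per unique story in the result set
]

counts = Counter(story["category"] for story in stories)
total = len(stories)

for category, n in counts.most_common():
    print(f"{category}: {n} of {total} ({n / total:.0%})")
```

The coding itself is still human judgment, of course; the code only does the counting.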
The one-sentence takeaway from Stray’s analysis was supplied by Howard Weaver in the comments. “Although you seem reluctant to say so,” Weaver wrote, “almost all the genuine journalism here was done by traditional organizations.” This conclusion echoes the findings of the recent Baltimore study by the Project for Excellence in Journalism, findings that were roundly criticized by some members of the blogosphere, particularly Steve Buttry.
So what does this latest piece of research mean?
On the one hand, the increasingly frequent finding that the vast majority of original news reporting is still done by large, (relatively) resource-rich news organizations seems almost unworthy of comment. But it’s still worth documenting how, exactly, this plays out in practice.
Even more importantly, there are a few throwaway lines in Stray’s post that I think are worthy of further discussion. The first is this:
Out of 121 unique stories, 13 (11 percent) contained some amount of original reporting. I counted a story as containing original reporting if it included at least an original quote. From there, things get fuzzy. Several reports, especially the more technical ones, also brought in information from obscure blogs. In some sense they didn’t publish anything new, but I can’t help feeling that these outlets were doing something worthwhile even so. Meanwhile, many newsrooms diligently called up the Chinese schools to hear exactly the same denial, which may not be adding much value.
This gets to the heart of something really important: does aggregating the content of “obscure bloggers” count as original reporting? Traditionally, of course, it doesn’t; reporting is digging up previously undiscovered “documents, sources, and direct observations,” as the j-school saying goes. But, as Stray notes, the outlets that did this aggregation were still doing something worthwhile, something that seemed even more valuable than the work of the journalists who called up the Chinese schools to get the same standardized denial.
But what is this “something worthwhile”? Is linking to a smart-but-obscure website really all that different from calling up a trusted source? What’s the line between “aggregation,” “curation,” and “reporting”? Can we even draw the line anymore? And if more than a hundred reporters are hard at work rewriting New York Times copy without adding anything new, maybe they’d be better off doing something else — like curating, for instance. Or (god help us) even linking!
The second line in the Stray post I wanted to highlight is this:
The punchline is that no English-language outlet picked up the original reporting of Chinese-language Qilu Evening News, which was even helpfully translated by Hong Kong blogger Roland Soong [at ESWN].
To which a commenter added:
Google News tends to exclude non-traditional sources to begin with. Otherwise ESWN would show up all the time on these China-related stories, doing original research and reporting.
This concern — what sources does the Google News database include, and what does it exclude — is remarkably similar to Steve Buttry’s criticism of the PEJ-Baltimore study: that in drawing a line around who “actually counts” as a journalist to be included in the research, you affect the outcome of that research.
What would we find if we combined these two concerns? What if we analyzed aggregation as well as reporting, and included sources that don’t appear in the Google News database?
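To make that concrete, here is one hypothetical shape such a broadened coding record might take; every field name is invented for illustration, not drawn from Stray or the PEJ.

```python
# Hypothetical coding record for the broadened study sketched above.
# The fields track how a story relates to the rest of the ecosystem,
# not just whether it contains original reporting.
story = {
    "outlet": "Example Obscure Blog",
    "in_google_news_index": False,   # capture sources the index excludes
    "original_reporting": False,
    "original_translation": True,    # e.g. translating a foreign-language scoop
    "aggregates_obscure_sources": True,
    "links_to_original": True,
}
```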
My guess — and it’s still only a guess — is that we’d find something like the “burbling blips” that Zach Seward highlighted months ago when he was posting about the dynamics of the news cycle. We’d find a news ecosystem in which clusters of small (and often obscure) news outlets discussed a story to death — discussions that were picked up and amplified by the more traditional, reporting-focused media, which then fed their reporting back into the wider blogosphere for further commentary. In my own comment on this subject, I called this process “iterative news pyramiding,”
the leapfrogging of news from tightly linked clusters strung out along the end of the long tail to more all-purpose, more generally read websites that form the ‘core’ of the internet.
Taking everything we’ve learned so far — from Stray, Benkler, Buttry, the Harvard Business School, me, the PEJ, and others — what might we hypothesize about where news comes from and how it moves? Here are a few bullet points for your consideration:

- Most original reporting is still done by large, traditional, (relatively) resource-rich news organizations.
- Much of the other valuable work around a story (aggregation, curation, translation, linking) is done by smaller, more obscure outlets, and our usual research methods tend to miss it.
- News often moves by iterative pyramiding: it leapfrogs from tightly linked clusters at the end of the long tail to the general-purpose core of the internet, which then feeds its reporting back out for further commentary.
- Where researchers draw the line around who “counts” as a news source (the Google News index, for example) shapes what any study can find.
What do you all think?