It is fall now, and out west, the aspens are turning. In this case, it’s the Aspen Institute’s Commission on Information Disorder, a group of smart, powerful, and/or ex-royal people tasked with figuring out how to tackle the misinformation seemingly endemic to modern digital life.
(Among the names that’ll be most familiar to Nieman Lab readers: Amanda Zamora, Alex Stamos, Kate Starbird, Jameel Jaffer, Safiya Umoja Noble, Deb Roy, Katie Couric, Henry C. A. D. Windsor, and Kathryn Murdoch.)
Today, they turned out their final report, summing up their findings and making 15 recommendations for improvement — what the report calls “key, measurable actions.”
So proud to share the @AspenDigital Commission for Information Disorder report. Almost a year since we decided to do this and more than six months of hard work by our 16 brilliant Commissioners, you can now read it here: https://t.co/vGF6KRBsnW #AspenInfoCommission. (thread) pic.twitter.com/L9PfgQ8SV9
— Vivian Schiller (@vivian) November 15, 2021
Before we get to the commission’s recommendations, let’s look at its summary of the status quo, what the report calls “key insights and context,” with excerpts from each.
Mis- and disinformation do not exist in a vacuum. The spread of false and misleading narratives, the incitement of division and hate, and the erosion of trust have a long history, with corporate, state actor, and political persuasion techniques employed to maintain power and profit, create harm, and/or advance political or ideological goals. Malicious actors use cheap and accessible methods to deliberately spread and amplify harmful information…False narratives can sow division, hamper public health initiatives, undermine elections, or deliver fresh marks to grifters and profiteers, and they capitalize on deep-rooted problems within American society. Disinformation pours lighter fluid on the sparks of discord that exist in every community.
Currently, the U.S. lacks any strategic approach and clear leadership in either the public or the private sector to address information disorder…Congress, meanwhile, remains woefully under-informed about the titanic changes transforming modern life and has under-invested in the staff, talent, and knowledge to understand and legislate these new realms, particularly given that it has never replaced the capability lost by the closure of the Office of Technology Assessment in the 1990s. The technology industry lobby also has outsized influence in shaping legislative priorities favorable to its interests…More than any single action or implementable recommendation we could make, it is necessary for our government, civil society, and private sector leaders to prioritize, commit to, and follow-through on addressing the worst harms and worst actors and to invest in their own capacities to understand and respond to the problems we face together.
Should platforms be responsible for user-generated content? If so, under what circumstances? What exactly would responsibility look like? These questions are deeply contested within legal and policy debates…Despite pleas by “Big Tech” to be regulated around speech issues, it has repeatedly sought to push the task back to lawmakers to devise rules and regulations that will respect free speech while protecting consumers from harm, often under the frameworks most favorable to the industry. This strategy also ensures that tech companies can continue to exploit the lack of such constraints to their benefit until new regulations are in place.
Understanding the root problems of information disorder requires understanding hard-wired human behaviors, economic and political policy, group psychology and ideologies, and the relationship to people’s sense of individual and community identity…One of the most challenging aspects of addressing information disorder is confronting the reality that “disinformation” and information campaigns by bad actors don’t magically create bigotry, misogyny, racism, or intolerance — instead, such efforts are often about giving readers and consumers permission to believe things they were already predisposed to believe. There is a “demand” for disinformation (amplified and driven by product designs, to be sure), but reckoning with our problems online will require taking a hard look at our society offline.
Over a century ago, Justice Louis Brandeis promised that “sunlight is said to be the best of disinfectants,” and online today, it’s clear we have far too little. Understanding both the behaviors of users, platforms, and algorithms and the resulting impacts of information disorder requires much more data. Critical research on disinformation — whether it be the efficacy of digital ads or the various online content moderation policies — is undercut by a lack of access to data and processes. This includes information regarding what messages are shared at scale and by whom, whether they are paid, and how they are targeted.
[Targeted programmatic advertising] has proven fantastically profitable, and tech companies like Google and Facebook sit at the top of Wall Street markets, richly rewarded for their ability to translate consumer attention into dollars…Ads are not just about selling toothpaste or better mousetraps either; platform tools have made it possible to amplify content to narrow segments of the population, often for political purposes. Advertising tools provided by platforms can include or exclude specific users, creating a powerful, unaccountable, and often untraceable method of targeting misinformation.
One of the most difficult areas to address in an American context is today’s shifting norms around falsehoods and misrepresentation of facts among prominent public figures. Politicians, CEOs, news anchors, talk radio hosts, and professionals can abuse their prominent roles and high degrees of reach for both personal and partisan gain. This trend is exacerbated by a political and business environment that offers fewer and fewer consequences for these actions. In the past, in the public and business sphere at least, leaders had to contend with the risk that they would be punished and distrusted by voters or consumers if caught in a lie. Today, though, they’re increasingly celebrated for their lies and mistruths — and punished, politically, for not ascribing to others’ falsehoods.
A free and democratic society requires access to robust, independent, and trustworthy media institutions. The distrust we see today, which fluctuates across types of media and different groups, has been decades in the making, for varied, well-documented reasons: from the decline of quality reporting in the face of the collapse of traditional economic models, to the rise of partisan or bad-faith publishers at the national and local level, to the failures of reporting in the lead-up to war, to a lack of diversity in newsrooms that may result in misrepresentation of the experiences of Black and other minority communities.
All reasonable enough; personally, I think the first and fourth of those insights have usually been underestimated in public discussion of this misinfo moment. But a commission like this can’t just summarize — there must be recommendations! Aspen has 15, broken into three overarching categories: recommendations to increase transparency, to build trust, and to reduce harms.
Public interest research: “Implement protections for researchers and journalists who violate platform terms of service by responsibly conducting research on public data of civic interest.” And: “Require platforms to disclose certain categories of private data to qualified academic researchers, so long as that research respects user privacy, does not endanger platform integrity, and remains in the public interest.”
High reach content disclosure: “Create a legal requirement for all social media platforms to regularly publish the content, source accounts, reach and impression data for posts that they organically deliver to large audiences.”
Content moderation platform disclosure: “Require social media platforms to disclose information about their content moderation policies and practices, and produce a time-limited archive of moderated content in a standardized format, available to authorized researchers.”
Ad transparency: “Require social media companies to regularly disclose, in a standardized format, key information about every digital ad and paid post that runs on their platforms.”
Truth and transformation: “Endorse efforts that focus on exposing how historical and current imbalances of power, access, and equity are manufactured and propagated further with mis- and disinformation — and on promoting community-led solutions to forging social bonds.”
Healthy digital discourse: “Develop and scale communication tools, networks, and platforms that are designed to bridge divides, build empathy, and strengthen trust among communities.”
Workforce diversity: “Increase investment and transparency to further diversity at social media platform companies and news media as a means to mitigate misinformation arising from uninformed and disconnected centers of power.”
Local media investment: “Promote substantial, long-term investment in local journalism that informs and empowers citizens, especially in underserved and marginalized communities.”
Accountability norms: “Promote new norms that create personal and professional consequences within communities and networks for individuals who willfully violate the public trust and use their privilege to harm the public.”
Election information security: “Improve U.S. election security and restore voter confidence with improved education, transparency, and resiliency.”
Comprehensive federal approach: “Establish a comprehensive strategic approach to countering disinformation and the spread of misinformation, including a centralized national response strategy, clearly defined roles and responsibilities across the Executive Branch, and identified gaps in authorities and capabilities.”
Public Restoration Fund: “Create an independent organization, with a mandate to develop systemic misinformation countermeasures through education, research, and investment in local institutions.”
Civic empowerment: “Invest and innovate in online education and platform product features to increase users’ awareness of and resilience to online misinformation.”
Superspreader accountability: “Hold superspreaders of mis- and disinformation to account with clear, transparent, and consistently applied policies that enable quicker, more decisive actions and penalties, commensurate with their impacts, regardless of location, political views, or role in society.”
Amendments to Section 230 of the Communications Decency Act of 1996: “Withdraw platform immunity for content that is promoted through paid advertising and post promotion.” And: “Remove immunity as it relates to the implementation of product features, recommendation engines, and design.”
A few quick thoughts on what I think is overall a solid report, one that had me nodding a lot more often than shaking my head:
Obviously, your local daily doesn’t have the reach of Facebook. But I wonder where the legal line gets drawn between “publishers” and “the large tech platforms we seem to be trying to legally rebrand as publishers.” It’s one thing to mandate transparency around moderated content — where at least there is a corporate action to be evaluated — but it’s another to do it around merely popular content.
Improving workforce diversity is a great idea! Just as it has been for the last 284 commissions to recommend it. Of all these issues, mistrust is the one I suspect is least vulnerable to commission-driven external action, alas.
(You know what else is a great idea? Improving election security. But making that a key recommendation here implies that it’s somehow weak election systems that are responsible for the boom in election-fraud misinformation, and that “improving election security” would thus reduce the cries of electoral theft. It’s not as if the machines suddenly became vulnerable to bamboo-based fraud the minute Donald Trump decided to run for office. I suspect a big government effort to “improve election security” would, if anything, further harden the convictions of Republicans who believe recent elections were really “stolen” from them. It’d be like announcing a big government program to “improve microchip scarcity in Covid-19 vaccines.”)
That said, I do really like one element of the “superspreader accountability” item: Platforms should be incentivized to spend more resources focused on big, popular accounts with massive reach, rather than the status quo of often giving those big, popular accounts extra undeserved leeway. And the recommended changes to Section 230 are more sensible than what either Democrats or Republicans in Congress have talked about.
I just think the “information disorder” is both (a) a very real issue that naturally attracts the attention of Big Commissions and Big Think Tanks and Big Reports, and (b) a problem that is uniquely immune to Big Commissions and Big Think Tanks and Big Reports.
This report nails it when it notes that:
Mis- and disinformation are not the root causes of society’s ills but, rather, expose society’s failures to overcome systemic problems, such as income inequality, racism, and corruption, which can be exploited to promote false information online.
The Internet is an amplifier. It increases both the reach and awareness of society’s ills. As long as the root causes exist — and as long as there are people who seek power, wealth, or fame through exploiting them — things will keep getting louder.