I would assume that “extort” isn’t the word the author of a new report on Section 230 would prefer. Maybe “no carrot, all stick”? He instead uses the much nicer phrase “leverage to persuade platforms to accept a range of new responsibilities.” But either way, the pitch is pretty similar:
Hey, tech companies — that’s a nice bit of protection from legal liability you’ve got there. It would be a real shame if anything were to happen to it. Maybe if you cut a big enough check every month, we could make sure it stays nice and secure.
For those unfamiliar, Section 230 of the Communications Decency Act of 1996 is the bedrock law that allowed for the evolution of the digital world we know today, for better and for worse — the “twenty-six words that created the internet.” It says that, when it comes to legal liability, websites should be treated more like newsstands than like publishers.
In the print world, if the Daily Gazette prints something libelous about someone, that person can sue the newspaper. But they generally can’t sue the Barnes & Noble where copies of the Gazette were being sold. The thinking is that if you made a newsstand legally liable for the content of every single newspaper and magazine and book it sells…that’d be a pretty strong incentive to get out of the newsstand business. And even if a newsstand stayed open, it would likely become more boring, culling the publications it carries to a few middle-of-the-road options in order to limit its liability.
Section 230 says that an “interactive computer service” would not be “treated as the publisher or speaker of any information” that is provided by a third party — like one of its users posting a comment or sharing a link. So if you post defamatory material about your neighbor on Facebook, you are legally liable for it — but Facebook isn’t. And indeed, it would be hard for Facebook, Twitter, Google, or any other sort of digital service provider to exist in their current forms without Section 230. If they were all legally responsible for everything on their platforms, it’d be hard to imagine they’d let random users publish on them.
Section 230 also allows sites to moderate legal content without (generally) being open to litigation. In America, it’s perfectly legal to be a Nazi and to say pro-Nazi things — but if YouTube removes a pro-Nazi video, the Nazi can’t sue claiming his First Amendment rights have been violated.
There has been a lot of political hubbub about Section 230 of late; both Donald Trump and Joe Biden say they want to revoke it. Trump sees it as protecting dastardly social media companies that target conservatives and try to fact-check his tweets. Biden sees it as protecting dastardly social media companies that amplify Trump’s falsehoods and extremist content.
Into this debate comes a new paper by former Businessweek journalist Paul Barrett, now deputy director of NYU Stern’s Center for Business and Human Rights. It’s titled “Regulating Social Media: The Fight Over Section 230 — and Beyond.” It’s a good and valuable contribution, with excellent background summaries of the various points of view, and it’s filled with good ideas…and one not-as-good one I’m going to complain about for a bit.
Barrett argues for a three-step approach:
1. Keep Section 230
The law has helped online platforms thrive by protecting them from most liability related to third-party posts and by encouraging active content moderation. It has been especially valuable to smaller platforms with modest legal budgets. But the benefit Section 230 confers ought to come with a price tag: the assumption of greater responsibility for curbing harmful content.
2. Improve Section 230
The measure should be amended so that its liability shield provides leverage to persuade platforms to accept a range of new responsibilities related to policing content. Internet companies may reject these responsibilities, but in doing so they would forfeit Section 230’s protection, open themselves to costly litigation, and risk widespread opprobrium.
3. Create a Digital Regulatory Agency
There’s a crisis of trust in the major platforms’ ability and willingness to superintend their sites. Creation of a new independent digital oversight authority should be part of the response. While avoiding direct involvement in decisions about content, the agency would enforce the responsibilities required by a revised Section 230.
So the threat of opening up massive legal liability should be used as “leverage to persuade platforms to accept a range of new responsibilities related to policing content” — to turn Section 230 into “a quid pro quo benefit.” What could those responsibilities be? The paper offers a few ideas.
One, which has been considered in the U.K. as part of that country’s debate over proposed online-harm legislation, would “require platform companies to ensure that their algorithms do not skew toward extreme and unreliable material to boost user engagement.”

Under a second, platforms would disclose data on what content is being promoted and to whom, on the process and policies of content moderation, and on advertising practices.
Platforms also could be obliged to devote a small percentage of their annual revenue to a fund supporting the struggling field of accountability journalism. This last notion would constitute a partial pay-back for the fortune in advertising dollars the social media industry has diverted from traditional news media.
I like the idea of the tech giants giving money to journalism as much as anyone. And I have no particular objection to items 1 and 3 on the paper’s to-do list. But I have to say No. 2 — making liability protection contingent on accepting other, sometimes only tangentially related policy proposals — bugs me. A few reasons:
Why can news sites publish reader comments? Because of Section 230. Imagine if your favorite news outlet were suddenly liable for potentially massive damages because some rando posted “John Q. Doe is a child molester!” under one of its stories. What would an outlet in that situation likely do? Kill off the comments or any other kind of public input that increases its liability.
And finally…
To be blunt: Would you trust the Trump administration to use that power well? This is a president who, just a few months ago, signed an executive order declaring it unacceptable that a Democratic congressman’s tweet “peddling the long-disproved Russian Collusion Hoax” was allowed. The order didn’t do much, practically speaking, because an executive order can’t cancel Section 230. But if Twitter’s legal protections hinged on an administration’s determination that it was not promoting “extreme and unreliable material,” the scenario would be very different.
Literally just yesterday, Trump said Twitter should not be allowed to keep up an obviously photoshopped meme of Mitch McConnell and that “Mitch must fight back and repeal Section 230, immediately. Stop biased Big Tech before they stop you!”
Why does Twitter leave phony pictures like this up, but take down Republican/Conservative pictures and statements that are true? Mitch must fight back and repeal Section 230, immediately. Stop biased Big Tech before they stop you! @HawleyMO @MarshaBlackburn https://t.co/ah0nMeQdM0
— Donald J. Trump (@realDonaldTrump) September 8, 2020
Are you really confident that a Trump appointee wouldn’t read a POTUS tweet one day and then decide that allowing a #blacklivesmatter hashtag to trend is “promoting extreme content”? The experience of the past four years has not made me eager to get the government involved in defining and regulating political speech.
Barrett’s paper acknowledges many of these problems. Here’s how it describes a hypothetical world where Section 230 has been repealed:
If Section 230 were swept away tomorrow, the internet would change, and on the whole, not for the better. It would slow down drastically, as platforms, websites, and blogs looked more skeptically at content posted by users, blocking more of it. Pointed political debate might get removed. Threats of litigation against internet companies would become more common, as would take-down demands, more of which would be successful, as nervous platforms and sites tried to avoid lawsuits. The internet could become a “closed, one-way street that looks more like a broadcaster or newspaper and less like the internet we know today,” writes Jeff Kosseff in his book, The Twenty-Six Words That Created the Internet.
To be fair, in a hypothetical post-Section 230 world, some people making take-down requests targeting such harmful content as bullying or defamation would be justified, as some are today. But others would try to silence corporate whistleblowers or activists seeking to build the next #MeToo or #BlackLivesMatter movement—and these efforts at squelching valuable speech would be more likely to succeed. Silicon Valley analyst Anna Wiener depicts an internet that, above all, would be thoroughly bland: “Social-media startups might fade away, along with niche political sites, birding message boards, classifieds, restaurant reviews, support-group forums, and comments sections. In their place would be a desiccated, sanitized, corporate Internet — less like an electronic frontier than a well-patrolled office park.”
I think that’s a bad world. Barrett’s paper uses this vision to support Section 230’s continued existence. But then it advocates making Section 230 exist only for some companies and not for others, for some websites and not for others — all contingent on things like whether the government thinks you’re limiting “extreme” content in the way it would like or whether you’ve paid enough into a journalism fund.
If you like those ideas, make them into laws. Don’t turn them into an obstacle course that everyone who puts content online must navigate in order to save us from that office-park Internet. Because while Trump and Biden view Section 230 as a special gift to a few trillion-dollar companies, it’s actually a gift to all of us who want a free and open and vibrant Internet. Facebook can still make plenty of money on a “desiccated, sanitized, corporate Internet” — but we’d be worse off with one.
The tech giants need greater regulation on a host of issues. But Section 230 has become a political football for all the wrong reasons. Don’t hold the legal heart of the open web hostage in the process.