Sept. 8, 2021, 2:39 p.m.

An Australian court ruling makes publishers legally responsible for every idiot Facebook user who leaves a comment

Is a defamatory comment left on your Facebook page more like graffiti on a wall, a streaker on live TV, or a hand-delivered telegram? Whatever your metaphor, Australian courts now say publishers are legally liable for words they neither wrote nor published.

Is Australia the future?

I mean, from a time zone perspective, it definitely is. (Unless you’re reading Nieman Lab in Kiribati, in which case: E rab’a te kaitiboo!) I keep meaning to ask the ghost of Dick Clark why Melbourne gets to do New Year’s so early.

But Australia increasingly looks like the future for how the internet — particularly its supreme settler colonialists, Facebook and Google — gets governed.

Most prominently, it has managed to create what amounts to a tax on being a giant American tech company — only the tax gets paid to Rupert Murdoch rather than the Australian government. (Great job, everyone.)

And now a court ruling there promises to upend what has been a core principle of who gets blamed for bad behavior online — in ways that, if repeated, could have the effect of stifling public speech.

Here’s the background. In 2016, the Australian Broadcasting Corporation’s long-running investigative TV show Four Corners aired an episode that exposed abuses in the juvenile detention system in the Northern Territory. It opened with a shot of a teenaged boy shackled to a restraint chair, his head encased in a spit mask. “The image you’ve just seen isn’t from Guantanamo Bay or Abu Ghraib,” the voiceover went, “but Australia in 2015.”

That was Australia’s introduction to a young man named Dylan Voller. The Four Corners episode, titled “Australia’s Shame,” prompted a royal commission into abuses, and over the next few years, more stories about Voller became part of the country’s debate about its prison system. (He had an extensive juvenile record of offenses ranging from car theft to assault.)

Obviously, Australian news organizations posted some of those stories on Facebook. And just as obviously, some Facebook users left some awful comments on them. Among them were accusations that Voller was a rapist, that he had assaulted a Salvation Army officer, and that the attack had left the officer blind in one eye.

Voller considered these charges to be defamatory, so he sued. But rather than suing the commenters, he sued the owners of the news organizations that had published the stories, including The Sydney Morning Herald, The Australian, and Sky News Australia. His complaint wasn’t that their stories were defamatory; it was that the comments left by other people on their Facebook posts were.

Pandora’s box isn’t big enough to hold all the potential implications of that idea. That a news publisher should be held accountable for the journalism it publishes is obvious. That it should be held accountable for reader comments left on its own website (which it fully controls) is, at a minimum, debatable.

But that it should be held legally liable for the comments of every rando who visits its Facebook page — in other words, the speech of people it doesn’t control, on a platform it doesn’t control — is a big, big step.

(Think of the analogy to print. If a magazine publishes a story about you that’s libelous, you should certainly be able to sue them. If a magazine publishes a letter to the editor that’s libelous, it’s certainly conceivable that they should be held liable — they decided to publish it, after all. But this is more like being held liable for the comments of some guy who bought the magazine at his local newsstand, read it, and then said something libelous about you on his own.)

Voller’s lawyers argued that by posting their content on Facebook, news organizations were thereby engaged in the act of “publishing” all the comments that followed. Publishers, understandably, disagreed and claimed they were at most “innocent disseminators” of the troublesome comments.

In 2019, the New South Wales Supreme Court ruled in favor of Voller — finding that yes, news publishers were the “publishers” of all the Facebook comments on their pages, legally speaking. The court said that a news org having a public Facebook page “has little to do with freedom of speech or the exchange of ideas” (ouch), but that instead “the primary purpose of the operation of the public Facebook page is to optimise readership…and to optimise advertising revenue.” The “exchange of ideas” is merely “a mechanism…by which that is achieved.”

…the defendant in each of the proceedings is, in relation to the general readership, in the position where “they know or can be expected easily to find out the content of the articles [“articles” here used to mean user comments] being published and…are able to control that content, if necessary preventing the article’s publication” before its publication to the general readership…This assumes the capacity to hide all comments on these particular postings and to monitor those comments and “un-hide” acceptable ones.

In conclusion, the Court, as presently constituted, is satisfied, on the balance of probabilities, that the defendant media company in each proceeding is a first or primary publisher, in relation to the general readership of the Facebook page it operates. As a consequence of that classification, the defence of innocent dissemination would not arise.

When a digital publisher “posts material that, more probably than not, will result in defamatory material, the commercial operator is ‘promoting’ defamatory material and ratifying its presence and publication,” the court ruled. “A defendant cannot escape the likely consequences of its action by turning a blind eye to it. Where a defendant’s assessment of the consequences of allowing comment, if performed, would have been that defamatory material will be published…the defendant is promoting, ratifying and consenting to the publication of the defamatory material, even though its precise terms may not be known.

“The defendant, in that situation, is on notice.”

The Australian media companies appealed that ruling as far as appeals can go, to the High Court of Australia. They argued that, as news publishers, “they were more closely equivalent to the supplier of paper to a newspaper owner or the supplier of a computer to an author” than to the actual creator of the offending Facebook content.

And today — well, yesterday American time; remember, Australia lives in the future — the High Court rejected that appeal, affirming the lower court’s ruling, 5–2.

The Facebook page used by each appellant is managed by a Page administrator, the person or persons authorised by the appellant to administer it in accordance with Facebook’s terms of use. There was evidence before the primary judge, which was largely uncontentious, that an administrator could prevent, or block, the posting of comments by third parties through various means, although the Facebook platform did not allow all comments on a public Facebook page to be blocked. Individual comments could be deleted after they were posted but this would not prevent publication. It was possible to “hide” most comments, through the application of a filter, which would prevent publication to all except the administrator, the third-party user who posted the comment and their Facebook “friends”. Hidden comments could then be individually assessed by an administrator. If sufficient staff were allocated to perform this task, comments could be monitored and un-hidden if approved by an administrator.

The primary judge found, as might be anticipated, that certain posts would be expected to draw adverse comments about the person who was the subject of the news story.
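
(For the technically inclined, here is a rough sketch, in Python, of the hide-and-review workflow the court describes: a catch-all word filter hides incoming comments from the general readership, staff work through the hidden queue, and acceptable comments get un-hidden. Every name in it is hypothetical; it models the mechanics, not Facebook’s actual tools.)

# A rough sketch (hypothetical names, not Facebook's real API) of the
# hide-everything-then-review workflow described in the ruling above.

from dataclasses import dataclass, field

# A filter of very common words hides effectively every incoming comment,
# since the platform offered no switch to block comments outright.
CATCH_ALL_FILTER = {"a", "the", "is", "to", "and", "of", "in", "you"}

@dataclass
class Comment:
    author: str
    text: str
    hidden: bool = False  # hidden = visible only to the admin, the author, and the author's friends

@dataclass
class PagePost:
    comments: list = field(default_factory=list)

    def receive_comment(self, comment: Comment) -> None:
        # Apply the word filter on arrival, before general publication.
        words = {w.lower().strip(".,!?") for w in comment.text.split()}
        if words & CATCH_ALL_FILTER:
            comment.hidden = True
        self.comments.append(comment)

    def review_queue(self) -> list:
        # Hidden comments awaiting an administrator's assessment.
        return [c for c in self.comments if c.hidden]

    def approve(self, comment: Comment) -> None:
        # Un-hide a comment an administrator judges acceptable to publish.
        comment.hidden = False

post = PagePost()
post.receive_comment(Comment("user1", "Great reporting on the story."))
print(len(post.review_queue()))  # prints 1: the comment is held pending review
post.approve(post.review_queue()[0])

The point of the sketch is the labor implication: every comment lands in the queue, and nothing reaches the general readership until a human clears it.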

The news organizations cited case law holding that liability as a publisher attached to someone who “has intentionally lent his assistance to its existence for the purpose of being published” — the key word there being intentionally. But the court held that a news outlet’s intention is beside the point here.

An action for defamation does not require proof of fault. Defamation is a tort of strict liability, in the sense that a defendant may be liable even though no injury to reputation was intended and the defendant acted with reasonable care. The intention of the author of the defamatory matter is not relevant because the actionable wrong is the publication. It is often persons other than the author who are liable as publisher. A publisher’s liability does not depend upon their knowledge of the defamatory matter which is being communicated or their intention to communicate it.

In other words, every Facebook post from a news outlet serves as its own little publication — one with few of a newsroom’s editorial controls, but all the potential liability.

As two justices put it in a concurring opinion:

Each appellant became a publisher of each comment posted on its public Facebook page by a Facebook user as and when that comment was accessed in a comprehensible form by another Facebook user…

In sum, each appellant intentionally took a platform provided by another entity, Facebook, created and administered a public Facebook page, and posted content on that page. The creation of the public Facebook page, and the posting of content on that page, encouraged and facilitated publication of comments from third parties. The appellants were thereby publishers of the third-party comments.

Are news companies merely stuck operating under the whims of Facebook’s rules and policies, on a platform they neither own nor control? No, the concurrence argues, because they were the ones who made the fatal choice to have a Facebook page in the first place.

…the appellants’ attempt to portray themselves as passive and unwitting victims of Facebook’s functionality has an air of unreality. Having taken action to secure the commercial benefit of the Facebook functionality, the appellants bear the legal consequences.

Two justices disagreed with the majority opinion: James Edelman and Simon Steward. (As such, they have just won a hard-fought battle for the title of My Favorite Australian High Court Justices.)

Edelman argues that there needs to be a strong connection between the content a publisher posts on Facebook and the offending comment for a publishing relationship to exist. Say a news outlet posts a stock-standard story about the weather this weekend to its Facebook page. And say a Facebook user leaves an unrelated comment on it falsely accusing someone of a crime. Is the news outlet “a publisher of such a defamatory remark, which it neither invited nor requested, which it manifested no intention to publish, of which it was unaware, and which it would have removed as soon as reasonably possible?”

I do not accept that the appellants are publishers of such uninvited words written on their Facebook pages. It can be accepted that, in the circumstances of this case, [the news outlet] intended that readers publish comments concerning the story it posted. But, in my respectful view, there is no meaningful sense in which it could be concluded that [the news outlet] intended to publish remarks that were not, in any imaginable sense, a “comment” on the story.

The remark described above would bear no more resemblance to invited “comments” on the posted story than defamatory graffiti on a commercial noticeboard would bear to invited notices on the commercial noticeboard. Neither satisfies the required intention for publication. Equally, the remark above would be no more an intended publication than a television broadcast which accidentally captures in the background an unknown stranger who, unbeknownst to the live presenter and camera operator, walks past wearing a t-shirt with a defamatory message or carrying a defamatory placard.

The electronic medium of social media would not have been foreseen by the late 19th century and 20th century judges who applied the basic principles of the law of torts to the law of defamation. But those basic principles should not be distorted in their application to new media. The basic principles with which the question in this case is concerned are those relating to the requirement of an intention to publish. Whilst innocent dissemination can now be seen as a true defence, rather than a negation of the element of publication, a defendant cannot be liable for publication unless they intentionally perform the act of publication or assist another in the act of publication with a common intention to publish.

In this case, the appellants assisted in the publication of third-party comments by creating their Facebook pages and posting news stories upon which third-party users could comment. However, by merely creating a page and posting a story with an invitation to comment on the story (an invitation which the appellants could not then disable), the appellants did not manifest any intention, nor any common purpose with the author of the comment, to publish words that are entirely unrelated to the posted story. Such unrelated words would not be in pursuance of, or in response to, the invitation.

In other words, a publisher might reasonably expect a Facebook commenter to comment on the story — but not to charge that a subject of the story, say, had blinded a Salvation Army officer.

And at the time in question, owners of a Facebook page didn’t even have the option to turn off comments on a particular post — a feature Facebook only announced a few months ago, after (and likely in response to) the earlier New South Wales ruling.

Steward, meanwhile, cites case law finding that legally liable publication “required ‘deliberate acts’” — as well as a separate ruling, about hyperlinks, to show that even intentionally directing a user’s attention toward libelous content does not automatically make one a publisher:

A reference to other content is fundamentally different from other acts involved in publication. Referencing on its own does not involve exerting control over the content. Communicating something is very different from merely communicating that something exists or where it exists. The former involves dissemination of the content, and suggests control over both the content and whether the content will reach an audience at all, while the latter does not.

It should be plain that not every act that makes the defamatory information available to a third party in a comprehensible form might ultimately constitute publication. The plaintiff must show that the act is deliberate. This requires showing that the defendant played more than a passive instrumental role in making the information available.

He also argued that the liability claim is weakened by the fact that all of this was happening on a platform controlled by none of the parties to the case, but by a third party: Facebook.

The appellants here were not in the same position as the platform hosts in Oriental Press and Tamiz. That is because they were in the same position as all other public Facebook users. The appellants, to use the analogy from Tamiz, were users of Facebook’s noticeboard and not their own noticeboard. They owned no electronic program that caused or facilitated the publication of third-party comments; Facebook owned that program. They were also not in the same position as Google; they did not convey the third-party comments. Instead, the appellants used a system devised, designed and controlled (to an extent) by Facebook itself, and were subject to Facebook’s conditions of use like all other users.

…the appellants only facilitated the publication of the third-party comments in two ways: first, by creating their own Facebook pages; and secondly, by making their own posts. Neither, whether considered separately or cumulatively, made the appellants publishers of all third-party comments made on their respective Facebook pages.

As in most things digital, it all comes down to what you think the correct metaphor is.

In its ruling in 2019, the New South Wales court raised the analogy of a building whose street-facing walls have been tagged with defamatory graffiti (“John Smith is a pedophile,” etc.). Even though the building’s owners had nothing to do with the message first being spray-painted, under this analogy, they would still have some degree of responsibility to remove the offending message. If they didn’t, “the circumstances justified an inference that they had accepted responsibility for the continuing publication of the statement by adopting or ratifying it.”

News publishers, meanwhile, compared the situation to someone saying something defamatory during a private phone conversation. The speaker would be liable for his or her comments, sure, but would the phone company? The contractor who installed the telephone poles outside? The electric company that powered the phone?

What about a live TV news program where one of the guests calls someone a serial rapist on air? The segment producers may have had no idea the guest would say such a thing — but they did decide to book him for the segment, and to air it live rather than edited. Or a live call-in radio show, when Al from Poughkeepsie decides now is the time to accuse someone of a horrible crime in front of a tri-state audience? Is the show’s host now responsible for every stray thought that crosses Al’s lips?

Or how about a telegram? Someone who sends a libelous telegram can clearly be held accountable for its content, but what about Western Union? One of its clerks, after all, took the sender’s dictation on what the telegram should say and then manually entered the text into the system — after which it was delivered by another Western Union employee. Does their human involvement — a chance to head off the libel before it reaches its audience — make the company responsible? And if so, how in the world are telegram clerks supposed to be able to make sound judgments about what is libelous and what is not? (Especially since truth is, in many jurisdictions at least, a defense against a libel claim. After all, what if John Smith really is a pedophile?)

But of course, people pick the metaphors that make their side look the best. As a journalist, I’m pulled toward that analogy of a print magazine at a newsstand. But like each of these metaphors, it’s an imperfect match for the digital world.

For one thing, Facebook comments on your page are happening in a public, accessible environment — somewhere you as a publisher can see everything that’s being said, whether naughty or nice. This isn’t some dude mumbling obscenities in his lounge chair as he flips through your latest issue; it’s all happening in a place where you can see it.

For another, the scale of potentially knowable comments is enormous — way beyond the three letters to the editor you might publish in each issue. If you’ve got a big Facebook page, you might get hundreds or thousands of comments on your posts each day. Some of those may accuse people of some bad things — but some of those accusations may be 100% accurate! If a professional publisher has to evaluate the truth of every single Facebook comment to the same standards as its journalism — and faces multi-million-dollar liability every time it gets one wrong — in what universe is digital publishing even possible? Imagine you’re that building owner with a graffiti problem — only you build hundreds of new walls every single day.

Australia is just one country, and its English-derived legal system is notoriously less protective of press freedom (and friendlier to libel plaintiffs) than the United States’. But if this sort of standard is affirmed and spreads to other countries, how might news publishers respond?

They could hire enough social media staff to pre-moderate, to full publishing standard, every single comment posted on their Facebook pages. (And, presumably, their Twitter feeds, and any other social platform where an outlet posts to an account and other people can respond.) That seems both unlikely for most outlets and an unwise use of a news organization’s limited resources. And it would mean removing anything resembling a significant truth claim that a social media staffer couldn’t immediately verify as true.

They could shut off comments altogether, on their websites and on social media platforms, to the extent those platforms allow them to. I happen to think news sites that turn off comments on their websites are making a perfectly reasonable choice. But it’s hard to have a valuable presence on social media without, you know, the social bit. And the platforms are rarely incentivized to allow that sort of caution; after all, they want their users saying engaging and/or outrageous things to each other. It’s kind of their thing.

Or they could just accept the truly massive potential liability that comes with every rando’s comment being a multi-million-dollar lawsuit waiting to happen.

I don’t know enough about Dylan Voller’s case to have an opinion on the merits of his situation here. (Worth noting: The Australian courts haven’t yet ruled on whether or not the Facebook comments in question were actually defamatory. They’ve just concluded that if they are, the news sites will be liable for them.)

But what happens when someone like Peter Thiel sees this sort of ruling as a legal opening to go after a news outlet they don’t like?

Even darker: If the identity of the actual commenter doesn’t matter when it comes to a news site’s liability, what’s to stop a bad actor from intentionally leaving defamatory comments all over a news site’s Facebook page — and then suing the outlet for millions if it lets one slip through? Could John Smith himself fire up a Tor browser and write “John Smith is a pedophile” via a burner Facebook account — and then sue the Facebook page’s owner over it?[1] If malicious actors can, in effect, assign you legal liability without any action taken on your part (beyond, you know, being on Facebook), that’s a recipe for chaos.

And there’s nothing in this ruling to limit its impact to commercial news operations. If someone leaves a libelous comment on a company’s Facebook page, a nonprofit’s Facebook page, your Facebook page — by the principles advanced here, legal liability swings straight to whoever had the temerity to post something online within earshot of a comment box. Remember:

A publisher’s liability does not depend upon their knowledge of the defamatory matter which is being communicated or their intention to communicate it.

Each appellant became a publisher of each comment posted on its public Facebook page by a Facebook user as and when that comment was accessed in a comprehensible form by another Facebook user…

The creation of the public Facebook page, and the posting of content on that page, encouraged and facilitated publication of comments from third parties. The appellants were thereby publishers of the third-party comments.

I don’t expect a ruling like this in the United States anytime soon. But other countries across Europe and Asia? It’s easy to imagine — especially in less-free countries where the man who lives in the palace would be happy to open up independent media to a whole new genre of lawsuits. There are no more courts to appeal to in Australia, but there’s still time to keep this idea from crossing to other shores.

Original photo by Mariano Mantel used under a Creative Commons license. Apologies, as always, to the Sydney Opera House.

[1] My apologies to anyone unlucky enough to actually be named John Smith.
Joshua Benton is the senior writer and former director of Nieman Lab. You can reach him via email (joshua_benton@harvard.edu) or Twitter DM (@jbenton).