Nieman Foundation at Harvard
April 4, 2019, 9:50 a.m.

In Australia, a new law on livestreaming terror attacks doesn’t take into account how the internet actually works

It forces every part of the internet stack — platforms, hosting providers, ISPs — to remove violent videos before they are even made aware of them.

In response to last month's livestreamed terror attack in New Zealand, the Australian Parliament has just passed new laws. They amend the Commonwealth Criminal Code, adding two substantive new criminal offenses.

Both are aimed not at terrorists but at technology companies. And how that’s done is where some of the new measures fall down.

The legislation was rushed through with neither consultation nor sufficient discussion. The laws focus on abhorrent violent material — capturing the terrorist incident in New Zealand, but also online content created by a person carrying out a murder, attempted murder, torture, rape, or violent kidnapping. The laws do not cover material captured by third parties who witness a crime — only content from an attacker, their accomplices, or someone who attempts to join the violence.

The aim is to prevent perpetrators of extreme violence from using the internet to glorify or publicize what they have done. This will reduce terrorists’ ability to spread panic and fear and reduce criminals’ ability to intimidate. This is about taking away the tools harmful actors use to damage society.

What the legislation aims to do

Section 474.33 of the Criminal Code makes it a criminal offense for any internet service provider, content service, or hosting service to fail to notify the Australian Federal Police “within a reasonable time” once they become aware their service is being used to access abhorrent violent material that occurred or is occurring in Australia. Failing to comply can result in a fine of 800 penalty units (currently $128,952).

Section 474.34 makes it a criminal offense for a content service or hosting service, whether inside or outside Australia, to fail to expeditiously take down material made available through their service and accessible in Australia.

The criminal element of fault is not that the service provider deliberately makes the material available — but rather that they are reckless with regards to identifying such content or providing access to it. “Reckless,” however, has been given a rather special meaning.

What Australia’s new law gets right

There is a clear need for new laws. Focusing on regulating technology services is the right approach. Back in 2010, when I first raised this idea, it was considered radical; today even Mark Zuckerberg supports government regulation. We’ve moved away from the idea of technology companies of all types being part of a safe harbor that keeps the internet unregulated. That’s to be welcomed.

Penalties for companies that behave recklessly — failing to build suitable mechanisms to find and remove abhorrent violent material — are also to be welcomed. Such systems should indeed be expanded to cover credible threats of violence and major interference in a country’s sovereignty, such as efforts to manipulate elections or cause mass panics through fake news.

“Recklessness” as it is ordinarily understood — that is, failing to take the steps a reasonable person in the same position would take — allows the standard to slowly rise as technology and systems for responding to such incidents improve. Also to be welcomed is the new ability for the eSafety Commissioner to issue a notice to a company identifying an item of abhorrent violent material and to demand its removal. When the government is aware of such content, there must be a way to require rapid action; the law does this.

What it gets wrong

One potential problem with the legislation is the requirement for internet service providers (ISPs) to notify the Australian Federal Police if they are aware their service can be used to access any particular abhorrent violent material.

As ISPs provide access for consumers to everything on the internet, this seeks to turn ISPs into a national surveillance network. It has the potential to move us from an already problematic metadata retention scheme into an expectation for ISPs to apply deep packet inspection monitoring of everything that is said.

Content services (including social media platforms such as Facebook, YouTube, and Twitter, as well as regular websites) and hosting services (provided by companies ranging from Telstra, Microsoft, and Amazon through to companies like Servers Australia and Synergy Wholesale) have a more serious problem.

Under the new laws, if content is online at the time a notice is issued by the eSafety Commissioner, the legal presumption will be that the company was behaving recklessly at that time. The notice is not a demand to respond, but rather a finding that the response is already too slow. The relevant section, 474.35(5), states (emphasis added) that if a notice has been correctly issued:

…then, in that prosecution, it must be presumed that the person was reckless as to whether the content service could be used to access the specified material at the time the notice was issued

While the presumption can be rebutted, this is still quite different from what Attorney General Christian Porter’s press release today claimed:

…the e-Safety Commissioner will have the power to issue notices that bring this type of material to the attention of social media companies. As soon as they receive a notice, they will be deemed to be aware of the material, meaning the clock starts ticking for the platform to remove the material or face extremely serious criminal penalties.

As the law is written, the notice is more of a notification that the clock has already run out of time. It’s like arguing that the occurrence of a terrorist act means “it must be presumed” the government was reckless with regards to prevention. That’s not a fair standard. The idea of the notice starting the clock would in fact be much fairer.

Under this law, a content service provider can be found to have been reckless and to have failed to expeditiously remove content even if no notice was ever issued. In some cases, that may be a good thing — but the law as passed and what the government says it intended don't appear to match.

Hosting services have the worst of it. They provide the space on servers that allows content to appear on the internet — a little like the arrangement between a landlord and a tenant. With hosting plans starting from around $50 a year, there's no margin to cover monitoring and complaints management.

The new laws suggest hosting services will be acting recklessly if they don't monitor their clients closely enough to take action before the eSafety Commissioner issues a notice. They just aren't in a position to do that.

A lot still needs to be done

As it stands, only the expeditious removal of content or suspension of a client’s account can avoid the new offense. The legislation does not define what “expeditious removal” means. There is nothing to suggest the clock would start only after the service provider becomes aware of the content, and the notice from the eSafety Commissioner doesn’t start a clock but says a response is already overdue. This law is designed to apply pressure on companies so they improve their response times and take preemptive action.

What's missing too is a target with safe harbor protections — that is, a clear standard and a rule that companies meeting that standard enjoy immunity from prosecution under this law. That would give companies both a goal and an incentive to reach it.

Also missing is a way to measure response times. If we can’t measure it, we can’t push for it to be continually improved. Rapid removal should be required after a notice from the eSafety Commissioner — perhaps removal within an hour. Fast removal, for example within 24 hours, should be required when reports come from the public.

The exact timelines that are possible should be the subject of consultation with both industry and civil society. They need to be achievable, not merely aspirational. Working together, government, industry and civil society can create systems to monitor and continually improve efforts to tackle online hate and extremism. That includes the most serious content such as abhorrent violence and incitement to violent extremism.

Trust, consultation and goodwill are needed to keep people safe.

Andre Oboler is a senior lecturer at La Trobe University Law School. A version of this article was published at The Conversation.

Photo of a vigil for the Christchurch attacks in Melbourne, Australia, March 18, 2019 by Julian Meehan used under a Creative Commons license.
