If The New York Times hadn’t reported on the fake Twitter follower factories.
If ProPublica hadn’t investigated targeted Facebook ads discriminating against users based on race, disability, and gender.
If Gizmodo hadn’t uncovered the way Facebook’s “People You May Know” feature can create shadow profiles for non-users.
If the Tow Center and The Washington Post hadn’t analyzed the depth of the Russian disinformation campaign on Facebook.
If journalists and researchers stopped investigating activity on social media platforms — especially Facebook, one of the most closed platforms and also one of the most widely abused — the “thens” are too important to sacrifice.
Currently, most tech investigations of Facebook – such as our exposure of discriminatory advertising last year – may have technically violated Facebook’s TOS. But without violating those rules, journalists can’t investigate our most important platform for public discourse.
— Julia Angwin (@JuliaAngwin) August 7, 2018
That’s the argument made by the Knight First Amendment Institute in a letter sent to Mark Zuckerberg and other Facebook executives on behalf of five journalists and researchers. The letter asks Facebook to create a “safe harbor” for “certain journalism and research on its platform” that isn’t as limited by the platform’s terms of service, which the company has wielded in the past:
We are unaware of any case in which Facebook has brought legal action against a journalist or researcher for a violation of its terms of service. In multiple instances, however, Facebook has instructed journalists or researchers to discontinue important investigative projects, claiming that the projects violate Facebook’s terms of service. As you undoubtedly appreciate, the mere possibility of legal action has a significant chilling effect. We have spoken to a number of journalists and researchers who have modified their investigations to avoid violating Facebook’s terms of service, even though doing so made their work less valuable to the public. In some cases, the fear of liability led them to abandon projects altogether.
In that passage specifically, the footnotes contain legal citations that could serve as the outline of an argument grounded in First Amendment concerns. The authors — Jameel Jaffer, the institute’s executive director; Ramya Krishnan, legal fellow; and staff attorney Carrie DeCell — also appended a proposed terms-of-service safe harbor specifically for “news-gathering and research projects” at the end of their letter.
So far, Facebook doesn’t seem to be taking the bait. A Facebook spokesperson told The Washington Post that the company is reviewing the letter (the authors asked for a response by September 7). But Campbell Brown, Facebook’s head of global news partnerships, said in a statement that “we appreciate the Knight Institute’s recommendations,” but that Facebook has “strict limits” on third parties’ use of personal information. She pointed to CrowdTangle and the forthcoming API for political advertising on Facebook as tools journalists can use instead.
“That [political advertising], of course, is a small sliver of what disturbs people about the Facebook world, leaving a lot of other information officially out of journalists’ reach,” wrote Kashmir Hill and Surya Mattu at Gizmodo. They were the builders behind the “People You May Know” feature inspector, and they detailed how Facebook told them it violated the terms of service:
We argued that we weren’t seeking access to users’ accounts or collecting any information from them; we had just given users a tool to log into their own accounts on their own behalf, to collect information they wanted collected, which was then stored on their own computers. Facebook disagreed and escalated the conversation to their head of policy for Facebook’s Platform, who said they didn’t want users entering their Facebook credentials anywhere that wasn’t an official Facebook site—because anything else is bad security hygiene and could open users up to phishing attacks. She said we needed to take our tool off Github within a week.
Facebook has been doing damage control over its data security in the wake of the Cambridge Analytica scandal (but then, word also got out that Facebook is asking banks to share user information. Really building trust here). And the company has also been reluctant to be put in the position of deciding what’s news and what’s not — see “human editors,” circa 2016.
That’s why we’re telling this story for the first time: When we released a tool to help people study their People You Know recommendations, Facebook wasn’t happy about it. https://t.co/of1r5eK5SU
— Kashmir Hill (@kashhill) August 7, 2018
I reached out to Jaffer about that: Could this safe harbor setup put Facebook in the position of deciding who belongs in “news-gathering and research projects”? His response:
We’ve deliberately avoided asking Facebook to decide who is, and who isn’t, a journalist. Instead, we’ve put the focus on the nature of the project. If Facebook adopted our proposed safe harbor, Facebook would ask, with respect to any given project, whether the purpose of the project is to inform the general public about matters of public concern, and whether the project appropriately protects the privacy of Facebook’s users and the integrity of Facebook’s platform. Obviously, our proposal contemplates that Facebook will be deciding which projects satisfy the safe harbor and which don’t. But we think Facebook exercising this judgment is preferable by far to the current state of affairs, under which Facebook categorically prohibits the use of digital investigative tools that are crucial to the study of the platform.
The company just joined the swell of other platforms removing the prominent conspiracist Alex Jones from their services. Would the benefits of this reporting safe harbor outweigh the risks it could open up for the future of news on the platform?