Platforms aren’t efficiently self-regulating. Government officials don’t know how Facebook’s advertising works (or some know it too well). The internet can be a cesspool of spiteful users and malicious bots, though yes, in some places it’s also home to genuine digital communities and positive connections. But what can be done?
How about requiring internet companies to be legally liable for the content appearing in their domains?
Auditing algorithms regularly and making the results publicly available?
Launching a large-scale civic literacy and critical thinking campaign?
Giving individuals greater rights over the use, mobility, and monetization of their data?
These are some of the suggestions floated in “Democracy Divided,” a new Canadian report by Public Policy Forum CEO/former Globe and Mail journalist Edward Greenspon and University of British Columbia assistant professor/Columbia Journalism School senior fellow Taylor Owen. The ideas are bold, sure, and maybe a little far-fetched — especially when viewed from the very different regulatory context of the United States — but hey, bold thinking is at least somewhere to start.
“We believe that certain behaviors need to be remedied; that digital attacks on democracy can no more be tolerated than physical ones; that one raises the likelihood of the other in any case; and that a lowering of standards simply serves to grant permission to those intent on doing harm,” they wrote.
Greenspon also authored a report last year about how Canada could strengthen its struggling news ecosystem, with 12 specific steps.

In the new report, Greenspon and Owen start with assumptions like “there is a necessary role for policy; self-regulation is insufficient on its own” and “elected representatives have a responsibility to ensure the public sphere does not become polluted with disinformation and hate by setting rules, not by serving as regulators.”
(Side note: In a survey with results out today, though from Canada’s southern neighbor, internet users narrowly opted for companies to be held accountable for accurate and unbiased information rather than for the government to get involved. But a third felt that users themselves should be responsible instead.)

The recommendations also push for more transparency and accountability from the platforms and companies that host the vast majority of public dialogue today. These include:
Clear information about the ad should be contained in the payload of the ad (e.g., a hover pop-up box for textual and image ads, or subtitle text for video ads). The following data should be included:
- Sponsor of the ad — including the amount spent, the name of the organization that bought the ad, and a list or a link to its disclosed donors;
- Time and targeting parameters — time period during which the ad is running, the demographic characteristics selected by the advertiser (if applicable), the organization whose custom target list the recipient belongs to, and (if applicable) the demographic characteristics that indicate why the recipient was included in a so-called “lookalike” target list;
- Engagement metrics — number of user impressions the ad has paid to reach, number of active engagements by users, and whether the ad is being amplified by the use of bots;
[among others.]
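The report doesn’t prescribe a technical format for any of this, but to make the recommendation concrete, here is one rough sketch of what that per-ad transparency metadata could look like if it were shipped as structured data alongside the creative. Every field and type name below is an illustrative assumption, not something Greenspon and Owen specify.

```typescript
// Hypothetical shape of the transparency metadata that could ride along with
// each ad payload (surfaced as a hover pop-up for image ads or subtitle text
// for video ads). Names are illustrative, not drawn from the report.

interface AdSponsor {
  organizationName: string;          // who bought the ad
  amountSpent: number;               // spend on this ad, in local currency
  disclosedDonorsUrl?: string;       // link to the sponsor's disclosed donors
  disclosedDonors?: string[];        // or the list itself, if it's short
}

interface AdTargeting {
  runStart: string;                  // ISO 8601 timestamp the ad began running
  runEnd?: string;                   // when it stops, if a schedule is set
  demographics?: string[];           // advertiser-selected characteristics, if any
  customListOwner?: string;          // org whose custom target list the viewer is on
  lookalikeBasis?: string[];         // characteristics that placed the viewer in a "lookalike" audience
}

interface AdEngagement {
  paidImpressions: number;           // impressions the advertiser has paid to reach
  activeEngagements: number;         // clicks, shares, and other user actions
  botAmplificationDetected: boolean; // whether bot amplification has been flagged
}

interface AdTransparencyPayload {
  sponsor: AdSponsor;
  targeting: AdTargeting;
  engagement: AdEngagement;
}
```

A platform could render an object like this as the hover box or subtitle the report describes, and expose the same structure to outside auditors rather than leaving them to reverse-engineer it.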
(Another side note: Researchers and journalists have asked Facebook for a safe harbor to conduct similar studies without the risk of openly violating the platform’s terms of service. TBA if they’ll get it.)

Independent researchers have sometimes used indirect means of tapping into algorithms to correct company underestimates and show the numbers of people actually exposed to Russian misinformation. This should happen by design, not subterfuge.
“The internet represents the greatest advance in communications since the printing press, but its consolidation by a handful of giant global companies and the exploitation of its vulnerabilities by individuals and organizations intent on destabilizing our democracy have reversed its early promise and challenged the public interest,” Greenspon and Owen wrote. Read the full report here.