As you are most certainly aware, Republican frontrunner Donald Trump recently suggested that Muslims be temporarily banned from entering the United States. In this country, that kind of talk is not illegal: the First Amendment protects our right to free speech, to give voice to whatever we might believe to be true or wise, from government interference.
But Facebook is a private company, and it reserves the right to moderate, or censor, any speech on its platform that doesn’t adhere to its Community Standards. Users can flag content they think should be removed, and Facebook’s on-staff content moderators will review the flagged post and evaluate it, in theory, according to those standards.
I say “in theory” because the same rules don’t apply to everybody. When Trump uploaded a video of himself proposing his temporary Muslim ban, some people reported the post to Facebook as offensive. It turns out the post did technically violate Facebook’s internal community standards, which prohibit, among other things, language that calls “for violence, exclusion, or segregation” of people based on their ethnicity or religion.
My colleagues and I shared his video on our own accounts, along with status updates endorsing Trump’s message. Then we reported each other’s posts, and Facebook removed them. But Trump’s post remained.
Facebook, when asked about this inconsistency, said it left Trump’s post up out of respect for healthy political discourse, even when the message is one the social network’s own CEO doesn’t agree with. A more cynical reading is that Facebook, understandably, wants to be the place where big names go direct to the world with their own stories, circumventing the traditional media and driving more activity on its platform; censoring celebrities isn’t going to encourage other famous people to join in, right? Not to mention Facebook’s newly launched play for political advertising…
But I can only speculate about all this; I don’t have the kind of insider information necessary to draw a conclusion about Facebook’s motives. (Though I have a lot of sympathy for the difficult position social networks find themselves in as they deal with Trump, online harassment, and all sorts of complicated things. How do you protect free speech and also deal with hate? How do you retain your users if you don’t let the most vocal among them speak? How do you retain your users if they’re being chased off your platform by sexists or racists? Could Zuckerberg possibly remove Trump’s video without making the GOP candidate seem even more persecuted and brave in the eyes of his supporters?)
What I can say is this: Increasingly, regardless of how they wield it, platforms have the power to decide what gets heard and what doesn’t. After all, Facebook has 968 million daily active users; your website doesn’t. This isn’t exactly a new phenomenon (Facebook’s News Feed has been algorithmically determined for years), but the trend will only become more consequential in 2016. Twitter has started testing an algorithmically determined feed of its own, and already, with features like “While You Were Away” and “Top Tweets,” I find myself scrolling pretty far down the home screen before I see posts the robot didn’t choose for me. Between platforms deciding what they will and won’t take down, determining what appears at the top of a news feed and what doesn’t appear at all, their growing role as our content management systems, and Facebook’s aspirations to control Internet access itself in various parts of the world, it’s not overly paranoid to wonder what will and won’t be heard on social media in the coming years.
News organizations are concerned about all this because we rely on social networks to ensure our stories are seen, but it should also matter to the sources who broadcast directly to their audiences on these platforms, and to “regular” people, too. What does it mean that Trump’s voice can be heard calling for the exclusion of Muslims at our borders, but yours can’t? What could that evolve into? Could it affect elections? What if the tables were turned, and the voices being removed from Facebook were the ones saying Muslims should be allowed in the country? Meanwhile, as Twitter slowly enters its brave new world of algorithmic news feeds, the company is a bit like a climber up in a tree, testing each branch to see whether it will bend or break before gingerly stepping forward; after all, many users (and many journalists) have decried the coming algorithmic timeline as the demise of the platform itself. But look at Facebook: its algorithmically ranked feed does not seem to have hurt its growth at all.
For the record, I think it will bend. After this item publishes, I will post it to my Facebook and Twitter accounts. Who knows who will see it, and who won’t.
Anjali Mullany is editor of Fast Company Digital.