Bot or not? Vice’s Motherboard has a piece up today — a “botifesto,” if you will (or if you must) — that gives a comprehensive rundown of just how useful, harmful, and ultimately inescapable bots are to our digital lives.
“[G]enerally speaking,” Motherboard notes, “these sets of algorithms are responsible for so much on the backend of the Internet.” Bots are used everywhere, from sharing up-to-date information on earthquakes to launching a DDoS attack on a news site after it publishes an important cover story. They deliver information in Slack, sharing breaking news or helping human editors decide which stories are likely to take off on social media. The New York Times’ Election Bot even sends readers’ 2016 presidential election questions straight to the newsroom.

In the realm of journalism, bots can serve as a tireless extension of human reporters. From Motherboard:
Bots can be useful for making value systems apparent, revealing obfuscated information, and amplifying the visibility of marginalized topics or communities. Twitter accounts such as @congressedits, @scotus_servo, and @stopandfrisk use re-contextualization in order to highlight information in a way that has traditionally been the role of journalistic organizations. The bot can be thought of as more than an assistant: it can be a kind of civic prosthetic, a tool that augments our ability to sense other people and systems. Bots won’t replace journalists, but they can help supercharge them by automating tasks that would otherwise have to take place manually.
Even the smallest outlets can put bots to work: The New England Center for Investigative Reporting and WGBH News, for instance, launched MassBudgetBot, a little bot that tweets out earmarks from the 2016 Massachusetts state budget.
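The mechanics of a bot like that are simple enough to sketch. Below is a minimal, hypothetical illustration (not MassBudgetBot’s actual code): it formats structured earmark records into tweet-sized messages and hands them to a pluggable `post` function. The data fields, tweet template, and sponsor name are all assumptions made for the example.

```python
def format_earmark_tweet(line_item, amount, sponsor, max_len=280):
    """Format one budget earmark as a tweet-sized message.

    The field names and template are illustrative assumptions,
    not the real bot's format.
    """
    text = f"Earmark: ${amount:,} for {line_item} (sponsor: {sponsor}) #MAbudget"
    # Trim to the platform's character limit if needed.
    return text if len(text) <= max_len else text[: max_len - 1] + "…"


def run_bot(earmarks, post=print):
    """Format each earmark and 'post' it.

    `post` defaults to print so the sketch runs without any
    Twitter credentials; a real bot would pass in an API call here.
    """
    for item in earmarks:
        post(format_earmark_tweet(**item))


run_bot([
    {"line_item": "harbor dredging, Gloucester",
     "amount": 150000,
     "sponsor": "Rep. Example"},  # hypothetical sponsor name
])
```

Separating formatting from posting keeps the bot testable: the same code runs in a dry-run mode (printing to a console) or against a live social media API.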
There are big ethical questions around accountability (as a Tow Center report on the state of automated journalism also brought up). A bot will eventually get things wrong if it is fed inaccurate information, and a bot could commit libel. If you make a bot, are you prepared to deal with the fallout when your tool does something you yourself would not choose to do? How do you stem the spread of misinformation published by a bot? Automation and “big” data certainly afford innovative reporting techniques, but they also highlight the need for revamped journalistic ethics.

But as Frédéric Filloux suggests in his latest Monday Note, the proper use of bots could be a “gamechanger” for news organizations looking to deliver personalized information to readers quickly, and in a manner readers are already familiar with:
The survival of the news industry depends, for a large part, on its ability to create services on top of their contents streams. But getting into personalized services requires a major leap forward for which “Conversational Bots” could become strategic tools.
The full Motherboard piece, the written outcome of a workshop of “bot experts” organized by Sam Woolley of the Data & Society Institute, is here.