“At the time of this writing, it is difficult to avoid the realization that one side of politics — mainly in the U.S. but also elsewhere — appears more threatened by research into misinformation than by the risks to democracy arising from misinformation itself.”
Back in 2012, the spread of outlandish conspiracy theories from social media into the mainstream was a relatively new phenomenon, and an indication of what was to come.
Our research found that posts that cited scientists or other scholars, whether they came from influencers or from women without large followings, received more likes, comments, retweets, and hashtags.
Conspiracy theories seem to meet psychological needs and can be almost impossible to eradicate. One remedy: Keep them from taking root in the first place.