June 17, 2019, 10:26 a.m.
Mobile & Apps
LINK: www.theatlantic.com   |   Posted by: Laura Hazard Owen   |   June 17, 2019

The Atlantic is launching a new skill for Amazon Echo and Google Home: a “single, illuminating idea” every weekday. From the release:

Every weekday, when people ask their smart speakers to play The Atlantic’s Daily Idea, they’ll hear a condensed, one- to two-minute read of an Atlantic story, be it “An Artificial Intelligence Developed Its Own Non-Human Language” or “The Case for Locking Up Your Smartphone.” The skill will include reporting from across The Atlantic’s science, tech, health, family, and education sections, as well as the magazine’s archives, representing the work of dozens of writers.

The Atlantic’s briefing joins a number of other offerings from publishers. Ownership of the devices is increasing: an estimated 65 million U.S. adults, around 23 percent of the population over 12, own one, and per the new Reuters Digital News Report, 12 percent of U.S. adults and 14 percent of U.K. adults said they had used one in the past week. But the percentage of people who use the devices for news is quite a bit smaller; people still mostly use them for music and the weather.
