Sexually explicit AI-generated ‘photos’ of pop singer Taylor Swift that spread on social media platforms earlier this week are “alarming,” White House spokesperson Karine Jean-Pierre told reporters during a news briefing on Friday, promising that action against nonconsensual pornography was forthcoming.
“We’re going to do what we can to deal with this issue,” Jean-Pierre said, adding that Congress should pass legislation and social media platforms should crack down on the sharing and posting of such images.
“While social media companies make their own independent decisions about content management, we believe they have an important role to play in enforcing their own rules to prevent the spread of misinformation, and non-consensual, intimate imagery of real people,” she said.
One of the images, posted on X, was viewed over 47 million times before the account that posted it was suspended, according to the New York Times. X said it was working to remove the images and has suspended several of the accounts that posted them. The platform’s terms of service prohibit the sharing of AI-generated images of real people, pornographic or otherwise.
A search for “Taylor Swift” on X returned an error on Saturday afternoon. A Swift fan on the platform said the search term had been “banned.” However, it was still possible to use “Taylor Swift” in a search, as long as another word or words came first – whether “protect” or “AI generated.”
Some of the images were originally posted to a Telegram group devoted to “non-consensual AI-generated sexual images of women” on Thursday, according to tech blog 404 Media. Others had been floating around on 4chan and other forums for weeks before they crossed over into mainstream social media and went viral.
Many of the images were created using Microsoft’s Designer, a commercially available AI text-to-image generator. The Telegram group explains to the uninitiated how to circumvent Microsoft’s protections against celebrity deepfakes and porn, noting that while the program will not generate an image in response to the prompt “Taylor Swift,” it will respond agreeably to “Taylor singer Swift.”
Microsoft told the blog it was “investigating these reports and…taking appropriate action to address them,” pointing out that its terms of service forbid using the programs to create “adult or non-consensual intimate content.”
US lawmakers introduced a bill in the House of Representatives earlier this month aimed at federally regulating the use of AI for audio and video deepfakes. The No Artificial Intelligence Fake Replicas And Unauthorized Duplications Act (NO AI FRAUD Act) is reportedly based on the similar Senate bill, the Nurture Originals, Foster Art, and Keep Entertainment Safe Act (NO FAKES Act), which would allow celebrities to sue anyone who creates “digital replicas” of their image or voice without permission.