Instagram algorithm sexualizing children – WSJ
The algorithm behind Instagram’s Reels service recommends “jarring doses of salacious content” involving children, according to a Wall Street Journal investigation published on Monday that used test accounts set up by the outlet.
The outlet says the purpose of the investigation was to see what kind of content the platform would suggest to accounts that primarily follow young gymnasts, cheerleaders, and other teen and preteen influencers.
The WSJ says it decided to conduct the test after noticing that many of the followers of these types of accounts were adult men, some of whom had also demonstrated interest in sexual content related to both children and adults.
During its testing, the outlet claims that Instagram’s algorithm served up a large amount of salacious content, including “risqué footage of children as well as overtly sexual adult videos.” In between those videos, the WSJ says the platform also featured ads for some of the biggest US brands.
In one example, the outlet says Instagram offered a stream of videos which included an ad for a dating app, a video of someone stroking a life-size latex doll, and a video of a young girl lifting her shirt to expose her midriff. Another stream featured a commercial, followed by a video of a man lying on a bed with his arm around a ten-year-old girl.
According to the WSJ, the Canadian Centre for Child Protection ran similar tests on Instagram and reported similar results.
Meta Platforms, which owns Instagram and Facebook, responded to the WSJ investigation by stating that the outlet’s tests produced a “manufactured experience” that does not represent what the vast majority of its billions of users see.
Nevertheless, a number of businesses have chosen to pull their advertising from Meta’s platforms, with Match spokeswoman Justine Sacco saying, “We have no desire to pay Meta to market our brand to predators or place our ads anywhere near this content.”
Last month, Instagram was also targeted in a lawsuit filed by the attorneys general of 41 US states, which accused the platform and its parent company of contributing to an ongoing mental health crisis among young people by purposely enticing them into compulsive social media use.