Facebook testing AI to spot potentially suicidal members
Facebook has started using artificial intelligence to identify and help users who display suicidal thoughts.
The social network has developed an algorithm to spot warning signs in posts, comments, and live videos following three separate suicides broadcast on Facebook Live in as many months.
On Wednesday, the company’s Global Security team said it is adding prevention tools to the Live feature that will give concerned viewers the option to reach out to the person directly and to report the video to Facebook staff.
The person filming the video will then see a list of resources and tips on screen that encourage them to reach out to a friend or contact a helpline.
The tool is only available to US users at the moment.
“Facebook is in a unique position — through friendships on the site — to help connect a person in distress with people who can support them,” reads the social network’s announcement.
The company’s post cites World Health Organization statistics, which say one person dies by suicide every 40 seconds and that suicide is the second leading cause of death among 15-29-year-olds, as the reason for its increased prevention efforts.
On February 19, Naika Venant, 14, from Miami broadcast her suicide on Facebook Live at the end of a two-hour stream. One day later, aspiring actor Jay Bowdy, 33, shot himself in a car after telling his followers, via Facebook Live, that he intended to take his life.
On December 30, Katelyn Nicole Davis, 12, died by suicide while posting a Facebook Live stream that lasted over 40 minutes.
Facebook has already added a service for its Messenger users to connect directly with crisis support organizations like Crisis Text Line, the National Eating Disorder Association, and the National Suicide Prevention Lifeline.
Facebook founder Mark Zuckerberg has previously said he intends to use AI to identify terrorism, violence, and bullying across the social media network.
Meanwhile, Twitter announced on Wednesday it would work to identify accounts “engaging in abusive behavior” without having to rely on users to report it.
The company says it will take action against abusive users by hiding their posts from non-followers for a set period of time, and it will now let users block certain keywords or phrases from their timelines.
“We aim to only act on accounts when we're confident, based on our algorithms, that their behavior is abusive,” wrote Twitter’s engineering chief Ed Ho.