Facebook has introduced machine learning technology trained to recognize objects and describe photos, enabling visually impaired users to ‘see’ pictures posted on the social network.
The feature, called “automatic alt text,” generates a description of a photo using object recognition technology. People using screen readers on their iOS devices will hear a list of items a photo may contain as they swipe past images on Facebook.
In the past, people with visual impairments could only hear the name of the person who shared the photo, followed by the term “photo” when they came across an image in their News Feed.
“This is possible because of Facebook’s object recognition technology, which is based on a neural network that has billions of parameters and is trained with millions of examples,” the company said in a statement.
The current list of concepts covers a range of things that can appear in photos, including people's appearance, nature, transportation, sports and food.
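As a rough illustration of how recognition output could be turned into spoken alt text, the sketch below assumes a hypothetical set of concept scores from an image classifier and keeps only the high-confidence tags; the function name, confidence threshold and "Image may contain" phrasing are illustrative assumptions, not Facebook's actual implementation.

# Minimal sketch: turning hypothetical object-recognition scores into alt text.
# The concept scores stand in for the output of a trained classifier; the 0.8
# threshold and phrasing are assumptions for illustration only.

def build_alt_text(concept_scores, threshold=0.8):
    """Keep only concepts the model is confident about and join them
    into a short sentence a screen reader can speak."""
    confident = [concept for concept, score in
                 sorted(concept_scores.items(), key=lambda kv: -kv[1])
                 if score >= threshold]
    if not confident:
        return "Image may contain: photo"
    return "Image may contain: " + ", ".join(confident)

if __name__ == "__main__":
    # Hypothetical scores for one photo, as a recognition model might return.
    scores = {"two people": 0.96, "smiling": 0.91, "outdoor": 0.88,
              "sunglasses": 0.55}
    print(build_alt_text(scores))
    # -> Image may contain: two people, smiling, outdoor

In a sketch like this, low-scoring concepts are dropped rather than spoken, trading completeness for accuracy so that screen-reader users are not read tags the model is unsure about.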
"As Facebook becomes an increasingly visual experience, we hope our new automatic alternative text technology will help the blind community experience Facebook the same way others enjoy it," the company said.
Facebook is launching automatic alt text first on iOS screen readers set to English, but said it plans to add the function for other languages and platforms in the near future.
A total of 285 million people are estimated to be visually impaired worldwide: 39 million are blind and 246 million have low vision, according to the World Health Organization (WHO).
Over two billion pictures are shared daily across Facebook, Instagram, Messenger and WhatsApp, according to Facebook.
"While this technology is still nascent, tapping its current capabilities to describe photos is a huge step toward providing our visually impaired community the same benefits and enjoyment that everyone else gets from photos," the Silicon Valley-based social network said.
Last week, Microsoft announced new additions to its personal assistant Cortana to allow the system to "see, hear, speak, understand and interpret our needs using natural methods of communication," among other things.
Microsoft said that a "Seeing AI" research project was underway, aimed at helping those who are visually impaired or blind to "better understand who and what is around them."