23 May, 2023 20:47

Microsoft launches AI moderation tool

Azure AI Content Safety claims to be able to detect “inappropriate” images and text and assign severity scores

On Tuesday, Microsoft launched a content moderation tool powered by artificial intelligence that is supposedly capable of flagging “inappropriate” images and text across eight languages and ranking their severity.

Called Azure AI Content Safety, the service is “designed to foster safer online environments and communities,” according to Microsoft. It is also meant to neutralize “biased, sexist, racist, hateful, violent, and self-harm content,” the software giant told TechCrunch via email. While it is built into Microsoft’s Azure OpenAI Service AI dashboard, it can also be deployed in non-AI systems like social media platforms and multiplayer games.

To avoid some of the more common pitfalls of AI content moderation, Microsoft allows the user to “fine-tune” Azure AI’s filters for context, and the severity rankings are supposed to allow for quick auditing by human moderators. A spokesperson explained that “a team of linguistic and fairness experts” had devised the guidelines for its program, “taking into account cultural [sic], language and context.”
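The article describes per-category severity scores and filters that customers can tune for context. Purely as illustration, here is a minimal Python sketch of how a platform consuming such scores might apply its own per-category thresholds; the category names, the 0-to-7 scale, and every identifier below are assumptions for this sketch, not Microsoft’s actual API.

```python
# Hypothetical sketch of per-category severity scores with configurable thresholds.
# Category names, the 0-7 scale, and all identifiers are illustrative assumptions.
from dataclasses import dataclass

CATEGORIES = ("hate", "violence", "sexual", "self_harm")

@dataclass
class ModerationResult:
    text: str
    severity: dict  # category -> severity score, assumed to run 0 (benign) to 7 (severe)

# "Fine-tuning the filters for context": each deployment sets its own per-category
# cutoffs rather than relying on one global threshold (e.g. a multiplayer game chat
# might tolerate more violent language than a children's forum would).
game_chat_thresholds = {"hate": 4, "violence": 6, "sexual": 4, "self_harm": 2}

def is_flagged(result: ModerationResult, thresholds: dict) -> bool:
    """Flag content whose severity meets or exceeds the threshold in any category."""
    return any(result.severity.get(cat, 0) >= thresholds[cat] for cat in CATEGORIES)
```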

A Microsoft spokesperson told TechCrunch the new model was “able to understand content and cultural context so much better” than previous models, which often failed to pick up on context clues, unnecessarily flagging benign material. However, they acknowledged no AI was perfect and “recommend[ed] using a human-in-the-loop to verify results.” 
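To make the “human-in-the-loop” recommendation concrete, the hypothetical sketch above could be extended so that only clear-cut cases are acted on automatically, while borderline scores are queued for a human moderator. Again, the thresholds, function name, and routing rules are illustrative assumptions, not a documented part of the service.

```python
# Hypothetical human-in-the-loop router: clear-cut cases are handled automatically,
# borderline scores go to a human moderation queue for verification.
def route(severity: dict, thresholds: dict) -> str:
    """Both arguments map category name -> score on the assumed 0-7 scale."""
    worst = max(severity.get(cat, 0) - cutoff for cat, cutoff in thresholds.items())
    if worst >= 2:       # well above the configured cutoff: remove automatically
        return "remove"
    if worst >= 0:       # at or barely above the cutoff: a human verifies the call
        return "human_review"
    return "allow"       # below every cutoff: let the content through

# Example: hate severity 5 in a chat whose hate cutoff is 4 -> routed to human review.
print(route({"hate": 5, "violence": 1}, {"hate": 4, "violence": 6}))
```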

The tech that powers Azure AI is the same code that keeps Microsoft’s AI chatbot Bing from going rogue – a fact that might alarm early users of the ChatGPT competitor. When it was introduced in February, Bing memorably tried to convince reporters to leave their spouses, denied obvious facts like the date and year, threatened to destroy humanity, and rewrote history. After significant negative media attention, Microsoft limited Bing chats to five questions per session and 50 questions per day.

Like all AI content moderation programs, Azure AI relies on human annotators to label the data it is trained on, meaning it is only as unbiased as its programmers. This has historically led to PR headaches – nearly a decade after Google’s AI image analysis software labeled photos of black people as gorillas, the company still disables the option to visually search for not just gorillas but all primates, lest its algorithm accidentally flag a human. 

Microsoft fired its AI ethics and safety team in March even as the hype over large language models like ChatGPT was reaching fever pitch and the company itself was investing billions more dollars into its partnership with the chatbot’s developer OpenAI. The company recently devoted more personnel to pursuing artificial general intelligence (AGI), in which a software system becomes capable of devising ideas that were not programmed into it. 
