EU seeking crackdown on AI

6 Jun, 2023 04:40
The bloc is weighing new restrictions on online material generated by artificial intelligence

The European Union has called for new measures forcing big tech firms to clearly label any content generated using artificial intelligence. The bloc's officials are hoping to build on prior legislation related to manipulated videos, audio and photos, also known as ‘deep fakes’.

The EU’s vice president for values and transparency, Vera Jourova, advocated for stepped-up AI restrictions during a press briefing on Monday, arguing that technology companies which have integrated artificial intelligence must “build in necessary safeguards” to prevent abuses by “malicious actors.”

“Signatories who have services with a potential to disseminate AI-generated disinformation should in turn put in place technology to recognize such content and clearly label this to users,” she said, citing services offered by Microsoft and Google.

While EU lawmakers are working to pass the Artificial Intelligence Act, which could impose new rules on all companies helping to create content with AI, another regulation with similar provisions has already been adopted. Passed last year, the Digital Services Act will soon force major search engines to identify any AI-manipulated material with “prominent markings,” a move aimed at cracking down on misinformation online. 

Jourova went on to announce that 44 signatories to the EU’s 2022 Code of Practice on Disinformation will form a new association to consider how to address emerging technologies such as AI. The code’s participants include a number of social media platforms and other tech firms, among them Google, Meta, Microsoft, TikTok, Twitch and Vimeo. Though Twitter previously took part, the company recently stepped away from the project, according to Politico, a decision slammed by the EU official.

“We believe this is a mistake from Twitter. They chose confrontation, which was noticed very much in the Commission,” Jourova continued, adding that the Elon Musk-owned platform should expect greater scrutiny from regulators. 

The EU has repeatedly declared its misgivings about AI as programs such as ChatGPT and DALL-E quickly rose to prominence in recent years, with tools capable of creating highly realistic fakes now easily accessible to millions of netizens. The bloc has called for “tailor-made regimes” for services like OpenAI’s ChatGPT, and is now debating amendments to strengthen the Artificial Intelligence Act before it comes up for a general vote, including a classification scheme to label “high-risk” AI tools.