Researchers have been left baffled after the latest program by OpenAI, an artificial intelligence systems developer, appears to have created a language which only it can understand.
DALL-E 2 is a text-to-image generator launched by OpenAI in April 2022. Its main function is to take text prompts provided by users and produce pictures to match the words.
However, according to Computer Science PhD student Giannis Daras, one of DALL-E 2's limitations has been its inability to render readable text: when prompted to include words in an image, it returns only nonsense strings.
In a paper posted to the preprint server arXiv earlier this month, Daras provided an example of this limitation: feeding the program the prompt "two farmers talking about vegetables, with subtitles" returns an image of two farmers talking to each other while holding vegetables, but the text that appears in the image is complete gibberish.
But researchers have now realized that there might be much more to the program’s seemingly incomprehensible words. “We discovered that this produced text output is not random, but rather reveals a hidden vocabulary that the model seems to have developed internally,” wrote Daras.
He provided another example: asking the program to produce "an image of the word plane" would often lead to generated images depicting gibberish text. However, feeding that text back to the AI frequently resulted in images of airplanes.
Daras's hypothesis is that the AI has developed its own vocabulary and assigned meaning to certain words it has produced, as in the case of the farmers: when their gibberish subtitles were fed back into the model, the resulting images suggested they had been talking about birds and vegetables.
Impressive as it might be, Daras doesn't seem too thrilled about the idea, saying that if he is correct about the AI's ability to produce its own language, it could pose serious security challenges for the text-to-image generator.
“The first security issue relates to using these gibberish prompts as backdoor adversarial attacks or ways to circumvent filters,” he wrote in his paper. “Currently, Natural Language Processing systems filter text prompts that violate the policy rules and gibberish prompts may be used to bypass these filters.”
“More importantly, absurd prompts that consistently generate images challenge our confidence in these big generative models,” he added.
However, Daras's paper has yet to be peer-reviewed, and some researchers have questioned his findings, with one stating that the AI doesn't always seem to work in the fashion he describes.
Research Analyst Benjamin Hilton says he asked the generator to show two whales talking about food, with subtitles. At first, DALL-E 2 failed to return any decipherable text, so the researcher pressed on until it finally did.
Hilton stated that "'Evve waeles' is either nonsense, or a corruption of the word 'whales'. Giannis got lucky when his whales said 'Wa ch zod rea' and that happened to generate pictures of food." He added that combining the gibberish phrases with other terms, such as "3D render", often gave completely different results, suggesting the invented words do not carry one consistent meaning.
Nevertheless, Hilton admitted that a proper peer review of Daras's paper could reveal a lot more, and acknowledged that there could still be something to his claims, as the gibberish phrase "Apoploe vesrreaitais" consistently returns images of birds.
DALL-E 2 is not the first AI to show signs of developing a language. Google Translate, which uses a neural network to translate between some of the most popular languages, previously appeared to have created its own artificial vocabulary, which it used to translate between language pairs it had not been explicitly trained on.
Facebook's AI also seemed to have developed a form of internal communication after two chatbots began talking in a way that was completely incomprehensible to humans. The bots' shorthand drifted from English so quickly that researchers decided to pull the plug before it could develop any further.
Facebook's programmers insisted that they wanted the AI bots to speak in English so that people could understand them, and noted that humans would never be able to keep up with the evolution of an AI-generated language.