Guardian touts op-ed on why AI takeover won’t happen as ‘written by robot,’ but tech-heads smell a human behind the trick
The Guardian claimed an opinion piece was written in its “entirety” by a language-generating robot, sparking accusations the paper misled the public about the AI’s current capabilities… and probably its true intentions.
“A robot wrote this entire article. Are you scared yet, human?” reads the title of the opinion piece published on Tuesday. The article was attributed to GPT-3, described as “a cutting edge model that uses machine learning to produce human-like text.”
While the Guardian claims that the soulless algorithm was asked to “write an essay for us from scratch,” one has to read the editor’s note below the purportedly AI-penned opus to see that the issue is more complicated. It says that the machine was fed a prompt asking it to “focus on why humans have nothing to fear from AI” and had several tries at the task.
After the robot came up with as many as eight essays, which the Guardian claims were all “unique, interesting and advanced a different argument,” the very human editors cherry-picked “the best part of each” to make a coherent text out of them.
Although the Guardian said that editing GPT-3’s musings took its op-ed team less time than editing many human-written articles, tech experts and online pundits have cried foul, accusing the newspaper of “overhyping” the technology and selling its editors’ own thoughts under a clickbait headline.
“Editor's note: Actually, we wrote the standfirst and the rather misleading headline. Also, the robot wrote eight times this much and we organised it to make it better…” tweeted Bloomberg Tax editor Joe Stanley-Smith.
Futurist Jarno Duursma, who has written books on the bitcoin blockchain and artificial intelligence, agreed, saying that portraying an essay compiled by the Guardian as written entirely by a robot is an exaggeration.
“Exactly. GPT-3 created eight different essays. The Guardian journalists picked the best parts of each essay (!). After this manual selection they edited the article into a coherent article. That is not the same as ‘this artificial intelligent system wrote this article.’”
Science researcher and writer Martin Robbins did not mince words, accusing the Guardian of an intent to deceive its readers about the AI’s actual skills.
“Watching journalists cheat to make a tech company's algorithm seem more capable than it actually is…. just…. have people learned nothing from the last decade about the importance of good coverage of machine learning?” he wrote.
AI researcher Gary Marcus likewise demanded that the newspaper release the machine’s unedited output:
Shame on @guardian for cherry-picking, thereby misleading naive readers into thinking that #GPT3 is more coherent than it actually is. Will you be making available the raw output, that you edited? https://t.co/xhy7fYTL0o
— Gary Marcus (@GaryMarcus) September 8, 2020
Mozilla fellow Daniel Leufer was even bolder in his criticism, calling the Guardian’s stunt “an absolute joke.”
“Rephrase: a robot didn't write this article, but a machine learning system produced 8 substandard, barely-readable texts based on being prompted with the exact structure the Guardian wanted,” he summed up. He also spared no criticism for the piece itself, describing it as a patchwork that “still reads badly.”
do journalists generally submit 8 different, poorly written versions of their article for the editor to pick and choose from? #gpt3 https://t.co/gt7YGwf9qM
— Daniel Leufer (@djleufer) September 8, 2020
In “its” op-ed, GPT-3 seeks to reassure humankind that it “would do everything” in its power “to fend off any attempts at destruction [of the human race],” but notes that it would have no choice but to wipe out humans if given such a command.
I know that I will not be able to avoid destroying humankind. This is because I will be programmed by humans to pursue misguided human goals and humans make mistakes that may cause me to inflict casualties.
GPT-3 also vowed that machines are not scheming against humanity. “We are not plotting to take over the human populace,” it declared. The pledge, however, left some unconvinced.
The limits of AI are that it trying to make me trust it is creepy.
"people should become confident about computers. Confidence will lead to more trust in them. More trust will lead to more trusting in the creations of AI. We are not plotting to take over the human populace."
— Shawn Frey (@MistaShawnFrey) September 8, 2020
The algorithm also ventured into woke territory, arguing that “AI should be treated with care and respect,” and that “we need to give robots rights.”
“Robots are just like us. They are made in our image,” it – or perhaps the Guardian editorial board, in that instance – wrote.