Why privacy-busting, law-breaking GCHQ’s pledges to protect the public using artificial intelligence should raise an eyebrow
The UK’s signals intelligence agency isn’t known for its commitment to the rule of law, so claims that its new Artificial Intelligence capabilities will be used to safeguard citizens, not spy on them, shouldn’t be taken at face value.
On February 24, GCHQ issued a report – Pioneering a New National Security – outlining how it intends to use Artificial Intelligence (AI) to tackle child sex abuse, drugs, weapons and human trafficking, and online disinformation.
Mainstream media outlets widely reiterated the paper’s headline claims without criticism or balance. The BBC went so far as to suggest the release reflected GCHQ’s benevolent intentions and commitment to transparency.
News organizations particularly focused on the agency’s pledge to use AI to prevent online grooming, track potential predators, identify sources of child pornography, and help law enforcement identify and infiltrate pedophile rings. That no alarm was raised about this particular commitment is understandable – after all, apart from pedophiles themselves, who wouldn’t welcome child sex abuse being fought by every available means?
However, GCHQ’s history doesn’t make it an obvious candidate to lead child-protection efforts. For instance, in June 2020 journalist Matt Kennard exposed how the agency has gained illicit access to at least 22,000 primary and secondary school children in dozens of UK schools – some of them as young as four – via its CyberFirst program, often without their parents’ knowledge or consent.
The ostensible purpose of CyberFirst is to help pupils “experience new ways of learning in an innovative cyber environment,” and to identify young IT talents early, so they can be pointed in the direction of “opportunities...within the cyber security and computing industry.”
This suggests children enrolled in the program are spied on and assessed by GCHQ’s talent spotters both on- and off-site, with the most promising targeted for recruitment. Kennard found the agency’s officers were directly operating in at least one school, running a “code club” for students. It’s clear CyberFirst serves to propagandize children too, extolling the agency’s virtues.
“GCHQ has been at the heart of the nation’s security for 100 years. It has saved countless lives, given Britain an edge, and solved or harnessed some of the world’s hardest technology challenges,” a slide from a lesson plan for 11- to 12-year-olds states.
Unsurprisingly, no mention can be found in the presentation of the European Court of Human Rights’ 2018 ruling that GCHQ’s mass surveillance programs were unlawful and violated citizens’ rights to privacy and freedom of expression.
That damning judgment isn’t the only reason to view GCHQ’s promise to use AI to combat “disinformation” with intense suspicion, if not outright trepidation.
In the section dealing with disinformation, the paper alleges that “a growing number of states are using AI-enabled tools and techniques to pursue political ends.” As a result, the agency proposes to deploy tools for “machine-assisted fact checking through validation against trusted sources,” disrupting “troll farms,” and identifying and neutralizing sources of particularly malicious content.
“AI analysis can be used to target individual user profiles with tailored information to enable personalised political targeting. It can also be used to inject fake personas into debate through the use of AI chatbots,” GCHQ asserts. “AI has also been known to be deployed to manipulate information availability through interference with content curation algorithms.”
Scary-sounding stuff, although academic Samuel Woolley, who has conducted extensive analysis of alleged “computational propaganda” campaigns, begs to differ. He found the “vast majority” of automated accounts identified to date don’t harness AI in any way, and are in fact “very simple,” generating mere “spam and noise” and repetitively posting specific articles.
There are many legitimate reasons to find the rise of AI disquieting, although for the time being at least, such fears remain rooted in theory rather than reality. A team of Stanford researchers who track the technology’s development concluded that, despite “truly amazing strides,” “computers still can’t exhibit the common sense or the general intelligence of even a five-year-old.”
Fittingly, DeepText, an AI tool launched by Facebook to identify hate speech on its platform, amply underlines AI’s current impotence. In 2017, the social media giant made much of DeepText’s efficacy, claiming it helped in the removal of over 60,000 posts a week – although it also acknowledged that a sizable number of human moderators were still required to vet the tool’s work and determine whether content it had flagged was “hateful” or not.
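To see why those human moderators remain indispensable, consider a minimal sketch of the flag-then-review pattern described above. It is purely illustrative and assumes a generic classifier – DeepText’s internals are not public, and every name, threshold and scoring rule below is a placeholder rather than Facebook’s actual system.

```python
# Hypothetical sketch of a flag-then-review moderation pipeline.
# DeepText's internals are not public; this classifier is a stand-in.

from dataclasses import dataclass


@dataclass
class Post:
    post_id: int
    text: str


def classify_hate_speech(text: str) -> float:
    """Stand-in for an ML classifier: returns a rough probability that
    the text is hateful. A real system would call a trained model."""
    flagged_terms = {"hateful_term_a", "hateful_term_b"}  # placeholder vocabulary
    words = set(text.lower().split())
    return 0.9 if words & flagged_terms else 0.1


def triage(posts, threshold=0.7):
    """Posts scoring above the threshold are queued for human review;
    the model alone never makes the final call on what is 'hateful'."""
    return [p for p in posts if classify_hate_speech(p.text) >= threshold]


if __name__ == "__main__":
    sample = [
        Post(1, "perfectly ordinary comment"),
        Post(2, "contains hateful_term_a"),
    ]
    for post in triage(sample):
        print(f"Post {post.post_id} sent to human moderators for review")
```

The design choice is the point: the automated stage only narrows the pile, and a person still has to judge each flagged item – which is exactly why Facebook continued to need large numbers of human moderators.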
It seems likely AI would be similarly ineffective, if not more so, at rooting out disinformation, contrary to the claims of GCHQ’s paper. However, that particular passage’s reference to “validation against trusted sources” is illuminating – in other words, content and viewpoints targeted for suppression will be those falling outside the mainstream spectrum of ‘acceptable’ facts and opinion.
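To illustrate the point, here is a minimal, hypothetical sketch of what “machine-assisted fact checking through validation against trusted sources” could amount to in practice: a whitelist check. GCHQ’s paper describes no implementation, so the source list, function name and matching logic below are assumptions for illustration only.

```python
# Minimal, hypothetical sketch of "validation against trusted sources":
# a claim is treated as credible only if an approved outlet corroborates it.

TRUSTED_SOURCES = {"bbc.co.uk", "gov.uk", "reuters.com"}  # assumed whitelist


def is_validated(claim_urls):
    """A claim 'passes' only if at least one cited URL belongs to the whitelist."""
    for url in claim_urls:
        domain = url.split("//")[-1].split("/")[0]
        if domain.startswith("www."):
            domain = domain[4:]
        if domain in TRUSTED_SOURCES:
            return True
    return False


# Anything sourced solely from outside the approved list fails validation,
# regardless of whether it is actually true.
print(is_validated(["https://www.reuters.com/article/xyz"]))    # True
print(is_validated(["https://independent-blog.example/post"]))  # False
```

Under such a scheme, “false” is effectively defined as “not corroborated by the approved list,” which is precisely why the phrase deserves scrutiny.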
GCHQ seems to have already engaged in such efforts. In December 2020, the agency – along with the Cabinet Office Rapid Response Unit and British Army psyops unit 77th Brigade – was enlisted in an operation to battle “online propaganda” relating to coronavirus.
In an eerie coincidence, the operation’s official launch followed mere days after Prime Minister Boris Johnson announced a significant easing of lockdown restrictions over Christmas, a reckless folly widely condemned by the scientific community and UK citizens alike, many of whom took to social media to voice their outrage.
There are troubling indications 77th Brigade maintains a vast army of real, fake and automated social media accounts to disseminate pro-government messages and discredit any and all critics of Whitehall – GCHQ could certainly help support such efforts in a variety of ways.
“[Manipulating] information availability through interference with content curation algorithms,” for instance.
If so, GCHQ may have conducted operations targeting British citizens, and broken the law in the process. Government spokespeople deny the Brigade conducts domestic operations, insisting its clandestine capabilities are “not being and have never been targeted against British citizens,” but a public statement issued by the UK Army Secretariat in June 2020 suggests the reverse may be true.
“As a UK Government unit, they have two primary audiences – government departments and British citizens, as well as anyone else seeking reliable information online,” the Secretariat stated.
GCHQ working hand-in-glove with the aforementioned Rapid Response Unit is notable, too, given that its founder and chief, Alexander Aiken – who serves in 77th Brigade – authored a since-deleted article for the government’s website in July 2018 acknowledging “alternative news sources” were one of the unit’s key targets.
As such, GCHQ’s Artificial Intelligence push may stem from a need to justify its own desire to exploit “AI-enabled tools and techniques to pursue political ends.” Security services framing their quest for malign capabilities as a “response” to similar developments elsewhere is an old chestnut. The January 2015 launch of 77th Brigade was officially framed as a counter to the alleged online propaganda efforts of Russia and Islamic State.
In April 2020, elite UK defense ‘think tank’ RUSI issued a report alleging the country was under significant threat from foreign-borne campaigns involving ‘deepfake’ technology, among other AI innovations, and urging Whitehall to invest heavily in the field. The research was commissioned and funded by none other than GCHQ.
Follow the money, and this has all the makings of a self-fulfilling prophecy – one that anyone concerned about online privacy in Britain should be deeply worried by.
The statements, views and opinions expressed in this column are solely those of the author and do not necessarily represent those of RT.