The US military is testing an AI-driven precognition system designed to give decision makers the ability to accurately predict a crisis before it happens. If it sounds too good to be true, that's probably because it is.
As any Tom Cruise fan will tell you, claims of precognitive ability do not have happy endings. Steven Spielberg’s 2002 film, ‘Minority Report’, drove that point home in spades. But that was fiction – no, worse than fiction: Hollywood fiction, where screenwriters and directors conspire to manufacture viscerally pleasing narratives populated by picture-perfect characters who resolve some of the world’s most pressing problems in around two hours flat.
The Pentagon has been watching too many Hollywood films, it seems. Otherwise, how can one explain the attraction of something called the Global Information Dominance Experiments (or GIDE, a nod to the military’s proclivity for acronyms)?
According to press reports, US Northern Command has completed a series of tests of the GIDE system – a “combination of AI (artificial intelligence), cloud computing and sensors” that, according to General Glen VanHerck, the commander of both Northern Command and the North American Aerospace Defense Command, would allow US military commanders to predict events “days in advance.”
According to the article, “the machine learning-based system observes changes in raw, real-time data that hint at possible trouble.” In the example cited, the GIDE system, cued by satellite imagery depicting an adversary’s submarine preparing to leave port, would flag the potential deployment, alerting every military unit and commander with an interest in such things. GIDE, the article brags, would be able to accomplish this “in seconds.” Military analysts, by contrast, would take hours or even days to “comb through this information.”
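To make the claim concrete: stripped of the branding, this kind of “predictive” alerting is ordinary anomaly detection – compare incoming readings against a recent baseline and raise a flag when they deviate sharply. Below is a minimal, purely illustrative sketch of that idea. Nothing in it reflects GIDE’s actual internals, which are not public; the data, threshold, and function names are all invented.

```python
# Hypothetical sketch of baseline-deviation alerting -- the generic idea
# behind "observing changes in raw, real-time data that hint at trouble."
# This is NOT GIDE's design, which has not been made public.
from statistics import mean, stdev

def flag_anomalies(readings, window=7, z_threshold=3.0):
    """Flag readings that deviate sharply from a rolling baseline.

    readings: numeric sensor values over time (e.g., a made-up daily
    count of vehicles observed at a pier in satellite imagery).
    Returns a list of (index, value) pairs considered anomalous.
    """
    flagged = []
    for i in range(window, len(readings)):
        baseline = readings[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma == 0:  # flat baseline: any change at all is notable
            if readings[i] != mu:
                flagged.append((i, readings[i]))
        elif abs(readings[i] - mu) / sigma > z_threshold:
            flagged.append((i, readings[i]))
    return flagged

# Invented data: pier-side activity holds steady, then spikes -- the
# kind of change such a system would flag "in seconds."
activity = [4, 5, 4, 6, 5, 4, 5, 5, 4, 19]
print(flag_anomalies(activity))  # -> [(9, 19)]
```

The sketch also exposes the gap: a threshold rule can tell you that something changed, not what the change means. That interpretive judgment is precisely what the rest of this column argues still belongs to humans.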
General VanHerck is a believer. With the assistance of GIDE, he notes, the military will no longer be reactive when it comes to responding to global crises but will rather be able to assume a more proactive posture, nipping potential problems in the bud with a well-timed response. Civilian leadership will be better empowered as well, able to more effectively employ the tools of diplomacy to stop a crisis from becoming a conflict. According to the general, GIDE seeks “to leap forward our ability to maintain domain awareness, achieve information dominance and provide decision superiority in competition and crisis.”
Why I’m wary of ‘self-learning’ machines
As a former military intelligence analyst with no small amount of real-world experience, I have to confess to more than a little skepticism about the efficacy of a system like GIDE. I’m wary of “self-learning” machines, knowing all too well that they are all born of programs and algorithms written by humans.
Anytime I hear a general officer, or one of his civilian superiors, saying things like “integrated deterrence is about using the right mix of technology, operational concepts and capabilities, all woven together in a networked way that… is credible, flexible and formidable that will give any adversary pause, especially to think about attacking our homeland,” my inner alarm bells go off. A plethora of buzzwords and catchphrases is the biggest indicator that the speaker is manufacturing a politicized narrative as opposed to briefing ground truth. And I don’t need GIDE or its equivalent to make that observation.
Once upon a time the United States fielded one of the most capable intelligence organizations in the world. I know – I was part of it. We weren’t infallible – no organization ever is – but we were pretty damn good, especially when it came to the analysis aspect of our work. Laymen often conflate intelligence analysis with intelligence estimates, without understanding that the former is a pure analytical exercise, while the latter has been infected with politics. An analyst can assess that the sky is blue, and that once the sun goes down, it gets dark. But if a political leader has determined that the sky is, in fact, red, and that it will be light 24 hours a day, no amount of sound reporting will change his or her mind. That changes only when reality smacks said politician in the face, but by then it is too late, and the damage has been done.
A good intelligence analyst knows that it is not in his or her job description to tell the boss what they want to hear, but rather to present fact-based assessments that the boss then uses to make decisions. Because analysis is an imperfect art (note I didn’t use the term science – more on that later), a good analyst will provide their superiors with a range of assessments that support a similar range of options. At the end of the day, however, the buck stops with the boss. More often than not, the fact-based assessments briefed are inconvenient to the political objectives desired, and decisions are made which, in one form or another, end up blowing up in the boss’s face. And because most superiors lack the moral character and intestinal fortitude to admit they were wrong, blame for the failure is kicked downstairs, ending up on the analyst’s desk characterized as an “intelligence failure.”
‘Intelligence failure’ is always a ‘leadership failure’
The media has become very comfortable embracing the concept of an “intelligence failure,” because to instead accurately report that what really happened was a “leadership failure” would terminate the hand-in-glove relationship between the media and senior leaders, both uniformed and civilian, that currently drives the news cycle.
A system like GIDE is designed to shield the intelligence analyst from accountability, while giving the military and civilian leadership who are the ultimate decision makers a ready means of deflecting blame (“We had a glitch in the AI functions of GIDE,” I can hear General VanHerck telling the press, after some future hypothetical military misadventure triggered by a predictably inaccurate precognition produced not by humans, but by machines).
My resume contains sufficient analytical success stories for me to feel comfortable saying that the human potential for timely, accurate intelligence analysis exists. I was able to accurately predict the missile production cycle of a Soviet ICBM factory, my assessment that no Iraqi SCUDs were destroyed during Desert Storm was spot on, and I was a lonely voice when it came to objecting to the US government claims that Iraq continued to produce and possess weapons of mass destruction, to name three. All we need is for the humans who are the customers for this intelligence to trust in the capabilities of those producing these assessments.
Therein lies the rub. There was a time when an intelligence analyst was a true expert on the topic he or she was tasked with reporting on. When I was an inspector in the Soviet Union, I could rely on the wisdom and insights of intelligence analysts at the CIA who had spent their entire career examining a very specific fact set. For instance, I worked with an imagery analyst who had spent the better part of two decades looking at the Plesetsk missile test range from an “all source” perspective. There wasn’t much that escaped his attention, and he was able to predict – with deadly accuracy – future events in a way that the programmers of GIDE could only dream of.
This analyst didn’t use algorithms or digital databases. His brain was the computer, and his understanding of what was happening at Plesetsk was derived from the art of dissecting human nature as much as the science of assessing physical structures. He knew the personalities involved, and how they behaved before, during, and after a test launch event. He knew the difference between routine maintenance and unique site improvements. His service was invaluable, and yet, when the Cold War ended, he and the office he worked for (the Office of Imagery Assessments) were declared superfluous, deemed redundant in a system where other agencies carried out identical or similar tasks.
Any intelligence analyst worth his or her salt, however, would tell you that this very redundancy was the source of solid assessments, since it prevented a single school of thought from dominating and forced analysts to share the foundations of their assessments with other experts who were in a position to either corroborate the findings or offer valid reasons in opposition. Civilian leaders more worried about a fiscal bottom line than the quality of the intelligence underpinning national security eliminated what they termed a “duplication of effort,” consolidating analytical capability under a single roof, all in the name of cost effectiveness and system efficiency.
The other change that took place at the end of the Cold War was the elimination of the career expert – the person who could start a job seated at a given desk, and spend 20-30 years at that desk, promoted in place, looking at the same problem set the entire time. This kind of expertise was abandoned in favor of “career broadening” tours that had intelligence analysts rotating through divisions and departments to gain experience, while never becoming an expert in anything.
Incompetence by design
During the Gulf War, the CIA was able to provide truly expert analysis on a given target location, thanks to the quality of the imagery analysts then employed. By 1993, however, when the CIA sent its new cadre of imagery analysts to New York to brief the UN on suspected sites inside Iraq, the analysts all had less than six months’ experience, and were completely unfamiliar with anything other than the most rudimentary details about a given location.
While 9/11 created a window of opportunity for the terrorist hunters of the CIA (“targeteers”) to spend years following a target (‘Zero Dark Thirty’ provides the Hollywood take on that), the fact is that what passes for an intelligence organization in the US today is composed of heavily politicized, overly managed “professionals” incapable of providing the kind of focused, dead-on accurate analytical predictions of their Cold War predecessors. This built-in incompetence appears to be more by design than accident, which is why I am doubly skeptical of any so-called “artificial intelligence”-driven system like GIDE.
At the end of the day, accurate intelligence analysis is more about comprehending human nature than counting cars, or discerning other physical manifestations of human conduct. The best judge of human nature is another human. No computer can come close. I was willing to bet my life on that principle when I served in the military. It’s a shame General VanHerck and his ilk are not.
The fact that the United States is willing to subordinate the predictive intelligence requirements of our collective national security to a computer should be worrisome to every American. If it were in my power, I’d make watching ‘Minority Report’ required homework for everyone concerned with national security. If, after that, you still believe in the promise of farming precognition out to machines, then GIDE is the system for you.
Just count me out.
The statements, views and opinions expressed in this column are solely those of the author and do not necessarily represent those of RT.