
“Things are in the saddle and ride mankind.” – Ralph Waldo Emerson
This October 16 through 28 marks the 63rd anniversary of the Cuban Missile Crisis. For thirteen days, the world stood on the brink of nuclear war. On the sixth day of the standoff, a chilling novel titled Fail Safe was published. Its authors were Eugene Burdick, a political theorist and author, and Harvey Wheeler, a political scientist who had taught at Harvard, Johns Hopkins University, and Washington and Lee University. In the preface to their 1962 novel, they wrote:
“…there is substantial agreement among experts that an accidental war is possible and its probability increases with the increasing complexity of the man-machine components which make up our defense system. Hardly a week passes without some new warning of this danger by knowledgeable persons who take seriously their duty to warn and inform the people. In addition, all too often past crises have been revealed to us in which the world tottered on the brink of thermonuclear war, while SAC commanders ponder the true nature of unidentified flying objects on their radar screen.”
Fail Safe is a story of faulty technology and mechanical failure that results in a nuclear disaster. Mike Schmidt, of Bluefield Process Safety, clarifies what is meant by fail-safe. Most of us believe that fail-safe means “having no chance of failure, infallibly problem-free.” What it actually means is “incorporating some feature for automatically counteracting the effect of an anticipated possible source of failure.”
Artificial Intelligence has become integrated into our commercial and civilian activities. What is less well known or understood is the extent of AI’s introduction and application in military technologies, including both non-nuclear systems and nuclear command, control, and communications. The AI-enabled non-nuclear systems for conventional warfare are generally intertwined with nuclear capabilities.
I am among those who recall the “duck and cover” drills of the 1950s through the Cuban Missile Crisis. These drills, with the expectation that we would survive, had us cowering beneath our classroom desks with our hands over our heads. Another drill had us running into the hallway with our coats. We would face the wall, huddled beneath our coats. There were also the “bomb shelters” in our basements. One evening, I asked my father, who had gone to Hiroshima following Japan’s surrender, whether we would survive. He took me outside. Directing my attention to the glow in the sky above New York City, twenty-five miles away, he told me that a nuclear blast would create a fireball spreading destruction and radiation.
Those who have either listened to the tapes or read the transcripts of the Executive Committee of President Kennedy’s National Security Council during the Cuban Missile Crisis are aware of the conflicting ideas, the tensions between members, and the stress in the room as they debated a solution.
An ICBM (intercontinental ballistic missile) can carry a payload of up to ten nuclear warheads. There are also tactical nuclear weapons (TNWs), short- to medium-range systems carrying a single warhead. The type and radius of destruction (blast effects) depend on the explosive yield and on whether the burst is an air burst or a ground burst. From launch to striking its target, an ICBM can take as little as eighteen minutes; a hypersonic missile dramatically reduces that time. Fortunately, Kennedy had thirteen days rather than eighteen minutes to negotiate a settlement with Nikita Khrushchev and an equally divided Politburo over potential countermeasures. Cooler heads prevailed.
This past June, Vladislav Chernavskikh and Jules Palayer published a paper titled “Impact of Military Artificial Intelligence on Nuclear Escalation Risk.” They note the “deterioration of the international security environment over the past decade that has resulted in heightened concerns about the risk of nuclear war…” They also explain what is meant by nuclear escalation:
“Nuclear escalation can be defined as the intensification or expansion of a conventional conflict to the extent that it crosses what one or more parties perceives to be a critical threshold, ultimately culminating in the use of nuclear weapons. Typically, the literature differentiates between three kinds of escalation: (a) deliberate, when a state intends for escalation to occur; (b) inadvertent, when a state did not anticipate that its actions would lead to escalation, probably because its actions crossed a rival’s threshold; and (c) accidental, when escalation is the result of mistaken or unauthorized actions.”
The assessments for evaluating a crisis, once a critical threshold has been passed, depend on strategic stability, defined as the absence of incentives to launch a first nuclear strike. A political leader can be neither devoid of anxiety for the fate of humanity nor lacking an understanding of what war, be it conventional or nuclear, entails. Nor can political leadership afford to underestimate the intent of other leaders and the measures they will take to control natural resources, expand their national boundaries and spheres of influence, or avoid defeat in a conventional conflict. Additionally, rogue states and terrorist organizations could take the initiative and provoke a war between two nuclear powers for their own advantage.
Escalation is sometimes triggered by natural events and technological malfunctions that activate early warning systems. During the Cuban Missile Crisis, an exploding satellite led to the belief that the Russians had launched an ICBM attack. Twice, flocks of geese triggered the system, leading to the erroneous conclusion—quickly corrected by humans—that Soviet bombers were attacking. In 1983, sunlight reflecting off cloud tops set off the Soviet Union’s system, indicating that the United States had launched five missiles. In 1980, a faulty computer chip caused command center screens to display two incoming ICBMs; the number jumped to 200, and then the system indicated zero incoming Soviet missiles. These are but a few of the incidents that have been declassified and made public.
The military adoption of AI systems will have real-world consequences. We need to be aware that those systems have their limitations, as we are discovering in their civilian use. AI systems have design flaws, contain both accurate and inaccurate content, are subject to malfunctions and hallucinations, exhibit biases, have limited ability to evaluate an enemy’s countermeasures, and are vulnerable to cyberattacks. AI systems are not designed to differentiate between actual enemy bombers or missiles and what might be a flock of geese. The dangers include misidentifying targets and generating misleading data on success or failure rates. AI will accelerate military and political decision-making in a crisis, thus, as Chernavskikh and Palayer observe, “increasing the likelihood of misperception and overreaction.” They add that “AI systems come with significant limitations that can lead to critical failures.”
There are moral and ethical issues surrounding AI development and application in all aspects of civilian life, in weapons development, and in the conduct of war. Some would have us believe there is no code of conduct for war and no rules of engagement. That is a false perception. Human decision-making is a product of logic, experience, emotional intelligence, and social dynamics. Ethics and morality are the foundation of the social contract. We are responsible for our decisions and actions; AI does not remove that responsibility. These programs are incapable of understanding the social and personal dynamics of a crisis, or its implications, on which the survival of humanity and civilization hinges. Civilian and military leadership must consider how much say AI has in the decision-making process before continuing its widespread application. Human oversight is a necessity in all civilian and military applications of AI.
The question is whether we want to ride the technology or allow the technology to ride us. At present, we are rushing to apply AI to every aspect of our lives. That rush should be a warning that we are giving the wrong answer.
The Doomsday Clock is currently set at 89 seconds to midnight.
Photograph: This photograph, probably of a bomb dubbed “How,” was likely taken on June 5, 1952, as part of the Operation Tumbler-Snapper test series at the Nevada Proving Grounds. The picture was taken using a Rapatronic camera.
Source & Credit: https://www.atomicarchive.com/media/photographs/index.html