The Soviet False Alarm Incident and Able Archer 83
“
At the height of the Cold War, the Soviets designed an early-warning satellite system meant to track fast-moving threats and increase the chance of reprisal. On September 26, 1983, however, the system, code-named Oko, malfunctioned. At around midnight, Oko’s alarms rang out, alerting the base to one incoming nuclear missile. The screen read “LAUNCH,” which was not a warning but an automatic order to prepare for retaliation.
Believing that a U.S. intercontinental ballistic missile (ICBM) was incoming, the base went into a panic. Some officers on duty, however, were skeptical that the United States would choose to send only one ICBM, knowing that a single missile could not affect the Soviets’ counter-strike capability. Stanislav Petrov, an officer who had helped write the code for the early-warning software, also knew that Oko was prone to error. He reset the system, but the alarms persisted.
Rather than following protocol, which entailed alerting superiors up the chain of command, Petrov awaited corroborating evidence. No evidence came, and the alarms soon stopped. Petrov’s actions, or inaction, almost certainly averted a nuclear disaster.
Just 11 days later, NATO forces in Brussels took part in a joint military exercise that simulated a response to a hypothetical Soviet nuclear attack. The exercise was code-named Able Archer 83.
The primary purpose of the exercise was to test the command-and-control procedures for NATO’s nuclear forces in the event of a global crisis. Unlike previous wargames, however, Able Archer 83 featured new elements specifically meant to confuse and disorient the Soviets.
KGB observers alerted Moscow to the unusual activity, and paranoia set in. Working off dubious intelligence that a NATO offensive against the U.S.S.R. could be cloaked under the guise of a military exercise, the Soviets began preparations for a large-scale retaliation. Then-Soviet leader Yuri Andropov mobilized entire military divisions, transported nuclear weapons to their launch sites, and scrambled a fleet of bombers carrying nuclear warheads. Military command handed Andropov the nuclear briefcase, known in Russia as the “cheget.”
Leonard Perroots, a high-ranking intelligence officer for the U.S. Air Force stationed in Europe, observed that the Soviets were responding as though the exercise were real. In what the Foreign Intelligence Advisory Board has called a “fortuitous, if ill-informed” decision, Perroots did not reciprocate by raising Western alert levels. Instead, he waited. The Soviets eventually realized that the exercise was not a surprise attack and aborted their planned response.
”
To translate: the Soviet early-warning system was so bad at parsing imagery (crude, stone-age computer vision, yet undeniably both artificial and intelligent) that it mistook sunlight glinting off clouds for the trail of an ICBM.
And being human, and being stupid, are not preventable. Parsing text is difficult, and looking for semantic patterns across millions of social media posts may be difficult to do unsupervised, even with the best transformer, in-context reasoning, or state-space model. Nor could the best knowledge engineering in the world replace the awkward layers of technical debt (legacy code, OOP-ontology abstraction nightmares that even a postmodernist would spit at) or the need to scale to literal nations while planning ten years ahead.
After PyTorch went open source and NVIDIA shipped its new chip architectures, we had the computational boom to do in 10 lines of Python what took researchers at MIT and Stanford 25+ years of math to figure out. Some 95% of deepfakes on the queryable web are built with the same open-source engines. And it is not hard to make Sonnet or GPT write those lines for you. Okay, 20 lines then.
In the case of audio or video, the patterns remain fundamentally the same when the content is generated. While it is computationally expensive to keep detection running all the time, the big platforms can afford to do that. They can also afford to be so big that people give hour-long TED-style talks on them:
We could mathematically describe what the pickup in a guitar means to a deep neural net, in binary … (tried it in Python; thank you, Sukitha, for the suggestion), and it would be so perfect that it would be ruined. Digitally produced audio lacks the (un?)intended cries of the copper wires, in a way the model cannot conceive of. Asking it to do so, 24 years after Y2K, is not reasonable unless the fake is:
- In the hands of an expert, and most likely a group of experts
- Made by a group that is malicious, or organized by an adjacent actor
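The “too perfect” intuition above can be illustrated with a toy spectral check: a flawlessly synthesized tone concentrates nearly all of its energy in one frequency bin, while an analog-style signal drags a noise floor along with it. This is a minimal sketch, not a real deepfake detector; the `noise_floor_ratio` helper, the noise level, and the signals are all my illustrative assumptions.

```python
# Toy sketch, NOT a real deepfake detector. The point is the intuition from
# the text: a flawlessly synthesized tone is "too perfect", while an
# analog-style signal carries the messy residue of real hardware.
import numpy as np

def noise_floor_ratio(signal: np.ndarray) -> float:
    """Fraction of spectral energy lying outside the dominant frequency bin."""
    power = np.abs(np.fft.rfft(signal)) ** 2
    return float((power.sum() - power.max()) / power.sum())

sr = 16_000                                  # sample rate in Hz
t = np.arange(sr) / sr                       # one second of audio
digital = np.sin(2 * np.pi * 440 * t)        # "too perfect" synthesized A4
rng = np.random.default_rng(0)
analog = digital + 0.05 * rng.standard_normal(sr)  # hum and noise of copper wires

# The pure synthesis has an almost empty noise floor; the noisy take does not.
assert noise_floor_ratio(digital) < noise_floor_ratio(analog)
```

Real detectors look at far richer artifacts (phase, vocoder traces, frame boundaries), but the asymmetry is the same: perfection itself is a fingerprint.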
No one lives out this fear the way Taiwan does, which explains its swift advancements in cyber capability (even CSO-led): one CSO developed an anti-fake-news bot for WhatsApp that would go out and detect fake-news bots in large group chats. This would be comical if it did not involve the current geopolitical context.
In a year (2024) with the most elections of any (60+), we are worried about the fake, while reality is so much more comical that even the deepfake shrivels in horror. At present, most FAANG companies are quietly rolling back API access despite outcries from Mozilla and the EFF. Their problem, again, is the scale of defense, detection, and prevention they are expected to undertake while building AI logical enough to handle the entire chain of possible vulnerabilities. It is a tall ask, but not an impossible one. By inference, you then require experts battling each other on both sides. The conceptual framework for a piece of data we identify as generated is easier to build for audio and video and much harder for text, because of the temporality in signal processing. So the question comes down to these variables:
1) Velocity
2) Payload
3) Reach
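One hedged way to make those three variables concrete is a toy risk score. The class, the normalization to [0, 1], and the multiplicative form are all my assumptions for illustration, not a framework from the text:

```python
# Hypothetical sketch of the three variables above as a single risk score.
# The weights-free multiplicative form is an assumption for illustration.
from dataclasses import dataclass

@dataclass
class GeneratedContent:
    velocity: float  # shares per hour, normalized to [0, 1]
    payload: float   # estimated harm of the message itself, in [0, 1]
    reach: float     # fraction of the target audience exposed, in [0, 1]

    def risk_score(self) -> float:
        # Multiplicative: a harmless payload or zero reach nullifies the
        # risk, no matter how fast the content spreads.
        return self.velocity * self.payload * self.reach

viral_meme = GeneratedContent(velocity=0.9, payload=0.1, reach=0.8)
targeted_fake = GeneratedContent(velocity=0.3, payload=0.9, reach=0.6)

# A slow but harmful, well-targeted fake outranks a fast, harmless meme.
assert targeted_fake.risk_score() > viral_meme.risk_score()
```

The design choice worth arguing about is the multiplication: it encodes the claim that all three variables must be nonzero for content to matter, which is exactly the kind of assumption defenders and attackers would contest.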