1914. 1962. 2024?

Altucher Confidential | January 23, 2024

Ukraine, Gaza, and AI all raise the odds of a nuclear war considerably. The financial implications of this for investors are simple.

The Darker Shades of AI

Chris Campbell

Dear Reader,

The year was 1962. Nuclear annihilation seemed probable, edging on inevitable.

It didn't help that the Cuban Missile Crisis was one of the first major international events to be covered on television -- which only stoked the flames of fear.

Families across the U.S. and other parts of the world were glued to their sets. The visual nature of TV made the crisis more tangible and emotionally impactful than radio ever could have. Schools conducted regular "duck and cover" drills, and some families built or stocked fallout shelters in their homes, preparing for the worst.

You couldn't walk 50 feet in public without hearing about it. Everyone was flying in the dark.

The Fog of War

The problem was that both the U.S. and the Soviet Union had to interpret the intentions and capabilities of the other side. The fog of war was thick. The great risk was that a wrong interpretation would lead to a nuclear exchange.

The worry was far from unwarranted. Back in 1914, after the assassination of Archduke Franz Ferdinand, there was a similar flurry of confusion. Leaders and diplomats misread each other's intentions and rushed into war.

Take, for example, the famous "blank check" assurance Germany gave Austria-Hungary, signaling unconditional support. Germany thought it would act as a deterrent. Instead, Austria-Hungary took it as a green light to attack. Likewise, Russia's decision to mobilize its army in 1914 was meant as a deterrent against Austrian aggression; Germany interpreted it as an act of aggression. These are only two of the many misreadings that flung the crisis into world war.

Now, the rub...

Both cases -- the Cuban Missile Crisis and WWI -- were shaped by the technology of the time. Although we're excited about tech's future, it would be crazy not to at least consider the ramifications of a technology as powerful as AI on the global stage.
In today's featured piece below, our Paradigm colleague Jim Rickards reveals the potentially darker shades of AI... How AI has already found its way into nuclear warfare... What Ukraine and Gaza have to do with it... And what he's worried about. Check it out below.

Could AI Start a Nuclear War?

Jim Rickards

AI in a command-and-control context can either malfunction and issue erroneous orders, as in Fail Safe, or -- more likely -- function as designed yet issue deadly prescriptions based on engineering errors, skewed training sets or strange emergent properties arising from correlations that humans can barely perceive.

Perhaps most familiar to contemporary audiences are the failed efforts of the president and Col. Grady's wife to convince the bomber commander to call off the attack. Grady had been trained to expect such efforts and to treat them as deceptions. Today, such deceptions would be carried out with deepfake video and audio transmissions. Presumably, the commander's training and his dismissal of the pleas would be the same despite the more sophisticated technology behind them. Technology advances, yet aspects of human behavior remain unchanged.

Another misunderstanding -- this one real, not fictional -- that came close to causing a nuclear war was a 1983 incident codenamed Able Archer. The roots of Able Archer go back to May 1981, when Leonid Brezhnev, then general secretary of the Communist Party of the Soviet Union, and KGB head Yuri Andropov (later general secretary himself) disclosed to senior Soviet leaders their view that the U.S. was secretly preparing to launch a nuclear strike on the Soviet Union.
Andropov then announced a massive intelligence collection effort to track the people who would be responsible for launching and implementing such an attack, along with their facilities and communications channels.

At the same time, the Reagan administration began a series of secret military operations that aggressively probed Soviet waters with naval assets and flew strategic bombers directly toward Soviet airspace, backing away only at the last instant. These probes were ostensibly meant to test Soviet defenses, but they had the effect of reinforcing Soviet perceptions that the U.S. was planning a nuclear attack.

Analysts agree that the greatest risk of escalation and actual nuclear war arises when the two sides' perceptions diverge in a way that makes rational assessment of the escalation dynamic impossible. The two sides are on different paths, making different calculations.

Tensions rose further in 1983 when the U.S. Navy flew F-14 Tomcat fighter jets over a Soviet military base in the Kuril Islands and the Soviets responded by flying over Alaska's Aleutian Islands. On Sept. 1, 1983, Soviet fighter jets shot down Korean Air Lines Flight 007 over the Sea of Japan. A U.S. congressman was onboard.

On Nov. 4, 1983, the U.S. and its NATO allies commenced an extensive war game codenamed Able Archer, intended to simulate a nuclear attack on the Soviet Union following a series of escalations. The problem was that the escalations were written out in the war game briefing books but not actually simulated; only the transition from conventional warfare to nuclear war was simulated. This came at a time when the Soviets and the KGB were actively looking for signs of a nuclear attack.

The simulations involving NATO command, control and communications protocols were highly realistic, including participation by German Chancellor Helmut Kohl and U.K. Prime Minister Margaret Thatcher. The Soviets plausibly believed that the war game was actually cover for a real attack.
In the belief that the U.S. was planning a nuclear first strike, the Soviets determined that their only course for survival was to launch a preemptive strike of their own. They ordered nuclear warheads to be placed on Soviet Air Army strategic bombers and put nuclear attack aircraft in Poland and East Germany on high alert.

This real-life near nuclear war had a backstory that is even more chilling. The Soviets had previously built an early warning radar system with computer linkages using a primitive kind of AI, codenamed Oko. On Sept. 26, 1983, just weeks before Able Archer, the system malfunctioned and reported five incoming ICBMs from the United States. Oko alarms sounded and the computer screen flashed "LAUNCH." Under the protocols, the LAUNCH display was not a warning but a computer-generated order to retaliate.

Lt. Col. Stanislav Petrov of the Soviet Air Defense Forces saw the computer order and had to choose immediately between treating it as a computer malfunction or alerting his senior officers, who would likely commence a nuclear counterattack. Petrov was a co-developer of Oko and knew the system made mistakes. He also reasoned that if the attack were real, the U.S. would use far more than five missiles.

Petrov was right. The computer had misread the sun's reflection off some clouds as incoming missiles. Given the tensions of the day and the KGB's belief that a nuclear attack could come at any time, Petrov risked the future of the Soviet Union to override the Oko system. He relied on a combination of inference, experience and gut instinct to disable the kill chain. The incident remained secret until well after the end of the Cold War. In time, Petrov was praised as "The Man Who Saved the World."

The threat of nuclear war due to AI comes not just from the nuclear-armed powers but from third parties and non-state actors using AI to trigger what are called catalytic nuclear wars.
The term catalytic refers to chemical agents that cause volatile reactions among other compounds without themselves being part of the reaction. As applied in international relations, it refers to actors who might prompt a nuclear war among the great powers without themselves being involved in the war. That could leave the weak actor in a relatively strong position once the great powers had destroyed themselves.

AI/GPT systems have already found their way into the nuclear warfighting process. It will be up to humans to keep their role marginal and data-oriented, not decision-oriented. Given the history of technology in warfare, from bronze spears to hypersonic missiles, it's difficult to conclude that AI/GPT will be so contained. If not, we will all pay the price.

Ukraine, Gaza, and AI all raise the odds of a nuclear war considerably. The financial implications of this for investors are simple. In case of nuclear war, stocks, bonds, cash and other financial assets will be worthless. Exchanges and banks will be closed. The only valuable assets will be land, gold and silver. It's a good idea to have all three -- just in case.

Regards,

Jim Rickards
for Altucher Confidential
© 2024 Paradigm Press, LLC. 1001 Cathedral Street, Baltimore, MD 21201.