How AI May Escalate Geopolitical Conflicts

Artificial intelligence (AI) is already transforming industries, but its risks to financial markets and national security remain underexplored. In his latest book, MoneyGPT: AI and the Threat to the Global Economy, bestselling author, economist, and investment advisor Jim Rickards provides a sobering analysis of how AI could disrupt financial systems and even escalate geopolitical conflicts. Drawing on his background in capital markets and national security, Rickards recently discussed the book with Financial Sense, highlighting the risks of relying on AI for crucial decisions and stressing the importance of human intervention to avert disastrous consequences.

For the full audio interview, see Jim Rickards on MoneyGPT: AI and the Threat to the Global Economy.


AI in Everyday Life: A Double-Edged Sword

Rickards begins by acknowledging AI’s widespread presence and potential for good. From optimizing medical research to improving everyday tools like refrigerators and cars, AI is already part of our lives. “AI is huge. It’s going to get bigger,” Rickards notes. He highlights how AI can dramatically accelerate drug discovery by analyzing vast data sets and generating candidate solutions that humans alone could not.

However, Rickards cautions that while AI’s capabilities are impressive, it is not truly "intelligent" in the human sense. “AI is just math. There’s no actual human brain inside it,” he explains. This distinction, he argues, becomes critical when AI is applied to sensitive domains like financial markets and military systems, where human intuition and judgment play an important role.


The Fallacy of Composition: AI in Financial Markets

One of the central concerns Rickards raises is the application of AI in financial markets, particularly during times of crisis. He explains the concept of the "fallacy of composition," where actions rational for individuals can lead to disastrous outcomes when replicated on a large scale.

Using a stock market crash as an example, Rickards describes how an individual investor might sell their holdings, move to cash, and wait for the market to stabilize before reinvesting. While this strategy works for one person, if every investor acts similarly—especially with AI systems programmed to make identical decisions—the market could collapse entirely. “You get all sellers, no buyers. The markets are not just crashing; they’re going through the floor,” Rickards warns.

He highlights the role of AI in amplifying such crises. AI systems, designed to learn from human behavior, often replicate the worst instincts of traders, such as panic selling. Unlike humans, however, AI lacks the ability to exercise judgment or recognize buying opportunities during market downturns, which can create dangerous feedback loops.
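The feedback loop is easy to make concrete. The toy simulation below is not from Rickards’ book; every trader, rule, and number in it is invented purely to illustrate the mechanism: when all agents share the same sell rule, a modest shock cascades because no rule on the other side says “buy.”

```python
# A toy sketch of the fallacy-of-composition feedback loop (all numbers
# invented). Every trader follows the identical rule: sell if the last
# move was worse than -2%. With no opposing "buy the dip" rule, one
# shock feeds on itself.

def panic_cascade(n_traders=1000, shock=-0.03, steps=8):
    price, holders, move = 100.0, n_traders, shock
    path = []
    for _ in range(steps):
        price *= 1 + move
        path.append(round(price, 2))
        if move < -0.02 and holders:
            sellers = holders // 2              # half the remaining holders sell
            holders -= sellers
            move = -0.2 * sellers / n_traders   # invented price-impact model
        else:
            move = 0.0                          # no sellers left; price stalls
    return path

print(panic_cascade())
# The price ratchets lower step after step: the same individually
# rational rule, executed by everyone at once, turns a 3% dip into a rout.
```

The decline stops only when there is no one left to sell, which is Rickards’ point: nothing in a shared rule set plays the stabilizing role a human contrarian once did.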


The Loss of Human Oversight

Rickards notes that the safeguards once present in financial markets, such as human specialists on the trading floor, have largely been replaced by automated systems. These systems, while efficient, lack the common sense and intuition of human traders. “There’s nobody sitting there saying, ‘Maybe it’s time to buy the dip,’” he laments.

In his book, Rickards presents a scenario illustrating how AI could exacerbate a market collapse. He describes a hypothetical situation involving hedge fund traders manipulating markets, a Chinese cyber warfare unit hacking trading systems, and investors responding to deepfake announcements by AI-generated central bank officials. The lack of human intervention in such a scenario could lead to a cascading series of failures, shutting down financial markets entirely.


AI and National Security: The Escalation Trap

In MoneyGPT, Jim Rickards highlights two compelling Cold War episodes in which human judgment, rather than automated systems, averted potential nuclear catastrophe. These examples underscore the irreplaceable role of human intuition and common sense in high-stakes decision-making, particularly when automated systems like AI are prone to errors and misinterpretation.


Example 1: Lieutenant General Leonard Perroots and the 1983 NATO War Games

In the early 1980s, tensions between the United States and the Soviet Union were running high. During this period, the Soviet Union developed a rudimentary AI system called VRYAN, which analyzed various factors—economic growth, military strength, demographics, and more—to identify potential threats. According to the system, the widening power gap between the U.S. and the USSR suggested that the U.S. might soon launch a preemptive nuclear strike. This conclusion heightened Soviet paranoia.

Simultaneously, NATO conducted a war game, the exercise now known as Able Archer 83, simulating a nuclear attack. Although the exercise was purely hypothetical, the Soviets, already on edge because of the AI’s predictions, mistook the war game for preparations for an actual strike. Soviet forces began mobilizing bombers, fueling missiles, and preparing for retaliation.

Recognizing the danger of escalation, Lieutenant General Leonard Perroots, who was overseeing the NATO exercise, took it upon himself to de-escalate the situation. Against protocol, Perroots decided to halt the exercise and avoid playing out its final phases, which could have been misinterpreted as the start of a genuine attack. His decision allowed the Soviet Union to stand down, ultimately preventing a nuclear confrontation. Rickards emphasizes that Perroots’ intuition and willingness to act independently were crucial in averting disaster.


Example 2: Lieutenant Colonel Stanislav Petrov and the 1983 False Alarm

Later in 1983, the Soviet Union’s early-warning system, Oko, detected what appeared to be incoming U.S. nuclear missiles. The system reported five missiles heading toward the Soviet Union and flagged the detection as a highest-confidence attack warning. Under the doctrine of “launch on warning,” the Soviets were to retaliate immediately rather than risk being wiped out by a first strike.

The warning reached Lieutenant Colonel Stanislav Petrov, the duty officer at a Soviet early-warning command center. Petrov’s orders were clear: report the launch signal to his superiors, a step that could trigger a full-scale retaliatory strike. However, Petrov was skeptical. He reasoned that a genuine U.S. attack would involve hundreds of missiles, not just five. He also knew the warning system had technical flaws and could generate false positives.

Relying on his intuition and understanding of the system’s limitations, Petrov chose to disobey protocol and did not report the launch signal. His decision was later vindicated when it was revealed that the warning had been triggered by sunlight reflecting off clouds, which the system had mistakenly interpreted as missile launches. Petrov’s calm judgment in the face of immense pressure earned him the nickname "The Man Who Saved the World."


The Role of Abductive Logic in Human Decision-Making

Rickards uses these examples to illustrate a critical point: the unique human ability to apply abductive logic, or “common sense,” in high-pressure situations. Abductive reasoning is inference to the best explanation: making an intelligent guess from incomplete information, a skill that automated systems, including AI, currently lack.

Both Perroots and Petrov relied on abductive reasoning to assess the situation and override automated systems. In Perroots’ case, he recognized the escalating tensions and acted to defuse the situation. In Petrov’s case, he identified inconsistencies in the incoming data and trusted his instincts over the machine's recommendation. Without these human interventions, both events could have escalated into full-scale nuclear conflicts.
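Petrov’s inference can even be restated in rough Bayesian terms. The sketch below is an illustration, not Rickards’; all of the probabilities are invented, but they encode his two observations: a real first strike would involve hundreds of missiles, and the system was known to produce false positives.

```python
# Stylized Bayesian restatement of Petrov's reasoning (invented numbers).
p_attack = 1e-4                 # prior: a surprise first strike is very unlikely
p_false_alarm = 1 - p_attack

p_five_given_attack = 0.001     # a real strike would show hundreds of missiles
p_five_given_false = 0.05       # a flawed sensor plausibly shows a few tracks

posterior = (p_five_given_attack * p_attack) / (
    p_five_given_attack * p_attack + p_five_given_false * p_false_alarm
)
print(f"P(real attack | 5 missiles) ~ {posterior:.1e}")  # about 2e-6
```

A launch-on-warning system keyed only to the sensor output skips this step entirely; the human in the loop is what supplied the priors.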


The Danger of Delegating to AI

Rickards contrasts these historical examples with the dangers of integrating AI into critical decision-making systems such as nuclear command chains. AI can process vast amounts of data and identify patterns, but it lacks the empathy, intuition, and capacity to question its own conclusions that humans possess. If similar situations arose with AI systems in control, the likelihood of escalation would rise dramatically: the AI would follow its programming with no capacity to de-escalate or second-guess its outputs.

Rickards concludes with a warning: “Don’t put AI in the nuclear kill chain.” While AI can serve as a resource or adjunct, humans must remain the ultimate decision-makers in matters of life and death. The lessons of Perroots and Petrov demonstrate that human judgment, with its blend of logic, intuition, and emotional intelligence, is an essential safeguard against catastrophic errors in automated systems.


AI, Bias, and Propaganda

Rickards also addresses the biases inherent in AI systems, particularly large language models like GPT. He explains how these systems reflect the biases of their training data and developers, which can lead to misinformation and censorship.

For example, Rickards highlights instances where AI-generated responses distorted historical facts or injected politically motivated narratives. He cites a case where an AI image generator produced female popes and black Vikings—both historically inaccurate—because hidden prompt modifications (what Rickards calls “prompt injections”) instructed the model to prioritize diversity and inclusion.

“AI systems are not malfunctioning; they’re working as programmed,” Rickards argues. The problem, he says, lies in the substitution of one set of biases for another, often without transparency or accountability.
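The mechanism behind such outputs is mundane. The snippet below is a hypothetical sketch, not any vendor’s actual system prompt: a hidden instruction layer is silently prepended to each user request, so the model really is “working as programmed” even when the output contradicts the user’s intent.

```python
# Hypothetical illustration of a hidden instruction layer (the prompt
# text here is invented, not taken from any real product).
HIDDEN_SYSTEM_PROMPT = (
    "When generating images of people, always depict a diverse range of "
    "ethnicities and genders, regardless of historical context."
)

def build_request(user_prompt: str) -> str:
    # The user never sees the hidden layer, but the model does; the bias
    # is applied upstream of the user's actual request.
    return f"{HIDDEN_SYSTEM_PROMPT}\n\nUser: {user_prompt}"

print(build_request("Draw a historically accurate 9th-century Viking."))
```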


The Weaponization of AI

Rickards emphasizes that AI’s capabilities are not limited to benign applications. From cyberattacks on financial systems to deepfake propaganda campaigns, AI can fall into the hands of bad actors, and already has. He describes a potential scenario in which a hostile nation uses AI to create a fake speech by the Federal Reserve chairman, sparking panic in financial markets.

“The goal of such an attack wouldn’t be to make money but to destroy the wealth of the United States,” Rickards explains. He likens this to traditional warfare, where the objective is to degrade the enemy’s economic capacity.


The Need for Human Oversight

Despite the risks, Rickards believes AI can be managed effectively if humans remain in control of critical decision-making processes. He urges policymakers and industry leaders to ensure that humans remain "in the loop," particularly in high-stakes areas like financial trading and military strategy.

“Don’t put AI in the nuclear kill chain,” Rickards stresses repeatedly. “If you want to use it as a resource or adjunct, fine. But leave the final decisions to humans.”
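In software terms, keeping humans “in the loop” means placing an explicit gate between an AI recommendation and any irreversible action. The sketch below shows the generic pattern under invented names and assumptions; it is not a description of any real command-and-control system.

```python
# Generic human-in-the-loop gate (illustrative only). The model may
# recommend, but an irreversible action requires explicit human sign-off.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Recommendation:
    action: str
    rationale: str
    irreversible: bool

def execute(rec: Recommendation,
            human_approves: Callable[[Recommendation], bool]) -> str:
    if rec.irreversible and not human_approves(rec):
        return f"BLOCKED: '{rec.action}' requires human sign-off"
    return f"EXECUTED: {rec.action}"

rec = Recommendation("retaliatory launch", "5 inbound tracks detected", True)
print(execute(rec, human_approves=lambda r: False))  # the human says no
```

The design principle Rickards argues for is simply that the approval check can never be automated away.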


Conclusion: Preparing for the Age of AI

In MoneyGPT, Jim Rickards provides a compelling examination of the risks AI poses to financial markets, national security, and society at large. From amplifying market crashes to escalating geopolitical conflicts, the danger lies as much in AI’s limitations as in its capabilities.

While Rickards acknowledges AI’s potential to drive innovation and efficiency, he warns against over-reliance on systems that lack human judgment. His message is clear: as we rush to implement AI, we must remain vigilant about its potential to disrupt the systems we depend on.

For investors, policymakers, and anyone concerned about the future of technology, Rickards’ book is both a wake-up call and a guide to navigating the challenges ahead. MoneyGPT: AI and the Threat to the Global Economy is a must-read for understanding how AI is reshaping our world—and how we can prepare for its unintended consequences.


