Key Takeaways
1. AI Reshapes Global Power Dynamics
Whoever becomes the leader in this sphere will become the ruler of the world.
Technology as power. Artificial intelligence is a general-purpose technology, like electricity or the internal combustion engine, poised to trigger another industrial revolution. Historically, such technological shifts have profoundly altered the global balance of power, elevating nations that effectively harness new capabilities while diminishing those that lag. AI promises similar sweeping economic, social, and political changes, making it a central arena for international competition.
New metrics of power. Just as the industrial revolution shifted the measure of military power from men under arms to industrial capacity (coal, steel, oil), the AI revolution is changing the key metrics of national strength. In the age of AI, power increasingly hinges on control over critical inputs: data, computing hardware (compute), talent, and the institutions capable of translating these into practical applications. Nations leading in these areas gain significant advantages.
Geopolitical competition. The potential scale of change from AI has ignited a fierce geopolitical rivalry, most notably between the United States and China. Both nations are racing to capitalize on AI for national advantage, viewing leadership in this domain as crucial for shaping the twenty-first-century global order. This competition is complicated by the deep entanglement of their respective AI ecosystems, creating both opportunities and vulnerabilities.
2. Four Critical Battlegrounds for AI Dominance
Nations that lead in these four battlegrounds—data, compute, talent, and institutions—will have a major advantage in AI power.
Key inputs. Machine learning, the engine of recent AI progress, relies fundamentally on data, compute, algorithms, and human talent. While algorithms are widely available, the relative scarcity and control over data, compute, and talent are major factors determining national AI capacity. Additionally, effective institutions are essential to translate these raw inputs into tangible military, economic, and political power.
Data's value. Data has been called the "new oil," a critical resource for training machine learning systems. The explosion of data from internet-connected devices offers enormous opportunities, but data is not fungible like oil; its value is specific to its type. While China has a large population and less stringent data privacy regulations, major U.S. tech firms have global reach, and the value of data may shift as AI evolves toward techniques using less real-world data or synthetic data.
Compute and talent. Compute power is essential for training AI models, and control over specialized AI hardware (semiconductors) is a key battleground. The semiconductor supply chain is highly concentrated, with Taiwan playing an outsize role in leading-edge chip fabrication. Talent is also crucial, with the United States currently leading in attracting top AI researchers globally, although China is rapidly increasing its domestic talent pool and actively recruiting from abroad.
3. China Pioneers AI for Techno-Authoritarian Repression
AI is helping to enable this repression through tools such as face, voice, and gait recognition.
Xinjiang as a testbed. China is building an intrusive techno-authoritarian surveillance state, with Xinjiang serving as an extreme testbed for AI-enabled repression against the Uighur population. The government employs a dense network of surveillance cameras, biometric data collection (face, voice, DNA), and AI-powered platforms like the Integrated Joint Operations Platform (IJOP) to monitor, track, and predict citizens' behavior. This system enables mass arbitrary detention and systematic human rights abuses.
Digital sharp eyes. Across China, the government is expanding its surveillance architecture, building on projects like Skynet and Sharp Eyes. Hundreds of millions of cameras are being integrated with AI for facial recognition, often linked to other databases containing personal information. This aims to create an "omnipresent, fully networked, always working and fully controllable" surveillance system, though it remains fragmented and imperfect today.
Exporting the model. China is actively exporting its surveillance technology and governance model globally, particularly through the Belt and Road Initiative. Chinese companies like Huawei are selling "safe city" solutions to dozens of countries, often in exchange for data that helps refine their algorithms. More troublingly, China is exporting its laws and policies, influencing other nations to adopt elements of digital authoritarianism, threatening global freedoms.
4. AI Fuels a New Era of Disinformation
Large-scale synthesized disinformation is now possible.
AI-generated content. AI tools are transforming the information ecosystem, enabling the creation of highly realistic fake text, audio, and video (deepfakes). Language models like GPT-2 and GPT-3 can generate convincing fake news stories at scale, lowering the barrier to entry for malicious actors. While current deepfakes may have flaws, the technology is rapidly improving, making it harder for humans to distinguish real from fake.
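For illustration only, a bigram Markov chain shows the statistical core of text generation in miniature: sample each next word from the continuations observed in training text. GPT-2 and GPT-3 are vastly more capable, but the toy conveys how fluent-looking text can be synthesized cheaply at scale (the corpus below is invented for this sketch):

```python
import random
from collections import defaultdict

# Tiny invented corpus standing in for training text.
corpus = (
    "officials said the report was accurate . "
    "officials said the report was fabricated . "
    "the report was shared widely online . "
    "experts said the claim was false ."
).split()

# Bigram counts: which words were observed following which.
follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

def generate(start, n_words, seed=0):
    """Sample a word sequence by repeatedly picking an observed next word."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(n_words - 1):
        choices = follows.get(out[-1])
        if not choices:
            break
        out.append(rng.choice(choices))
    return " ".join(out)

print(generate("officials", 10))
```

The output is locally fluent because every adjacent word pair appeared in real text; large language models extend the same statistical idea to far longer contexts, which is why their fabricated stories read convincingly.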
Undermining truth. The proliferation of sophisticated synthetic media, combined with existing disinformation tactics like social media bots and troll farms, threatens to undermine public trust in authentic information. This "liar's dividend," where real content can be dismissed as fake, benefits authoritarians who thrive in a "post-truth" landscape where reality is whatever the powerful say it is.
Algorithmic control. Social media platforms use powerful, often opaque, algorithms to filter, promote, and recommend content to billions of users. These algorithms significantly influence public opinion and perceptions of reality. The rise of Chinese-owned platforms like TikTok, which are ultimately beholden to the Chinese Communist Party, poses a new challenge, risking censorship and propaganda on a global scale.
5. AI Systems Possess Powerful, Yet Brittle, Intelligence
Machine learning works, but may easily be broken.
Narrow and brittle. Despite achieving superhuman performance in specific tasks or games, AI systems today possess a narrow form of intelligence. They often struggle with "distributional shift," failing dramatically when faced with conditions even slightly different from their training data. Simple tricks, like Marines hiding under a cardboard box, can fool AI systems that lack a rich understanding of the world.
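Distributional shift can be sketched minimally with an invented nearest-centroid toy in NumPy (all data and numbers here are assumptions for illustration): the model is near-perfect on inputs drawn from its training conditions, then collapses to coin-flip accuracy when a constant offset shifts every input.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "classifier": nearest-centroid on one feature.
# Training data: class 0 clusters near x=0, class 1 near x=1.
train_x = np.concatenate([rng.normal(0.0, 0.1, 100), rng.normal(1.0, 0.1, 100)])
train_y = np.concatenate([np.zeros(100), np.ones(100)])
centroids = np.array([train_x[train_y == 0].mean(), train_x[train_y == 1].mean()])

def predict(x):
    # Assign each point to the nearest class centroid.
    return np.argmin(np.abs(x[:, None] - centroids[None, :]), axis=1)

# In-distribution test: same conditions as training.
test_x = np.concatenate([rng.normal(0.0, 0.1, 50), rng.normal(1.0, 0.1, 50)])
test_y = np.concatenate([np.zeros(50), np.ones(50)])
in_dist_acc = (predict(test_x) == test_y).mean()

# Shifted test: a constant offset (think: sensor recalibration, new terrain)
# moves every input by +1, so former class-0 points land on the class-1 centroid.
shifted_acc = (predict(test_x + 1.0) == test_y).mean()

print(f"in-distribution accuracy: {in_dist_acc:.2f}")  # near-perfect
print(f"shifted-input accuracy:   {shifted_acc:.2f}")  # roughly chance
```

The model never learned *why* class 0 sits near zero; it only memorized where the training data happened to fall, so a shift it never saw breaks it completely.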
Alchemy, not science. Many contemporary AI systems, particularly deep neural networks, function as "black boxes" whose internal logic is not fully understood even by their designers. This opacity makes it difficult to predict when and how they will fail. While researchers are working on "explainable AI," the sheer complexity of massive models may defy complete human comprehension.
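One common probing idea behind explainable-AI work can be sketched minimally (the model and weights below are invented for illustration): occlude one input at a time and watch how much the output moves. Large swings flag the inputs a black-box model leans on, without reading its internals.

```python
import numpy as np

# A tiny fixed linear "model" with made-up weights; real deep networks are
# the hard case, but the occlusion probe treats any model as a black box.
weights = np.array([2.0, -0.5, 0.0, 3.0])

def model(x):
    return float(x @ weights)

x = np.array([1.0, 1.0, 1.0, 1.0])
baseline = model(x)

# Occlusion probe: zero out one input at a time and record how far the
# output moves from the baseline prediction.
importance = []
for i in range(len(x)):
    occluded = x.copy()
    occluded[i] = 0.0
    importance.append(abs(model(occluded) - baseline))

print(importance)  # [2.0, 0.5, 0.0, 3.0] -> the fourth input matters most
```

Probes like this give partial insight, but for a model with billions of parameters and interacting features, such local explanations may never amount to the complete understanding the "alchemy" critique demands.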
Vulnerable to attack. AI systems are susceptible to novel security vulnerabilities that target their cognitive processes. "Adversarial examples" can subtly alter inputs to fool a trained model, even in the physical world (e.g., stickers on a stop sign). "Data poisoning" can inject vulnerabilities during training, creating hidden "backdoors" that attackers can later exploit. These attacks are difficult to defend against and pose a new class of national security risk.
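A hedged sketch of the adversarial-example idea, using a toy logistic-regression model in NumPy rather than the deep networks real attacks target (all data here is synthetic and invented for illustration): a small, structured nudge to every input feature, chosen from the model's own gradient, flips a correct prediction.

```python
import numpy as np

rng = np.random.default_rng(1)

# Train a tiny logistic-regression classifier by gradient descent.
n, d = 200, 16                      # 200 samples, 16 "pixels" each
w_true = rng.normal(size=d)
X = rng.normal(size=(n, d))
y = (X @ w_true > 0).astype(float)

w = np.zeros(d)
for _ in range(500):                # plain batch gradient descent
    p = 1 / (1 + np.exp(-X @ w))
    w -= 0.1 * (X.T @ (p - y)) / n

# Pick a correctly classified input.
correct = np.flatnonzero((X @ w > 0) == (y == 1))
x, label = X[correct[0]], y[correct[0]]

# Fast-gradient-sign-style perturbation: move every feature one small step
# in the direction that increases the loss.  For this linear model the
# gradient of the loss w.r.t. x is (p - label) * w, so we only need its sign.
grad_sign = np.sign((1 / (1 + np.exp(-x @ w)) - label) * w)
eps = 1.1 * abs(x @ w) / np.sum(np.abs(w))   # just enough to cross the boundary
x_adv = x + eps * grad_sign

clean_pred = float(x @ w > 0)
adv_pred = float(x_adv @ w > 0)
print("clean prediction:", clean_pred, "adversarial prediction:", adv_pred)
```

Against deep image classifiers the same gradient-sign trick works with perturbations small enough to be invisible to humans, which is what makes physical-world attacks like altered stop signs feasible.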
6. Military AI Adoption Faces Bureaucratic Hurdles
The DoD’s acquisition system prioritizes minimizing the risk of fraud and abuse, a worthy goal, but at the cost of sacrificing speed and agility.
Spinning in commercial tech. The bulk of AI innovation occurs outside traditional defense industries, forcing militaries to find ways to "spin in" commercial technologies. The U.S. Department of Defense has created new organizations, such as the Defense Innovation Unit (DIU) and the Joint Artificial Intelligence Center (JAIC), to bridge the gap between Silicon Valley and the Pentagon and accelerate AI adoption.
Bureaucratic quagmire. Despite these efforts, the DoD's cumbersome acquisition system remains a major obstacle. Projects like the Joint Enterprise Defense Infrastructure (JEDI) cloud contract faced years of delays due to protests and lawsuits, hindering the development of essential AI infrastructure. This system, designed for accountability, often stifles innovation and is "lethal" to small start-ups.
Scaling challenges. While new organizations have achieved success with small-scale prototypes (e.g., Project Maven, predictive maintenance), scaling these innovations across the vast DoD bureaucracy is difficult. Data stovepipes, outdated systems, and a culture focused on traditional platforms rather than digital capabilities slow progress. The military is in a race to reform its institutions faster than its adversaries.
7. AI Competition Risks a Dangerous Race to the Bottom
Military AI competition produces relentless pressure to stay ahead of adversaries and may well lead countries to cut corners on safety and rush to deploy insufficiently tested AI systems.
Pressure to move fast. The intense geopolitical competition in AI creates pressure on nations to rapidly develop and deploy AI systems. This can lead to a "race to the bottom" on safety, where countries may shortcut crucial test and evaluation processes to field capabilities faster than competitors.
Accident risk. Deploying insecure, unreliable, or insufficiently tested AI systems increases the risk of accidents. Brittle AI could fail catastrophically in complex military environments, potentially causing civilian casualties, fratricide, or unintended escalation in a crisis. Even well-functioning AI could introduce dangerous ambiguity if its actions are misinterpreted.
Not an arms race, but a dilemma. While current military AI spending doesn't constitute a traditional arms race, the competition creates a security dilemma. One nation's efforts to enhance its security through AI may decrease the security of others, prompting reciprocal actions. This dynamic can fuel dangerous behaviors, including accelerating development timelines at the expense of safety or integrating AI into sensitive areas like nuclear operations.
8. AI Could Fundamentally Alter Warfare
AI systems don’t think like humans. They think differently.
Cognitive transformation. AI will transform the cognitive aspects of warfare, enabling militaries to process vast amounts of information faster, improve situational awareness, and execute operations with greater precision and coordination. This will accelerate the tempo of conflict and make it harder for forces to hide.
Inhuman tactics. AI systems often achieve superhuman performance by employing tactics and strategies fundamentally different from human approaches. Examples include AI fighter pilots executing impossible maneuvers or AI game agents using unconventional strategies. Militaries that learn to effectively combine human and machine cognition, leveraging AI's alien intelligence, will gain significant advantages.
Beyond human control. In the long term, the increasing speed and complexity of AI-driven operations could push warfare beyond human cognitive control, potentially leading to a "battlefield singularity." While unlikely in the near term, a future where AI systems plan and execute combat with minimal human intervention raises concerns about controlling escalation and terminating conflicts.
9. Shaping AI's Future Requires Global Cooperation
It is vital that the future of AI is one that advances human freedom and global peace.
Contested governance. The use of AI is inherently political, and how it is governed reflects societal values. While democracies debate AI ethics and regulation through messy, transparent processes, authoritarian regimes like China impose top-down control, using AI to suppress freedoms and export their model globally.
Democratic unity needed. To counter the spread of digital authoritarianism, democracies must develop and promote alternative models for AI governance that protect human rights and civil liberties. This requires cooperation among democratic nations and between governments, tech companies, and civil society to establish norms and standards for responsible AI use.
Managing risks through cooperation. Despite geopolitical competition, nations must cooperate to mitigate the risks of military AI, such as accidents, unintended escalation, and threats to nuclear stability. Confidence-building measures, transparency about AI assurance processes, and dialogue on dangerous applications (like autonomous nuclear systems) are crucial steps to ensure AI competition remains safe and stable.
Review Summary
Four Battlegrounds explores AI's impact on military and geopolitical power through data, computing, talent, and institutions. Readers praise Scharre's expertise and balanced perspective, appreciating his insights on AI's potential and risks in warfare. The book examines US-China competition, ethical concerns, and policy implications. While some find certain sections repetitive or biased, most consider it an informative read on AI's role in national security. Criticisms include an overemphasis on China and occasional technical complexity, but overall, reviewers recommend it for understanding AI's future in global affairs.