Empire of AI

Dreams and Nightmares in Sam Altman's OpenAI
by Karen Hao · 2025 · 496 pages
4.17 (367 ratings)

Key Takeaways

1. OpenAI's Idealistic Founding Quickly Yielded to the Pursuit of Power and Profit.

Over the next four years, OpenAI became everything that it said it would not be.

Initial altruism. Founded as a nonprofit by figures including Elon Musk and Sam Altman, OpenAI launched with a $1 billion funding pledge to develop artificial general intelligence (AGI) for humanity's benefit, emphasizing openness, collaboration, and even self-sacrifice if another project surpassed it. The goal was to prevent AGI from being controlled by a single corporation such as Google.

Shift to commercialization. Financial pressures and internal power struggles, particularly after Musk's departure, led Altman to restructure OpenAI into a "capped-profit" entity. This allowed it to raise significant capital, notably a $1 billion investment from Microsoft, but fundamentally altered its trajectory towards aggressive commercialization and secrecy, prioritizing being first to AGI over its founding ideals.

Erosion of principles. The transition marked a clear departure from the original mission.

  • Transparency was replaced by secrecy.
  • Collaboration gave way to fierce competition.
  • The focus shifted from open research to building lucrative products like ChatGPT, seeking massive valuations.

This transformation highlighted that the project, despite its noble framing, was also driven by ego and the pursuit of dominance.

2. Relentless Scaling of AI Models Became OpenAI's Core Strategy, Driven by a Self-Fulfilling Prophecy.

OpenAI’s Law, or what the company would later replace with an even more fevered pursuit of so-called scaling laws, is exactly the same. It is not a natural phenomenon. It’s a self-fulfilling prophecy.

The scaling hypothesis. Inspired by the observation that AI performance improved with increased computational resources ("compute"), particularly after the 2012 ImageNet breakthrough, OpenAI leaders, especially Ilya Sutskever and Greg Brockman, theorized that scaling simple neural networks to unprecedented sizes was the fastest path to AGI. They noted that compute use in AI was growing faster than Moore's Law.
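
The "scaling law" claim referenced above is empirical: in published work from this era, test loss falls roughly as a power law as training compute grows. Below is a minimal illustrative sketch of that power-law shape; the constants are arbitrary placeholders, not any real fit from the book or from OpenAI.

```python
# Toy power-law scaling curve: loss = a * compute^(-b).
# The constants a and b are arbitrary placeholders for illustration only.

def scaling_law_loss(compute_flops: float, a: float = 1e3, b: float = 0.05) -> float:
    """Loss falls smoothly (but ever more slowly) as compute grows."""
    return a * compute_flops ** -b

for exponent in range(18, 27, 2):  # 1e18 ... 1e26 FLOPs
    compute = 10.0 ** exponent
    print(f"compute = 1e{exponent} FLOPs -> loss ~ {scaling_law_loss(compute):.1f}")
```

The self-fulfilling dynamic Hao describes follows from the curve's shape: each improvement appears to justify buying still more compute, which in turn keeps the curve going.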

The need for massive compute. This hypothesis dictated an insatiable demand for GPUs and supercomputers, far exceeding the resources available to a nonprofit.

  • Training GPT-3 required a supercomputer with 10,000 GPUs.
  • Future models like GPT-4 and beyond would need tens or hundreds of thousands.
  • The estimated cost for a future "Phase 5" supercomputer could reach $100 billion.

This escalating need for capital and infrastructure solidified the shift to a for-profit model and reliance on partners like Microsoft.

A strategic imperative. Scaling became not just a technical approach but a business strategy.

  • Being first or best required staying ahead on the scaling curve.
  • Falling behind meant losing influence over AGI development.

This belief in "scale above all" set the rules for the new era of AI, pushing the entire industry into a resource-intensive race, regardless of alternative approaches or potential downsides.

3. The AI Empire's Growth Is Fueled by Exploiting Vulnerable Global Labor for Data Annotation.

Behind promises of their technologies enhancing productivity, unlocking economic freedom, and creating new jobs that would ameliorate automation, the present-day reality has been the opposite.

The hidden workforce. Training AI models, especially large language models, requires vast amounts of human labor to collect, clean, and annotate data. This "ghost work" is often outsourced to low-wage workers globally, particularly in countries facing economic hardship.

Exploitation in crisis economies. Companies like Scale AI and Sama have leveraged crises, such as Venezuela's economic collapse or the pandemic's impact in Kenya, to find desperate workers willing to perform tedious and often psychologically damaging tasks for pennies.

  • Venezuelans worked for less than a dollar an hour on platforms like Remotasks.
  • Kenyan workers were paid less than $2 an hour to filter toxic content for OpenAI.

This reliance on precarious labor mirrors historical colonial practices of exploiting subjugated populations for resource extraction.

The cost of "data swamps." The shift to training models on unfiltered, massive datasets ("data swamps") increased the need for content moderation and reinforcement learning from human feedback (RLHF). This exposed workers to disturbing content, including child sexual abuse material, leading to severe mental health consequences, often without adequate support or fair compensation.

4. Building the AI Empire Demands Vast Resources, Imposing Significant Environmental Costs Globally.

If we are going to develop this technology in the same way that we used to, we are going to devastate the earth.

Physical infrastructure. AI models, particularly large generative ones, require massive physical data centers ("hyperscalers" and "megacampuses") for training and inference. These facilities consume enormous amounts of energy, land, minerals, and water.

Escalating environmental footprint. The demand for resources is growing exponentially with scaling.

  • Data centers are projected to use 8% of US power by 2030.
  • AI computing globally could use more energy than India.
  • AI demand could consume 1.1 to 1.7 trillion gallons of fresh water globally by 2027.

This resource intensity exacerbates climate change and strains local environments, particularly in water-stressed regions.

Disproportionate impact. The environmental burden falls heavily on communities, often in the Global South, where data centers are built due to cheap land, energy, and water. These communities, already vulnerable due to historical extractivism, face:

  • Depleted water sources.
  • Increased energy demands straining local grids.
  • Noise pollution and land displacement.

Despite corporate sustainability claims, the reality is often a continuation of resource plundering for the benefit of distant tech giants.

5. Internal Conflicts Over Safety vs. Commercialization Intensified as OpenAI Accelerated Deployment.

To succeed, we need these three clans to unite as one tribe—while maintaining the strengths of each clan—working towards AGI that maximally benefits humanity.

Factions within OpenAI. From its early days, OpenAI was marked by internal divisions, caricatured as "Exploratory Research" (advancing capabilities), "Safety" (focusing on risks), and "Startup" (moving fast and building products). These factions often clashed over priorities and the pace of development.

Safety concerns vs. product urgency. The "Safety" clan, particularly those focused on catastrophic and existential risks (Doomers), grew increasingly alarmed by the rapid scaling and deployment of models like GPT-3 and DALL-E 2 without sufficient testing or safety mechanisms. They advocated for caution and delay.

Commercial pressures prevailed. The "Applied" division and "Startup" clan, bolstered by investment and the need for revenue, pushed for faster product releases ("iterative deployment").

  • GPT-3 API was released despite safety concerns.
  • DALL-E 2 was launched as a "research preview" to manage risk.
  • ChatGPT was rushed out due to perceived competition.

These decisions often overrode safety objections, creating tension and leading to the departure of key safety researchers who felt their concerns were sidelined for commercial gain.

6. Sam Altman's Leadership Style—Marked by Ambition, Dealmaking, and Alleged Manipulation—Fueled Both Success and Turmoil.

“Sam is extremely good at becoming powerful.”

Ambition and network building. Sam Altman is characterized by relentless ambition, a talent for dealmaking, and a strategic focus on building powerful networks. He leveraged his position at Y Combinator and his relationships with figures like Peter Thiel and Reid Hoffman to advance his career and OpenAI's standing.

Contradictory behaviors. Altman is described as charismatic and outwardly agreeable, yet anxious and prone to telling different people what they want to hear. This led to confusion, mistrust, and conflict among colleagues and partners, including:

  • Misrepresenting agreements with Microsoft.
  • Pitting executives against each other (e.g., Sutskever and Pachocki).
  • Undermining those who challenged him.

These behaviors, while subtle individually, created a pervasive sense of instability at the highest levels.

Allegations of dishonesty and abuse. More serious accusations, including those from his sister Annie Altman and former colleagues like Geoffrey Irving, paint a picture of a long history of alleged manipulation, dishonesty, and abuse. While Altman and his family deny these claims, they contributed to a perception among some that his personal conduct was deeply problematic and potentially relevant to his leadership of a powerful AI company.

7. The 2023 Board Crisis Exposed Deep Power Struggles and Governance Failures at the Apex of AI Development.

It illustrated in the clearest terms just how much a power struggle among a tiny handful of Silicon Valley elites is shaping the future of AI.

The board's concerns. OpenAI's nonprofit board, tasked with prioritizing the mission over profit, grew increasingly concerned about Sam Altman's leadership style, perceived lack of candor, and behaviors that seemed to undermine the board's oversight and the company's safety culture. Feedback from senior executives like Ilya Sutskever and Mira Murati solidified these concerns.

The ouster and fallout. The board's decision to fire Altman triggered a chaotic five-day period.

  • Employees threatened mass resignation.
  • Investors pressured the board to reinstate Altman.
  • Microsoft publicly backed Altman and offered jobs to departing staff.

The swift, overwhelming backlash highlighted the board's miscalculation of Altman's influence and the deep loyalty he commanded among employees and key stakeholders.

Governance failure. The crisis revealed the fragility of OpenAI's unique governance structure. The nonprofit board, despite its mandate, ultimately buckled under pressure from moneyed interests and the threat of company collapse. The event underscored that critical decisions about a technology with global implications were made behind closed doors by a small group, with limited transparency even to employees.

8. OpenAI Actively Shapes AI Policy to Favor Incumbents and Frontier Models, Often Dismissing Present Harms.

Altman’s prep team considered it a resounding success.

Policy influence campaign. Following ChatGPT's success, Sam Altman and OpenAI launched an aggressive global lobbying effort, meeting with policymakers worldwide to shape AI regulation. Altman's testimony before Congress was a key moment, positioning OpenAI as a responsible leader advocating for necessary safeguards.

Focus on "frontier" risks. OpenAI's policy proposals, echoed by the "Frontier Model Forum" (including Google and Anthropic), emphasize regulating future, potentially catastrophic risks from highly capable ("frontier") AI models. This shifts attention away from regulating the immediate, documented harms of existing AI systems, such as:

  • Labor displacement and exploitation.
  • Environmental costs.
  • Bias and discrimination.
  • Copyright infringement and data privacy violations.

Compute thresholds and export controls. Key proposals, like using compute thresholds (e.g., 10^26 FLOPs) to identify "frontier" models and restricting their export (potentially banning open-source model weights), align with OpenAI's scaling strategy and competitive interests. These measures risk entrenching the dominance of companies with massive compute resources while hindering independent research and development.
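
As a rough illustration of how such a threshold works in practice (not a method the book spells out), training compute for a dense model is often estimated with the rule of thumb FLOPs ≈ 6 × parameters × training tokens. A minimal sketch, with hypothetical model sizes:

```python
# Back-of-envelope compute-threshold check.
# Heuristic from the scaling-law literature: training FLOPs ~= 6 * params * tokens.
# The example model sizes below are hypothetical, not disclosed figures.

THRESHOLD_FLOPS = 1e26  # the "frontier model" cutoff cited in policy proposals

def training_flops(params: float, tokens: float) -> float:
    """Standard rough estimate of dense-transformer training compute."""
    return 6 * params * tokens

examples = {
    "175B params, 300B tokens": training_flops(175e9, 300e9),  # ~3.2e23 FLOPs
    "1T params, 15T tokens": training_flops(1e12, 15e12),      # ~9.0e25 FLOPs
}

for name, flops in examples.items():
    status = "over" if flops > THRESHOLD_FLOPS else "under"
    print(f"{name}: ~{flops:.1e} FLOPs ({status} the 1e26 threshold)")
```

Because the threshold is defined in raw FLOPs, only organizations that can afford that much compute ever trip it, which is why critics argue such rules entrench incumbents rather than constrain them.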

9. The "Empire of AI" Metaphor Reveals Disturbing Parallels to Historical Colonialism and Extractivism.

Over the years, I’ve found only one metaphor that encapsulates the nature of what these AI power players are: empires.

Resource extraction. Like historical empires, AI companies seize and extract valuable resources:

  • The work of artists, writers, and online users (data).
  • The labor of low-wage workers globally (data annotation, content moderation).
  • Land, energy, water, and minerals for data centers and hardware.

This extraction often occurs without consent, fair compensation, or regard for local communities and environments.

Justification through narrative. The pursuit of AGI and the promise of a better future ("modernity," "progress," "abundance") serve as a powerful narrative to justify this extraction and exploitation. This mirrors how historical empires used concepts like "civilizing missions" to legitimize their actions.

Concentration of wealth and power. The benefits of this system accrue disproportionately to a small elite in Silicon Valley and allied corporations, while the costs are borne by vulnerable populations globally. The relentless drive to outcompete rivals in the "AI race" further fuels this extractive dynamic, consolidating power and wealth at the top.

10. Alternative Visions for AI Development Offer a Path Toward Decentralized, Ethical, and Community-Driven Technology.

Artificial intelligence doesn’t have to be what it is today.

Resisting the dominant paradigm. Communities and organizations globally are challenging the prevailing model of AI development, which is centralized, resource-intensive, and extractive. They argue that AI can be developed differently, prioritizing human well-being and environmental sustainability.

Examples of alternative approaches:

  • Community-driven AI: Projects like Te Hiku Media in New Zealand develop AI (e.g., speech recognition) based on community consent, reciprocity, and data sovereignty, using small, task-specific models.
  • Ethical research institutes: Organizations like DAIR (Distributed AI Research Institute) conduct AI research centered on affected communities, questioning existing systems and fairly compensating labor.
  • Activist movements: Groups like MOSACAT in Chile fight against the environmental impacts of data centers, advocating for local control over resources and envisioning AI infrastructure integrated with ecological restoration.

Redistributing power. These efforts aim to shift power away from centralized AI empires by:

  • Promoting independent knowledge production and research.
  • Demanding transparency about data, models, and supply chains.
  • Advocating for stronger labor and environmental protections.
  • Building collective power through cross-border solidarity and organizing.

This vision seeks to remold AI development towards a more democratic, equitable, and sustainable future.


Review Summary

4.17 out of 5
Average of 367 ratings from Goodreads and Amazon.

Empire of AI receives mixed reviews, with praise for its investigative reporting on OpenAI and Sam Altman but criticism for perceived bias and a lack of technical depth. Some readers appreciate the exposé of AI's environmental and labor impacts, while others find the book overly critical and ideologically driven. The narrative structure and the focus on personal details are contentious points. Overall, readers value the insights into OpenAI's evolution and AI industry practices, though opinions vary on the book's perspective and conclusions.


About the Author

Karen Hao is a technology journalist known for her coverage of artificial intelligence and its societal impacts. She has extensive experience reporting on OpenAI and other major tech companies, having covered the AI industry for several years. Hao's approach combines in-depth research with a critical lens on the power dynamics and ethical implications of AI development. Her work often explores themes of accountability, labor practices, and environmental consequences in the tech sector. Hao's writing style is described as engaging and accessible, though some readers find her perspective controversial. Her background in both journalism and technology informs her nuanced understanding of complex AI issues.

Loading...