Atlas of AI

Power, Politics, and the Planetary Costs of Artificial Intelligence
by Kate Crawford · 2020 · 288 pages
3.96 · 2k+ ratings

Key Takeaways

1. AI's Material Basis: Earth as an Extractive Industry

Computational media now participate in geological (and climatological) processes, from the transformation of the earth’s materials into infrastructures and devices to the powering of these new systems with oil and gas reserves.

AI's reliance on resources. Artificial intelligence is not an ethereal technology but a deeply material industry, dependent on the extraction of the Earth's resources. From lithium mines in Nevada to rare earth deposits in Inner Mongolia, building AI systems requires a vast supply chain of minerals, energy, and materials. This demand fuels environmentally destructive mining practices that are often overlooked in discussions of technological progress.

Environmental impact. The tech sector's demand for resources contributes significantly to environmental degradation. The extraction of minerals contaminates waterways, destroys forests, and displaces communities. Furthermore, the energy-intensive nature of AI, particularly the training of large models, adds to a tech-sector carbon footprint that rivals that of the aviation industry.

Need for a shift in perspective. To understand the true cost of AI, we must move beyond the abstract promises of technological advancement and consider its material consequences. This requires acknowledging the environmental and human costs associated with resource extraction, energy consumption, and the global supply chains that support AI systems.

2. The Human Cost: Labor Exploitation in AI Systems

Coordinating the actions of humans with the repetitive motions of robots and line machinery has always involved a controlling of bodies in space and time.

AI's dependence on human labor. Despite the narrative of automation, AI systems rely heavily on human labor, often hidden and poorly compensated. This includes digital pieceworkers labeling data, Amazon warehouse employees fulfilling orders, and content moderators filtering harmful content. These workers are essential to making AI systems function, yet their contributions are often undervalued and their working conditions exploitative.

Time and control. The management of time is central to the exploitation of labor in AI systems. Workers are subjected to constant surveillance and algorithmic assessment, with their every action tracked and measured to maximize efficiency. This creates a stressful and dehumanizing work environment, where workers are treated as mere appendages to the machine.

Need for worker solidarity. To address the exploitation of labor in AI systems, workers must organize and demand better working conditions, fair wages, and greater control over their time and labor. This requires building solidarity across different sectors of the AI industry, from miners to engineers, and challenging the power structures that perpetuate exploitation.

3. Data as Infrastructure: The Erasure of Context and Consent

All publicly accessible digital material—including data that is personal or potentially damaging—is open to being harvested for training datasets that are used to produce AI models.

Data extraction. The AI industry relies on the mass harvesting of data, often without consent or regard for privacy. This includes personal information, images, and text scraped from the internet and used to train AI models. This practice treats data as a free resource, ignoring the ethical and social implications of collecting and using people's information without their knowledge or permission.

From image to infrastructure. The transformation of images into data strips them of their context and meaning. Mugshots, selfies, and personal photos are reduced to data points, used to train facial recognition systems and other AI models. This erasure of context can lead to biased and discriminatory outcomes, as AI systems learn to associate certain features with negative stereotypes.

Ethical concerns. The current practices of data collection and use in AI raise profound ethical concerns. We must move beyond the idea that data is a neutral resource and recognize the power dynamics inherent in its collection, labeling, and use. This requires developing ethical guidelines and regulations that protect people's privacy and prevent the misuse of their data.

4. Classification as Power: Encoding Bias in AI Systems

By looking at how classifications are made, we see how technical schemas enforce hierarchies and magnify inequity.

Classification as a political act. AI systems rely on classification to make sense of the world. However, the categories used to classify data are not neutral or objective but reflect the biases and assumptions of their creators. These biases can be encoded into AI systems, leading to discriminatory outcomes.

The problem of bias. AI systems have been shown to exhibit bias in a variety of domains, from facial recognition to criminal justice. These biases often reflect historical patterns of discrimination, perpetuating and amplifying existing inequalities. For example, facial recognition systems may be less accurate for people with darker skin, leading to misidentification and wrongful arrests.

Beyond bias debates. To address the problem of bias in AI, we must move beyond technical fixes and address the underlying social and political structures that shape the data and algorithms used to train AI systems. This requires challenging the power dynamics that perpetuate inequality and promoting more equitable and inclusive approaches to AI development.

5. Affect Recognition: The Troubled Science of Reading Emotions

The solution to the Clever Hans riddle, Pfungst wrote, was the unconscious direction from the horse’s questioners.

The claim of universal emotions. Affect recognition systems are based on the idea that emotions are universal and can be reliably detected from facial expressions. However, this claim is highly contested, with many researchers arguing that emotions are culturally variable and context-dependent.

The influence of Paul Ekman. The work of psychologist Paul Ekman has been influential in shaping the field of affect recognition. Ekman's research, which began in the 1960s, claimed to identify a set of basic emotions that are universally expressed and recognized. However, his methods and findings have been widely criticized for their lack of scientific rigor.

Ethical concerns. Despite the scientific doubts surrounding affect recognition, these tools are being rapidly deployed in a variety of high-stakes contexts, from hiring to policing. This raises serious ethical concerns, as people may be judged and discriminated against based on inaccurate and unreliable assessments of their emotional state.

6. AI as a Tool of State Power: Surveillance and Control

The military past and present of artificial intelligence have shaped the practices of surveillance, data extraction, and risk assessment we see today.

Military origins of AI. The development of AI has been heavily influenced by military funding and priorities. This has shaped the field's focus on surveillance, data extraction, and risk assessment, with little regard for the ethical and social implications.

The Snowden archive. The Snowden archive reveals the extent to which intelligence agencies have used AI to collect and analyze data on a massive scale. These tools, once reserved for national security purposes, are now being deployed domestically, blurring the lines between military and civilian surveillance.

The Third Offset strategy. The U.S. military's Third Offset strategy seeks to maintain its dominance in AI by partnering with the tech sector. This has led to a close relationship between the military and Silicon Valley, with tech companies providing AI tools and expertise to the Defense Department.

7. The Great Houses of AI: Centralizing Power and Widening Asymmetries

These politics are driven by the Great Houses of AI, which consist of the half-dozen or so companies that dominate large-scale planetary computation.

Concentration of power. The AI industry is dominated by a small number of powerful technology corporations. These companies control vast amounts of data, resources, and expertise, giving them a significant advantage in shaping the development and deployment of AI systems.

Widening inequalities. The concentration of power in the hands of a few tech giants exacerbates existing inequalities. AI systems are often designed to serve the interests of these companies, further widening the gap between the rich and the poor, the powerful and the marginalized.

Need for regulation. To address the concentration of power in the AI industry, we need stronger regulations that promote competition, protect privacy, and ensure that AI systems are used in a way that benefits society as a whole. This requires challenging the dominance of the tech giants and promoting more democratic and accountable forms of AI governance.

8. Challenging the Logics: Towards Interconnected Movements for Justice

As conditions on Earth change, calls for data protection, labor rights, climate justice, and racial equity should be heard together.

Interconnected movements. Addressing the foundational problems of AI requires connecting issues of power and justice. This includes data protection, labor rights, climate justice, and racial equity. By working together, these movements can challenge the structures of power that AI currently reinforces.

Politics of refusal. We must reject the idea that AI is inevitable and that we have no choice but to accept its consequences. This requires challenging the narratives of technological determinism and demanding more democratic and accountable forms of AI governance.

A different vision. By connecting issues of power and justice, we can create a different vision for AI, one that prioritizes human well-being, environmental sustainability, and social equity. This requires challenging the extractive logics of AI and building a more just and sustainable future for all.

Review Summary

3.96 out of 5
Average of 2k+ ratings from Goodreads and Amazon.

Atlas of AI receives mixed reviews: some praise its critical examination of AI's societal and environmental impacts, while others fault its repetitive writing and lack of proposed solutions. Readers appreciate the book's exploration of AI's material costs, labor exploitation, and ethical concerns, though some find it overly pessimistic and lacking in technical depth. The book is commended for its comprehensive approach but criticized for its academic tone and occasional lack of focus. Despite these flaws, many consider it an important read for understanding AI's broader implications.

About the Author

Kate Crawford is a leading scholar and researcher on artificial intelligence and its societal implications, known for an interdisciplinary approach that combines technology, ethics, and the social sciences. She has held positions at prominent institutions, including Microsoft Research and New York University. Crawford's work focuses on the hidden costs and consequences of AI systems, including their environmental impact, labor practices, and potential for bias and discrimination. She has published extensively on these topics and is a frequent speaker at international conferences. Her research has contributed significantly to the critical discourse surrounding the development and deployment of AI.
