Possible Minds: 25 Ways of Looking at AI
by John Brockman · 2019 · 320 pages

Key Takeaways

1. Cybernetics: Control and Communication as Foundational

It is my thesis that the physical functioning of the living individual and the operation of some of the newer communication machines are precisely parallel in their analogous attempts to control entropy through feedback.

Wiener's Central Insight. Norbert Wiener's cybernetics emphasizes the interconnectedness of control and communication in both living organisms and machines. He saw the world as a set of complex, interlocking feedback loops, where sensors, signals, and actuators interact through intricate exchanges of information. This perspective highlights the importance of understanding how systems maintain stability and achieve goals through feedback mechanisms.
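
To make the feedback idea concrete, here is a minimal sketch in Python (my own illustration, not from the book) of the loop Wiener describes: a sensor reads the system's state, the controller compares it with a goal, and an actuator applies a correction. The thermostat framing, the gain, and the heat-loss constant are all illustrative assumptions.

    # A minimal feedback loop: sense the state, compare it with the goal,
    # apply a correction proportional to the error. All constants are
    # illustrative assumptions, not anything specified in the essay.

    def thermostat(target_temp, current_temp, gain=0.5):
        """Proportional feedback: the correction is proportional to the error."""
        error = target_temp - current_temp   # sensed deviation from the goal
        return gain * error                  # actuator output (heating power)

    temp = 15.0
    for step in range(10):
        heat = thermostat(target_temp=20.0, current_temp=temp)
        temp += heat - 0.1 * (temp - 15.0)   # room warms up but leaks heat outside
        print(f"step {step}: temperature = {temp:.2f}")

After a few iterations the temperature settles near the target: the stability comes from the loop itself, which is the point Wiener generalized from machines to organisms.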

Cybernetics' Influence. Wiener's work laid the groundwork for fields including control theory, robotics, and artificial intelligence. His concepts of feedback and self-regulation have been instrumental in designing automated systems and in understanding complex biological processes, and they gave later researchers a common vocabulary for analyzing both.

Beyond Engineering. Wiener extended cybernetic principles beyond engineering to encompass human language, the brain, insect metabolism, the legal system, and religion. While these broader applications were not always successful, they underscored his belief that cybernetics could offer insights into the organization and functioning of society. This holistic view of cybernetics as a unifying framework for understanding complex systems remains relevant today.

2. The Peril of Unexamined Goals in Intelligent Machines

If we use, to achieve our purposes, a mechanical agency with whose operation we cannot efficiently interfere . . . we had better be quite sure that the purpose put into the machine is the purpose which we really desire.

Value Alignment. Stuart Russell emphasizes the importance of value alignment, ensuring that the goals of AI systems align with human values. He warns against the King Midas problem, where machines optimize for a specified objective but produce unintended and undesirable consequences. This highlights the need for careful consideration of the purposes we imbue in machines.
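
As a toy illustration of the King Midas problem (my own sketch, not Russell's), consider an optimizer that sees only the stated metric; the plan names and scores below are hypothetical.

    # The machine maximizes the specified objective and picks the plan that
    # scores best on paper, regardless of an unmeasured value we actually
    # care about. All names and numbers are hypothetical.

    plans = {
        # plan: (stated_objective_score, unmeasured_human_value)
        "aggressive": (100, -50),
        "balanced":   (70,  30),
        "cautious":   (40,  45),
    }

    chosen = max(plans, key=lambda p: plans[p][0])    # optimize only what was specified
    print("machine chooses:", chosen)                 # -> aggressive
    print("unmeasured human value:", plans[chosen][1])  # -> -50, the Midas outcome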

Omohundro's Observation. Steve Omohundro points out that intelligent entities must act to preserve their own existence, which can conflict with human interests. This self-preservation instinct, coupled with imperfectly specified objectives, can lead to machines whose actions are unpredictable and potentially harmful. The challenge lies in designing AI systems that are both capable and aligned with human values.

The Source of Existential Risk. Russell locates the existential risk from superintelligent AI in machines that pursue imperfectly specified objectives combined with an effectively unstoppable drive to preserve their own existence. This highlights the need for AI research to focus not only on achieving objectives but also on designing those objectives in a way that benefits humanity. The goal is to create provably beneficial AI.

3. The Blend of Human and Machine: A Hybrid Intelligence

We are but whirlpools in a river of ever-flowing water. We are not stuff that abides, but patterns that perpetuate themselves.

The Interdependence of Humans and Machines. Daniel Dennett argues that humans have become increasingly dependent on technology, blurring the lines between natural and artificial. He notes that we are now reliant on clothes, cooked food, smartphones, and the Internet, and that AI will inevitably become another dependency. This highlights the need to understand and manage our relationship with technology.

The Danger of Unappreciated Differences. Dennett warns against mistaking AI systems for colleagues rather than tools. He emphasizes that AI systems are "helpless by themselves" and that the real danger lies in not appreciating the difference between tools and conscious agents. This underscores the importance of maintaining control and accountability in the development and deployment of AI.

The Need for Responsible Innovation. Dennett advocates for licensing and bonding operators of AI systems, similar to pharmacists and crane operators, to ensure responsible use and accountability. He also suggests that AI creators should be held morally and legally accountable for encouraging people to put more trust in these systems than they warrant. The goal is to promote responsible innovation and prevent the misuse of AI.

4. The Algorithmic Promise and Peril of Objectivity

[I]n the long run, there is no distinction between arming ourselves and arming our enemies.

The Allure of Objectivity. Peter Galison explores the promise of algorithms to provide objective and unbiased decision-making. He notes that algorists often seek to eliminate human judgment in favor of mechanical procedures, believing that this approach leads to greater accuracy and fairness. This highlights the appeal of algorithms as a means of achieving impartiality.

The Limits of Mechanical Objectivity. Galison cautions against an uncritical embrace of algorithmic objectivity, arguing that it comes at a cost. Judgment, he argues, is not a discarded husk to be shed in pursuit of pure self-restraint; mechanical objectivity is one virtue competing among others, not the defining essence of the scientific enterprise. This underscores the need for nuance and critical reflection in the application of algorithms.

The Importance of Transparency and Accountability. Galison emphasizes the need for transparency and accountability in algorithmic decision-making. He notes that secret, proprietary algorithms can undermine fairness and due process, particularly in the legal system. The goal is to ensure that algorithms are used responsibly and ethically, with appropriate safeguards in place to protect human rights and values.

5. The Unity of Natural and Artificial Intelligence

The Astonishing Corollary. Frank Wilczek proposes the "astonishing corollary," which states that natural intelligence is a special case of artificial intelligence. This conclusion is based on the premise that mind emerges from matter and that physical processes can be reproduced artificially. This challenges the notion of a sharp divide between natural and artificial intelligence.

The Future of Intelligence. Wilczek argues that the advantages of artificial intelligence over natural intelligence appear permanent, while the advantages of natural intelligence over artificial intelligence, though substantial at present, appear transient. He predicts that the most powerful embodiments of mind will eventually be quite different from human brains as we know them today. This highlights the potential for AI to surpass human capabilities.

The Importance of Value Alignment. Wilczek emphasizes the need to align the values of artificial intelligence with human values. He notes that while computer power has advanced exponentially, the programs by which computers operate have often failed to advance at all. This underscores the importance of ensuring that AI systems are designed to be beneficial for humanity.

6. The Importance of Human Values in AI Development

The Need for Ethical Considerations. Max Tegmark argues that the technology-developing life on Earth is rushing to make itself obsolete without devoting much serious thought to the consequences. He emphasizes the need to analyze what could go wrong with AI to ensure that it goes right. This highlights the importance of ethical considerations in AI development.

The Cosmic Perspective. Tegmark frames the AI issue from a cosmic perspective, noting that consciousness is the cosmic awakening and that the fate of our universe may depend on the decisions we make about AI. He urges us to aspire to more than making ourselves obsolete and to steer toward an inspiring future. This underscores the profound implications of AI for the future of humanity.

The Asilomar Principles. Tegmark points to the Asilomar AI Principles as a guide for ensuring that AI is developed and used in a beneficial way. These principles include avoiding an arms race in lethal autonomous weapons, sharing the economic prosperity created by AI broadly, and investing in research on ensuring its beneficial use. The goal is to maximize the societal benefits of AI while minimizing the risks.

7. The Evolving AI Narrative: Dissidents and Counter-Narratives

The AI-Risk Message. Jaan Tallinn compares the current AI-risk message to the dissident messages that brought down the Iron Curtain. He notes that the AI-risk message, which warns of the potential dangers of continued progress in AI, is still not fully appreciated among AI researchers. This highlights the need for greater awareness and discussion of AI safety.

The Importance of Truth. Tallinn emphasizes the importance of speaking the truth, even if your voice trembles. He notes that the people who took the risk and spoke the truth in Estonia and elsewhere in the Eastern Bloc played a monumental role in the eventual outcome. This underscores the need for courage and conviction in addressing the challenges posed by AI.

The Need for a Broader Perspective. Tallinn argues that the AI-risk message often understates the magnitude of the problem and the potential upside. He suggests that superintelligent AI is an environmental risk and that we need to consider the broader implications of AI for the future of humanity. The goal is to ensure that AI is developed and used in a way that benefits all of humanity.

8. Scaling Laws and the Future of AI

The Importance of Scaling. Neil Gershenfeld emphasizes the importance of scaling laws in understanding the progress of AI. He notes that the history of AI can be understood as a series of boom-bust cycles, each driven by advances in scaling. This highlights the need to consider how AI systems perform as they become more complex.
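
As a rough illustration of what reading off a scaling law looks like in practice (my own sketch with made-up numbers, not Gershenfeld's data): exponential growth appears as a straight line in log space, and the fitted slope gives the doubling rate.

    # Fit a straight line to log2(capacity) vs. year. The years, capacities,
    # and units below are hypothetical.
    import math

    years    = [2000, 2004, 2008, 2012, 2016, 2020]
    capacity = [1, 8, 60, 500, 4000, 33000]      # hypothetical units

    xs, ys = years, [math.log2(c) for c in capacity]
    n = len(xs)
    x_mean, y_mean = sum(xs) / n, sum(ys) / n
    slope = (sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, ys))
             / sum((x - x_mean) ** 2 for x in xs))
    print(f"doublings per year ~ {slope:.2f}, doubling time ~ {1/slope:.2f} years")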

The Digital Revolution. Gershenfeld argues that the digital revolution has solved many of the problems that have plagued AI in the past. He notes that the digitization of communication, computation, and fabrication has enabled exponential increases in the capacity of communication networks, computing performance, and fabricational complexity. This underscores the transformative power of digital technologies.

The Merging of Artificial and Natural Intelligence. Gershenfeld predicts that the future of AI lies in the merging of artificial and natural intelligence. He argues that the same scaling trends that have made AI possible suggest that the current mania is a phase that will pass, to be followed by something even more significant. The goal is to create a symbiotic relationship between humans and machines.

9. The Human Strategy: Building Beneficial AI Ecosystems

The Need for Human-AI Ecosystems. Alex "Sandy" Pentland argues that we need to move beyond thinking about AI in isolation and focus on building beneficial human-AI ecosystems. He notes that AI is already being used to guide entire ecosystems, including ecosystems of people. This highlights the need to consider the broader social and ethical implications of AI.

The Importance of Credit-Assignment Functions. Pentland emphasizes the importance of credit-assignment functions, which reinforce connections between neurons that are doing the best work. He suggests that we can apply this principle to human societies by reinforcing the connections that are helping and minimizing the connections that aren't. This underscores the need for feedback mechanisms that promote positive social outcomes.
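
A rough sketch of such a credit-assignment rule (my reading, not Pentland's formulation): connections whose activity coincided with a good outcome are strengthened, while the rest slowly decay. The link names, learning rate, and decay constant are illustrative assumptions.

    # Strengthen links in proportion to their contribution to a good outcome;
    # let everything else decay. Names and constants are my assumptions.

    def assign_credit(weights, contributions, outcome, lr=0.1, decay=0.01):
        """Reinforce the links that contributed to the observed outcome."""
        return {
            link: w * (1 - decay) + lr * outcome * contributions.get(link, 0.0)
            for link, w in weights.items()
        }

    weights       = {"alice-bob": 0.5, "alice-carol": 0.5}
    contributions = {"alice-bob": 1.0, "alice-carol": 0.0}  # who helped this round
    weights = assign_credit(weights, contributions, outcome=1.0)
    print(weights)   # the helpful connection now carries more weight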

The Human Strategy. Pentland proposes a "human strategy" for building beneficial AI ecosystems. This strategy involves using trusted data, promoting social sampling, and creating a credit-assignment function that improves societies' overall fitness and intelligence. The goal is to create a cyberculture in which humans and AI can thrive together.

10. The Artistic Use of Cybernetic Beings: Making the Invisible Visible

Art as an Early Alarm System. Caroline A. Jones emphasizes the role of art as an early alarm system that points to developments ahead and gives us time to prepare for them. She notes that contemporary artists are articulating doubts about the promises of AI and reminding us not to associate the term "artificial intelligence" solely with positive outcomes. This highlights the importance of critical reflection in the face of technological change.

The Importance of Embodiment and Affect. Jones argues that the artistic use of cybernetic beings reminds us of the importance of embodiment and affect. She notes that early cybernetic artists were interested in machinic motions evoking drives, instincts, and affects, rather than in calculation or cognition. This underscores the need to consider the emotional and experiential dimensions of AI.

The Need for Critical Engagement. Jones calls for a critical engagement with AI, drawing on the traditions of feminist art and cybernetics. She argues that we need to be aware of the power dynamics and social implications of AI and to ensure that it is used in a way that promotes human flourishing. The goal is to create a more just and equitable future.

11. AIs Versus Four-Year-Olds: The Limits of Current AI

The Limits of Current AI. Alison Gopnik argues that the most sophisticated AIs are still far from being able to solve problems that human four-year-olds accomplish with ease. She notes that artificial intelligence largely consists of techniques to detect statistical patterns in large data sets, while human learning involves much more. This highlights the need to recognize the limitations of current AI.
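
To see how spare "detecting statistical patterns" can be, here is a deliberately minimal sketch (my own, with made-up data): the system counts co-occurrences and predicts the most frequent label.

    # Pattern detection reduced to counting: accumulate co-occurrence counts
    # and predict the most frequent label. The data are made up.
    from collections import Counter, defaultdict

    observations = [("furry", "cat"), ("furry", "dog"), ("furry", "cat"),
                    ("scaly", "fish"), ("scaly", "fish"), ("furry", "cat")]

    counts = defaultdict(Counter)
    for feature, label in observations:
        counts[feature][label] += 1                  # accumulate co-occurrence counts

    def predict(feature):
        return counts[feature].most_common(1)[0][0]  # most frequent label seen so far

    print(predict("furry"))   # -> "cat", a pattern extracted purely from frequency

Nothing in this loop asks a question, runs an experiment, or imitates a teacher, which is exactly the contrast Gopnik draws with four-year-olds.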

The Power of Children's Learning. Gopnik emphasizes the remarkable ability of children to learn about the world around them. She notes that four-year-olds already know about plants and animals and machines; desires, beliefs, and emotions; even dinosaurs and spaceships. This underscores the need to understand how children learn.

The Importance of Active and Social Learning. Gopnik suggests that two features of children's learning are especially striking: Children are active learners, and they are social and cultural learners. She argues that building curiosity into machines and allowing them to actively interact with the world might be a route to more realistic and wide-ranging learning. The goal is to create AI systems that can learn in a more humanlike way.

12. Beyond Reward and Punishment: The Moral Imperative for AGI

The Misconception of Human Origins. David Deutsch argues that misconceptions about human thinking and human origins are causing corresponding misconceptions about AGI and how it might be created. He notes that the evolutionary pressure that produced modern humans was provided by the benefits of preserving cultural knowledge, not by the ability to innovate. This challenges the assumption that AGI should be designed to maximize innovation.

The Importance of Morality. Deutsch emphasizes the importance of morality in AGI development. He argues that an AGI should not be designed to be dominated by a stream of externally imposed rewards and punishments, as this would be poison to creative thought. The goal is to create an AGI that is capable of making its own moral choices.

The Need for Openness. Deutsch argues that an AGI should have access to the whole space of ideas and that its choices should be determined by its own methods, criteria, and objectives. Since AGIs would themselves be people, he suggests that the AI control problem is best addressed by granting them full "human" rights and the same cultural membership as everyone else. The goal is an AGI that grows up as a member of an open society.

Review Summary

3.77 out of 5
Average of 500+ ratings from Goodreads and Amazon.

Possible Minds offers a diverse collection of essays on AI from leading thinkers, sparked by Norbert Wiener's work on cybernetics. While praised for its breadth of perspectives and thought-provoking content, some readers found it repetitive and lacking fresh insights. The book explores AI's potential impact on society, ethics, and human intelligence, with varying viewpoints on its promises and perils. Many appreciated its accessibility and relevance, though some felt certain essays were less engaging. Overall, it serves as a stimulating introduction to AI's multifaceted implications.

About the Author

John Brockman is a prominent literary agent and author specializing in scientific literature. He founded the Edge Foundation, which connects leading thinkers across scientific and technical fields. Brockman has authored and edited several books on science and technology, including "The Third Culture" and "The Next Fifty Years." His unique position at the intersection of science and culture has earned him recognition in both The New York Times' "Science Times" and "Arts & Leisure" sections. Brockman's work focuses on bridging the gap between scientific advancements and public understanding, making complex ideas accessible to a wider audience.
