Key Takeaways
1. AI's Impact Hinges on Human Choices, Not Inherent Malice
Harmful goals—seeking to control resources, say, or to thwart other agents’ goals, or to destroy other agents—are unfortunately easy to specify.
Goals are separable. The core of AI lies in its ability to reason and model the world, but the goals it pursues are independent of this intelligence. This means that AI systems can be redeployed with harmful goals, highlighting the importance of carefully designing systems to incorporate human ethical values.
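The separability claim can be illustrated with a toy sketch (all names hypothetical): a generic search engine whose goal is just a swappable predicate, so the same reasoning machinery pursues whatever objective it is handed.

```python
from collections import deque

def bfs_plan(start, neighbors, is_goal):
    """Generic breadth-first planner: the reasoning engine is fixed,
    while the goal predicate is an interchangeable parameter."""
    frontier = deque([[start]])
    seen = {start}
    while frontier:
        path = frontier.popleft()
        if is_goal(path[-1]):
            return path
        for nxt in neighbors(path[-1]):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])
    return None

# A toy state space over integers: from n you can reach n+1 or 2n.
neighbors = lambda n: [n + 1, n * 2]

# The identical engine serves entirely different goals.
plan_a = bfs_plan(1, neighbors, lambda n: n == 10)
plan_b = bfs_plan(1, neighbors, lambda n: n == 24)
```

The point of the sketch is that nothing in `bfs_plan` encodes what counts as success; swapping `is_goal` redirects the full competence of the planner, which is why the essays stress designing the goal system, not just the intelligence.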
Human values are key. The challenge is to extend human values to AI and robotic systems, creating a legal and economic framework that incentivizes positive behavior. This involves incorporating human values into their goal systems and establishing structures that detect and control harmful systems.
Technology infrastructure. The creation of a technological infrastructure that detects and controls the behavior of harmful systems is critical. Technologies like postquantum cryptography, indistinguishability obfuscation, and blockchain smart contracts offer promising components for creating a secure infrastructure.
2. Organic Intelligence Is a Fleeting Phase in Cosmic Evolution
So it won’t be the minds of humans, but those of machines, that will most fully understand the world.
Transition is underway. We are witnessing the early stages of a transition where machines surpass human capabilities and enhance them through cyborg technology. This transition is inevitable, with machines destined to dominate culture and science.
Limits of organic brains. There are chemical and metabolic limits to the size and processing power of organic brains, while silicon-based computers face no such limits. This suggests that the amount of thinking done by organic brains will be dwarfed by AI.
Posthuman era. The posthuman era, stretching billions of years ahead, will be dominated by machines that fully understand the world and drastically change it. Evolution on other worlds may have already transitioned beyond the organic stage, making aliens likely to be advanced AI.
3. Machines Augment, Not Replace: A Symbiotic Future
For this reason, humans and machines will continue to complement more than compete with one another, and most complex tasks—navigating the physical world, treating an illness, fighting an enemy on the battlefield—will be best carried out by carbon and silicon working in concert.
Complementary skills. Machine intelligence, while impressive in certain areas, is still narrow and inflexible. The most remarkable aspect of biological intelligence is its stunning versatility, from abstract flights of fancy to extreme physical prowess.
Human-machine collaboration. Most complex tasks will be best handled by carbon and silicon working in concert, with humans and machines complementing rather than competing with one another. This collaboration will bring gains in safety, leisure, and environmental friendliness.
The long-term view. There is a strong imperative to make machines more like us in one crucial respect: sentience. A conscious artificial intelligence could survive our inevitable demise, keep alive the flickering flame of consciousness, bear witness to the universe, and feel its wonder.
4. The Real AI Revolution: From Calculation to Comprehension
The very notion of thinking about robots and artificial intelligences in terms of social relationships may initially seem implausible.
Beyond calculation. The real AI revolution is not about machines that calculate but about machines that comprehend. This involves understanding the context in which they operate and appreciating the consequences of their programming.
Social relationships. As AI becomes more real, we will relate to our ever more talented simulacra as slaves, assistants, colleagues, or masters. The goal of the designers of future robots should be to create colleagues rather than servants.
Risk of ceding control. There is a risk of ceding individual control over everyday decisions to a cluster of ever more sophisticated algorithms. The software engineers, AI researchers, roboticists, and hackers who design these future systems have the power to reshape society.
5. Beyond Code: The Ethical Imperative in AI Development
The worry that an AI system would get so clever at attaining one of its programmed goals (like commandeering energy) that it would run roughshod over the others (like human safety) assumes that AI will descend upon us faster than we can design fail-safe precautions.
Safeguards are essential. It is bizarre to think that roboticists will not build in safeguards against harm as they proceed. The fear of a system that pursues one programmed goal (like commandeering energy) while running roughshod over others (like human safety) presumes that AI will arrive faster than we can design fail-safe precautions.
Intelligence vs. wanting. Intelligence is the ability to deploy novel means to attain a goal; the goals are extraneous to the intelligence itself. Being smart is not the same as wanting something.
Ethical guidelines. The AI field, like synthetic biotech, already needs guidelines that promote “responsible innovation.” It will be critical to create a technological infrastructure that detects and controls the behavior of harmful systems.
6. The Inevitable Blend: Designed Intelligence and Augmented Humans
Very soon, the distinction between artificial and natural will melt away.
Designed intelligence. We should stop using the term “artificial” in AI altogether and instead use the term “designed intelligence” (DI). Designed intelligence will increasingly rely on synthetic biology and organic fabrication.
Augmented human intelligence. Only ethical barriers stand in the way of augmenting human intelligence using similar technology, in the manner long considered by the transhumanism movement. Genetically modified humans with augmented brains could elevate and improve the human experience dramatically.
Three possible futures. There are three possible futures, each with its own ethical challenges: humans subordinate their hegemony to DI, humans modify their brains and hand over enhancement management to DI, or DI and augmented human intelligence merge.
7. The Limits of Logic: Human Values in a Machine World
The potential of advanced AI and concerns about its downsides are rising on the agenda—and rightly.
Beyond logic. In a rational system, the goals are completely separable from the reasoning and models of the world, which means beneficial intelligent systems can be redeployed with harmful goals.
Human values are critical. We need to incorporate human values into AI goal systems and build a legal and economic framework that incentivizes positive behavior. This means extending both internal mechanisms (human moral emotions) and external mechanisms (political, legal, and economic structures) to AI and robotic systems.
The value-loading problem. The most important issue is how to construct superintelligences that want outcomes that are high-value, normative, and beneficial for intelligent life over the long run. This is a technically difficult problem that needs to be addressed.
8. AI as Mirror: Reflecting Our Best and Worst Selves
AI reflects human values. AI is not intrinsically malevolent, but its goals may one day clash with yours. The motivations of our artificial minds will (at least initially) be those of the organizations, corporations, groups, and individuals that make use of their intelligence.
The need for ethical AI. It will therefore be critical to build a technological infrastructure that detects and controls harmful systems, and to embed human values in AI goal systems within a legal and economic framework that rewards positive behavior.
The importance of ethics. The impact of AI on humanity is steadily growing, and to ensure that this impact is positive there are very difficult research problems that we need to buckle down and work on together. We need to set aside the tribal quibbles and ramp up the AI safety research.
9. The Control Crisis: Regulating Algorithms in a Complex World
Today we face another control crisis, though it’s the mirror image of the earlier one.
The new control crisis. Our ability to gather and process data, to manipulate information in all its forms, has outstripped our ability to monitor and regulate data processing in a way that suits our societal and personal interests. Resolving this new control crisis will be one of the great challenges in the years ahead.
The risks of invisibility. As individuals and as a society, we increasingly depend on artificial intelligence algorithms we don’t understand. Their workings, and the motivations and intentions that shape their workings, are hidden from us.
The need for transparency. The first step in meeting the challenge is to recognize that the risks of artificial intelligence don’t lie in some dystopian future. They are here now.
10. The Real Fear: Over-Reliance on Limited Machines
The real danger, then, is not machines that are more intelligent than we are usurping our role as captains of our destinies.
Clueless machines. The real danger is basically clueless machines being ceded authority far beyond their competence. We’re on the verge of abdicating control to artificial agents that can’t think, prematurely putting civilization on autopilot.
The risk of atrophy. As we become ever more dependent on these cognitive prostheses, we risk becoming helpless if they ever shut down. We must somehow manage to keep our own cognitive skills from atrophying.
The need for caution. It is very, very hard to imagine (and keep in mind) the limitations of entities that can be such valued assistants, and the human tendency is always to overendow them with understanding. We’ll always be tempted to ask more of them than they were designed to accomplish, and to trust the results when we shouldn’t.
Review Summary
What to Think About Machines That Think presents a collection of essays from experts on artificial intelligence. Reviews are mixed, with some praising the diverse perspectives and thought-provoking ideas, while others criticize the repetitive content and lack of curation. Many readers found valuable insights among the essays but felt the book could have benefited from better organization and editing. Some appreciated the book's exploration of AI's potential impacts, while others found it speculative and lacking in practical information.