What We Owe The Future

by William MacAskill · 2022 · 353 pages
Philosophy · Science · History

Key Takeaways

1. The future is vast and our actions today can shape it

Just as we had ancestors who were not human, we may have descendants who will not be human.

The scale of the future is immense. If humanity survives, we could persist for hundreds of millions or even billions of years. The decisions we make today could have profound and long-lasting impacts on this vast future. Our descendants might spread across the cosmos, creating a civilization of unimaginable scale and complexity.

We have outsized influence now. We live at an unusually pivotal time in history, with rapid technological progress and global interconnectedness. This gives us unprecedented power to affect long-term outcomes. Some key areas where we can shape the future include:

  • Developing transformative technologies like AI responsibly
  • Mitigating existential risks like pandemics and nuclear war
  • Promoting beneficial values and institutions that could persist
  • Expanding humanity's reach beyond Earth

Taking a long-term perspective is crucial. By zooming out and considering the full scope of what's possible, we can make wiser choices that benefit not just ourselves, but countless future generations. We have a profound responsibility to use our influence wisely.

2. Moral progress is contingent, not inevitable

The moral changes that brought about the end of slavery could have taken much longer to occur, or might never have happened at all.

Moral progress is not guaranteed. While we've made significant ethical advancements throughout history, like the abolition of slavery, these changes were not inevitable. They resulted from the dedicated efforts of moral reformers and activists. Without their work, harmful practices and beliefs could have persisted indefinitely.

Values can be highly persistent. Once established, moral norms and societal values can be extremely durable, lasting for centuries or even millennia. This persistence cuts both ways - beneficial values can endure, but so can harmful ones. Examples of long-lasting value systems include:

  • Major world religions
  • Political ideologies
  • Cultural traditions and taboos

Shaping values is crucial. Given how influential and persistent values can be, one of the most impactful things we can do is try to positively shape the moral trajectory of civilization. This involves:

  • Promoting beneficial values like compassion, reason, and long-term thinking
  • Opposing harmful ideologies and practices
  • Creating robust institutions to preserve and transmit positive values
  • Remaining open to moral exploration and progress

3. Technology could enable value lock-in, for better or worse

AGI agents are potentially immortal.

Artificial General Intelligence (AGI) is a pivotal technology. The development of AGI - AI systems with human-level abilities across all domains - could lead to an intelligence explosion. This could enable the rapid achievement of whatever goals are programmed into the AGI, potentially locking in those values indefinitely.

The stakes are enormously high. Depending on how AGI is developed and implemented, it could lead to radically different futures:

  • A utopia of abundance and flourishing for all sentient beings
  • A dystopia where human values are ignored or suppressed
  • The extinction of humanity and the end of our cosmic legacy

Careful development is crucial. Given the transformative potential of AGI, it's vital that we:

  • Invest heavily in AGI safety research
  • Develop robust governance frameworks for AI development
  • Work to instill beneficial values and goals in AGI systems
  • Avoid rushing AGI development in ways that could lead to catastrophe

4. Extinction and collapse are existential risks we must address

What now matters most is that we avoid ending human history.

Human extinction would be an immense tragedy. The extinction of humanity would not only end billions of lives, but would foreclose on the vast potential of our species and all future generations. It would likely mean the permanent loss of consciousness and intelligence in our corner of the universe.

Collapse could be nearly as bad as extinction. Even if some humans survive a catastrophe, the collapse of civilization could trap us in a vulnerable state or lead to the lock-in of flawed values. Recovery might be difficult or impossible, especially if we've depleted easily accessible resources.

We face several major risks. Some of the most pressing existential risks include:

  • Nuclear war
  • Engineered pandemics
  • Misaligned artificial intelligence
  • Extreme climate change
  • Asteroid impacts

Safeguarding humanity is a key priority. Given the enormous stakes, reducing existential risks should be one of our highest priorities. This involves developing technological safeguards, improving global coordination, and creating robust backup plans for human survival and recovery.

5. Engineered pandemics pose a significant threat to humanity

We are often in a position of deep uncertainty with respect to the future for several reasons.

Biotechnology is advancing rapidly. The tools to engineer and modify organisms are becoming increasingly powerful and accessible. While this has great potential benefits, it also creates unprecedented risks. Soon, it may be possible to create pathogens more deadly than anything found in nature.

Safeguards are inadequate. Despite the risks, biosafety and biosecurity measures are often lax. There have been numerous concerning incidents of pathogens escaping from labs. Regulation and oversight have not kept pace with technological capabilities.

Deliberate misuse is a serious concern. In addition to accidental releases, we must worry about the intentional development and use of biological weapons. Even a small group of bad actors could potentially create a global pandemic.

Preparation is essential. To mitigate pandemic risks, we urgently need to:

  • Strengthen biosafety and biosecurity measures
  • Improve global disease surveillance and response capabilities
  • Invest in broad-spectrum medical countermeasures
  • Develop governance frameworks for dual-use research

6. Climate change and resource depletion endanger long-term survival

To safeguard civilisation, we therefore need to make sure we get beyond that unsustainable level and reach a point where we have the technology to effectively defend against such catastrophic risks.

Climate change poses long-term risks. While unlikely to directly cause human extinction, extreme climate change could destabilize civilization and make us more vulnerable to other threats. The impacts could persist for thousands of years, hampering recovery from other disasters.

Fossil fuel depletion complicates recovery. If we burn through easily accessible fossil fuels, it could be much harder for civilization to recover if it collapses. The energy density and accessibility of fossil fuels played a key role in enabling the Industrial Revolution.

Sustainable technologies are crucial. To ensure long-term survival and flourishing, we need to:

  • Rapidly transition to clean energy sources
  • Develop energy storage and transmission technologies
  • Improve energy efficiency across all sectors
  • Preserve some fossil fuel reserves as a backup

Environmental stewardship has long-term benefits. By preserving a stable climate and natural resource base, we give future generations more options and increase the chances of recovery if disaster strikes.

7. Technological stagnation could trap us in a vulnerable state

We are like a climber scaling a sheer cliff face with no ropes or harness, with a significant risk of falling.

Progress is not guaranteed. There are reasons to worry that technological progress could slow dramatically or even stop entirely. This could be due to factors like:

  • Declining population growth
  • Diminishing returns on research
  • Resource constraints
  • Social or political factors

Stagnation is dangerous. If we stagnate at our current level of technology, we would remain vulnerable to various existential risks. We need ongoing progress to develop solutions to threats like pandemics, climate change, and artificial intelligence.

Avoiding stagnation requires effort. To keep technological progress going, we should:

  • Invest heavily in scientific research and development
  • Reform institutions to make them more conducive to innovation
  • Carefully manage demographic trends and their economic impacts
  • Preserve a stable foundation for long-term progress

Balancing progress and safety is key. While we need ongoing technological advancement, we must also be careful to develop new technologies responsibly. The goal should be sustainable progress that enhances rather than endangers humanity's long-term potential.

8. Adding happy people to the world has positive moral value

If humanity survives to even a fraction of its potential life span, then, strange as it may seem, we are the ancients: we live at the very beginning of history, in the most distant past.

Creating good lives has positive value. All else being equal, bringing into existence people who will have lives worth living makes the world better. This means that preventing human extinction isn't just about saving existing people, but about enabling vast numbers of worthwhile future lives.

This view has major implications. If we accept that creating happy people is good, it significantly increases the moral importance of:

  • Ensuring humanity's long-term survival
  • Expanding to other planets and star systems
  • Improving quality of life and increasing humanity's capacity

There are challenging philosophical questions. This view raises complex issues in population ethics, such as:

  • How to weigh quantity vs. quality of life
  • Whether there's an asymmetry between creating good and bad lives
  • How to handle uncertainty about future wellbeing

Careful moral reasoning is crucial. Given the enormous stakes involved in shaping humanity's long-term trajectory, it's vital that we think deeply about these ethical issues and try to resolve key uncertainties.

9. The expected value of the future is likely positive

There's no inevitable arc of progress. No deus ex machina will prevent civilisation from stumbling into dystopia or oblivion.

Reasons for pessimism exist. There are legitimate worries about the future, including risks of dystopia, suffering, or extinction. We shouldn't be complacent or assume progress is inevitable.

But there are also strong reasons for optimism:

  • Human welfare has improved dramatically over time
  • We have a growing capacity to solve global problems
  • Most people want to create a positive future
  • The potential for good far outweighs the potential for bad

Expected value reasoning favors optimism. Even if the worst possible futures would be worse than the best possible futures are good, good futures seem far more likely. This suggests the expected value of the future is positive.
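
To make this concrete, consider a toy expected-value calculation with purely illustrative numbers (these figures are assumptions for the sake of the example, not estimates from the book). Suppose a flourishing future is worth +100 on some common scale and a dystopian future is worth -150, and suppose the flourishing outcome is judged four times as likely (80% versus 20%). The expected value is then 0.8 × 100 + 0.2 × (-150) = 80 - 30 = +50: positive, even though the downside is larger in magnitude than the upside.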

Optimism is empowering. Believing we can create a positive future motivates us to work towards that goal. A cautious optimism, grounded in reality but hopeful about our potential, is likely the most constructive attitude.

10. We can take concrete actions to safeguard humanity's potential

You may have more power than you realise.

Individual actions matter. While the challenges we face are immense, there are many concrete things individuals can do to help, including:

  • Donating to effective organizations working on crucial issues
  • Choosing careers that address critical long-term challenges
  • Advocating for beneficial policies and values
  • Spreading awareness of long-term thinking

Key focus areas include:

  • AI safety and governance
  • Biosecurity and pandemic preparedness
  • Clean energy innovation
  • Global cooperation and institution-building
  • Moral philosophy and values spreading

A growing movement exists. There is an emerging community of researchers, philanthropists, and activists dedicated to safeguarding humanity's long-term potential. Getting involved with this community can amplify your impact.

Long-term thinking is crucial. By zooming out and considering the full scope of what's at stake, we can make wiser choices that benefit not just ourselves, but countless future generations. We have a profound responsibility and opportunity to positively shape the long-term future of life in the universe.

Review Summary

3.84 out of 5 (average of 5k+ ratings from Goodreads and Amazon)

What We Owe the Future presents a compelling case for longtermism, emphasizing the importance of considering future generations in our decisions. While some reviewers praise MacAskill's thought-provoking ideas and clear writing, others critique his assumptions and philosophical approach. The book explores topics like moral values, existential risks, and population ethics. Critics argue that MacAskill's perspective is overly optimistic and neglects present-day issues. Despite mixed reactions, many agree the book raises important questions about humanity's long-term future and our moral obligations to it.

About the Author

William MacAskill is an Associate Professor in Philosophy at Lincoln College, Oxford. He authored "Doing Good Better" and co-founded two non-profits: 80,000 Hours and Giving What We Can. These organizations contributed to the effective altruism movement, which focuses on maximizing positive impact through evidence-based approaches. MacAskill's work combines academic philosophy with practical initiatives to address global challenges. His research interests include ethics, global priorities, and the long-term future of humanity. Through his writing and advocacy, MacAskill aims to inspire individuals to consider how they can make the most significant positive difference in the world.
