Expert Political Judgment

How Good Is It? How Can We Know?
by Philip E. Tetlock · 2005 · 344 pages

Key Takeaways

1. Expert Political Judgment Is Often Overconfident and Barely Beats Simple Benchmarks

Even the most astute observers will fail to outperform random prediction generators—the functional equivalent of dart-throwing chimps—in affixing realistic likelihoods to possible futures.

Overconfidence is pervasive. Experts in political and economic forecasting consistently believe they know more about the future than they do. Their subjective probability estimates for outcomes they deem most likely significantly exceed the actual frequency with which those outcomes materialize. For instance, events experts rated as 100% certain occurred only about 80% of the time.
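Calibration, the measure behind this finding, can be illustrated with a small sketch (the numbers below are illustrative, not Tetlock's data): group forecasts by stated probability and compare each group's average stated probability with the frequency at which the predicted events actually occurred.

```python
# Minimal calibration sketch (illustrative data, not Tetlock's).
# Each pair is (stated probability, 1 if the event occurred else 0).
from collections import defaultdict

forecasts = [
    (1.0, 1), (1.0, 1), (1.0, 1), (1.0, 1), (1.0, 0),  # "certain" calls
    (0.5, 1), (0.5, 0), (0.5, 0), (0.5, 1),
]

buckets = defaultdict(list)
for p, outcome in forecasts:
    buckets[p].append(outcome)

for p in sorted(buckets):
    hits = buckets[p]
    freq = sum(hits) / len(hits)
    print(f"stated {p:.0%} -> observed {freq:.0%} over {len(hits)} forecasts")
```

A well-calibrated forecaster's observed frequencies track the stated probabilities; in this toy data, as in Tetlock's, events labeled 100% certain come true only 80% of the time.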

Barely beating chance. When pitted against simple performance benchmarks, experts' forecasting accuracy is humbling. They only marginally outperform random chance (dart-throwing chimps) and simple extrapolation algorithms that mechanically predict the continuation of past trends. They are significantly outperformed by sophisticated statistical models.

Diminishing returns. Beyond a minimal level of general knowledge (like that of an attentive reader of a quality newspaper), increasing expertise or professional status confers little additional advantage in forecasting accuracy. Specialists are often no better than well-informed generalists when predicting outcomes outside their narrow domain, suggesting that predictive skill reaches a plateau quickly.

2. How Experts Think Matters More Than What They Think

What experts think matters far less than how they think.

Content vs. Style. The content of an expert's beliefs – whether they are liberal or conservative, realist or institutionalist, optimist or pessimist – is a poor predictor of their forecasting accuracy. Experts from across the ideological and theoretical spectrums are equally likely to be right or wrong.

Cognitive style is key. However, the way experts think about the world is a significant predictor of their judgmental performance. Differences in cognitive style, particularly along a dimension related to how people handle complexity and conflicting information, strongly correlate with forecasting skill. This suggests that the mental approach to processing information is more crucial than the specific information held or the conclusions drawn.

Beyond background. Traditional markers of expertise, such as educational attainment (Ph.D. vs. Master's), years of experience, professional background (academic, government, journalist), or access to classified information, show little to no correlation with forecasting accuracy. This reinforces the idea that inherent cognitive traits or learned thinking processes are more important than credentials or access.

3. Foxes Consistently Outperform Hedgehogs in Forecasting Accuracy

If we want realistic odds on what will happen next, coupled to a willingness to admit mistakes, we are better off turning to experts who embody the intellectual traits of Isaiah Berlin’s prototypical fox... than we are turning to Berlin’s hedgehogs.

The Hedgehog and the Fox. Drawing on Isaiah Berlin's metaphor, experts can be broadly categorized: Hedgehogs "know one big thing" and try to fit everything into a single, coherent framework. Foxes "know many little things" and draw from an eclectic array of traditions, embracing complexity and contradiction.

Foxes are better forecasters. Across numerous forecasting exercises spanning diverse regions and topics, foxes consistently demonstrate better calibration (their stated probabilities align better with actual frequencies) and discrimination (they assign higher probabilities to events that occur than to those that don't) than hedgehogs. This advantage is particularly pronounced for long-term forecasts within experts' domains.

Why Foxes Win. Foxes' self-critical, dialectical thinking style prevents them from developing excessive confidence in their predictions. They are more sensitive to contradictory forces and the inherent uncertainty of the future, leading them to make more cautious, yet better calibrated, probability estimates. They rarely rule out possibilities entirely, hedging their bets in a way that hedgehogs, seeking definitive closure, often fail to do.

4. Experts, Especially Hedgehogs, Resist Changing Their Minds When Wrong

When the facts change, I change my mind. What do you do, sir?

Not Natural Bayesians. Despite the logical imperative to update beliefs in proportion to the diagnostic value of new evidence (as prescribed by Bayes's theorem), experts are often reluctant to change their minds, particularly when faced with disconfirming information. They tend to cling to their prior views more tenaciously than is warranted by the evidence.

Hedgehogs are worse Bayesians. This resistance to updating is significantly more pronounced among hedgehogs than foxes. When their forecasts are disconfirmed, hedgehogs are less likely to adjust their confidence in their underlying theories or perspectives compared to foxes, who show a greater willingness to revise their views in the direction indicated by the evidence.

Asymmetrical Updating. Experts are more eager to be good Bayesians when events confirm their predictions, readily boosting confidence in their views. However, when events disconfirm their predictions, they are much slower to decrease their confidence, demonstrating an asymmetrical approach to belief updating that favors maintaining existing beliefs.
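The Bayesian benchmark these paragraphs invoke can be made concrete with a hypothetical sketch: given a prior confidence in a theory and the likelihood of the observed evidence if the theory is true versus false, Bayes's theorem fixes exactly how far confidence should move.

```python
# Bayes's theorem applied to belief updating (hypothetical numbers).
def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Posterior probability of the hypothesis after seeing the evidence."""
    numerator = prior * p_evidence_if_true
    denominator = numerator + (1 - prior) * p_evidence_if_false
    return numerator / denominator

# An expert 90% confident in a theory sees evidence that is four times
# more likely if the theory is false (0.8) than if it is true (0.2).
posterior = bayes_update(prior=0.9, p_evidence_if_true=0.2, p_evidence_if_false=0.8)
print(f"{posterior:.2f}")  # confidence should fall to about 0.69
```

A forecaster who leaves their confidence near 0.9 after such evidence is under-updating, which is the pattern Tetlock documents in hedgehogs.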

5. Belief System Defenses and Hindsight Bias Protect Views from Disconfirmation

Bad luck proved a vastly more popular explanation for forecasting failure than good luck proved for forecasting success.

Rationalizing Errors. When predictions fail, experts employ a variety of "belief system defenses" to protect their core views and minimize the perceived magnitude of their mistakes. These defenses include arguing that:

  • The conditions for their theory were not met (exogenous shock).
  • The predicted outcome almost happened (close-call counterfactual).
  • The predicted outcome is merely delayed (off-on-timing).
  • The task was impossible anyway (politics is cloudlike).
  • They made the "right mistake" given the risks.

Selective Defense Activation. These defenses are activated selectively and self-servingly. Experts are far more likely to invoke these arguments when their own predictions fail than when the predictions of their rivals fail, highlighting a bias towards protecting one's own intellectual turf. Hedgehogs, being more invested in their single "big thing," use these defenses more frequently than foxes.

Hindsight Bias. Experts are susceptible to the "I knew it all along" bias, exaggerating the degree to which they foresaw outcomes after they have occurred. This bias is more pronounced among hedgehogs and helps them maintain the illusion of foresight, further reducing the perceived need to update their beliefs or admit error.

6. Judging History Is Theory-Driven, and Hedgehogs Apply Stronger Double Standards

Men use the past to prop up their prejudices.

Theory Shapes Perception. Underlying all interpretations of history are implicit counterfactual assumptions about what would have happened if key events had unfolded differently. These assumptions are heavily influenced by observers' ideological and theoretical preconceptions, leading them to favor counterfactual scenarios that align with their existing worldviews.

Predicting Counterfactual Beliefs. It is surprisingly easy to predict an expert's judgment of a specific historical counterfactual (e.g., could Stalinism have been averted?) based on their broader ideological orientation or theoretical commitments (e.g., views on totalitarianism). This top-down, deductive approach to history is more pronounced among hedgehogs.

Double Standards. Experts, particularly hedgehogs, apply double standards when evaluating new evidence from historical archives. They are more critical of findings that challenge their preferred interpretations of the past and more readily accept findings that reinforce them, even when the research quality is held constant. This selective scrutiny makes their beliefs about historical causality resistant to change.

7. Even Open-mindedness Has Limits and Can Impair Judgment

The impossible sometimes happens and the inevitable sometimes does not.

Scenario Exercises. Techniques designed to increase open-mindedness, such as scenario generation exercises that encourage imagining alternative futures or pasts, have mixed effects on judgmental performance. While they can reduce biases like hindsight (by making alternative outcomes more imaginable), they can also introduce new problems.

Sub-additivity. Encouraging experts to unpack abstract possibilities into detailed scenarios can lead to "sub-additive" probability judgments, where the perceived likelihood of a set of outcomes is less than the sum of its parts. This violates basic probability axioms and indicates confusion, as imagining specific pathways to an outcome inflates its perceived likelihood beyond what is logically warranted.
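The incoherence can be shown with hypothetical numbers: when an outcome is unpacked into mutually exclusive pathways, the probabilities assigned to the parts may sum to more than the probability assigned to the undivided whole, violating the additivity axiom.

```python
# Hypothetical sub-additivity check (illustrative numbers only).
# The unpacked pathway probabilities should not sum past the
# probability assigned to the whole outcome, yet often they do.
p_whole = 0.30  # e.g., "regime change within five years", judged as one event

# The same outcome unpacked into mutually exclusive scenarios:
unpacked = {
    "coup": 0.15,
    "mass uprising": 0.12,
    "leadership succession crisis": 0.10,
    "other pathway": 0.05,
}

p_sum = sum(unpacked.values())
print(f"whole: {p_whole:.2f}, sum of parts: {p_sum:.2f}")
assert p_sum > p_whole  # the incoherence the book documents
```

Since the pathways are exclusive and exhaust the outcome, coherent judgments would make the two numbers equal; the gap measures how much imagining each pathway inflated its perceived likelihood.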

Foxes are More Susceptible. Paradoxically, the more open-minded foxes are more susceptible to this confusion than hedgehogs. Their willingness to entertain multiple possibilities and elaborate on them makes them more prone to inflating the perceived likelihood of numerous scenarios, leading to less coherent probability judgments compared to hedgehogs, who are better at summarily dismissing possibilities that don't fit their framework.

8. Good Judgment Requires Balancing Theory-Driven Closure and Imagination-Driven Openness

The test of a first-rate intelligence is the ability to hold two opposing ideas in the mind at the same time, and still retain the ability to function.

The Balancing Act. Effective judgment requires navigating a tension between theory-driven thinking (seeking closure, parsimony, and consistency) and imagination-driven thinking (exploring possibilities, embracing complexity, and acknowledging contingency). Both approaches have strengths and weaknesses.

Risks of Imbalance. Excessive reliance on theory (hedgehog tendency) leads to overconfidence, resistance to updating, and dogmatic interpretations of history. Excessive reliance on imagination (fox tendency, amplified by scenario exercises) can lead to confusion, sub-additive probability judgments, and difficulty distinguishing plausible possibilities from remote ones.

Metacognitive Skill. Good judgment is less about possessing a specific cognitive style and more about the metacognitive skill of knowing when to apply which style and how to integrate their insights. It involves the capacity for "self-overhearing" – monitoring one's own thought processes, recognizing biases, and striving for a reflective equilibrium that balances the need for coherence with openness to new information.

9. The Marketplace of Political Ideas Rewards Confidence and Ideology Over Accuracy

The same style of reasoning that impairs experts’ performance on scientific indicators of good judgment boosts experts’ attractiveness to the mass market–driven media.

Market Imperfections. The public marketplace for political commentary is not an efficient mechanism for identifying accurate forecasters or sound judgment. Consumers are often rationally ignorant or motivated by solidarity with their ideological tribe rather than a dispassionate search for truth.

Rewarding Hedgehogs. The media and other consumers of expertise often favor the confident, decisive style of hedgehogs, who offer clear, quotable predictions and unwavering defenses of their views. This preference exists despite the evidence that foxes, with their more nuanced and self-critical approach, are more accurate forecasters.

Solidarity over Credence. Public intellectuals often function more as providers of "solidarity goods" – reinforcing the beliefs and identities of their audience – than "credence goods" – providing reliable, evidence-based analysis. This dynamic perpetuates the demand for confident, ideologically aligned voices, regardless of their predictive track record.

10. Improving Judgment Requires Transparent Accountability and Evidence-Based Scoring

If we want realistic odds on what will happen next, coupled to a willingness to admit mistakes, we are better off turning to experts who embody the intellectual traits of Isaiah Berlin’s prototypical fox... than we are turning to Berlin’s hedgehogs.

Need for Accountability. The current system lacks systematic accountability for political predictions. Experts rarely face rigorous, public scoring of their forecasts against objective outcomes, allowing erroneous beliefs and overconfidence to persist unchecked.

Forecasting Tournaments. A potential remedy is to institutionalize forecasting tournaments with transparent, evidence-based scoring rules (like Brier scores, adjusted for difficulty, value, etc.). These tournaments would provide a public record of predictive performance, incentivizing experts to improve their accuracy and fostering a more evidence-driven discourse.
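The Brier score mentioned here is the mean squared difference between stated probabilities and outcomes coded 0 or 1; lower is better. A minimal sketch with illustrative forecasts (the forecaster profiles are invented for the example):

```python
# Brier score: mean squared error of probability forecasts (lower is better).
def brier_score(forecasts):
    """forecasts: list of (stated_probability, outcome) with outcome 0 or 1."""
    return sum((p - o) ** 2 for p, o in forecasts) / len(forecasts)

# Illustrative comparison: a hedged forecaster vs. an overconfident one
# facing the same four events, of which two occur.
hedged = [(0.6, 1), (0.6, 1), (0.4, 0), (0.4, 0)]
confident = [(1.0, 1), (1.0, 1), (1.0, 0), (1.0, 0)]

print(f"{brier_score(hedged):.2f}")     # 0.16
print(f"{brier_score(confident):.2f}")  # 0.50
```

The overconfident forecaster is punished heavily for the two misses, which is why a proper scoring rule of this kind rewards the cautious, well-calibrated style the book associates with foxes.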

Beyond Simple Metrics. While basic accuracy is key, a sophisticated system must also account for nuances like the difficulty of the task, the forecaster's error-avoidance priorities, and legitimate ambiguities in outcomes. However, adjustments must be applied judiciously to avoid becoming mere rationalizations for poor performance.

11. Progress Is Possible, But Resistance to Objectivity Is Fierce

Failing to learn everything is not tantamount to learning nothing.

Objectivity is Attainable. Despite the challenges and the valid critiques from relativist perspectives, it is possible to define and measure aspects of good judgment using objective, trans-ideological standards. The process of developing these measures, while complex, reveals much about the nature of judgment itself.

Resistance to Scrutiny. Implementing systems of transparent accountability faces significant resistance from those who benefit from the current low-accountability environment – experts with inflated reputations and consumers who prefer comforting certainties. This resistance is rooted in both self-interest and psychological aversion to confronting uncertainty and error.

Hope for Change. Nevertheless, progress is possible. The increasing availability of data, advancements in analytical tools, and the growing public demand for evidence-based insights create opportunities to push for greater accountability. By focusing on measurable performance and fostering a culture that values intellectual humility and continuous learning, we can gradually improve the quality of political judgment and public discourse.


Review Summary

3.98 out of 5
Average of 500+ ratings from Goodreads and Amazon.

Expert Political Judgment explores the accuracy of political experts' predictions. Tetlock's research shows that generalist "foxes" outperform specialist "hedgehogs" in forecasting, though both are often surpassed by simple statistical models. The book is praised for its rigorous methodology and thought-provoking insights, but criticized for dense academic writing. Readers appreciate Tetlock's examination of cognitive biases and the limitations of expert knowledge. While some found it challenging, many consider it an essential work for understanding political forecasting and decision-making.


About the Author

Philip E. Tetlock is a renowned social scientist and professor at the University of Pennsylvania. His research focuses on judgment and decision-making, particularly in political and economic contexts. Tetlock is best known for his work on expert political judgment and forecasting, which has spanned several decades. He has authored multiple influential books, including "Superforecasting: The Art and Science of Prediction." Tetlock's research has challenged conventional wisdom about expert predictions and has had significant implications for fields such as intelligence analysis and policy-making. His work has earned him numerous accolades and has been widely cited in academic and popular literature.
