Science Fictions

How Fraud, Bias, Negligence, and Hype Undermine the Search for Truth
by Stuart Ritchie · 2020 · 353 pages

Key Takeaways

1. Science is a social construct, relying on peer review and publication

Science is a social construct.

Peer review is essential. The scientific process involves not just conducting experiments, but also convincing other scientists of the validity of results. This social aspect of science is embodied in the peer review system, where experts evaluate research before publication. The process aims to ensure quality and reliability in scientific literature.

Publication is key. Scientists communicate their findings through journals, conferences, and other platforms. This sharing of knowledge allows for collective scrutiny, questioning, and refinement of ideas. The publication system, while imperfect, serves as the primary means of disseminating scientific discoveries and building consensus within the scientific community.

2. The replication crisis reveals widespread unreliability in scientific findings

Upon closer scrutiny, the 'results' of the Stanford Prison Experiment, such as they are, are scientifically meaningless.

Reproducibility problems abound. The replication crisis has exposed significant issues across various scientific fields. Large-scale replication attempts have found that many published studies, particularly in psychology and biomedicine, fail to produce the same results when repeated.

Confidence eroded. This crisis has shaken confidence in scientific findings and highlighted the need for more rigorous research practices. Examples of unreplicable results include:

  • Power posing effects
  • Priming studies in social psychology
  • Many preclinical cancer studies
  • Numerous medical treatments later found ineffective

3. Scientific fraud undermines trust and wastes resources

Fraud shows just how badly that trust can be exploited.

High-profile cases shock. Instances of scientific fraud, such as the Macchiarini scandal in regenerative medicine and Wakefield's fabricated vaccine-autism link, have had devastating consequences. These cases not only waste resources but can also lead to harm or death when fraudulent medical treatments are implemented.

Systemic problems exposed. Fraud reveals weaknesses in the scientific system:

  • Inadequate peer review
  • Institutional reluctance to investigate misconduct
  • Pressure to produce groundbreaking results
  • Lack of data transparency

The prevalence of fraud, while difficult to quantify precisely, is likely higher than commonly believed, with surveys suggesting that a significant minority of scientists admit to questionable research practices.

4. Publication bias skews the scientific literature towards positive results

Salami-slicing doesn't in itself mean that the science contained in each of the slices is necessarily of poor quality (though the fact the researchers are willing to take advantage of the publication system so blatantly doesn't exactly speak to their trustworthiness).

File drawer problem. Researchers tend to publish studies with positive results while leaving negative or null findings unpublished. This creates a skewed representation of scientific knowledge, where the literature appears to support certain hypotheses more strongly than the total body of research actually does.

Meta-analysis reveals bias. Funnel plots in meta-analyses often show a telltale asymmetry: small studies with null or negative results are missing from the literature, a signature of publication bias. This distortion can lead to overestimation of effect sizes and false confidence in scientific claims. The bias affects various fields, including:

  • Psychology
  • Medicine
  • Ecology
  • Economics
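The mechanics of this distortion are easy to see in a small simulation (an illustrative sketch, not an example from the book; the effect size and sample sizes are assumptions chosen for demonstration): if only statistically significant results get published, the published studies systematically overestimate the true effect, and the smaller the studies, the worse the inflation.

```python
import random
import statistics
from math import sqrt

random.seed(1)
TRUE_EFFECT = 0.2  # hypothetical standardized effect, chosen for illustration

def one_study(n):
    """Estimate the effect from n observations drawn from N(TRUE_EFFECT, 1);
    call it 'significant' if the z-statistic exceeds 1.96 in absolute value."""
    est = statistics.mean(random.gauss(TRUE_EFFECT, 1.0) for _ in range(n))
    return est, abs(est * sqrt(n)) > 1.96

pub_means = {}
for n in (20, 200):
    studies = [one_study(n) for _ in range(5000)]
    all_mean = statistics.mean(est for est, _ in studies)
    # The 'published' literature keeps only the significant studies
    pub_means[n] = statistics.mean(est for est, sig in studies if sig)
    print(f"n={n:>3}: mean effect over all studies {all_mean:.2f}, "
          f"over 'published' studies {pub_means[n]:.2f}")
```

With small samples (n = 20), the published-only average lands far above the true effect of 0.2, because only studies that got lucky with a large estimate cleared the significance bar; with larger samples the bias mostly disappears.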

5. P-hacking and data manipulation lead to unreliable conclusions

Numbers are noisy.

Statistical fishing expeditions. P-hacking involves manipulating data or analyses to achieve statistically significant results (p < 0.05). This can include:

  • Selectively reporting outcomes
  • Excluding "inconvenient" data points
  • Continuing to collect data until significance is reached
  • Trying multiple statistical tests and only reporting the "successful" ones

False positives proliferate. These practices increase the likelihood of false-positive results, leading to an inflated number of seemingly significant findings in the literature. The focus on statistical significance often overshadows more important considerations, such as effect size and practical significance.
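The arithmetic behind this inflation can be sketched in a few lines (an illustration, not from the book): under a true null hypothesis, p-values are uniformly distributed, so reporting only the best of several tested outcomes pushes the false-positive rate well above the nominal 5%.

```python
import random

random.seed(42)

def smallest_p(n_outcomes):
    """Under a true null, each test's p-value is uniform on [0, 1].
    A p-hacker reports only the smallest p across all measured outcomes."""
    return min(random.random() for _ in range(n_outcomes))

n_sims = 100_000
rates = {}
for k in (1, 5, 20):
    rates[k] = sum(smallest_p(k) < 0.05 for _ in range(n_sims)) / n_sims
    print(f"{k:>2} outcomes tested -> chance of a 'significant' result: {rates[k]:.3f}")
```

The simulated rates approach 1 − 0.95^k: test one outcome and you get the advertised 5% false-positive rate, but test twenty and a "significant" finding appears roughly two-thirds of the time even though nothing real is there.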

6. Negligence in research practices compromises scientific integrity

Far too many scientific studies are far too small.

Underpowered studies prevail. Many studies, particularly in fields like neuroscience and psychology, have insufficient sample sizes to reliably detect the effects they claim to find. This leads to:

  • Inflated effect sizes
  • Poor replicability
  • Wasted resources on inconclusive research
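How quickly statistical power falls with sample size can be checked with a back-of-envelope calculation (a sketch using a one-sample z-test approximation; the effect size d = 0.3 and the sample sizes are assumptions for illustration):

```python
from math import erf, sqrt

def phi(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1 + erf(x / sqrt(2)))

def power(effect, n, z_crit=1.96):
    """Approximate power of a two-sided one-sample z-test for a
    standardized effect size `effect` with n observations."""
    shift = effect * sqrt(n)
    return phi(shift - z_crit) + phi(-shift - z_crit)

for n in (20, 80, 320):
    print(f"n={n:>3}: power to detect a modest effect (d = 0.3) is {power(0.3, n):.2f}")
```

With only twenty participants, the chance of detecting a modest true effect is roughly one in four; a study that nonetheless reports significance has likely capitalized on chance, which is why significant results from small studies so often come with exaggerated effect sizes.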

Basic errors abound. Negligence in research practices includes:

  • Computational errors
  • Mislabeled data
  • Contaminated samples
  • Failure to randomize or blind studies properly

These issues, while often unintentional, can significantly impact the reliability of scientific findings and waste valuable resources.

7. Hype and exaggeration distort scientific communication

Scientists want their research to sound as though it's of this Eureka-shouting kind, so they analyse it, write it up, and publicise it accordingly.

Press releases oversell. Many university press offices and researchers exaggerate the implications of their findings, leading to sensationalized media coverage. This hype can:

  • Misrepresent the certainty of results
  • Overstate practical applications
  • Ignore important limitations

Popular science books simplify. Authors of popular science books often present complex findings as simple, definitive truths, neglecting the nuances and uncertainties inherent in scientific research. This can create unrealistic expectations and misunderstandings among the public.

8. Perverse incentives in academia promote quantity over quality

Salami-slicing doesn't in itself mean that the science contained in each of the slices is necessarily of poor quality (though the fact the researchers are willing to take advantage of the publication system so blatantly doesn't exactly speak to their trustworthiness).

Publish or perish culture. Academic success is often measured by publication count and journal prestige, incentivizing researchers to:

  • Prioritize quantity over quality
  • Seek novel, positive results at the expense of rigorous methods
  • Engage in questionable research practices to increase publishability

Funding pressures distort. The need to secure grants can lead researchers to:

  • Overhype preliminary findings
  • Avoid risky but potentially groundbreaking research
  • Focus on trendy topics rather than important but less glamorous work

These incentives contribute to many of the problems in scientific research, from p-hacking to publication bias.

9. Open Science and pre-registration can improve research transparency

Pre-registration has been mandatory for US government-funded clinical trials since 2000, and a precondition for publication in most medical journals since 2005.

Pre-registration benefits. By publicly declaring research plans before data collection, pre-registration:

  • Reduces p-hacking and HARKing (Hypothesizing After Results are Known)
  • Distinguishes confirmatory from exploratory analyses
  • Helps combat publication bias by creating a record of all studies

Open Science practices. Increasing transparency through open data, open methods, and open access publishing can:

  • Facilitate replication attempts
  • Allow for more thorough peer review
  • Increase public trust in scientific findings

These practices represent a cultural shift towards greater accountability and reliability in scientific research.

10. Reforming scientific culture is crucial for restoring reliability

Fix the science, I'd suggest, and the trust will follow.

Systemic changes needed. Addressing the problems in scientific research requires reforms at multiple levels:

  • Revising incentive structures in academia
  • Improving statistical education for researchers
  • Encouraging replication studies and null results
  • Implementing more rigorous peer review processes

Cultural shift essential. Beyond policy changes, a fundamental shift in scientific culture is necessary. This includes:

  • Valuing methodological rigor over novelty
  • Embracing uncertainty and limitations in research
  • Promoting collaboration over competition
  • Fostering a more critical and skeptical approach to scientific claims

By addressing these systemic issues and cultural norms, science can work towards greater reliability and public trust.

Review Summary

4.38 out of 5
Average of 2k+ ratings from Goodreads and Amazon.

Science Fictions is praised for its insightful examination of fraud, bias, and hype in scientific research. Ritchie offers a comprehensive overview of the replication crisis and proposes solutions to improve scientific practices. While some readers find it eye-opening and essential, others criticize its narrow focus on recent examples and potential oversimplification. The book is lauded for its clear writing and engaging examples, but some argue it may erode trust in science. Overall, it's recommended for both scientists and laypeople interested in understanding the current state of scientific research.

About the Author

Stuart James Ritchie is a Scottish psychologist and science communicator specializing in human intelligence research. He currently serves as a lecturer at King's College London's Institute of Psychiatry, Psychology and Neuroscience. Ritchie gained prominence for his work on the replication crisis in psychology, particularly his involvement in a failed replication attempt of a parapsychology study in 2012. This experience led him to become a vocal advocate for improving scientific practices and transparency. His book, Science Fictions, explores the issues plaguing modern scientific research and proposes potential solutions, establishing him as a leading voice in the open science movement.
