Key Takeaways
1. Everything is measurable, even intangibles
If you can observe it in any way at all, it lends itself to some type of measurement method.
Measurement is uncertainty reduction. Contrary to popular belief, measurement doesn't require perfect precision or certainty; it simply means reducing uncertainty about a quantity of interest. This applies both to tangible things, such as physical objects, and to intangibles, such as customer satisfaction or project risk.
Observable consequences. Any intangible that matters must have observable consequences. For example, if you claim employee morale affects productivity, then changes in morale must produce detectable differences in productivity. By identifying these observable effects, we can measure the intangible indirectly.
Practical methods exist. Many seemingly immeasurable things have already been measured by someone, often using surprisingly simple methods. Examples include:
- Estimating fish populations in lakes without draining them
- Measuring the economic impact of brand damage
- Quantifying the value of a human life for policy decisions
2. Measurement reduces uncertainty for better decisions
Measurement matters because it must have some conceivable effect on decisions and behavior.
Decision-driven measurement. The purpose of measurement is to inform decisions. Before measuring, clearly define the decision at stake and how additional information would affect it. This helps prioritize what to measure and how precisely.
Uncertainty and risk. Decisions involve uncertainty, which creates risk. Measurement reduces uncertainty, thereby mitigating risk. Key concepts:
- Uncertainty: Lack of complete certainty; existence of more than one possibility
- Risk: A state of uncertainty where some possibilities involve loss or undesirable outcomes
Value of information. Not all measurements are equally valuable. Calculate the Expected Value of Perfect Information (EVPI) to determine how much a measurement could be worth (a worked sketch follows the list):
- Identify the decision and possible outcomes
- Estimate probabilities and consequences of each outcome
- Calculate the expected value with and without perfect information
- The difference is the EVPI, the maximum you should spend on measurement
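A minimal sketch of this calculation in Python for a go/no-go decision; the probabilities and payoffs are illustrative assumptions, not figures from the book:

```python
# Illustrative EVPI sketch: a go/no-go decision under uncertainty.
# All numbers are assumptions for the sake of the example.
p_success = 0.6            # estimated probability the project succeeds
payoff_success = 500_000   # net gain if we proceed and it succeeds
payoff_failure = -200_000  # net loss if we proceed and it fails

# Expected value of the best action with current (imperfect) information.
ev_proceed = p_success * payoff_success + (1 - p_success) * payoff_failure
ev_skip = 0.0
ev_without_info = max(ev_proceed, ev_skip)

# With perfect information we would proceed only when success is certain.
ev_with_perfect_info = p_success * payoff_success

evpi = ev_with_perfect_info - ev_without_info
print(f"EV without info: {ev_without_info:,.0f}")
print(f"EV with perfect info: {ev_with_perfect_info:,.0f}")
print(f"EVPI (max worth spending on measurement): {evpi:,.0f}")
```

Here the EVPI works out to the chance of being wrong times the cost of being wrong (0.4 × 200,000 = 80,000), which is the intuition behind the formula.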
3. Calibrate your estimates to improve accuracy
Success is a function of persistence and doggedness and the willingness to work hard for twenty-two minutes to make sense of something that most people would give up on after thirty seconds.
Overconfidence is common. Most people are overconfident in their estimates, providing ranges that are too narrow. This leads to poor decision-making based on unrealistic expectations.
Calibration training. Through practice and feedback, people can learn to provide more accurate probability estimates. Techniques include (a scoring sketch follows the list):
- Equivalent bet test: Compare your estimate to a bet with known odds
- Consider the opposite: Actively look for reasons why you might be wrong
- Use reference classes: Compare to similar, known quantities
- Practice with feedback: Take calibration tests and review results
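As a concrete illustration of practice with feedback, here is a minimal sketch that scores a batch of 90% confidence-interval answers against the true values; the quiz entries are placeholders, not real calibration data:

```python
# Score a batch of 90% confidence-interval answers from a calibration quiz.
# Each entry is (lower bound, upper bound, true value); numbers are placeholders.
answers = [
    (1800, 1900, 1869),
    (300, 1000, 5500),
    (10, 40, 29),
    (100, 400, 212),
    (5, 25, 88),
]

hits = sum(lo <= truth <= hi for lo, hi, truth in answers)
print(f"Hit rate: {hits / len(answers):.0%} "
      "(well-calibrated 90% intervals should contain the truth about 90% of the time)")
```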
Benefits of calibration. Well-calibrated estimators:
- Provide more reliable inputs for decision models
- Are more open to new information and changing their minds
- Make better predictions across various domains
4. Use the Rule of Five for quick population insights
There is a 93.75% chance that the median of a population is between the smallest and largest values in any random sample of five from that population.
Simple but powerful. The Rule of Five allows for quick estimates of population characteristics with minimal data. It works for any type of population, from jelly bean weights to customer satisfaction scores.
How to apply it (a quick simulation check follows the list):
- Take a random sample of five items from the population
- Note the smallest and largest values in the sample
- You can be 93.75% confident that the population median falls between these two values
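The 93.75% figure comes from the fact that a sample of five misses the median only when all five values land on the same side of it, which happens with probability 2 × 0.5⁵ = 1/16. The simulation below checks this empirically; the lognormal population is an arbitrary choice, since the rule holds for any distribution:

```python
import random
import statistics

random.seed(0)

# An arbitrary skewed population; the Rule of Five is distribution-free.
population = [random.lognormvariate(0, 1) for _ in range(100_000)]
true_median = statistics.median(population)

trials = 20_000
hits = 0
for _ in range(trials):
    sample = random.sample(population, 5)
    if min(sample) <= true_median <= max(sample):
        hits += 1

print(f"Empirical coverage: {hits / trials:.4f}  (theory: 1 - 2 * 0.5**5 = 0.9375)")
```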
Limitations and extensions:
- Provides information about the median, not the mean
- For more precise estimates, increase sample size (see mathless table in book)
- Combine with other methods for more comprehensive analysis
5. Decompose complex problems into measurable components
If you don't know what to measure, measure anyway. You'll learn what to measure.
Break it down. When faced with a seemingly immeasurable problem, decompose it into smaller, more manageable components. This often reveals aspects that are easier to measure or estimate.
Fermi problems. Named after physicist Enrico Fermi, this approach involves making rough estimates of hard-to-measure quantities by breaking them into more easily estimated factors. Example (a numeric sketch follows the steps):
Estimating the number of piano tuners in Chicago:
- Estimate population of Chicago
- Estimate percentage of households with pianos
- Estimate how often pianos need tuning
- Estimate how many pianos a tuner can service per day
- Combine these estimates to reach a final answer
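A minimal sketch of the classic piano-tuner estimate; every input below is an illustrative assumption, and the point is the decomposition rather than the particular numbers:

```python
# Fermi estimate of piano tuners in Chicago. All inputs are rough assumptions.
population = 2_700_000          # people in Chicago
people_per_household = 2.5
households_with_piano = 0.05    # roughly 1 in 20 households owns a piano
tunings_per_piano_per_year = 1
tunings_per_tuner_per_day = 4
working_days_per_year = 250

pianos = population / people_per_household * households_with_piano
tunings_needed = pianos * tunings_per_piano_per_year
tunings_per_tuner = tunings_per_tuner_per_day * working_days_per_year

print(f"Estimated piano tuners: {tunings_needed / tunings_per_tuner:.0f}")
```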
Benefits of decomposition:
- Reduces overall estimation error
- Reveals which components contribute most to uncertainty
- Identifies specific areas where additional data would be most valuable
6. Apply Bayesian thinking to update beliefs with new data
When you can measure what you are speaking about, and express it in numbers, you know something about it; but when you cannot express it in numbers, your knowledge is of a meager and unsatisfactory kind.
Prior and posterior probabilities. Bayesian analysis provides a framework for updating beliefs based on new evidence (a numeric example follows the list):
- Start with a prior probability (initial belief)
- Collect new data
- Calculate the likelihood of the data given different hypotheses
- Update the prior to a posterior probability using Bayes' theorem
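A minimal numeric sketch of one Bayes update, using the standard diagnostic-test setup; the prevalence, sensitivity, and false-positive rate are illustrative assumptions:

```python
# One Bayesian update: P(disease | positive test). Numbers are illustrative.
prior = 0.01            # prevalence: 1% of people have the condition
sensitivity = 0.95      # P(positive | disease)
false_positive = 0.05   # P(positive | no disease)

# Total probability of observing a positive result.
p_positive = sensitivity * prior + false_positive * (1 - prior)

# Bayes' theorem: posterior = likelihood * prior / evidence.
posterior = sensitivity * prior / p_positive
print(f"P(disease | positive test) = {posterior:.1%}")   # ~16%, not 95%
```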
Advantages of Bayesian approach:
- Incorporates existing knowledge
- Allows for incremental updating as new information arrives
- Provides a natural way to express uncertainty
Practical applications:
- Medical diagnosis: Updating disease probabilities based on test results
- Quality control: Refining estimates of defect rates with inspection data
- Project management: Adjusting timelines and budgets as work progresses
7. Small samples can yield valuable information
If you know almost nothing, almost anything will tell you something.
Value of initial data. When starting from a state of high uncertainty, even small samples can provide significant insight. The first few observations often yield the most information per data point.
Diminishing returns. As sample size increases, the marginal value of each additional observation typically decreases. This principle helps guide efficient data collection:
- Start with small samples to get quick insights
- Increase sample size incrementally based on information value
- Stop when the cost of additional data outweighs its benefits
Methods for small samples (a t-interval sketch follows the list):
- Student's t-distribution: For estimating population parameters with samples as small as 2
- Nonparametric methods: Techniques that don't assume a specific population distribution
- Bayesian updating: Incorporating prior knowledge to make the most of limited data
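As a sketch of the first method, here is a 90% confidence interval for a mean computed from just five observations using Student's t-distribution; the sample values are made up:

```python
import math
import statistics

# Five made-up observations (e.g. minutes spent on a task).
sample = [12.0, 15.5, 9.8, 14.2, 11.1]

n = len(sample)
mean = statistics.mean(sample)
sem = statistics.stdev(sample) / math.sqrt(n)  # standard error of the mean

# Two-sided 90% critical value for Student's t with n - 1 = 4 degrees of freedom
# (from a t-table; scipy.stats.t.ppf(0.95, 4) gives the same value).
t_crit = 2.132

print(f"mean = {mean:.2f}, "
      f"90% CI = ({mean - t_crit * sem:.2f}, {mean + t_crit * sem:.2f})")
```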
8. Quantify the value of additional information
The epiphany equation: How the value of information changes everything.
Expected Value of Information (EVI). Calculate how much a piece of information is worth before collecting it (a brief sketch follows the list):
- Model the decision and possible outcomes
- Estimate current probabilities and payoffs
- Calculate expected value with current information
- Calculate expected value with perfect information
- The difference is the Expected Value of Perfect Information (EVPI)
- Estimate how much uncertainty a measurement would reduce
- Multiply EVPI by the fraction of uncertainty reduced to get EVI
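Building on the EVPI sketch in takeaway 2, the EVI of an imperfect measurement can be approximated by scaling EVPI by the fraction of uncertainty the measurement is expected to remove; this linear scaling is a simplification and the numbers are illustrative:

```python
# Rough EVI sketch: scale EVPI by the expected uncertainty reduction.
# Numbers are illustrative; a full analysis would model the measurement directly.
evpi = 80_000                  # from the earlier go/no-go example
uncertainty_reduction = 0.4    # the proposed study removes ~40% of uncertainty
measurement_cost = 15_000

evi = evpi * uncertainty_reduction
print(f"Expected value of this measurement: {evi:,.0f}")
print(f"Net value after cost: {evi - measurement_cost:,.0f}")   # measure if positive
```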
Measurement inversion. Often, the most valuable measurements are those rarely considered, while routinely measured items provide little decision value. Reasons include:
- Familiarity bias: Measuring what's easy or traditional
- Overconfidence in well-known areas
- Neglecting high-impact, high-uncertainty factors
Iterative approach. Start with rough estimates and refine based on information value:
- Identify key uncertainties in the decision
- Estimate information value for each uncertainty
- Measure the highest-value item
- Update the model and repeat
9. Design experiments to isolate causal relationships
Emily demonstrated that useful observations are not necessarily complex, expensive, or even, as is sometimes claimed, beyond the comprehension of upper management, even for ephemeral concepts like touch therapy.
Control for confounding factors. To determine if A causes B, design experiments that isolate the effect of A while controlling for other variables. Techniques include:
- Randomized controlled trials: Randomly assign subjects to treatment and control groups
- Natural experiments: Exploit naturally occurring variations in the variable of interest
- Difference-in-differences: Compare changes over time between affected and unaffected groups
Statistical significance vs. practical importance. While statistical tests help rule out chance findings, focus on effect sizes and confidence intervals for decision-making. Consider:
- Magnitude of the effect
- Precision of the estimate
- Practical implications for the decision at hand
Learn from simple experiments. Even basic tests can yield valuable insights (a small A/B sketch follows the list):
- A/B testing in marketing
- Pilot programs before full-scale implementation
- Observational studies when experiments aren't feasible
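A minimal A/B-test sketch that reports the effect size and an approximate 95% confidence interval for the difference in conversion rates, rather than only a p-value; the counts are invented:

```python
import math

# Invented results from a simple A/B test.
conversions_a, visitors_a = 120, 2_400   # control
conversions_b, visitors_b = 156, 2_450   # treatment

p_a = conversions_a / visitors_a
p_b = conversions_b / visitors_b
diff = p_b - p_a

# Normal-approximation standard error of the difference in proportions.
se = math.sqrt(p_a * (1 - p_a) / visitors_a + p_b * (1 - p_b) / visitors_b)
ci_low, ci_high = diff - 1.96 * se, diff + 1.96 * se

print(f"Lift: {diff:.2%} (95% CI: {ci_low:.2%} to {ci_high:.2%})")
```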
10. Measure preferences and risk tolerance for better choices
If managers can't identify a decision that could be affected by a proposed measurement and how it could change those decisions, then the measurement simply has no value.
Revealed vs. stated preferences. People's actions often differ from their stated preferences. To measure true preferences:
- Observe actual choices (revealed preferences)
- Use carefully designed surveys (stated preferences)
- Combine multiple methods for a more complete picture
Quantifying risk tolerance (a certainty-equivalent sketch follows the list):
- Present hypothetical scenarios with varying risk-reward trade-offs
- Identify the point where the decision-maker is indifferent between options
- Plot these points to create a risk tolerance curve
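One common way to turn such indifference points into a usable model is an exponential utility function with a single risk-tolerance parameter R; this parameterization is a standard modeling choice rather than something prescribed by the book, and the gamble below is illustrative:

```python
import math

def certainty_equivalent(outcomes, probs, risk_tolerance):
    """Certainty equivalent of a gamble under exponential utility u(x) = 1 - exp(-x/R)."""
    expected_utility = sum(
        p * (1 - math.exp(-x / risk_tolerance)) for x, p in zip(outcomes, probs)
    )
    return -risk_tolerance * math.log(1 - expected_utility)

# Illustrative gamble: 50% chance of gaining 100k, 50% chance of losing 20k.
gamble = ([100_000, -20_000], [0.5, 0.5])

for r in (50_000, 200_000, 1_000_000):   # smaller R means more risk-averse
    ce = certainty_equivalent(*gamble, risk_tolerance=r)
    print(f"Risk tolerance {r:>9,}: certainty equivalent = {ce:>10,.0f}")
```

The certainty equivalent rises toward the gamble's expected value (40,000) as risk tolerance grows, which is how a fitted curve translates stated indifference points into consistent choices.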
Applications:
- Investment decisions: Balancing potential returns with acceptable risk levels
- Product development: Prioritizing features based on customer preferences
- Public policy: Evaluating trade-offs in health, safety, and environmental regulations
Ethical considerations. While some object to quantifying certain values (e.g., human life), failing to do so often leads to worse outcomes. Thoughtful measurement allows for more informed and consistent decision-making in sensitive areas.
Review Summary
How to Measure Anything receives mixed reviews. Many praise its insights on quantifying intangibles and reducing uncertainty in decision-making. Readers appreciate the practical tools, statistical concepts, and historical examples provided. However, some find the book dense, repetitive, and overly focused on mathematics. Critics argue it oversimplifies complex issues and mocks managers excessively. While some consider it a must-read for business decision-makers, others find it tedious and lacking in depth. Overall, the book is valued for its unique perspective on measurement but criticized for its writing style and presentation.