
Calling Bullshit

The Art of Scepticism in a Data-Driven World

Jevin D. West and Carl T. Bergstrom, 2020
Science



Key Takeaways

  1.

    “Calling Bullshit” argues that in a world saturated with data, misinformation often spreads not through outright lies but through misleading statistics, visualizations, and scientific claims. The authors emphasize that the ability to critically evaluate quantitative information is now an essential civic skill. They provide tools to recognize and challenge deceptive or careless uses of data.

  2.

    The book distinguishes between intentional deception and what the authors broadly call “bullshit”—misleading claims made without regard for truth. Bullshitters may not aim to deceive deliberately, but they are indifferent to accuracy. This indifference can be just as harmful as intentional fraud.

  3.

    Statistical and scientific authority are frequently exploited to lend credibility to weak or false claims. The authors show how numbers, graphs, and technical jargon can create an illusion of rigor. Readers are encouraged to look past surface-level sophistication and examine underlying assumptions and methods.

  4.

    Data visualization is a powerful communication tool, but it can easily be manipulated. The book demonstrates how axis scaling, cherry-picked time frames, and misleading graphical elements distort perception. Learning to critically interpret charts is a key defense against misinformation.

  5.

    Correlation is often mistaken for causation in media reporting and public discourse. The authors stress the importance of understanding confounding variables, alternative explanations, and the limits of observational data. Recognizing this distinction prevents overconfident conclusions.

  6.

    Big data and algorithms are not inherently objective or unbiased. The book explores how algorithmic outputs can reflect flawed data, hidden assumptions, and systemic biases. Skepticism should extend to automated systems as much as to human claims.

  7.

    The authors advocate for simple, back-of-the-envelope calculations to test the plausibility of claims. Rough quantitative reasoning can quickly reveal when numbers are unrealistic or exaggerated. This practical approach empowers readers without requiring advanced mathematics.

  8.

    Scientific publishing and media incentives can amplify weak or sensational findings. The book highlights issues like p-hacking, publication bias, and the replication crisis. Understanding these systemic pressures helps readers evaluate research claims more carefully.

  9.

    Effective skepticism involves asking basic but powerful questions: What is the source? What is being measured? Compared to what? Who benefits? These questions shift the focus from accepting conclusions to interrogating evidence and context.

  10.

    Ultimately, the book frames skepticism as a civic responsibility in a democracy. Citizens must be equipped to challenge misleading claims in politics, business, and media. Cultivating quantitative literacy and critical thinking strengthens public discourse and decision-making.


Concepts

Bullshit vs. Lies

The distinction between deliberate deception and statements made with indifference to truth. Bullshit does not necessarily aim to deceive but disregards accuracy and evidence.

Example

A company promoting a vague statistic without verifying its source. A public figure repeating an impressive-sounding number without checking its validity.

Statistical Significance

A measure used in hypothesis testing to assess how surprising the observed results would be if chance alone were at work. Misunderstanding or misusing it can make weak findings seem definitive.

Example

Treating a p-value just below 0.05 as proof of a strong effect. Ignoring the practical importance of a statistically significant but tiny effect.
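A quick sketch of the second pitfall: with a large enough sample, even a practically negligible effect becomes "statistically significant." The z-test below is a standard textbook formula, not the book's own code, and all numbers are invented for illustration.

```python
import math

def z_test_p_value(effect, sd, n):
    """Two-sided p-value for a one-sample z-test of 'mean differs from 0'.

    Uses the standard normal CDF via math.erf; adequate for illustration.
    """
    z = effect / (sd / math.sqrt(n))
    return 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))

# A tiny effect (0.01 standard deviations) is invisible at n = 100 ...
print(z_test_p_value(effect=0.01, sd=1.0, n=100))
# ... but with a million observations the same tiny effect is
# overwhelmingly "significant" despite being practically meaningless.
print(z_test_p_value(effect=0.01, sd=1.0, n=1_000_000))
```

Significance here says only that the effect is unlikely to be exactly zero, not that it matters.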

Correlation vs. Causation

The principle that a relationship between two variables does not necessarily mean one causes the other. Confounding factors may explain the observed association.

Example

Assuming ice cream sales cause crime because both rise in summer. Claiming a supplement improves health based solely on observational data.

Data Visualization Manipulation

Techniques that distort graphical representations to exaggerate or hide patterns. Visual design choices can strongly influence interpretation.

Example

Truncating a y-axis to exaggerate differences. Using inconsistent scales to compare trends.
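The truncation trick reduces to simple arithmetic: the visual ratio between two bars depends on where the axis starts. The values below are made up to illustrate the effect.

```python
# Two nearly identical values, e.g. this year's vs. last year's sales.
low, high = 100.0, 102.0

# Bar heights measured from a zero baseline: a 2% difference.
honest_ratio = high / low

# Bar heights when the y-axis starts at 99: the second bar now
# appears three times taller than the first.
truncated_ratio = (high - 99) / (low - 99)

print(f"honest: {honest_ratio:.2f}x, truncated axis: {truncated_ratio:.1f}x")
```

The data are unchanged; only the baseline moved, yet the perceived difference grew from 2% to 200%.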

Back-of-the-Envelope Calculation

A quick, approximate calculation used to test whether a claim is plausible. This method helps identify implausible magnitudes or inconsistencies.

Example

Estimating whether a proposed policy budget matches the scale of its goals. Checking if reported averages align with known population sizes.
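A plausibility check of the first kind might look like this. The claim, the budget, and the hourly cost are all hypothetical figures chosen for illustration, not from the book.

```python
# Hypothetical claim: a $10 million program will provide
# one-on-one tutoring to 5 million students.
budget_dollars = 10_000_000
students = 5_000_000

dollars_per_student = budget_dollars / students  # $2 per student

# Assumed ballpark cost of one hour of tutoring.
assumed_hourly_cost = 20
hours_affordable = dollars_per_student / assumed_hourly_cost

print(f"${dollars_per_student:.2f} per student "
      f"buys about {hours_affordable * 60:.0f} minutes of tutoring each")
```

Six minutes of tutoring per student is not "one-on-one tutoring" at any meaningful scale, so the claim fails the back-of-the-envelope test without any advanced mathematics.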

Cherry-Picking

Selecting only the data that supports a desired conclusion while ignoring contradictory evidence. This creates a biased representation of reality.

Example

Highlighting a single successful quarter while ignoring annual losses. Citing only studies that support a product’s effectiveness.

Big Data Skepticism

The recognition that large datasets and complex algorithms can still produce flawed or biased results. Scale does not guarantee accuracy or objectivity.

Example

An AI hiring tool reflecting historical discrimination in its training data. Predictive policing systems amplifying biased crime reports.

Publication Bias

The tendency for journals to publish positive or novel findings more often than null results. This skews the scientific literature.

Example

Drug trials with negative outcomes remaining unpublished. Media coverage focusing only on groundbreaking discoveries.

P-Hacking

Manipulating data analysis or selectively reporting results to achieve statistically significant findings. This undermines scientific reliability.

Example

Testing multiple variables and reporting only significant ones. Stopping data collection once a desired p-value is reached.
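Why testing many variables nearly guarantees a "finding" follows from basic probability: if each test on pure noise has a 5% false-positive rate, the chance that at least one of k independent tests comes up significant grows quickly with k. A minimal sketch:

```python
# Probability of at least one false positive when running k independent
# tests on pure noise at the conventional alpha = 0.05 threshold.
alpha = 0.05
for k in (1, 5, 20):
    p_any = 1 - (1 - alpha) ** k
    print(f"{k:>2} tests: {p_any:.0%} chance of a 'significant' result")
```

With twenty tests the odds of a spurious "discovery" are roughly two in three, which is why reporting only the significant variable, while hiding the other nineteen, is so misleading.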

Authority Bias in Data

The tendency to trust claims more readily when they appear scientific or are backed by experts. Quantitative presentation can create unwarranted credibility.

Example

Accepting a complex-looking chart without questioning its source. Trusting a statistic because it was presented at a conference.

Base Rate Neglect

Ignoring the underlying prevalence of a condition or event when evaluating probabilities. This leads to misinterpretation of risk and evidence.

Example

Overestimating the accuracy of a medical test without considering disease rarity. Misjudging the likelihood of fraud based on a single red flag.
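The medical-test example is a standard application of Bayes' rule. The test characteristics below are hypothetical round numbers, not figures from the book:

```python
# Hypothetical screening test for a rare disease.
sensitivity = 0.99   # P(positive | disease)
specificity = 0.95   # P(negative | no disease)
prevalence = 0.001   # 1 in 1000 people has the disease

true_positives = sensitivity * prevalence
false_positives = (1 - specificity) * (1 - prevalence)

# Bayes' rule: probability of actually having the disease
# given a positive test result.
p_disease_given_positive = true_positives / (true_positives + false_positives)
print(f"P(disease | positive) = {p_disease_given_positive:.1%}")
```

Despite the test being "99% accurate" in the colloquial sense, a positive result implies only about a 2% chance of disease, because false positives among the healthy majority vastly outnumber true positives among the rare sick.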

Civic Quantitative Literacy

The ability to interpret and critically evaluate numerical claims in public life. It is framed as a necessary skill for informed democratic participation.

Example

Assessing competing budget proposals during an election. Evaluating public health statistics reported in the news.