
The Lean Startup

How Today's Entrepreneurs Use Continuous Innovation to Create Radically Successful Businesses

Eric Ries, 2011

Business · Startups


Key Takeaways

  1. A startup is not a smaller version of a large company — it is a human institution designed to create a new product or service under conditions of extreme uncertainty. The key challenge is not execution of a known plan but discovery of a viable business model through rapid experimentation.

  2. The core engine of the Lean Startup is the Build-Measure-Learn feedback loop. Build a minimum viable product, measure how customers respond, learn whether to pivot or persevere, and repeat. The goal is to minimize the total time through this loop, not to maximize the quality of any single iteration.

  3. A Minimum Viable Product (MVP) is the version of a new product that allows a team to collect the maximum amount of validated learning with the least effort. It's not about building less — it's about learning faster. Your first product should embarrass you; if it doesn't, you launched too late.

  4. Validated learning is the process of demonstrating empirically that a team has discovered valuable truths about a startup's present and future business prospects. It's more concrete, accurate, and faster than market forecasting or classical business planning.

  5. Every startup must answer two fundamental questions: the value hypothesis (does the product deliver value to customers?) and the growth hypothesis (how will new customers discover the product?). Both should be tested as early as possible with real customers, not assumed.

  6. Innovation accounting replaces vanity metrics (total users, page views) with actionable metrics that demonstrate real progress. The three steps: establish a baseline with an MVP, tune the engine toward the ideal, and make a pivot-or-persevere decision based on whether the metrics are improving fast enough.

  7. A pivot is a structured course correction designed to test a new fundamental hypothesis about the product, strategy, or engine of growth. It is not a failure — it is a recognition that your initial assumptions were wrong and an organized response. Companies that cannot pivot are stuck; companies that pivot too frequently never gain traction.

  8. The 'just do it' mentality and the 'analysis paralysis' mentality are both wrong. The Lean Startup offers a third way: systematic experimentation. Instead of debating whether an idea will work, design an experiment to find out. Replace faith and intuition with evidence and iteration.

  9. Continuous deployment and split testing accelerate learning by putting real experiments in front of real customers every day. Rather than big-bang product launches based on months of speculation, deploy small changes constantly and measure their impact. Let customer behavior, not opinions, guide product decisions.

  10. Startups need an adaptive organization that can pivot quickly. Small, cross-functional teams with the authority to make decisions and run experiments will outperform large, siloed organizations every time. Speed of learning — not speed of coding — is the ultimate competitive advantage.


Concepts

Build-Measure-Learn Loop

The fundamental cycle of the Lean Startup: turn ideas into products (Build), measure customer reactions with data (Measure), and learn whether to pivot or persevere (Learn). The goal is to minimize the time through each cycle.

Example

Dropbox built a simple video demonstrating their product idea before writing any code. The waitlist went from 5,000 to 75,000 overnight — validating demand without building the full product. Each cycle provides new data to inform the next iteration.

Minimum Viable Product (MVP)

The simplest version of a product that lets you start the Build-Measure-Learn loop. Not a prototype or beta — it's the smallest experiment that tests your riskiest assumption.

Example

Zappos tested whether people would buy shoes online by photographing shoes at local stores and posting them on a simple website. When someone ordered, the founder went and bought the shoes at retail price. No warehouse, no inventory system. The MVP tested the core assumption: will people buy shoes without trying them on?

Validated Learning

Empirical evidence that a startup has discovered something valuable about its customers, market, or strategy through rigorous experimentation — as opposed to learning from anecdotes, intuition, or unreliable surveys.

Example

IMVU initially assumed users would want to integrate a 3D chat avatar into existing IM platforms. After building and testing, they learned users actually wanted a separate, new network where they could meet strangers. The learning was validated by measurable changes in user behavior — not by asking users what they wanted.

Pivot

A structured course correction to test a new fundamental hypothesis about a startup's product, business model, or engine of growth, while keeping one foot grounded in what has been learned so far.

Example

YouTube started as a video dating site before pivoting to a general video-sharing platform. Flickr started as a feature inside an online game before becoming a photo-sharing service. Instagram started as Burbn, a location-sharing app, before pivoting to focus solely on photo sharing with filters. Each pivot preserved some learning while fundamentally changing direction.

Innovation Accounting

A framework for measuring progress in a startup using actionable metrics rather than vanity metrics. It involves three steps: establishing a baseline, tuning the engine, and deciding to pivot or persevere.

Example

Instead of tracking total registered users (vanity), track activation rate (percentage who complete a meaningful first action), retention rate (percentage who return after 30 days), and revenue per customer. If these metrics improve with each experiment cycle, you're making real progress. If they plateau despite multiple efforts, it may be time to pivot.
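To make the arithmetic concrete, the three actionable metrics above can be computed directly from raw cohort counts. A minimal Python sketch — all figures here are invented for illustration, not taken from the book:

```python
# Hypothetical funnel counts for one experiment cycle (invented numbers).
registered = 100_000      # total sign-ups: a vanity metric on its own
activated = 18_000        # completed a meaningful first action
retained_30d = 6_300      # returned after 30 days
revenue = 31_500.00       # total revenue from this cohort, in dollars

activation_rate = activated / registered        # 0.18
retention_rate = retained_30d / activated       # 0.35
revenue_per_customer = revenue / retained_30d   # 5.00

print(f"activation rate: {activation_rate:.0%}")
print(f"30-day retention: {retention_rate:.0%}")
print(f"revenue per retained customer: ${revenue_per_customer:.2f}")
```

If a product change moves these ratios between cycles, that is real progress; if they stay flat no matter what ships, that is the pivot-or-persevere signal.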

Vanity Metrics vs. Actionable Metrics

Vanity metrics (total users, total downloads) make you feel good but don't inform decisions. Actionable metrics (conversion rates, cohort retention, revenue per user) tell you whether specific changes are working.

Example

'We have 100,000 registered users!' sounds impressive but says nothing about business health if 95% signed up and never returned. 'Our week-1 retention improved from 20% to 35% after the onboarding redesign' is actionable — it ties a specific change to a measurable improvement. Vanity metrics fuel press releases; actionable metrics fuel decisions.
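The week-1 retention comparison above is essentially a cohort calculation. A small Python sketch with invented cohort numbers shows the shape of it:

```python
# Invented cohort data: sign-ups per week and how many returned in week 1,
# before and after a hypothetical onboarding redesign.
cohorts = {
    "week of Mar 3 (old onboarding)": {"signed_up": 1000, "returned_week1": 200},
    "week of Mar 10 (new onboarding)": {"signed_up": 1200, "returned_week1": 420},
}

rates = {name: c["returned_week1"] / c["signed_up"] for name, c in cohorts.items()}

for name, rate in rates.items():
    print(f"{name}: week-1 retention {rate:.0%}")

# Comparing 20% to 35% across cohorts ties the redesign to a measurable
# change -- something a total-registered-users counter can never show.
```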

The Value Hypothesis

Tests whether a product or service actually delivers value to customers once they use it. It's one of the two most important assumptions every startup must validate early.

Example

A food delivery startup must test: do customers actually value having restaurant food delivered to their door at this price point? Early testers who reorder frequently validate the value hypothesis. If customers try the service once and never return, the value hypothesis is invalidated, regardless of how many people sign up initially.

The Growth Hypothesis

Tests how new customers will discover a product or service. There are three engines of growth: sticky (high retention), viral (users bring users), and paid (customer acquisition cost is less than lifetime value).

Example

Sticky growth: a SaaS product where 95% of customers renew monthly. Viral growth: Hotmail adding 'Get your free email at Hotmail' to every outgoing message. Paid growth: a company spending $10 to acquire a customer worth $50 in lifetime revenue. Each engine has different metrics and strategies for optimization.
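Each engine's viability condition reduces to a simple inequality. This Python sketch encodes the three checks with invented numbers (the viral-coefficient formulation, invites times conversion rate, is a common way to operationalize the viral engine):

```python
# Hypothetical numbers for each engine of growth (all invented).

# Paid engine: viable when lifetime value exceeds acquisition cost.
cac = 10.0   # cost to acquire one customer
ltv = 50.0   # lifetime revenue per customer
paid_engine_works = ltv > cac          # each $10 spent returns $50

# Viral engine: viable when each user brings in more than one new user.
invites_per_user = 8
invite_conversion_rate = 0.15
viral_coefficient = invites_per_user * invite_conversion_rate  # 1.2
viral_engine_works = viral_coefficient > 1.0

# Sticky engine: viable when new-customer growth outpaces churn.
monthly_churn = 0.05       # 5% of customers leave each month (95% renew)
monthly_new_rate = 0.08    # new customers as a fraction of the base
sticky_engine_works = monthly_new_rate > monthly_churn

print(f"paid: {paid_engine_works}, viral: {viral_engine_works}, "
      f"sticky: {sticky_engine_works}")
```

The point is that each engine has one dominant ratio to tune, and innovation accounting means watching that ratio rather than a raw user count.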

Small Batches

Working in small increments rather than large ones accelerates learning and reduces waste. Smaller batches move through the Build-Measure-Learn loop faster and surface problems earlier.

Example

Toyota's manufacturing insight: producing one piece at a time through every step (rather than batching 1,000 of each step) actually finishes faster and catches defects earlier. In software: deploying one feature at a time to production and measuring its impact beats building ten features in isolation and launching them all at once.
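The batching arithmetic can be made concrete with a toy model. This Python sketch assumes three workstations and an invented one-second step time, so the exact numbers are illustrative only:

```python
# Toy model of large-batch vs. one-piece flow across three workstations,
# each taking 1 second per item (all numbers invented for illustration).
STEP_TIME = 1   # seconds per item per station
ITEMS = 100
STATIONS = 3

# Large batch: each station processes all 100 items, then hands the whole
# batch to the next station. The first finished item appears very late.
large_batch_first_done = (STATIONS - 1) * ITEMS * STEP_TIME + STEP_TIME  # 201 s
large_batch_all_done = STATIONS * ITEMS * STEP_TIME                      # 300 s

# One-piece flow: each item moves on immediately, so the stations work in
# parallel on different items (a pipeline).
one_piece_first_done = STATIONS * STEP_TIME              # 3 s
one_piece_all_done = (ITEMS + STATIONS - 1) * STEP_TIME  # 102 s

print(f"large batch:    first item at {large_batch_first_done}s, "
      f"all done at {large_batch_all_done}s")
print(f"one-piece flow: first item at {one_piece_first_done}s, "
      f"all done at {one_piece_all_done}s")

# A defect introduced at station 1 surfaces after 3 seconds in one-piece
# flow, versus 201 seconds in the large-batch setup -- the same reason
# deploying one feature at a time surfaces problems earlier.
```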

Five Whys

A root-cause analysis technique adapted from Toyota: when a problem occurs, ask 'Why?' five times to trace symptoms back to their underlying cause, then make proportional investments in prevention at each level.

Example

Problem: New feature broke the website. Why? A new deployment had a bug. Why? The developer didn't write a test. Why? There's no test-writing requirement. Why? The team never established testing guidelines. Why? Leadership didn't prioritize engineering practices. Solution: invest proportionally at each level — fix the bug, add the test, create guidelines, and schedule a team discussion.