Jan 27, 2022 · 5 min read

From Idea to Launch — What Actually Happens

The textbook version of product development is neat and linear. Reality is messier. Here's what it actually looks like.

Every product management course teaches the same linear process: Idea → Research → Design → Build → Test → Launch → Iterate.

Clean. Logical. And mostly fiction.

Real product development is loops within loops. You'll go back to design mid-build. You'll launch and realize the idea was wrong. You'll iterate before you test. Here's what it actually looks like.

Stage 1: The Idea (It's Probably Wrong)

Ideas come from everywhere. User feedback, market gaps, competitive pressure, that shower thought at 6am.

Most ideas are wrong. Not bad, just wrong — solving the wrong problem, or solving the right problem for the wrong people, or solving it in the wrong way.

Don't get attached. The goal isn't to prove your idea is right. It's to find out if it's wrong as fast as possible.

What I actually do:

  • Write down the hypothesis in one sentence
  • List what would have to be true for this to work
  • Identify the riskiest assumption
  • Figure out the cheapest way to test that assumption

If the idea survives this, move to validation.

Stage 2: Validation (Talk to Humans)

Not surveys. Not focus groups. Actual conversations with potential users.

"Would you use this?" is a useless question. People say yes to be nice.

Better questions:

  • "Tell me about the last time you dealt with [problem]"
  • "What did you try? What worked? What didn't?"
  • "How much time/money did that cost you?"
  • "What would make this not worth solving?"

Listen for emotion. If someone's animated about a problem, it's real. If they're giving polite answers, it's probably not.

Five to ten conversations usually tell you if you're onto something.

Stage 3: Scoping (Kill Your Darlings)

Now you know there's a problem worth solving. What's the smallest thing you can build that solves it?

This is where most products go wrong. The scope balloons. "While we're at it..." features creep in. What started as a focused tool becomes a bloated mess.

My process:

  1. List everything the product could do
  2. Cross off everything that's not essential for the core use case
  3. Cross off more things
  4. Whatever's left is v1

V1 should be embarrassingly simple. If you're not a little uncomfortable with how little it does, you've overbuilt.

Stage 4: Design and Build (They Overlap)

Design isn't a phase that ends before development starts. It's ongoing.

How I actually work:

  • Rough wireframes for the core flow
  • Engineers start on the backend while design iterates on UI
  • Build → Review → Adjust → Build → Review → Adjust
  • Real product emerges from collaboration, not handoffs

Don't wait for perfect designs. Build something ugly, use it, then make it pretty. You'll learn more from using a rough prototype than from staring at Figma files.

Stage 5: Testing (It's Never Enough)

Everyone says they test. Few actually do it well.

What I expect:

  • Unit tests for logic-heavy code
  • Integration tests for critical paths
  • Smoke tests before every deploy
  • Actual humans using the thing before launch

That last one is crucial. Automated tests catch bugs. Human testing catches "this is confusing" and "why would anyone do it this way?" problems.

Allocate real time for testing. "We'll test if there's time" means "we won't test."
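A smoke test doesn't have to be elaborate. Here's a minimal sketch of a pre-deploy runner in Python — the specific checks are placeholders, not any particular stack, but the shape is the point: run every check, report what failed, and refuse to deploy on failure.

```python
# Minimal smoke-test runner: each check is a (name, function) pair
# where the function returns True on success. Run them all, collect
# failures, and let the caller halt the deploy if anything failed.

def run_smoke_tests(checks):
    """Run every (name, check_fn) pair; return names of failed checks."""
    failures = []
    for name, check in checks:
        try:
            ok = check()
        except Exception:
            ok = False  # a crashing check counts as a failure
        if not ok:
            failures.append(name)
    return failures

# Placeholder checks -- in practice these would hit a health endpoint,
# run a trivial database query, and load the login page.
checks = [
    ("health endpoint responds", lambda: True),
    ("database ping", lambda: True),
    ("login page loads", lambda: True),
]

failed = run_smoke_tests(checks)
if failed:
    raise SystemExit(f"Smoke tests failed: {failed}")
```

The useful property is that it runs in seconds, so there's no excuse to skip it before a deploy.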

Stage 6: Launch (It's Not The End)

Launch day isn't a celebration. It's when you finally get real data.

What I do on launch day:

  • Watch the dashboards (errors, performance, user behavior)
  • Be ready to roll back if something's broken
  • Don't celebrate until 48 hours of clean operation
  • Collect feedback aggressively

What I don't do:

  • Big marketing pushes on day one
  • Invite everyone to use it immediately
  • Assume it's working because nobody complained

Soft launch to a small group. Fix the obvious issues. Then expand.

Stage 7: Learn (The Actual Hard Part)

Now you have users. Are they getting value?

Look at:

  • Are people coming back? (retention)
  • Are they using the core feature? (engagement)
  • Are they telling others? (organic growth)
  • What are they struggling with? (support tickets, feedback)
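To make "are people coming back?" concrete: retention is just the fraction of one week's active users who show up again the next week. A rough sketch (the event format here is a made-up example, not any particular analytics schema):

```python
from datetime import date

# Each event is (user_id, activity_date). Retention here means: of the
# users active in week 1, what fraction were also active in week 2?

def week_over_week_retention(events, week1_start, week2_start):
    week1_users = {u for u, d in events if week1_start <= d < week2_start}
    week2_users = {u for u, d in events if week2_start <= d}
    if not week1_users:
        return 0.0  # no cohort, no retention to measure
    return len(week1_users & week2_users) / len(week1_users)

events = [
    ("alice", date(2022, 1, 3)),   # week 1
    ("bob",   date(2022, 1, 4)),   # week 1
    ("alice", date(2022, 1, 11)),  # week 2 -- alice came back
]
retention = week_over_week_retention(
    events, date(2022, 1, 3), date(2022, 1, 10)
)
print(retention)  # -> 0.5 (1 of 2 week-1 users returned)
```

The number itself matters less than the trend: flat or rising retention says the hypothesis from Stage 1 is holding up; a steady decline says it isn't.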

This is where the hypothesis from Stage 1 gets validated or invalidated. Often invalidated. That's fine. Now you know.

Iterate based on evidence, not opinions. "Users would love X" needs data. "Users are struggling with Y, here's the support ticket volume" is data.

The Reality: It's Not Linear

Real product development looks more like:

Idea → Validation → "Wait, different problem" → New idea → Validation → Scope → Build → "Design doesn't work" → Redesign → Build → Test → "Core assumption wrong" → Pivot → Build → Launch → Learn → Iterate → Iterate → Iterate

The teams that succeed aren't the ones that follow the process perfectly. They're the ones that learn fast and adapt.


Build things. Ship things. See what happens. Adjust. Repeat. That's product development.