Version 1.0
For non-technical builders who want to turn ideas into working prototypes
Vibe Coding is building software through conversation with AI. Instead of writing code yourself, you describe what you want in plain language, see what the AI builds, and iterate until it works.
Think of it like working with a really fast junior developer who:
Example Conversation:
You: "Create a form where sales reps can log customer calls.
Include fields for customer name, call date, and notes."
Claude: [Builds a working form with those fields]
You: "The notes field is too small. Make it bigger and add
a character counter."
Claude: [Updates the form]
You: "Perfect! Now add a submit button that shows a success
message."
Claude: [Adds the button and success state]
That’s Vibe Coding. You’re guiding, testing, and refining—not writing code.
Perfect for:
You DON’T need:
You DO need:
Quick Prototypes
Simple Internal Tools
Demos & Presentations
Learning & Exploration
Production Systems
Complex Integrations
Compliance Requirements
When You Need:
This guide follows a 5-phase framework:
PHASE 1: THINK → Define problem, test assumptions, choose target
PHASE 2: PLAN → Describe what you want, break down features
PHASE 3: BUILD → Talk to Claude, iterate, checkpoint
PHASE 4: VALIDATE → Test with users, analyze, decide
PHASE 5: SHARE → Deploy, plan next steps
Each phase includes:
Special features:
How to use this guide:
Let’s get started!
Before you start vibe coding, make sure you have:
Time & Focus
Tools & Access
People & Validation
Clarity & Preparation
Have you used Claude.ai before?
Best for:
Pros:
Cons:
How to start:
Best for:
Pros:
Cons:
How to start:
Start with Artifacts if:
Use Projects if:
💡 Pro Tip: Start with Artifacts for your first project. You can always move to Projects later.
Before you dive in, understand these key principles:
Think of it this way:
Your job:
NOT your job:
Your first prompt won’t be perfect. That’s normal!
The Vibe Coding Cycle:
┌─────────────────────────────────────┐
│ 1. DESCRIBE │
│ Tell Claude what you want │
│ (Be specific and clear) │
└──────────────┬──────────────────────┘
│
▼
┌─────────────────────────────────────┐
│ 2. ASSESS │
│ Claude builds it instantly │
│ (Review what you see) │
└──────────────┬──────────────────────┘
│
▼
┌─────────────────────────────────────┐
│ 3. TEST │
│ Try using it yourself │
│ (Click, type, interact) │
└──────────────┬──────────────────────┘
│
▼
┌─────────────────────────────────────┐
│ 4. REFINE │
│ Give specific feedback │
│ ("Change X to Y because...") │
└──────────────┬──────────────────────┘
│
▼
┌─────────────────────────────────────┐
│ 5. REPEAT │
│ Keep cycling until it works │
│ (Typically 5-15 iterations) │
└──────────────┬──────────────────────┘
│
└──────> Back to DESCRIBE
Example of one full cycle:
Most successful projects go through 5-15 iterations on each feature.
Bad prompt (too vague): “Build a user dashboard”
Good prompt (specific): “Create a dashboard showing:
More details = better results.
Don’t wait until “it’s done” to test. Test after every major feature.
Testing rhythm:
Sometimes it’s faster to start fresh than to fix a messy project.
Signs you should restart:
How to restart well:
Claude is powerful but not perfect. You need to:
Think: “Working with a capable but junior team member”
Time Investment: 30-60 minutes
Why it matters: Every minute here saves 30 minutes in Phase 3
Before asking Claude to build anything, get crystal clear on what you’re solving and why.
Answer these in writing (yes, actually write them down):
Bad: “We need better customer management”
Good: “Sales reps forget to follow up with leads, causing us to lose deals”
Be specific. Use real examples:
Bad: “Business users”
Good: “Sales reps who are always on the road, use mobile primarily, and hate complex software”
Create a mini user profile:
Research 2-3 existing tools:
💡 Pro Tip: Spend 15 minutes using a competitor’s tool. Take screenshots. This gives Claude context about what to emulate (or avoid).
Bad: “Users like it”
Good: “3 out of 5 test users can complete the core task in under 2 minutes without help”
Define success metrics:
This is crucial—scope creep kills projects.
Example:
Could this be:
If software isn’t clearly the best answer, pause and reconsider.
PROBLEM STATEMENT:
[One paragraph describing the problem with specific examples]
TARGET USER:
Role: [Title/description]
Context: [When/where they use it]
Tech level: [Low/Medium/High]
Key frustrations: [What bothers them]
Success for them: [What gets better]
EXISTING SOLUTIONS:
Tool 1: [Name] - Good: [X] / Bad: [Y]
Tool 2: [Name] - Good: [X] / Bad: [Y]
Our advantage: [What we do differently]
SUCCESS CRITERIA:
- [Measurable outcome 1]
- [Measurable outcome 2]
- [Measurable outcome 3]
OUT OF SCOPE:
- [What we're not building]
- [What we're not building]
WHY SOFTWARE?
[Why this needs to be custom-built vs using existing tools]
Before moving forward, verify:
🎯 Reality Check: If you can’t confidently check all boxes, spend more time in Step 1. This foundation determines everything.
Now use Claude to poke holes in your thinking BEFORE you start building.
You might be:
Better to discover this now than after 8 hours of building!
Copy and customize this template:
I'm planning to build [type of tool/app].
CONTEXT:
Problem: [Your problem statement]
Target users: [Your user profile]
Success looks like: [Your criteria]
CHALLENGE THIS IDEA:
1. What am I assuming that might be wrong?
2. What are 3 alternative approaches I haven't considered?
3. What will be hardest to build or maintain?
4. What could make this fail with users?
5. What's the absolute simplest version that could work?
6. Is there an existing tool I should use instead?
Be skeptical and direct. I want to know if this is a bad idea before I invest time.
Look for these red flags:
Common outcomes:
User’s initial idea: “Build a tool for managing customer subscriptions with payment processing”
Claude’s pressure-test response: “Payment processing adds significant complexity and compliance requirements. Consider:
User’s response: “You’re right—payments are out of scope. I’ll build just the subscription tracking interface and leave payment processing to our existing system.”
After pressure-testing:
Not all prototypes are created equal. Define what you’re building toward.
Goal: Prove the core idea is technically possible
Audience: Just you (maybe 1-2 teammates)
Quality bar: Rough edges are fine, bugs are okay
Time investment: 2-4 hours
Example: “Can we extract data from PDFs automatically?”
What’s next: If it works → Enhance to MVP/Demo or hand to engineering
Goal: Test with real users to validate need
Audience: 5-10 alpha/beta users (real usage scenario)
Quality bar: Core flow works reliably, handles real data
Time investment: 1-3 days
Example: “Let’s see if sales reps actually use this lead tracker”
⚠️ Important: MVPs used by real users often need engineering involvement for:
When to involve engineering:
Goal: Get buy-in, funding, or approval
Audience: Executives, investors, partners
Quality bar: Polished UI, handles realistic data, no crashes
Time investment: 2-4 days
Example: “Show the CEO why we should build this feature”
What’s next: Usually handed to engineering for proper build
Goal: Solve an operational problem for your team
Audience: Specific team or department
Quality bar: Works reliably with real data (UI can be simple)
Time investment: 1-4 days
Example: “Automate our weekly reporting process”
⚠️ Important: Plan for maintenance! These tools will break and need updates.
POC (2-4 hrs)
↓ Works? Like it?
├─> Enhance to MVP (add polish, real data handling)
├─> Enhance to DEMO (add visual polish, remove bugs)
└─> Enhance to INTERNAL TOOL (add error handling, documentation)
You don’t need to choose the final target immediately. Many successful projects:
Or:
START: What do you need to accomplish?
│
├─> "I want to prove this is technically possible"
│ └─> POC (Proof of Concept)
│ Time: 2-4 hours
│ Quality: Rough, bugs okay
│ Audience: Just you + maybe 1-2 teammates
│
├─> "I need to test if users actually want this"
│ └─> MVP (Minimum Viable Product)
│ Time: 1-3 days
│ Quality: Core works reliably
│ Audience: 5-10 real users (alpha/beta)
│ ⚠️ May need engineering for production use
│
├─> "I need to get buy-in from stakeholders/executives"
│ └─> DEMO (Stakeholder Presentation)
│ Time: 2-4 days
│ Quality: Polished, no crashes
│ Audience: Executives, investors, partners
│
└─> "I need to solve a problem my team has right now"
└─> INTERNAL TOOL
Time: 1-4 days
Quality: Works with real data
Audience: Your team or department
⚠️ Plan for ongoing maintenance
Quick Questions to Decide:
| If you answer… | Choose… |
|---|---|
| “Will anyone else see this?” → No | POC |
| “Do I need real user feedback?” → Yes | MVP |
| “Am I presenting to leadership?” → Yes | DEMO |
| “Will my team use this daily?” → Yes | INTERNAL TOOL |
💡 Pro Tip: Start with POC, then evolve it! Build a quick proof of concept first. If it works and you like it, you can enhance the same prototype into an MVP, Demo, or Internal Tool by adding polish and features.
| Aspect | POC | MVP | Demo | Internal Tool |
|---|---|---|---|---|
| UI Polish | Skip it | Important | Very important | Nice-to-have |
| Error Handling | Happy path only | Comprehensive | Handle gracefully | Comprehensive |
| Mobile Support | If needed for test | Usually yes | Usually yes | Depends on use |
| Documentation | None | Essential for users | User-facing only | Essential + technical |
| Engineering Involvement | No | Likely for production | For next phase | For scale/reliability |
| Phase 2 Planning | Skip to Phase 3 | Do Steps 4-5 | Do Steps 4-6 | Do Steps 4-6 |
TARGET TYPE: [POC / MVP / Demo / Internal Tool]
PRIMARY GOAL:
[What you're trying to achieve]
AUDIENCE:
[Who will use/see this]
QUALITY BAR:
Must work: [Essential features]
Should work: [Important but not critical]
Nice to have: [Polish/extras]
SUCCESS METRIC:
[How you'll know this achieved its goal]
WHAT HAPPENS NEXT:
[Throw away / Hand to eng / Keep using / Iterate on feedback]
Before proceeding:
🎯 Reality Check: Most people should start with POC or MVP. Demo requires significantly more polish. Be honest about your timeline and goals.
Time Investment: 30-60 minutes
Skip if building POC. Optional for MVP. Required for Demo/Internal Tool.
This step translates your problem into a clear description Claude can work with.
Force yourself to describe it in ONE sentence:
Examples:
Test: Can someone who knows nothing about your company understand what this does?
Describe the step-by-step journey:
USER FLOW:
1. [User starts at...]
2. [User does...]
3. [System shows/does...]
4. [User completes when...]
Example: Lead Tracker
USER FLOW:
1. Sales rep opens the app on their phone
2. Taps "Add Lead" button
3. Fills in: Company name, Contact person, Phone, Interest level
4. Taps "Save"
5. Sees confirmation message
6. Can view list of all their leads sorted by date
Keep it to 4-8 steps max. If it’s longer, you might be building too much.
What does success look like for the user?
Examples:
Technical or design requirements:
CONSTRAINTS:
- Device: [Mobile-first / Desktop / Both]
- Data: [Where does it come from? Where does it go?]
- Design: [Any style requirements?]
- Performance: [Speed needs?]
Example:
CONSTRAINTS:
- Mobile-first (sales reps mostly on phones)
- Data stays in browser (no backend needed)
- Simple, fast interface (no unnecessary clicks)
- Works without internet for data entry (syncs later)
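That last constraint ("works without internet, syncs later") sounds ambitious, but it maps to a well-known pattern Claude can generate for you. A minimal sketch of the idea, with hypothetical names, just so you know what you are asking for:

```javascript
// Sketch of "works offline, syncs later" (illustrative names only):
// new entries are queued in localStorage and flushed once the browser
// reports it is back online.
const QUEUE_KEY = "pendingEntries";

function saveEntry(entry) {
  const queue = JSON.parse(localStorage.getItem(QUEUE_KEY) || "[]");
  queue.push({ ...entry, savedAt: new Date().toISOString() });
  localStorage.setItem(QUEUE_KEY, JSON.stringify(queue));
  if (navigator.onLine) flushQueue();
}

async function flushQueue() {
  const queue = JSON.parse(localStorage.getItem(QUEUE_KEY) || "[]");
  if (queue.length === 0) return;
  await sendToStorage(queue); // hypothetical sync step, see below
  localStorage.removeItem(QUEUE_KEY);
}

async function sendToStorage(entries) {
  // Placeholder: in a browser-only prototype this might merge into another
  // localStorage key; in a fuller build it would be a backend call.
  console.log("Syncing", entries.length, "queued entries");
}

window.addEventListener("online", flushQueue); // retry when connectivity returns
```

You don't need to write or even read this; the point is that a one-line constraint in your plan translates into real behavior you can test later.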
Show Claude what you like:
“Make it look similar to [Trello’s card interface]”
“Use a layout like [Google Calendar’s day view]”
“Style should feel like [Stripe’s dashboard] - clean and modern”
💡 Pro Tip: Take screenshots of UIs you like and describe what specifically appeals to you.
ONE-SENTENCE SUMMARY:
[What this tool does in one sentence]
USER FLOW:
1. [Step 1]
2. [Step 2]
3. [Step 3]
4. [Step 4]
5. [Result/completion]
SUCCESS OUTCOME:
[What success looks like for user]
CONSTRAINTS:
Device: [Mobile/Desktop/Both]
Data: [Where from, where to]
Design: [Style requirements]
Performance: [Speed needs]
REFERENCE EXAMPLES:
[What this should look/feel like]
NOT INCLUDED (YET):
[Features you're deliberately leaving out]
ONE-SENTENCE SUMMARY:
A simple form where customers can submit product feedback and see the status of their previous submissions.
USER FLOW:
1. Customer lands on feedback page
2. Sees simple form: Name (optional), Email, Feedback (text area), Category (dropdown)
3. Clicks "Submit Feedback"
4. Sees confirmation: "Thanks! We'll review your feedback."
5. Can optionally view "My Previous Feedback" showing their submission history
6. Each past submission shows: Date, Category, Status (New/Reviewing/Implemented)
SUCCESS OUTCOME:
Customer can submit feedback in under 60 seconds. They feel heard because they can track what happens to their feedback.
CONSTRAINTS:
Device: Both mobile and desktop (mobile-first design)
Data: Store submissions (needs simple database or backend)
Design: Clean, friendly, not corporate-feeling
Performance: Form submit under 2 seconds
REFERENCE EXAMPLES:
- Form design like Typeform (one question at a time feel, but all on one page)
- Status tracking like package tracking (clear progress states)
- Overall vibe like Notion (modern, clean, not intimidating)
NOT INCLUDED (YET):
- User accounts/login
- Admin panel to respond to feedback
- Email notifications
- Voting on others' feedback
- File attachments
Review your description:
🎯 Reality Check: If your user flow has more than 10 steps, you’re trying to do too much in one go. Here’s what to do:
Option 1: Simplify the flow
Option 2: Break into phases (not separate projects)
Example: Complex Onboarding Flow
❌ Too much at once (12 steps):
✅ Phase 1 - Core Flow (4 steps):
✅ Phase 2 - Team Features (3 steps added):
✅ Phase 3 - Advanced Setup (remaining steps):
Why this matters: Building everything at once leads to:
The right approach: Build Phase 1, test it, make sure it works, THEN build Phase 2. Each phase is still part of the same app; you’re just building it incrementally.
Now break your idea into bite-sized pieces you’ll build one at a time.
Categorize every feature:
MUST HAVE - Without this, it doesn’t work
SHOULD HAVE - Important but not critical
COULD HAVE - Nice to have if time permits
WON’T HAVE - Explicitly out of scope (for now)
MUST HAVE (Core Value):
- [ ] Feature 1 [Why it's critical]
- [ ] Feature 2 [Why it's critical]
SHOULD HAVE (Important):
- [ ] Feature 3 [Why it matters]
- [ ] Feature 4 [Why it matters]
COULD HAVE (Nice-to-have):
- [ ] Feature 5 [Why it's nice but optional]
WON'T HAVE (Out of scope):
- Feature 6 [Why not now]
- Feature 7 [Why not now]
Build in this sequence:
1. Happy Path First (Build order #1-3): The simplest version of the core feature
2. Common User Flows (Build order #4-6): Secondary but frequent use cases
3. Error Handling (Build order #7-9): What happens when things go wrong
4. Polish & Nice-to-Haves (Build order #10+): UI improvements, convenience features
MUST HAVE (Core Value):
- [ ] #1 Feedback form (Name, Email, Message, Category dropdown)
- [ ] #2 Submit button that shows success message
- [ ] #3 Form validation (email format, required fields)
SHOULD HAVE (Important):
- [ ] #4 View past submissions (list showing date, category, status)
- [ ] #5 Mobile-responsive design
- [ ] #6 Character counter on message field
COULD HAVE (Nice-to-have):
- [ ] #7 Filter past submissions by category
- [ ] #8 Search through past feedback
- [ ] #9 Export feedback to CSV
WON'T HAVE (Out of scope):
- User authentication (we'll use email lookup only)
- Admin dashboard (separate project)
- File uploads (keep it simple)
- Real-time notifications (email is enough)
For each feature, ask:
Is it critical for core value?
Can AI build it reliably?
Can it be tested independently?
Review your features:
⚠️ Red Flag: If you have more than 7 MUST HAVE features, your scope is too large. Simplify or break into multiple phases.
Your opening message to Claude sets the tone for the entire project. Make it count.
I want to build [one-sentence summary from Step 4].
CONTEXT:
Users: [Who and their key traits]
Problem it solves: [From Step 1]
Success looks like: [From Step 4]
TECHNICAL CONSTRAINTS:
- [Device requirements]
- [Data handling]
- [Performance needs]
LET'S START WITH:
[Feature #1 from your build order]
Specifically, I want:
- [Specific detail about feature]
- [Specific detail about feature]
- [Specific detail about feature]
Design should be: [Clean/modern/minimal/etc.]
Use: [Color preferences if any]
Please build this first feature and show me what it looks like. Ask clarifying questions if anything is unclear.
I want to build a simple customer feedback form where people can submit product feedback and see their past submissions.
CONTEXT:
Users: Customers (non-technical, using mobile and desktop)
Problem it solves: Currently feedback goes to scattered emails and gets lost. We want one place to collect and track all feedback.
Success looks like: Customer can submit feedback in under 60 seconds and see the status of past submissions.
TECHNICAL CONSTRAINTS:
- Mobile-first but works on desktop
- Need to store submissions (suggest simple approach)
- Should load fast (under 2 seconds)
LET'S START WITH:
Build the feedback submission form.
Specifically, I want:
- Fields: Name (optional), Email (required), Feedback message (text area), Category (dropdown: Bug Report, Feature Request, General Feedback)
- Submit button that shows a success message after submission
- Form validation (proper email format, message can't be empty)
- Clean, friendly design (not corporate-looking)
Design should be: Modern and welcoming, similar to Typeform's aesthetic
Use: Soft blue accent color (#4A90E2)
Please build this first feature and show me what it looks like. Ask clarifying questions if anything is unclear.
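For reference, a request like "proper email format, message can't be empty" usually becomes a few lines of validation. A rough sketch of what Claude might generate (the function and field names here are illustrative, not the exact code you'll get):

```javascript
// Illustrative validation for the feedback form.
function validateFeedback({ email, message }) {
  const errors = [];
  // Simple shape check: something@something.something
  if (!/^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(email || "")) {
    errors.push("Please enter a valid email address.");
  }
  if (!message || message.trim().length === 0) {
    errors.push("Feedback message can't be empty.");
  }
  return errors; // an empty array means the form can be submitted
}
```

If validation misbehaves later, describe the broken behavior in plain language (what you typed, what happened, what you expected); you never have to edit this code yourself.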
✅ Good first prompts:
❌ Bad first prompts:
For POC:
For MVP:
For Demo:
Review your first prompt:
🎯 Reality Check: You should be able to read this prompt aloud in under 60 seconds. If it takes longer, you’re including too much.
Time Investment: 2-8 hours (spread across multiple sessions is fine)
This is where the magic happens—and where most mistakes occur
Now you’ll actually talk to Claude and see your first feature come to life.
Paste your first prompt (from Step 6)
Claude will typically:
If Claude asks clarifying questions:
❌ Don’t: “Whatever you think is best”
✅ Do: Give specific answers or examples
Example exchange:
Claude: "For the category dropdown, should users be able to select multiple categories or just one?"
Bad response: "Whatever makes sense"
Good response: "Just one category per submission. If someone needs multiple categories, they can submit separate feedback."
When Claude shows you what it built, check:
1. Does it look right?
2. Does it do what you asked?
3. What’s confusing or wrong?
It’s normal for the first version to need tweaks. Common issues:
Visual Issues:
Functional Issues:
Missing Pieces:
Use this pattern:
Looking at [the specific part], I need to change [specific thing].
Currently it: [What it does now]
I want it to: [What it should do]
[Optional: Why this matters]
Real examples:
Looking at the submit button, I need to change its behavior.
Currently it: Just shows "Submitted" briefly
I want it to: Show a green checkmark with "Thanks! Your feedback has been submitted" for 3 seconds, then clear the form
This matters because: Users need clear confirmation their feedback was received.
Looking at the mobile layout, the form is too cramped.
Currently it: Everything is squished and hard to tap
I want it to: Increase spacing between fields, make buttons larger (at least 44px tall), add more padding
This matters because: Sales reps use this on phones between meetings.
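To make the first example concrete: the "show confirmation for 3 seconds, then clear the form" behavior typically turns into a small timing snippet like this sketch (element and function names are illustrative):

```javascript
// Show a success message for 3 seconds, then reset the form (illustrative).
function handleSubmitSuccess(form, messageEl) {
  messageEl.textContent = "Thanks! Your feedback has been submitted";
  messageEl.classList.add("success"); // styled as a green checkmark state
  setTimeout(() => {
    messageEl.textContent = "";
    messageEl.classList.remove("success");
    form.reset(); // clears all fields
  }, 3000);
}
```

Seeing the shape of the change helps you judge whether your feedback was specific enough; you still describe it in words, and Claude writes the code.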
After building the first feature:
💡 Pro Tip: Take a screenshot after each working feature. This helps you track progress and shows stakeholders what you built.
This is the core of vibe coding. You’ll repeat this cycle for each feature.
FOR EACH FEATURE:
1. REQUEST the feature (be specific)
2. REVIEW what Claude built
3. TEST it thoroughly
4. ITERATE with feedback
5. VERIFY it works
6. MOVE TO next feature
AFTER EVERY 3-5 FEATURES:
→ Do a Quality Checkpoint (Step 9)
Use this template:
Now let's add [feature name from your list].
What it should do:
- [Specific behavior 1]
- [Specific behavior 2]
- [Specific behavior 3]
Success looks like:
[Concrete example of it working]
Keep the existing [previous features] working.
Real example:
Now let's add the "View Past Feedback" feature.
What it should do:
- Show a "My Feedback" button below the submission form
- When clicked, display a list of all submissions from this user (match by email)
- Each item shows: Date submitted, Category, Message preview (first 50 characters), Status badge
- Status badges: "New" (blue), "Reviewing" (yellow), "Implemented" (green)
- List sorted by date (most recent first)
- If no past feedback, show friendly message: "You haven't submitted any feedback yet"
Success looks like:
User enters their email, clicks "My Feedback", and sees their submission history in a clean list format.
Keep the existing submission form working.
Don’t just accept it. Actually look at it:
Visual check:
Functional check:
Don’t just look—interact with it!
Basic testing:
Specific tests by feature type:
Forms:
Lists/Tables:
Calculations:
When something needs fixing, be specific:
Pattern for bug fixes:
Found an issue with [feature].
What I did: [Steps to reproduce]
What happened: [Actual behavior]
What should happen: [Expected behavior]
Can you fix this?
Pattern for improvements:
The [feature] works, but I want to improve [aspect].
Currently: [How it is now]
Change to: [How it should be]
Reason: [Why this matters]
Real examples:
Found an issue with the feedback list.
What I did: Clicked "My Feedback" without entering an email
What happened: Shows error "Cannot read property 'filter' of undefined"
What should happen: Should show a message like "Please enter your email to view your feedback"
Can you fix this?
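"Cannot read property 'filter' of undefined" almost always means the code tried to filter a list that doesn't exist yet (here, because no email was entered). The fix Claude applies is usually a small guard, something along these lines (illustrative names, assuming the submissions live in an array):

```javascript
// Guard against a missing email before filtering past feedback (sketch).
function getPastFeedback(email, submissions) {
  if (!email) {
    return { items: [], message: "Please enter your email to view your feedback" };
  }
  const items = (submissions || []).filter((item) => item.email === email);
  return {
    items,
    message: items.length ? "" : "You haven't submitted any feedback yet",
  };
}
```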
The feedback list works, but I want to improve readability.
Currently: All submissions are in one long list with no visual separation
Change to: Add a subtle border between items, increase spacing, and make the date smaller/lighter
Reason: Right now it's hard to scan through multiple items quickly
After Claude makes changes:
A feature is “done” when:
Don’t aim for perfect. You’re building a prototype, not a production app.
Rough guidelines:
⚠️ Red Flag: If a single feature takes more than 2 hours of back-and-forth:
Track your progress systematically:
PROJECT: [Your App Name]
STARTED: [Date]
TARGET: [POC/MVP/Demo/Internal Tool]
═══════════════════════════════════════════════════════
FEATURE TRACKER
Feature #1: [Feature Name]
Status: ✅ DONE
Built: [Date]
Tested: ✅ Works on happy path | ✅ Tested on mobile | ✅ Tested on desktop
Notes: [Any known issues or things to remember]
Feature #2: [Feature Name]
Status: ✅ DONE
Built: [Date]
Tested: ✅ Works on happy path | ✅ Tested on mobile | ⚠️ Minor issue on desktop (acceptable)
Notes: [Description of minor issue and why it's acceptable]
Feature #3: [Feature Name]
Status: 🔄 IN PROGRESS
Built: [Date started]
Tested: ⏳ Waiting to complete
Notes: [What you're currently working on]
Feature #4: [Feature Name]
Status: ⏳ PLANNED
Built: —
Tested: —
Notes: [What needs to happen first before building this]
═══════════════════════════════════════════════════════
KNOWN ISSUES:
• [Issue 1]: [Impact - Critical/Minor/Cosmetic]
• [Issue 2]: [Impact - Critical/Minor/Cosmetic]
DEPENDENCIES:
• Feature #5 requires Feature #3 to be complete
• Feature #7 needs real data before testing
NEXT SESSION GOALS:
□ Finish Feature #3
□ Build Feature #4
□ Test Feature #2 on desktop and fix issue
□ Do quality checkpoint after Feature #5
PROJECT: Customer Feedback Tool
STARTED: Oct 20, 2024
TARGET: MVP
═══════════════════════════════════════════════════════
FEATURE TRACKER
Feature #1: Feedback submission form
Status: ✅ DONE
Built: Oct 20, 2pm
Tested: ✅ Happy path | ✅ Mobile | ✅ Desktop
Notes: All validation working correctly
Feature #2: Form validation & error messages
Status: ✅ DONE
Built: Oct 20, 3pm
Tested: ✅ Happy path | ✅ Edge cases | ✅ Mobile
Notes: Shows clear errors for empty fields and bad email
Feature #3: Success message after submission
Status: ✅ DONE
Built: Oct 20, 3:30pm
Tested: ✅ Happy path | ✅ Mobile | ✅ Desktop
Notes: Displays for 3 seconds then clears form
Feature #4: View past feedback by email
Status: 🔄 IN PROGRESS
Built: Oct 20, 4pm (partial)
Tested: ⏳ Basic version works, needs refinement
Notes: Working on styling and empty states
Feature #5: Status badges (New/Reviewing/Implemented)
Status: ⏳ PLANNED
Built: —
Tested: —
Notes: Needs Feature #4 complete first
Feature #6: Mobile responsive design polish
Status: ⏳ PLANNED
Built: —
Tested: —
Notes: Will do after all features work functionally
═══════════════════════════════════════════════════════
KNOWN ISSUES:
• Past feedback list: No empty state message - Minor (fixing in #4)
• Submit button: Could be larger on mobile - Cosmetic (will fix in #6)
DEPENDENCIES:
• Feature #5 requires Feature #4 complete
• Feature #6 (mobile polish) should be done after all features exist
NEXT SESSION GOALS:
□ Complete Feature #4 (finish styling)
□ Build Feature #5 (status badges)
□ Do quality checkpoint after Feature #5
□ Start mobile polish if time permits
After every 3-5 features, add a checkpoint entry:
═══════════════════════════════════════════════════════
CHECKPOINT #1 - Oct 20, 5pm
Features complete: #1, #2, #3, #4, #5
✅ WHAT'S WORKING WELL:
- All core submission flow works
- Mobile experience is decent
- No critical bugs
⚠️ NEEDS ATTENTION:
- Past feedback list styling needs work
- Status badges color scheme unclear
🔄 NEXT FOCUS:
- Polish the past feedback view
- Add mobile responsive improvements
- Test with first user
═══════════════════════════════════════════════════════
Keep this tracking document open while you work. Update it after every feature completion. This helps you:
Building is intense. Take breaks:
Fresh eyes spot issues faster.
After each feature:
🎯 Reality Check: If you’re 4+ hours into building and still on your first feature, something’s wrong. Simplify the feature or restart with clearer requirements.
Every 3-5 features, pause and do a quality check. This prevents technical debt from piling up.
Do a checkpoint after:
Pretend you’re a user seeing this for the first time:
If your app needs to work on both:
Test unusual scenarios:
Even if you’re not technical, this helps:
Prompt:
Please review what we've built so far and explain:
1. What are the main components/parts?
2. Are there any potential issues or areas that might cause problems?
3. Is the code organized well or is it getting messy?
4. Are there any security concerns I should know about?
5. What would be hardest to change later?
Claude will give you a high-level overview. Look for:
For web apps, check:
If things are slow:
The app is loading slowly, especially [specific action].
Can you optimize this for better performance?
Ask Claude about security:
Review the current implementation for security issues:
- Are there any ways a user could access data they shouldn't?
- Is user input being validated properly?
- Are there any exposed API keys or sensitive data?
- What are the biggest security risks right now?
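One concrete issue this review often surfaces in web prototypes is user-entered text being inserted into the page without escaping, which lets someone inject HTML or scripts. The usual defense is tiny; a sketch (ask Claude to apply whatever fix fits your actual code):

```javascript
// Prefer textContent over innerHTML when displaying user input:
// the browser then treats the text as plain text, not markup.
function renderUserMessage(containerEl, userMessage) {
  containerEl.textContent = userMessage;
}

// If you must build an HTML string, escape user input first (illustrative).
function escapeHtml(text) {
  return text
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#39;");
}
```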
Common issues in prototypes:
Stop and fix if you notice:
🚩 Context is getting messy
Solution: Start fresh chat with full context:
I'm continuing work on [project]. Here's what we've built so far:
[List of completed features]
Current issues: [Any problems]
Let's continue by [next feature].
🚩 You don’t understand what Claude built
Solution:
I need you to explain the current implementation at a high level.
What are the main parts and how do they work together?
Use simple language (I'm not a developer).
🚩 Same issues keep appearing
Solution:
We keep running into [specific issue].
This suggests a fundamental problem.
Can you suggest a better approach that prevents this issue?
I'm willing to rebuild parts if needed.
If the checkpoint reveals serious issues, you can ask Claude to refactor:
The code is getting messy. Let's clean it up before adding more features.
Specifically:
- [Issue 1 from checkpoint]
- [Issue 2 from checkpoint]
- [Issue 3 from checkpoint]
Please refactor the code to address these issues while keeping all existing functionality working.
After refactoring, let me test everything to make sure nothing broke.
Every checkpoint, update your notes:
COMPLETED FEATURES:
✅ [Feature 1] - Works well
✅ [Feature 2] - Works well
✅ [Feature 3] - Has minor issue: [describe], acceptable for now
KNOWN ISSUES:
- [Issue 1]: [Impact and whether it matters]
- [Issue 2]: [Impact and whether it matters]
NEXT UP:
- [Next feature to build]
- [Any dependencies or concerns]
After each quality checkpoint:
🎯 Reality Check: If quality checkpoints reveal more than 3 serious issues, consider whether you need to restart with a cleaner approach.
Time Investment: 2-4 hours
This is where you find out if your prototype actually solves the problem
Now get your prototype in front of real users. This is non-negotiable.
Preparation checklist:
Ideal test users:
How many users?
If using Claude Artifacts:
If using Claude Projects:
If not deployed:
Use this structure:
[INTRODUCTION - 2 minutes]
"Hi! Thanks for helping me test this. I built a prototype and want to see if it makes sense. I'm testing the tool, not you, so there are no wrong answers. Please think out loud as you use it."
[CONTEXT - 1 minute]
"This tool is for [brief context about what it does]."
[TASK - 10-15 minutes]
"I'd like you to [realistic task that matches your user flow]. Take your time, and let me know if anything is confusing. Don't feel like you need to be polite about issues—honest feedback helps me improve it."
[FOLLOW-UP - 5 minutes]
Ask the follow-up questions below.
Watch for:
Don’t:
Do:
After they complete (or fail to complete) the task:
Understanding:
Experience:
Value:
Comparison:
For each user, capture:
USER: [Name/Role]
DATE: [When you tested]
TASK COMPLETION:
✅ or ❌ Did they complete the main task?
Time taken: [X minutes]
OBSERVATIONS:
[Specific moments where they struggled]
[Things they said out loud]
[Unexpected behaviors]
DIRECT QUOTES:
"[Memorable thing they said]"
"[Another quote]"
FEEDBACK:
What worked: [Positive points]
What didn't work: [Issues]
Suggestions: [Their ideas]
SUCCESS METRIC:
[Your metric from Step 1]: ✅ or ❌ Met or not?
USER: Sarah (Marketing Manager)
DATE: Oct 20, 2024
TASK COMPLETION:
✅ Completed: Submitted feedback successfully
Time taken: 1 min 45 sec
OBSERVATIONS:
- Paused for 10 seconds looking at Category dropdown (wasn't sure which to pick)
- Tried to click "Submit" without filling email (validation worked correctly)
- Smiled when she saw the success message
- Didn't notice the "View Past Feedback" button initially (had to scroll)
DIRECT QUOTES:
"Oh, this is way easier than emailing support!"
"I wish the category options were explained a bit more"
"Wait, I can see my past feedback? That's actually really useful"
FEEDBACK:
What worked:
- Form was simple and fast
- Validation was clear
- Success message felt rewarding
What didn't work:
- Category options unclear ("Bug Report" vs "Feature Request" - what's the difference?)
- "View Past Feedback" button too subtle
- Wanted to know response time expectations
Suggestions:
- Add tooltip explaining each category
- Make past feedback button more prominent
- Add text like "We typically respond within 48 hours"
SUCCESS METRIC:
✅ Submit feedback under 2 minutes: YES (1:45)
✅ Completed without asking for help: YES
Pattern 1: User completes task easily. ✅ Good sign! But still ask what could be better.
Pattern 2: User completes task after struggling. ⚠️ Fix the friction points. The tool works but isn’t intuitive.
Pattern 3: User abandons task. 🚩 Critical issues. Core flow doesn’t work. Major changes needed.
Pattern 4: User completes task but says they wouldn’t use it. 🚩 Tool works but doesn’t solve the real problem. Revisit Step 1.
In-Person (Best):
Video Call (Good):
Async (Okay for MVPs):
After user testing:
🎯 Reality Check: If 3+ out of 5 users couldn’t complete the core task, don’t polish—redesign.
Now synthesize what you learned into actionable insights.
Look across all your user tests for patterns.
Create three lists:
✅ WORKED WELL (mentioned by 3+ users):
- [Thing that worked]
- [Thing that worked]
- [Thing that worked]
⚠️ FRICTION POINTS (2+ users struggled):
- [Issue 1]: [How many users hit this]
- [Issue 2]: [How many users hit this]
🚩 CRITICAL ISSUES (blocked task completion):
- [Blocker 1]: [How many users couldn't complete]
- [Blocker 2]: [How many users couldn't complete]
✅ WORKED WELL (mentioned by 4/5 users):
- "Super fast to submit" - all 5 users under 2 minutes
- "Success message felt good" - 4/5 mentioned they felt confirmed
- "Cleaner than emailing" - 4/5 compared favorably to current process
⚠️ FRICTION POINTS (2+ users struggled):
- Category dropdown confusion - 3/5 hesitated, not sure which to pick
- "View Past Feedback" button too hidden - 4/5 didn't see it initially
- No guidance on response time - 3/5 asked when they'd hear back
🚩 CRITICAL ISSUES (blocked task completion):
- None! All 5 users completed the core task
- One user initially missed required email field but validation caught it
Go back to your success criteria from Step 1:
ORIGINAL SUCCESS CRITERIA:
[Metric 1]: [Did we hit it? Yes/No/Partial]
[Metric 2]: [Did we hit it? Yes/No/Partial]
[Metric 3]: [Did we hit it? Yes/No/Partial]
EVIDENCE:
[What from testing proves/disproves each metric]
ORIGINAL SUCCESS CRITERIA:
✅ 3/5 users complete task under 2 minutes: YES (all 5 completed; times 1:15-2:10)
✅ No user needs to ask for help: YES (all completed independently)
⚠️ Users rate it 7+ out of 10: PARTIAL (ratings: 7, 8, 6, 8, 7 - avg 7.2)
EVIDENCE:
- All users submitted feedback successfully
- Time range: 1:15 to 2:10 (target was 2:00)
- No one asked for help, but 3 users had hesitation moments
- Lower ratings mentioned: category confusion, hidden features
Not all feedback is equally important. Prioritize:
MUST FIX (blocking user success):
1. [Critical issue that prevented completion]
2. [Critical issue that prevented completion]
SHOULD FIX (major friction):
3. [Issue that caused significant struggle]
4. [Issue that caused significant struggle]
COULD FIX (nice to have):
5. [Polish item that would improve experience]
6. [Feature request that's interesting]
WON'T FIX (out of scope):
- [Thing users wanted but doesn't fit your goals]
- [Feature that's too complex for this prototype]
MUST FIX (blocking user success):
None! Core flow works.
SHOULD FIX (major friction):
1. Add tooltips explaining each category option
2. Make "View Past Feedback" button more prominent (larger, different color)
3. Add expected response time message
COULD FIX (nice to have):
4. Add character limit indicator on message field
5. Allow users to edit submitted feedback
6. Add ability to attach screenshots
WON'T FIX (out of scope):
- Real-time admin notification (separate admin tool needed)
- Voting on other feedback (community feature, too complex)
- Integrating with existing ticketing system (requires engineering)
Look for surprises:
Document these:
SURPRISING LEARNINGS:
- [Thing you didn't expect]
- [Thing you didn't expect]
- [How this changes your thinking]
Prepare a one-pager for stakeholders:
PROTOTYPE TESTING SUMMARY
WHAT WE TESTED:
[Brief description]
WHO WE TESTED WITH:
[X users, their profiles]
KEY RESULTS:
✅ [Main success]
✅ [Main success]
⚠️ [Main issue found]
SUCCESS METRICS:
[Metric 1]: [Result]
[Metric 2]: [Result]
TOP 3 FIXES NEEDED:
1. [Fix 1]
2. [Fix 2]
3. [Fix 3]
RECOMMENDED NEXT STEP:
[Polish and iterate / Pivot / Hand to engineering / etc.]
After analysis:
🎯 Reality Check: If you have 20+ “must fix” items, your prototype wasn’t ready for testing. Simplify scope and retest.
Based on your analysis, decide: iterate and polish, or pivot to a different approach.
Use this flowchart to decide your path:
START: Review User Testing Results
│
▼
┌───────────────────────────────────────────┐
│ Did users complete the core task? │
└───────────┬───────────────────────────────┘
│
┌───────────┴───────────┐
│ │
YES NO
│ │
▼ ▼
┌───────────────┐ ┌──────────────────┐
│ Did it meet │ │ Consider PIVOT │
│ success │ │ or major │
│ criteria? │ │ redesign │
└───────┬───────┘ └──────────────────┘
│
┌───────┴────────┐
│ │
YES PARTIAL
│ │
▼ ▼
┌─────────────┐ ┌──────────────────┐
│ POLISH │ │ POLISH & RETEST │
│ │ │ │
│ • Fix must │ │ • Fix critical │
│ items │ │ issues │
│ • Ship it! │ │ • Test with │
│ │ │ 2-3 new users │
└─────────────┘ └──────────────────┘
DECISION PATHS
┌──────────────────────────────────────────────────────┐
│ Path 1: POLISH (2-4 hours) │
├──────────────────────────────────────────────────────┤
│ Choose when: │
│ ✓ Core functionality works │
│ ✓ Users completed main task │
│ ✓ Hit most success criteria │
│ ✓ Feedback is about improvements, not blockers │
│ │
│ Action: │
│ 1. Fix "MUST FIX" items │
│ 2. Fix "SHOULD FIX" items (time permitting) │
│ 3. Light retest (1-2 users) │
│ 4. → Proceed to Phase 5 (Deploy) │
└──────────────────────────────────────────────────────┘
┌──────────────────────────────────────────────────────┐
│ Path 2: POLISH & RETEST (4-8 hours) │
├──────────────────────────────────────────────────────┤
│ Choose when: │
│ ⚠ Core works but has significant friction │
│ ⚠ Users struggled but completed task │
│ ⚠ Hit some success criteria but not all │
│ ⚠ Fixes might significantly change experience │
│ │
│ Action: │
│ 1. Fix "MUST FIX" items │
│ 2. Fix "SHOULD FIX" items │
│ 3. Full retest with 2-3 NEW users │
│ 4. Analyze again │
│ 5. → Then proceed to Phase 5 │
└──────────────────────────────────────────────────────┘
┌──────────────────────────────────────────────────────┐
│ Path 3: PIVOT (2-4 hours + restart) │
├──────────────────────────────────────────────────────┤
│ Choose when: │
│ ❌ Users couldn't complete core task │
│ ❌ Missed most success criteria │
│ ❌ Users said they wouldn't use it │
│ ❌ Built the wrong solution │
│ │
│ Action: │
│ 1. Document what you learned │
│ 2. Revisit Step 1 (problem definition) │
│ 3. Consider: Is this the right problem? │
│ 4. Choose: Restart with simpler approach OR │
│ Solve different problem OR │
│ Use existing tool instead │
└──────────────────────────────────────────────────────┘
┌──────────────────────────────────────────────────────┐
│ Path 4: HAND TO ENGINEERING (Documentation phase) │
├──────────────────────────────────────────────────────┤
│ Choose when: │
│ ⚠ Prototype proves value but needs production code │
│ ⚠ Users love it but technical complexity is high │
│ ⚠ Security/compliance requirements emerged │
│ ⚠ Scale requirements are clear │
│ │
│ Action: │
│ 1. Document everything learned │
│ 2. Create handoff document (see Step 14) │
│ 3. Set up meeting with engineering │
│ 4. Position as reference, not production code │
└──────────────────────────────────────────────────────┘
Answer these questions to clarify your path:
1. Did users accomplish the core task?
2. Did it meet your success criteria?
3. Would users actually use this?
4. Is it solving the right problem?
Choose this if:
What to do:
Prompt for polishing:
Based on user testing, I need to make these improvements:
CRITICAL:
1. [Must fix item 1]
2. [Must fix item 2]
IMPORTANT:
3. [Should fix item 1]
4. [Should fix item 2]
For each fix, maintain existing functionality. Let me test after each change.
Choose this if:
What to do:
Timeline: Add 2-4 hours for fixes + retesting
Choose this if:
What to do:
PIVOT CHECKLIST:
□ Document what you learned (what didn't work and why)
□ Revisit Step 1 (problem definition)
□ Consider: Is this the right problem to solve?
□ Option A: Restart with simpler approach
□ Option B: Solve a different problem
□ Option C: Use existing tool instead of building
When to pivot:
Pivot prompt:
Based on user testing, I need to reconsider our approach.
WHAT WE BUILT:
[Description]
WHAT WE LEARNED:
[Key insights from testing]
WHAT USERS ACTUALLY NEED:
[What testing revealed]
Can you suggest 3 alternative approaches that better address what users actually need? Keep it simple and feasible for prototyping.
Choose this if:
What to do:
DECISION: POLISH & RETEST
REASONING:
✅ All users completed core task
✅ Users said they would use it
⚠️ Average rating 7.2/10 (target was 7+, barely met)
⚠️ Several friction points that could improve experience
POLISH PLAN:
1. Add category tooltips (2 min fix)
2. Make "View Past Feedback" button prominent (5 min fix)
3. Add response time expectation text (2 min fix)
4. Increase mobile button sizes (5 min fix)
RETEST PLAN:
Test with 2 new users after fixes
Target: 8+ out of 10 ratings
If successful → Deploy
If not → Analyze further
Tips for deciding:
💡 Don’t over-polish: If users complete the task and say they’d use it, don’t aim for perfection. Ship it.
💡 Don’t under-react: If 3+ users had the same critical issue, it needs fixing before deployment.
💡 Trust patterns, not individuals: One user’s feedback might be an outlier. Look for what 2-3+ users mentioned.
💡 Time-box polishing: Give yourself max 2 hours for polish fixes. If it takes longer, reconsider.
Before moving to Phase 5:
🎯 Reality Check: Most projects should choose “polish” if they followed the framework. If you’re pivoting, reflect on whether early steps were thorough enough.
Time Investment: 1-2 hours
This is where you decide what happens to your prototype
Time to make your prototype accessible to others beyond your test users.
Before deploying anything:
FUNCTIONALITY:
□ All core features work
□ Tested on mobile and desktop (if needed)
□ No critical bugs
□ Error messages are user-friendly
□ Success states are clear
CONTENT:
□ No test/dummy data visible
□ No placeholder text ("Lorem ipsum", "Coming soon")
□ No profanity or internal jokes
□ Contact info is correct (if any)
SECURITY:
□ No exposed API keys or credentials
□ No sensitive data hard-coded
□ Input validation is working
□ No obvious security holes
PRESENTATION:
□ Looks professional enough for audience
□ Branding is consistent (if applicable)
□ No broken images or links
□ Mobile experience is acceptable (if needed)
EXPECTATIONS:
□ Stakeholders know this is a prototype
□ Documented known issues
□ Clear about what it does/doesn't do
Don’t deploy! Keep it local.
Why: POCs are disposable. No need to make them publicly accessible.
Recommended: Use free, simple hosting
Option 1: Claude Artifacts (Simplest)
Option 2: Replit (Great for beginners)
Option 3: Vercel (More professional)
Prompt for export:
I need to deploy this prototype. Can you provide:
1. All the code organized in files (HTML, CSS, JS)
2. Any setup instructions needed
3. What hosting platform would you recommend for this specific project?
Recommended: Professional hosting + custom domain (optional)
Use Vercel (above) but also:
Recommended: Stable hosting with authentication
Option 1: Keep in Claude Projects
Option 2: Deploy with simple auth
Option 3: Hand to IT/Engineering
Create a Share Package:
Subject: [Prototype Name] - Ready for Review
Hi [Recipients],
I've built a working prototype of [brief description].
🔗 LIVE LINK: [URL]
📋 WHAT IT DOES:
[2-3 sentences about core functionality]
👤 WHO IT'S FOR:
[Target users]
✅ WHAT WORKS:
- [Feature 1]
- [Feature 2]
- [Feature 3]
⚠️ KNOWN LIMITATIONS:
- [Limitation 1]
- [Limitation 2]
🎯 WHAT I NEED FROM YOU:
[Try it out and let me know: feedback/approval/next steps]
Best,
[Your name]
Even for prototypes, basic monitoring helps:
For Simple Tracking:
Ask Claude: "Can you add simple analytics to track:
- How many people visit
- Which features they use most
- Where they drop off
Use a free tool that's easy to set up."
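If you're curious what "simple analytics" means at its lightest, it is just recording events somewhere you can count them. A hypothetical sketch (note the limitation in the comments; Claude may instead suggest a hosted tool such as Google Analytics or Plausible for counting across visitors):

```javascript
// Hypothetical minimal event counter. Because it uses localStorage, it only
// counts activity in one visitor's own browser; a hosted analytics tool is
// needed to count across all visitors.
function trackEvent(name) {
  const counts = JSON.parse(localStorage.getItem("eventCounts") || "{}");
  counts[name] = (counts[name] || 0) + 1;
  localStorage.setItem("eventCounts", JSON.stringify(counts));
}

// Example calls sprinkled through the prototype:
trackEvent("page_view");
trackEvent("feedback_submitted");
```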
For Bug Reports:
“The link doesn’t work”
“It works for me but not for others”
“It’s really slow”
The deployed version is loading slowly.
Can you optimize for:
- Smaller file sizes
- Faster load times
- Better performance
Keep functionality the same.
After deployment:
🎯 Reality Check: Deployment should take 30-60 minutes max. If it’s taking hours, you’re over-complicating it. Keep it simple.
Your prototype is live. Now decide what happens to it.
Answer this question first:
“What was the goal of this prototype?”
□ Prove technical feasibility → Probably done, hand off findings
□ Get stakeholder buy-in → Present results, get decision
□ Test with real users → Analyze results, iterate or hand off
□ Solve internal problem → Keep using it, plan maintenance
□ Show what's possible → Mission accomplished, document learnings
When to choose this:
What to document:
POC HANDOFF DOCUMENT
PROJECT: [Name]
GOAL: [What we wanted to prove]
OUTCOME: [What we learned]
✅ WHAT WORKED:
- [Technical approach that worked]
- [User feedback that validated concept]
- [Unexpected wins]
❌ WHAT DIDN'T WORK:
- [Technical limitations discovered]
- [User friction points]
- [Things harder than expected]
🔑 KEY LEARNINGS:
1. [Important insight 1]
2. [Important insight 2]
3. [Important insight 3]
📊 USER TESTING RESULTS:
- [Number tested]
- [Success rate]
- [Key quotes]
💡 RECOMMENDATIONS FOR PRODUCTION:
1. [Technical recommendation]
2. [Feature recommendation]
3. [Design recommendation]
🔗 PROTOTYPE LINK:
[URL - as reference only, not production code]
⏭️ SUGGESTED NEXT STEPS:
[What engineering should do next]
When to choose this:
⚠️ Important Reality Check:
Vibe-coded tools are NOT production-grade. They will break. Plan for it.
Maintenance Plan Template:
INTERNAL TOOL MAINTENANCE PLAN
TOOL NAME: [Name]
PURPOSE: [What it does]
USERS: [Who uses it]
🔄 EXPECTED MAINTENANCE:
- Updates needed: [Weekly/Monthly/Quarterly]
- Estimated time per update: [X hours]
- Who maintains it: [Your name + backup]
⚠️ KNOWN LIMITATIONS:
1. [Limitation 1]
2. [Limitation 2]
3. [Limitation 3]
🚨 WHEN TO INVOLVE ENGINEERING:
- If [condition 1]
- If [condition 2]
- If usage grows beyond [X users]
📋 BACKUP PLAN:
If this tool breaks:
1. [Fallback process]
2. [Who to contact]
3. [Data recovery plan if needed]
📚 DOCUMENTATION:
- User guide: [Link]
- Technical notes: [Link]
- Known issues: [Link]
Set Realistic Expectations:
Team announcement:
I've built [tool name] to help with [problem].
✅ It's ready to use for [purpose]
⚠️ Important: This is a prototype-grade tool
- It may have bugs
- It will need occasional maintenance
- If it becomes critical, we should involve engineering
Please report issues to [your email/slack]
When to choose this:
Engineering Collaboration Plan:
ENGINEERING HANDOFF PLAN
PROJECT: [Name]
WHY THIS MATTERS: [Business case from user testing]
✅ PROTOTYPE VALIDATES:
- [User need 1]
- [User need 2]
- [Technical feasibility]
📊 USER TESTING DATA:
- Tested with [X] users
- [Y]% completion rate
- Average rating: [Z]/10
- Key insight: [Main learning]
🎯 PRODUCTION REQUIREMENTS:
MUST HAVE:
- [Requirement 1]
- [Requirement 2]
SHOULD HAVE:
- [Requirement 3]
- [Requirement 4]
🔒 PRODUCTION CONCERNS:
- Security: [What needs review]
- Scale: [Expected load]
- Integration: [What systems to connect]
- Compliance: [Any regulations]
🔗 PROTOTYPE AS REFERENCE:
URL: [Link]
Note: This is reference only, not production code
Use it to understand: UX flow, feature requirements, edge cases
⏱️ TIMELINE:
Prototype took: [X hours]
Production build estimate: [Multiply by 3-10x]
📅 NEXT STEPS:
1. [Engineering reviews prototype]
2. [Technical planning session]
3. [Production design/architecture]
4. [Development begins]
🤝 COLLABORATION MODEL:
- I own: Product decisions, user testing, feedback
- Engineering owns: Code quality, security, architecture
- We collaborate on: Feature prioritization, technical tradeoffs
The Handoff Meeting:
Schedule 30-60 minute meeting with engineering:
Agenda:
Come prepared with:
When to choose this:
This is success too! You learned something quickly before investing engineering months.
Failure Report Template:
PROJECT RETROSPECTIVE: [Name]
❌ OUTCOME: Prototype didn't validate hypothesis
ORIGINAL HYPOTHESIS:
[What we thought would work]
WHAT WE BUILT:
[Brief description]
WHAT WE LEARNED:
[Why it didn't work]
USER TESTING INSIGHTS:
- [Key learning 1]
- [Key learning 2]
- [Key learning 3]
💡 WHAT THIS MEANS:
[Implications for product strategy]
⏭️ ALTERNATIVE APPROACHES:
1. [Could try this instead]
2. [Or this]
3. [Or maybe this isn't the right problem]
⏱️ TIME INVESTMENT:
Total time: [X hours]
Value: Saved engineering [weeks/months] of work
🔗 PROTOTYPE LINK:
[URL - archived for reference]
Share the learnings:
⚠️ Involve engineering NOW if:
□ Handling payment information
□ Storing sensitive user data (PII, health, financial)
□ Need 100+ concurrent users
□ Integrating with complex existing systems
□ Subject to compliance (HIPAA, SOC2, GDPR, etc.)
□ Requires high availability (downtime = major problem)
□ Security is business-critical
□ Performance is business-critical
Don’t try to vibe-code these. Seriously.
Prototype to production timeline:
Why the multiplier? Production code needs:
Before closing this project:
🎯 Reality Check: The prototype was the easy part. Production takes 3-10x longer. Make sure the business case justifies that investment.
Let me walk you through a real example from start to finish.
Background: Sarah is a Product Manager at a B2B SaaS company. Customer feedback currently comes through scattered channels: support emails, sales calls, Slack messages. There’s no central place to track feedback or see what’s been addressed.
Goal: Build a simple prototype to validate whether a centralized feedback tool would solve this problem.
Sarah’s Problem Definition:
PROBLEM STATEMENT:
Customer feedback is scattered across email, Slack, and sales calls. We have no way to track what feedback we've received, what we've acted on, or which requests are most common. This leads to:
- Lost feedback that never gets reviewed
- Customers feeling unheard (ask about same issues repeatedly)
- Product team can't prioritize based on demand
REAL EXAMPLE:
Last month, 5 customers requested API rate limit increases via different channels. We didn't realize it was a pattern until the 5th request.
TARGET USER:
Role: Existing B2B customers (technical and non-technical)
Context: They use our product daily, encounter issues/ideas, want to share feedback
Tech level: Mixed (some developers, some business users)
Key frustrations: "I told support about this bug 3 months ago, is anyone looking at it?"
Success for them: Submit feedback easily, see status of their past feedback
EXISTING SOLUTIONS:
- Canny: Good but $400/mo, too expensive for MVP test
- Google Forms: Free but no status tracking for users
- Intercom: We have it, but feedback gets lost in support tickets
Our advantage: Custom to our needs, integrated status tracking
SUCCESS CRITERIA:
1. 80% of test users can submit feedback in under 2 minutes
2. Users can see status of their past feedback
3. Internal team (me) can easily see all feedback in one place
4. 3+ beta customers use it within first week
OUT OF SCOPE:
- Admin dashboard (separate project)
- Email notifications (can add later)
- Voting on others' feedback (nice-to-have)
- Integration with Jira (too complex for MVP)
WHY SOFTWARE?
Google Form could handle submission, but can't show users their feedback status. Existing tools are too expensive or not the right fit. This needs to be custom, but simple.
Sarah’s prompt to Claude:
I'm planning to build a customer feedback tool for B2B SaaS customers.
CONTEXT:
Problem: Feedback is scattered, customers can't track their submissions
Target users: B2B customers (mix of technical and non-technical)
Success looks like: Easy submission + status tracking for users
CHALLENGE THIS IDEA:
1. What am I assuming that might be wrong?
2. What are 3 alternative approaches I haven't considered?
3. What will be hardest to build or maintain?
4. What could make this fail with users?
5. What's the absolute simplest version that could work?
6. Is there an existing tool I should use instead?
Be skeptical and direct.
Claude’s response highlighted:
Sarah’s decision: Start with submission + basic status tracking. Status is the differentiator vs. Google Forms.
Sarah chose: MVP (Test with real users)
TARGET: MVP
PRIMARY GOAL:
Validate that customers want/use feedback submission with status tracking
AUDIENCE:
5-10 beta customers (mix of technical and business users)
QUALITY BAR:
Must work: Submission form, view past feedback with status
Should work: Mobile responsive, clean design
Nice to have: Categories, filtering
SUCCESS METRIC:
- 5+ customers submit feedback in first week
- 3+ customers check status of past feedback
- Average usability rating 7+/10
WHAT HAPPENS NEXT:
If successful: Build proper admin dashboard, hand to engineering
If not successful: Use simpler solution (Google Form + manual tracking)
ONE-SENTENCE SUMMARY:
A form where customers submit feedback and can see the status of all their past submissions.
USER FLOW:
1. Customer lands on feedback page
2. Sees form: Name (optional), Email (required), Feedback (textarea), Category (dropdown)
3. Clicks "Submit Feedback"
4. Sees success message: "Thanks! We'll review this within 48 hours."
5. Clicks "View My Past Feedback"
6. Enters email, sees list of their submissions
7. Each submission shows: Date, Category, Message preview, Status badge
SUCCESS OUTCOME:
Customer can submit feedback in under 90 seconds and check on any past feedback whenever they want.
CONSTRAINTS:
Device: Mobile-first (many customers on-the-go), but works on desktop
Data: Need simple storage (suggest Firebase or similar easy solution)
Design: Clean, modern, friendly (not intimidating)
Performance: Form submits in under 2 seconds
REFERENCE EXAMPLES:
- Form feel: Like Typeform (friendly, not corporate)
- Status display: Like package tracking (clear progress indicators)
- Overall aesthetic: Like Linear (clean, modern, lots of white space)
NOT INCLUDED:
- User accounts/passwords
- Admin dashboard (separate project)
- Email notifications
- File attachments
MUST HAVE:
- [ ] #1 Feedback form (Name optional, Email req, Message req, Category dropdown)
- [ ] #2 Form validation (proper email, non-empty message)
- [ ] #3 Submit with success message
- [ ] #4 View past feedback (by email lookup)
- [ ] #5 Status badges (New/Reviewing/Implemented)
SHOULD HAVE:
- [ ] #6 Mobile responsive design
- [ ] #7 Character counter on message field
- [ ] #8 Empty states ("No feedback yet")
COULD HAVE:
- [ ] #9 Filter by category
- [ ] #10 Search past feedback
WON'T HAVE:
- Admin panel
- Edit/delete submitted feedback
- Real-time updates
- Voting/commenting
I want to build a customer feedback form where users can submit feedback and see their past submissions with status tracking.
CONTEXT:
Users: B2B SaaS customers (mix of technical and business users)
Problem: Currently no central place for feedback, customers can't track what happens to their requests
Success: Users can submit in under 90 seconds and check status anytime
TECHNICAL CONSTRAINTS:
- Mobile-first, but works on desktop
- Need simple data storage (suggest easiest approach)
- Form submit should be fast (under 2 seconds)
LET'S START WITH:
Build the feedback submission form.
Specifically:
- Fields: Name (optional text), Email (required, validated), Feedback (required textarea, 500 char max), Category (dropdown: Bug Report, Feature Request, General Feedback, Other)
- Submit button that validates, submits, shows success message
- Success message: "Thanks! We'll review this within 48 hours."
- After submission, clear form and keep success message for 3 seconds
Design: Clean and modern with lots of white space, friendly tone (not corporate)
Colors: Use soft blue (#4A90E2) for accents
Please build this and show me what it looks like. Ask if anything is unclear.
Exchange 1:
Sarah: [First prompt above]
Claude: [Builds initial form with all fields, validation, and success message]
Sarah: This looks great! But the textarea is too small on mobile - can you make it taller (at least 120px) and the submit button bigger (48px height)?
Claude: [Updates the styling]
Sarah: Perfect. Now let's add the "View My Past Feedback" feature.
What it should do:
- Add a button below the form: "View My Past Feedback"
- When clicked, show a simple input asking for email
- After entering email and clicking "Show My Feedback", display list of all feedback from that email
- Each item shows: Date (formatted nicely), Category (as badge), Message (first 60 characters + "..."), Status (colored badge)
- Status options: "New" (blue), "Reviewing" (yellow), "Implemented" (green), "Won't Fix" (gray)
- If no feedback found, show: "No feedback found for this email. Submit your first piece of feedback above!"
For now, use dummy data (create 3 sample feedback items) so we can see how it looks.
Claude: [Builds the past feedback view with dummy data]
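To picture what Claude builds here, the dummy data and status-badge colors usually end up as simple structures like the sketch below. The field names and sample text are assumptions for illustration.

```javascript
// Sketch of the dummy feedback items and status-to-color mapping (illustrative only).
const STATUS_COLORS = {
  "New": "blue",
  "Reviewing": "yellow",
  "Implemented": "green",
  "Won't Fix": "gray",
};

const sampleFeedback = [
  { date: "2025-09-28", category: "Feature Request", message: "Sample feature request text...", status: "New" },
  { date: "2025-09-15", category: "Bug Report", message: "Sample bug report text...", status: "Reviewing" },
  { date: "2025-08-30", category: "General Feedback", message: "Sample general feedback text...", status: "Implemented" },
];

// Message preview is the first 60 characters plus "...", as Sarah specified.
const preview = (msg) => (msg.length > 60 ? msg.slice(0, 60) + "..." : msg);
```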
Exchange 2:
Sarah: Good! But I notice the feedback list isn't very readable. Can you:
- Add more space between each feedback item
- Make the date smaller and lighter gray
- Put the status badge on the right side instead of left
- Add a subtle border between items
Claude: [Makes styling improvements]
Sarah: Much better. Now let's talk about data persistence. Currently it's just dummy data. What's the simplest way to actually store submissions and retrieve them by email?
Claude: I can suggest two approaches:
1. **Simplest (Browser LocalStorage):**
- Stores data in user's browser
- No backend needed
- Limitation: Only works on same device/browser
2. **Simple Backend (Firebase):**
- Free tier available
- Real database storage
- Works across devices
- Requires Firebase setup (I can guide you)
For an MVP, I'd recommend Firebase. Would you like me to integrate that?
Sarah: Yes, let's use Firebase. Please guide me through what I need to do, then integrate it.
Claude: [Provides step-by-step Firebase setup instructions, then integrates Firebase code]
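For readers curious what "integrates Firebase code" means in practice, the core of it is usually one save function and one lookup function, roughly like the sketch below. The feedback collection name is an assumption, and the config placeholder gets filled in from the Firebase console, which is exactly what trips Sarah up in the next exchange.

```javascript
// Sketch of saving and looking up feedback with Firebase Firestore (v9+ modular SDK).
// The firebaseConfig values and the "feedback" collection name are placeholders.
import { initializeApp } from "firebase/app";
import {
  getFirestore, collection, addDoc, query, where, getDocs, serverTimestamp,
} from "firebase/firestore";

const firebaseConfig = { /* paste your config from the Firebase console here */ };
const db = getFirestore(initializeApp(firebaseConfig));

// Save one submission with a starting status of "New".
async function submitFeedback({ name, email, message, category }) {
  await addDoc(collection(db, "feedback"), {
    name: name || null,
    email,
    message,
    category,
    status: "New",
    createdAt: serverTimestamp(),
  });
}

// Look up every piece of feedback submitted from a given email address.
async function getFeedbackByEmail(email) {
  const snapshot = await getDocs(
    query(collection(db, "feedback"), where("email", "==", email))
  );
  return snapshot.docs.map((doc) => ({ id: doc.id, ...doc.data() }));
}
```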
Exchange 3:
Sarah: Following your instructions, I set up Firebase. But when I try to submit, I get an error about Firebase not being initialized. Can you help?
Claude: I need to see the error message. Can you copy it here?
Sarah: [Pastes error]
Claude: Ah, you need to add your Firebase config. In the Firebase console, go to Project Settings > Your apps > Firebase SDK snippet. Copy the config object and replace the placeholder in the code.
Sarah: [Does that] - It works! I just submitted test feedback and I can see it when I check my past feedback.
Claude: Great! Let me add error handling so if Firebase is down, users get a friendly message instead of a technical error.
[Adds error handling]
After 5 features, Sarah does a checkpoint:
Sarah: Let's pause and review. Can you:
1. Explain the main parts of what we've built
2. Check for any security issues
3. Identify any problems with the current approach
Claude: [Provides overview]
Main components:
- Feedback form with validation
- Firebase integration for data storage
- Past feedback viewer with email lookup
Security concerns:
- Currently anyone can view any email's feedback (just need to guess email)
- No rate limiting on submissions (could be spammed)
Suggestions:
- For MVP: Add simple rate limiting (max 10 submissions per hour)
- Future: Add user authentication for viewing feedback
Technical issues:
- Status is hard-coded (you'll need a manual process to update it)
- No admin panel to manage feedback
Sarah’s response:
Good catches. Let's add rate limiting now.
For status updates and admin panel - those are separate projects, out of scope for this MVP.
I'll manually update Firebase data for now during testing.
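A prototype-grade rate limit like the one Claude suggested can live entirely in the browser, roughly as sketched below. This is a deterrent rather than real spam protection (the handoff document later flags that), and the localStorage key name is an assumption.

```javascript
// Sketch of a client-side rate limit: max 10 submissions per rolling hour.
// Deterrent only; anything client-side can be bypassed, so production needs server-side checks.
const RATE_KEY = "feedbackSubmissionTimes"; // illustrative localStorage key
const MAX_PER_HOUR = 10;

function canSubmit() {
  const oneHourAgo = Date.now() - 60 * 60 * 1000;
  const recent = JSON.parse(localStorage.getItem(RATE_KEY) || "[]")
    .filter((timestamp) => timestamp > oneHourAgo);
  localStorage.setItem(RATE_KEY, JSON.stringify(recent)); // drop stale entries
  return recent.length < MAX_PER_HOUR;
}

function recordSubmission() {
  const times = JSON.parse(localStorage.getItem(RATE_KEY) || "[]");
  times.push(Date.now());
  localStorage.setItem(RATE_KEY, JSON.stringify(times));
}
```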
Sarah tested with 5 beta customers. Here’s one example:
User: Mike (Technical Founder)
Task: "Submit a feature request about API rate limits"
Observations:
[0:15] Loads page, immediately starts typing
[0:30] Pauses at category dropdown, hovers over options
[0:35] Selects "Feature Request"
[0:50] Submits form
[0:52] Sees success message, smiles
[1:00] Scrolls down, notices "View My Past Feedback" button
[1:05] Clicks it, enters email
[1:10] Sees his submission, nods approvingly
Total time: 1:10
Follow-up:
"How easy was that?" → "Super easy, 9/10"
"What was confusing?" → "Wasn't sure if 'Bug Report' or 'Feature Request' was right for API limits, but picked one"
"Would you use this?" → "Definitely. Way better than emailing support."
"What's missing?" → "Would be nice to get notified when status changes"
PATTERNS ACROSS 5 USERS:
✅ WORKED WELL:
- All 5 submitted feedback successfully
- Times ranged from 1:05 to 2:20 against the 90-second goal
- All rated ease of use 8-10/10
- 4/5 proactively checked their past feedback
- 5/5 said they'd use this over email
⚠️ FRICTION POINTS:
- Category confusion: 3/5 paused, unsure which to pick
- "View Past Feedback" button: 2/5 didn't notice it initially (had to scroll)
- Character counter: 2/5 mentioned wishing they knew how much they could write
🚩 CRITICAL ISSUES:
- None! All users completed successfully
FEATURE REQUESTS:
- 4/5 wanted email notifications when status changes
- 2/5 wanted to attach screenshots
- 2/5 wanted to see other customers' feedback (community feature)
Decision: POLISH (minor fixes) then deploy
Fixes:
Implemented in 20 minutes, retested with 2 users → both gave 10/10 rating.
Sarah deployed to Vercel (free tier):
customer-feedback.vercel.app
Sarah’s decision: This will become a real product → Collaborate with engineering
HANDOFF DOCUMENT:
WHY THIS MATTERS:
- Tested with 5 customers: 100% said they'd use it
- Average rating: 9.2/10
- Solves real problem (centralized feedback tracking)
- 10 customers using it in first week
PROTOTYPE VALIDATES:
- Customers want centralized feedback submission
- Status tracking is valuable (4/5 proactively checked)
- Simple UI works (no one needed help)
PRODUCTION REQUIREMENTS:
MUST HAVE:
- Admin dashboard to view all feedback, update status
- Email notifications when status changes
- Proper authentication (not just email lookup)
- Rate limiting and spam protection
SHOULD HAVE:
- Export to CSV
- Filter/search feedback
- Bulk status updates
- Integration with Slack (notify team of new feedback)
COULD HAVE:
- File attachments
- Voting on feedback
- Customer profiles
PRODUCTION CONCERNS:
- Security: Currently anyone can view any email's feedback
- Scale: Firebase free tier limits (need paid plan or migrate)
- Spam: Need better protection than current rate limit
- Data ownership: Want to self-host data
NEXT STEPS:
1. Engineering reviews prototype (this week)
2. Technical design for production version (next week)
3. Start development (2-week sprint)
4. Beta test with 20 customers (week 4)
COLLABORATION:
- Sarah: Owns product decisions, user testing
- Engineering: Owns architecture, security, code quality
- Together: Feature prioritization, technical tradeoffs
Timeline:
Results:
Engineering estimate for production: 2-3 weeks
Sarah’s reflection: “I couldn’t have coded this myself, and if I’d written a spec for engineering first, we wouldn’t have learned what we learned from real users. Vibe coding let me validate the idea fast, then hand engineering a proven concept.”
Copy-paste these templates and customize for your needs.
I want to build [type of tool].
CONTEXT:
Users: [specific user type and their characteristics]
Problem: [what specific problem this solves with real example]
Success: [what success looks like with measurable outcome]
TECHNICAL:
- [Device: Mobile/Desktop/Both]
- [Data: where does data come from/go to]
- [Performance: any speed requirements]
LET'S START WITH:
[Feature #1 - specific and testable]
Specifically:
- [Detail about feature]
- [Detail about feature]
- [Detail about feature]
Design: [aesthetic description]
Please build this and ask questions if unclear.
Now let's add [feature name].
WHAT IT SHOULD DO:
- [Specific behavior 1]
- [Specific behavior 2]
- [Specific behavior 3]
SUCCESS LOOKS LIKE:
[Concrete example of working]
KEEP EXISTING [previous features] WORKING.
Found an issue with [feature].
STEPS TO REPRODUCE:
1. [Step 1]
2. [Step 2]
3. [Step 3]
WHAT HAPPENS:
[Actual behavior]
WHAT SHOULD HAPPEN:
[Expected behavior]
Can you fix this?
The [feature] works but I want to improve [aspect].
CURRENTLY:
[How it is now]
CHANGE TO:
[How it should be]
REASON:
[Why this matters for users]
I'm confused by what you just built.
Please explain:
1. What does this code do?
2. Why did you build it this way?
3. What are the main parts?
4. What would break if I changed [X]?
Use simple language (I'm not a developer).
I need to implement [specific feature].
Please suggest 3 approaches:
1. Simplest (even if limited)
2. Most robust (handles edge cases)
3. Best user experience
For each, explain:
- How it works
- Pros and cons
- Implementation complexity
Please review what we've built:
1. What are the main components?
2. Any potential issues or problems?
3. Is the code organized well or getting messy?
4. Any security concerns?
5. What would be hardest to change later?
The [feature] is slow.
SPECIFICALLY:
[What action is slow and how slow]
Please optimize for:
- Faster load time
- Better performance
- Smoother experience
Keep all functionality the same.
The code is getting messy. Let's clean up.
ISSUES:
- [Issue 1]
- [Issue 2]
- [Issue 3]
Please refactor to address these while keeping all functionality working.
I'll test everything after to ensure nothing broke.
I need to deploy this.
Please provide:
1. All code organized in files
2. Setup instructions
3. Recommended hosting platform for this project
4. Any environment variables or config needed
I need you to explain [technical thing] in simple terms.
Assume I don't have a technical background.
Use analogies or real-world examples.
Focus on what I need to know, not how it works internally.
Quick guides for frequent use cases.
What forms are good for: Data collection, surveys, lead generation, feedback, contact forms
Basic prompt:
Create a form for [purpose].
Fields:
- [Field 1]: [Type] - [Required/Optional]
- [Field 2]: [Type] - [Required/Optional]
- [Field 3]: [Type] - [Required/Optional]
After submission:
- [What should happen]
Design: [Mobile-first / Desktop / Both]
Style: [Clean / Modern / Minimal / etc.]
Common improvements:
What dashboards are good for: Data visualization, KPIs, project tracking, analytics
Basic prompt:
Create a dashboard showing [type of data].
SECTIONS:
1. [Section 1]: [What it shows]
2. [Section 2]: [What it shows]
3. [Section 3]: [What it shows]
Data source: [Where data comes from]
Update frequency: [Real-time / Daily / Manual]
Design: [Clean layout with cards / Single page / etc.]
Common improvements:
What calculators are good for: ROI calculators, pricing estimators, converters, cost calculators
Basic prompt:
Create a [type] calculator.
INPUTS:
- [Input 1]: [Description]
- [Input 2]: [Description]
CALCULATION:
[Explain the formula or logic]
OUTPUT:
[What result to show and how to format it]
Design: Simple and focused on the calculation
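To make the CALCULATION placeholder concrete, here's what the core logic of a simple ROI calculator might look like. The inputs and formula below are just the standard ROI definition, used purely as an example.

```javascript
// Sketch of an ROI calculation: inputs are cost and gain, output is a formatted percentage.
function calculateRoi(cost, gain) {
  if (cost <= 0) return null; // guard against divide-by-zero or nonsense input
  const roi = ((gain - cost) / cost) * 100;
  return `${roi.toFixed(1)}%`;
}

// Example: spending $2,000 to generate $5,000 in return gives "150.0%".
console.log(calculateRoi(2000, 5000));
```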
Common improvements:
What lists are good for: Task tracking, inventory, contacts, order history
Basic prompt:
Create a [type] list showing [items].
Each item displays:
- [Field 1]
- [Field 2]
- [Field 3]
Features:
- [Add new items]
- [Edit existing items]
- [Delete items]
- [Sort by X]
Design: [Clean table / Card layout / etc.]
Common improvements:
What these are good for: Training, assessments, engagement, fun internal tools
Basic prompt:
Create a [quiz/game] about [topic].
STRUCTURE:
- [Number] questions
- [Multiple choice / True-false / etc.]
- [Scoring system]
FLOW:
1. [Start screen]
2. [Question screen]
3. [Results screen]
Design: Fun and engaging
Common improvements:
Common problems and how to solve them.
Symptoms:
Solutions:
1. Start a new chat with full context
I'm continuing work on [project].
COMPLETED FEATURES:
✅ [Feature 1]
✅ [Feature 2]
✅ [Feature 3]
CURRENT ISSUE:
[Specific problem]
Please [fix/add] this without breaking existing features.
2. Ask Claude to be more careful
When you make changes, please:
1. Only modify the specific feature I mentioned
2. Leave all other features untouched
3. Test that previous features still work
4. Tell me what you changed and why
3. Consider starting over with clearer architecture
Sometimes it’s faster to rebuild cleanly than fix a mess.
Symptoms:
Solutions:
1. Ask for a plain-English explanation
Please explain what you just built:
1. What are the main parts?
2. What does each part do?
3. How do they work together?
Use simple language (I'm not a developer).
Use analogies if helpful.
2. Ask for a diagram
Can you create a simple text diagram showing:
- Main components
- How data flows
- What triggers what
Keep it high-level (I don't need code details).
3. Request code comments
Please add comments to the code explaining:
- What each main section does
- Why you chose this approach
- What I should know if I need to change something
Symptoms:
Solutions:
1. Simplify the feature
This feature is taking too long. Let's simplify.
CURRENT: [Complex version]
SIMPLIFIED: [Just the essential parts]
Build the simplified version first.
We can add complexity later if needed.
2. Break it into smaller pieces
Let's break [feature] into steps:
STEP 1: [Smallest possible first step]
Build just this first. I'll test it before we continue.
3. Question if it’s feasible
This seems difficult. Before we continue:
1. Is this feature feasible for a prototype?
2. Are there simpler alternatives?
3. Should this be handled differently?
Symptoms:
Solutions:
The UI is broken on mobile devices.
ISSUES:
- [Specific issue 1]
- [Specific issue 2]
Please make it mobile-responsive:
- All text should be readable (min 16px)
- All buttons should be tappable (min 44px height)
- Content should fit screen without horizontal scroll
- Spacing should be comfortable for thumb tapping
I'll test on mobile after you update.
Symptoms:
Solutions:
Form validation needs fixing.
WHAT SHOULD HAPPEN:
- Required fields: Show error if empty
- Email field: Validate proper format
- [Other validations]
ERROR MESSAGES:
- Show below each field
- Use red text
- Be specific ("Email must be valid format")
- Don't allow submission until all valid
Please fix validation and show me error states.
Symptoms:
Solutions:
1. Check what storage method is being used
Where is data currently being stored?
- LocalStorage?
- Firebase?
- Other?
Please explain and show me how to verify data is saving.
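If the answer turns out to be LocalStorage, you can verify saves yourself with a couple of lines in the browser console (open DevTools). The "feedback" key below is an assumption; ask Claude which key your app actually uses.

```javascript
// Run these in the browser console to check what your app has saved.
// The "feedback" key is illustrative; your app's key may differ.
Object.keys(localStorage);                      // lists every key your app has stored
localStorage.getItem("feedback");               // shows the raw saved data (null if nothing saved)
JSON.parse(localStorage.getItem("feedback") || "[]").length;  // how many items are saved
```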
2. Add proper error handling
When data fails to save, please:
- Show user-friendly error message
- Log the error so we can debug
- Prevent data loss (keep form filled)
Symptoms:
Solutions:
1. Test in incognito/private mode
2. Get specific error details
Ask testers to:
- Take screenshot of error
- Tell you what device/browser
- Describe what they did before error
3. Check browser compatibility
Are there any browser-specific features in the code?
Please make it work in:
- Chrome
- Firefox
- Safari
- Mobile browsers
Use widely-supported code only.
Symptoms:
Solutions:
Performance is slow, specifically [describe what's slow].
Please optimize:
- Reduce file sizes
- Load data more efficiently
- Make interactions feel instant
- Add loading indicators where needed
Goal: Everything should respond in under 2 seconds.
Symptoms:
Solutions:
We keep hitting the same issue: [describe pattern].
This suggests a fundamental problem.
Please:
1. Identify the root cause
2. Suggest a better approach
3. Implement a fix that prevents this issue
I'm willing to refactor if needed.
Symptoms:
Solutions:
1. Go back to Step 1
2. Create two versions
Build V1 first, test it, then decide on V2.
3. Ask yourself:
Terms you might encounter while vibe coding.
Artifact Claude’s term for generated code/apps that appear in a preview panel. You can interact with artifacts in real-time.
API (Application Programming Interface) A way for different software to talk to each other. Example: Using Google Maps in your app.
Backend The server-side part of an app that handles data, logic, and storage. You typically don’t see this part.
Bug An error or flaw that makes something not work as intended.
CSS (Cascading Style Sheets) Code that controls how things look (colors, sizes, spacing).
Database Where data is stored permanently. Examples: Firebase, PostgreSQL.
Deploy Making your prototype available on the internet for others to access.
Edge Case An unusual scenario that might break your app. Example: User enters 1000-character name.
Frontend The user-facing part of an app—what people see and interact with.
Framework Pre-built code that provides structure. Examples: React, Vue, Next.js.
HTML The code that defines content structure (headings, buttons, forms, etc.).
JavaScript (JS) Programming language that makes websites interactive.
JSON A format for organizing data. Looks like: {"name": "Sarah", "role": "PM"}
LocalStorage Storage in the user’s browser. Data persists but only on that device.
MVP (Minimum Viable Product) Simplest version that delivers core value and can be tested with real users.
POC (Proof of Concept) Quick prototype to prove something is technically possible.
React Popular framework for building user interfaces.
Refactor Rewriting code to be cleaner/better without changing functionality.
Responsive Design Design that adapts to different screen sizes (mobile, tablet, desktop).
UI (User Interface) What users see and interact with.
UX (User Experience) How it feels to use something (easy, confusing, fast, frustrating, etc.).
Validation Checking that user input is correct (e.g., email format, required fields).
Vercel / Netlify / Replit Platforms for hosting web apps (making them accessible via URL).
You now have a complete framework for turning ideas into working prototypes using AI.
Remember the core principles:
What makes vibe coding successful:
✅ Clear problem definition
✅ Specific, testable requirements
✅ Willingness to iterate
✅ Real user validation
✅ Honest assessment of quality
What kills vibe coding projects:
❌ Vague goals
❌ Scope creep
❌ No user testing
❌ Trying to build production systems
❌ Giving up after first try
If this is your first vibe coding project:
If you’re experienced with vibe coding:
“How long until I’m good at this?”
Your first project: 8-12 hours total
Your third project: 4-6 hours total
Your tenth project: 2-4 hours total
“What if I get stuck?”
“Should I learn to code?” Not necessary for vibe coding! But if you’re curious, you’ll naturally pick up concepts. Focus on product thinking, not syntax.
“Can this really replace developers?” No. Vibe coding is for rapid prototyping and validation. Production systems need proper engineering. Think of this as a superpower for product people, not a replacement for dev teams.
“What should I build first?” Build something that solves a real problem you or your team has. Internal tools are great first projects because:
Keep Learning:
Share Your Success: When you ship something, share it! The vibe coding community grows when we see what’s possible.
The ability to turn ideas into working prototypes in hours instead of months is a superpower. Use it wisely.
The future of product building isn’t about replacing people—it’s about empowering more people to build. You’re now one of them.
Now go build something.
Version 1.0 - October 2025
Questions? Feedback? Improvements? Share them with the community.
Credits:
License: This guide is meant to be shared. Use it, improve it, teach it to others.