The Prototype Expectation Gap: What Happened When AI Made the Impossible Routine
Discover how AI tools like Claude and V0.dev are transforming UX and product consulting. Learn why clients now expect working prototypes early, what skills are shifting, and how to adapt.
Futurist AJ Bubb, founder of MxP Studio, and host of Facing Disruption, bridges people and AI to accelerate innovation and business growth.
Three months ago, a UX strategist I’d worked with called me frustrated. A client had just shown her examples from another consultant—working prototypes, not wireframes. Interactive apps you could test immediately. “Why,” the client asked, “am I paying you for static mockups?”
She wasn’t looking for sympathy. She wanted to understand what had changed.
I’ve had five similar conversations since then. Different cities, different specializations, same core problem: deliverable expectations are shifting faster than skillsets. Something fundamental is happening in consulting and product work, and it’s worth examining carefully, without panic but without denial either.
What Changed and When
In early 2023, if you were hired for early-stage product strategy, these deliverables were standard and professional:
Annotated wireframes
User flow diagrams
Clickable Figma prototypes
Research synthesis documents
Working code came later, after strategy approval, handed off to developers. This made economic sense. Strategy work cost $150-200/hour. Development cost $150-250/hour. You didn’t write code during the exploration phase because it was expensive and inflexible.
By mid-2024, something shifted. Clients started asking why they couldn’t test actual prototypes earlier. Not aggressive clients with unreasonable demands—normal clients who’d seen what was newly possible and wanted to know why their consultants weren’t offering it.
The shift correlates directly with the maturation of AI development tools. Claude 3.5 Sonnet (released June 2024) could architect and build functional prototypes through conversation. V0.dev made React component generation trivial. Cursor and similar tools compressed development time dramatically.
These tools didn’t just make development faster—they collapsed the cost structure that justified the old phase-gate approach.
The Economics That Changed
Here’s what happened to the math:
Traditional approach:
Strategy/UX work: 40 hours at $175/hr = $7,000
Design mockups: 30 hours at $150/hr = $4,500
Frontend prototype: 60 hours at $200/hr = $12,000
Total: $23,500, timeline: 6-8 weeks
AI-assisted approach:
Strategy + working prototype: 50 hours at $175/hr = $8,750
Total: $8,750, timeline: 1-2 weeks
The second approach isn’t just faster—it produces something testable with users immediately, which means you validate assumptions weeks earlier.
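The comparison above is simple arithmetic, and it’s worth seeing how lopsided it is when written out. A minimal sketch, using the hypothetical hours and rates from the lists above (the phase names and numbers are illustrative, not billing data from a real engagement):

```python
# Each phase is (hours, hourly rate), taken from the comparison above.
traditional = [
    (40, 175),  # strategy/UX work
    (30, 150),  # design mockups
    (60, 200),  # frontend prototype
]
ai_assisted = [
    (50, 175),  # strategy + working prototype in one pass
]

def total_cost(phases):
    """Sum hours * rate across every phase of an engagement."""
    return sum(hours * rate for hours, rate in phases)

print(total_cost(traditional))                             # 23500
print(total_cost(ai_assisted))                             # 8750
print(total_cost(traditional) - total_cost(ai_assisted))   # 14750
```

Nearly $15,000 less per engagement, before accounting for the four to six weeks of calendar time saved.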
I’ve watched three consultancies I know restructure their offerings in the past six months. One agency that specialized in UX research now delivers “research with working prototypes.” They’re using Claude to convert findings directly into testable interfaces. They’re not charging less—they’re delivering more value in less time and winning contracts they would have lost a year ago.
Where the Tools Actually Work (And Where They Don’t)
This matters: AI development tools are not magic. They have specific capabilities and clear limitations.
What they do well:
Standard CRUD applications
Dashboard interfaces
Form-heavy workflows
Common interaction patterns
Prototypes for user testing (where bugs are acceptable)
What they struggle with:
Complex state management
Performance optimization
Accessibility edge cases
Security implementations
Novel interactions without examples
A consultant using Claude can build a functional prototype of a project management dashboard in days. That same consultant cannot build a production-ready version without significant additional expertise. The prototype might have race conditions, accessibility issues, or security vulnerabilities that would fail any serious code review.
This distinction is important. The tools enable rapid prototyping, not rapid production development. But for early-stage product work, prototyping is exactly what matters.
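To make the prototype-versus-production distinction concrete, here is a minimal sketch of the kind of standard CRUD logic these tools generate reliably. The names (`TaskStore`, the field names) are hypothetical, not from any real codebase, and the comments flag the prototype shortcuts that would fail a serious code review:

```python
import itertools

class TaskStore:
    """In-memory task CRUD: fine for a user-testing prototype,
    nowhere near production."""

    def __init__(self):
        self._tasks = {}                # id -> task dict; no persistence
        self._ids = itertools.count(1)  # naive id generation; not safe across processes

    def create(self, title):
        task_id = next(self._ids)
        self._tasks[task_id] = {"id": task_id, "title": title, "done": False}
        return self._tasks[task_id]

    def update(self, task_id, **fields):
        # No validation or auth check: any caller can set any field.
        self._tasks[task_id].update(fields)
        return self._tasks[task_id]

    def delete(self, task_id):
        self._tasks.pop(task_id, None)

    def list(self):
        return list(self._tasks.values())

store = TaskStore()
task = store.create("Test navigation variant A")
store.update(task["id"], done=True)
print(store.list())
```

Everything here works, and a user can click through a UI backed by it. But the missing pieces, persistence, validation, authorization, and concurrency control, are exactly the gap between a testable prototype and production code.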
The Real Skill Shift (Not the Obvious One)
The obvious take is: “UX strategists need to learn to code.” That’s directionally correct but misses the nuance.
The actual skill shift is from “document what should be built” to “rapidly build testable versions of ideas.” These sound similar but require different capabilities:
Old skillset:
Synthesize research into requirements
Create clear specifications
Design comprehensive documentation
Communicate intent across handoffs
New skillset:
Synthesize research into requirements (still needed)
Prompt AI tools effectively to build working versions
Debug and iterate on generated code
Test and validate directly with users
The second list isn’t easier—it’s different. You need enough technical literacy to guide AI tools, recognize when they’re producing garbage, and fix common issues. You don’t need to be a senior developer, but you can’t be technically helpless either.
That UX strategist? She spent three weeks learning enough React basics to understand what Claude was generating for her. Not to write React from scratch, but to modify it, fix obvious bugs, and integrate components. She described it as “learning enough to have a conversation with the AI, not enough to replace a developer.”
Her next client meeting included a working prototype. She won the contract.
The Strategic Thinking Question
Here’s the counterargument I hear most: “But clients are paying for strategic thinking, not code. AI tools are just implementation.”
This sounds right but doesn’t match what’s happening in practice.
Strategic thinking still matters—but clients increasingly evaluate it through working artifacts, not documents. A strategy doc that says “users need clearer navigation” is abstract. A prototype they can actually test where you’ve implemented three different navigation approaches is concrete.
The strategy hasn’t changed. The medium for demonstrating strategic insight has changed.
Think about architecture. An architect’s value is in spatial understanding, structural knowledge, and design thinking. But they communicate this through drawings, models, and specifications—not just verbal descriptions. When CAD tools revolutionized architecture, the ones who adapted weren’t abandoning their expertise. They were finding better ways to communicate it.
This feels similar. UX strategists who adopt AI development tools aren’t abandoning strategy. They’re finding more effective ways to test and communicate strategic choices.
What This Means for Different Roles
For UX strategists and researchers: You’re in the uncomfortable position of needing to expand your toolkit or partner differently. The good news: these tools lower the barrier to building prototypes dramatically. The bad news: there’s still a learning curve, and your competitors are already on it.
For designers: The line between design and development is blurring for prototyping work. High-fidelity mockups and coded prototypes are converging. The question is whether you want to control that convergence or have it happen around you.
For developers: Junior and mid-level developers doing straightforward implementation work are most exposed. Senior developers doing architecture, optimization, and complex problem-solving are fine—that’s still beyond AI capabilities. But if your primary value is converting designs to code, that’s increasingly automatable.
For clients and product leaders: You now have options that didn’t exist 18 months ago. You can get testable prototypes earlier, cheaper, and with tighter iteration loops. But you need to understand that “working prototype” and “production-ready code” are still very different things.
The Uncomfortable Truth About Adaptation
Six months ago, knowing how to use AI development tools was a nice advantage. Today, in many consulting contexts, it’s becoming table stakes.
This isn’t fair. The shift happened faster than professionals can reasonably retrain. People built careers around specific skill combinations that made sense in 2022 but are misaligned with 2024 expectations.
But unfair doesn’t mean optional.
I know designers who are angry about this shift, and they’re right to be. They spent years mastering their craft, and now clients are accepting AI-generated interfaces that are “good enough” compared to carefully considered design work. The market is rewarding speed over polish in ways that feel like a race to the bottom.
I also know consultants who adopted these tools early and are now winning work they would have lost. They’re not better strategists or designers. They’re just delivering in the format clients now expect.
You can argue the clients are wrong. You can say they don’t understand the value of traditional approaches. You can be absolutely correct in your assessment. And you can still lose the contract to someone who delivered a working prototype.
What “Learning AI Tools” Actually Looks Like
If you decide to adapt—and I think you should—here’s what that learning path actually involves:
Phase 1: Basic literacy (2-4 weeks)
Understand fundamental web concepts (HTML, CSS, JavaScript basics)
Learn what’s easy vs. hard for AI tools to generate
Practice prompting tools like Claude effectively
Build 3-5 simple prototypes to understand the process
Phase 2: Productive capability (2-3 months)
Get comfortable debugging common issues
Learn to integrate AI-generated components
Develop workflow for iterating on prototypes
Build prototypes complex enough to test real scenarios
Phase 3: Professional competence (4-6 months)
Understand when to use AI tools vs. when to involve developers
Handle client conversations about technical tradeoffs
Distinguish prototype quality from production quality
Integrate prototyping into your existing strategic process
This isn’t trivial. It’s a real investment. But it’s also not learning to become a software engineer. It’s learning enough to leverage tools that dramatically expand your capabilities.
What Happens Next
Two scenarios seem likely:
Scenario 1: Market correction
Clients realize that AI-generated prototypes create technical debt and maintenance nightmares. They pull back toward traditional processes with clearer specialization. The expectation shift reverses.
Scenario 2: Market evolution
The tools continue improving. The gap between “prototype quality” and “production quality” narrows. What feels like a dramatic shift today becomes normal, and new norms develop around it.
Based on the past 18 months, Scenario 2 looks more likely. The tools are improving monthly, not stagnating. Client expectations are still rising, not stabilizing. The consultants adapting are thriving, not struggling.
But I could be wrong. Markets surprise us. Technologies plateau. Backlashes happen.
What I’m confident about: waiting to see which scenario plays out is a losing strategy. If Scenario 1 happens, the time you spent learning AI tools isn’t wasted—you gained capabilities. If Scenario 2 happens and you didn’t adapt, you’re playing catch-up while competitors are established.
The Actual Choice
That UX strategist who called me three months ago? Her next project included a working prototype built with Claude. The client tested it with users in week two instead of week eight. They found problems early, fixed them cheaply, and launched faster than their original timeline.
She’s not a developer now. She’s a strategist who can rapidly manifest her strategic thinking in testable form. That’s a different capability than she had a year ago, and it’s proving more valuable in today’s market.
You can debate whether this shift is good for the industry. You can argue about craft and quality and the value of specialized expertise. These are legitimate discussions worth having.
But have them while learning the tools, not instead of learning them.
The market moved. The tools evolved. The expectations shifted. What you do about that is genuinely up to you, but pretending you don’t have to do anything is choosing the hardest possible path.


