The AI Dev Tools Co-pilot Paradox
The most misleading thing about AI dev tools right now is their growth numbers.
An Upekkha startup CTO spent $40 on Cursor and built in 2 weeks what would have taken 3 interns 2 months. Pure ROI: a $40 tool replacing $10,000 of work. For him, it wasn't even a decision – it was a no-brainer. No wonder Cursor crossed a $100M run rate in 2-3 years.
A non-tech founder I know, struggling to hire developers, decided to code himself. He burnt through subscription tiers – $20, $50, $100 – jumping between Bolt.new and Lovable, chasing the promise of effortless development. Four weeks later, he was still battling CORS errors in a half-working codebase.
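The CORS errors that stalled him are a good example of the kind of problem vibe coding hides: the browser blocks cross-origin requests unless the server explicitly opts in with an `Access-Control-Allow-Origin` header. A minimal sketch of the server-side logic (the origin names below are hypothetical, not from his project):

```python
# Browsers enforce the same-origin policy: a frontend on one origin cannot
# read responses from an API on another origin unless the API says so.
# The fix is a response header, not anything in the frontend code.

ALLOWED_ORIGINS = {"https://app.example.com"}  # hypothetical frontend origin

def cors_headers(request_origin: str) -> dict:
    """Return the CORS headers a server should attach for a given Origin."""
    if request_origin in ALLOWED_ORIGINS:
        return {
            "Access-Control-Allow-Origin": request_origin,
            "Access-Control-Allow-Methods": "GET, POST, OPTIONS",
            "Access-Control-Allow-Headers": "Content-Type",
        }
    return {}  # no header returned -> the browser blocks the response

print(cors_headers("https://app.example.com"))
print(cors_headers("https://unknown.example"))  # empty: request is blocked
```

Nothing here is hard once you know where to look – which is exactly the point: the AI can generate both halves of the app, but it takes a mental model of the browser's security rules to know why the two halves refuse to talk.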
Two different AI dev tools, radically different outcomes.
These AI dev tools promise to make programming accessible to everyone. Feed them English, get working code. What Andrej Karpathy calls "vibe coding" – having natural conversations with AI to build software. No more wrestling with syntax or debugging. The dream of democratized software development.
AI tools don't seem to democratize skills – they amplify existing abilities.
Expert developers like Karpathy use these tools as force multipliers, understanding exactly when to use them and when not to. But novices miss the fundamental lessons that come from understanding the basics first.
While experienced developers become dramatically more productive – often 5x their normal output – beginners actually become less effective, learning at half their usual pace. The divide widens: the tools amplify existing skill at one end while obscuring fundamental learning at the other. Perhaps most concerning, the traditional path to expertise becomes increasingly unclear, because the shortcuts these tools provide can mask the understanding needed for true mastery.
This also reveals a broader design pattern in building AI applications:
Level 1 - Chatbots (basic Q&A; pre-GPT-era intelligence)
Level 2 - Assistants (like Claude; can handle very complex questions in chat)
Level 3 - Co-pilots (like Cursor)
Level 4 - Agents (like OpenAI's Operator)
Level 5 - Autonomous Agents (like Waymo; magical, fully automated)
Different domains need different levels. For cars, full autonomy makes sense. For coding? The sweet spot seems to be co-pilot.
Autonomous coding sounds amazing – feed in specifications, get working software. But experienced developers are overwhelmingly choosing co-pilots instead. They'd rather learn to prompt effectively than delegate completely.
The biggest reason is technical debt. No-code and autonomous coding tools create what one developer called "architecture by accident" – it works until it really doesn't. The debt compounds in ways that are hard to even understand, let alone fix.
The hardest part of programming isn't writing code – it's knowing what code to write. An AI can generate endless variations of implementations, but it can't tell you which one your system needs.
Look at Waymo: it took 10 years to map San Francisco's controlled domain of roads. Code can branch in exponentially more ways than roads, with far fewer guardrails. That's why coding tools work better as co-pilots than as autonomous agents.
In terms of business metrics, both co-pilots and autonomous tools are showing growth beyond the previous era's venture-scale expectation of Triple, Triple, Double, Double, Double (T2D3) growth. But experienced investors look deeper, at Net Revenue Retention (NRR). NRR measures how much existing customers increase their spending year over year – the true sign of product stickiness. When you look at the retention metrics of these tools, the answers diverge.
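The NRR arithmetic is simple to sketch. A minimal version, using the standard formula (starting revenue plus expansion, minus contraction and churn, over starting revenue) with purely illustrative numbers – not figures from any of the tools above:

```python
def net_revenue_retention(start_arr: float, expansion: float,
                          contraction: float, churn: float) -> float:
    """NRR as a percentage for one customer cohort over one year.

    start_arr:   the cohort's annual recurring revenue a year ago
    expansion:   upsells/seat growth from customers who stayed
    contraction: downgrades from customers who stayed
    churn:       revenue lost from customers who left entirely
    """
    return 100 * (start_arr + expansion - contraction - churn) / start_arr

# Hypothetical cohort: $1M ARR a year ago, $300k of expansion,
# $50k of downgrades, $50k of churn -> the cohort retains 120%.
print(net_revenue_retention(1_000_000, 300_000, 50_000, 50_000))  # 120.0
```

An NRR above 100% means existing customers alone grow revenue even with zero new sales – the stickiness signal investors are probing for; below 100%, headline growth is being propped up by new logos replacing leaky ones.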
The future of programming may not be automation but augmentation. The best tools won't be the ones that try to replace programmers, but the ones that make good programmers exceptional.
You can't shortcut mastery. But you can amplify it.