The data has changed
Rachel Thomas: Breaking the Spell of Vibe Coding
It is worth experimenting with AI coding agents to see what they can do, but don’t abandon the development of your current skillset. Part of the appeal of vibe coding is claimed extrapolation about how effective it will be 6 or 12 months from now. These predictions are pure guesswork, often based more on hope than reality.
Thomas also links to this study, which I'm sure you've seen cited a bunch since it supposedly "proves" that AI coding is actually slower than traditional coding. Of course, the study's own summary explicitly disclaims that reading:
We do not provide evidence that: AI systems do not currently speed up many or most software developers
But set that aside; I'd like to call out which tools were actually used in this test (again, quoting the study):
Cursor Pro with Claude 3.5/3.7 Sonnet—frontier models at the time of the study
Totally fair, and absolutely what they should have tested at the time, but as Steve Yegge recently pointed out:
But if you haven’t used specifically Opus 4.5/4.6 with specifically Claude Code for at least an hour, then you’re in for a real shock. Because all your complaining about AI not being useful for real-world tasks is obsolete. AI coding hit an event horizon on November 24th, 2025. It’s the real deal. And unfortunately, all your other tools and models are pretty terrible in comparison.
This makes me think back to a college professor who once told us that YouTube would never host any serious video because the internet simply didn't have the bandwidth to serve high-resolution video like you'd get from a Blu-ray. Sure, Blu-rays still have superior visual quality to YouTube, but what I can watch on YouTube today is far higher fidelity than anything available back in 2005 when he said that. I can also literally buy Hollywood movies on YouTube and watch them right on youtube.com.
By the same token, I agree with the METR study's finding that experienced developers were a bit slower with the AI coding tools of early 2025, and if the tools had never gotten better, I'd still agree with it today. That's simply not the case, though. A year ago I was giving Sonnet 3.5 in Cursor tiny tasks that it would get right 40% of the time. Today, I can give Opus 4.6 in Claude Code entire features to build out, and it gets them right 90% of the time.
I'll close by quoting Rachel Thomas once more:
AI coding agents can produce syntactically correct code. However, they don’t produce useful layers of abstraction nor meaningful modularization. They don’t value conciseness or improving organization in a large code base. We have automated coding, but not software engineering.
Similarly, AI can produce grammatically correct, plausible sounding text. However, it does not directly sharpen your ideas. It does not generate the most precise formulations or identify the heart of the matter.
Absolutely true. Generating code is undeniably an important part of the job, but it's not everything. While I don't doubt that AI will get better at those other parts too (though I think it's particularly well-suited to writing code), I still think knowledgeable humans using these tools will be the most successful way to deploy this technology.
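To make Thomas's abstraction point concrete, here's a hypothetical sketch (my own illustration, not code from her post or the study) of the pattern she's describing: each function is syntactically correct on its own, but the validation rules are pasted into every handler instead of living in a shared helper.

```python
# Hypothetical illustration: syntactically correct code where the same
# validation logic is duplicated verbatim across handlers.

def handle_signup(payload: dict) -> dict:
    if "@" not in payload.get("email", ""):
        raise ValueError("invalid email")
    if not payload.get("name", "").strip():
        raise ValueError("missing name")
    return {"email": payload["email"].lower(), "name": payload["name"].strip()}

def handle_invite(payload: dict) -> dict:
    # Same checks copied again instead of being factored out.
    if "@" not in payload.get("email", ""):
        raise ValueError("invalid email")
    if not payload.get("name", "").strip():
        raise ValueError("missing name")
    return {"email": payload["email"].lower(), "invited_by": payload.get("invited_by")}

# The small abstraction a reviewer would reach for: one shared helper
# both handlers call, so the rules live in exactly one place.
def validated_email_and_name(payload: dict) -> tuple[str, str]:
    if "@" not in payload.get("email", ""):
        raise ValueError("invalid email")
    if not payload.get("name", "").strip():
        raise ValueError("missing name")
    return payload["email"].lower(), payload["name"].strip()
```

Nothing in the duplicated version is wrong, which is exactly the point: it compiles, it passes tests, and it still takes a human to notice that the third function should have existed from the start.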
Not to turn this into a plug for the sub, but I wrote more about this in a members-only post yesterday as well.