According to MIT Technology Review, the software development industry is undergoing a significant shift from “vibe coding” to “context engineering” in 2025. Andrej Karpathy first coined the term “vibe coding” in February 2025, sparking immediate debate across the industry. Thoughtworks teams expressed skepticism about the approach in their April technology podcast, noting concerns about how it might evolve. The latest Thoughtworks Technology Radar reveals that antipatterns have been proliferating, with complacency about AI-generated code becoming a major issue. As users demanded more and prompts grew larger, model reliability started to falter, driving increased interest in engineering context properly. This shift comes as organizations increasingly develop and leverage AI agents and agentic systems.
The vibe coding hangover
Remember when everyone got excited about just vibing with AI to write code? Yeah, that didn’t last long. The initial enthusiasm for Karpathy’s vibe coding concept quickly ran into reality. Thoughtworks documented this skepticism in their technology podcast back in April, and honestly, they were right to be cautious.
Here’s what happened: developers got complacent. They’d throw vague prompts at AI and expect perfect code back, and that complacency about AI-generated code became a real problem. Prompts got longer, expectations got higher, but the models couldn’t keep up. Basically, we learned that vibes alone don’t scale.
Why context engineering matters now
So we’re pivoting to context engineering. But what does that actually mean? It’s about deliberately structuring the information we feed to AI systems rather than just winging it. Thoughtworks has been working with tools like Claude Code and Augment Code, and they’ve found that proper “knowledge priming” makes all the difference.
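To make that concrete, here’s a minimal sketch of what knowledge priming could look like in practice. This is an illustration, not Thoughtworks’ actual tooling: the `build_primed_prompt` helper and the doc paths are hypothetical, and the idea is simply to curate stable project knowledge once and prepend it to every task prompt.

```python
# Hypothetical sketch of "knowledge priming": curate stable project
# context up front, then prepend it to every task-specific prompt.
from pathlib import Path

# Hypothetical docs an engineer would curate for the model.
PRIMING_DOCS = [
    "docs/architecture.md",    # how the system fits together
    "docs/conventions.md",     # naming, testing, and style rules
    "docs/domain-glossary.md", # business terms the model must use correctly
]

def build_primed_prompt(task: str, repo_root: str = ".") -> str:
    """Prepend curated project knowledge to a task-specific prompt."""
    sections = []
    for doc in PRIMING_DOCS:
        path = Path(repo_root) / doc
        if path.exists():
            sections.append(f"## {doc}\n{path.read_text()}")
    context = "\n\n".join(sections) or "(no priming docs found)"
    return (
        "You are working in this codebase. Ground every answer in the "
        "project context below.\n\n"
        f"{context}\n\n## Task\n{task}"
    )

# Usage: the model sees architecture and conventions before the ask.
prompt = build_primed_prompt("Add retry logic to the payment client.")
```

The point isn’t the specific helper; it’s that the context is deliberate and repeatable instead of retyped ad hoc into a chat window.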
The results are pretty compelling. When you give AI the right context, you get more reliable outputs. Fewer rewrites. Better software. They’ve even seen success using generative AI to understand legacy codebases – and in some cases, rebuilding applications without full source code access.
The counterintuitive twist
Here’s where it gets interesting. You’d think more context means more data and more detail, right? Wrong. Thoughtworks discovered something surprising when using generative AI for forward engineering.
AI actually performs better when it’s further abstracted from the underlying system. When you remove some of the legacy code specifics, the solution space widens. The AI gets more creative. It’s not about drowning the model in details – it’s about giving it the right kind of context that enables better thinking.
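One way to picture the difference, as a hypothetical sketch rather than Thoughtworks’ actual method: instead of pasting raw legacy source into the prompt, you feed the model an abstracted behavioral spec and let it propose a fresh design. Both helper functions below are invented for illustration.

```python
# Hypothetical contrast: raw legacy detail vs. an abstracted spec.
# The abstracted version leaves the model free to propose a new design
# instead of mimicking the old implementation line by line.

def detail_heavy_context(legacy_source: str, task: str) -> str:
    """Anchors the model to the old implementation (the narrow option)."""
    return f"Here is the legacy code:\n{legacy_source}\n\nTask: {task}"

def abstracted_context(behavior_spec: str, task: str) -> str:
    """Describes what the system does, not how it currently does it."""
    return (
        "A system exists with the following observed behavior:\n"
        f"{behavior_spec}\n\n"
        f"Design a modern implementation of this behavior. Task: {task}"
    )

spec = (
    "- Accepts CSV order files nightly\n"
    "- Validates totals against the ledger\n"
    "- Emits a reconciliation report on mismatch"
)
prompt = abstracted_context(spec, "Rebuild this as an event-driven service.")
```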
The agent reckoning
Now here’s the real driver behind this shift: AI agents. Everyone wants to build them or use them, but agents demand way more sophisticated context handling. They’re not just executing predefined tasks – they need to navigate complex, dynamic situations.
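In practice, that means the agent’s context can’t be fixed up front; it has to be rebuilt as the situation changes. Here’s a rough sketch of that idea, with everything hypothetical: the `call_model` and `run_tool` callables stand in for whatever model and tools you’re actually using, and the retrieval is deliberately naive.

```python
# Hypothetical agent loop: context is re-assembled on every step
# from whatever the current situation requires, not frozen in one prompt.

def select_relevant(knowledge: dict[str, str], observation: str) -> str:
    """Naive retrieval: keep only docs whose name appears in the observation."""
    hits = [text for name, text in knowledge.items()
            if name.lower() in observation.lower()]
    return "\n\n".join(hits) or "(no matching project docs)"

def run_agent(goal: str, knowledge: dict[str, str], call_model, run_tool,
              max_steps: int = 5) -> None:
    observation = goal
    for _ in range(max_steps):
        # Re-engineer the context for this step's situation.
        context = select_relevant(knowledge, observation)
        action = call_model(
            f"Goal: {goal}\n\nRelevant context:\n{context}\n\n"
            f"Latest observation: {observation}\n\nNext action, or DONE:"
        )
        if action.strip() == "DONE":
            return
        observation = run_tool(action)  # tool result feeds the next step's context
```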
And that requires significant human intervention to set up properly. We’re learning that you can’t just vibe your way through agent development. The industry is being forced to mature, to move beyond quick prompts and toward deliberate context engineering. It’s messy, it’s challenging, but it’s where the real progress in AI is happening.
