2026-03-24

Agentic Development Is Not Vibe Coding

Vibe coding is prompting and hoping. Agentic development is deliberate engineering with AI as a force multiplier. Conflating the two produces the worst outcomes of both.

There's a phrase spreading through the industry right now — "vibe coding" — that describes writing software by feel: prompt an AI, accept what comes back, ship it, and move on. It's fast, it's fun, and it produces software that works until it very suddenly doesn't.

Agentic development is being lumped into the same category. That's a mistake, and it's costing teams that should know better.

What vibe coding actually is

Vibe coding, as originally coined, is a specific and deliberate choice: treat the LLM as the primary author, maintain minimal oversight, and accept that understanding every line of what gets produced is not the goal. The premise is that for certain categories of low-stakes work — prototypes, personal tools, throwaway scripts — the speed trade-off is worth it.

That framing is honest. Where it breaks down is when people apply it to production systems, funded products, or anything a customer will eventually depend on.

At that scale, vibe coding is not a methodology. It's deferred reckoning.

Where agentic development is different

Agentic development is not a looser version of normal engineering. It's a tighter one.

When you use an AI agent to implement a feature, you're delegating the execution of a specific, bounded task to a system that will interpret your instructions literally, make local decisions you haven't anticipated, and produce output you'll need to review. That process only works if the inputs are precise and the outputs are verified.

This means agentic development requires more upfront discipline than writing code yourself. The requirement has to be fully specified before the agent runs — because the agent will implement exactly what it's told, including the gaps. The architecture has to be settled before the agent touches it — because agents are good at implementing designs and bad at questioning them. And the output has to be reviewed by someone who understands both what was asked and what was produced.

The analogy is not "chatting with a smart assistant." It's closer to delegating to a very fast, very literal contractor who will complete the job exactly as scoped and invoice you for the rework when the scope was wrong.

The confusion is costing teams

The conflation matters in practice because it leads teams to adopt the wrong habits.

Teams that treat agentic development as vibe coding tend to share a pattern: they move fast in the first few weeks, accumulate a codebase that nobody fully understands, and then hit a wall where adding anything new risks breaking something unexpected. The AI-generated code looks clean. The tests pass. But the architecture was never validated, the edge cases were never considered, and the assumptions made in week one are now load-bearing.

This is not an AI problem. It's a process problem that AI accelerates.

Conversely, teams that apply real engineering discipline to their agentic workflows — clear specifications, reviewed artefacts, deliberate architecture, genuine test coverage — ship faster than teams that don't use AI at all, and produce codebases that hold up. The agent is handling implementation volume. The humans are handling judgment.

The habits that make it work

The engineers who get the most from agentic development share a few consistent practices.

They specify before they prompt. A vague prompt produces plausible-looking output that requires heavy revision. A precise specification — what the function does, what it doesn't do, what the inputs and outputs are, what the failure cases are — produces output that's genuinely usable. Writing the specification takes time. It saves more time than it costs.
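To make that concrete, here is a hedged sketch of what a precise specification can look like when it lives in the code itself. The function name, parameters, and rules are invented for illustration; the point is that the docstring states exactly the four things named above — what it does, what it doesn't do, inputs and outputs, and failure cases — and the body implements only that:

```python
def parse_port(value: str) -> int:
    """Parse a TCP port number from user input.

    Does: strip surrounding whitespace, parse a base-10 integer,
    validate the result is a usable port.
    Does not: resolve service names ("http"), accept ranges ("80-90"),
    or default to anything when input is invalid.
    Input: any string. Output: an int in 1..65535.
    Failure cases: non-numeric input or an out-of-range value
    raises ValueError with a message naming the bad input.
    """
    try:
        port = int(value.strip())
    except ValueError:
        raise ValueError(f"not a number: {value!r}")
    if not 1 <= port <= 65535:
        raise ValueError(f"port out of range: {port}")
    return port
```

Handing an agent a prompt at this level of precision leaves it nothing to invent; handing it "write a function that parses ports" leaves it everything.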

They treat AI output like a junior engineer's PR. Not with suspicion, but with thoroughness. The question is not whether the code looks right. It's whether the code does what the spec says, handles the edge cases the spec covers, and doesn't introduce anything the spec doesn't mention. AI-generated code passes visual inspection easily. It doesn't always pass careful review.
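One way to make that kind of review concrete is to turn the spec's clauses directly into assertions rather than relying on a visual once-over. A sketch, assuming a hypothetical AI-generated `slugify(title)` whose spec says: lowercase the input, turn spaces into hyphens, drop other punctuation, and never return an empty string:

```python
import re

def slugify(title: str) -> str:
    # Stand-in for the AI-generated code under review (hypothetical).
    slug = re.sub(r"[^a-z0-9 ]", "", title.lower())  # lowercase, drop punctuation
    slug = "-".join(slug.split())                    # collapse spaces into hyphens
    return slug or "untitled"                        # spec: never return empty

# Review = each spec clause checked explicitly, edge cases included.
assert slugify("Hello World") == "hello-world"  # lowercase, spaces -> hyphens
assert slugify("C++ & Rust!") == "c-rust"       # punctuation dropped
assert slugify("!!!") == "untitled"             # never empty: the spec's edge case
```

The last assertion is the one a visual inspection tends to miss — the code can look obviously correct while silently violating exactly the clause the spec added for a reason.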

They don't let the AI own the architecture. Agents implement well. They don't validate designs. The decision about how a system is structured, where the boundaries are, and what gets optimised for should be made by a person who will live with those decisions — before the agent runs, not inferred from a prompt by the agent itself.

They separate exploration from production. Prototyping with minimal constraints is legitimate. So is rapid experimentation. The discipline is knowing when exploratory code is done and production work begins — and not carrying vibe-coding habits across that line.

What this means for how you build

If you're using AI agents to build software — and you should be — the question is not whether to apply engineering rigour. It's whether you apply it before or after the problems appear.

The teams that are shipping good software with agentic tooling are not prompting and hoping. They're doing more deliberate work upfront: cleaner requirements, more explicit architecture, more careful review. The AI handles volume. The engineer handles judgment.

That is not a constraint on what agentic development can do. It's why it works.