2026-03-11

Microservices vs. Monolith vs. Serverless: Why Most Startups Choose Wrong

Microservices look serious, serverless sounds modern, and monoliths get dismissed as legacy. The reality is more nuanced — and the wrong choice costs more than most founders expect.

The pattern repeats constantly. A founding team, usually with engineers who've worked at larger companies, decides to build on microservices from day one. It feels like the right call — it's what Netflix does, it's what the job postings ask for, and it signals technical maturity to investors. Six months later, they're debugging distributed transactions, fighting service discovery issues, and wondering why shipping a simple feature requires touching four repositories.

Serverless is the newer version of the same mistake. AWS Lambda, cloud functions, event-driven everything — it sounds like infinite scale with no infrastructure to manage. Until you're debugging a timeout cascade across a dozen functions with no local development story and cold starts wrecking your p99 latency.

The problem with cargo-culting architecture

Netflix, Amazon, and Uber use microservices because they have thousands of engineers, multiple teams working on independent domains, and scaling requirements that genuinely can't be served by a single deployment. Their architecture is a response to real organisational and operational constraints.

A twelve-person startup does not have those constraints. It has different ones: speed of iteration, limited engineering capacity, and a product that is still figuring out what it is. Adopting the architecture of a mature organisation before you've outgrown a simpler one is a mismatch between problem and solution.

What a monolith actually gives you

A well-structured monolith is not a legacy codebase. It's a codebase where:

- modules have clear boundaries and talk to each other through explicit interfaces
- the whole system deploys as one unit, so a feature ships in one change, not four repositories
- a single stack trace covers the whole request, so debugging stays local
- data lives in one database, so transactions don't need distributed coordination

The speed advantage at early stage is real. Teams that start with a monolith and keep it modular ship faster, debug faster, and onboard faster. The constraints that make microservices worth the overhead simply don't exist yet.

When microservices are actually the right answer

There are legitimate reasons to break up a monolith:

- a component whose scaling characteristics genuinely differ from the rest of the system
- a team that needs to own, deploy, and operate a domain independently
- a workload that requires a different runtime or technology stack
- a hard isolation boundary, for fault containment or compliance

Notice what's not on that list: "we want to seem like a serious engineering organisation" and "this is how it's done at big companies."

Where serverless fits — and where it doesn't

Serverless is genuinely useful for specific workloads: background jobs, event processing, webhooks, scheduled tasks, and anything with spiky or unpredictable traffic that would otherwise require provisioning idle capacity. For these use cases it's a good fit — low operational overhead, pay-per-use pricing, and no servers to manage.
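To make that concrete, a webhook receiver is exactly the kind of small, spiky, stateless workload that maps naturally onto a single function. Here's a minimal sketch in the AWS Lambda handler style — the event shape and field names are illustrative, not taken from any specific provider contract:

```python
import json

def handler(event, context=None):
    """Lambda-style entry point: one small, stateless unit of work.

    Parses a webhook payload and returns an HTTP-style response. There is
    nothing to provision; the platform runs more instances as traffic spikes.
    """
    payload = json.loads(event.get("body", "{}"))
    event_type = payload.get("type", "unknown")

    # A real handler would enqueue follow-up work rather than do it inline,
    # keeping each invocation short and well under execution time limits.
    return {
        "statusCode": 200,
        "body": json.dumps({"received": event_type}),
    }
```

The important property is that the function carries no state between invocations — which is also why this shape stops working the moment you try to build your whole application out of it.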

Where it breaks down is as a primary application architecture. The local development experience is painful. Cold starts create latency inconsistencies that are hard to reason about. Long-running operations hit execution limits. Stateful workflows require external coordination. Debugging distributed function chains is as hard as debugging microservices — sometimes harder, because the tooling is less mature.

A common pattern that works well: a monolith as the core application, with specific side workloads — image processing, email delivery, data sync jobs — handled by serverless functions. You get the development speed of a monolith with the operational benefits of serverless where they're actually valuable.
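The shape of that split can be sketched in-process. This is a simulation, not production code: the deque stands in for a managed queue (SQS, Pub/Sub), and the worker function stands in for a deployed serverless function — the names and job shape are made up for illustration:

```python
from collections import deque

# Stand-in for a managed queue. In production, the monolith publishes here
# and a serverless worker consumes on the other side of the boundary.
job_queue = deque()

def create_order(order_id: str, email: str) -> dict:
    """Monolith code path: do the core work synchronously, then hand the
    slow side effect (email delivery) off as a background job."""
    order = {"id": order_id, "status": "confirmed"}
    job_queue.append({"task": "send_email", "to": email, "order_id": order_id})
    return order

def email_worker(job: dict) -> str:
    """Serverless-style worker: stateless, one job per invocation. Everything
    it needs arrives in the job payload."""
    assert job["task"] == "send_email"
    return f"sent confirmation for {job['order_id']} to {job['to']}"
```

The queue is the contract. The monolith never waits on the worker, and the worker never reaches back into the monolith's database — which is what keeps the serverless piece cheap to operate and easy to replace.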

The cost people underestimate

The overhead of both microservices and serverless-first architectures is not just technical. It's organisational. You need to manage inter-service contracts, version APIs, handle partial failures gracefully, and deal with distributed data consistency. Each of these is a real engineering investment.

At early stage, that investment competes directly with shipping product. The teams that choose microservices or serverless-first at Series A and then wonder why velocity is low have usually made an invisible trade: they've allocated significant engineering capacity to infrastructure and operations that could have gone to the product.

The path that actually works

Start with a modular monolith. Organise the code into well-bounded modules with clear interfaces. Use serverless for the specific tasks it's actually suited to. When a specific module genuinely needs to scale independently or be owned by a separate team, extract it — at that point you have evidence that the complexity is warranted.
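What "well-bounded modules with clear interfaces" looks like in code, sketched in Python with made-up module names (`billing`, `orders`) purely for illustration:

```python
from typing import Protocol

class BillingService(Protocol):
    """The narrow interface the billing module exposes to the rest of
    the monolith. Other modules depend on this, never on internals."""
    def charge(self, customer_id: str, amount_cents: int) -> bool: ...

class InMemoryBilling:
    """billing module: owns its own state behind the interface."""
    def __init__(self) -> None:
        self._charges: list[tuple[str, int]] = []

    def charge(self, customer_id: str, amount_cents: int) -> bool:
        self._charges.append((customer_id, amount_cents))
        return True

def place_order(billing: BillingService, customer_id: str, amount_cents: int) -> str:
    """orders module: calls billing only through the interface. If billing
    later needs to become its own service, this call site becomes an RPC
    client without the orders logic changing."""
    if not billing.charge(customer_id, amount_cents):
        return "payment_failed"
    return "confirmed"
```

The extraction path is the point of the discipline: because `orders` already depends on an interface rather than on billing's tables or internals, pulling billing out into a separate service is a deployment change, not a rewrite.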

This isn't a compromise. It's the correct sequence. You learn where the real boundaries are from running the product, not from speculating about them before you've shipped. The teams that skip this step don't gain agility — they inherit all the operational complexity of a distributed system before they have the scale to justify it.

---

If you're making architecture decisions ahead of a funding round or a scaling phase, book a call — it's the kind of decision that's cheap to get right early and expensive to undo later.