The pattern was sound. The history is clear. What broke microservices wasn't the architecture. It was practitioners who took the deployment unit and left behind the engineering discipline the original authors said it required.
There is a version of the microservices post-mortem that treats the pattern itself as the problem. Distributed systems are hard. Eventual consistency is hard. Operational overhead compounds. Better to stay with the monolith. This is the wrong conclusion, drawn from the wrong evidence, by people who perhaps should never have adopted microservices in the first place.
Microservices did not fail. A generation of engineers adopted an architectural pattern without the domain modelling discipline it demands, kept the synchronous call patterns from the systems they were supposed to be replacing, and then blamed the architecture when the complexity became unmanageable. That is not a failure of microservices. It is a failure of craft.
What the Original Authors Actually Said
Fred George was talking about micro-services — note the hyphen, the original form — at conferences as early as 2012. His framing was specific and unambiguous: very small services, a hundred or two hundred lines of code, communicating asynchronously, each doing one thing and signalling when done. He called the underlying philosophy programmer anarchy — not because it was chaotic, but because it required engineers senior and capable enough to operate without the management scaffolding that larger, less capable teams depend on.
George was explicit on this point, and it is the part most consistently erased from the mainstream narrative: microservices require a higher class of engineer. Not because the individual services are technically complex — they are not, that is rather the point — but because drawing correct service boundaries demands deep domain knowledge, and building an async event-driven system that actually holds together requires engineers who understand what they are doing and why. It is not a pattern that compensates for weak engineering. It exposes it.
In 2014, Martin Fowler and James Lewis codified what had been circulating in the practitioner community into the article that would introduce the term to the mainstream. They described services built around business capabilities, independently deployable, each owning its own data. They described communication via lightweight mechanisms, positioning this explicitly against the ESB-era orchestration that SOA had become. And they used a phrase the industry would adopt as a slogan while ignoring its meaning: smart endpoints, dumb pipes.
The Unix reference in that phrase is not rhetorical. The intellectual lineage of microservices runs directly through Unix philosophy: do one thing and do it well, compose small sharp tools via pipes, keep the pipes dumb and let the intelligence live at the endpoints. This is a forty-year-old idea applied to distributed systems. The message bus is the pipe. The service is the tool. The insight was not new — it was being rediscovered at network scale.
Fowler and Lewis also listed decentralised data management as a defining property — not an optimisation, not a preference, a defining property. Each service owns its data. Not a shared Postgres instance with schema-level separation. Not a common ORM pointed at a central database. Complete data ownership, per service, with no exceptions. It was almost universally abandoned in practice because doing it correctly requires the kind of domain thinking not every team can bring.
The Problem Microservices Were Built To Solve
The original context matters enormously, and stripping it out is how you end up with a five-person startup running fifteen services on Kubernetes. Microservices emerged at organisations — Netflix, Amazon — where the primary engineering bottleneck was not performance, not data volume, not technical complexity. It was people. Hundreds of engineers. Conway's Law made operational: when your organisation is large enough, your architecture will mirror your team structure whether you intend it to or not. The insight was to design for that deliberately.
The goal was independent deployability at organisational scale — the ability for a team of eight to ship their service without synchronising with twenty other teams. The service boundary was, first and foremost, a team boundary. The architecture served the org chart. Without that organisational pressure, most of the complexity budget of microservices buys you nothing.
If your team does not have that problem, you probably do not need this solution. A well-structured monolith, deployed with confidence, will outperform a prematurely distributed system on every axis that matters to a small team. Reaching for microservices without the scale that motivates them is not an architectural decision. It is CV-driven development.
How We Build When Microservices Are the Right Call
When the problem genuinely calls for it — independent teams, independent domains, engineering capability to execute properly — the approach is simple. Not easy. Simple. There is a difference, and it matters.
Services communicate exclusively via events on a message bus. Not synchronous HTTP. Not REST calls between services. Events. A service does its work, emits an event signalling completion, and its responsibility ends there. It has no knowledge of what consumes that event or how many things do. The bus is infrastructure — provisioned once, configured like a database. You are not dynamically discovering it at runtime. The addressing problem that service discovery exists to solve simply does not arise, because you are not making point-to-point calls between services.
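To make the discipline concrete, here is a minimal sketch of the publishing side in TypeScript. Everything in it is illustrative rather than any particular broker's API: the `EventBus` and `OrderStore` interfaces, the `orders` topic, and the payload shape are assumptions standing in for whatever infrastructure you actually run. The point is the shape of the obligation: do the work, persist it in your own store, emit the fact, stop.

```typescript
// A domain event is a fact with no addressee.
interface DomainEvent<T = unknown> {
  type: string;        // e.g. "OrderPlaced"
  occurredAt: string;  // ISO-8601 timestamp
  payload: T;
}

// Hypothetical bus interface. The broker behind it is provisioned once and
// configured like a database; nothing here is discovered at runtime.
interface EventBus {
  publish(topic: string, event: DomainEvent): Promise<void>;
}

// The order service's own store. No other service reads it.
interface OrderStore {
  save(order: { orderId: string; sku: string; quantity: number }): Promise<void>;
}

// Validate, persist to this service's own store, emit, stop. The function
// holds no reference to inventory, payment, or anything downstream.
async function placeOrder(
  bus: EventBus,
  orders: OrderStore,
  order: { orderId: string; sku: string; quantity: number },
): Promise<void> {
  await orders.save(order);
  await bus.publish("orders", {
    type: "OrderPlaced",
    occurredAt: new Date().toISOString(),
    payload: order,
  });
  // Responsibility ends here.
}
```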
Each service owns its data completely. This is where the lake and pond model becomes operational. The event stream is the lake: the canonical, append-only record of everything that has happened across the system. Each service maintains its own pond — a local materialised view of the data relevant to its domain, hydrated by subscribing to the events it cares about. A service never queries another service's store. It never calls another service to assemble a response. It already has what it needs, because it built it from events as they arrived.
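Hydrating a pond is equally plain. In this sketch (same illustrative shapes as above; the `warehouse` topic and `StockReceived` event are hypothetical, and the in-memory map stands in for the service's own tables), the service folds the events it cares about into a local view, and answering a question about stock never touches another service.

```typescript
// Illustrative event and bus shapes, as before.
interface DomainEvent<T = unknown> {
  type: string;
  occurredAt: string;
  payload: T;
}

interface EventBus {
  subscribe(topic: string, handler: (event: DomainEvent) => Promise<void>): void;
}

// The pond: this service's own materialised view of stock levels.
const stockBySku = new Map<string, number>();

function hydrateStockPond(bus: EventBus): void {
  // Fold the events this service cares about into its local view.
  bus.subscribe("warehouse", async (e) => {
    if (e.type !== "StockReceived") return;
    const { sku, quantity } = e.payload as { sku: string; quantity: number };
    stockBySku.set(sku, (stockBySku.get(sku) ?? 0) + quantity);
  });
  bus.subscribe("orders", async (e) => {
    if (e.type !== "OrderPlaced") return;
    const { sku, quantity } = e.payload as { sku: string; quantity: number };
    stockBySku.set(sku, (stockBySku.get(sku) ?? 0) - quantity);
  });
}

// Answering "how much stock do we have?" never involves another service.
function availableStock(sku: string): number {
  return stockBySku.get(sku) ?? 0;
}
```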
This resolves the one objection people reliably raise: how do you serve synchronous queries at the user-facing edge? Frontend-supporting services — the Backend for Frontend pattern — subscribe to events across relevant domains and maintain pre-aggregated read models locally. When a request arrives, they answer from their own store. There is no cross-service call at query time, no dependency chain, no distributed join to fail.
A customer places an order. The order service validates, persists, and emits one event: OrderPlaced. Its job is done. It does not call the inventory service. It does not call the payment service. It does not know either exists.
The inventory service is subscribed to OrderPlaced. It reserves stock and emits InventoryReserved. The payment service is subscribed to InventoryReserved. It charges the customer and emits PaymentConfirmed. Each service reacts, does its work, signals completion. No service is aware of the others. No orchestration layer exists. No saga coordinator manages the sequence.
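A sketch of that chain, using the same illustrative bus interface. The handlers are the only moving parts. Note what is absent: no service holds a reference to another, and nothing coordinates the sequence; the order of operations emerges from the subscriptions alone.

```typescript
// Illustrative event and bus shapes, as before.
interface DomainEvent<T = unknown> {
  type: string;
  occurredAt: string;
  payload: T;
}

interface EventBus {
  publish(topic: string, event: DomainEvent): Promise<void>;
  subscribe(topic: string, handler: (event: DomainEvent) => Promise<void>): void;
}

const now = () => new Date().toISOString();

// Inventory service: reacts to OrderPlaced, reserves stock in its own pond,
// emits InventoryReserved. It does not know the payment service exists.
function startInventoryService(bus: EventBus): void {
  bus.subscribe("orders", async (e) => {
    if (e.type !== "OrderPlaced") return;
    const { orderId, sku, quantity } = e.payload as {
      orderId: string; sku: string; quantity: number;
    };
    // ...decrement this service's own stock view here...
    await bus.publish("inventory", {
      type: "InventoryReserved",
      occurredAt: now(),
      payload: { orderId, sku, quantity },
    });
  });
}

// Payment service: reacts to InventoryReserved, charges the customer against
// its own records, emits PaymentConfirmed. It knows nothing about orders
// beyond the event that reaches it.
function startPaymentService(bus: EventBus): void {
  bus.subscribe("inventory", async (e) => {
    if (e.type !== "InventoryReserved") return;
    const { orderId } = e.payload as { orderId: string };
    // ...charge the customer here...
    await bus.publish("payments", {
      type: "PaymentConfirmed",
      occurredAt: now(),
      payload: { orderId },
    });
  });
}
```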
The customer's dashboard needs to show order status synchronously. A BFF service has been subscribed to all of these events since the system started and has maintained a local read model — its own pond — built from that stream. When the request arrives, it answers from local state. The cost of the response is one local query. There is nothing to discover, no chain to traverse, nothing to fail.
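And a sketch of that BFF, the Backend for Frontend described above: it folds the three events into a pre-aggregated status per order, and the dashboard's synchronous query becomes one local lookup. The plain functions here stand in for whatever HTTP layer the frontend team already uses.

```typescript
// Illustrative event and bus shapes, as before.
interface DomainEvent<T = unknown> {
  type: string;
  occurredAt: string;
  payload: T;
}

interface EventBus {
  subscribe(topic: string, handler: (event: DomainEvent) => Promise<void>): void;
}

type OrderStatus = "placed" | "inventory reserved" | "payment confirmed";

// The BFF's pond: order status pre-aggregated per order, ready to serve.
const statusByOrder = new Map<string, OrderStatus>();

function hydrateDashboard(bus: EventBus): void {
  bus.subscribe("orders", async (e) => {
    if (e.type !== "OrderPlaced") return;
    const { orderId } = e.payload as { orderId: string };
    statusByOrder.set(orderId, "placed");
  });
  bus.subscribe("inventory", async (e) => {
    if (e.type !== "InventoryReserved") return;
    const { orderId } = e.payload as { orderId: string };
    statusByOrder.set(orderId, "inventory reserved");
  });
  bus.subscribe("payments", async (e) => {
    if (e.type !== "PaymentConfirmed") return;
    const { orderId } = e.payload as { orderId: string };
    statusByOrder.set(orderId, "payment confirmed");
  });
}

// Answering the dashboard request: a single read from local state.
function getOrderStatus(orderId: string): OrderStatus | undefined {
  return statusByOrder.get(orderId);
}
```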
The simplicity here is not cosmetic. It is the proof that the design is correct. When a system requires a service mesh, a circuit breaker library, and a distributed tracing platform just to understand what is happening at runtime, that is not operational maturity — it is the system telling you something went wrong at design time.
Where the Industry Went Wrong
The mainstream implementation made one foundational mistake and spent the next decade building infrastructure to manage the consequences. It kept synchronous HTTP as the primary communication mechanism between services. Everything else — the service discovery registries, the circuit breakers, the service meshes, the distributed tracing pipelines — is a direct and inevitable consequence of that one decision.
- Synchronous HTTP between services creates a distributed monolith. Call chains couple services at runtime. Latency compounds with every hop. Failures cascade. You carry the full operational burden of distributed systems and the full coupling of a monolith, simultaneously.
- Service discovery — Consul, Eureka, the DNS and registry machinery built into Kubernetes — exists entirely to manage dynamic addressing in synchronous architectures. It is a solution to a problem that should not exist. In a correctly designed async system, you never ask where another service is.
- The Saga pattern, in most implementations, is a sign that service boundaries were drawn incorrectly. If completing a business operation requires coordinated state changes across multiple services, those services were not decomposed around their actual bounded contexts. The saga is not the solution — it is the symptom.
- Shared databases between services are not a pragmatic shortcut. They are a categorical error. If two services share a schema, they are modules in a distributed monolith. Calling them microservices changes nothing about their coupling.
- Istio is impressive engineering that exists entirely to manage problems generated by synchronous inter-service communication. Its presence as a default recommendation in microservices literature is a precise measure of how far the mainstream drifted from the original intent.
None of this is a failure of the architectural pattern. It is a failure of practitioners who adopted a deployment strategy without the domain modelling discipline that makes it coherent. George said it plainly: this requires engineers who can reason about bounded contexts, who understand event-driven systems, who have the discipline to hold service boundaries under delivery pressure. It was never a template any team could lift and apply without thinking. The original authors were clear about this. The industry chose not to listen, and then blamed the architecture for the results.
The Honest Position
Microservices, built as their originators described — async, event-driven, each service owning its domain and its data completely — are simple. Not operationally trivial, but architecturally simple: no service discovery, no synchronous call chain to trace, no distributed transaction to coordinate, no saga to manage. Each service does one thing, signals when done, and has no knowledge of what comes next. The system's complexity lives in the domain, where it belongs, not in the infrastructure. Simplicity is not a constraint we work within — it is the standard we hold the design to. When you find yourself reaching for a service mesh or a saga orchestrator, the question is not which tool to reach for. It is what decision, made earlier, made that tool feel necessary.