And When to Stop Overthinking Them

Architecture patterns have a marketing problem. Somewhere between the AWS re:Invent keynote and the Medium post titled "Why We Moved to Microservices," the engineering conversation got replaced by a vendor conversation — and a generation of teams inherited distributed systems complexity they didn't earn and can't operate.

This isn't about microservices being wrong. It's about the pattern selection process being broken. Most teams choose architecture based on what the best-funded companies are doing, not what their actual problem requires. Netflix decomposed into hundreds of services because they had to — because the alternative was a single point of failure serving a third of North American internet traffic. You are not Netflix. Neither am I. And that's fine, until you start architecting like you are.

What follows is a straight account of six patterns — what each one actually costs, what it actually buys you, and the specific conditions under which it earns its place. No conference slide optimism. No vendor positioning. Just trade-offs.

01 Foundation

Layered Architecture & MV*

// The default. Mostly fine. Often over-engineered.

Layered architecture organises code into horizontal slabs — presentation, application, domain, infrastructure — where each layer only talks to the one directly below it. MV* (MVC, MVP, MVVM) is the same idea applied to UI: separate what you see from what you do from what you store. This is the pattern most developers learn first, and the one they apply whether it fits or not.

It works because it maps cleanly to how teams think. Handlers handle. Services orchestrate. Repositories persist. The pattern earns its reputation early — on a greenfield codebase with a small team and a clear domain, it's genuinely productive.

It breaks in a specific and predictable way. The domain model — the User, the Invoice, the Order — becomes a passive data container. Methods drift. Validation logic that belongs on the model ends up in the service because the service is where engineers spend their time. Business rules that should be enforced at the domain level leak into HTTP handlers because that's where the request context lives. Six months later you have a UserService with twelve methods, a User struct with no methods, and handlers that know too much about business logic. The layers are still there. They're just not doing what they were supposed to do.

Pros
  • Universally understood — minimal onboarding friction
  • Clear separation of concerns by convention
  • Testable at each layer boundary
  • Works well with most frameworks and ORMs
Cons
  • The layer boundary is a social contract, not a technical one — Go won't stop a handler importing a repository directly, and under deadline pressure someone always does
  • Domain logic migrates upward over time: validation in services, business rules in handlers, User struct with no methods after two years
  • Every trivial change — rename a field, add a flag — touches every layer; the architecture amplifies churn rather than containing it
  • Cross-cutting concerns (auth, audit, rate limiting) end up copy-pasted across services because there's no obvious home for them
Use it when
  • Your team already knows it — don't change for change's sake
  • The domain is well-understood and relatively stable
  • You're building CRUD-heavy applications where domain logic is thin
  • Speed of delivery matters more than architectural purity right now
// repository.go — data access only, no business logic
type UserRepository interface {
    FindByID(ctx context.Context, id string) (*User, error)
    Save(ctx context.Context, u *User) error
}

// service.go — business logic, not pass-through boilerplate
type UserService struct{ repo UserRepository }

func (s *UserService) Activate(ctx context.Context, id string) error {
    u, err := s.repo.FindByID(ctx, id)
    if err != nil { return err }
    u.Activate() // domain logic lives on the model, not in the service
    return s.repo.Save(ctx, u)
}

// handler.go — HTTP concerns only, zero business logic
func (h *Handler) ActivateUser(w http.ResponseWriter, r *http.Request) {
    id := r.PathValue("id")
    if err := h.svc.Activate(r.Context(), id); err != nil {
        http.Error(w, err.Error(), http.StatusInternalServerError)
        return
    }
    w.WriteHeader(http.StatusNoContent)
}
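
The one thing the snippets above don't show is the model that Activate lives on. Here is a minimal sketch of that active domain model; the Status field and the pending-to-active rule are illustrative assumptions, not part of the original example.

// user.go — the domain model owns its own rules (sketch; field names illustrative)
type User struct {
    ID     string
    Status string
}

// Activate keeps the business rule on the model rather than in the service.
func (u *User) Activate() {
    if u.Status == "pending" {
        u.Status = "active"
    }
}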
Alan's take

Keep your service layer thin. If a method is just a pass-through to the repository, delete it and call the repo directly from the handler. And keep your domain models active — if User.Activate() doesn't exist and that logic lives in UserService.Activate(), you've already started down the path toward an anemic model.

02 Advanced

Event-Driven Architecture & CQRS

// Powerful. Genuinely complex. Frequently misapplied.

Event-Driven Architecture models the system as a stream of things that happened rather than things that should be done. Components emit events; other components react. CQRS — Command Query Responsibility Segregation — is a natural companion: separate the write model (commands that change state) from the read model (queries optimised for display). Neither is inherently tied to the other, but they're commonly paired because an event log makes it trivial to maintain multiple read projections.

This is the pattern that solves real problems — audit logs, temporal decoupling, independent scaling of read and write paths — and creates real problems: eventual consistency becomes a constraint your product has to absorb, your debugging surface area multiplies, and onboarding a new engineer into a pure event-sourced system is a multi-day exercise. Go in with eyes open.

Pros
  • Write path and read path scale independently
  • Audit trail is free — the event log is the source of truth
  • Temporal decoupling — producers don't know or care about consumers
  • Read models can be rebuilt from scratch when requirements change
Cons
  • Eventual consistency isn't a technical footnote — it's a product decision your users will feel every time they submit a form and their own change doesn't appear
  • A bug in event processing doesn't produce an error, it produces wrong state — silently, across every projection
  • Event schema versioning is a permanent, first-class problem from the moment you deploy event one
  • Teams routinely reach for CQRS on simple CRUD services because they read it's "scalable" — what they get is two models to keep in sync and a command bus nobody understands
Use it when
  • Read and write workloads have fundamentally different scaling characteristics
  • A genuine audit log is a compliance requirement, not a nice-to-have
  • Multiple downstream systems need to react to state changes independently
  • Your team has operated message-driven systems before and understands the failure modes
// command.go — write side: intent to change state
type Command interface{ CommandName() string }

type ActivateUserCmd struct{ UserID string }
func (ActivateUserCmd) CommandName() string { return "user.activate" }

type CommandBus struct{ handlers map[string]func(context.Context, Command) error }

func (b *CommandBus) Dispatch(ctx context.Context, cmd Command) error {
    h, ok := b.handlers[cmd.CommandName()]
    if !ok { return fmt.Errorf("no handler: %s", cmd.CommandName()) }
    return h(ctx, cmd)
}

// event.go — something that happened (immutable fact)
type UserActivated struct {
    UserID     string
    OccurredAt time.Time
}

// projection.go — read side, optimised purely for queries
type ActiveUsersProjection struct {
    mu    sync.RWMutex
    users map[string]bool
}

func (p *ActiveUsersProjection) On(e UserActivated) {
    p.mu.Lock(); defer p.mu.Unlock()
    p.users[e.UserID] = true
}
func (p *ActiveUsersProjection) IsActive(id string) bool {
    p.mu.RLock(); defer p.mu.RUnlock()
    return p.users[id]
}
Alan's take

The most expensive mistake I see teams make with this pattern is conflating CQRS with event sourcing and implementing both because they read about them together. They're independent ideas. CQRS is a read/write model separation — you can implement it this afternoon with two structs and a function. Event sourcing is a persistence strategy where your database is an append-only log of things that happened, and rebuilding state means replaying every event from the beginning of time.

Event sourcing solves real problems: temporal queries, full audit history, the ability to rebuild any projection from scratch. It also means your team needs to think carefully about event schema versioning from day one, your "fix the bad data" script is now a corrective event with its own schema, and explaining the system to a new engineer takes two hours instead of twenty minutes. That's not a reason to avoid it — it's a reason to adopt it deliberately, not because CQRS seemed to imply it.
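
To make the distinction concrete, here is a minimal in-memory sketch of the event-sourcing half, reusing the UserActivated event and ActiveUsersProjection from the code above. The EventStore type is an illustrative assumption; a real store persists events durably and handles schema versions.

// eventstore.go — append-only log plus replay (in-memory sketch, not production code)
type EventStore struct {
    mu     sync.Mutex
    events []UserActivated // a real log holds many event types
}

// Append records an immutable fact; nothing is ever updated in place.
func (s *EventStore) Append(e UserActivated) {
    s.mu.Lock(); defer s.mu.Unlock()
    s.events = append(s.events, e)
}

// Rebuild replays the full history into a fresh projection; that replay is
// what event sourcing buys you, and what plain CQRS never requires.
func (s *EventStore) Rebuild(p *ActiveUsersProjection) {
    s.mu.Lock(); defer s.mu.Unlock()
    for _, e := range s.events {
        p.On(e)
    }
}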

03 Platform

Microkernel

// Underrated. Exactly right for extensible platforms.

Microkernel is the pattern you reach for when you've shipped version one of your product and discovered that your biggest customers all want something slightly different. Not "different design" different. "Different compliance rules, different data sources, different notification channels, different everything at the edges while the core logic stays identical" different. That's the specific problem microkernel solves — and it solves it better than any alternative.

The core defines a contract. Extensions implement it. The core doesn't care how many extensions exist, who wrote them, or what they do internally. VS Code's extension API is microkernel. Jenkins' plugin ecosystem is microkernel. The SafeOps365 notification and reporting system is microkernel — because an energy operator in Aberdeen has different alerting requirements than one in Texas, and those differences should never live in an if statement in the core codebase.

The failure mode is specific: teams apply microkernel to their own internal variation and discover they've built a framework nobody asked for. If every plugin is written by your team, for your own product, you don't need a plugin system — you need a well-factored package. Microkernel earns its overhead when the extension boundary is a real organisational boundary: a customer, a third party, a team that deploys independently.

Pros
  • Core stays small, stable, and testable in isolation
  • Third-party and customer extensions without forking the codebase
  • Features can be loaded, unloaded, or replaced at runtime
  • Clear interface contracts enforce discipline at extension boundaries
Cons
  • Getting the plugin interface wrong on v1 means a breaking change for every extension author — and if those authors are customers, that's a support incident
  • Plugin discovery, version compatibility, and dependency conflicts are problems you now own in perpetuity
  • Security boundaries between the core and plugins require explicit, deliberate design — a plugin that panics takes down the whole process unless you've engineered isolation (see the recovery sketch after the code below)
  • The most common misuse: building a plugin system for variation that only your own team will ever extend
Use it when
  • You're building a platform that others will extend — internally or externally
  • Customer feature sets differ significantly at the edges while core logic is shared
  • Core functionality must remain stable while the ecosystem evolves around it
  • You need a clear contract boundary between your team and extension authors
// plugin.go — the contract the core enforces
type Plugin interface {
    Name()    string
    Version() string
    Init(cfg map[string]string) error
}

type NotifierPlugin interface {
    Plugin
    Notify(ctx context.Context, event string, payload any) error
}

// registry.go — the core kernel: stable, minimal
type Registry struct{ notifiers map[string]NotifierPlugin }

func (r *Registry) Register(p NotifierPlugin, cfg map[string]string) error {
    if err := p.Init(cfg); err != nil { return err }
    r.notifiers[p.Name()] = p
    return nil
}

func (r *Registry) Broadcast(ctx context.Context, event string, payload any) {
    for _, n := range r.notifiers {
        if err := n.Notify(ctx, event, payload); err != nil {
            log.Printf("plugin %s error: %v", n.Name(), err)
        }
    }
}

// slack_plugin.go — an extension; the core never imports this package
type SlackNotifier struct{ webhookURL string }

func (SlackNotifier) Name()    string { return "slack" }
func (SlackNotifier) Version() string { return "1.0.0" }
func (s *SlackNotifier) Init(cfg map[string]string) error {
    s.webhookURL = cfg["webhook_url"]; return nil
}

// Notify completes the NotifierPlugin contract. Sketch only: the payload shape is
// illustrative, and a real plugin would check the response status.
func (s *SlackNotifier) Notify(ctx context.Context, event string, payload any) error {
    body, _ := json.Marshal(map[string]any{"event": event, "payload": payload})
    req, _ := http.NewRequestWithContext(ctx, http.MethodPost, s.webhookURL, bytes.NewReader(body))
    req.Header.Set("Content-Type", "application/json")
    _, err := http.DefaultClient.Do(req)
    return err
}
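
The isolation point from the cons list deserves code. One approach, sketched against the Registry and NotifierPlugin types above, is to wrap each plugin call so a panic in an extension surfaces as an error instead of crashing the core; Broadcast would call safeNotify in place of n.Notify. This is a sketch, not the only viable design.

// safeNotify isolates a single plugin call: a panic inside the plugin is
// recovered and returned as an error rather than taking down the process.
func safeNotify(ctx context.Context, n NotifierPlugin, event string, payload any) (err error) {
    defer func() {
        if r := recover(); r != nil {
            err = fmt.Errorf("plugin %s panicked: %v", n.Name(), r)
        }
    }()
    return n.Notify(ctx, event, payload)
}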
Alan's take

Go's interface system was built for this. The compiler enforces your contracts, the type system documents them, and you need zero framework to make it work. The discipline isn't technical — it's organisational. Define the boundary, honour it, and resist the pull to let the core grow because it's convenient. The moment you start importing plugin-internal packages from the core, the architecture has already failed.

04 Distributed

Microservices Architecture

// The most oversold pattern of the last decade.

Microservices decomposes the system into small, independently deployable services, each owning a single bounded context. Each service has its own data store, exposes an API, and is deployed, scaled, and versioned independently. Amazon, Netflix, and Uber did it. Half the industry followed without having Amazon's engineering headcount or Netflix's traffic.

The productivity argument goes: independent deployment means independent teams moving fast. The reality is that distributed systems introduce failure modes — network partitions, partial failures, cascading timeouts — that simply don't exist in a monolith. You've traded simple in-process function calls for network I/O, serialisation, service discovery, distributed tracing, and coordinated deployments. Do that trade consciously, not by default.

Pros
  • Services scale independently — CPU-heavy work isolated from I/O-heavy work
  • Fault isolation — one service failing doesn't take down everything
  • Independent deployments when bounded correctly
  • Technology heterogeneity — right tool per service, in theory
Cons
  • The distributed monolith is the most common outcome: services that call each other synchronously in a chain with none of the independence
  • A stack trace across eight services is a distributed tracing query, three dashboards, and a conversation with the team who owns service four
  • Data consistency across service boundaries cannot be solved with a transaction — you need sagas, outbox patterns, or eventual consistency (a minimal outbox sketch follows the code below)
  • Teams of five running twenty services aren't moving faster — they're running a platform engineering operation without the headcount for it
  • Integration tests for distributed services are slow, flaky, and expensive to maintain
Use it when
  • You have clearly defined bounded contexts that naturally decouple along team lines
  • Different parts of the system have genuinely different scaling requirements — proven, not assumed
  • Multiple autonomous teams need to deploy independently without coordination overhead
  • Your operational maturity supports container orchestration, observability, and distributed tracing
// server.go — owns its own data, exposes its own API
type Server struct{ addr string; store NotificationStore }

func (s *Server) routes() http.Handler {
    mux := http.NewServeMux()
    mux.HandleFunc("POST /v1/notifications", s.create)
    mux.HandleFunc("GET /v1/notifications/{id}", s.get)
    mux.HandleFunc("GET /healthz", s.health)
    return mux
}

func (s *Server) Run(ctx context.Context) error {
    srv := &http.Server{
        Addr: s.addr, Handler: s.routes(),
        ReadTimeout: 5 * time.Second, WriteTimeout: 10 * time.Second,
    }
    go func() { <-ctx.Done(); srv.Shutdown(context.Background()) }()
    return srv.ListenAndServe()
}

// client.go — calling this service from another service
type NotificationClient struct{ base string; http *http.Client }

func (c *NotificationClient) Send(ctx context.Context, n Notification) error {
    b, _ := json.Marshal(n)
    req, _ := http.NewRequestWithContext(ctx, http.MethodPost,
        c.base+"/v1/notifications", bytes.NewReader(b))
    req.Header.Set("Content-Type", "application/json")
    resp, err := c.http.Do(req)
    if err != nil { return err }
    defer resp.Body.Close()
    if resp.StatusCode != http.StatusCreated {
        return fmt.Errorf("notification service: %d", resp.StatusCode)
    }
    return nil
}
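
On the consistency point above: of the three options, the transactional outbox is usually the most approachable. The state change and the outgoing message are written in the same local transaction, and a separate relay publishes rows from the outbox table afterwards. A minimal sketch; the table and column names are assumptions for illustration.

// outbox sketch — the business write and the pending message commit atomically;
// a background relay reads the outbox table later and publishes each row.
func ActivateAndRecord(ctx context.Context, db *sql.DB, userID string) error {
    tx, err := db.BeginTx(ctx, nil)
    if err != nil { return err }
    defer tx.Rollback() // no-op once Commit has succeeded

    if _, err := tx.ExecContext(ctx,
        `UPDATE users SET active = true WHERE id = $1`, userID); err != nil {
        return err
    }
    if _, err := tx.ExecContext(ctx,
        `INSERT INTO outbox (topic, payload) VALUES ($1, $2)`,
        "user.activated", userID); err != nil {
        return err
    }
    return tx.Commit()
}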
Alan's take

If a senior engineer on your team can't explain what happens when the notification service is down during user registration — including retry logic, fallback behaviour, and the user experience impact — you're not ready for microservices. Build the monolith first. Extract services when the seams become obvious from real load and real team friction, not before.
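
To make that bar concrete, here is a hedged sketch of one possible answer, built around the NotificationClient from the snippet above: bounded retries with a pause between attempts, then an explicit fallback so registration still succeeds while the notification is queued for later. The Queue interface, the retry count, and the backoff values are assumptions, not a prescription.

// Queue is a hypothetical fallback sink, e.g. a durable job table or message broker.
type Queue interface {
    Enqueue(ctx context.Context, n Notification) error
}

// SendWithRetry makes the failure policy explicit: a few bounded attempts,
// then a deliberate fallback instead of blocking or failing user registration.
func SendWithRetry(ctx context.Context, c *NotificationClient, q Queue, n Notification) error {
    var err error
    for attempt := 1; attempt <= 3; attempt++ {
        if err = c.Send(ctx, n); err == nil {
            return nil
        }
        select {
        case <-ctx.Done():
            return ctx.Err()
        case <-time.After(time.Duration(attempt) * 200 * time.Millisecond):
        }
    }
    // the notification service is still down: queue the message and move on
    return q.Enqueue(ctx, n)
}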

05 Simple

Monolithic Architecture

// Unfairly maligned. Often the correct answer.

A monolith is a single deployable unit. Everything — API, business logic, data access, background jobs — runs in one process. The word has become a pejorative, which tells you more about the industry's fashion cycle than about the pattern's merit. GitHub was a monolith for years. Basecamp still is. Shopify ran on a Rails monolith until they had very good reasons to change it.

The monolith's sins are mostly sins of undisciplined development, not of the pattern itself. A well-structured monolith is simpler to develop, deploy, debug, and reason about than a premature microservices architecture. If your team is spending more time on infrastructure than product features, the monolith deserves a serious re-evaluation.

Pros
  • Simple to run locally — one binary, one database, zero coordination
  • In-process calls — no network latency, no serialisation overhead
  • Single stack trace — debugging is straightforward
  • ACID transactions across the whole system with no distributed saga needed
  • Fast iteration — no contract versioning between services
Cons
  • Without enforced boundaries, the monolith becomes a utils package graveyard — everything imports everything
  • A single bad deployment takes down every feature simultaneously; there's no blast radius — it's the whole thing
  • Hot paths can't scale independently — if your report generation is CPU-bound and your API is I/O-bound, you're scaling both together or neither
  • Build and test times grow with the codebase; what was a 30-second cycle at year one is a 12-minute CI run at year three
Use it when
  • You're starting a new product and the domain isn't fully understood yet
  • The team is small and deployment coordination overhead is a real cost
  • Your traffic profile is uniform — no obviously hot paths needing separate scaling
  • You want to move fast on product without distributed systems overhead
// main.go — one binary, everything wired in one place
func main() {
    cfg := loadConfig()

    db, err := sql.Open("pgx", cfg.DatabaseURL)
    if err != nil { log.Fatal(err) }

    userRepo    := postgres.NewUserRepo(db)
    invoiceRepo := postgres.NewInvoiceRepo(db)
    mailer      := smtp.NewMailer(cfg.SMTP)

    userSvc    := user.NewService(userRepo, mailer)
    invoiceSvc := invoice.NewService(invoiceRepo, userSvc)

    mux := http.NewServeMux()
    user.RegisterRoutes(mux, userSvc)
    invoice.RegisterRoutes(mux, invoiceSvc)

    go jobs.RunInvoiceReminders(invoiceSvc) // background work, same process

    log.Fatal((&http.Server{Addr: cfg.Addr, Handler: mux}).ListenAndServe())
}
Alan's take

The monolith doesn't fail because it's a monolith. It fails because no one enforced package boundaries, everyone wrote to the same global state, and the codebase became an archaeology site. Go's package system makes disciplined monoliths genuinely pleasant. Ship the monolith. Refactor when you have real evidence of where the seams are.

06 Recommended

Modular Monolith

// The pattern most teams should actually be using.

The modular monolith is a monolith with enforced internal boundaries. Each module owns its domain, exposes a public API, hides its internals, and refuses to let other modules reach into its database tables. It deploys as a single binary. It develops as if it were multiple services. You get the operational simplicity of a monolith and the organisational structure of microservices — without the distributed systems tax.

This is the pattern that Sam Newman — who literally wrote the book on microservices — now recommends as a starting point. When the seams become load-bearing and a module needs to scale independently, you extract it into a service. The modular monolith is the precondition for a clean microservices migration, not a consolation prize.

Pros
  • Single deployable — all the operational simplicity of a monolith
  • Enforced module boundaries prevent the big ball of mud
  • Modules can be extracted to services later without rewrites
  • In-process calls between modules — performance without coupling
  • Shared infrastructure without shared data models
Cons
  • The boundary only holds if the whole team respects it — one engineer under pressure who reaches across modules has started the coupling
  • Shared deployment means a broken module blocks release of all other modules — the independence is logical, not operational
  • Module boundaries require up-front domain thinking; get the decomposition wrong early and you're refactoring across module lines
  • It's easy to dress up a poorly structured monolith as a modular one — packages with clean names but tangled internals
Use it when
  • You have a medium-complexity domain you understand reasonably well
  • You anticipate extracting services later and want clean seams ready
  • Your team has the discipline to respect internal package boundaries
  • You want monolith speed now and the option of microservices speed later — on your terms
// internal/billing/service.go — billing's public contract
// Nothing outside this package tree can import billing's internals
package billing

type Service interface {
    CreateInvoice(ctx context.Context, req InvoiceRequest) (Invoice, error)
    GetInvoice(ctx context.Context, id string) (Invoice, error)
}

// InvoiceRequest is billing's public input — other modules construct this
type InvoiceRequest struct {
    CustomerID string
    LineItems  []LineItem
}

// internal/billing/postgres.go — private to billing, invisible outside
type invoiceRow struct {
    id, customerID string
    total          int64
    createdAt      time.Time
}

// app/wire.go — modules communicate through interfaces, never internals
func Wire(db *sql.DB) http.Handler {
    // each module owns its own schema — no cross-module table access
    users      := user.NewService(db)
    billingSvc := billing.NewService(db) // named to avoid shadowing the billing package
    orders     := order.NewService(db, billingSvc) // interface dep, not internals

    mux := http.NewServeMux()
    user.RegisterRoutes(mux, users)
    billing.RegisterRoutes(mux, billingSvc)
    order.RegisterRoutes(mux, orders)
    return mux
}
Alan's take

Use Go's internal directory convention religiously. internal/billing/ cannot be imported from outside its parent tree — the compiler enforces it. That's your module boundary, for free, with zero framework overhead. Start here. Extract when you have evidence — traffic profiles, team autonomy requirements, deployment frequency data. Not before.
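
For illustration, this is roughly the shape that gives you the boundary for free; the module path is an assumption, and the package paths follow the snippets above.

// Illustrative layout (module path assumed):
//
//   example.com/yourapp/
//     app/wire.go              // may import example.com/yourapp/internal/billing
//     internal/billing/
//       service.go             // the module's public contract
//       postgres.go            // persistence details, invisible outside billing
//
// A package in any other module that attempts the same import fails to compile:
//   use of internal package example.com/yourapp/internal/billing not allowed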

The short version
Layered / MV*: Default choice — CRUD-heavy, stable domain, small team
EDA / CQRS: Compliance audit trails, async workflows, divergent read/write scaling
Microkernel: Extensible platforms, real external extension boundaries
Microservices: Large teams, proven seams, full operational maturity
Monolith: Greenfield, unknown domain — ship fast and learn
Modular Monolith: The answer for most teams, most of the time