A mature rebuttal to performance benchmark culture, and a defence of Go as the most complete choice for modern production systems engineering.

Every few months a post appears on LinkedIn. Someone has rewritten their service in Rust or Zig, run some benchmarks, and concluded that Go is a poor engineering choice. The numbers are usually real. The conclusion is almost always wrong. This is a measured examination of why — and a defence of what Go actually offers the engineers building systems that have to work in the real world, stay secure, and be maintained by human beings for years.

The Benchmark Is a Controlled Pathology

Performance benchmarks in the Go vs Rust vs Zig discourse overwhelmingly test one thing: single-threaded, CPU-bound compute throughput. Fibonacci sequences. Binary tree manipulation. String processing at volume. These are synthetic workloads designed to expose runtime overhead and GC pauses, and they do exactly that.

Go's garbage collector costs something. Write barriers, escape analysis, the tricolor concurrent collector — these are real. Rust and Zig, with no runtime and no GC, will win every benchmark that is specifically designed to surface these costs. Nobody is lying.

But here is the question those benchmarks do not ask: what does your service actually spend its time doing?

In the overwhelming majority of production networked services, the answer is: waiting. Waiting for the database to respond. Waiting for an upstream API. Waiting for a TLS handshake to complete. Waiting for bytes to arrive over a socket. The CPU is not the bottleneck. The CPU is largely idle. The benchmark that shows Zig processing 4.8x more requests per vCPU is almost certainly measuring a service that is CPU-bound by design — which most real services are not.

When your service spends 95% of its time waiting on I/O, the language runtime overhead of the remaining 5% is not your infrastructure bill. Your database query plan, your connection pool size, and your network topology are your infrastructure bill.

What Go's Runtime Actually Gives You

Framing Go's runtime as pure overhead misunderstands its value proposition. The runtime is a set of paid-for capabilities that you would otherwise have to build yourself or source from external libraries.

The goroutine scheduler

Go's M:N concurrency model — many goroutines multiplexed across a smaller pool of OS threads — is not a language curiosity. It is a practical solution to the C10k problem that has been in production at Google, Cloudflare, and Dropbox for over a decade. A goroutine starts with roughly 2KB of stack, grows it dynamically as needed, and is multiplexed transparently. You write sequential-looking code that handles hundreds of thousands of concurrent connections without thinking about thread pools, callback pyramids, or async state machines.

Rust's answer to this is Tokio, an excellent async runtime. But it is a significant external dependency with its own version lifecycle, its own breaking changes, and its own learning curve. Zig's async story is, at the time of writing, incomplete and not recommended for production use. Go ships with the answer built in.

The garbage collector

Go's GC is not a liability to apologise for. It is a deliberate engineering trade-off with a well-understood cost model. The collector has run concurrently with user code since Go 1.5, and since Go 1.8 the runtime has targeted sub-millisecond pause times. Two tuning knobs — GOGC and GOMEMLIMIT (the latter added in Go 1.19) — give operators real control over the throughput/latency trade-off without rewriting the application. For the vast majority of services, GC pauses are simply not visible in production latency distributions.

The cases where GC jitter is genuinely unacceptable are real but specific: high-frequency trading systems with microsecond latency requirements, hard real-time embedded systems, game engines targeting consistent 60fps frame budgets. These are not most services. If you are building one of them, Rust is a reasonable choice. If you are building an API server, a data pipeline, a compliance platform, or a SaaS product, GC pauses are not your problem.

The Standard Library as a Security Architecture

This is where the comparison shifts from performance to something more consequential: security posture and supply chain integrity.

Go ships with a standard library that is, for networked service development, genuinely complete. Not adequate. Not sufficient. Complete — in a way that no other language in this comparison approaches.

0 external deps for a production TLS server
0 external deps for an HTTP/2 client
0 external deps for AES-GCM encryption
0 external deps for a SQL interface + testing

You can build a production-grade, TLS-terminating, HTTP/2-capable API server with JWT authentication, structured logging, SQL database access, pprof profiling endpoints, graceful shutdown, and a full test suite — with zero external dependencies. The standard library provides all of it. This is not a theoretical exercise; it is a practical development pattern that uRadical applies across every service we build.

The contrast with Rust and Zig is stark. In Rust, building the same service requires at minimum: an async runtime (Tokio), an HTTP framework (Axum or Actix), a TLS library (rustls), JSON serialisation (serde + serde_json), and a database layer (sqlx or diesel). Each of these is a community-maintained dependency with its own release cycle, its own CVE history, and its own transitive dependency graph. In Zig, several of these capabilities are either absent from the standard library or not yet production-ready.

// Dependency surface: equivalent production HTTP service

Go:   net/http, crypto/tls, encoding/json, database/sql, crypto/aes,
      net/http/pprof, testing
      All stdlib. Zero external packages. Zero transitive risk.

Rust: tokio, axum, rustls, serde, serde_json, sqlx, tower, hyper
      Plus the transitive dependencies of each of the above:
      mio, bytes, pin-project, tracing, futures, ring, webpki, tokio-util, h2, http, ...

Supply chain is an attack surface

This is not a theoretical concern. The software supply chain is one of the most actively exploited attack vectors in modern systems. XZ Utils. event-stream. node-ipc. Colors.js. The pattern is consistent: a widely-used package in a large dependency graph is compromised — through maintainer takeover, typosquatting, or deliberate injection — and the blast radius extends to every service that depends on it, directly or transitively.

Every external dependency you add is a trust relationship with a human being or organisation whose security practices, motivations, and future continuity you cannot fully verify. The safest dependency is one that does not exist. Go's standard library philosophy is, at its core, a supply chain security architecture.

net/http, crypto/tls, crypto/aes — these packages are maintained by the Go team at Google, audited by the broader security community, subject to Go's compatibility guarantee, and have a decade of production hardening behind them. When a CVE is filed against Go's TLS implementation, a single upstream fix propagates everywhere. When a CVE is filed against a Rust crate you depend on transitively, you may not even know you are exposed until a scanner tells you.

Memory Safety Without the Complexity Tax

Rust's memory safety guarantees are real and genuinely valuable. The borrow checker eliminates entire categories of memory bugs at compile time — use-after-free, double-free, data races. For systems programming at the OS level, for cryptographic implementations, for anything touching raw memory management, this matters enormously.

But memory safety and Rust's ownership model are not the same thing. Go achieves memory safety through garbage collection and a type system that prevents the unsafe patterns that cause most vulnerabilities in C and C++ code. You do not get use-after-free in idiomatic Go. You do not get buffer overflows from the language itself. You do not get data races if you use channels and sync primitives correctly — and the -race detector catches them at test time when you don't.

The question is not "does Rust offer stronger memory safety guarantees than Go?" It does. The question is "does the marginal safety improvement justify the complexity cost for the service you are actually building?" For OS kernels and embedded firmware, yes. For an API server handling financial data, the answer requires more honesty than the benchmark posts apply.

Rust's complexity cost is concrete. Lifetime annotations, the borrow checker, async/await interaction with ownership, the trait system — these produce real friction for the engineers maintaining the code after it is written. Faster engineers writing clear Go code will outperform slower engineers writing correct but opaque Rust in almost every delivery context that matters. The code that ships is safer than the code still being debugged.

Operational Simplicity Is a Security Property

Go produces statically linked, single-binary executables. No shared library dependencies. No runtime to install. No virtual environment to manage. No version conflicts. The deployment artifact is a file that runs on any machine with a matching OS and architecture. This is not a convenience feature — it is an attack surface reduction.

The Go binary contains exactly what it contains. You can inspect it, sign it, hash it, and distribute it with high confidence about its contents. Container images built from Go binaries can use FROM scratch or a minimal distroless base — no shell, no package manager, no utilities that an attacker could use post-compromise. The runtime footprint of a Go service is dramatically smaller than an equivalent Python, Node, or JVM service, and meaningfully smaller than a Rust service that has accumulated ecosystem dependencies.

# A complete Go production container
# Attack surface: the binary and the kernel
FROM scratch
COPY --from=builder /app/service /service
EXPOSE 8080
ENTRYPOINT ["/service"]
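The scratch stage above copies from a builder stage; a typical first stage looks something like this (the Go version tag and package path are illustrative, and CGO_ENABLED=0 ensures the binary is fully static):

```
# Builder stage: compile a static binary for the scratch image above
FROM golang:1.22 AS builder
WORKDIR /app
COPY . .
RUN CGO_ENABLED=0 go build -o /app/service ./cmd/service
```

The final image contains no compiler, no source, and no shell — only the artifact.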

Fast compilation is also a security property in practice. When a CVE is disclosed, the gap between "patch available" and "patch in production" is a window of exposure. Go's compilation speed — a large service in seconds, not minutes — compresses that window. The team that can rebuild and redeploy in three minutes responds faster than the team waiting twenty minutes for a Rust build to complete.

The Compounding Effect of Boring Technology

There is a category of value that benchmarks cannot measure because it accumulates over time rather than appearing in a single test run. Go's explicit design goal is readability and maintainability by engineers who did not write the original code.

No operator overloading. No implicit conversions. No complex macro systems. No template metaprogramming. A new engineer reading idiomatic Go code can understand what it does. This is not a limitation — it is a deliberate architectural choice that compounds in value over the lifetime of a system. The engineer who joins the team eighteen months after a service is written can reason about it. The service that can be reasoned about can be secured, optimised, and extended with confidence.

Rust codebases written by experts are often extraordinary. Rust codebases written by teams of mixed experience levels, under delivery pressure, are frequently the opposite. The language's expressive power creates wide variance in code quality. Go's intentional constraints narrow that variance. The floor is higher even when the ceiling is lower.

~2s     Go compile time, large service
~20min  Rust compile time, equivalent service
1.5MB   Go hello-world static binary
Zero    external deps for production-grade TLS

An Honest Comparison

None of this is an argument that Go is always the right choice, or that Rust and Zig have no legitimate use cases. The table below attempts the kind of honest comparison that LinkedIn posts rarely provide.

Dimension                  Go                                    Rust                               Zig
Raw CPU throughput         Good                                  Excellent                          Excellent
Concurrent I/O throughput  Excellent                             Excellent (complex)                Immature
Stdlib completeness        Excellent                             Minimal by design                  Early stage
Supply chain surface       Minimal                               Large ecosystem dependency         Small (ecosystem thin)
Memory safety              GC-safe                               Compile-time proven                Manual + optional safety
Compile speed              Excellent                             Poor                               Good
Code readability           Excellent                             Expert-dependent                   Early norms
Deployment simplicity      Single binary                         Single binary                      Single binary
Ecosystem maturity         Excellent                             Good                               Early
Production track record    15 years, hyperscale                  Growing                            Minimal
Genuine use case           Networked services, infra, CLIs       Systems, crypto, OS-level, embedded  Embedded, systems, experiments

The Benchmark Post You Should Be Suspicious Of

The next time you read a post claiming dramatic infrastructure cost savings from a Rust or Zig rewrite, apply these questions before updating your architectural opinions:

Was the Go service actually tuned? Default GOGC settings, no connection pooling, no sync.Pool usage, SDK layers that were not present in the rewrite — these are architectural differences dressed up as language differences.

What is the actual workload? A service that proxies LLM provider APIs is waiting on network calls 98% of the time. The language runtime overhead on the 2% that is compute is not your bottleneck and will not be visible in your infrastructure bill.

Is "we" one person? Posts written in the institutional voice of an engineering organisation are frequently the work of a single developer who has committed to a language and is constructing a post-hoc justification for the choice. The absence of a company LinkedIn page, a team page, or any corroborating social presence is a reasonable signal.

Does the rewrite comparison hold the architecture constant? Removing SDK layers, redesigning the request path, and reducing abstraction during a rewrite will improve performance regardless of the target language. Attributing the improvement entirely to the language is analytically dishonest, even if unintentionally so.

The right question

Not "which language wins the benchmark?" but "which language gives my team the best combination of delivery speed, security posture, operational simplicity, and long-term maintainability for the system we are actually building?" For networked service development in 2026, Go's answer to that question remains the strongest available. The benchmark posts are measuring something real. They are just not measuring that.