The unikernel concept is 30 years old. The tools to make it practical just arrived.
Ask yourself what a running Go HTTP service actually needs. A scheduler. A network stack. Memory management. Your application code. That is the complete list. So why are you shipping bash, systemd, glibc, and an entire package manager to production?
The answer isn't technical. It's historical. We inherited an abstraction model built for a different era — the era of physical servers, shared machines, and general-purpose computing — and we've been layering on top of it ever since. Containers improved the situation. They didn't solve it. Unikernels do.
This is the first post in a series on unikernels: what they are, where they came from, how to build one with Go, and why the moment for them has finally arrived. This post covers the theory and the history; the hands-on build comes in a later post. First, let's understand why the idea exists at all.
The Accumulated Weight of History
When Unix was designed in the late 1960s and early 1970s, the decisions that shaped it were reasonable responses to real constraints. Machines were expensive and shared among many users. A kernel that mediated access to hardware — enforcing process isolation, managing memory, scheduling CPU time — was the right abstraction. The POSIX standard that codified this model in 1988 locked it in for a generation.
Then virtualisation arrived. VMware in 1999, Xen in 2003, KVM in 2007. Suddenly you could run multiple operating systems on a single physical machine. Which is exactly what the industry did — it took the existing OS model, wrapped it in a hypervisor, and shipped it to the cloud. Every virtual machine was a full Linux installation: kernel, init system, package manager, shell, the works. The fact that most of this was irrelevant to the workload running inside it wasn't considered a problem. Resources were cheap, and the model worked well enough.
Docker arrived in 2013 and offered a lighter alternative. Rather than a full OS per workload, containers shared the host kernel and isolated at the process level. Images got smaller. Boot times improved. But the fundamental model remained: applications ran on top of an OS abstraction that existed to serve a general-purpose computing environment. The shell was still there. The package manager was still there. The attack surface was still there.
When your application is a single-purpose HTTP server, why does it need a general-purpose operating system? The OS was designed to serve many users running many programs on shared hardware. You have one program. You own the hardware. The abstraction is the wrong shape for the problem.
The Exokernel: Where the Idea Begins
The insight that general-purpose OS abstractions are unnecessarily limiting isn't new. In 1995, Dawson Engler, Frans Kaashoek and James O'Toole at MIT's Laboratory for Computer Science published a paper that argued precisely this.
The Exokernel paper argued that traditional operating systems significantly limit application performance and flexibility by abstracting physical hardware — and that the solution is to strip the kernel to its minimal protective role and let applications implement OS abstractions themselves at the user level. The kernel secures and exports hardware. Library operating systems — linked directly with the application — do everything else.
The prototype they built was ten to one hundred times faster than Ultrix (a mature Unix system) on primitive kernel operations, and five to forty times faster on virtual memory and IPC. The numbers were extraordinary. The idea was sound. And it went almost nowhere in production for nearly two decades.
The reason was practical, not conceptual. Building a library OS from scratch meant reimplementing the entire software stack: networking, storage, memory management, scheduling. You couldn't reuse existing Linux drivers. You couldn't run unmodified software. The engineering cost was prohibitive, and the cloud infrastructure to host the result didn't yet exist.
MirageOS: The Concept Made Real
Fast forward to 2013. Anil Madhavapeddy and colleagues at the University of Cambridge published a paper that took the Exokernel insight and made it deployable on commodity cloud infrastructure. They called the result a unikernel.
A unikernel is a specialised, single-address-space machine image constructed by combining application code with a minimal set of OS libraries. The entire software stack — system libraries, language runtime, application — is compiled into a single bootable image that runs directly on a hypervisor. There is no general-purpose OS in between. There is no shell. There is no package manager. There is no process table with anything in it other than your code.
The paper demonstrated working prototypes — a key-value store, an HTTP server, a DNS resolver, an OpenFlow controller — all running as unikernels on Xen. Their DNS server outperformed BIND 9 by 45% and produced an image of 200KB. BIND's equivalent appliance was over 400MB.
MirageOS, the OCaml-based unikernel framework they built, is still actively maintained and used in production. The paper is the founding document of the field. Every subsequent project stands on it.
Why Unikernels Lost the First Round
If the idea was this compelling in 2013, why didn't unikernels win? The same reason the Exokernel didn't win in 1995: the engineering cost was too high relative to the alternatives.
MirageOS required you to write your entire application in OCaml — a fine language, but not the language your team was already using. Other projects existed — HaLVM in Haskell, ClickOS in C++, OSv for JVM workloads — but each imposed significant constraints on what software you could run and required substantial porting effort. You couldn't take an existing application and run it on a unikernel without largely rewriting it.
Meanwhile, Docker won. Containers were not technically superior to unikernels, but they were vastly more accessible. You could take your existing application, write a twelve-line Dockerfile, and ship it. The ecosystem — Docker Hub, Kubernetes, Helm — grew at a speed that made correctness arguments irrelevant. The industry had a working solution. It moved on.
Unikernels didn't lose because the concept was wrong. They lost because the tooling was hostile. The best technical idea consistently loses to the adequate idea with a working developer experience. This is not a permanent condition. Tooling eventually catches up to good ideas.
The State of the Art: 2020-2025
Two things changed the calculus. First, Amazon shipped Firecracker. Second, the Unikraft project published its EuroSys paper.
Firecracker is not a unikernel, but it proves the underlying premise at production scale. AWS built it because they recognised that existing virtualisation technology was not designed for serverless workloads — the event-driven, sometimes milliseconds-long nature of functions and containers demanded something different.
Firecracker is written in Rust, uses KVM, and deliberately excludes most of what a traditional VMM includes. No BIOS emulation. No PCI bus. Five emulated devices. The result: microVMs booting in under 125ms, consuming less than 5MB of memory overhead, with a server capable of launching 150 instances per second. This is the technology running AWS Lambda today, at trillions of invocations per month.
The industry voted. Minimal, purpose-built virtualisation is not a research curiosity. It is the infrastructure that powers modern serverless computing at the largest scale in history.
Then in 2021, Simon Kuenzer and colleagues published the Unikraft paper at EuroSys, winning Best Paper — one of the highest accolades in systems research.
Unikraft tackled the portability problem directly. Rather than requiring applications to be rewritten, it built a modular micro-library OS where every component — scheduler, network stack, filesystem, memory allocator — can be independently included or replaced. The build system composes only what your application actually uses.
The results back the design: per the paper, off-the-shelf applications such as nginx, SQLite, and Redis run 1.7 to 2.7 times faster on Unikraft than as Linux VM guests, in images of around 1MB that need under 10MB of RAM. These are not theoretical benchmarks. They are independently reproducible: the EuroSys paper ships with a full artifact repository. The numbers hold up.
The Timeline: 30 Years to Practical
- 1995: the Exokernel paper (SOSP '95) proposes stripping the kernel to its minimal protective role and moving OS abstractions into application-linked libraries.
- 2013: MirageOS (ASPLOS '13) makes the idea deployable on commodity cloud infrastructure and names the result a unikernel. Docker launches the same year and wins the round on developer experience.
- 2020: the Firecracker paper (NSDI '20) documents minimal, purpose-built virtualisation running AWS Lambda in production.
- 2021: Unikraft (EuroSys '21, Best Paper) attacks the portability problem with a modular micro-library OS.
The Security Argument
Most infrastructure security conversations are about hardening: adding controls, reducing exposure, patching faster. Unikernels offer something categorically different — structural elimination of attack surface rather than hardening against it.
Consider what doesn't exist in a running unikernel. There is no shell. The entire class of remote code execution attacks that pivot to a shell, install tooling, and move laterally through a network — that class is structurally eliminated. There is no package manager, so there is no mechanism to install a backdoor on the running instance. There is no process table containing anything other than your application. There is nothing else to compromise.
The syscall surface is enumerable at compile time and unchanging at runtime. A conventional container exposes the host kernel's full syscall table, several hundred entries on modern Linux. A Nanos unikernel (Nanos is one of the production frameworks surveyed later in this series) running a Go HTTP service exposes around thirty. That number can be profiled, audited, and locked at build time. It does not drift.
For compliance-heavy environments — and the energy sector, financial services, and public sector clients we work with regularly are exactly that — this is a qualitatively different conversation to have with a CISO or an auditor. You are not describing controls that mitigate risk. You are describing an architecture where entire risk categories do not exist.
Containers ask for immutability. Unikernels enforce it. There is no apt upgrade on a running instance. There is no configuration drift. The image that passed your CI pipeline is byte-for-byte identical to what is running in production. Patch management becomes image replacement, not running-system surgery.
The Data Centre Context
The timing argument matters. This isn't just a technical improvement that was always available. The economic context has shifted in a way that makes unikernels more competitive than they have ever been.
Cloud costs have matured past the point where they are obviously cheaper than owned hardware for steady-state workloads. 37signals made this argument loudly and publicly, and the numbers held up. More quietly, the energy sector, financial services, and regulated industries have been building or expanding private and co-located data centres for reasons of sovereignty, latency, and cost for several years.
If you own or co-locate hardware, you own the hypervisor. KVM is production-grade, well-understood, and runs on commodity servers. Your deployment unit is a 10MB bootable image. Your cold start is under 100 milliseconds. Your operational overhead per service is dramatically lower than a Kubernetes cluster.
Unikernels are the deployment model that makes self-hosted infrastructure genuinely competitive with managed cloud — not by compromising on agility, but by delivering faster deploys, smaller artefacts, and stronger security guarantees. You get cloud-like ergonomics on hardware you control, in a jurisdiction you choose.
What Comes Next in This Series
This post has made the case at the conceptual level. The rest of the series gets practical. Post 2 surveys the current landscape in detail — Nanos, Unikraft, MirageOS, OSv, Firecracker, gVisor — and maps what's production-ready against what's research-grade. Post 3 examines why Go is specifically well-suited to this model. Post 4 is the hands-on build: a Go REST service that boots as a unikernel in an afternoon, with benchmarks.
And if you're in Belfast for the next Belfast Gophers meetup, come and watch it boot live.
Bare Metal Thinking (7-part series)
Research & References
- Engler, Kaashoek & O'Toole (1995). Exokernel: An Operating System Architecture for Application-Level Resource Management. 15th ACM Symposium on Operating Systems Principles (SOSP '95).
- Madhavapeddy, Mortier, Rotsos, Scott et al. (2013). Unikernels: Library Operating Systems for the Cloud. 18th ACM ASPLOS. DOI: 10.1145/2451116.2451167.
- Madhavapeddy & Scott (2013). Unikernels: Rise of the Virtual Library Operating System. ACM Queue, 11(11).
- Madhavapeddy et al. (2015). Jitsu: Just-In-Time Summoning of Unikernels. USENIX NSDI '15.
- Agache, Brooker et al. (2020). Firecracker: Lightweight Virtualization for Serverless Applications. USENIX NSDI '20.
- Kuenzer, Badoiu, Lefeuvre, Santhanam et al. (2021). Unikraft: Fast, Specialized Unikernels the Easy Way. EuroSys '21 (Best Paper Award). DOI: 10.1145/3447786.3456248.