An honest, opinionated survey of every project worth your attention. What's production-ready, what just raised venture capital, and what the AI agent era means for the field.

If Post 1 made the case that unikernels are worth your attention, Post 2 answers the practical question: which one? The ecosystem is more active than most engineers realise, venture capital has entered the space, and AI agent workloads are providing the production use case the field has waited a decade for.

What follows is an opinionated survey of every project worth your attention, updated to April 2026. The landscape shifted materially in late 2025 — Unikraft raised venture funding and launched a commercial cloud platform, NanoVMs won a DARPA contract, and the AI agent workload argument gave the entire space a new commercial narrative. We'll cover each project, then close with a section on where the field is actually going.

Why 2026 is different

AI agents have an infrastructure profile that maps precisely onto what unikernels have always been good at: they are triggered rather than persistent, require hardware-level isolation because they execute untrusted LLM-generated code, need millisecond cold starts, and scale wildly before returning to zero. The industry spent a decade looking for a workload that justified the unikernel model at scale. AI agents are that workload. Prisma is running over 100,000 isolated PostgreSQL instances on a single machine. Vercel Ventures made their first startup investment in Unikraft. This is no longer a research conversation.

One framing note before the projects. The term "unikernel" covers several related but distinct things: true library OS unikernels where application and kernel compile together (MirageOS, Nanos, Unikraft); ELF-compatible unikernels that run existing binaries without recompilation (Nanos, Unikraft's binary-compat mode); and userspace kernel sandboxes that reduce syscall surface without a hypervisor (gVisor). All address the same underlying problem. They are not the same solution. We'll be precise about which is which.

The Projects

Nanos / NanoVMs
The most accessible path to a Go unikernel today
Production Ready · Go Native · Any ELF Binary

Nanos is a purpose-built kernel that runs any Linux ELF binary as a unikernel — with no rewriting, no porting, no language constraint. You compile your Go binary for Linux, hand it to OPS (the orchestration CLI), and the result is a bootable image. NanoVMs was the first company to produce a Go unikernel and the first to produce a .NET unikernel. For Go developers, this is the most direct on-ramp available.

OPS, the build and run tool, is a single Go binary that handles image creation, local QEMU execution, and deployment to AWS, GCP, Azure, DigitalOcean and bare metal. The developer experience is deliberately close to what you'd expect from a container workflow: build, package, run, push. NanoVMs runs their own production infrastructure — including nanos.org and ops.city — as Nanos unikernels, which is meaningful evidence that the platform handles real workloads.
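To make that workflow concrete, here is a minimal sketch: an ordinary net/http service, compiled as a static Linux binary and handed to OPS. Nothing in the Go code is unikernel-specific; the OPS commands in the trailing comments are illustrative, and exact flags may vary between OPS versions.

```go
// main.go: an ordinary Go HTTP service with no unikernel-specific imports.
package main

import (
	"fmt"
	"log"
	"net/http"
)

func main() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "hello from a unikernel")
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}

// Illustrative workflow (flags may differ by OPS version):
//
//	CGO_ENABLED=0 GOOS=linux go build -o server .
//	ops run server -p 8080    # boot locally under QEMU
//	ops image create server   # build a deployable cloud image
```

The point is how little changes: the source file above is the same one you would run in a container or on bare Linux.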

The kernel is open source under Apache 2.0, though the NanoVMs binary builds require a commercial subscription for organisations with over 50 employees. Support tiers run from $30/month for individual developers to enterprise contracts. NanoVMs has also secured government-level validation: they were selected for Phase II of a DARPA program focused on transitioning legacy software from pre-virtualised environments into unikernel deployments — a signal that the technology is being taken seriously at the level of national defence infrastructure. The licensing model is worth understanding before you build a critical deployment on the free tier.

Cold start for a Go HTTP service: <100ms
Typical Go unikernel image size: 4-15MB
Serving static content vs Linux (GCP benchmark): 2x faster
Unikraft
EuroSys Best Paper 2021 · Linux Foundation · v0.20 (2025)
Production Ready · POSIX Compatible · Modular

Unikraft is the most architecturally rigorous project in the space and the one that moved furthest in 2025-2026. It won Best Paper at EuroSys 2021, is a Linux Foundation hosted project, and in October 2025 raised a $6 million seed round led by Heavybit — with Vercel Ventures participating in their first ever startup investment, alongside Mango Capital, Firestreak, Fly VC and First Momentum Ventures. Simultaneously, the team publicly launched Unikraft Cloud, a managed platform purpose-built for AI-driven workloads. v0.20 (Kiviuq) landed in late 2025 with a substantially rewritten POSIX VFS stack and multiprocess support.

The production numbers are no longer theoretical. Prisma is running over 100,000 strongly isolated PostgreSQL instances on a single machine using Unikraft — a density described by Prisma's CEO as "unheard of in traditional architectures." One engineering team reported taking Go codebases from an average 4-second startup with Docker on GCP to 30ms on Unikraft Cloud. Unikraft Cloud integrates with existing developer workflows — Dockerfiles, Kubernetes, Prometheus, Grafana — removing the tooling objection that slowed adoption in earlier years. Each unikernel instance is managed as a Kubernetes node, making unikernels as operationally familiar as containers for teams already running Kubernetes.

The portability story has matured. Binary-compatibility mode via app-elfloader runs unmodified Linux ELF binaries directly without recompilation. The application catalog includes nginx, Redis, SQLite, Python, Ruby, Go and others. MirageOS shipped its first Unikraft backend in November 2025, running OCaml unikernels on Unikraft with Firecracker as the VMM. The ecosystem is converging on Unikraft as the common substrate — and now has venture capital and commercial customers behind it.

Go cold start on Unikraft Cloud: 30ms (vs 4s on Docker/GCP)
Isolated Postgres instances per machine (Prisma production): 100K+
Seed round (Oct 2025): $6M from Heavybit, Vercel Ventures, Fly VC
MirageOS
The original · Cambridge lineage · OCaml · Actively maintained 2026
Production Ready · OCaml Required · Ideologically Pure

MirageOS is where the field began. The 2013 ASPLOS paper that coined the term "unikernel" was built on MirageOS. Thirteen years later it remains actively developed — the GitHub organisation shows commits through February 2026. MirageOS 4.0 shipped in March 2022, and the project reached OCaml 5 compatibility in early 2025 via the Solo5 backend, followed by a Unikraft backend released by Tarides in November 2025.

The ideological purity is both the strength and the constraint. MirageOS implements the entire OS stack in OCaml: TCP/IP, DNS, TLS, HTTP, storage — all type-safe, all memory-safe, all composable as OCaml modules. The compiler that produces the unikernel image is the same compiler you develop with. There is no C runtime hidden under the covers. The attack surface argument is as clean as it gets.

The constraint is OCaml. For teams not already working in the language — and most Go shops are not — the adoption cost is high. The Unikraft backend partially addresses this by providing POSIX compatibility that allows embedding OCaml libraries that haven't been ported to MirageOS's module interface. But MirageOS remains, at its core, an OCaml ecosystem. If your team writes OCaml, it is the most architecturally honest option available. If not, Nanos or Unikraft are more practical paths.

Firecracker
Not a unikernel · Adjacent · Powers AWS Lambda and Fly.io
Production at Scale · Open Source · Not a True Unikernel

Firecracker is not a unikernel in the strict sense — there is still a minimal Linux kernel inside the microVM. But it belongs in this survey for two reasons. First, it proves the underlying premise at production scale: minimal virtualisation, purpose-built, works. AWS Lambda runs on Firecracker. Fly.io runs on Firecracker. The NSDI 2020 paper describes a system handling trillions of invocations per month for hundreds of thousands of customers.

Second, Firecracker is increasingly the VMM of choice for true unikernel projects. Unikraft supports Firecracker as a backend. MirageOS now supports Firecracker via the Unikraft backend. Nanos supports Firecracker deployments. If you are building a unikernel platform for self-hosted infrastructure, Firecracker is the most battle-tested choice for the hypervisor layer. It is written in Rust, exposes only five emulated devices, and launches microVMs in under 125ms with a memory overhead of less than 5MB.

The correct mental model: Firecracker provides the fast, secure VM foundation. A true unikernel — Nanos, Unikraft — eliminates the Linux kernel inside that VM. Together they give you the full picture: hardware isolation at the hypervisor boundary, no general OS above it.
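For teams assembling that stack themselves, the firecracker-go-sdk drives the VMM programmatically from Go. A minimal sketch follows, assuming a firecracker binary is already on the host and a unikernel image has been built to boot as the microVM's kernel; the paths and kernel args are placeholders, and the Config fields shown are the commonly documented ones rather than an exhaustive setup.

```go
package main

import (
	"context"
	"log"

	firecracker "github.com/firecracker-microvm/firecracker-go-sdk"
	"github.com/firecracker-microvm/firecracker-go-sdk/client/models"
)

func main() {
	ctx := context.Background()

	cfg := firecracker.Config{
		SocketPath:      "/tmp/firecracker.sock",
		KernelImagePath: "./unikernel.img", // placeholder: a Nanos- or Unikraft-built image
		KernelArgs:      "console=ttyS0",
		MachineCfg: models.MachineConfiguration{
			VcpuCount:  firecracker.Int64(1),
			MemSizeMib: firecracker.Int64(128),
		},
	}

	// NewMachine wraps the firecracker process; Start boots the microVM.
	m, err := firecracker.NewMachine(ctx, cfg)
	if err != nil {
		log.Fatalf("create machine: %v", err)
	}
	if err := m.Start(ctx); err != nil {
		log.Fatalf("start microVM: %v", err)
	}
	defer m.StopVMM()

	// Block until the microVM exits.
	if err := m.Wait(ctx); err != nil {
		log.Printf("microVM exited: %v", err)
	}
}
```

Note how the mental model above shows up in practice: the VMM layer stays the same regardless of which unikernel supplies the image.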

microVM boot time: <125ms
Memory overhead per instance: <5MB
microVM launches per host per second: 150
gVisor
Google · Written in Go · Userspace kernel · Not a unikernel
Production at Scale · Written in Go · Different Architecture

gVisor takes a different architectural approach to the same problem. Rather than eliminating the OS, it reimplements the Linux system call interface in Go, running entirely in userspace. The application's system calls are intercepted by gVisor's "sentry" — a Linux-like kernel process — instead of reaching the host kernel directly, which dramatically reduces the host kernel's attack surface. No hypervisor is required.

gVisor is in production at significant scale. Google Cloud Run uses it. DigitalOcean App Platform migrated to it with measurable performance improvements — their testing showed more than double the throughput on Node.js workloads and more than seven times the throughput on PHP workloads after upgrading to gVisor's systrap interception platform. It integrates directly with Docker, Kubernetes, and containerd via the runsc runtime. If you are operating in a containerised environment and need stronger isolation without changing your deployment model, gVisor is the most practical path.

For regulated sector teams — energy, finance, healthcare, public sector — gVisor deserves specific attention in the context of NIS2 and IEC 62443. It does not move your workload outside your existing container infrastructure, which matters when your compliance posture is already built around container tooling. You get a meaningfully reduced syscall surface and memory-safe kernel code without replatforming. That said, gVisor runs on infrastructure you do not control when deployed on managed cloud — for genuine data sovereignty requirements, a self-hosted Nanos or Unikraft deployment on bare metal or co-located KVM is the stronger position.

The distinction from a true unikernel remains important: gVisor still runs a general-purpose kernel, just one written in Go rather than C. You get memory safety and reduced host syscall surface. You do not get the image size, the cold-start performance, or the compile-time syscall enumeration of a proper unikernel. It's a hardening layer within the container model, not an architectural reset of it.

Also worth knowing

OSv — designed for JVM workloads (Java, Scala, Kotlin), published at USENIX ATC 2014, still technically sound but significantly less active than Nanos or Unikraft. The JVM-specific optimisations that were OSv's core value proposition are now largely addressable through Unikraft's binary-compat mode. For Go workloads specifically it offers no advantage and is not recommended for new projects.

HaLVM — Haskell unikernel, Galois Inc., historically significant but effectively unmaintained.

IncludeOS — C++ unikernel, clean architecture, less active since 2021. Worth knowing it exists; not the right choice for greenfield Go work.

The Comparison at a Glance

| Project | Go Ready | Cold Start | Image Size | Self-hostable | Isolation Level |
| --- | --- | --- | --- | --- | --- |
| Nanos / NanoVMs | Yes — any ELF | <100ms | 4-15MB | KVM / bare metal | Unikernel VM |
| Unikraft / Unikraft Cloud | Yes — 30ms | 10-50ms | ~1MB | KVM / Firecracker | Unikernel VM |
| MirageOS | OCaml only | <100ms | <1MB | Xen / KVM | Unikernel VM |
| Firecracker | Yes — microVM | <125ms | Linux kernel | KVM / bare metal | Hardware VM |
| gVisor | Yes — container | Milliseconds | Container size | Self-host or cloud | Userspace kernel |

The Cost Argument

Performance and security are the lead arguments for unikernels. Cost is the one that closes the sale with a CFO or an infrastructure lead. The calculation is more nuanced than most engineers expect — and the crossover point where self-hosted becomes obviously cheaper is lower than most assume.

The headline Lambda pricing looks competitive: $0.20 per million requests, $0.0000166667 per GB-second of compute. What the headline doesn't surface: since August 2025 AWS bills for the INIT phase on cold starts — the ~2% of invocations where Lambda spins up a new execution environment. SQS triggers add $0.40 per million invocations. CloudWatch log ingestion adds $0.50 per GB (and every invocation generates at minimum a START, END, and REPORT line before your application logs a byte). And if you need guaranteed sub-100ms response times you are paying for provisioned concurrency — a standing charge on warm instances whether they are serving requests or sitting idle. These line items compound quickly at scale.

A self-hosted Nanos deployment on a Hetzner AX102-class dedicated server (~€160/month) running hundreds of concurrent unikernel instances on KVM has exactly one line item: the server. Traffic is included. There is no per-invocation charge. There is no cold-start billing. There is no provisioned concurrency overhead, because the unikernel cold-start is already under 100ms. The same hardware handles 10x the load without a larger bill.

Cost is the one dimension where Lambda is genuinely competitive at low volume. On security posture, data sovereignty, and operational simplicity at scale, the self-hosted unikernel model wins regardless of volume. Run the numbers for your own workload, then read the decision guide with all of that in mind.

Why cost isn't the whole story

Security. The unikernel security position is the same at 1M events/month as it is at 1B. No shell means the entire class of RCE-to-shell attacks is structurally eliminated — not mitigated, eliminated. No package manager. No lateral movement surface. Syscall surface enumerable at compile time and locked. Lambda runs inside a Firecracker microVM with a minimal Linux kernel. There is still a shell somewhere in that stack. You are trusting the boundary rather than removing it. For a CISO or a compliance auditor, that is a categorically different conversation.

Sovereignty. Lambda is managed cloud regardless of the pricing. Your function executes on infrastructure AWS controls, in a region AWS operates, on a hypervisor you cannot audit. For NIS2, IEC 62443, or GDPR data residency requirements, that is not a configuration option — it is a structural constraint. Self-hosted unikernels on hardware you own or co-locate give you the compute model Lambda promises (fast cold starts, scale-to-zero, immutable deployments) on infrastructure that stays in your jurisdiction.

The crossover in plain terms

For a typical 256MB/200ms Go service on a single-purpose server, Lambda becomes more expensive than self-hosted at around 84 million events per month — the point at which CloudWatch logs, SQS trigger costs, and compute charges together exceed a dedicated KVM host. But real infrastructure runs multiple services on the same hardware. With 10 services sharing a server, the crossover drops to roughly 8 million events per service per month. Below that Lambda wins on pure cost, especially for truly spiky or unpredictable load. Above it, the same hardware handles further growth without a larger bill: at 500 million events per month the self-hosted option is roughly 84% cheaper. For regulated sector teams where data sovereignty means the compute cannot run on managed cloud, the cost comparison is irrelevant — self-hosted is the only option, and the economics happen to be in your favour.
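The arithmetic behind those figures is simple enough to check. Here is a back-of-envelope sketch using the headline prices quoted above; the roughly 1KB of CloudWatch logs per invocation is our own assumption, and the euro server price is treated as approximately equal in dollars.

```go
package main

import "fmt"

// Back-of-envelope Lambda vs self-hosted crossover for the 256MB/200ms
// Go service discussed above. Prices are the headline figures quoted in
// this post; the log volume per invocation is an assumption.
func main() {
	const (
		gbSeconds   = 0.25 * 0.2   // 256MB for 200ms = 0.05 GB-s per invocation
		computeRate = 0.0000166667 // $ per GB-second
		requestRate = 0.20 / 1e6   // $ per request
		sqsRate     = 0.40 / 1e6   // $ per SQS-triggered invocation
		logBytes    = 1024.0       // assumed ~1KB of CloudWatch logs per invocation
		logRate     = 0.50 / 1e9   // $ per byte ingested ($0.50/GB)
		serverCost  = 160.0        // dedicated KVM host, ~€160 ≈ $160/month
	)

	perInvocation := gbSeconds*computeRate + requestRate + sqsRate + logBytes*logRate

	crossover := serverCost / perInvocation
	fmt.Printf("Lambda cost per million invocations: $%.2f\n", perInvocation*1e6)
	fmt.Printf("Crossover vs one dedicated server:   %.0fM events/month\n", crossover/1e6)
	fmt.Printf("With 10 services sharing the host:   %.1fM events/service/month\n", crossover/1e6/10)
}
```

With these assumptions the single-server crossover lands in the low eighty-millions, consistent with the ~84M figure above; the exact point shifts with your memory size, invocation duration, and log volume.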

Which One Should You Use

The answer depends on what you're optimising for. Here's how to cut through the choice quickly.

If you write Go
Start with Nanos
Build your Go binary for Linux, hand it to OPS, boot it. You'll have a running unikernel in an afternoon. The developer experience is the lowest-friction path from a Go binary to a booting image available today. This is what the Belfast Gophers talk will demonstrate live.
If you want maximum flexibility
Use Unikraft / Unikraft Cloud
The modular architecture, production deployments at Prisma scale, Vercel-backed venture funding, Kubernetes integration, and active research community make Unikraft the most future-proof choice. Unikraft Cloud provides a managed path — Go codebases that cold-started in 4s on Docker/GCP run in 30ms on Unikraft Cloud. Self-host on KVM for sovereignty.
If you write OCaml
Use MirageOS
The most architecturally honest unikernel available. Every layer is type-safe OCaml. The Unikraft backend (shipped November 2025) now gives you Firecracker as a VMM option. If your team already writes OCaml, there's no stronger option on any dimension.
If you're building the platform
Nanos or Unikraft + Firecracker
If you're building unikernel infrastructure for your organisation — an image registry, a scheduler, a deploy pipeline — pair Firecracker as your VMM with Nanos or Unikraft as the unikernel layer. You get battle-tested isolation at the hypervisor boundary with a proper unikernel above it.
If you operate in a regulated environment — energy, finance, healthcare, public sector
Nanos or Unikraft, self-hosted on bare metal or co-located KVM
This is the sovereignty path. Managed cloud options (Unikraft Cloud, AWS Lambda) give you the unikernel performance story but the compute leaves your jurisdiction. For NIS2 compliance, IEC 62443 OT environments, or GDPR data residency requirements, the only defensible position is hardware you own or co-locate, running KVM you control, deploying unikernel images that you build and attest internally. The attack surface argument — structural elimination of the shell, package manager, and lateral movement surface — lands most powerfully with a CISO when they can see the hypervisor inventory. You cannot see that on a managed cloud. Nanos running on your hardware gives you the complete picture: auditable images, known-good boot state, no SSH, no shell, no drift, jurisdiction you chose. That is a fundamentally different conversation with an auditor than "we've hardened our containers."
The honest caveat

None of these projects should go straight to production: run one in staging and measure it first. The benchmarks are real but your workload is specific. Observability deserves deliberate design — there is no SSH, no shell, no strace. Post 6 in this series covers running unikernels seriously in production. Read it before you go live.

Where the Field Is Going in 2026

Something shifted in late 2025 and early 2026 that changes the trajectory materially: the AI agent workload argument.

The profile is worth restating: AI agents are triggered rather than persistent, need hardware-level isolation because they execute untrusted LLM-generated code, must cold-start in milliseconds to be useful, and scale wildly with demand before returning to zero. As argued at the top of this post, that maps precisely onto what unikernels have always been good at. AI agents are the workload the field spent a decade searching for.

The New Stack put it directly in a February 2026 piece: AI's crushing demands on infrastructure may warrant a serious look at unikernel technology. Vercel's Guillermo Rauch, in announcing their investment in Unikraft, framed it clearly: when code is generated by AI and deployed automatically, the infrastructure behind it must be radically faster and lighter. That is not a research hypothesis. It is a VC investment thesis backed by production deployments.

Three trends are converging in 2026 that make this the most important moment the field has seen.

The first is commercial legitimacy. Unikraft has venture capital, a commercial cloud platform, production customers, and Kubernetes integration. NanoVMs has a DARPA contract and US Air Force revenue. MirageOS has active development through 2026 and a Unikraft backend. These are not research projects hedged with "not production-ready" caveats. They are commercial products.

The second is tooling maturity. The developer experience problem — the reason unikernels lost to Docker in 2013 — is largely solved. You can deploy a Go service to Unikraft Cloud using a Dockerfile. You can run an existing ELF binary on Nanos with a single OPS command. Kubernetes manages unikernel instances the same way it manages containers. The debugging story is still harder than containers, but it is no longer the blocker it was.

The third is the AI agent workload itself. The FaaS programming model — stateless, event-driven, scale-to-zero — was always a better fit for unikernels than for containers. The AI agent era is forcing the industry to take the FaaS model seriously at a scale and isolation level that containers cannot safely provide. Unikernels are the natural beneficiary.

What remains genuinely unsolved: observability tooling depth, standardised image formats across projects, and ecosystem breadth in application support. These are engineering problems being actively worked. They are not conceptual blockers. Post 6 in this series covers running unikernels seriously in production — the observability design, the debugging discipline, the image rotation policy.

Post 3 looks at why Go specifically is well-positioned for this model — the runtime assumptions, the stdlib completeness, and the minimal-dependency philosophy that maps naturally onto what a unikernel demands.