A real-time entertainment and licensing platform for multi-property hospitality operators — built single-binary, deployed sovereign, and architected so a single operator can run dozens of properties without an ops team behind them.
The Problem
An operator running multiple hospitality properties — short-term rentals, boutique hotels, holiday lets, serviced apartments — has to deliver a consistent guest experience across every door without standing up a platform team. Each property has its own check-in flow, its own house rules, its own entertainment options, its own licensing position with the local music and broadcast authorities. Operators were duct-taping it together with PDFs, paper welcome books, third-party concierge SaaS that did not understand licensing, and a folder of contracts nobody could find when the auditor turned up.
The platforms that existed were either generic property-management systems with bolted-on guest portals, or single-property tools that fell apart the moment an operator added a second venue. Neither understood that the operator's actual problem was running n properties as a single business while keeping each property's data, branding, and compliance position cleanly separate.
The hard part of the job — the part nobody was solving — was doing all of that without exposing the operator's whole portfolio to a single vendor's outage, pricing change, or data-handling decision. The operator does not want a tighter integration with someone else's cloud. The operator wants their guests' data on infrastructure they can point at.
Why It Mattered
Hospitality is a regulated business with operational margins that don't tolerate platform tax. Music and broadcast licensing in the UK and EU is a per-venue compliance question with real penalties; data-protection law treats each property's guest record as personal data with all the obligations that brings; and a single operator running ten venues is, in regulatory terms, ten separate sites of activity that have to be defensible individually.
The platform had to support all of that as the default behaviour, not as an enterprise add-on. Per-property data isolation isn't a feature an operator on tight margins should pay extra for — it's the architecture the system has to start with.
For the engagement, the brief was specific: ship a multi-tenant platform that scales linearly with the operator's portfolio, costs almost nothing to operate, and never requires the operator to ask permission of a cloud vendor to add the next property to the system. The architecture pattern that's now described in How We Build: Single-Binary Multi-Tenant Services emerged from this engagement.
The constraints were equally concrete: per-property data isolation as the default, single-binary deployment, no orchestration plane, no managed-service dependency. An operator with twenty properties should pay for one VM, not for an ever-growing line item against a serverless bill that scales with their success.
What We Built
MyWelcomeBook is a multi-tenant hospitality platform delivered as a single Go binary with per-property SQLite databases. Each property gets its own isolated data store on the same host, addressed by tenant context derived from the request — so the operator's portfolio scales by adding rows to a registry, not by provisioning new infrastructure.
The operator console and the guest welcome app are both Progressive Web Apps — installable, offline-capable, brand-themed per-property at runtime. The guest never knows the welcome book they're using is the same platform their host's other twenty properties use; what they see is the venue's branding, the venue's house guide, the venue's recommendations.
The platform API is the durable tier. A single Go binary, no runtime dependencies, talking to per-tenant SQLite files on the same host. Per-property data isolation is not enforced by row-level filters in a shared schema — it is physical: each tenant has its own database file, its own backup, its own life-cycle. Compromising one tenant's data does not put any other tenant at risk, because there is no shared store to traverse.
How We Built It
Architecture
- API: Go + SQLite per tenant — single static binary, no runtime dependencies
- Console: Vanilla JS + Web Components + Vite — installable PWA
- Guest App: Vanilla JS + Web Components — per-property themed PWA
- Real-time: Server-Sent Events on the platform API — no separate message broker
- Auth: JWT for platform sessions, per-tenant operator scoping
- Front Door: lugh — uRadical's reverse proxy / WAF, single binary, single config
- Tenancy: Per-property SQLite file, registry-keyed, physically isolated on disk
- Deploy: systemd + rsync, versioned, reversible — every deploy is a git tag and a tarball
- Hosting: Single VM, no environment files on disk, secrets baked into build artefacts
- Backups: Per-tenant snapshots; restore one property without touching any other (sketched below)
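The Backups row deserves a concrete shape. Below is a minimal sketch of a per-tenant snapshot, assuming the pure-Go modernc.org/sqlite driver (which keeps the binary static) and illustrative names like snapshotTenant; it is not the platform's actual backup tool.

```go
// backup.go: a sketch of a per-tenant snapshot when each property is one
// SQLite file. Paths and names are illustrative.
package main

import (
	"database/sql"
	"fmt"
	"path/filepath"
	"time"

	_ "modernc.org/sqlite" // pure-Go driver; registers as "sqlite"
)

// snapshotTenant writes a consistent, compact point-in-time copy of one
// property's database. VACUUM INTO (standard SQLite, 3.27+) produces a
// transactionally consistent snapshot without blocking readers.
func snapshotTenant(dataDir, backupDir, tenantID string) (string, error) {
	src := filepath.Join(dataDir, tenantID+".db")
	dst := filepath.Join(backupDir,
		fmt.Sprintf("%s-%s.db", tenantID, time.Now().UTC().Format("20060102T150405Z")))

	db, err := sql.Open("sqlite", src)
	if err != nil {
		return "", err
	}
	defer db.Close()

	// The destination path is bound as a parameter, so tenant IDs never
	// reach the SQL string directly.
	if _, err := db.Exec(`VACUUM INTO ?`, dst); err != nil {
		return "", err
	}
	return dst, nil
}
```

Restore is the mirror image: close the tenant's handle, copy the snapshot back over that one file, reopen. No other property is touched.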
The defining decision was per-tenant SQLite. The pattern is documented in detail in How We Build: Single-Binary Multi-Tenant Services, but the short version is: each property's data lives in its own file, addressed through a registry keyed on tenant context, opened on demand, closed when idle. The operator's portfolio scales by inserting rows in a registry. Backups are per-property, restores are per-property, breaches (if they happened) would be per-property. There is no shared schema to compromise.
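A minimal sketch of that addressing, assuming a registry table keyed on the request's hostname and the modernc.org/sqlite driver; the schema and names are illustrative, not the production ones:

```go
// registry.go: a sketch of registry-keyed tenant addressing, with one
// SQLite file per property under dataDir and a registry table mapping
// the request's hostname to a database filename.
package main

import (
	"database/sql"
	"errors"
	"net/http"
	"path/filepath"
	"sync"

	_ "modernc.org/sqlite" // pure-Go driver; registers as "sqlite"
)

type Tenants struct {
	registry *sql.DB // the registry database: host -> db_file
	dataDir  string
	mu       sync.Mutex
	open     map[string]*sql.DB // per-tenant handles, opened on demand
}

func NewTenants(registry *sql.DB, dataDir string) *Tenants {
	return &Tenants{registry: registry, dataDir: dataDir, open: map[string]*sql.DB{}}
}

// ForRequest resolves the tenant from the request host and returns a
// handle scoped to that property's file. There is no shared store: a
// handle returned here can only ever see one property's data.
func (t *Tenants) ForRequest(r *http.Request) (*sql.DB, error) {
	var file string
	err := t.registry.QueryRow(
		`SELECT db_file FROM tenants WHERE host = ?`, r.Host,
	).Scan(&file)
	if errors.Is(err, sql.ErrNoRows) {
		return nil, errors.New("unknown tenant")
	}
	if err != nil {
		return nil, err
	}

	t.mu.Lock()
	defer t.mu.Unlock()
	if db, ok := t.open[file]; ok {
		return db, nil // already open: reuse the handle
	}
	db, err := sql.Open("sqlite", filepath.Join(t.dataDir, file))
	if err != nil {
		return nil, err
	}
	db.SetMaxOpenConns(1) // one writer per file keeps SQLite happy
	t.open[file] = db     // an idle-close sweep would evict these; elided here
	return db, nil
}
```

The idle-close sweep is elided, but the shape holds: resolve the tenant, look up the file, open on first use, and hand back a handle that physically cannot see any other property.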
That decision unlocked everything else. No schema migrations across all tenants at once — each tenant's database evolves on its own version. No noisy-neighbour query plans. No row-level security policies to audit. No "but this customer needs different settings" branching scattered across the codebase, because each customer literally has a different file. Compliance, support, and operability all benefit from the same architectural choice.
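Per-tenant versioning costs almost nothing because SQLite already stores a schema version integer in every file. A sketch, assuming PRAGMA user_version as the marker and an illustrative migration list:

```go
// migrate.go: a sketch of per-tenant migrations, each database tracking
// its own schema version in SQLite's built-in PRAGMA user_version.
package main

import (
	"database/sql"
	"fmt"
)

// migrations[i] upgrades a tenant database from version i to i+1.
var migrations = []string{
	`CREATE TABLE guests (id INTEGER PRIMARY KEY, name TEXT NOT NULL)`,
	`ALTER TABLE guests ADD COLUMN checked_in_at TEXT`,
}

// migrateTenant brings one property's database up to the current schema.
// Because every tenant has its own file, a half-finished rollout simply
// leaves tenants at different versions; each file records where it is
// and catches up the next time it is opened.
func migrateTenant(db *sql.DB) error {
	var v int
	if err := db.QueryRow(`PRAGMA user_version`).Scan(&v); err != nil {
		return err
	}
	for ; v < len(migrations); v++ {
		tx, err := db.Begin()
		if err != nil {
			return err
		}
		if _, err := tx.Exec(migrations[v]); err != nil {
			tx.Rollback()
			return err
		}
		// PRAGMA does not accept bound parameters; format the integer in.
		if _, err := tx.Exec(fmt.Sprintf(`PRAGMA user_version = %d`, v+1)); err != nil {
			tx.Rollback()
			return err
		}
		if err := tx.Commit(); err != nil {
			return err
		}
	}
	return nil
}
```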
The real-time layer is intentionally boring. Server-Sent Events on the platform API itself rather than a separate broker — one fewer service to operate, one fewer dependency to upgrade, one fewer port to defend. SSE is sufficient for the operator console's needs: live check-ins, live licensing events, live anomalies, in seconds rather than milliseconds.
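A sketch of how small that tier can be when it lives inside the binary, using nothing beyond net/http; the events channel and payload shape are illustrative:

```go
// sse.go: a sketch of the Server-Sent Events tier inside the platform
// binary. No broker: the handler is the broker.
package main

import (
	"fmt"
	"net/http"
)

// serveEvents streams operator-console events (check-ins, licensing
// events, anomalies) over a single long-lived HTTP response.
func serveEvents(events <-chan string) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Content-Type", "text/event-stream")
		w.Header().Set("Cache-Control", "no-cache")
		flusher, ok := w.(http.Flusher)
		if !ok {
			http.Error(w, "streaming unsupported", http.StatusInternalServerError)
			return
		}
		for {
			select {
			case <-r.Context().Done(): // client went away
				return
			case msg := <-events:
				// SSE wire format: "data: <payload>\n\n"
				fmt.Fprintf(w, "data: %s\n\n", msg)
				flusher.Flush()
			}
		}
	}
}
```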
Deployment follows the same minimalism: a systemd unit, an rsync drop, journalctl logs. There is no Kubernetes cluster. There is no orchestration plane. There is no service mesh. New properties come online in seconds because the platform was designed for that to be a registry insert, not an infrastructure event.
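Concretely, "a registry insert, not an infrastructure event" can look like the sketch below, reusing the illustrative registry schema and migrateTenant helper from earlier; these are not the platform's actual onboarding tools.

```go
// onboard.go: a sketch of adding a property as a registry insert.
// No provisioning, no deploy: one row, one new file, one migration run.
package main

import (
	"database/sql"
	"path/filepath"

	_ "modernc.org/sqlite"
)

// onboardProperty registers a new property and prepares its database.
func onboardProperty(registry *sql.DB, dataDir, host, tenantID string) error {
	file := tenantID + ".db"
	if _, err := registry.Exec(
		`INSERT INTO tenants (host, db_file) VALUES (?, ?)`, host, file,
	); err != nil {
		return err
	}
	// Opening the path creates the file; running migrations takes the
	// new property from version zero to the current schema.
	db, err := sql.Open("sqlite", filepath.Join(dataDir, file))
	if err != nil {
		return err
	}
	defer db.Close()
	return migrateTenant(db)
}
```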
The Outcome
The operator's portfolio runs on infrastructure they can point at. Per-property data isolation is a default of the architecture, not a tier of the pricing. The platform's operating cost scales with the operator's chosen host, not with the platform's success. Adding a property is a sub-second registry write, not a procurement event.
The architecture decisions made early — single binary, per-tenant SQLite, no orchestration plane, sovereign deployment — are the decisions that made the platform cheap to run, easy to audit, and forward-compatible with whatever the operator's portfolio looks like in three years.
What Made the Difference
Three architectural decisions matter most, in retrospect:
Per-tenant SQLite, not shared schema with row-level filtering. Physical isolation gives operational, security, and audit benefits that no row-level policy ever matches. Tenants get their own files, their own backups, their own migration cadence. The mental model collapses to "one database per property" — which is exactly what an operator running multiple properties already thinks.
Single binary, no orchestration. The platform is one Go binary fronted by lugh, deployed by systemd. There is no Kubernetes cluster. There is no Helm chart. There is no service mesh to debug at three in the morning. An operator could host this themselves on a VM they understand, and many do. The deployment story is part of the product, not separate from it.
SSE instead of a message broker. The real-time tier is part of the platform binary, not a separate service. One fewer thing to run, one fewer thing to upgrade, one fewer thing to worry about. SSE is sufficient for the operator's actual real-time needs — the platform is not pretending to be a financial trading system, and the architecture reflects that honestly.
How We Stayed Out of Trouble
Multi-tenant platforms have a specific failure mode: the moment one tenant's needs leak into the shared codebase, every other tenant inherits the constraint. The discipline that kept us out of it was the per-tenant SQLite decision, held early and held hard. Once each property has its own database file, "this customer wants their data formatted differently" is a configuration concern, not a schema migration concern. The instinct to add a "tenant_settings" table that everyone reads from is the instinct that turns a clean multi-tenant platform into a brittle one. We did not.
The other discipline was operational: the platform has to be runnable by the operator, not just by us. Every deployment script, every migration tool, every backup procedure was written so that an operator with a Linux box and a working SSH key could run the platform without our help. That constraint kept the platform honest. Anything that drifted toward "uRadical-only operability" was a sign the architecture had taken a wrong turn.
The places we got things wrong were small enough to fix in the next iteration — usually because the per-tenant isolation contained the blast radius before the issue was visible to other operators or other properties. That isn't luck. It's the consequence of building isolation into the architecture rather than into a policy layer on top of a shared store.
Related Reading
- How We Build: Single-Binary Multi-Tenant Services — the architecture pattern this case study made operational
- Microservices Didn't Fail. People Did. — why a single Go binary was the right answer rather than a service-per-domain breakup
- Why Does Your Web Server Need a Shell? — the philosophy behind the deployment model
- How We Work — uRadical's outcome-driven engagement process
MyWelcomeBook is the platform that proves the per-tenant SQLite pattern is not a thought experiment. It is a multi-tenant hospitality platform running operator portfolios in production, on a single VM, with file-system-enforced isolation between every property. The architecture isn't an aesthetic preference. It is the reason the platform costs almost nothing to operate, takes seconds to onboard a new property, and stays sovereign all the way down to the disk.
If you are building a multi-tenant product and you suspect the right answer is one binary and a registry rather than a Kubernetes cluster and a billing surprise — we should talk.