A real-time music bingo platform for entertainment companies and the host networks they run — deployed internationally, and consistently rated above competing platforms by the people running the games.

Sector: Live Entertainment
Buyer: Entertainment companies
Engagement: Contractor → CTO
Reach: UK, EU & US

The Problem

Running a live music bingo night sounds simple. Pick a playlist, hand out cards, play tracks, mark off the songs as they play, call winners. In practice, the host running the game is doing seven things at once: managing the music, watching the room, calling numbers, validating wins, taking payment for entry, keeping the energy up, and dealing with whatever the venue throws at them. Anything the technology gets wrong becomes their problem in front of a paying audience.

The platforms that existed when Music Bingo Live started were built for a different time. Manual playlist preparation. Disconnected card generators. Fragile audio handling. Web apps that worked on the host's laptop in rehearsal and broke under real venue Wi-Fi on a Friday night. Hosts ended up duct-taping together Spotify, a card printer, a stopwatch, and a spreadsheet — and the platforms charging them subscriptions were not actually doing the hard part of the job.

The hard part of the job is reliability under pressure. Wi-Fi will drop. Audio devices will glitch. Players will join late, leave mid-game, lose their cards, or claim a win that was not actually a win. The host needs the software to handle all of that without ever becoming the thing they are thinking about.

Audiences: Entertainment companies · Operators & host networks · Bars & venues · Fundraisers · Corporate events

Why It Mattered

Live music bingo is a high-margin, high-frequency entertainment format. A single host running two nights a week, ten months a year, is delivering more than eighty events. Multiply that by a network of hosts and venues across multiple countries and you have a platform that runs thousands of events a year — every one of them a small live production where, if anything goes wrong, there is no second take.

For the business, that meant the platform was not a marketing site with a card generator bolted on. It was the operational backbone of a recurring entertainment service. The product had to support the host before, during, and after every event: licence-aware playlist building beforehand, live game management with players' phones as remote controllers on the night, and clean payment, ticketing, and reporting around it.

For the engagement, it meant something specific too. What started as a contract to help with a feature became responsibility for the whole technical direction. The platform's architecture, deployment topology, third-party integrations, and reliability story all became things to design and own — and then things to defend when international rollout pushed the load and the audience profile beyond the original scope.

The requirement was specific: a platform a non-technical host could rely on under live conditions, that scaled cleanly across venues and countries, with a deployment story simple enough to operate without a platform team behind it. Reliability was not a feature — it was the product.

What We Built

Music Bingo Live is a multi-surface live entertainment platform. Each surface is a distinct product with a distinct reason to exist:

  • Music Bingo Live (PWA): Vanilla JS · Web Components · Vite · PWA
    The host application. Loads on any device, works offline once installed, drives the game from a single screen — playlist, audio, room, players, calls, winners.
  • Real-time WebSocket Server: Node · Express · ws
    Game rooms with hosts and players. Manages joins, leaves, calls, and win claims with low-latency event delivery. Players' phones become live remote controllers.
  • REST API: Go · SQLite · JWT
    Single-binary backend. Account management, persistence, billing integration, and the playlist library. Boring, fast, durable — the tier that absolutely cannot fail under load.
  • Marketing Website: Static · Vite · CDN-friendly
    Vertical-specific landing pages for DJs, venues, fundraisers, and players, plus how-to-host and how-to-play guides. Every audience finds the page that talks to them.
  • Playlist Builder: Go · Wails · Spotify Web API
    Desktop tool for hosts to assemble, verify, and export licence-aware playlists from their Spotify libraries. Native binary, no host-side install pain.
  • Operations Tooling: Go · Bash · systemd
    Game updater, store updater, voice-over generator, playlist verifier — the unglamorous internal tooling that keeps the catalogue fresh and the platform shippable.

The host application is a Progressive Web App — installable on any modern device, working offline once cached, and behaving like a native app when the venue's network inevitably wobbles. It uses the Spotify Web API via OAuth2 with PKCE so hosts authenticate against their own Spotify account, and licensing stays where it belongs: with the host and the platform they already pay for.
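
As a rough illustration of what that flow involves, the sketch below derives the PKCE code_verifier and S256 code_challenge pair. It is written in Go to keep all the sketches on this page in one language; in the PWA itself the equivalent happens client-side before the host is redirected to Spotify's authorize endpoint, and none of the names here are taken from the production code.

    // Minimal PKCE sketch (RFC 7636): generate a code_verifier, derive the S256
    // code_challenge sent on the authorize request. Illustrative only.
    package main

    import (
        "crypto/rand"
        "crypto/sha256"
        "encoding/base64"
        "fmt"
    )

    // newVerifier returns a high-entropy, URL-safe code_verifier.
    func newVerifier() (string, error) {
        b := make([]byte, 32)
        if _, err := rand.Read(b); err != nil {
            return "", err
        }
        return base64.RawURLEncoding.EncodeToString(b), nil
    }

    // challengeS256 derives the code_challenge from the verifier.
    func challengeS256(verifier string) string {
        sum := sha256.Sum256([]byte(verifier))
        return base64.RawURLEncoding.EncodeToString(sum[:])
    }

    func main() {
        verifier, err := newVerifier()
        if err != nil {
            panic(err)
        }
        // The challenge goes on the authorize URL with code_challenge_method=S256;
        // the verifier is kept locally and sent later on the token exchange.
        fmt.Println("code_verifier: ", verifier)
        fmt.Println("code_challenge:", challengeS256(verifier))
    }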

The WebSocket server is the live backbone. When players scan a code at a venue, their phones join the room run by that night's host. Card state, calls, and win claims flow over WebSockets, so the host sees players in real time and players see the game state without refreshing. It is intentionally a thin server — most of the game logic runs on the host's device because that is the only device guaranteed to be in the room.
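
The room model is simple enough to sketch. The outline below shows the thin fan-out the server actually performs: track who is in the room, tell the host about joins, and relay host events to every player. It is written in Go for consistency with the other sketches here (the production server is Node with ws), and the event names are illustrative rather than the real protocol.

    // Illustrative room fan-out; transport, event kinds and field names are assumptions.
    package main

    import (
        "fmt"
        "sync"
    )

    // Event is what flows between host and players: joins, calls, win claims.
    type Event struct {
        Kind    string // e.g. "join", "call", "win-claim"
        Payload string
    }

    // Conn abstracts one live connection (a WebSocket in production).
    type Conn interface {
        Send(Event)
    }

    // Room ties one host to the players who scanned that night's code.
    type Room struct {
        mu      sync.Mutex
        host    Conn
        players map[string]Conn
    }

    func (r *Room) Join(id string, c Conn) {
        r.mu.Lock()
        defer r.mu.Unlock()
        r.players[id] = c
        if r.host != nil {
            r.host.Send(Event{Kind: "join", Payload: id}) // host sees players arrive in real time
        }
    }

    // Broadcast fans a host event (a call, a validated win) out to every player phone.
    func (r *Room) Broadcast(e Event) {
        r.mu.Lock()
        defer r.mu.Unlock()
        for _, p := range r.players {
            p.Send(e)
        }
    }

    // logConn stands in for a real socket so the sketch runs on its own.
    type logConn struct{ name string }

    func (l logConn) Send(e Event) { fmt.Printf("%s <- %s %s\n", l.name, e.Kind, e.Payload) }

    func main() {
        room := &Room{host: logConn{"host"}, players: map[string]Conn{}}
        room.Join("player-1", logConn{"player-1"})
        room.Broadcast(Event{Kind: "call", Payload: "track-42"})
    }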

The REST API is the durable tier. A single Go binary, SQLite for persistence, JWT for auth. Account management, billing, the master playlist library — anything that has to survive across sessions. It runs on a single small VM and has not needed to be more than that.
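
A minimal sketch of what that tier looks like, under stated assumptions: the mattn/go-sqlite3 driver, the golang-jwt library, a secret injected at build time rather than read from an environment file, and illustrative route and table names.

    // Single-binary API sketch: net/http, SQLite, JWT-checked requests.
    // Library choices, routes and table names are assumptions for illustration.
    package main

    import (
        "database/sql"
        "fmt"
        "log"
        "net/http"
        "strings"

        "github.com/golang-jwt/jwt/v5"
        _ "github.com/mattn/go-sqlite3"
    )

    // buildSecret is injected at build time, e.g. -ldflags "-X main.buildSecret=...",
    // so no environment file ever lands on disk.
    var buildSecret string

    // withAuth rejects requests without a valid bearer token before they reach the handler.
    func withAuth(next http.HandlerFunc) http.HandlerFunc {
        return func(w http.ResponseWriter, r *http.Request) {
            raw := strings.TrimPrefix(r.Header.Get("Authorization"), "Bearer ")
            tok, err := jwt.Parse(raw, func(t *jwt.Token) (any, error) {
                return []byte(buildSecret), nil
            })
            if err != nil || !tok.Valid {
                http.Error(w, "unauthorized", http.StatusUnauthorized)
                return
            }
            next(w, r)
        }
    }

    func main() {
        db, err := sql.Open("sqlite3", "mbl.db") // file name is illustrative
        if err != nil {
            log.Fatal(err)
        }
        defer db.Close()

        http.HandleFunc("/api/playlists/count", withAuth(func(w http.ResponseWriter, r *http.Request) {
            var n int
            if err := db.QueryRow(`SELECT COUNT(*) FROM playlists`).Scan(&n); err != nil {
                http.Error(w, "storage error", http.StatusInternalServerError)
                return
            }
            fmt.Fprintf(w, `{"playlists":%d}`, n)
        }))

        log.Fatal(http.ListenAndServe(":8080", nil))
    }

The point is less the specific handler than the shape: one process, one file on disk for state, and nothing that needs an orchestrator to keep it alive.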

How We Built It

Architecture

  • Host App: Vanilla JS + Web Components + Vite — installable PWA, offline-capable
  • Real-time: Node + ws WebSocket server — low-latency room and event delivery
  • API: Go + SQLite — single static binary, no runtime dependencies
  • Auth: Spotify OAuth2 + PKCE for hosts, JWT for platform sessions
  • Front Door: lugh — uRadical's reverse-proxy / WAF, single binary, single config
  • Deploy: systemd + rsync, versioned, reversible — every deploy is a git tag and a tarball
  • Hosting: EC2, no environment files on disk, secrets baked into build artefacts
  • Payments: Stripe for billing, server-side intent confirmation
  • Observability: Sentry for error reporting, structured logs in journald

The architecture is deliberately polyglot. Go for the durable API tier — single binary, no GC surprises under load, no dependency creep. Node for the WebSocket server — ws is the most battle-tested real-time stack in the language with the widest event-driven ecosystem. Vanilla JS with Web Components for the host app — a deliberate rejection of framework churn for a product that has to keep working in five years' time as well as today.

Each surface uses the right tool for its job, not a single stack stretched to cover all of them. The cost of polyglot — one more language to keep current — is paid once and recovered every time a service does the right thing for its problem rather than the convenient thing for the team.

Deployment is intentionally boring. Each service has its own systemd unit, its own working directory, its own deploy script. There is no Kubernetes cluster. There is no orchestration plane. There is no service mesh. There is a small EC2 instance, four systemd services, and lugh on the front door routing requests to the right place. New environments — including the international rollouts — provision in minutes from a single bootstrap script.

The host application is the most opinionated decision in the stack. A Progressive Web App rather than a native iOS/Android pair removed an entire category of problems: app store review cycles, native build pipelines, and the divergence in feature parity that always creeps in when you ship to two app stores at once. The trade-off — slightly less native polish — is invisible to a host running a game in a pub. The reliability and shippability gains are not.

The Outcome

  • 50+ active hosts running on the platform via partner networks
  • Daily live events across time zones, every day of the week
  • 5+ countries served: the UK, Ireland, Spain, the wider EU, and the United States
  • B2B distribution through entertainment-company partners, not direct-to-host

The platform is now used by hosts running music bingo nights across the UK, Ireland, Spain, the wider EU, and the United States — with active hubs in cities including Chicago and New York. Hosts who have used competing platforms consistently report Music Bingo Live as the more reliable one to run in front of a paying room — the platform that does not become the host's problem when something goes wrong, because the platform was built to assume something will.

The architecture decisions made early — single-binary API, polyglot by surface, sovereign deployment, no orchestration overhead — are the decisions that made international rollout a small operational change rather than a re-platforming exercise.

What Made the Difference

Three architectural decisions matter most, in retrospect:

Polyglot by surface, not by team preference. Each service uses the language and runtime best suited to its workload. The API tier is Go because the durable tier needs a single static binary with predictable behaviour. The WebSocket server is Node because its event-loop model and ws library are the right shape for that specific job. The host app is vanilla JS with Web Components because a six-year-old framework choice should not be the reason a host's app stops working in 2030.

The host's device runs the host's game. Most of the live game logic lives on the device in the room — the device with the music, the speakers, and the host's eyes on it. The server is a fan-out and a referee, not a single point of failure. When a venue's network drops mid-song, the music does not stop, the cards do not freeze, and the players' phones reconnect cleanly when the network returns. That behaviour is not a graceful-degradation feature. It is the consequence of putting the logic in the right place from the start.

Reversibility everywhere. Every deployment is a git tag. Every build is a tarball. The npm scripts include a rollback target. The Go binary is versioned and replaced in place. The platform can be rolled back to any prior release with a single command. For a live entertainment service, "we can fix it in fifteen minutes" is the difference between a Saturday night incident and a Saturday night problem.

How We Stayed Out of Trouble

Most case studies have a "lessons learned" section because most engagements have a moment where something cracked, the team rebuilt, and the rebuild taught them something. This one doesn't, because the engagement was structured to avoid that pattern from the start.

Small feedback loops. Iteration tight enough that what shipped on Monday was being talked about Tuesday and adjusted by Friday. Internal dogfooding — "music bingo night" was something we ran for fun, which meant the problems a host would hit on a Saturday were problems we hit first. And a stack kept deliberately minimal — single-binary API, no orchestration plane, no message bus, no service mesh. Every component that isn't in the system can't fail.

The places we got things wrong were small enough to fix in the next iteration. That isn't luck. It's the consequence of three choices held to deliberately: keep the loop short, eat your own product, and add infrastructure only when the architecture demands it rather than when fashion does. That's our way.

Music Bingo Live demonstrates what happens when an engagement is allowed to evolve into ownership and the architecture is allowed to evolve with the product. The platform is reliable because it was designed for the specific reliability profile of live entertainment, not for a generic SaaS template. The engagement turned into a CTO role because the right architectural decisions were available to make, and the team made them.

If you have a product where reliability under live pressure is the actual feature, and you want a team that will treat that as a first-class design constraint — we should talk.