The government claims £10 billion in growth. It refused to publish what the scheme will cost. We are engineers. We have read the numbers. This is why we oppose it.
Part 1 traced who built the internet, who profited from it, and why declining platforms see Digital ID as corporate immortality. Part 2 examined the human cost of losing anonymous access. Part 3 made the technical and legal case against the scheme. Part 4 showed that opposition works. Together, those parts examined the open internet as a commons: who built it, who enclosed it, what it gave to real people, and why Digital ID represents a structural threat to anonymous access. Those arguments are moral. They are also correct.
This part makes a different argument. Not about rights. Not about surveillance. About numbers.
We are a software engineering firm. We build distributed systems, real-time platforms, and operational safety infrastructure for regulated industries. We are not civil liberties campaigners. We do not oppose Digital ID because opposition is fashionable, or because we enjoy conflict with government, or because we have a reflexive distrust of all authority. We oppose it because we have read the specification and it is dangerous, and because the economic case being made for it does not survive contact with the data that the same government has published on its own systems.
Specifically: the government is making a ten-billion-pound growth claim for a programme it refuses to cost, built on infrastructure that lost its security certification, administered by departments whose track record on data protection is, by any objective measure, catastrophic.
What follows is the analysis that should have preceded the legislation. It did not. We are writing it anyway.
Why an Engineering Firm Is Writing This
uRadical builds systems in which failure is not an abstraction. We work in operational safety compliance for the energy sector — environments in which a software error does not mean a bad user experience, it means a worker in danger. We build multi-tenant hospitality platforms and real-time entertainment systems. Our clients are small businesses, most of them. The compliance overhead that large enterprises absorb as a rounding error is, for a hospitality operator running three properties or an independent venue running a licensing platform, a material constraint on what they can build and whether they can afford to build it.
We understand, in detail, what centralisation does to systems at scale. We understand why distributed architectures — where data remains with the organisations that hold it, where no single node is a catastrophic failure point — are not merely a preference but a design principle with structural consequences. We understand what happens when that principle is violated.
The architecture of Digital ID, as proposed, violates it.
We also oppose it because we operate in the digital economy, and the digital economy runs on trust. Every time a government system is breached — every time 100,000 citizens discover their tax records have been exposed, every time a flagship government service loses its security accreditation without public explanation — the ambient trust that makes people willing to transact online erodes a little further. That erosion is not abstract. It falls on us. It falls on our clients. It falls on every small business that depends on people being willing to enter their details into a form. A national identity system does not create that trust. It concentrates the risk of destroying it.
The Number They Keep Repeating
The government launched the Data (Use and Access) Act 2025 with a claim: the bill would "boost the UK economy by £10 billion."[1] The broader figure cited by proponents — techUK and the Open Data Institute among them — is between ten and thirty billion pounds in economic growth, driven by increased competition, consumer switching, and innovation.[2]
These figures come from proponents of the scheme, modelling a best-case scenario: high adoption rates, no major security incidents, smooth implementation, and a functioning trust framework. They are the numbers that appear in the press release. They are not the numbers in the risk assessment — because no public risk assessment has been published.
When an independent body attempted to cost the programme, the government rejected it. The Office for Budget Responsibility produced an estimate of £1.8 billion. The government's response was not to publish a competing figure or explain an alternative methodology — it was simply to reject the estimate.[3] The House of Commons Science and Technology Committee subsequently criticised the government's "failure to address questions about cost."[4]
This is the structure of the economic case for Digital ID: a maximum growth figure, produced by supporters, under optimistic assumptions; and a rejected cost estimate, with no alternative figure provided. It is not an economic argument. It is a marketing document.
What the Government's Own Data Shows About Its Systems
The digital identity scheme is not being built from scratch. It will be layered on top of GOV.UK One Login — the unified government account system currently replacing over 190 separate login systems — and the GOV.UK Wallet, which stores government-issued documents including driving licences.[5] The security record of that foundation is not reassuring.
In November 2022, the Cabinet Office issued a formal warning to the Government Digital Service about serious data protection failings in One Login. In September 2023, the National Cyber Security Centre issued warnings about significant shortcomings in information security that could increase the risk of data breaches and identity theft. Those warnings were, by the evidence of subsequent events, largely ignored.[6]
The system subsequently lost its security certification when a key technology supplier, iProov, failed to renew its compliance — automatically removing One Login from the official accreditation scheme. The government did not proactively disclose this. It was reported externally.[6]
Meanwhile, in 2025, HMRC suffered a data breach affecting 100,000 taxpayers, costing £47 million in direct losses before investigation, remediation, and system improvement costs were counted.[6]
These are not isolated events. They form a pattern. The pattern is what the economic case for Digital ID refuses to account for.
The Baseline: What Cyber Failure Already Costs
The government's own independent research programme — five separate studies published in 2025 — quantifies what cyber failure already costs before a national identity system adds a single point of catastrophic exposure.
- £14.7bn: estimated annual UK cost of significant cyber attacks (0.5% of GDP)
- £38.1bn: total cybercrime cost to the UK economy in 2024 alone
- £3.29m: average cost of a data breach to a single UK organisation in 2025
- £755m: annual cost of fraud enabled specifically by organisational data breaches
The NCSC managed, on average, one significant cyber incident every two days in the year to September 2025 — incidents defined as having serious impact on essential services, public safety, or economic stability.[7] Forty-three percent of UK businesses reported experiencing a cyber breach or attack in the last twelve months.[8]
The UK is, according to available data, the second-most targeted country in the world for cyber attacks, after the United States.[9] The cybercrime cost of £38.1 billion in 2024 is projected to reach £44.6 billion in 2025 and could climb to £71.9 billion by 2027.[10]
This is the environment into which the government proposes to introduce a national identity infrastructure.
The Problem: Identity Is Not Passwords
When a password database is breached, the response is well understood: force a password reset, notify affected users, implement stronger hashing. The damage is bounded. The mechanism for recovery exists.
Identity does not work this way. Identity attributes — name, date of birth, national identity number, biometric data — cannot be reset. A compromised password is a temporary vulnerability. A compromised identity is a permanent one.
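The distinction can be made concrete in a few lines of code. This is a minimal illustrative sketch, not a model of any real system; the class and method names are ours:

```python
from dataclasses import dataclass, field
import hashlib
import secrets


@dataclass
class Credential:
    """A secret the system issued. If it leaks, rotate it and move on."""
    salt: bytes = field(default_factory=lambda: secrets.token_bytes(16))
    digest: bytes = b""

    def set_password(self, password: str) -> None:
        # Fresh salt on every change; store only the derived digest.
        self.salt = secrets.token_bytes(16)
        self.digest = hashlib.pbkdf2_hmac(
            "sha256", password.encode(), self.salt, 600_000
        )

    def reset_after_breach(self) -> None:
        # A recovery path exists: invalidate the old secret, issue a new one.
        self.set_password(secrets.token_urlsafe(24))


@dataclass(frozen=True)
class IdentityAttributes:
    """Facts about a person. There is nothing to rotate."""
    name: str
    date_of_birth: str
    national_id: str
    # frozen=True is the point: once leaked, these values remain valid
    # for the rest of the person's life. No reset_after_breach() is possible.
```

The asymmetry is the frozen dataclass: the credential has a recovery method, the identity attributes do not.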
This distinction is not theoretical. It is the reason that identity-related fraud is structurally different from other financial crime, and why its costs persist long after the initial breach. Approximately 437,000 people are victims of fraud resulting from organisational data breaches annually in the UK — representing around 11% of all estimated fraud victims — at a current cost of £755 million per year.[11] That figure reflects the current fragmented landscape, where identity data is dispersed across many organisations, each of which holds a partial picture.
A national identity system does not simply aggregate that risk. It concentrates it. A single breach of an identity infrastructure that underpins access to banking verification, employment checks, public services, healthcare records, and age verification creates a single point of catastrophic, unrecoverable failure — for every person whose data it holds.
The government's response to this concern is that the system will be federated — that data will remain with the organisations that currently hold it, with no central database. This is a better architecture than a monolithic central store. It is not the same as a safe one. Privacy researchers at Big Brother Watch have noted that even decentralised systems can behave like centralised ones if a unique identifier links data across platforms — which, by design, a national identity scheme must do in order to function.[5] The federation is in the storage. The centralisation is in the identifier. And it is the identifier that an attacker wants.
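The point about the identifier can also be shown in a few lines. Suppose three independently operated services each keep their own records, federated exactly as promised, but keyed by the same national identifier. The data below is invented for illustration:

```python
# Each service holds only its own partial picture: storage is federated.
bank_records = {"UKID-0001": {"iban": "GB00TEST0001", "balance": 1200}}
health_records = {"UKID-0001": {"gp": "Riverside Practice", "prescriptions": 3}}
age_checks = {"UKID-0001": {"sites_verified": ["example-forum.test"]}}


def profile(uid: str) -> dict:
    """Anyone holding the identifier can rebuild the whole picture,
    even though no central database ever existed."""
    merged: dict = {}
    for store in (bank_records, health_records, age_checks):
        merged.update(store.get(uid, {}))
    return merged


# One identifier links banking, health, and browsing history.
linked = profile("UKID-0001")
```

No central store was built, yet `profile()` behaves exactly as a central store would. That is what it means for the federation to be in the storage and the centralisation to be in the identifier.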
The Aadhaar Precedent
There is a working example of a national digital identity system at scale. It is instructive.
India's Aadhaar system was launched in 2009. It now covers 1.3 billion people, linking biometric data — historically iris scans and all ten fingerprints, now moving to facial recognition — to a unique twelve-digit identifier used to access banking, government services, benefits, and increasingly voting.[13]
In October 2023, uniquely identifiable information including Aadhaar and passport numbers for approximately 850 million Indian citizens was leaked onto the dark web. The World Economic Forum identified it as the largest data breach in the world.[14]
Eight hundred and fifty million people whose national identity data — data that cannot be changed, cannot be reissued, cannot be reset — is now permanently available to anyone with the means and motivation to use it.
India is not an outlier in its technical capacity. It is one of the world's largest producers of software engineers. Aadhaar was designed with the involvement of world-class engineering talent. It was breached anyway — not once, but repeatedly — because large centralised systems of high value are, by their nature, targets of corresponding ambition and resource.
The UK is already the second-most targeted country in the world for cyber attacks. UK public sector organisations take, on average, over 200 days to detect a breach.[15] The question is not whether a UK national identity infrastructure would be targeted. The question is what we lose when it is breached — and whether the projected upside justifies that exposure.
The Innovation Tax on Small Companies
The economic argument for Digital ID tends to be presented as if the only relevant unit of analysis is the national aggregate. It is not.
The Data (Use and Access) Act creates a regulated framework for digital verification service providers, with a trust framework, certification requirements, code of practice, and regulatory oversight. Compliance with this framework is not free. It requires technical preparation, governance realignment, and ongoing regulatory engagement.[16] The implementation burden falls not on the organisations best equipped to absorb it — the large platforms and financial institutions whose lobbying helped shape the legislation — but on the small and medium businesses that must comply with what those organisations negotiated.
There are currently 266 firms operating in the UK digital identity market, 49% of which are small or medium-sized.[17] The compliance architecture being built will not benefit these firms equally. It will benefit the firms large enough to achieve early certification, establish market position, and participate in the ongoing governance process. For smaller operators — the hospitality platform, the independent venue, the regional software consultancy — it creates a moat they did not build and did not vote for.
We have seen this pattern before. It is not incidental to how regulatory capture works. It is how regulatory capture works.
The Asymmetry the Treasury Won't Show You
The upside for Digital ID has been modelled, promoted, and placed in press releases. The downside has not been published. That is not an accident.
The structure of the risk is this. The upside — £10 to 30 billion in growth — is conditional on high adoption, successful implementation, no major breach, and sustained public trust. Each of these conditions is contestable. The government's track record on large IT programmes — a subject with its own extensive literature — does not justify confidence in all four conditions being met simultaneously.
The downside has no ceiling. A breach of national identity infrastructure does not cost what a commercial data breach costs. It costs what a commercial breach costs, multiplied by the number of citizens whose identity data is compromised, plus the systemic fraud enabled over years or decades using that data, plus the cost of litigation and GDPR enforcement action, plus the collapse of public trust in digital government services, plus the economic damage to every business that depends on that trust.
The IBM/Ponemon study puts the average cost of a UK data breach at £3.29 million in 2025.[18] A national identity breach is not an average breach. It is a breach from which there is no recovery path — because the data cannot be revoked, and the citizens whose data was exposed will carry that vulnerability permanently.
This asymmetry — bounded and conditional upside, unbounded and irreversible downside — is the reason the government has not published a risk-adjusted cost-benefit analysis. It is not because the analysis is complex. It is because the analysis does not produce a number the Treasury is willing to put in a press release.
The Specification Is Dangerous
There is a version of digital identity infrastructure that a software engineer could respect. It would be genuinely federated, with no common identifier linking behaviour across platforms. It would be optional in law and in practice, with statutory protections that bind future governments, not political commitments that don't. It would be built on a security foundation that had earned trust through transparency and sustained performance — not one that lost its certification and disclosed the fact only when journalists found out. It would come with a published cost estimate the government hadn't rejected, and a risk-adjusted downside analysis that Treasury was willing to put its name to.
That is not what is being built.
What is being built is a national identity infrastructure on a platform with known security failings, by departments with a documented pattern of data loss, in the second-most cyber-attacked country in the world, with a cost estimate the government refused to publish and a breach scenario that has no recovery path. The projected upside is conditional on four optimistic assumptions holding simultaneously. The downside has no ceiling and cannot be undone.
Aaron Swartz understood that the architecture of a system determines what it enables and who it serves. He spent his life arguing that access to information should not require surrendering your identity to the people who control the gate. The Digital ID scheme does not resolve that tension. It institutionalises it — in legislation, in infrastructure, in a single identifier that links every interaction you have with every service that adopts it.
Engineers do not oppose things for the sake of opposition. We oppose things when the specification is dangerous. We have read this one. It is.
References and Sources
- Department for Science, Innovation and Technology et al. (2024). New data laws unveiled to improve public services and boost UK economy by £10 billion. 24 October 2024. gov.uk
- Kennedys Law (2025). Smart Data Schemes and Digital Services under the Data (Use and Access) Bill 2025. kennedyslaw.com — citing techUK and Open Data Institute, £10–30bn economic growth estimate.
- House of Commons Library Research Briefing (2026). Digital ID. CBP-10369. March 2026. — "The government has rejected a cost estimate from the Office for Budget Responsibility of £1.8 billion." researchbriefings.files.parliament.uk
- House of Commons Science and Technology Committee (2026). Cited in CBP-10369, ibid. — criticism of government's failure to address questions about cost.
- House of Commons Library Research Briefing (2026). CBP-10369, ibid. — GOV.UK One Login; GOV.UK Wallet; Big Brother Watch on federated identifier risk.
- Parliamentary written evidence submission (2025). Response to Question 4: Risks of Digital Identification. committees.parliament.uk — Cabinet Office warning Nov 2022; NCSC warning Sept 2023; One Login certification lapse; HMRC breach 2025 (100,000 taxpayers; £47m direct cost).
- National Cyber Security Centre (2025). NCSC Annual Review 2025. ncsc.gov.uk — 204 significant incidents, average one every two days.
- Department for Science, Innovation and Technology / Home Office (2025). Cyber Security Breaches Survey 2025. gov.uk — 43% of UK businesses experienced a breach or attack in the last 12 months.
- Binary Blue (2024). 20 Eye-Opening UK Data Breach Statistics for 2024. binaryblue.co.uk — citing NCSC and gov.uk sources; UK second-most targeted country globally.
- ITIF (2025). How the Proposed UK Cyber Security and Resilience Bill Can Unlock Growth in the Nation's Cyber Insurance Market. December 2025. itif.org — citing Statista/Petrosyan projections: £38.1bn (2024), £44.6bn (2025), £71.9bn (2027).
- Frontier Economics (2025). Assessing the Feasibility of Modelling the Link Between Data Breaches and Fraud. Commissioned by DSIT; cited in techUK summary. — 437,000 victims; £755m annual cost; 11% of all estimated fraud victims.
- Bloomsbury Intelligence and Security Institute (2026). National Digital IDs: Convenience vs Risk. February 2026. bisi.org.uk
- TechPolicy.Press (2025). Lessons from National Digital ID Systems for Privacy, Security, and Trust in the AI Age. June 2025. techpolicy.press
- Parliamentary written evidence submission (2025). Op. cit. — World Economic Forum Global Risk Report; Aadhaar breach October 2023; 850 million records leaked to dark web.
- ANSecurity (2025). The Real Cost of Cyber Attacks in 2025. September 2025. ansecurity.com — UK public sector average detection time: over 200 days.
- Kennedys Law (2025). Op. cit. — "regulated organisations face substantial compliance obligations requiring technical preparation, governance realignment, and regulatory engagement."
- Office for Digital Identities and Attributes / DSIT (2025). Digital Identity Sectoral Analysis Report 2025. May 2025. gov.uk — 266 firms; 49% SME; £2.1bn revenue; 10,246 FTE; £888m GVA.
- IBM / Ponemon Institute (2025). Cost of Data Breach Report 2025 — UK Edition. Reported in Northdoor (2025). northdoor.co.uk — average UK breach cost £3.29 million.