In January 2013, Aaron Swartz hanged himself in his Brooklyn apartment. He was 26 years old. He had spent the final two years of his life under federal prosecution carrying a potential 35-year sentence — not for violence, not for fraud, not for anything that harmed a living person — but for downloading academic papers and intending to make them freely available.
The papers were funded by public money. The knowledge belonged, by any honest reckoning, to the public. Swartz understood this with a clarity that made him dangerous. His 2008 Guerrilla Open Access Manifesto stated it plainly: "Information is power. But like all power, there are those who want to keep it for themselves." [1]
The people who prosecuted him kept it for themselves. US Attorney Carmen Ortiz drove the case with a zeal that had nothing to do with justice and everything to do with political career-building. The charges were designed to break him into a plea. They did exactly that, though not in the way intended. He broke. The internet lost one of the few people who understood both its architecture and its politics well enough to defend it coherently.
We think about Swartz often. Not as martyrdom — as prophecy. The battle he was fighting then is the battle being fought now, at larger scale, with higher stakes, and with the forces arrayed against open knowledge considerably better organised and considerably better funded than anything he faced.
The Enclosure Continues
Between the 15th and 19th centuries, the English landed gentry systematically enclosed common land that rural communities had farmed for generations. The legal mechanisms were sophisticated. The justifications were always framed as efficiency and improvement. The outcome was the destruction of a commons and its transfer into private hands. The people who lost access to land they had worked for centuries were told it was for their benefit.
What is happening to the internet is the same process applied to knowledge infrastructure. The language has been updated. The logic is identical.
The UK's Online Safety Act, in full enforcement since July 2025, mandates that British citizens submit biometric data — facial scans, passport images, government-issued identity documents — to third-party commercial companies simply to access content online. The stated justification is child protection. The actual mechanism is a national identity layer linked to internet access, outsourced entirely to foreign private firms with no meaningful UK regulatory oversight. [2]
Who are these firms? Reddit uses Persona, which in 2025 raised $200 million led by Peter Thiel's Founders Fund — the same Peter Thiel whose name appears in Jeffrey Epstein's documented connections. Bluesky uses Kids Web Services, owned by Epic Games, which was fined $520 million by the US Federal Trade Commission in 2022 for violating children's privacy laws. X uses AU10TIX, an Israeli company spun out of ICTS International, founded by former Shin Bet intelligence officers and staffed with engineers from Unit 8200 — the military cyber unit whose alumni built the Pegasus spyware. In 2024, AU10TIX was found to have left administrative credentials exposed for over eighteen months. The personal data of millions — names, dates of birth, passport images, facial scans — sat accessible to anyone who found the Telegram post where the stolen credentials had been published in March 2023. [2][14][15][16]
This is the child protection infrastructure a British government chose. Not UK companies. Not sovereign infrastructure. Not anything that could honestly be called critical national architecture. Foreign intelligence-linked firms with documented breach histories and opaque data retention policies, processing the biometric identities of the British public, accountable to no British institution.
Aaron Swartz faced 35 years for downloading academic papers. The architects of this system face no consequences at all.
Follow the Money, Find the Motive
Political decisions of this scale do not happen by accident. They do not happen without commercial beneficiaries. Honest analysis requires naming them.
The most significant figure in the UK's digital identity push is not Keir Starmer. It is Tony Blair, operating through his Tony Blair Institute for Global Change — funded to the tune of £260 million by Larry Ellison, co-founder of Oracle. Oracle ran more than 50 percent of British government financial software during Blair's premiership. The Institute operates a documented revolving door with the current Starmer government. Oracle representatives met UK ministers and officials 29 times in nine months. Ellison has stated his vision plainly: under constant digital monitoring, "citizens will be on their best behaviour." [3][4]
That is not a reassurance. That is a business model.
Blair's name appears in Epstein's documented phone records. The meeting between Blair and Epstein at 10 Downing Street, facilitated by Peter Mandelson, is a matter of public record from National Archives documents. Mandelson's relationship with Epstein was serious enough to contribute to his resignation from the House of Lords. These are not conspiracy theories. They are documented facts that mainstream outlets have largely chosen not to place in the same paragraph. [5]
Beyond Blair: the Bill and Melinda Gates Foundation has committed over $200 million to digital identity infrastructure through the MOSIP platform, deployed across eleven countries. The World Bank's ID4D initiative is building digital identity systems in 60 countries. The World Economic Forum provides the institutional scaffolding under the branding of Digital Public Infrastructure. Gates too is named among Epstein's documented associations, with well-evidenced continued contact after Epstein's crimes were public. [5] None of these organisations are accountable to the populations whose data they are building systems to collect.
The commercial logic is not complicated. A national biometric identity layer linked to internet access is a perpetual revenue stream. Every authentication event is billable. The data generated — not just identity but browsing behaviour, content consumption, and eventually movement and purchase — is the raw material for AI foundation models, the most valuable commodity in the current economy. A system linking every British citizen's internet activity to their verified biometric identity serves those interests in ways that a generation of government contracts cannot.
This is not protecting children. This is enclosure. And the people designing it have told us exactly what it is, if we choose to listen.
What Russia Already Proved
When the UK government floated VPN restrictions in late 2025, the technical community reacted with a specific frustration: not outrage, but the exhaustion of having to explain again why this categorically cannot work. [7]
Russia tried. With the full apparatus of an authoritarian state — mandatory ISP compliance, FSB enforcement, criminal penalties, no meaningful legal challenge possible. VPN usage in Russia has increased every year regardless. Blocked services reappear under new names within days. Obfuscation tools make VPN traffic indistinguishable from ordinary HTTPS. The arms race favours the privacy side because the mathematics of encryption do not recognise legislation. [6]
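That last point — that encryption does not recognise legislation — is a property you can demonstrate in a few lines. The sketch below is illustrative only: real VPN obfuscation layers are far more involved than a one-time pad, but they rest on the same statistical fact, that well-encrypted bytes are indistinguishable from random noise, which is why deep packet inspection cannot classify them by content.

```python
# Illustrative sketch: ciphertext under a uniformly random keystream
# has near-maximal byte entropy, the same profile as any other
# encrypted stream. An inspector can fingerprint structured plaintext;
# it cannot fingerprint this.
import math
import secrets
from collections import Counter

def byte_entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte (maximum is 8.0)."""
    counts = Counter(data)
    total = len(data)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Highly structured "plaintext": trivial for an inspector to fingerprint.
plaintext = b"GET /blocked-site HTTP/1.1\r\nHost: example.org\r\n\r\n" * 1000

# One-time-pad-style encryption: XOR each byte with a random keystream.
keystream = secrets.token_bytes(len(plaintext))
ciphertext = bytes(p ^ k for p, k in zip(plaintext, keystream))

print(f"plaintext entropy:  {byte_entropy(plaintext):.2f} bits/byte")
print(f"ciphertext entropy: {byte_entropy(ciphertext):.2f} bits/byte")
```

The plaintext scores a few bits per byte; the ciphertext sits at the 8.0 ceiling, alongside every other encrypted stream on the wire. This is why blocking efforts fall back on blunt heuristics like endpoint IP addresses — and why those heuristics are routed around within days.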
The UK has none of Russia's authoritarian infrastructure and would face immediate human rights litigation from the moment any serious restriction was imposed. Meaningful VPN suppression would require deep packet inspection across every ISP in the country, app store removals triggering litigation with Apple and Google, and continuous maintenance against obfuscation tools emerging overnight. The cost runs into hundreds of millions. The outcome remains identical for anyone with moderate technical literacy — which includes every teenager this policy claims to protect. [6]
So either the architects do not understand the technology — entirely plausible given the demonstrable digital illiteracy of the ministers involved — or they understand it perfectly and the stated goal was never the real one.
If you are building a national identity infrastructure, and VPNs represent the primary architectural threat to that infrastructure, then suppressing VPNs serves your actual objective regardless of whether it achieves the stated one. The child protection framing is the mechanism that makes the policy politically viable. The surveillance infrastructure is what survives after the children have grown up and the headlines have moved on.
The Russian comparison is not hyperbole. It is structural. The mechanism is identical. Only the branding differs.
The Cost in Things That Actually Matter
Ofcom's Online Safety budget reached £92 million in 2025/26, up from £71 million the year before. Setup costs for the regulatory regime alone were estimated at £169 million by 2024/25. Over 300 public sector contracts in the online safety domain since 2015 account for a further £76 million. The total visible public spend exceeds £300 million and is accelerating — before any VPN enforcement infrastructure is costed. [8][9]
The NHS currently carries 100,100 vacancies, including 25,600 nursing posts. [11] Six million people are on elective waiting lists. In July 2025, nearly 192,000 patients had been waiting over a year for care — a backlog that was supposed to have been eliminated by March. [12] The Health Foundation projects a funding shortfall of £19.8 billion by 2028/29. NHS leaders requested an emergency £3 billion top-up for 2025/26 alone, citing pressures largely outside their control. [10]
£300 million funds roughly 8,000 band 5 nurses for a year. It funds over 8 million GP consultations. It meaningfully addresses a diagnostic waiting list where 22 percent of patients currently wait more than six weeks — against a target of 5 percent. [12]
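The arithmetic behind those comparisons is back-of-envelope, and worth showing as such. The unit costs below are assumptions for illustration — a fully loaded band 5 nursing post at roughly £37,500 a year and a GP consultation at roughly £37 — not official figures.

```python
# Back-of-envelope arithmetic for the trade-off above.
# Both unit costs are assumed round figures, not official data.
VISIBLE_SPEND = 300_000_000    # visible Online Safety spend to date, GBP
BAND5_LOADED_COST = 37_500     # assumed band 5 salary plus on-costs, GBP/year
GP_CONSULTATION_COST = 37      # assumed cost per GP consultation, GBP

nurse_years = VISIBLE_SPEND // BAND5_LOADED_COST
consultations = VISIBLE_SPEND // GP_CONSULTATION_COST

print(f"{nurse_years:,} band 5 nurse-years")    # 8,000
print(f"{consultations:,} GP consultations")    # just over 8 million
```

Shift the assumed unit costs by 20 percent in either direction and the conclusion does not move: the visible spend buys thousands of nurses or millions of consultations.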
These are not rhetorical comparisons. They are the actual trade-off being made. The political will to fund biometric surveillance exists. The political will to fund the people dying on waiting lists apparently does not. That is a choice, and it is being made by the same people who tell us the surveillance is for our benefit.
The Credibility Collapse and Its Wolves
There is a straight line between these policies and the political movements that serious people are frightened by. It runs through credibility, and it is not complicated.
The BBC, the broadsheets, the political establishment — these institutions have been demonstrably wrong, or demonstrably partial, on issues large enough that ordinary people noticed. Iraq. The financial crisis. COVID policy. Each failure didn't just cost credibility on that issue. It retroactively contaminated everything, because trust is not modular. Once an institution lies about something important, rational scepticism towards everything else it says becomes structurally indistinguishable from conspiracy thinking — and there is no clean way to separate one from the other from the outside.
When you then restrict access to alternative information — when you mandate biometric identity checks to read news, when you threaten to ban the tools people use to find uncurated sources, when you build age-gating infrastructure that teaches an entire population their government doesn't trust them with the internet — you do not reduce the conspiracy thinking. You validate it. Every restriction is confirmation that whatever is being restricted is worth finding. You hand the Farages of the world the single most powerful argument available: that if the establishment is trying to stop you seeing something, that is precisely the reason to see it. [13]
This is where the analysis has to be honest rather than comfortable. Reform and Trump are not simply the passive beneficiaries of institutional failure. They are actively weaponising the same technologies the establishment is trying to lock down, and doing it with considerably more sophistication than they are given credit for.
Cambridge Analytica was not an aberration. It was the proof of concept. Psychographic profiling at scale, micro-targeted messaging, behavioural manipulation via harvested social data — that operation was the first fully realised deployment of the surveillance capitalism toolkit for explicit political control. The profound irony is that the data collection infrastructure now being handed government identity contracts through the Online Safety Act is the same infrastructure that was weaponised to elect the politicians those companies now lobby. The circle is complete. Nobody in it is the good guy.
Farage presents as the anti-establishment outsider while running one of the most sophisticated data-driven political operations in British history. Trump presents as the enemy of tech oligarchs while his administration is staffed by them. Musk presents as a free speech absolutist while owning the platform and personally controlling the algorithm that determines what speech reaches whom. These are not coincidences of personality. They are a coherent strategy: claim the language of freedom while building the infrastructure of control, with different branding than the establishment version but an identical destination.
This is not a defence of Reform or Trump. It is a warning about them. They are wolves in clown clothing — appeals to entirely legitimate grievances from people with entirely illegitimate agendas, using the tools of openness to advance the cause of a different, more charismatic closure. The disenfranchised population that Labour's 1984-adjacent digital policies are creating is not being liberated by the people offering to harvest their anger. It is being redirected.
Labour's approach is draconian and visible. You can see the Online Safety Act. You can name the contractors. You can measure the NHS trade-off. Reform's approach is draconian and invisible, operating through algorithmic manipulation, engagement-maximising radicalisation pipelines, and emotional targeting of people with nowhere else to turn. Both roads lead to the same building. One is grey concrete you can photograph. The other is a carnival that pulls up outside it.
The answer to both is the same: an informed population with access to the full information landscape, the tools to evaluate it critically, and the technical literacy to understand when it is being manipulated. Which is precisely what every piece of legislation discussed in this article is designed to prevent.
What Technology Actually Wants
Kevin Kelly's What Technology Wants is, for those of us who build things for a living, about as close to a foundational text as our discipline produces. [17] Not because it is uncritical — Kelly is clear-eyed about technology's costs — but because it refuses the false binary between techno-utopianism and techno-fear that makes every public conversation about this subject almost entirely useless.
Kelly's argument is that technology is not separate from nature. It is the seventh kingdom of life, following the same evolutionary pressures toward diversity, complexity, and increasing possibility that biology follows. It was not invented so much as discovered — the steam engine and the transistor were latent in the physical world before any engineer found them, the way a mathematical theorem is true before any mathematician proves it. Its trajectory is not controlled by individual actors or companies, any more than evolution is controlled by individual organisms. Microsoft did not invent personal computing and could not have prevented it. Google did not invent the internet and could not have owned it permanently. Amazon did not create distributed systems and cannot wall them off forever. The technology moves through them and beyond them regardless.
This matters because it dissolves the anxiety on which the entire enclosure project depends. The argument for surveillance infrastructure is always some version of: technology is dangerous, therefore it must be controlled, therefore control must be centralised, therefore the centralisation must be trusted. Each step depends on the previous one, and the first step is wrong. Technology is not dangerous in the way a weapon is dangerous — pointing in a direction and causing harm to whatever it faces. It is generative, like fire or language, capable of both destruction and civilisation depending on what you do with it, and historically producing vastly more of the latter.
We have lived this. The people who built the personal computer — Wozniak, Kay, Engelbart — conceived it explicitly as a tool for individual empowerment. Not mediated by institutions. Not requiring permission. Not serving corporate or government interests. A device that put genuine computational power in individual hands for the first time in history. That ethos ran through the Homebrew Computer Club, through the Whole Earth Catalog, through the cypherpunks who understood that cryptography was the only reliable guarantor of individual freedom in a networked world and built the protocols accordingly.
Thirty years later we have the AI equivalent of that moment. A technology of extraordinary capability, currently being argued over by the same forces that have argued over every transformative tool in human history: the people who want to use it to augment individual human capability, and the people who want to use it to extend institutional control. The open source models, the local inference movement, the developers running Ollama on personal hardware and building tools that answer questions without surveillance or monetisation — these are the direct descendants of the Homebrew Computer Club. The same battle. The same stakes. The same forces on each side.
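What that looks like in practice is almost anticlimactic. The sketch below queries an Ollama instance on your own machine over its local HTTP API — no account, no identity check, no third party in the loop. It assumes Ollama is running on its default port with a model such as "llama3" already pulled; both the port and the model name are assumptions here, not requirements of the argument.

```python
# Minimal sketch of local inference via Ollama's HTTP API, using only
# the standard library. Assumes a local Ollama daemon on the default
# port (11434) and an already-pulled model ("llama3" is an assumption).
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(prompt: str, model: str = "llama3") -> dict:
    """Payload for Ollama's /api/generate endpoint (non-streaming)."""
    return {"model": model, "prompt": prompt, "stream": False}

def ask(prompt: str, model: str = "llama3") -> str:
    """Send the prompt to the local model and return its reply."""
    body = json.dumps(build_request(prompt, model)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Usage, with a local Ollama daemon running:
#   print(ask("Why does encryption matter?"))
# The question, the answer, and the model weights never leave the machine.
```

That last comment is the whole point: there is no authentication event to bill, no behavioural data to harvest, and no identity gate to pass through.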
The human future will not be ended by AI becoming inhuman. The risk is far more mundane and far more immediate: AI as the latest and most powerful instrument of centralised control, its training data monopolised by a handful of companies, its outputs shaped to serve institutional interests, access gated behind identity verification and compliance frameworks designed by the same people building the identity infrastructure. That is the actual dystopia. Not HAL 9000 deciding to act against humanity. Systems that are already acting against humanity, at AI scale.
Evolution favours adaptation. Not cautious, mediated, institutionally approved adaptation — the real kind, where individuals and communities get their hands on the tool and find what it can do. The civilisation that learns to use the tool outcompetes the one that bans it. That has been true of writing, of printing, of electricity, of computing, and it will be true of AI. The question is not whether the adaptation happens. It is whether it happens freely or whether it gets captured and turned into a subscription.
What We Stand For
uRadical is a small agency. We build distributed systems and real-time platforms. We are not a political organisation. We are something simpler and, we think, more important: people who grew up with technology, who understand its history, who have built careers on the direct experience that open tools in individual hands produce better outcomes than closed tools in institutional hands.
We understand what is at stake commercially because we live it. The alternative to an open internet is not a safer internet. It is a controlled internet, operating through back-room deals and government contracts, where independent builders and small agencies are cut out and watching from the sidelines as a handful of companies with the right connections take everything. We have watched those companies repeatedly, consistently, across every jurisdiction and every decade, demonstrate that they do not fight for users. They extract from users. They surveil users. They lobby governments to construct the regulatory frameworks that make the extraction and surveillance mandatory while destroying the conditions under which alternatives can emerge.
That is the direction of travel of every major piece of digital legislation currently moving through the UK. It is not conspiracy. It is documented commercial capture of public policy by people with the money and connections to make it happen, and the cynicism to use child protection as the political mechanism that makes it unchallengeable.
We believe knowledge should be free. Not as a slogan — as an engineering principle. Systems that hoard knowledge are less capable than systems that share it. Populations that can access information make better decisions than populations that cannot. The entire history of human progress is a history of removing barriers to knowledge and watching capability compound on the other side.
We believe encryption is a right, not a privilege granted by governments who reserve the power to revoke it when convenient.
We believe an AI running locally on personal hardware, answering questions without surveillance, is more aligned with human flourishing than a biometric identity gate on a platform built by people whose names appear in Jeffrey Epstein's phone book.
We believe Aaron Swartz was right. We believe the people who drove him to his death understood exactly what they were doing and why. We believe the same logic is operating now, with more sophisticated legal machinery and considerably more money behind it, and we believe the obligation is the same one he identified in 2008: make the knowledge, share it freely, build the tools, help each other learn, route around the systems that exist to restrict rather than enable.
The internet was designed to do exactly that.
So were we.
References
1. Aaron Swartz, Guerrilla Open Access Manifesto, 2008
2. Byline Times, The Online Safety Act is Forcing Brits to Hand Over Personal Data to Unregulated Overseas Corporations, July 2025
3. The Nerve, If Keir Starmer's Digital ID is the Question, Tony Blair is the Answer, December 2025
4. The Conservative Woman, Tony Blair and the Low-Profile Billionaire Behind His Push for Digital ID, October 2025
5. Wikipedia, Connections of Jeffrey Epstein (citing National Archives documents on Mandelson/Blair/Epstein meeting)
6. Malwarebytes, Does the UK Really Want to Ban VPNs? And Can It Be Done?, March 2026
7. Tom's Guide, UK Government's VPN Ban Proposal Slammed as a Draconian Crackdown, February 2026
8. Deloitte UK, Determining the Cost of Online Safety, December 2025
9. Medium/@rviragh, Ofcom and the Online Safety Act: Funding and Contracts, February 2025
10. The Health Foundation, Budget 2025 Preview: Cost Pressures Leave the NHS's Funding Path Precarious
11. The King's Fund, NHS: Key Facts and Figures, 2025
12. Public Accounts Committee, Billions Spent to Tackle NHS Waiting Lists but Service Recovery Targets Still Missed, 2025
13. Index on Censorship, Free Expression Concerns Over Online Safety Act's Age Verification Requirements, July 2025
14. State of Surveillance, AU10TIX: The Israeli Intelligence-Linked Firm Verifying X Users, January 2026
15. Electronic Frontier Foundation, Hack of Age Verification Company Shows Privacy Danger of Social Media Laws, June 2024
16. 404 Media, Major Identity Verification Firm AU10TIX Exposes User Data, June 2024
17. Kevin Kelly, What Technology Wants, Viking Press, 2010