$252B

Global corporate AI investment in 2024 — 13x higher than a decade ago

Stanford HAI - 2025 AI Index
37.4%

US workers now using GenAI at work — adoption already ahead of PCs at the same stage

St. Louis Fed - Nov 2025
~90%

Cost reduction available running open-source models vs closed proprietary APIs

California Management Review - Jan 2026

Every software technology cycle has had an exploitation window — a period where inventors extract margin before commoditisation arrives. The window for centralised AI lasted less than three years. The commentators telling you this is all hype aren't being cautious. They're getting in the way.

The Window Is Closing Faster Than Anyone Expected

Every major software technology cycle follows the same arc. A genuine capability emerges. Early movers extract significant margin while barriers to replication are high. Then the window closes — open-source alternatives appear, the underlying capability commoditises, and pricing power collapses. The survivors aren't the inventors. They're the ones who used the window to build something at the application layer that couldn't simply be forked.

What the current AI debate almost universally misses is that this window has been shrinking, measurably, with each successive cycle.

The Shrinking Exploitation Window
Approximate years of meaningful pricing power before competitive commoditisation, per major software technology cycle. Each cycle is shorter than the last.
Source: uRadical synthesis based on documented market history

This pattern is structural, not coincidental. Each new layer of commoditised technology becomes the foundation on which the next disruption moves faster. The baseline rises. The friction of replication falls. The window shrinks. Anyone betting on a 15-year AI moat is not analysing technology history — they're extrapolating from a world that no longer exists.

For software, monetising invention is over.
The replication cost has hit practical zero. AI-assisted development has compressed time-to-replication further still. No legal or technical moat outruns this. Durable value now lives entirely at the application layer.


Open Source Has Already Proved It

In January 2025, DeepSeek released R1 — open-source, MIT licensed, trained at a claimed fraction of frontier model costs, performing at or near OpenAI's best offerings.13 Nvidia recorded a single-day market cap loss that made global headlines. Not because the technology failed. Because it worked so well, so cheaply, that the assumption of sustained centralised pricing power looked structurally unsound overnight.

By mid-2025, open-source models offered 70–90% cost reductions versus closed proprietary APIs across a wide range of tasks.3 The California Management Review drew directly on Christensen's disruption framework: open-source LLMs are following the classic pathway — entering on cost advantage, improving rapidly through community innovation, and offering data sovereignty that closed models structurally cannot match.3

Closed vs Open-Source AI — Cost Per Million Output Tokens
Note logarithmic scale. Open-source deployment on owned hardware reduces marginal inference cost to near zero once hardware is amortised. This is the Linux dynamic repeating: not better on day one — cheaper enough, good enough, and infinitely forkable.
Source: California Management Review, Jan 2026
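The "near-zero marginal cost once hardware is amortised" claim reduces to simple break-even arithmetic. A minimal sketch in Python — all prices and volumes below are illustrative assumptions for the sketch, not figures quoted from the California Management Review source:

```python
# Back-of-envelope: closed API vs self-hosted open-source inference cost.
# Every figure here is an illustrative assumption, not a quoted price.

API_PRICE_PER_M_TOKENS = 10.00   # assumed closed-API price, $ per 1M output tokens
HARDWARE_COST = 12_000.00        # assumed one-off cost of an inference server, $
POWER_COST_PER_M_TOKENS = 0.50   # assumed marginal electricity cost, $ per 1M tokens

def self_hosted_cost_per_m(tokens_millions: float) -> float:
    """Average $ per 1M tokens: amortised hardware plus marginal power."""
    return HARDWARE_COST / tokens_millions + POWER_COST_PER_M_TOKENS

def breakeven_m_tokens() -> float:
    """Volume (millions of tokens) at which self-hosting matches the API price."""
    return HARDWARE_COST / (API_PRICE_PER_M_TOKENS - POWER_COST_PER_M_TOKENS)

if __name__ == "__main__":
    print(f"Break-even at ~{breakeven_m_tokens():,.0f}M tokens")
    for volume in (1_000, 10_000, 100_000):  # millions of tokens processed
        print(f"{volume:>7,}M tokens: API ${API_PRICE_PER_M_TOKENS:.2f}/M "
              f"vs self-hosted ${self_hosted_cost_per_m(volume):.2f}/M")
```

Under these assumptions the average self-hosted cost falls toward the marginal power cost as volume grows — the "near zero once amortised" dynamic the chart describes.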

OpenAI's concession on open source in August 2025 is the Microsoft moment. The question now is whether they can find their Azure — a services and application business built on top of the commoditised capability — before the window closes entirely.

The Linux parallel is the right one and it runs deeper than most analysts acknowledge. Linux won. The kernel became ubiquitous infrastructure — free, community-maintained, and the actual profit surface moved entirely to the layers above and below it. Nobody charges for Linux. Everyone charges for what runs on or around Linux. The model weights are already out. You cannot un-open-source them. Llama, Mistral, Qwen, Gemma — these are already the kernel. The door is coming off its hinges.

The hardware layer is where that parallel becomes instructive about where durable profit actually lands. Red Hat took the money from Linux. IBM took Red Hat. But the more structurally interesting winners were the hardware manufacturers who built the machines Linux ran on. The same dynamic is forming now. Apple's M-series chips are not incidentally suited to local inference — the unified memory architecture directly addresses the memory bandwidth constraint that kills LLM performance on conventional hardware. Apple largely solved that problem as a side effect of building chips for their own reasons. Combined with a billion-plus device distribution network, an installed user base that already trusts them with sensitive data, and a credible privacy story — on-device inference delivered without the user ever thinking about it — Apple is quietly one of the best-positioned companies in this entire transition. Not because they're building frontier models. Because they're building the substrate those models run on, in the hands of the people who will use them.


The Narrative Machine Is Running Hot

In April 2026, OpenAI published a 13-page paper titled "Industrial Policy for the Intelligence Age," calling for a rethink of everything from the tax system to the working week to prepare for superintelligence. Sam Altman compared it to the New Deal in a subsequent interview. Critics called it what it is: comms work providing cover for regulatory nihilism.

Read the timing. The paper landed the same day The New Yorker published the results of an 18-month investigation into OpenAI that raised direct questions about Altman's trustworthiness on AI safety. The juxtaposition was not accidental — someone with access timed the publication deliberately. The policy paper was the counter-programme.

OpenAI is the most interested party in how the regulatory conversation turns out. The proposals it advances shape an environment in which it operates with significant freedom under constraints it has largely helped define. A former Senate AI policy advisor noted that the "share prosperity broadly, mitigate risks, democratize access" framing has been the foundation of every major AI governance conversation since ChatGPT launched in 2022. It was said in Senate committee rooms in 2023. It was in OECD framework documents before that. None of it is new. The problem is not the ideas — it is the gap between naming solutions and building mechanisms to achieve them, combined with the fact that OpenAI has simultaneously been lobbying against the very safety legislation it now claims to support.

The for-profit conversion signals the same thing. It was not a governance decision — it was a funding necessity: the capital structure required for a viable IPO does not exist inside a non-profit. The "New Deal" paper is pre-IPO narrative construction dressed as policy thinking.

Marc Andreessen claiming AGI has arrived is the investor-side version of the same problem. Andreessen has no technical standing to make that call. AGI has a contested definition that working researchers cannot agree on after decades of serious engagement with the question. Andreessen does not engage with that literature and does not need to — the financial press will not push back, and the claim serves the portfolio. His entire post-GPT-3 thesis was built on "AI is the last platform shift and we own the infrastructure layer." That was a defensible bet in 2021. The problem is he has been adding to that position — financially, reputationally, politically — at every stage since. He cannot afford to be wrong. Declaring AGI is not analysis. It is collateral protection.

The political dimension makes it materially worse. Operating inside the current administration's orbit means the AGI claim is no longer just investor narrative — it is policy-adjacent. If enough people with enough power accept that framing, regulatory frameworks get shaped around it, and the companies that shaped the premise benefit from the resulting environment. That is what regulatory nihilism looks like in practice — not absence of rules, but rules written by the people who needed them to be permissive.


The Hype Cycle Was the Real Damage

The bubble debate — is AI investment rational — is a legitimate question but it misses the more immediate damage. The hype cycle did not just inflate valuations. It actively poisoned the adoption curve.

Ordinary people and businesses heard "AGI is coming," "your job is gone," "superintelligence in 18 months" — repeatedly, from people with megaphones and apparent authority — and the rational response to that level of declared uncertainty is to wait and see. To not commit. To treat the entire category with suspicion. That scepticism is now load-bearing for the technology's critics. Every legitimate use case gets tarred with the same brush as the circus around it.

The organic path would have been healthier and faster. Researchers and engineers quietly shipping things that worked. Capability improving incrementally. Trust building through demonstrated usefulness rather than declared civilisational importance. That is how databases matured. How the web matured. How containerisation matured. Nobody needed to claim Docker was a civilisational inflection point to get adoption — it solved a problem people actually had, and word spread.

Instead, the people who wrote the cheques and cultivated the access and manufactured the consensus — without building the substrate, without running the systems, without shipping a line of production code — ran the SoftBank playbook. Flood the zone with capital. Manufacture urgency. Corner the narrative. Extract before the cycle turns. It works for them financially while leaving wreckage behind.

The wreckage: real tools that genuinely help people are now culturally and politically contaminated. Enterprises that should be quietly integrating useful inference are running AI ethics committees that exist mainly to manage reputational exposure from the hype. Developers building real things have to disclaim and caveat constantly. And the workforce fear — amplified relentlessly because fear drives engagement — created genuine anxiety in people who had no tools to evaluate the claims. That is not a side effect. That is what happens when you weaponise a technology's potential for investor narrative purposes, at scale, over years.

The tools are valid. They were always going to find their way into everyday life. The hype merchants made that journey longer and harder than it needed to be. Their obsessive self-serving was their own undoing — and an unnecessary tax on everyone else.


The Bubble Debate Is a Distraction

Yes, the financial risk is real. AI investment now represents a larger share of the economy than internet-related investment did at the dot-com peak.12 A correction is possible. But conflating financial bubble with technically invalid is the central error. The dot-com crash wiped out 78% of NASDAQ's value.4 It did not wipe out the internet. Amazon fell 90% and became one of the most valuable companies in history.

"Great technologies and great investments are not always the same thing, especially at euphoric valuations. The dot-com crash didn't end the internet. It ended the companies that had no reason to exist."

Edelweiss Capital - Dot-Com Bubble vs. the AI Boom, 2025
The Dot-com Arc — Financial Crash, Technology Survived
NASDAQ Composite indexed with Amazon trajectory, 1995–2010. The companies with genuine application-layer value survived and compounded for decades.
Source: Historical NASDAQ Composite data

If an AI financial correction comes, the open-source models, the trained engineers, the accumulated institutional knowledge — none of that disappears. The only question is who is positioned to use it.


The Adoption Data Is Not Ambiguous

GenAI work adoption reached 37.4% of US workers by late 2025 — already running ahead of PC adoption at the equivalent point in the 1980s.2 Workers using it save an average of 5.4% of work hours weekly; frequent users save over nine hours per week. Industries with higher AI time savings are recording 2.7 percentage points higher productivity growth versus their pre-pandemic trend.2
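The headline figures translate into straightforward arithmetic. A sketch, assuming a 40-hour week and 48 working weeks per year (both assumptions of this sketch; the survey does not fix those denominators):

```python
# Translating the St. Louis Fed adoption numbers into hours. The 40-hour
# week and 48 working weeks are assumptions of this sketch, not survey values.

ADOPTION_RATE = 0.374     # share of US workers using GenAI at work (from the article)
AVG_SAVINGS_PCT = 0.054   # average share of weekly hours saved by users (from the article)
HOURS_PER_WEEK = 40       # assumption
WEEKS_PER_YEAR = 48       # assumption

hours_saved_per_user_week = AVG_SAVINGS_PCT * HOURS_PER_WEEK
annual_hours_per_100_workers = (100 * ADOPTION_RATE
                                * hours_saved_per_user_week * WEEKS_PER_YEAR)

print(f"Average user saves ~{hours_saved_per_user_week:.1f} h/week")
print(f"Per 100 workers (at {ADOPTION_RATE:.1%} adoption): "
      f"~{annual_hours_per_100_workers:,.0f} h/year")
```

Under these assumptions the average user recovers a little over two hours a week, and a 100-person organisation at current adoption levels recovers close to four thousand hours a year — before counting the frequent users saving nine-plus hours each.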

AI Adoption vs Historical Technology Diffusion
Percentage of workers using each technology for work, from mass-market launch. Generative AI is already running ahead of both PCs and the internet at the same point in their diffusion curves.
Source: St. Louis Fed Real-Time Population Survey, Nov 2025

The macro GDP data is lagging, as it always does during foundational technology transitions. Robert Solow named this in 1987 — "you can see the computer age everywhere except in the productivity statistics." The PC productivity revolution took another decade to appear in GDP data after he said that. The measurement problem is not the same as the capability problem. Every serious economist knows this. Most AI sceptics ignore it, because it removes their most cited data point.

The companies building institutional AI knowledge now — proprietary training data, fine-tuned models, integrated workflows, upskilled engineers — are compounding an advantage that won't appear in any quarterly report until it's too late for competitors to close the gap.


What This Means in Practice

The capability question has been answered. You are not evaluating whether AI is real — that debate is over. You are evaluating how quickly your organisation can move from the invention layer, where the moat is gone, to the application layer, where the moat is built on domain knowledge, customer understanding, and accumulated workflow intelligence that no open-source release can replicate overnight.

The profit surface in the next decade is clear: hardware that runs inference locally — Apple and whoever builds the next generation of edge silicon — and focused vertical applications with domain-specific training. Not general-purpose chat interfaces competing on benchmark scores. A solicitor's tool trained on case law. A diagnostics assistant trained on clinical guidelines. A safety compliance platform that understands IEC 62443 and PUWER natively, not generically. The model is a component. The domain knowledge, the workflow integration, the trust built with a specific user base — that is the product. The API middlemen who built businesses on being the only door to capable models are watching that door come off its hinges.

We know this from building it, not just analysing it. Music Bingo Live — a real-time venue entertainment platform developed under the uRadical umbrella — would not have been financially viable without AI-assisted development. The platform required simultaneous depth across audio encoding and DRM, a real-time multiplayer sync engine, a venue-facing dashboard, music licensing integration, and a DJ-facing toolchain. Without AI, covering that surface area would have required a team too large and a timeline too long for the revenue model to work. The opportunity had a window. We built it with a fraction of the headcount that window would previously have demanded. That is not a productivity statistic. It is a product that exists because of a capability shift that makes certain things economically possible that simply weren't before.

The real question for your organisation

It isn't "is AI overhyped?" It's: what would you build if the engineering headcount constraint was no longer the binding constraint? The answer to that question is where your application-layer moat starts. The organisations asking it now are 24 months ahead of the ones still reading bubble commentary — or waiting for Sam Altman's New Deal to arrive.

Global Corporate AI Investment — The Scale of the Bet
Global private AI investment 2014–2024. 13x growth in a decade. The window between invention and commoditisation is shrinking.
Source: Stanford HAI, 2025 AI Index Report

The organisations that do this well in the next 24 months will be genuinely difficult to compete with in 2028. Not because they will own a model. Because they will own workflows, data, and institutional judgment that took two years to build.


On Ed Zitron

Ed Zitron runs a newsletter called "Where's Your Ed At." His background is public relations. He has built a large, profitable media operation by being the loudest and most consistent voice arguing that AI is an overhyped fraud perpetuated by tech industry grifters.

His PR background explains both his strengths and his hard limits. He is genuinely skilled at identifying narrative mechanics — how hype cycles are constructed, the incentive structures that produce breathless coverage, the pattern of tech industry promises that don't land. These are real observations worth taking seriously on their own terms.

What a PR background does not qualify you to assess: whether a specific agentic coding workflow changes what a team of five engineers can ship in a sprint. Whether running a fine-tuned open-source model on your own infrastructure changes your cost structure. Whether the exploitation window for centralised AI is closing faster than the investment cycle assumes. These require being inside the work — running systems, measuring outputs, watching what actually changes when you remove the bottleneck of mechanical implementation.

The deeper problem is structural. Zitron's audience wants confirmation that nothing needs to change. That's what the product consistently delivers. The mechanism is identical to what he critiques in AI coverage — an engagement-optimised content operation calibrated to audience expectation rather than accurate prediction. His thesis is also conveniently unfalsifiable: if AI succeeds, it's a bubble that will pop later; if it stumbles, he was right. This is not analysis. It is a position designed to never be threatened by evidence.

uRadical builds distributed systems, real-time platforms, and AI-integrated architecture for teams who have stopped debating whether AI is real and started asking what to build with it.

uradical.io

If you're a CTO, VP Engineering, or founder sitting on a product opportunity that was too expensive to build eighteen months ago — that constraint may already be gone. The window to build your application-layer advantage is open now. It will not stay open indefinitely.

References

  1. Stanford HAI, "2025 AI Index Report," 2025. https://hai.stanford.edu/ai-index/2025-ai-index-report
  2. Bick, Blandin, Deming, "The State of Generative AI Adoption in 2025," Federal Reserve Bank of St. Louis, November 2025. https://www.stlouisfed.org
  3. Li, C., "The Coming Disruption: How Open-Source AI Will Challenge Closed-Model Giants," California Management Review, January 2026. https://cmr.berkeley.edu
  4. Historical NASDAQ Composite data. Peak 5,048.62 on March 10, 2000; trough 1,114.11 on October 9, 2002.
  5. Quartz, "From Pets.com to Amazon: Companies That Died and Survived the Dot-Com Bubble," March 2025.
  6. Barron's, "Burning Up," March 2000.
  7. Fortune, "AI Dot-Com Bubble Parallels History Explained," September 2025. https://fortune.com/2025/09/10/ai-dot-com-bubble-parallels-history-explained/
  8. Janus Henderson Investors, "AI vs. the Dotcom Bubble," October 2025. https://www.janushenderson.com/en-gb/investor/article/ai-vs-the-dotcom-bubble/
  9. Al Jazeera, "IMF Says AI Investment Bubble Could Burst," October 2025. https://www.aljazeera.com/economy/2025/10/22/imf-says-ai-investment-bubble-could-burst
  10. Dallas Federal Reserve, "Advances in AI Will Boost Productivity," June 2025. https://www.dallasfed.org/research/economics/2025/0612
  11. Penn Wharton Budget Model, "The Projected Impact of Generative AI on Future Productivity Growth," September 2025. https://budgetmodel.wharton.upenn.edu/issues/2025/9/4/generative-ai-productivity
  12. Penn Wharton Budget Model, AI investment as share of GDP vs dot-com era comparison, September 2025.
  13. CNBC, "DeepSeek's Breakthrough Emboldens Open-Source AI Models," February 2025. https://www.cnbc.com/2025/02/03/deepseeks-breakthrough-emboldens-open-source-ai-models.html
  14. FabriXAI, "OpenAI's Open-Source Revolution," 2025. Sam Altman "wrong side of history" from Reddit Q&A, August 2025. https://fabrixai.com/en/blog/openai-gpt-oss-vs-llama-vs-deepseek