The AI era doesn't need more process, more frameworks, or more wellness programmes. It needs smaller teams of people who genuinely care — and the honesty to stop pretending anything else works.

The Split That Was Manufactured

Pick up Paul Graham's Hackers and Painters and you'll find a world that doesn't exist inside most modern software organisations. His central argument — that great hackers are more like painters than engineers — was a statement of observable fact about the founding generation of computing. The people who built the languages, the protocols, the operating systems didn't experience a tension between technical rigour and humanistic sensibility. They were the same impulse.

That world got systematically dismantled. Not by accident, and not because it was unproductive — the evidence runs entirely the other way. It got dismantled because separated roles are easier to hire for, manage, measure, and blame. The organisational logic of Taylorism, developed to optimise factory floors, leaked into knowledge work and found a willing host in the enterprise software industry.

Scientific management wanted clean handoffs between thinking roles and doing roles. Software absorbed this as: designers think up the interface; engineers implement it. The problem is that in software, the medium and the design are inseparable. You cannot fully design a thing you don't know how to build. You cannot build well without design intuition.

What gets destroyed is the tight feedback loop. When the person feeling the design problem is the same person who can immediately probe whether a solution is feasible, you get iteration at thought-speed. Separate the two and every loop now requires a meeting, a ticket, a handoff document, and a product manager who understands neither discipline deeply enough to translate between them.

The counter-evidence is everywhere: Wozniak. Carmack. Hopper. DHH's design opinions baked into Rails itself. The tools that defined the industry were built by people who refused the separation. It's an administrative convenience that got mistaken for an engineering truth.

Fig. 01 — Gallup State of the Global Workplace
Employee Engagement vs. Corporate Wellness Spend
  • 23% — Global employee engagement, 2023, the lowest in over a decade
  • $66B — Global corporate wellness market size in 2023
  • $340B — Annual cost of disengagement to the US economy alone
  • Attrition rate at low vs. high psychological safety orgs

The inverse correlation no management consultant will put in their proposal deck.

Sources: Gallup State of the Global Workplace (2012–2023); Global Wellness Institute Corporate Wellness Market Reports

McMindfulness and the Pressure Valve Economy

Ron Purser's McMindfulness named the mechanism, not just the symptoms. The corporate wellness apparatus — mindfulness programmes, psychological safety workshops, resilience training — isn't failing to do what it claims. It's succeeding at what it actually does, which is individualising structural problems.

You're stressed because of workload, surveillance, meaningless work, and no agency over your conditions. The McMindfulness response is a breathing exercise. The structural critique never arrives because the intervention is aimed entirely at the individual nervous system rather than the organisation producing the damage. It's a pressure valve that protects the machine from ever having to change.

The Goldman Test

Goldman Sachs has a mindfulness programme. That is the tell: anything that genuinely threatened the power structure would never be given a budget line and a lunchtime session. The fact of corporate adoption should end the conversation about whether these tools are radical.

Psychological safety follows the same logic. As Edmondson originally framed it, it was descriptive — she was observing what high-performing teams already had organically. The consultancy industry took a description and sold it back as a prescription.

Fig. 02 — Structured Disengagement
Management Consulting Revenue Growth vs. Knowledge Worker Engagement

More advice purchased. Less engagement delivered.

Sources: IBISWorld Management Consulting Market Report; Gallup Workplace Analytics 2013–2023

The T-Rex Taylorists: Perfectly Adapted to a World About to End

The Taylorist enterprise didn't get here by being stupid. The logic of decomposing work into auditable roles, of separating thinking from doing, of managing through measurement — it worked extraordinarily well for the conditions it was designed for. The T-Rex was the dominant organism for good reason.

The problem is it's scaling up exactly that logic at the precise moment it becomes most dangerous. AI, from inside the Taylorist worldview, looks like vindication. More decomposition. More measurement. More process automation. The asteroid looks like a comet carrying gifts.

The thing AI cannot do is care. It can produce. It can iterate. It can synthesise at scale. But the reason a team with shared stakes builds something better than a process with clean handoffs is that caring produces decisions no specification captures — a thousand small judgements that go the right way because someone was holding the whole thing in their head and wanted it to be good. That's not a prompt.

The Asteroid: Self-Hosted, Small, and Dangerously Human

The narrative around AI has been almost entirely captured by the cloud AI framing — and that framing suits both vendors and Taylorists perfectly. Cloud AI maps straight onto existing organisational logic: centralised, metered, auditable, procurement-friendly. It slots into the enterprise stack without disturbing anything, which is exactly why it won't be the thing that disrupts them.

The Actual Threat Vector

A small team with a local model, their own data, their own fine-tuning: no usage costs, no terms of service changes overnight, no vendor pricing decisions on their roadmap, no dependency on someone else's infrastructure or political alignment. The economics and the autonomy combine into something the enterprise procurement model has no answer for.
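The economic claim here can be made concrete with a back-of-envelope comparison. Every number below is an illustrative assumption — workload, API pricing, hardware cost — not data from the source; the point is only the shape of the curve: metered cloud spend grows with usage forever, while local inference is dominated by a one-off outlay.

```python
# Back-of-envelope sketch: metered cloud inference vs. self-hosted local
# inference. All figures are illustrative assumptions, not real prices.

def cloud_cost(tokens_per_month: float, price_per_million: float, months: int) -> float:
    """Total spend on a metered API: per-token pricing, every month, forever."""
    return tokens_per_month / 1_000_000 * price_per_million * months

def local_cost(hardware: float, power_per_month: float, months: int) -> float:
    """One-off hardware purchase plus running costs; no per-token metering."""
    return hardware + power_per_month * months

# Assumed workload: 500M tokens/month at $2 per million tokens, over 2 years.
cloud = cloud_cost(500_000_000, 2.0, 24)  # $24,000 — and it keeps accruing
local = local_cost(10_000, 50, 24)        # $11,200 — mostly paid up front
```

The crossover point depends entirely on the assumed numbers, but the structural difference doesn't: the local team's marginal cost per token approaches zero, and no vendor can reprice their roadmap.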

The mammals were already there before the asteroid — small, warm-blooded, present the whole time. What looked like a disadvantage, the inability to specialise into a single enormous niche, becomes the survival trait when conditions change.

The Hiring Pantomime: Optimising for the Wrong Person

This isn't an argument for brutal high-performance culture where humanity is the casualty. The tech bro version of "hire only the best and cut everyone else" produces the same fragmentation as the Taylorist org, just faster and louder. Both eliminate the thing that actually builds great products: people who hold the whole system in their head and care whether it's architecturally honest.

The argument is more specific. Some people have a quality that's genuinely hard to screen for — an architectural sensibility. When they encounter a problem they instinctively build a mental model of the whole system before touching any part of it. They hold complexity without needing to decompose it prematurely. They get uncomfortable with solutions that are locally correct but structurally wrong.

What the Pantomime Selects For

CVs reward legibility over depth. Technical interviews reward performance under artificial conditions. Culture fit rounds reward people who reflect the existing team back at itself. The architecture thinker is often quiet in interview, has an unusual career shape, gets bored by the problems you've set them, and says something slightly inconvenient about your system design. They fail the pantomime consistently — and the org files them as "not a fit."

The Taylorist Hire
  • Fills the role specification
  • Passes the process
  • Manages up effectively
  • Stays in their lane
  • Produces auditable output
  • Doesn't cause friction
The Tech Bro Hire
  • Fast and loud
  • Pedigree signals
  • High output velocity
  • Competitive not collaborative
  • Leetcode-optimised
  • Available and hungry
What Actually Works
  • Sees the whole system
  • Cares if it's architecturally honest
  • Uncomfortable with local fixes
  • Asks inconvenient questions
  • Unusual career shape
  • Has something special — and knows it
Fig. 03 — Team Size vs. Impact
Team Size at Launch for High-Impact Software Products

The products that defined categories were not built by large, well-structured teams.

Sources: Company founding records, published interviews, and retrospective accounts

No Passengers: The Economics of the Small Team

AI compression is making the small team the default unit of software delivery. What previously required fifteen people to build and maintain can now be done by five — if those five are the right five. That's not a marginal efficiency gain. It changes the entire tolerance for passengers.

In a fifty-person org a passenger is a drag on morale and a waste of budget. In a six-person team a passenger is an existential threat. They consume coordination overhead the team can't afford. They dilute the shared ownership that makes the whole thing work.

This also reframes retention entirely. The cost of losing a senior engineer is consistently quoted at 150–200% of annual salary when you factor in recruitment, onboarding, and productivity loss. That's what's visible. What's invisible is the institutional knowledge, the system understanding, the product intuition that walks out with them.
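The visible half of that cost is simple arithmetic. A minimal sketch, using the 150–200% multiplier quoted above — the salary figure is an assumption for illustration, and the invisible half (institutional knowledge, system understanding) is precisely what this calculation cannot capture:

```python
# Visible cost of replacing a senior engineer, per the commonly quoted
# 150-200% of annual salary. Salary figures are illustrative assumptions.

def replacement_cost(salary: float, multiplier: float = 1.75) -> float:
    """Recruitment + onboarding + productivity loss, as a salary multiple.
    Default multiplier is the midpoint of the quoted 1.5-2.0 range."""
    return salary * multiplier

# A $150k senior engineer at the midpoint multiplier:
visible_cost = replacement_cost(150_000)  # $262,500 — before the invisible losses
```

Everything that formula omits — the thousand small judgements the leaver would have got right — is the part that walks out the door.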

Fig. 04 — Retention Economics
Cost of Replacing a Senior Engineer vs. Annual Salary

Replacement cost only measures what you pay. It doesn't measure what walks out the door.

Sources: SHRM Talent Acquisition Benchmarking Report; LinkedIn Talent Insights; Gallup Cost of Replacing an Employee

The Threat From Inside the Building

The fast follower from outside is a known risk. The fast follower built by your own former people — who know your weaknesses, your customers, your complaints, and the fixes you could never bring yourself to ship — is a different category of threat entirely.

The people most capable of seeing what's wrong are exactly the people most capable of doing something about it. They've been watching the dysfunction. They know where the value is. They've hit the ceiling of what's possible inside the structure. AI lowers the activation energy for that dramatically. What previously required a team of twenty to bootstrap now requires four people and clear thinking.

The org meanwhile measures attrition as an HR metric rather than as competitive intelligence leaking out the door. It runs exit interviews that produce nothing actionable. It files them somewhere and books another workshop.

What Actually Works

None of this requires a framework. It doesn't require a workshop, a wellbeing budget, a diversity initiative, or values on a wall. It requires something harder: the honesty to build the conditions where people who genuinely care can do their best work — and the discipline to stop filling seats with people who don't.

That means hiring for the architectural sensibility even when it makes interviews uncomfortable. Building teams small enough that shared ownership is possible and the whole thing fits in someone's head. Removing the organisational friction that turns capable people into executors of someone else's decomposed specification.

The organisations that compound through the AI transition are not the ones that deployed AI most aggressively into their existing process structure. They're the ones fluid enough that the technology amplifies genuine human judgement rather than papering over the absence of it.

The Taylorists will call this disruption when it arrives. They'll commission a framework for adapting to it. They'll hire a consultancy. They'll run a workshop on psychological safety in the age of AI.

The asteroid doesn't negotiate.