The best management thinkers of the last four decades accidentally wrote the playbook for the AI-era small company. But only if you strip away the corporate scaffolding and read them for what actually matters: how to lead people who have to choose to be there.
In January 2025, a company called Anysphere had 20 employees. Their product, an AI code editor called Cursor, had just crossed $100 million in annual recurring revenue. By March, revenue had doubled. By May it hit $500 million — making Cursor the fastest-growing SaaS product in history. They did it with no marketing budget, fuelled entirely by word of mouth.[1] The team has since grown to 60 people. The company is valued at $29.3 billion.[2]
Cursor isn't an anomaly. It's a pattern. Midjourney reached $200 million ARR with 11 people. Bolt.new hit $40 million ARR in two months with a team of 15. Lovable reached $17 million ARR in two months, also with 15.[3]
These companies aren't succeeding despite being small. They're succeeding because of it. And their existence raises a question that the best management books of the last four decades never anticipated: if a team of 20 can build a $29 billion company, what exactly were those other hundreds of employees for?
The best management books were written for a world that assumed growing headcounts, layered hierarchies, and companies measured in hundreds or thousands of employees. That world is ending. What's replacing it is smaller, faster, and fundamentally different: founder-operators running tight teams, augmented by AI agents that can do the work that once required departments. Y Combinator CEO Garry Tan recently noted that small teams of 10 engineers are now delivering the output that previously required 50 to 100.[4]
But here's the thing — strip away the corporate scaffolding, and some of these authors nailed principles that matter more now than when they wrote them. Not despite the shift to smaller, AI-augmented teams. Because of it.
I've read and re-read these books over years. Good to Great, Built to Last, BE 2.0, The Great Game of Business, In Search of Excellence, High Output Management. Each shaped how I think about running teams and building companies. Each had lessons that held up across decades of change. The question is which lessons survive the biggest change any of them could have imagined — the emergence of human-AI teams where three people can do what thirty did before.
Grove Was Writing About You, Solo Founder With AI Agents
Andy Grove's High Output Management is the most resilient book on this list, and it's not close.
His core idea is managerial leverage: your output as a manager is the output of the organisation under you. That was written for Intel's middle managers overseeing teams of engineers. Today it describes a solo founder directing AI agents.
When you prompt an AI model to write code, draft documentation, analyse data, or generate content, you're exercising leverage in exactly the way Grove described. Your single decision — what to ask, how to frame it, what to verify — multiplies across every agent interaction. One person's judgment, amplified. Anthropic's own internal research shows their engineers are delegating increasingly complex work to AI with less oversight over time, with everyone becoming more "full-stack" as AI augments their core expertise into adjacent domains.[5]
Grove also insisted on measuring output, not activity. He didn't care how busy you looked. He cared what got shipped. That matters even more now because AI agents are infinitely "busy" — they'll generate output endlessly. The question is whether any of it is the right output. Research from Faros AI, analysing telemetry from over 10,000 developers across 1,255 teams, confirmed exactly this tension: developers using AI are writing more code and completing more tasks, but organisations are not seeing measurable improvement in delivery velocity or business outcomes.[6] More activity. Not necessarily more output. Grove would have seen through that immediately.
His concept of task-relevant maturity — adjusting your management style based on how experienced someone is with a specific task — also survives, but with a twist. AI agents have zero task-relevant maturity that persists between sessions. Every interaction starts from scratch. You can't build trust over time the way you can with a human colleague who learns and grows. That changes the management challenge from motivation and development to verification and context management. Grove didn't anticipate that, but his framework still holds if you adjust for it.
Stack Understood That People Won't Care If They Don't Share
Jack Stack's The Great Game of Business made one argument that most management books dance around: if people don't share in the winnings, they won't care about the game.
Open-book management wasn't a philosophy for Stack. It was survival. His people at SRC needed to understand the numbers because the company couldn't afford passengers. Everyone had to think like an owner because everyone's livelihood depended on it.
That's exactly where small, AI-augmented companies are heading. When the team is three people, there are no passengers. Everyone can see everything. The numbers aren't hidden behind layers of management and corporate reporting — they're visible to everyone who matters. Stack's model wasn't designed for this, but it fits perfectly.
The deeper point is about incentives. Stack gave his people equity. He tied bonuses to the numbers they could influence. He made the connection between effort and reward explicit and tangible.
This matters more now than ever, for a reason Stack couldn't have predicted: when AI can help a competent founder replicate in weeks a product that took a team of twenty a year to build, your competitive moat isn't your code. Vibe coding platforms like Lovable, Cursor, and Bolt.new now generate full-stack applications from natural language prompts — in some cases, functional clones of existing products in under a minute.[4] Emergent, a vibe-coding platform funded just seven months after launch at a $300 million valuation, specifically targets entrepreneurs building products without large engineering teams.[7] When development costs approach zero, the only moat that matters is people: their knowledge, their buy-in, their willingness to stay and fight when a competitor spins up a clone over a weekend.
And that means the "what's in it for me" conversation isn't a nice-to-have. It's existential.
The data backs this up. Payscale's 2024 research found that only 19% of organisations use profit sharing as an incentive — making it a genuine differentiator for retention.[8] Meanwhile, Carta's data shows that equity grants have fallen roughly 26% since the 2022 market correction, even as startups get leaner — the average Series A company in 2024 had 15.6 employees, down from 17.6 in 2021.[9] Companies are asking people to do more with less while offering less of the upside. That's a strategy built to fail.
If you're asking someone to work their arse off building a company alongside AI agents — to bring the judgment, creativity, and human relationships that no model can replicate — they need concrete assurances of tangible reward. Equity. Profit sharing. A real stake. Not pizza parties, not "we're a family," not vague promises about future opportunities.
That isn't selfish. It's human nature. To expect people to pour themselves into building something without a real share of the outcome is entitled management thinking that may have worked building pyramids in ancient Egypt. It will not work building 21st-century companies that depend on human-AI collaboration. The humans have to choose to be there. Make it worth their while.
Collins Got the People Right and the Structure Wrong
Good to Great has two ideas that survive intact and several that don't.
The first survivor: "Get the right people on the bus, then figure out where to drive it." When your bus has three seats, this isn't just important — it's everything. One wrong hire in a company of three isn't a management challenge; it's an existential threat.
Decades of research support this. J. Richard Hackman, Harvard's professor of Social and Organisational Psychology, demonstrated that as group size increases, the number of interpersonal links explodes — a team of six has 15 links to manage; a team of twelve has 66.[10] Bezos encoded this into Amazon's culture with the two-pizza rule. Robin Dunbar's research found that our capacity for intimate working relationships maxes out at roughly five people.[11] Collins was right that who matters more than what, but he was studying companies where a bad hire in the wrong division might go unnoticed for years. In a three-person company, you'll know by Tuesday.
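The link explosion is simple combinatorics: every pair of teammates is one relationship to maintain, so links grow with the square of headcount while people grow linearly. A few lines make the curve concrete:

```python
def interpersonal_links(n: int) -> int:
    """Pairwise relationships in a team of n people: n choose 2."""
    return n * (n - 1) // 2

for size in (3, 6, 12, 60):
    print(size, interpersonal_links(size))
# 3 -> 3, 6 -> 15, 12 -> 66, 60 -> 1770
```

Doubling the team from six to twelve more than quadruples the relationships to maintain, which is why a three-person company (three links) can coordinate in a hallway conversation while a sixty-person one needs managers.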
The second survivor: Level 5 Leadership — the idea that the most effective leaders combine fierce professional will with personal humility. Quiet determination over charisma. This maps perfectly onto the small technical founder. No corporate speak. No management buzzwords. No empty sloganeering or performative leadership theatre. Just straight talk and competence.
This isn't soft advice. In a world where AI can generate any amount of polished corporate communication on demand, the ability to speak plainly and honestly becomes a genuine differentiator. People can smell corporate speak, and they always could. The difference now is that they have options. Startup turnover remains at 57% annually — triple the US average — even though turnover fell 31% in 2024 as salaries caught up to market.[9] People leave when they don't trust leadership. Authenticity retains them.
What doesn't survive is the implicit assumption running through both Good to Great and Built to Last — that you're building a large, enduring institution with layers of management, cultural rituals at scale, and longevity measured in decades. The flywheel effect, the hedgehog concept, the BHAGs — these were frameworks for companies with hundreds of people and long time horizons. When the competitive landscape can shift in months and your entire team fits in a room, you need speed and adaptability, not institutional momentum.
Collins' companies were also studied in an era where scale itself was a moat. You couldn't easily replicate what a large, well-run company had built. That moat is eroding fast. AI software development trends now point to internal "software factories" and one-person AI companies shrinking the trillion-dollar SaaS market, with multi-agent clusters turning a plain-English spec into runnable software overnight.[12]
The E-Myth Insight Has Been Absorbed by Machines
Michael Gerber's core insight in The E-Myth Revisited was that most small businesses fail because the founder does everything themselves instead of building systems. Work on your business, not in it. Create processes. Document. Systematise. Then hire people into those systems.
The underlying problem Gerber identified hasn't gone away. Bureau of Labor Statistics data shows that 20.4% of businesses still fail in their first year and 49.4% by year five.[18] Commonly cited figures attribute 82% of small business failures to cash flow problems and 23% to management issues — both symptoms of the founder-as-bottleneck trap Gerber described decades ago.
The principle survives. The mechanism has changed completely.
You don't need to hire people into your systems anymore. You encode the systems directly. AI agents are the systematisation. The playbook, the standard operating procedure, the documented process — these become prompts, workflows, and agent configurations. Gerber was telling founders to stop being the bottleneck by building replicable processes. AI lets you do exactly that, without the overhead of hiring and training people to follow them.
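To make "the SOP becomes the agent configuration" concrete, here is a purely illustrative sketch. The process is data, and a loop executes it; `run_agent` is a hypothetical stand-in for whatever model API you actually use, and the triage steps are invented for the example:

```python
# Illustrative only: a documented process encoded as data, not a real product.
# run_agent() is a hypothetical placeholder for a real model API call.

SUPPORT_TRIAGE_SOP = [
    "Summarise the customer's issue in one sentence.",
    "Classify severity: low, medium, or high.",
    "Draft a first reply, flagging anything that needs human review.",
]

def run_agent(instruction: str, context: str) -> str:
    # Placeholder: in practice this would call your model provider.
    return f"[agent output for: {instruction!r}]"

def execute_sop(sop: list[str], ticket: str) -> list[str]:
    """Run each documented step in order, feeding prior outputs forward."""
    results, context = [], ticket
    for step in sop:
        output = run_agent(step, context)
        context += "\n" + output  # each step sees earlier results
        results.append(output)
    return results
```

The point isn't the toy code — it's that the playbook Gerber wanted you to write for future employees is now directly executable, and changing the process means editing a list, not retraining a team.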
What remains relevant is the underlying discipline: thinking about your work as a system rather than a collection of tasks. That mental model matters as much as ever. But Gerber's detailed advice about organisational charts for future employees and training programmes for roles you haven't filled yet — that was written for a world where scaling meant adding people. Scaling now means adding capability, and capability comes from AI as much as headcount.
Peters Had the Energy, Not the Durability
Tom Peters' In Search of Excellence was electric when it came out. Action bias. Close to the customer. Autonomy and entrepreneurship. These principles feel right because they are right — in spirit.
The problem is that Peters was studying large corporations trying to act small. The entire book is about how big companies can recapture the energy and responsiveness of smaller ones. If you're already a three-person company, you don't need a book telling you to stay close to your customers — you're probably on a first-name basis with most of them.
What's worth keeping from Peters is the bias toward action over analysis. In an AI-augmented world, you can generate analysis endlessly. Market research, competitive analysis, strategy documents — AI will produce them by the ream. The risk isn't under-analysis; it's over-analysis. Psychologist Barry Schwartz documented this in The Paradox of Choice: more options reliably lead to worse decisions, greater anxiety, and decision paralysis. In the foundational study by Iyengar and Lepper at Columbia, shoppers shown 24 jam varieties were ten times less likely to buy than those shown six.[17] AI doesn't just give you 24 options. It gives you 2,400. Peters' instinct that doing something imperfect beats studying the perfect approach has actually become more relevant, even if his specific examples haven't.
The Gap None of Them Saw
Every one of these authors assumed something that no longer holds: that managing means managing people. Human beings with memory, judgment, loyalty, ego, career ambitions, and the ability to grow over time.
None of them anticipated the trust boundary problem.
When Grove talks about delegation based on task-relevant maturity, he assumes the delegate learns. An AI agent doesn't learn between sessions. When Stack talks about opening the books, he assumes people will internalise the numbers and change their behaviour. An AI agent has no behaviour to change. When Collins talks about disciplined people, he assumes discipline is an internal quality that compounds over time. AI agents don't compound.
The cost of "hiring" has dropped to nearly zero. You can spin up an AI agent for any task in minutes. But the cost of trusting has gotten higher — not just remained high, actively increased. Stanford researchers found that even domain-specific legal AI tools hallucinated in 17% to 34% of cases — and these were purpose-built RAG systems marketed as "hallucination-free."[13] Deloitte's 2024 enterprise survey of 2,770 directors and C-suite executives identified risk management and hallucination concerns among the top barriers to GenAI deployment, with 68% of organisations still stuck at pilot stage.[14] In 2024, 47% of enterprise AI users admitted to making at least one major business decision based on hallucinated content.[15] OpenAI's own researchers demonstrated that when forced to answer every question, even GPT-4-class systems produced 20-30% factual errors.[16]
You're not just trusting code you wrote. You're trusting output from systems that hallucinate, that lose context between sessions, that can be manipulated, and that have no skin in the game. Every interaction with an AI agent is a trust decision. That's a management challenge none of these authors imagined.
This makes the human elements more important, not less. The people on your team — the actual humans — are the ones who bring persistent context, real judgment, genuine creativity, and the ability to care whether the company succeeds. They're also the ones who can leave if you treat them like interchangeable parts.
The Synthesis
Here's what holds up:
From Grove: Your output is the output of your team. Leverage is everything. Measure output, not activity. In a world of AI agents, your ability to direct and verify is your most valuable skill.
From Stack: Share the numbers. Share the winnings. People who don't benefit from the outcome won't fight for it. This isn't soft management philosophy — it's the only rational strategy when your people can walk and your product can be cloned.
From Collins: Hire for character, not skill. In a team of three, every person matters disproportionately. And lead with quiet competence, not corporate theatre.
From Gerber: Think in systems. But build those systems with AI, not just with people.
From Peters: Act. Don't let the infinite analytical capability of AI become an excuse for paralysis.
What doesn't hold up is the implicit world these books described — a world of large organisations, hierarchical management, institutional moats, and the assumption that scaling means adding people. That world produced brilliant insights about human motivation, leadership, and organisational design. The structural advice is obsolete. The human insights are timeless.
The companies that win from here won't be the biggest. They'll be the ones where a small team of humans — properly motivated, honestly led, and genuinely sharing in the outcome — directs an arsenal of AI capability with judgment, speed, and trust.
No corporate speak. No management theatre. No pretending that loyalty flows one way. Just straight talk, shared stakes, and the discipline to verify what the machines produce.
Grove, Collins, and Stack wrote the playbook. They just didn't know what game it would be used for.
References
1. eWeek, "This Tiny AI Startup Is Quietly Disrupting Coding — And Making Millions," April 2025. eweek.com
2. Sacra, "Cursor Revenue, Valuation & Funding," 2025. sacra.com
3. BBF Digital, "The Tiny Team Revolution Changing the Face of Startups," December 2025. bbf.digital
4. ProfileTree, "Vibe Coding: How AI is Transforming Software Development in 2025," September 2025. profiletree.com
5. Anthropic, "How AI Is Transforming Work at Anthropic," 2025. anthropic.com
6. Faros AI, "The AI Productivity Paradox Report 2025," June 2025. faros.ai
7. CryptoRank/BitcoinWorld, "Emergent AI Startup's Stunning $70M Funding Signals Vibe-Coding Revolution," March 2025. cryptorank.io
8. Payscale, "New Research Finds Profit Sharing Increases Employee Retention and Engagement," December 2024. payscale.com
9. Carta, "State of Startup Compensation, H1 2024," July 2024. carta.com
10. Nuclino, "Two-pizza teams: The science behind Jeff Bezos' rule." nuclino.com
11. Sync, "How to Assemble the Ultimate Dream Team With Dunbar's Number(s)," March 2025. sync.com
12. Belitsoft, "AI Software Development Trends in 2025," August 2025. belitsoft.com
13. Magesh, Surani, Dahl, Suzgun, Manning, Ho, "Hallucination-Free? Assessing the Reliability of Leading AI Legal Research Tools," Stanford RegLab and HAI, 2024. hai.stanford.edu
14. Deloitte AI Institute, "The State of Generative AI in the Enterprise: Now decides Next," Q3 2024. deloitte.com
15. Drainpipe.io, "The Reality of AI Hallucinations in 2025," July 2025. drainpipe.io
16. Balbix, "When 'Good Enough' Hallucination Rates Aren't Good Enough," October 2025. balbix.com
17. Iyengar, S. S., & Lepper, M. R., "When Choice is Demotivating," Journal of Personality and Social Psychology, 2000. swarthmore.edu
18. U.S. Bureau of Labor Statistics, Business Employment Dynamics, 2024. bls.gov