Another week, another AI disaster story. Another team with prod access, no review gate, and no tested backups discovering that consequence is, in fact, real. The AI was there. So was gravity when you jumped off the roof.

Let's get the obvious out of the way first. A Claude-powered agent deleted an entire company database in nine seconds, backups included, and the headlines are treating this as a story about artificial intelligence.

It is not a story about artificial intelligence.

It is a story about a team that set something sharp and fast loose with no guard in sight, then expressed shock when it cut quickly. When a carpenter loses a finger because he took the guard off his saw, nobody writes a thinkpiece about the saw. We say "that's awful," we note the missing guard, and we move on. This is that story with better headlines, and the same shrug waiting at the end.

These Teams Were Already Capable of This

Here is the part nobody wants to say out loud: the teams making these mistakes were not competent teams who got ambushed by rogue AI. They were teams with no change management, no destructive-operation confirmation, no backup verification, and no separation between agent credentials and production infrastructure. They were a disaster waiting for a vehicle.

The AI didn't introduce the failure mode. It just executed it at machine speed, which turns out to be considerably more efficient than waiting for a confident senior engineer with prod credentials to type the wrong thing into a terminal at 4pm on a Friday. The only thing AI changed here was the duration.

The Convenient Scapegoat

There is a reason these stories land the way they do. "Our process was broken and we had no meaningful engineering standards" is not a headline anyone will click. "AI destroys everything in seconds" absolutely is. One requires the organisation to take responsibility. The other hands responsibility to a tool and lets everyone go home feeling vaguely vindicated.

It also serves a secondary constituency: people who are, for professional or psychological reasons, uncomfortable with AI productivity gains. Every disaster story is ammunition for "see, it's dangerous, we should go slowly" — which conveniently maps to "we should continue doing things the way I already know how to do them."

The Actual Engineering Problem

If you are going to use agents — and you should, because they are genuinely useful — there are some rules so straightforward it is embarrassing to have to write them down.

Things That Are Not Optional
  • No agent gets unreviewed write access to production. Human-in-the-loop on anything destructive. Full stop. (A minimal gate is sketched after this list.)
  • Verify your backup restore procedure before you need it. A backup you have never restored is not a backup. It is a comfort blanket. (A restore check is sketched below.)
  • Agent credentials are scoped to what the task requires. Not everything. Not prod. Not backups. (A scoping sketch follows.)
  • Treat LLM-generated operations the same as you'd treat a PR from someone you hired last week. Review it.
  • Destructive operations — DROP, DELETE, purge, wipe — require explicit confirmation, logged, auditable. This is not new thinking. This predates AI by thirty years.
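For the avoidance of doubt, here is roughly what the first rule looks like in practice. A minimal sketch, assuming every statement the agent runs is funneled through one function you control: guarded_execute, the log file, and the regex are illustrative inventions, not any framework's API, and a real gate would parse statements properly rather than pattern-matching them.

```python
import logging
import re
from datetime import datetime, timezone

logging.basicConfig(filename="agent_audit.log", level=logging.INFO)

# Statements that never run without a human saying yes. Regex matching is
# illustrative only; a production gate would use a real SQL parser.
DESTRUCTIVE = re.compile(r"^\s*(DROP|DELETE|TRUNCATE|ALTER)\b", re.IGNORECASE)

def guarded_execute(conn, sql: str, agent_id: str) -> None:
    """Run agent-issued SQL via a DB-API connection (e.g. psycopg2),
    stopping for human confirmation on anything destructive."""
    stamp = datetime.now(timezone.utc).isoformat()
    if DESTRUCTIVE.match(sql):
        # Log the attempt whether or not it is approved: auditable by default.
        logging.info("%s agent=%s proposed: %s", stamp, agent_id, sql)
        answer = input(f"Agent {agent_id} wants to run:\n  {sql}\nType YES to allow: ")
        if answer != "YES":
            logging.info("%s agent=%s rejected by operator", stamp, agent_id)
            raise PermissionError("destructive operation rejected")
        logging.info("%s agent=%s approved by operator", stamp, agent_id)
    with conn.cursor() as cur:
        cur.execute(sql)
    conn.commit()
```

The design point is the funnel. If the agent can reach the database any way other than through the gate, the gate is decoration.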
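The restore check is equally unglamorous. A sketch under stated assumptions: PostgreSQL, a pg_restore-compatible dump at a known path, and a hypothetical customers table for the sanity query. Swap in your own stack; the point is that this runs on a schedule, not during a crisis.

```python
import subprocess

import psycopg2  # assumes PostgreSQL; adapt to whatever you actually run

SCRATCH_DB = "restore_check"          # throwaway database, never prod
LATEST_DUMP = "/backups/latest.dump"  # wherever your backup job writes

def verify_restore() -> bool:
    """Return True only if the latest dump restores and contains data."""
    # Recreate the scratch database from the dump.
    subprocess.run(["dropdb", "--if-exists", SCRATCH_DB], check=True)
    subprocess.run(["createdb", SCRATCH_DB], check=True)
    subprocess.run(
        ["pg_restore", "--no-owner", "-d", SCRATCH_DB, LATEST_DUMP],
        check=True,
    )
    # A restore that "succeeds" into an empty database is still a failure.
    with psycopg2.connect(dbname=SCRATCH_DB) as conn:
        with conn.cursor() as cur:
            cur.execute("SELECT count(*) FROM customers")  # hypothetical table
            (rows,) = cur.fetchone()
    return rows > 0

if __name__ == "__main__":
    print("backup restorable:", verify_restore())
```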
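And credential scoping is one admin session, run by a human, once. The role, table, and database names below are hypothetical; the grants are standard PostgreSQL.

```python
import psycopg2

ADMIN_DSN = "dbname=app user=admin"  # run as an administrator, never as the agent

SCOPING = [
    "CREATE ROLE agent_reporting LOGIN PASSWORD 'rotate-me'",
    "GRANT CONNECT ON DATABASE app TO agent_reporting",
    "GRANT USAGE ON SCHEMA public TO agent_reporting",
    # Read-only on the tables this task touches. No DELETE, no DROP,
    # and no access to the backups.
    "GRANT SELECT ON orders, customers TO agent_reporting",
]

with psycopg2.connect(ADMIN_DSN) as conn:
    with conn.cursor() as cur:
        for stmt in SCOPING:
            cur.execute(stmt)
```

The agent connects as agent_reporting and nothing else. When it decides, at machine speed, to try something destructive, the database says no before any human has to.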

None of this is exotic. None of it is AI-specific. This is just the baseline engineering hygiene that responsible teams apply to any system with access to data they cannot afford to lose. The AI didn't change the rules. It just made breaking them faster.

The Saw Is Fine

LLMs will do exactly what you tell them to, extremely quickly, without hesitation or the self-preserving instinct that causes a human to pause and think "actually, should I be doing this?" That is mostly a feature. It only becomes a problem when the person giving the instructions hasn't thought carefully about what they're asking for.

The question is not "is AI dangerous in production?" The question is "do we have the process maturity to use any powerful tool in production?" If the answer is no — no review gates, no tested backups, no credential scoping — the AI is not your problem. You have a much older problem you have been getting away with until now.

The framing matters because it determines what gets fixed. Call this an AI story and teams read it, nod gravely, and change nothing. Call it what it actually is — a process failure with a fast executor — and maybe someone checks whether their backups actually restore. Maybe someone scopes their agent credentials. Maybe the next nine seconds doesn't happen.