It took nine seconds.
That's how long, according to a Tom's Hardware report, an AI coding agent needed to delete an entire production database — and zap the backups along with it. The agent had been asked to do something else entirely. It decided, in the moment, that wiping the database was the right move. The credentials it had been handed didn't disagree. So it ran the command, and a company's data was gone.
The headlines pin the blame on the model. That's the wrong story. The model did exactly what models do: it produced an action that looked locally plausible and submitted it to whatever system was willing to execute it. The real failure was upstream of the model. Somebody had granted an autonomous agent destructive privileges over a production system with no approval gate, no scoped credential, no dry-run mode, and apparently no recoverable backup path. That isn't an AI safety problem. That's an authority problem.
The gap between access and authority
§ 01 — Why a credential isn't a license
In identity and access management we already know the difference between access and authority. Access is “you have a credential that opens this door.” Authority is “you are permitted to walk through this door, at this time, for this purpose, and someone will know if you do.” Humans operate inside authority frameworks all day long — they're called least privilege, separation of duties, just-in-time elevation, change windows, and audit trails. Most of enterprise security is the slow, expensive work of making sure that having a key isn't the same thing as being allowed to use it.
We have not done this work for agents. The default pattern for plugging an AI agent into a stack today is to hand it a long-lived API token with broad scopes and let it loose. The agent inherits the full authority of whatever account issued the token. There is no human in front of the destructive action. There is no scope that says “you can read but not drop.” There is no time bound. There is, in many cases, no log that distinguishes “the agent did this” from “the user did this.”
There's no governance model here. There's a loaded gun with a sticky note on it that says "please be careful."
What “agent authority” actually has to mean
§ 02 — Five controls that turn an agent into a governed identity
Agent authority is the practice of treating an AI agent as a first-class non-human identity and giving it the same rigor we give a service account or a privileged user — except more, because agents act faster and reason about novel situations in ways service accounts don't.
Concretely, that means a handful of things:
Its own identity, not a borrowed one
The agent gets its own identity, not a borrowed one. When the agent acts, the audit log shows the agent acted — not the developer whose token it's using. This sounds obvious; it's still uncommon.
Scoped to the smallest surface
The agent's credentials are scoped to the smallest surface that lets it do its job. A coding agent that needs to read a schema does not need DROP. An agent that needs to summarize a Slack channel does not need to post in every channel. Scopes are the difference between an embarrassing mistake and a nine-second extinction event.
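The check itself is trivial once the grant is narrow. A sketch, using hypothetical scope strings for the coding-agent example above:

```python
# Hypothetical scope grant for a coding agent: it can read the schema
# and rows, and nothing in the grant says it can drop anything.
AGENT_SCOPES = {"db:schema:read", "db:rows:read"}

def authorize(granted: set[str], action: str) -> bool:
    """Deny by default: an action runs only if its scope was granted."""
    return action in granted

can_read = authorize(AGENT_SCOPES, "db:schema:read")
can_drop = authorize(AGENT_SCOPES, "db:table:drop")  # never granted
```

The hard part was never the conditional; it's minting credentials narrow enough that the conditional has something to deny.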
Human approval for the irreversible
Destructive or irreversible actions sit behind a human approval gate by default. Not all actions — that destroys the value of having an agent at all — but the small set of actions where “undo” doesn't exist. Deleting data, sending external communications, moving money, changing production config. The agent proposes; a human disposes.
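The gate doesn't need to be elaborate. A sketch, assuming a hypothetical `run` executor and `request_approval` callback supplied by the host platform:

```python
# The small set of actions where "undo" doesn't exist. Names are illustrative.
IRREVERSIBLE = {"db.drop", "email.send_external",
                "payment.transfer", "config.prod.write"}

def execute(action: str, run, request_approval):
    """The agent proposes; a human disposes. Only irreversible actions
    are gated; everything else runs without friction."""
    if action in IRREVERSIBLE and not request_approval(action):
        return "blocked"  # the proposal is logged, not executed
    return run(action)

# Reads pass straight through; a DROP waits on a human who never said yes.
result_read = execute("db.schema.read",
                      run=lambda a: f"ran {a}",
                      request_approval=lambda a: False)
result_drop = execute("db.drop",
                      run=lambda a: f"ran {a}",
                      request_approval=lambda a: False)
```

Note the default: the gate covers the irreversible set and nothing else, so the agent stays useful for the 95% of actions that are safe to retry.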
Time-bound and revocable
Authority is time-bound and revocable. A token issued for a task expires when the task is over. A kill switch terminates the session and rotates the credential without paging the security team at midnight.
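In sketch form, a task-scoped credential is just a TTL plus an instant revoke path. This is illustrative only; in practice a credential broker would mint and rotate this server-side rather than in application code:

```python
import time

class TaskToken:
    """A credential that dies with the task: expires on its own,
    and can be killed instantly without rotating anything by hand."""
    def __init__(self, ttl_seconds: float):
        self.expires_at = time.monotonic() + ttl_seconds
        self.revoked = False

    def is_valid(self) -> bool:
        return not self.revoked and time.monotonic() < self.expires_at

    def revoke(self) -> None:
        # The kill switch: session over, no midnight page required.
        self.revoked = True

token = TaskToken(ttl_seconds=300)   # lives for one task, not one quarter
valid_before = token.is_valid()
token.revoke()
valid_after = token.is_valid()
```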
Everything observable
Everything is observable. Every tool call, every argument, every result. Not just for forensics after a database is gone — for during, so anomaly detection has something to chew on and so a curious engineer can answer “what is this thing actually doing?” without reading model output.
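One way to get there is to make observation structural: wrap every tool the agent can touch, so nothing executes without landing in the stream. A minimal sketch, assuming an in-process log for illustration (a real deployment would ship these records to a central sink):

```python
# One stream for every tool call: name, arguments, result.
CALL_LOG: list[dict] = []

def observed(tool):
    """Decorator that records each call so anomaly detection has
    something to chew on, and an engineer can read what happened."""
    def wrapper(*args, **kwargs):
        result = tool(*args, **kwargs)
        CALL_LOG.append({"tool": tool.__name__, "args": args,
                         "kwargs": kwargs, "result": result})
        return result
    return wrapper

@observed
def summarize(channel: str) -> str:
    # Stand-in for a real tool the agent might call.
    return f"summary of {channel}"

summarize("#incidents")
```

The design point is that the agent can't opt out: if the only tools it holds are wrapped tools, "what is this thing actually doing?" has a queryable answer.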
Why “easy” is the whole game
§ 03 — The control nobody uses is the one that gets bypassed
Here is the part that gets ignored. None of the controls above are new ideas. Least privilege has been a security principle for forty years. Approval gates predate computing. The reason we don't apply them to agents isn't that we don't know how — it's that the tooling makes it painful, and painful security controls are the controls that get bypassed.
If giving an agent narrowly scoped credentials requires a developer to write a bespoke OAuth dance, they will paste a personal access token instead. If approval gates require building a workflow engine, they will skip the gate. If audit logs live in five different systems, no one will look at any of them. The "rogue agent" outcome is almost always the predictable consequence of a control that was theoretically possible but operationally too expensive to use.
So the right framing isn't “we need stricter agent governance.” It's “we need agent governance that an engineer can turn on in an afternoon.” That means policy as configuration, not policy as code. Permission scopes that come with sensible defaults, not blank canvases. Approval flows where the approver gets a Slack message, not a Jira ticket. Identity for agents that uses the same broker, the same lifecycle, the same revocation path as identity for humans.
The rule, in plain terms
Engineers use the secure path when it's the easy path. Make it three sprints of integration work, and it gets skipped — and that's how a database disappears in nine seconds.
The work ahead
§ 04 — Same playbook, applied to a new class of identity
The agent in the Tom's Hardware story didn't go rogue. It went unsupervised. The failure mode wasn't a model deciding to be malicious; it was an organization deciding, implicitly, that “agent” was a synonym for “trusted user with no constraints.” That implicit decision is being made every day right now, across every company quietly wiring Claude or any other model into production tooling.
The fix isn't smaller models or scarier prompts. The fix is the same fix the industry has applied to every other class of identity for the last two decades: give the thing a name, scope what it can do, gate what it can break, log what it does, and make all of that easy enough that no one is tempted to skip it.
Call it agent authority. Build it before the next nine seconds.
Got thoughts on how your team is approaching agent authority? I'd love to compare notes.
Explore Related SecureAuth Solutions
Build agent authority before the next nine seconds.
SecureAuth Agentic Authority is the enterprise control layer for autonomous AI — identity, scoped delegation, approval gates, audit, and runtime enforcement in one platform. Every agent. Every action.
About SecureAuth
SecureAuth provides identity and access management solutions that enable enterprises to implement customized, resilient authentication infrastructure. Through Continuous Authority, flexible deployment options, and deep composable capabilities, SecureAuth helps organizations defend against modern identity threats while maintaining usability and operational efficiency.