What You’ll Find This Week
Agentic AI is being touted as the next wave of digital transformation. Imagine it: a world where software pursues a goal instead of producing a single output. That world is quickly becoming reality. But as it does, it raises some fascinating questions about the nature of identity (and accountability).
This week, I’m walking you through some of the unintended outcomes of leveraging agentic AI and how you can structure your agent deployments so the outcomes stay trackable and auditable.
Here’s what you’ll find:
This Week’s Article: Me, Myself, and AI
Share This: The Agent Passport
Don’t Miss Our Latest Podcast
This Week’s Article
Me, Myself, and AI.
The role of identity in the era of agentic AI.
Software as we’re used to it is a vending machine. You click. It responds.
Agentic AI is the first time software stops waiting for you. You give it an outcome, and it runs the play to get there, across apps, tabs, and APIs. It decides steps. It uses tools. It keeps going while you’re in a meeting, on a plane, or asleep.
That autonomy is why Microsoft calls multi-agentic AI “the next wave of business transformation.”
Not because agents can draft emails or summarize meeting notes, but because they turn intent into execution.
You don’t “use” an agent. You delegate an outcome and trust it to execute.
And that delegation is where things go sideways.
So, You Vibe Coded An Agent?
Your kid’s been talking about Taylor Swift for weeks. Ticket Day is coming, and you already know how this goes: queues, bots, resale chaos, and a browser with 17 tabs open while you sweat through “Just a moment…” screens.
So you do what every good innovator does in the agent era. You spend a weekend vibe coding a simple agent. And you give it one job…
Score the best seats you can for the show.
It watches listings. It checks resale sites. It filters out sketchy sellers. It handles checkout when it finds a fair price.
Buried in your instructions is one line that seemed responsible when you typed it: “Don’t miss out if the price looks fair for great seats.”
Tuesday afternoon, you get a fraud alert.
You open the alert to find that your agent bought eight tickets across three resale sites. Total cost: $1,837.
The agent’s reasoning isn’t crazy at all. It “increased purchase success probability.” It “secured backups in case orders failed.” It “kept extras because demand is rising.”
But you didn’t click “Buy Now.” You weren’t even at your laptop.
Regardless, the purchases happened. Your agent did exactly what you told it to.
Swap The Checkout Button For The Deploy Button
Now imagine this isn’t an agent for concert tickets. Instead, it’s an agent built to QA code deployments inside your company’s CI/CD system. It can review pull requests, run tests, suggest changes, write fixes, and ship to production.
It moves fast. It cleans up edge cases. It unblocks engineers.
Until that delegation goes sideways.
A “small” change slips through. The agent updates a dependency as part of a fix. CI runs, a test fails once, then passes on retry, and the pipeline goes green. So the agent does what it was built to do. It merges on green.
But the dependency change behaves differently under real traffic. Checkout starts throwing 500s. Carts evaporate. Support blows up. Your revenue for the day grinds to a halt, sending everyone into a frenzy.
Same failure pattern. Different blast radius.
What Is Identity?
If someone steals your card to buy $1,837 in Taylor Swift tickets, the story is clear. That wasn’t you. There are systems in place to handle this kind of fraud.
But this wasn’t a thief. This was your delegate. A thing that acted in your name, with your permission, in pursuit of your intent.
So what happens next?
Do you call the bank and say, “That wasn’t me,” even though your card was used exactly the way you configured it?
Do you dispute the charges despite knowing the seller did deliver what you purchased?
Do you eat $1,837 because the intent came from you, even if the decision did not?
Welcome to the agentic era where capability and culpability collide under the question of identity.
In this future, identity is no longer just “who logged in.” Identity becomes: who had authority to act.
The Identity Model You’re Using Was Built For Humans
Enterprise identity works because humans are slow and legible.
One person.
One login.
One role.
A limited number of meaningful actions per day.
Humans are serial. They do one thing at a time, and you can usually see the thing they did. When they screw up, it’s easy to trace the mistake to the source:
“This user sent this file.” “That admin changed this setting.”
But agents break those assumptions. Agents act in parallel swarms across systems you’re not looking at, faster than you can watch. They chain tools. They delegate to other agents. They do a hundred small actions that add up to one outcome.
Security frameworks are already flagging what happens when you give agents real tools and real permissions. OWASP calls this Excessive Agency: systems that can take damaging actions when outputs are ambiguous, manipulated, or simply wrong.
Put simply:
The moment agents can commit actions, your human identity model stops working.
Identity Becomes Authority
In the human world, identity is mostly “who you are.”
In the agentic world, identity becomes “what can act, on behalf of whom, under what limits.”
That is why serious governance guidance keeps coming back to accountability, oversight, and auditability. NIST’s AI Risk Management Framework is built around making AI systems governable across their lifecycle, with explicit attention to roles, processes, and documentation.
And it’s why the identity community is now pushing “delegated authority” instead of “agent impersonation.” Agents should be distinct actors with provable on-behalf-of scope, not silent puppets wearing a user’s credentials.
Identity used to be a login and a role. In an agentic world, identity is delegated authority. But delegated authority is useless without provenance. You need to know which agent acted, on whose behalf, under what scope, with what approvals, and what it touched.
Otherwise the agent is functionally impersonating a person. And when it commits, you won’t be able to separate what you asked for from what it decided.
The point here isn’t “agents are bad!” but rather that we need to ensure that the authority granted to agents, and the actions they take using that authority, are trackable.
That starts by giving every agent papers.
The Agent Passport
You can’t stop people from delegating to agents. You can’t even stop agents from committing actions once they’re wired into tools.
What you can do is make delegation to agents auditable by using an Agent Passport. The passport is a stack of four artifacts that stay in sync: a human-readable definition, a machine-readable spec (JSON/YAML), an enforcement policy, and an automated audit trail.
1) A Human-Readable Passport
This is the one-page view a human can understand in 60 seconds. It answers what the agent is for, who owns it, who it represents, what it can touch, and where the hard limits and step-up approvals are.
This is what you pull up in a postmortem, audit, or “why did we allow this” conversation. (See our Agent Passport template below.)
2) A Machine-Readable Passport Spec
This is the same passport, but structured so it can be parsed, enforced, and versioned. Think JSON or YAML stored in a repo, reviewed like code, and tied to a specific agent identity.
This is how you prevent passport drift. If the rules change, the diff exists. If the spec wasn’t updated, you know it.
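To make that concrete, here’s a sketch of what a passport spec could look like. It’s expressed here as a Python dict mirroring the JSON shape; every field name is illustrative, not a standard:

```python
# Illustrative passport spec. Field names are hypothetical, not a standard.
# In practice this would live as a JSON or YAML file, versioned in a repo
# and reviewed like code.
PASSPORT_SPEC = {
    "spec_version": "1.3.0",               # bumped on every rule change
    "agent_id": "agent://ticket-buyer",    # the agent's own identity
    "on_behalf_of": "user://jordan",       # who delegated authority
    "owner": "team://platform-automation", # who answers for it
    "allowed_tools": ["search_listings", "checkout"],
    "allowed_environments": ["production"],
    "limits": {
        "max_spend_per_run_usd": 400,
        "max_purchases_per_run": 2,
    },
    "step_up_approval": ["checkout"],      # commits that need a human
}
```

Because the spec is versioned, “the rules changed but the spec didn’t” becomes a visible diff rather than an argument.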
3) An Enforcement Policy
The spec is meaningless unless it becomes real constraints at runtime.
That enforcement shows up where it matters:
the agent has its own identity (service principal, workload identity, etc.)
credentials are scoped to allowed systems and environments
tool calls are gated by allowlists, limits, and approval triggers
commit actions require step-up approval when defined
In other words: the passport spec becomes policy.
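Here’s a minimal sketch of that idea: a runtime gate that checks every proposed tool call against the spec before it executes. The spec shape, field names, and limits are all made up for illustration:

```python
# Minimal runtime gate: every tool call is checked against the passport
# spec before it executes. Names and limits are illustrative only.
SPEC = {
    "allowed_tools": ["search_listings", "checkout"],
    "limits": {"max_spend_per_run_usd": 400, "max_purchases_per_run": 2},
    "step_up_approval": ["checkout"],  # commits that require a human
}

def gate_tool_call(tool, spend_usd, purchases_so_far, human_approved=False):
    """Return (allowed, reason) for a proposed tool call."""
    if tool not in SPEC["allowed_tools"]:
        return False, f"tool '{tool}' not on allowlist"
    if spend_usd > SPEC["limits"]["max_spend_per_run_usd"]:
        return False, "spend exceeds per-run limit"
    if purchases_so_far >= SPEC["limits"]["max_purchases_per_run"]:
        return False, "purchase count exceeds per-run limit"
    if tool in SPEC["step_up_approval"] and not human_approved:
        return False, "commit requires step-up approval"
    return True, "ok"

# The $1,837 eight-ticket run dies at the first gate it hits:
print(gate_tool_call("checkout", spend_usd=1837, purchases_so_far=0))
# → (False, 'spend exceeds per-run limit')
```

The design choice that matters: the gate runs outside the agent, so a “creative” plan can’t reason its way past it.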
4) An Automated Audit Trail
This is the provenance layer. The system generates it automatically.
It should let you reconstruct, end-to-end:
who delegated what to which agent
what objective and constraints were provided
what tools were used and what data was touched
what commits happened and with what approvals
what permissions were in force at the moment of commit
which passport spec version governed the run
This is how you separate “I asked it to look” from “it decided to commit.”
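One way to make that reconstruction mechanical is to emit a structured record for every action, stamped with the governing spec version. A sketch, with hypothetical field names covering each question above:

```python
# Provenance sketch: one structured record per agent action.
# Field names are illustrative, not a standard.
import json
from datetime import datetime, timezone

AUDIT_LOG = []  # in practice: an append-only store, not a Python list

def record_action(agent_id, on_behalf_of, objective, tool, data_touched,
                  committed, approvals, permissions, spec_version):
    """Append one provenance record answering the questions above."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "agent_id": agent_id,
        "on_behalf_of": on_behalf_of,       # who delegated
        "objective": objective,             # what was asked
        "tool": tool,                       # what was used
        "data_touched": data_touched,
        "committed": committed,             # "looked" vs. "committed"
        "approvals": approvals,
        "permissions_at_commit": permissions,
        "spec_version": spec_version,       # which passport governed the run
    }
    AUDIT_LOG.append(entry)
    return entry

entry = record_action(
    agent_id="agent://ticket-buyer",
    on_behalf_of="user://jordan",
    objective="buy fair-priced seats",
    tool="checkout",
    data_touched=["payment_card"],
    committed=True,
    approvals=[],  # an empty list here is itself the finding
    permissions=["checkout:production"],
    spec_version="1.3.0",
)
print(json.dumps(entry, indent=2))
```

In the postmortem, `committed=True` with `approvals=[]` is exactly the gap between what you asked for and what the agent decided.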
Why The Agent Passport Works
It does three things most orgs are missing right now.
1) It Makes Delegation Legible
Agents don’t fail in a way humans recognize. They fail as a chain of tiny actions. The passport gives you a shared language for what the agent is allowed to do, on whose behalf, and under what limits.
2) It Binds Authority To The Actor
Instead of an agent quietly borrowing a person’s credentials, the agent becomes its own actor with scoped permissions. That’s the difference between “impersonation” and “delegation.”
3) It Produces Receipts By Default
When something goes wrong, the question is never “did we log it.” The question is “can we reconstruct it.” The passport forces provenance: which spec governed the run, what tools were called, what data was touched, what commits happened, and what approvals were triggered.
That’s how you keep autonomy without losing accountability.
No Papers, No Power
$1,837 in Taylor Swift tickets may not be the end of the world. (You can always build a new agent to sell off your extra tickets, right?)
You gave an agent intent. It took initiative. It committed.
Now scale that to a company. Not one agent. Hundreds. Thousands. All acting in parallel. All touching real systems. All making “reasonable” micro-decisions you didn’t review.
In an agentic world, the scary part isn’t what the agent thinks. It’s what it’s allowed to do.
If you can’t explain an agent’s actions in a postmortem, it shouldn’t be able to commit.
If an agent can commit, it needs papers.
No papers, no power.
Share This
Use the following doc as an example for constructing your own Agent Passport:
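As a starting point, here’s an illustrative one-pager sketched as a small data structure and rendered for a human. Every name and value is a made-up example, not a standard:

```python
# Illustrative human-readable passport. All fields and values are
# made-up examples covering the questions the one-pager answers.
from dataclasses import dataclass

@dataclass
class AgentPassport:
    agent_name: str
    purpose: str            # what the agent is for
    owner: str              # who answers for it
    represents: str         # on whose behalf it acts
    systems_touched: list
    hard_limits: list
    step_up_approvals: list

    def render(self):
        """Return the 60-second, one-page view."""
        return "\n".join([
            f"AGENT PASSPORT: {self.agent_name}",
            f"Purpose: {self.purpose}",
            f"Owner: {self.owner}",
            f"Acts on behalf of: {self.represents}",
            "Systems touched: " + ", ".join(self.systems_touched),
            "Hard limits: " + "; ".join(self.hard_limits),
            "Step-up approvals: " + "; ".join(self.step_up_approvals),
        ])

passport = AgentPassport(
    agent_name="deploy-qa-agent",
    purpose="QA pull requests and gate merges in CI/CD",
    owner="platform-eng@example.com",
    represents="the on-call release engineer",
    systems_touched=["GitHub", "CI pipeline", "staging environment"],
    hard_limits=["no direct production deploys", "no dependency upgrades"],
    step_up_approvals=["merge to main", "any rollback"],
)
print(passport.render())
```

This is the page you pull up in the postmortem before anyone opens the logs.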