Independent research and operating notes on AI agent governance.
Wrkr Series / Post 1 of 4 / AppSec
The Most Dangerous AI Agent Is the One Security Never Inventoried
The call usually starts the same way: "We are moving fast with agents, but we still need to know our real exposure." The hard part is that most organizations do not fail first on the tools they reviewed. They fail on the write-capable path that became normal before security ever saw it.
That is why unknown-to-security inventory deserves executive attention before model debates.
Implementation context
The current Wrkr repo is explicit about this layer of the problem: deterministic discovery across local setup, repositories, GitHub orgs, MCP declarations, and CI execution patterns, with machine-readable states such as approved, known_unapproved, and unknown_to_security.
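Under that model, each discovered path becomes one durable record. The sketch below is illustrative only: the field names, enums, and JSON shape are assumptions for this post, not the actual Wrkr schema.

```python
from dataclasses import dataclass, asdict
from enum import Enum
import json

class ReviewState(str, Enum):
    APPROVED = "approved"
    KNOWN_UNAPPROVED = "known_unapproved"
    UNKNOWN_TO_SECURITY = "unknown_to_security"

class Surface(str, Enum):
    LOCAL = "local_setup"
    REPO = "repository"
    MCP = "mcp_declaration"
    CI = "ci_workflow"

@dataclass
class AgentPath:
    name: str
    surface: Surface
    declared_in: str           # file or endpoint where the path is declared
    review_state: ReviewState
    write_capable: bool

    def to_json(self) -> str:
        # sort_keys keeps output stable so re-runs can be diffed
        return json.dumps(asdict(self), sort_keys=True)

# A write-capable path discovered in CI that security has never reviewed:
path = AgentPath(
    name="deploy-assistant",
    surface=Surface.CI,
    declared_in=".github/workflows/deploy.yml",
    review_state=ReviewState.UNKNOWN_TO_SECURITY,
    write_capable=True,
)
```

The point of the record is the explicit third state: unknown_to_security is a first-class value, not an absence in a spreadsheet.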
Where the pressure shows up
A CISO asks for the current list of write-capable AI paths. The room has answers, but not one answer. Platform has a list of approved pilots. Security has a list of reviewed vendors. Team leads have local exceptions and workflow shortcuts. Individual engineers know what is actually installed on the machine that shipped last night's fix. The organization has governance intent, but no shared runtime map.
That mismatch creates the real risk. A reviewed path can be bounded, challenged, and improved. An unreviewed path forces the team to do three jobs at once under pressure: discover what exists, infer what it can touch, and reconstruct who thought it was acceptable. None of that work is quick when the path spans local tooling, repository files, MCP declarations, and CI runners.
"Developers are using AI tools" is not the useful diagnosis. The useful diagnosis is: which paths can already change source, alter CI behavior, or inherit credentials from surrounding systems. That answer is operationally narrow and politically neutral, which is why it moves decisions forward faster than broad policy arguments.
The unknown path is usually cross-surface and incremental. A local tool is enabled for convenience. A repo adds guidance files. An MCP endpoint appears to simplify integration. A workflow then makes the pattern unattended. No single step looks like a major launch. The combined path still becomes a governed surface whether leaders planned it or not.
The failure mode
The anti-pattern is governance by procurement memory. Security tracks products. Engineering tracks what is convenient. Nobody owns the end-to-end path from declaration to execution. That approach can look acceptable when tools are read-only assistants. It breaks when those same tools can write, approve, restart, or trigger changes downstream.
The deeper failure is confidence inflation. Leadership hears "approved list" and assumes "controlled surface." But unknown paths hide in config files, scripts, IDE settings, and workflows that nobody classifies as agent infrastructure. The control story sounds mature while the runtime story remains incomplete.
The better pattern
The better pattern treats inventory as control infrastructure, not as reporting theater. The required output is deterministic and durable: what exists, where it is declared, what authority it inherits, and whether security has ever reviewed it. A tool like Wrkr is useful here as implementation context because it emits that structure in machine-readable form. The value is the model, not the label.
Good output also separates categories that teams tend to mix. Local setup is different from repo declarations. Interactive use is different from unattended CI execution. Direct write permission is different from conditional write capability via inherited access. When those categories collapse, prioritization collapses too.
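One way to keep those categories from collapsing is to score them separately during triage. The sketch below is a hypothetical weighting, not a Wrkr feature; the field names and weights are assumptions chosen to illustrate the ordering, not calibrated values.

```python
from dataclasses import dataclass

@dataclass
class PathFacts:
    surface: str            # "local_setup" | "repository" | "mcp" | "ci_workflow"
    unattended: bool        # runs without a human in the loop (e.g. CI)
    direct_write: bool      # holds write permission itself
    inherited_write: bool   # conditional write via surrounding credentials

def triage_priority(p: PathFacts) -> int:
    """Higher score = review sooner. Unattended execution with any write
    route outranks interactive, read-only use."""
    score = 0
    if p.direct_write:
        score += 4          # direct write permission dominates
    if p.inherited_write:
        score += 2          # conditional write still counts, just lower
    if p.unattended:
        score += 3          # no human checkpoint before effects land
    if p.surface == "ci_workflow":
        score += 1          # CI runners typically carry broad credentials
    return score
```

For example, an unattended CI path with only inherited write access still scores well above an interactive local tool with no write route, which is exactly the ordering the paragraph above argues for.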
The tradeoff is straightforward: deeper inventory work now versus slower incident and audit work later. Teams often resist this layer because it looks administrative. In practice, though, it reduces friction for both security and platform, because arguments move from "what do we think is happening?" to "what changed in the recorded surface?"
Why security and the CISO care
Unknown inventory degrades every downstream control. Approval programs turn into exception memory. Audit preparation turns into spreadsheet reconstruction. Incident calls begin with discovery work that should have been done months earlier. Even strong teams then answer executive questions with inference instead of evidence.
For a CISO, this is a credibility issue as much as a tooling issue. If the organization cannot enumerate write-capable AI paths today, every assurance statement about control quality carries hidden uncertainty. You cannot meaningfully approve what you cannot name.
Why platform and engineering care
Platform teams usually absorb this cost first. They debug workflow drift, reconcile conflicting tool assumptions, and clean up local differences that only appear after CI starts behaving strangely. A unified inventory turns that pain into visible inputs the team can prioritize and automate.
Discovery also improves delivery speed when done well. Onboarding gets clearer, repo expectations become explicit, and later boundary control can target high impact paths first. That is how security and velocity align: selective control based on known surfaces, not blanket slowdown caused by uncertainty.
What to do next
- Run a 30-day inventory pilot on one engineering organization instead of starting with company-wide policy.
- Catalog AI paths across local tooling, repo declarations, MCP entries, and CI workflows in separate buckets.
- Tag each path with approval status, including an explicit unknown_to_security state.
- Map direct and inherited write capability so triage reflects actual authority, not just declarations.
- Require machine-readable output that can be re-run and compared, not hand-maintained spreadsheets.
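The last requirement, output that can be re-run and compared, can be as simple as diffing two JSON snapshots keyed by path name. A minimal sketch, assuming each snapshot is a JSON array of records that at least carry a name field (the record shape here is illustrative, not a real tool's format):

```python
import json

def diff_inventory(before: str, after: str) -> dict:
    """Compare two JSON inventory snapshots keyed by path name.
    Reports which paths appeared, disappeared, or changed fields."""
    old = {p["name"]: p for p in json.loads(before)}
    new = {p["name"]: p for p in json.loads(after)}
    return {
        "added":   sorted(new.keys() - old.keys()),
        "removed": sorted(old.keys() - new.keys()),
        "changed": sorted(
            n for n in old.keys() & new.keys() if old[n] != new[n]
        ),
    }

# Two snapshots a week apart: a new unreviewed path has appeared.
before = json.dumps([
    {"name": "repo-helper", "review_state": "approved"},
])
after = json.dumps([
    {"name": "repo-helper", "review_state": "approved"},
    {"name": "deploy-assistant", "review_state": "unknown_to_security"},
])
```

A diff like this is what turns the pilot into a control: the weekly question becomes "what entered the surface since last run?" rather than a fresh discovery exercise.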
If that pilot cannot produce one consistent answer to leadership in a week, the organization is not ready to claim mature AI governance yet. Once it can, the next question gets much sharper: which small config surfaces are already expanding boundary reach?