Sprawl Series / Post 2 of 4 / CISO

An 11:1 Approval Gap Is a Proof Gap, Not a Panic Statistic

Ratios like 11:1 can either sharpen governance or derail it. In one board packet they become panic fuel. In a mature security program they become a control design signal: we detect AI tooling faster than we can prove approval posture in a machine-readable way.

That is a proof problem before it is a messaging problem.

Grounding

Run ID: sprawl-v2-full-20260312b
Core metric: 11:1 not-baseline-approved to approved tools
Counts: 715 not-baseline-approved, 65 approved
Scope: non-source tools only
Core artifact: runs/tool-sprawl/sprawl-v2-full-20260312b/agg/campaign-summary-v2.json

Why CISOs should not overread the ratio

A weak reading of the ratio says: there are eleven bad AI tools for every good one. That is not what the report measured. The report uses a stricter and more useful classification. A tool is counted as baseline-approved only when the deterministic approval policy can resolve it positively. Everything else sits outside that approved baseline.
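The three-state classification above can be sketched in a few lines. This is a hedged illustration, not the report's actual policy engine; the record keys (`name`, `prohibited`) and the `approved_baseline` set are assumptions for the sketch.

```python
def classify(tool: dict, approved_baseline: set) -> str:
    """Resolve a detected tool to one of three approval states.

    A tool counts as 'approved' only when the deterministic policy
    resolves it positively. An explicit prohibition marker (if one
    exists) yields 'explicit-unapproved'. Everything else falls into
    the unresolved bucket: 'approval-unknown'.
    """
    if tool.get("name") in approved_baseline:
        return "approved"
    if tool.get("prohibited", False):  # explicit marker, hypothetical field
        return "explicit-unapproved"
    return "approval-unknown"
```

The point of the sketch is that "not approved" is never inferred from absence of evidence; absence of evidence lands in `approval-unknown`, which is exactly the bucket the ratio measures.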

In this subset, there were no explicit-unapproved markers at all. Almost the entire unresolved side of the ratio is approval-unknown. The problem is not that public repositories loudly admit to using prohibited tools. The problem is that they often fail to expose the evidence needed to prove which tools are approved and within what scope.

That is why the ratio is better understood as a proof ratio than as a danger ratio. It tells leadership how much visible AI use can be defended with a machine-readable approval record versus how much still sits in the unresolved bucket.

That makes it useful as an operating metric, not a headline. A good program should report whether the unresolved side is shrinking, where approval proof is brittle, and which workflow surfaces still fail audit-grade reconstruction.
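Turning the counts into an operating metric is simple arithmetic; a minimal sketch using the counts from the Grounding section (the function names are illustrative, not from the report):

```python
def proof_ratio(not_baseline_approved: int, approved: int) -> float:
    """Ratio of unresolved approval state to provable approvals."""
    return not_baseline_approved / approved

def unresolved_share(not_baseline_approved: int, approved: int) -> float:
    """Fraction of visible tools whose approval posture cannot be proven."""
    return not_baseline_approved / (not_baseline_approved + approved)

ratio = proof_ratio(715, 65)        # 11.0 for this run
share = unresolved_share(715, 65)   # ~0.917: over nine in ten tools unresolved
```

Tracking `share` run over run is what makes this an operating metric: the question for leadership is whether it trends down, not whether it is zero.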

Why unknown approval state is expensive

From a CISO perspective, unresolved approval is not a cosmetic issue. If the organization cannot produce machine-readable proof of approved AI usage across delivery-adjacent repositories, then every downstream conversation gets weaker: regulatory attestations, internal audit, board reporting, incident response, and policy exception handling.

This is especially important because public AI adoption often happens faster than governance systems mature. The ratio is telling leadership where visibility breaks down under that speed.

Unknown approval state also pushes cost into the worst places: audit exceptions, manual follow-up, policy disputes, and executive reviews run under time pressure. A ratio like this is valuable because it reveals that systemic weakness before it shows up as a crisis.

What good approval normalization looks like

Mature approval is not just a list on an internal wiki. It has states, scope, inheritance, and reviewability. A CISO should expect to know at least four things: whether a tool is approved, what that approval covers, what exceptions exist, and where the evidence lives when the question comes back six months later.
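A machine-readable approval record that answers those four questions might look like the following. This is a hedged sketch of the shape such a record could take, with hypothetical field names and an invented example tool; it is not a published schema.

```python
from dataclasses import dataclass, field

@dataclass
class ApprovalRecord:
    tool: str                   # what is approved
    state: str                  # "approved", "exception", or "revoked"
    scope: str                  # what the approval covers (repos, CI, IDE)
    exceptions: list = field(default_factory=list)  # known carve-outs
    evidence_uri: str = ""      # where the evidence lives six months later
    owner: str = ""             # who answers when the question comes back

# Hypothetical example record
record = ApprovalRecord(
    tool="example-assistant",
    state="approved",
    scope="delivery repos and CI",
    evidence_uri="approvals/example-assistant.json",
    owner="appsec-governance",
)
```

Inheritance and reviewability come from treating these records as versioned data rather than wiki prose: a repo can inherit an org-level record, and an auditor can diff what changed and when.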

This is why the ratio is strategically useful. It creates a defensible numerator and denominator for a governance program that otherwise stays qualitative. You can ask whether unresolved state is shrinking, whether new approvals are machine-readable, and whether one control model applies across repos, CI, and developer tooling.

A practical maturity target is not "everything approved." It is "every approval decision leaves reusable evidence with clear scope and owner."

How to use the ratio correctly

Use it as a pressure test for governance quality, not as a sensational risk score.

A good program will shrink the unresolved side of the ratio over time. That is a more honest maturity target than demanding perfect elimination of every unrecognized signal on day one.

The leadership lesson

Many AI governance conversations fail because they jump too quickly from discovery to judgment. The report's ratio is useful precisely because it resists that move. It shows the gap between visible AI use and provable approval posture.

For CISOs, that is the right place to focus first. If the organization cannot prove what is approved, every later assurance claim rests on weaker ground.

This is a memory problem as much as a policy problem. Strong organizations preserve the reasoning behind approval decisions long after the original conversation. Weak ones re-litigate approvals because evidence never traveled with the workflow.