Independent research and operating notes on AI Software Delivery Control.
Field Note / Security Leadership
The Question Security Will Be Asked About AI-Assisted Engineering
The board question will not be "did employees use AI?" for long. The harder question is already forming: which workflow acted, with what authority, who approved it, and what proof exists?
Last updated: May 9, 2026
The question behind the policy
Most AI usage policies start with allowed tools, data-handling rules, and employee guidance. Those are useful, but they are not enough once AI-assisted engineering enters software delivery.
When a workflow can open a PR, change a workflow file, inherit a credential, call an engineering tool, or trigger CI/CD, security will eventually be asked to explain the action path.
Which workflow acted, with what authority, who approved it, and what proof remains?
Why this is a leadership question
This is not only an AppSec implementation detail. It affects risk ownership, auditability, customer trust, and executive reporting. Leadership needs a defensible answer that does not depend on memory, screenshots, or a heroic reconstruction after something goes wrong.
The answer should be available for normal review, not only incident response. If a privileged AI-assisted action happened last week, the team should be able to reconstruct the actor, owner, credential, action, target, approval, validation, and outcome.
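As a minimal sketch of what that reconstruction could look like in practice, the fields named above can be captured as a structured evidence record. All field and value names here are illustrative assumptions, not any specific product's schema:

```python
from dataclasses import dataclass, asdict

@dataclass(frozen=True)
class ActionEvidence:
    """One AI-assisted action, reconstructed for review (illustrative fields)."""
    actor: str       # workflow or agent that acted
    owner: str       # accountable team
    credential: str  # identity or token the action ran under
    action: str      # what was done
    target: str      # repo, pipeline, or system affected
    approval: str    # who or what approved it
    validation: str  # checks that ran (tests, policy verdict)
    outcome: str     # merged, blocked, rolled back, etc.

# Hypothetical record for a privileged AI-assisted action last week.
record = ActionEvidence(
    actor="release-notes-bot",
    owner="dev-productivity",
    credential="svc-relnotes-token",
    action="open_pr",
    target="org/app-repo",
    approval="reviewer:jsmith",
    validation="ci-tests:pass, policy:allow",
    outcome="merged",
)
print(asdict(record))
```

The point of the structure is that every field is filled at action time, so normal review never depends on after-the-fact reconstruction.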
What security should ask before rollout
- Where are AI-assisted engineering workflows being used?
- Which workflows can write code, change CI/CD, call tools, or reach release paths?
- Which identities, credentials, tokens, or service accounts do they inherit?
- Which actions are allowed automatically?
- Which actions require approval before execution?
- Which actions should be blocked regardless of prompt or user intent?
- What evidence remains after the action?
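The last three questions describe a three-tier decision: allowed automatically, approval-required, or blocked outright. A minimal sketch of that tiering, with hypothetical action names and a deliberately conservative default:

```python
# Illustrative three-tier action policy. Action names and tier contents
# are hypothetical examples, not a standard taxonomy.
POLICY = {
    "allow":   {"read_issue", "run_tests", "open_pr"},
    "approve": {"edit_workflow_file", "publish_package", "trigger_release"},
    "block":   {"delete_branch_protection", "rotate_org_secrets"},
}

def decide(action: str) -> str:
    """Return the policy tier for an action; unknown actions default to review."""
    for tier, actions in POLICY.items():
        if action in actions:
            return tier
    return "approve"  # fail closed toward human review, not silent execution

print(decide("run_tests"))        # allow
print(decide("publish_package"))  # approve
print(decide("not_in_policy"))    # approve (default)
```

Defaulting unknown actions to approval rather than execution matters: a prompt cannot talk the workflow into an action the policy never anticipated.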
What a good answer looks like
A good answer is specific. It names the workflow, owner, repo, tool, credential, target system, approval rule, and proof trail. It does not stop at "we approved the AI tool" or "the PR was reviewed."
- Workflow: AI-assisted release-note PR
- Owner: Developer productivity
- Authority: branch write, issue tracker read, CI test run
- Approval-required: workflow-file change, package publish, release job
- Proof: PR, workflow run, credential identity, reviewer, policy verdict, outcome
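A record like this becomes defensible when it lives in reviewable configuration rather than in anyone's memory. A hedged sketch of the release-note example as a machine-readable, per-workflow policy (names are illustrative, not from any real system):

```python
# Hypothetical machine-readable form of the release-note workflow record.
WORKFLOW = {
    "name": "ai-release-note-pr",
    "owner": "developer-productivity",
    "authority": {"branch_write", "issue_read", "ci_test_run"},
    "approval_required": {"workflow_file_change", "package_publish", "release_job"},
}

def gate(action: str) -> str:
    """Decide how this workflow may perform an action (illustrative logic)."""
    if action in WORKFLOW["authority"]:
        return "allow"
    if action in WORKFLOW["approval_required"]:
        return "require_approval"
    return "deny"  # anything not explicitly granted is denied

print(gate("branch_write"))     # allow
print(gate("package_publish"))  # require_approval
print(gate("rotate_secret"))    # deny
```

Because the policy is scoped per workflow, security can grant a useful release-note bot branch write without implying that any AI-assisted workflow may publish packages.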
This level of specificity changes the internal conversation. Security can approve useful workflows without pretending that every AI-assisted action has the same risk.