Independent research and operating notes on AI Software Delivery Control.
Field Note / SDLC Risk
Why AI Coding Tools Change the SDLC Risk Model
The first AI coding risk most teams discuss is generated code quality. That matters. It is also too narrow. The larger SDLC shift starts when AI-assisted work can enter delivery paths that change running systems.
The risk model moves from "did the model write safe code?" to "which workflow acted, through which credential, against which delivery surface, under which approval rule, with what proof?"
Last updated: May 9, 2026
The old model
Traditional SDLC risk programs are good at reviewing code, scanning dependencies, finding exposed secrets, checking infrastructure templates, and enforcing branch or release controls. Those controls assume a fairly familiar path: a person proposes a change, a reviewer inspects it, automation runs, and a release path decides whether the change moves forward.
AI coding tools put pressure on that model because output volume rises faster than review capacity. More importantly, the human review step may no longer be the only place where authority enters the system.
The new model
A coding assistant that suggests a function is still mostly an input to a human workflow. A coding agent or automation that can open a PR, edit workflow files, call tools, run scripts, inherit credentials, or trigger CI/CD is part of the delivery system.
That changes the review question. The team still needs code quality, vulnerability detection, and tests. But it also needs a map of the delivery authority attached to the workflow.
workflow -> repo/PR -> tool/script -> credential -> action -> target -> approval/proof
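One way to make this chain concrete is to write it down as one record per action path. This is a minimal sketch; the record type, field names, and example values are illustrative, not a real schema or tool.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ActionPath:
    # One link per hop in the chain above. All values here are hypothetical.
    workflow: str    # which AI-assisted workflow acted
    surface: str     # repo or PR it touched
    tool: str        # tool or script it invoked
    credential: str  # credential the step inherited
    action: str      # what it did
    target: str      # delivery surface affected
    approval: str    # "allowed" | "approval-required" | "blocked"
    proof: str       # where the surviving evidence lives

path = ActionPath(
    workflow="pr-triage-agent",
    surface="repo:app, PR",
    tool="gh pr edit",
    credential="GITHUB_TOKEN (write)",
    action="edit-workflow-file",
    target=".github/workflows/release.yml",
    approval="approval-required",
    proof="CI audit log for the run",
)
```

A table of these records, even a short one, is the "map of delivery authority" in miniature: it answers which workflow acted, through which credential, against which surface, under which rule, with what proof.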
Delivery authority
Delivery authority is the ability to affect the software delivery path. It can appear in ordinary places: a GitHub Actions file, a package script, a deploy key, an MCP tool declaration, a reusable workflow, a CI secret, a cloud role, or a release job.
The agent does not need direct production access to create production-adjacent risk. Changing a workflow file, modifying a package manifest, altering infrastructure code, or using a credential in CI can be enough to change the risk profile.
What to map first
Start with workflows where AI-assisted output becomes executable or privileged. The first review should be small and concrete:
- Which AI-assisted workflows can open or update PRs?
- Which workflows can change CI/CD files, package scripts, or release configuration?
- Which credentials, tokens, OAuth grants, service accounts, or CI secrets can they inherit?
- Which tools, MCP servers, APIs, shells, or cloud paths can they call?
- Which actions are allowed, approval-required, or blocked?
- What proof survives after the workflow acts?
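The allowed / approval-required / blocked question in particular benefits from being written down rather than implied. A minimal sketch of a default-deny policy table follows; the action names and policy choices are hypothetical examples, not a recommendation for any specific tool.

```python
ALLOWED = "allowed"
APPROVAL = "approval-required"
BLOCKED = "blocked"

# Example policy for an AI-assisted workflow. Action names are hypothetical.
POLICY = {
    "open-pr": ALLOWED,
    "update-pr": ALLOWED,
    "edit-ci-config": APPROVAL,
    "edit-package-scripts": APPROVAL,
    "edit-release-config": APPROVAL,
    "use-deploy-credential": BLOCKED,
}

def decide(action: str) -> str:
    # Deny by default: any action the policy does not name is blocked.
    return POLICY.get(action, BLOCKED)
```

The deny-by-default lookup matters more than the specific entries: it forces every new action path the workflow discovers to show up as a blocked attempt, which is itself a form of proof.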
Takeaway
AI coding tools do not only add a new code-generation surface. They add new ways for work to enter the SDLC with inherited authority. The practical response is not to review every prompt. It is to map the action paths where AI-assisted work can change delivery.