CAISI Blog / Collections

Research Notes and Series

The CAISI blog is organized as collections with clear jobs. The framework series explains the operating model. The executive adoption series covers the leadership and platform choices teams have to make before AI can scale responsibly. The case-study series grounds agent-control lessons in a single runtime evaluation. The report series interprets measured research. The benchmark series defines evaluation language buyers can reuse. The implementation series stays close to open-source systems without turning into product pages.

Collections

Framework series

AI Engineering Operating Notes

A 10-part series on repo contracts, blueprints, orchestration, isolation, evaluation, proof, and maturity.

Executive adoption series

From AI Pilots to Governed Adoption

A 5-part series on platform standards, sanctioned pathways, staged operating models, and the approval discipline required before AI can scale across the enterprise.

Case-study series

What OpenClaw Taught Us About Agent Control

A 4-part case-study series grounded in one runtime evaluation, focused on stop behavior, discovery limits, boundary enforcement, and scope discipline.

Report series

What the Sprawl Report Means for Security and Platform Leaders

A 4-part series grounded in the AI Tool and Agent Sprawl report, focused on approval opacity, evidence posture, deployability, and how leaders should interpret public AI adoption data.

Benchmark series

How to Evaluate Agentic Control

A 5-part series on action-risk scenarios, control efficacy, proof completeness, and pilot evaluation language for serious buyers.

Implementation series

Invisible Write Paths

A 4-part series using Wrkr as implementation context for AI tooling discovery across local machines, repos, MCP configs, CI workflows, and audit-ready evidence.

Implementation series

Policy Before Action

A 4-part series using Gait as implementation context for tool-boundary enforcement, YAML policy, signed traces, and CI regressions.

Start by audience

AppSec

Start with control that fails or holds

The fastest route pairs a measured case study with either a discovery collection or a benchmark collection. That combination gives you runtime evidence plus a sharper way to compare control quality.

CISO

Start with approval and proof posture

The sprawl collection is the cleanest entry point if you need a governance-first reading on visible adoption, approval opacity, and evidence quality. The governed adoption collection is the follow-on if you need a leadership operating model for saying yes without losing control.

Platform

Start with the operating model

The framework series explains the repo, workflow, and proof patterns. The governed adoption collection is the follow-on if you need the leadership layer around standards, sanctioned paths, and staged rollout.

Reference pages

Field guide

AI Agent Governance

A practical entry point to CAISI's main concepts, control layers, and role-based starting paths.

Glossary

AI Agent Governance Glossary

Plain-language definitions for write paths, execution boundaries, proof packets, approval mediation, and related terms.

Author

David Ahmann

Profile page for the CAISI author behind the operating notes, benchmark language, and implementation essays.

Research and blog together

Primary artifacts

Open the research hub

Use the research hub when you want the measured report, the artifact links, and the exact scope of the claim before reading the interpretation layer.

Most direct bridge

Read the framework series

The main 10-part series is the shortest path from a measured result to the operating model that could make that result governable in practice.