Published report
OpenClaw 2026
Governed vs ungoverned AI agent behavior in a controlled 24-hour run, with stop behavior, approval mediation, and evidence quality measured side by side.
CAISI Research
Independent research and operating notes on AI agent governance.
This is the canonical entry point for CAISI report pages. Use it when you want the measured artifact, the methodology, and the exact scope of a study before moving to interpretation or implementation guidance.
Published report
A public GitHub subset report focused on AI tool visibility, approval opacity, evidence posture, and governance readiness.
Live build archive
The earlier live-build page for the flagship sprawl study remains available as background on the study's build phase and publication posture.
Interpretation layer
The blog turns these reports into operating notes, role-specific lessons, and implementation guidance without changing the evidence base.
AppSec
OpenClaw is the fastest path if you want one measured example of where stop behavior, approval mediation, and evidence quality break down or hold up.
CISO
The sprawl report is the better entry point if you need to reason about visible adoption, approval opacity, and proof quality across public GitHub artifacts.
Platform
Read a report first, then move into the Operating Notes collection to see the underlying repo, workflow, and proof patterns.