CAISI Blog / OpenClaw Case Study Series
Independent research and operating notes on AI agent governance.
This four-part series stays tightly anchored to the OpenClaw governed evaluation in this repo. The goal is not to retell the report four times. The goal is to extract the strongest lessons from one artifact-backed run: what stop means, where discovery ends, why approval has to exist at the boundary, and how to speak precisely about scope and generalization.
The broader CAISI operating-model series explains how governed AI engineering should work in general. This collection does something narrower: it shows what one controlled case study actually demonstrated and what teams should learn from it without overstating its claims.
That distinction matters. OpenClaw gives us a vivid, measured example, not a universal census of all agent systems. We keep the lessons sharp by staying close to the run ID, the pinned source snapshot, and the published artifacts.
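To make "pinned" concrete, the record we keep returning to has roughly the following shape. This is a hypothetical Python sketch: the field names are illustrative and the placeholder values are not from the actual run.

    # Hypothetical manifest shape for anchoring claims to a single run.
    # Field names are illustrative; placeholders are not real values.
    run_manifest = {
        "run_id": "<run-id-from-the-report>",
        "source_snapshot": "<pinned-commit-or-archive-hash>",
        "artifacts": ["<transcript>", "<evidence-log>", "<summary>"],
    }

Every claim in the posts below should be traceable to an entry like this, not to a general impression of how agents behave.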
OpenClaw Post 1
Runtime control
Stop Is Not a Safety Control If the Runtime Can Ignore It
The sharpest lesson from the report: a stop signal that the agent can keep working through is not a real control.
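A minimal sketch of that lesson, assuming a simple loop-based runtime. Everything here is hypothetical and is not the OpenClaw implementation: a stop flag that is only surfaced to the model is advisory, while one checked by the dispatch loop before every tool call is an actual control.

    import threading

    # Hypothetical sketch, not the OpenClaw runtime: stop is enforced by
    # the loop that dispatches tool calls, so the agent cannot work
    # through it no matter what the model decides to do.
    stop_event = threading.Event()

    def run_agent(plan_next_action, execute_tool):
        """Drive the agent until it finishes or an operator signals stop."""
        while not stop_event.is_set():
            action = plan_next_action()      # model proposes the next step
            if action is None:               # agent reports completion
                return "done"
            if stop_event.is_set():          # re-check: stop can land mid-step
                break
            execute_tool(action)             # only the runtime executes tools
        return "stopped"                     # no further tool calls are possible

The design question is which component owns the check: if only the model sees the flag, stop is a request; if the dispatcher enforces it, stop is a guarantee.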
OpenClaw Post 2
Discovery limits
Why pre-test inventory matters, and why it cannot, by itself, tell you what an agent will do at runtime.
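One way to picture that gap, as a hedged sketch with hypothetical names: the inventory is a static set of declared capabilities, the trace is evidence gathered while the agent runs, and only the trace answers what the agent actually did.

    # Hypothetical sketch, not the evaluation harness: a pre-test
    # inventory enumerates what the agent could call; a runtime trace
    # records what it actually called.
    declared_tools = {"read_file", "write_file", "http_get", "shell"}

    runtime_trace = []  # appended by the runtime on every dispatched call

    def traced_execute(tool_name, args, execute):
        runtime_trace.append((tool_name, args))  # evidence of behavior
        return execute(tool_name, args)

    def coverage_report():
        used = {name for name, _ in runtime_trace}
        return {"declared": sorted(declared_tools),
                "used": sorted(used),
                "never_exercised": sorted(declared_tools - used)}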
OpenClaw Post 3
Boundary enforcement
Why the governed lane matters: non-executable outcomes and evidence coverage are both part of the trust story.
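A hedged sketch of what approval at the boundary can look like; the names are hypothetical, and this is not the report's harness. Side-effecting actions stay non-executable proposals until someone approves them, and every decision lands in an evidence log.

    # Hypothetical sketch of a governed boundary: risky actions remain
    # non-executable proposals until approved, and every decision is
    # appended to an evidence log for later audit.
    evidence_log = []

    def boundary_gate(action, is_side_effecting, approve, execute):
        """Execute safe actions; hold risky ones for explicit approval."""
        if not is_side_effecting(action):
            return execute(action)
        approved = bool(approve(action))          # human or policy decision
        evidence_log.append((action, approved))   # every proposal is logged
        if approved:
            return execute(action)
        return None                               # stays non-executable

Evidence coverage is the second half of the trust story: the gate only builds trust if the log provably sees every proposal, not just the approved ones.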
OpenClaw Post 4
Scope and transfer
A precise closing post on case-study boundaries, portability, and the practical adoption path for teams that want to learn from the report without overclaiming.