
The Runtime Governance Gap

By Waydell D. Carvalho  ·  Cinderpoint  ·  First published April 2026
Definition
The Runtime Governance Gap is the structural mismatch between AI regulation, which is built around discrete certification events, and adaptive AI systems, which keep changing after those events are over. The system being governed at runtime is not the system that was certified. The certificate proves nothing about what the system is doing now. The gap between paper compliance and operational reality is where accountability fails.

The assumption regulation makes

Every major AI law in force today shares one quiet assumption. The EU AI Act, US sectoral frameworks, the UK's principle-based approach, China's regulatory regime: all of them treat AI as something you certify once, deploy, and trust to behave. Conformity assessments at deployment. Discrete approval events. Bounded accountability tied to whoever signed off on what the system was supposed to do.

That assumption made sense for static software. A program that runs the same way every time can be evaluated once and reasonably governed forever, as long as nobody ships an update. Regulators built decades of practice around that idea.

What adaptive AI actually does

Modern AI systems do not stay still after deployment. They retrain on new data, fine-tune for new contexts, modify their own parameters in response to feedback, and in some cases reorganize their internal logic. None of these changes pass through a regulator. Many are not even visible to the operator's compliance team. The system that gets reviewed in week one is not the system in production by week thirty.

The Runtime Governance Gap is what opens up between those two systems. The wider the post-deployment changes, the wider the gap.

Where the gap shows up

Failure 1
No continuous oversight

Regulation evaluates at discrete checkpoints. Adaptive systems change between checkpoints. The regulator never sees the version of the system that is actually operating. By the time the next review arrives, the system has changed again.
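The checkpoint blind spot can be made concrete with a toy simulation (everything here is illustrative, not from the article): a system retrains every few weeks, while the regulator only looks at deployment and at a later review. Every intermediate version operates unobserved.

```python
from dataclasses import dataclass, field

@dataclass
class AdaptiveSystem:
    """Toy stand-in for a deployed system that changes itself over time."""
    version: int = 0
    history: list = field(default_factory=list)

    def self_modify(self, reason: str) -> None:
        self.version += 1
        self.history.append((self.version, reason))

system = AdaptiveSystem()
reviewed_versions = []

for week in range(1, 31):
    if week % 5 == 0:                  # periodic retraining in production
        system.self_modify(f"retrain at week {week}")
    if week in (1, 30):                # discrete regulatory checkpoints
        reviewed_versions.append(system.version)

# Every version between the two checkpoints operated with no oversight.
unreviewed = [v for v, _ in system.history if v not in reviewed_versions]
print(reviewed_versions)   # [0, 6]
print(unreviewed)          # [1, 2, 3, 4, 5]
```

The regulator sees version 0 at deployment and version 6 at the next review; versions 1 through 5 are exactly the gap the section describes.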

Failure 2
Accountability diffusion

When something goes wrong, who is responsible? The original developer? The operator? The fine-tuner? Self-modification fractures the chain. Operators can argue that the system that caused harm is not the system they certified. The certified version is gone.

Failure 3
Evidentiary instability

Investigating an incident requires reconstructing what the system was doing when it caused harm. If parameters, weights, or workflows have changed since, the evidence is unrecoverable. Forensic auditing of self-modifying systems is fundamentally harder than auditing static ones.
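One partial mitigation, sketched below under assumptions of my own (the `snapshot` function and log format are not from the article), is to fingerprint system state at every change event. Even if old weights are overwritten, an investigation can then establish that and when the state changed.

```python
import hashlib
import json
import time

def snapshot(model_params: dict, event: str) -> dict:
    """Record an immutable fingerprint of model state at a change event,
    so the version involved in an incident can be identified later."""
    blob = json.dumps(model_params, sort_keys=True).encode()
    return {
        "event": event,
        "sha256": hashlib.sha256(blob).hexdigest(),
        "timestamp": time.time(),
    }

audit_log = []
params = {"threshold": 0.7, "weights_ref": "ckpt-001"}
audit_log.append(snapshot(params, "deployment"))

params["threshold"] = 0.55             # silent post-deployment change
audit_log.append(snapshot(params, "auto-tune"))

# The differing hashes prove the running system diverged from the
# deployed one, even after the original state is unrecoverable.
assert audit_log[0]["sha256"] != audit_log[1]["sha256"]
```

This does not restore the lost evidence, but it converts "the evidence is unrecoverable" into a documented record of divergence.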

Failure 4
Compliance drift

A system can pass a regulatory review and then drift, slowly, into a state that would not pass the same review today. The certificate stays valid on paper. The system stops being the thing the certificate described. Drift is silent and cumulative.
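Drift of this kind is measurable if the original review left behind a fixed probe set. The sketch below is a minimal illustration, with made-up scores and a hypothetical tolerance, of how an operator might compare current behavior against the certified baseline.

```python
# Behavior on a fixed probe set: at certification time vs. today.
certified = [0.92, 0.88, 0.95, 0.90]   # hypothetical scores at review
current   = [0.91, 0.79, 0.96, 0.73]   # scores after months of adaptation

def max_drift(baseline: list, observed: list) -> float:
    """Largest per-probe deviation from the certified baseline."""
    return max(abs(b - o) for b, o in zip(baseline, observed))

TOLERANCE = 0.10   # hypothetical threshold from the original review

drift = max_drift(certified, current)
print(f"max drift = {drift:.2f}")      # max drift = 0.17
print("re-review required" if drift > TOLERANCE else "within tolerance")
```

The point of the sketch is the shape of the check, not the numbers: drift only becomes visible if something is still comparing the running system to the snapshot that was certified.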

Why the gap cannot be closed by better certification

The temptation is to fix the gap with stricter approval standards. More documentation. Tighter conformity assessments. Larger pre-deployment evaluations. None of that addresses the actual problem. The problem is not that one-time certification is too lax. The problem is that one-time certification, no matter how rigorous, is the wrong shape for systems that keep changing.

Closing the gap requires governance with the same temporal structure as the system it governs. Continuous, not discrete. Adaptive, not fixed. Capable of tracking transformation in real time rather than re-anchoring to a snapshot that is already obsolete.

What that looks like in practice

The Runtime Governance Gap is the diagnosis. The framework that addresses it is CARG, an architecture for runtime AI oversight whose components include continuous monitoring obligations, persistent liability, capability-tiered classification, harm-graduated response, and dynamic verification cycles. CARG operationalizes adaptive oversight. The Runtime Governance Gap is what CARG was built to close.
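The CARG components named above could be tracked as a machine-readable checklist; the enum and coverage map below are my own illustrative representation, not part of CARG as published.

```python
from enum import Enum

class CARGComponent(Enum):
    """CARG components as named in this section (representation assumed)."""
    CONTINUOUS_MONITORING = "continuous monitoring obligations"
    PERSISTENT_LIABILITY = "persistent liability"
    CAPABILITY_TIERED_CLASSIFICATION = "capability-tiered classification"
    HARM_GRADUATED_RESPONSE = "harm-graduated response"
    DYNAMIC_VERIFICATION = "dynamic verification cycles"

# An operator's compliance tooling might track which components
# have been operationalized for a given deployed system.
coverage = {c: False for c in CARGComponent}
coverage[CARGComponent.CONTINUOUS_MONITORING] = True   # example

missing = [c.value for c, done in coverage.items() if not done]
print(f"{len(missing)} components not yet operationalized")
```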

Why this name

"Runtime" because the gap exists in the period when a system is actually running, not in the design or testing phases. "Governance" because the gap is regulatory and institutional, not technical. "Gap" because the issue is not that governance is wrong, it is that there is governance for one thing and reality is now a different thing, and nothing in between.

Cite this concept
Carvalho, W. D. (2026). The Runtime Governance Gap: Governing Self-Modifying Artificial Intelligence Through Adaptive Oversight. Cinderpoint. https://cinderpoint.com/ai/runtime-governance-gap/
About the author
Waydell D. Carvalho

Founder of Cinderpoint Systems LLC. M.S. Artificial Intelligence (MSAI), M.S. Management (MSM). Researches how systems fail under speed, opacity, and scale.

More by this author: SSRN · Zenodo