Every major AI law in force today shares one quiet assumption. The EU AI Act, US sectoral frameworks, the UK's principle-based approach, China's regulatory regime: all of them treat AI as something you certify once, deploy, and trust to behave. Conformity assessments at deployment. Discrete approval events. Bounded accountability tied to whoever signed off on what the system was supposed to do.
That assumption made sense for static software. A program that runs the same way every time can be evaluated once and reasonably governed forever, as long as nobody ships an update. Regulators built decades of practice around that idea.
Modern AI systems do not stay still after deployment. They retrain on new data, fine-tune for new contexts, modify their own parameters in response to feedback, and in some cases reorganize their internal logic. None of these changes pass through a regulator. Many are not even visible to the operator's compliance team. The system that gets reviewed in week one is not the system in production by week thirty.
The Runtime Governance Gap is what opens up between those two systems. The more the system changes after deployment, the wider the gap grows.
Regulation evaluates at discrete checkpoints. Adaptive systems change between checkpoints. The regulator never sees the version of the system that is actually operating. By the time the next review arrives, the system has changed again.
When something goes wrong, who is responsible? The original developer? The operator? The fine-tuner? Self-modification fractures the chain. Operators can argue that the system that caused harm is no longer the one that was certified. The certified version is gone.
Investigating an incident requires reconstructing what the system was doing when it caused harm. If parameters, weights, or workflows have changed since, the evidence is unrecoverable. Forensic auditing of self-modifying systems is fundamentally harder than auditing static ones.
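To make the forensic problem concrete, here is a minimal sketch of one countermeasure: an append-only, hash-chained log of model-state fingerprints, so an investigator can at least identify which version was live at the moment of harm. Everything below is illustrative; none of these names refer to an existing API.

```python
# Minimal sketch: hash-chained audit log of deployed model versions.
# All names here are illustrative, not a real library.
import hashlib
import json
import time


def fingerprint(weights_bytes: bytes, config: dict) -> str:
    """Hash the weights and config that define a deployed version."""
    h = hashlib.sha256()
    h.update(weights_bytes)
    h.update(json.dumps(config, sort_keys=True).encode())
    return h.hexdigest()


class AuditLog:
    """Append-only log; each entry commits to the previous entry's
    hash, so retroactive edits are detectable."""

    def __init__(self):
        self.entries = []

    def record(self, model_fingerprint: str, note: str = "") -> dict:
        prev_hash = self.entries[-1]["entry_hash"] if self.entries else "genesis"
        body = {
            "timestamp": time.time(),
            "model_fingerprint": model_fingerprint,
            "note": note,
            "prev_hash": prev_hash,
        }
        body["entry_hash"] = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append(body)
        return body

    def version_at(self, t: float) -> str | None:
        """Which fingerprint was live at time t? (Incident forensics.)"""
        live = None
        for e in self.entries:
            if e["timestamp"] <= t:
                live = e["model_fingerprint"]
        return live


# Usage: record each deployment or self-modification event.
log = AuditLog()
log.record(fingerprint(b"<weights v1>", {"lr": 1e-4}), note="initial deploy")
log.record(fingerprint(b"<weights v2>", {"lr": 1e-4}), note="nightly retrain")
print(log.version_at(time.time()))  # fingerprint live right now
```

The chaining is the point: because each entry commits to the hash of the one before it, an operator cannot quietly rewrite history after an incident.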
A system can pass a regulatory review and then drift, slowly, into a state that would not pass the same review today. The certificate stays valid on paper. The system stops being the thing the certificate described. Drift is silent and cumulative.
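Drift of this kind is detectable, but only if someone is measuring. A minimal sketch, assuming the system emits discrete decisions: compare the live output distribution against a baseline captured at certification time. The metric (Jensen-Shannon distance) and the threshold are illustrative choices, not anything a regulator currently prescribes.

```python
# Minimal sketch of silent-drift detection against a certified baseline.
from collections import Counter
import math


def js_distance(p: dict, q: dict) -> float:
    """Jensen-Shannon distance between two discrete distributions."""
    keys = set(p) | set(q)
    m = {k: 0.5 * (p.get(k, 0.0) + q.get(k, 0.0)) for k in keys}

    def kl(a, b):
        return sum(a[k] * math.log2(a[k] / b[k])
                   for k in keys if a.get(k, 0.0) > 0)

    return math.sqrt(0.5 * kl(p, m) + 0.5 * kl(q, m))


def to_dist(labels):
    counts = Counter(labels)
    total = sum(counts.values())
    return {k: v / total for k, v in counts.items()}


# Baseline from certification week vs. a window of recent decisions.
certified = to_dist(["approve"] * 80 + ["deny"] * 20)
recent = to_dist(["approve"] * 55 + ["deny"] * 45)

DRIFT_THRESHOLD = 0.1  # illustrative; in practice set per risk tier
if js_distance(certified, recent) > DRIFT_THRESHOLD:
    print("Drift alert: system no longer matches its certified profile")
```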
The temptation is to fix the gap with stricter approval standards. More documentation. Tighter conformity assessments. More extensive pre-deployment evaluations. None of that addresses the actual problem. The problem is not that one-time certification is too lax. The problem is that one-time certification, no matter how rigorous, is the wrong shape for systems that keep changing.
Closing the gap requires governance with the same temporal structure as the system it governs. Continuous, not discrete. Adaptive, not fixed. Capable of tracking transformation in real time rather than re-anchoring to a snapshot that is already obsolete.
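As a rough sketch of what that temporal structure could look like, the loop below replaces a one-time certificate with a recurring verification cycle. The hooks (get_live_fingerprint, measure_drift, escalate) are hypothetical stand-ins for operator-specific machinery.

```python
# A rough sketch: governance as a recurring cycle rather than a
# one-time approval. All hooks below are hypothetical stand-ins.
import time

ATTESTED_FINGERPRINT = "sha256:..."  # recorded at the last review
DRIFT_LIMIT = 0.1                    # illustrative threshold
CYCLE_SECONDS = 3600                 # re-verify hourly, not yearly


def get_live_fingerprint() -> str:
    # Stand-in: hash of the weights/config actually serving traffic.
    return ATTESTED_FINGERPRINT


def measure_drift() -> float:
    # Stand-in: distance between live behavior and certified baseline.
    return 0.0


def escalate(reason: str) -> None:
    # Stand-in: notify compliance, freeze updates, trigger re-review.
    print(f"ESCALATION: {reason}")


def verification_cycle() -> None:
    while True:
        # Check 1: is the version in production the version attested?
        if get_live_fingerprint() != ATTESTED_FINGERPRINT:
            escalate("unattested modification in production")
        # Check 2: is behavior still inside the certified envelope?
        if measure_drift() > DRIFT_LIMIT:
            escalate("behavioral drift beyond certified envelope")
        time.sleep(CYCLE_SECONDS)
```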
The Runtime Governance Gap is the diagnosis. The framework that addresses it is CARG, an architecture for runtime AI oversight built on continuous monitoring obligations, persistent liability, capability-tiered classification, harm-graduated response, and dynamic verification cycles. CARG operationalizes adaptive oversight. The Runtime Governance Gap is what CARG was built to close.
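To give those component names some shape, here is one illustrative way capability tiers, verification cycles, and graduated responses could fit together. This is not the CARG specification; every tier, interval, and action below is an assumption made for the sake of example.

```python
# Illustrative only: one possible shape for capability-tiered
# classification with harm-graduated responses. Not CARG's spec.
from dataclasses import dataclass


@dataclass
class Tier:
    name: str
    verification_interval_hours: int   # dynamic verification cycle
    drift_limit: float                 # certified behavioral envelope
    on_breach: str                     # harm-graduated response


TIERS = [
    Tier("low-capability",    24 * 30, 0.20, "log and review at next cycle"),
    Tier("general-purpose",   24 * 7,  0.10, "notify operator, tighten cycle"),
    Tier("high-capability",   24,      0.05, "freeze self-modification"),
    Tier("frontier/adaptive", 1,       0.02, "suspend deployment, re-certify"),
]
```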
"Runtime" because the gap exists in the period when a system is actually running, not in the design or testing phases. "Governance" because the gap is regulatory and institutional, not technical. "Gap" because the issue is not that governance is wrong, it is that there is governance for one thing and reality is now a different thing, and nothing in between.
Founder of Cinderpoint Systems LLC. M.S. Artificial Intelligence (MSAI), M.S. Management (MSM). Researches how systems fail under speed, opacity, and scale.