SAFEMACHINE Module 1  ·  Review Architecture

SAFEARC

By Waydell D. Carvalho  ·  Cinderpoint  ·  First published January 2026
Definition
SAFEARC is the seven-pillar architecture that structures every governance review inside SAFEMACHINE. The acronym names the sequence: Scan, Align, Filter, Evaluate, Assign, Renew, Contain. Each pillar answers a specific question and produces a specific output. Together they convert informal judgment into a scored, auditable, revisable decision record.

Why seven pillars, in this order

Governance reviews fail in predictable ways. Teams skip scoping and try to assess a system whose boundaries no one has defined. They jump to risk scoring without first checking which regulations apply. They treat hard-stop conditions as a final filter instead of an early one, so the team has already invested weeks before discovering the system should never have been on the table. They produce a verdict but never name who is accountable for it.

SAFEARC fixes the order. Each pillar has a prerequisite relationship to the next. You cannot Align before you Scan, you cannot Evaluate before you Filter, and you cannot Assign before you have something to assign authority over. The sequence is part of the architecture.
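The prerequisite chain can be sketched in code. This is an illustrative sketch, not part of SAFEMACHINE itself: the pillar names come from the framework, but the gating function is an assumed mechanism for enforcing the order.

```python
from enum import IntEnum

class Pillar(IntEnum):
    """The seven SAFEARC pillars, in their required sequence."""
    SCAN = 1
    ALIGN = 2
    FILTER = 3
    EVALUATE = 4
    ASSIGN = 5
    RENEW = 6
    CONTAIN = 7

def can_start(pillar: Pillar, completed: set[Pillar]) -> bool:
    """A pillar may begin only after every earlier pillar is complete."""
    return all(p in completed for p in Pillar if p < pillar)
```

Under this gate, a team that tries to Evaluate with Filter still open is blocked, which is exactly the failure mode the ordering is meant to prevent.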

The seven pillars and what each one produces

Pillar 1 · Scan

What is this system, and where does it live?

Scan documents the system, its data flows, its users, and its deployment context. The output is a system registry entry and a dependency map. Without this, every later pillar is guessing at scope.
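A registry entry might be modeled as a simple record. The field names below are illustrative assumptions, not the official SAFEARC schema; the point is that the Scan output is structured data, not a prose memo.

```python
from dataclasses import dataclass, field

@dataclass
class SystemRegistryEntry:
    """Output of Scan: what the system is and where it lives."""
    name: str
    owner: str
    deployment_context: str                    # e.g. "internal decision support"
    data_flows: list[str] = field(default_factory=list)
    user_groups: list[str] = field(default_factory=list)
    dependencies: list[str] = field(default_factory=list)  # the dependency map

# A hypothetical entry for an internal screening tool.
entry = SystemRegistryEntry(
    name="resume-screener",
    owner="hr-platform-team",
    deployment_context="internal decision support",
    data_flows=["applicant PII -> scoring service"],
    user_groups=["recruiters"],
    dependencies=["third-party embedding API"],
)
```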

Pillar 2 · Align
Which rules apply, and where do we stand against them?

Align maps the system against regulation, standards, and internal policy. The output is a compliance profile that lists applicable frameworks, current conformance, and gaps to close.

Pillar 3 · Filter
Are there reasons this should not proceed at all?

Filter applies hard-stop conditions. If the system trips a hard-stop, the review ends. The output is a risk-tier assignment that either clears the system for scoring or ends the process with the cause documented.
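The Filter logic reduces to a short decision function. The hard-stop conditions below are hypothetical examples; the real list would come from the Filter pillar's own documentation.

```python
# Hypothetical hard-stop conditions, for illustration only.
HARD_STOPS = {
    "prohibited_use": "System falls in a prohibited-use category",
    "no_legal_basis": "No lawful basis for the data processing involved",
    "no_human_override": "No mechanism for human override of decisions",
}

def filter_pillar(tripped: set[str]) -> dict:
    """Apply hard-stops: either clear the system for scoring or end
    the review with the cause documented."""
    if tripped:
        return {
            "status": "terminated",
            "cause": [HARD_STOPS[code] for code in sorted(tripped)],
        }
    return {"status": "cleared_for_scoring"}
```

Note that the terminated branch still produces a record: the review ends, but the reason survives for audit.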

Pillar 4 · Evaluate
How risky is it, on what dimensions?

Evaluate scores the system across accuracy, fairness, security, privacy, transparency, and human oversight. The output is a scored evaluation record with reasoning attached to each dimension.
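The requirement that every dimension carries both a score and its reasoning can be enforced structurally. The 1–5 scale below is an assumption; the six dimensions are from the text.

```python
from dataclasses import dataclass

DIMENSIONS = ("accuracy", "fairness", "security", "privacy",
              "transparency", "human_oversight")

@dataclass
class DimensionScore:
    score: int        # assumed 1-5 scale; the actual rubric may differ
    reasoning: str    # every score must carry its justification

def make_evaluation(scores: dict[str, DimensionScore]) -> dict[str, DimensionScore]:
    """Reject an evaluation record that skips a dimension or a rationale."""
    missing = set(DIMENSIONS) - scores.keys()
    if missing:
        raise ValueError(f"unscored dimensions: {sorted(missing)}")
    for dim, s in scores.items():
        if not s.reasoning.strip():
            raise ValueError(f"{dim}: score without reasoning")
    return scores
```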

Pillar 5 · Assign
Who decides, and who is on the hook?

Assign translates the evaluation into a deployment decision and names the people accountable for it. The output is a responsibility matrix that survives staff turnover and audit questions.

Pillar 6 · Renew
When do we look at this again?

Renew sets the review cadence and the drift triggers that pull a system back into review off-schedule. The output is a monitoring log and a calendar of scheduled reassessments.
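The interaction between scheduled cadence and drift triggers can be stated precisely: a drift trigger pulls the next review forward, never pushes it back. This is a minimal sketch of that rule, with the cadence length as an assumed parameter.

```python
from datetime import date, timedelta

def next_review(last_review: date, cadence_days: int,
                drift_triggered: bool) -> date:
    """Next reassessment date: the scheduled one, pulled forward to
    today if a drift trigger fires off-schedule."""
    scheduled = last_review + timedelta(days=cadence_days)
    if drift_triggered:
        return min(scheduled, date.today())
    return scheduled
```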

Pillar 7 · Contain
What happens when something goes wrong?

Contain documents incident classification, escalation paths, and shutdown authority. The output is an incident response plan that exists before the incident, not after. The deeper protocol lives in the STABLE module.

Where the framework comes from

SAFEARC synthesizes 68 source documents covering the EU AI Act, the OECD AI Principles, the NIST AI Risk Management Framework, and ISO/IEC 42001. The contribution is not a new set of rules. It is an architecture that takes existing rules and converts them into a sequence a real review team can execute, score, and document.

Cite this framework
Carvalho, W. D. (2026). SAFEARC: A Seven-Pillar Operational Architecture for Socio-Technical AI Governance. Cinderpoint. https://cinderpoint.com/ai/safemachine/safearc/
About the author
Waydell D. Carvalho

Founder of Cinderpoint Systems LLC. M.S. Artificial Intelligence (MSAI), M.S. Management (MSM). Researches how systems fail under speed, opacity, and scale.
