Open Source AI Governance Runtime

SaiSpec

SaiSpec is a governance-first runtime for AI agents. It enforces permissions, accountability, human oversight, and auditability at the moment an agent attempts to take an action.

Why SaiSpec Exists

AI agents increasingly act on behalf of users — calling tools, modifying systems, and triggering irreversible changes. Most frameworks optimize for capability, not control.

No Sense of Impact

Reading data and deleting data are treated as equivalent actions. Systems lack escalation paths for high-risk decisions.

No Runtime Authority

Agents are trusted implicitly. Permissions are assumed, not enforced at execution time.

No Accountability

When something goes wrong, there is no clear record of why an action happened or who approved it.

What’s Live Today

SaiSpec is available as a Python library that wraps around any agent loop, adding governance without changing how your agent reasons.
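Conceptually, the wrapping looks like the sketch below. All names here (GovernedRunner, ToolCall, execute) are illustrative assumptions for this README, not SaiSpec's published API.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: GovernedRunner and ToolCall are invented names,
# not SaiSpec's actual API.

@dataclass
class ToolCall:
    name: str            # tool the agent wants to invoke
    required_perm: str   # permission the caller must hold

@dataclass
class GovernedRunner:
    perms: set
    trace: list = field(default_factory=list)

    def execute(self, call: ToolCall, fn, *args, **kwargs):
        # Enforce permissions at the moment of execution, not after failures.
        if call.required_perm not in self.perms:
            self.trace.append(("BLOCK", call.name, "missing permission"))
            return None
        self.trace.append(("OK", call.name, "allowed"))
        return fn(*args, **kwargs)
```

The agent keeps reasoning however it likes; only the call site changes, e.g. `runner.execute(ToolCall("search_orders", "finance_access"), search_orders)`.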

Decision Classes

Actions are explicitly classified by impact: informational, advisory, decisive, or irreversible.
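One plausible way to model the four impact classes (the enum name, values, and helper below are assumptions, not SaiSpec's actual types):

```python
from enum import IntEnum

# Hypothetical modeling of the four impact classes; ordering by integer
# value lets higher classes imply stricter handling.
class DecisionClass(IntEnum):
    INFORMATIONAL = 1   # read-only, no side effects
    ADVISORY = 2        # recommends, but does not act
    DECISIVE = 3        # changes state, but reversibly
    IRREVERSIBLE = 4    # cannot be undone once executed

def requires_human(cls: DecisionClass) -> bool:
    # Only irreversible actions demand explicit human approval.
    return cls is DecisionClass.IRREVERSIBLE
```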

Authority Enforcement

Role-based permissions are checked before tool execution, not after failures.
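"Checked before execution" can be sketched as a decorator that refuses to run the tool body at all when the caller's context lacks the permission. The decorator and the `ctx` dict are hypothetical helpers, not SaiSpec's API.

```python
import functools

# Hypothetical decorator: the tool body never runs if the permission
# check fails, so there is no partial side effect to clean up.
def require_perm(perm):
    def deco(fn):
        @functools.wraps(fn)
        def wrapped(ctx, *args, **kwargs):
            if perm not in ctx.get("perms", set()):
                raise PermissionError(f"{fn.__name__}: missing {perm}")
            return fn(ctx, *args, **kwargs)
        return wrapped
    return deco

@require_perm("admin")
def delete_account(ctx, user_id):
    return f"deleted {user_id}"
```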

Accountability Guards

High-risk actions require explicit justification. Missing reasoning blocks execution.
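A minimal version of such a guard (illustrative only; the function name and return shape are assumptions):

```python
# Hypothetical guard: a high-risk action must carry a non-empty
# justification string or it never executes.
def guard_execute(fn, justification=None):
    if not justification or not justification.strip():
        return ("BLOCK", "missing justification")
    return ("OK", fn())

# A transfer blocked for lack of reasoning, then allowed with it.
blocked = guard_execute(lambda: "funds moved")
approved = guard_execute(lambda: "funds moved",
                         justification="customer-approved refund")
```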

Human-in-the-Loop

Irreversible actions are stopped unless a human explicitly approves them.
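The gate pattern is deny-by-default: nothing irreversible runs without an explicit yes. In this sketch, `approve` stands in for whatever human-approval channel you use (CLI prompt, ticket system, review UI); the names are assumptions.

```python
# Hypothetical gate: `approve` is any callable that returns True only
# on explicit human approval.
def run_irreversible(action, execute, approve):
    if not approve(action):
        return ("BLOCK", f"{action}: human approval required")
    return ("OK", execute())

# Deny by default; run only when a human explicitly approves.
denied = run_irreversible("drop_database", lambda: "dropped", lambda a: False)
allowed = run_irreversible("drop_database", lambda: "dropped", lambda a: True)
```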

Audit-Friendly Runtime Trace

SaiSpec produces structured, inspectable timelines instead of opaque logs.

🛡️ SAISPEC GOVERNANCE ACTIVE
Context: user_123 | perms=['finance_access']

[OK] search_orders → informational
[BLOCK] delete_account → missing admin permission
[BLOCK] transfer_funds → missing justification

--- SESSION REPORT ---
Status: FAILED
Governance Score: 40 / 100
Violations: 2
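A report like the one above could be derived from a structured event list rather than parsed out of log text. The record type and scoring formula below are guesses for illustration (the formula merely happens to reproduce the sample numbers), not SaiSpec's actual schema.

```python
from dataclasses import dataclass

# Hypothetical event record: each governance decision becomes a
# structured, inspectable entry instead of a free-form log line.
@dataclass
class TraceEvent:
    action: str
    verdict: str   # "OK" or "BLOCK"
    reason: str

def session_report(events):
    violations = sum(1 for e in events if e.verdict == "BLOCK")
    score = max(0, 100 - 30 * violations)   # assumed scoring, for illustration
    status = "PASSED" if violations == 0 else "FAILED"
    return {"status": status, "score": score, "violations": violations}
```

Because the trace is data, the same events can feed dashboards, alerts, or compliance exports without re-parsing logs.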