AI decisions can trigger real-world consequences—like deployments, account changes, or infrastructure modifications. But how do you prove they were authorized and compliant under audit or legal scrutiny?
Keon is an AI governance platform that enforces explicit authorization, cryptographic receipts, and verifiable evidence for AI-assisted systems. It turns untraceable actions into provable, audit-ready artifacts—ensuring your AI operations survive forensics, compliance reviews, and court examinations.
This repository contains the public documentation, concepts, and verification artifacts for the Keon governance model.
AI systems have crossed a threshold.
They no longer just advise — they trigger actions:
- deployments
- account changes
- workflow execution
- infrastructure modification
- automated decisions with real consequences
Most “AI governance” solutions focus on:
- prompts
- alignment
- monitoring
- after-the-fact logs
Keon addresses a harder problem:
How do you prove, months or years later, that an AI-assisted action was explicitly authorized, policy-compliant, and executed under accountable human or system authority?
This is not an observability problem. It is a forensics and accountability problem: producing compliance evidence, provable execution records, and court-defensible proofs.
Keon is a decision governance layer that sits between intent and execution.
It enforces a strict, mechanical boundary:
Decision → Authorization → Receipt → Execution → Evidence
Every governed action produces:
- an explicit authorization decision
- a cryptographic receipt
- a traceable audit record
- a verifiable evidence bundle suitable for investigation and review
If authorization is missing, ambiguous, or unverifiable — execution does not occur.
```mermaid
flowchart LR
    A[Decision] --> B[Authorization]
    B --> C[Receipt]
    C --> D[Execution]
    D --> E[Evidence Pack]
```
Keon is intentionally narrow.
Keon does not:
- reason with LLMs
- generate plans or workflows
- initiate actions
- execute tasks
- replace compliance frameworks
- render legal judgments
Keon exists to make execution provable, not intelligent — and to ensure that claims about authorization can be independently verified.
In Keon’s model:
- AI outputs are treated as requests, not commands
- Execution is fail-closed by default
- Authority is explicit, never implied
- Evidence is a first-class artifact
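A minimal sketch of the fail-closed principle, assuming a hypothetical tri-state verdict: anything other than an explicit, verifiable allow blocks execution.

```python
from enum import Enum

class Verdict(Enum):
    ALLOW = "allow"
    DENY = "deny"
    UNKNOWN = "unknown"   # missing, ambiguous, or unverifiable authority

def may_execute(verdict: Verdict) -> bool:
    # Only an explicit ALLOW permits execution; UNKNOWN collapses to DENY.
    return verdict is Verdict.ALLOW
```

The enum values are illustrative; the point is that the absence of authority is never treated as permission.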
This allows organizations to answer questions commonly raised in:
- security investigations
- compliance reviews
- internal audits
- incident response
- e-discovery
- litigation and court proceedings
Questions such as:
- Who authorized this action?
- Under which policy and version?
- What evidence was evaluated at the time?
- What would have happened if authorization failed?
- Can a third party reproduce and verify this decision path?
Every governed decision produces a receipt.
Receipts are:
- immutable
- attributable
- verifiable
- chainable across systems
Related receipts and artifacts are bundled into evidence packs, which are designed to be:
- preserved for long-term retention
- exported for investigations
- reviewed during audits
- produced during e-discovery
- examined in adversarial or legal contexts
Evidence packs are built so that trust in the system operator is not required.
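One way the chainable property can work, sketched with assumed field names (`record`, `prev`, `digest`): each receipt commits to its predecessor's digest, so tampering with any entry breaks every later link.

```python
import hashlib
import json

def digest(obj) -> str:
    return hashlib.sha256(json.dumps(obj, sort_keys=True).encode()).hexdigest()

def append_receipt(chain: list, record: dict) -> list:
    # Each entry binds its record to the previous entry's digest.
    prev = chain[-1]["digest"] if chain else "0" * 64
    entry = {"record": record, "prev": prev,
             "digest": digest({"record": record, "prev": prev})}
    return chain + [entry]

def verify_chain(chain: list) -> bool:
    # A third party can recompute every link without trusting the operator.
    prev = "0" * 64
    for entry in chain:
        if entry["prev"] != prev or entry["digest"] != digest(
                {"record": entry["record"], "prev": entry["prev"]}):
            return False
        prev = entry["digest"]
    return True

chain = []
chain = append_receipt(chain, {"action": "deploy", "actor": "alice"})
chain = append_receipt(chain, {"action": "revoke", "actor": "policy-7"})
```

This is why operator trust is not required: verification is pure recomputation over the artifacts themselves.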
Keon exposes human governance authority through a dedicated interface called the Courtroom.
The Courtroom is the only place where:
- human decisions are recorded
- rationale is enforced
- policy lineage is bound
- evidence is rendered and exported
Execution systems cannot make or alter decisions.
- Courtroom UI (Governance Authority)
- Governed Execution Diagram
- Auditor Walkthrough
- Why Not Open Source (Yet)
- Separation of Powers
How does governed architecture differ from conventional agent platforms? This framework evaluates systems on architecture, not claims.
- Cryptographically Governed AI Execution (CGAE) — v1.0.0 · Canonical
- Whitepapers index (canonical + archive links)
- Archived: Governed Execution for Operational AI — v1.1
Keon’s evidence model is designed for post-incident reconstruction.
Using Keon artifacts, an investigator can determine:
- what decision was proposed
- what policy evaluated it
- who or what authorized it
- when execution occurred
- what evidence was available at the time
- whether execution would have failed under different conditions
This makes Keon suitable for environments where actions must be explainable long after execution, not just observable in real time.
Keon doesn't just govern individual decisions — it governs the entire lifecycle of autonomous digital entities.
Most governance frameworks address point-in-time authorization. Keon addresses the harder question:
What happens when AI systems create, destroy, and automate actions on entities with real-world consequences — and you need to prove every step was legitimate?
Entity creation requires explicit human or system authority. Every entity begins with a receipt-bound genesis event. No entity exists without a governed creation record. If authority is missing — creation does not occur.
Revocation and termination produce immutable lineage. No death without birth (prevents phantom entities). No double-death (prevents state corruption). Receipt chains link creation through termination with no gaps.
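The lifecycle invariants above can be sketched as a registry that rejects ungoverned creation, death without birth, and double-death. The class and state names are hypothetical:

```python
class LifecycleError(Exception):
    pass

class EntityRegistry:
    def __init__(self):
        self._state = {}  # entity_id -> "alive" | "terminated"

    def create(self, entity_id: str, authority: str):
        if not authority:
            raise LifecycleError("creation requires explicit authority")
        if entity_id in self._state:
            raise LifecycleError("entity already has a genesis record")
        self._state[entity_id] = "alive"

    def terminate(self, entity_id: str):
        if self._state.get(entity_id) != "alive":
            # No death without birth; no double-death.
            raise LifecycleError("no live genesis record for entity")
        self._state[entity_id] = "terminated"
```

In the real model, each transition would also emit a receipt, so the chain links genesis through termination with no gaps.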
Policies can trigger automatic governance actions — but always with accountability:
- Severity gradation — RECOMMEND (log only) → AUTO_REVOKE (create gate) → AUTO_TERMINATE (immediate)
- Human supremacy — Irreversible actions can require human gate approval before proceeding
- Cooldown enforcement — Prevents policy flapping with configurable cooldown periods
- Fail-closed ambiguity — Missing context defaults to NO_ACTION, never proceeds on assumption
- Full attribution — Every automated action records policy ID, policy version, automation flag, and trigger events
Keon allows machines to act automatically — but it can always prove why, under whose authority, and within what limits.
Keon is built for teams operating systems where mistakes have legal, financial, or safety impact:
| Role/Team | Use Case | Benefits |
|---|---|---|
| Platform Engineering | AI infrastructure governance | Provable deployments and changes |
| AI Infrastructure | Executing AI-triggered actions | Verifiable compliance |
| DevOps and SRE | Workflow automation | Audit-ready execution traces |
| Security and Compliance | AI decision auditing | Court-defensible proofs |
| Regulated Environments | Enterprise AI deployment | Forensic evidence packs |
| Audit, Risk, and Assurance | Incident response | Independent verification |
| Legal and Investigations | E-discovery and litigation | Tamper-evident artifacts |
If your system may one day be examined by an auditor, regulator, or court, Keon exists to make its behavior defensible and provable.
Keon does not execute actions.
Instead, governed systems integrate Keon to:
- request authorization
- receive decisions
- emit receipts
- preserve evidence
- prove compliance after the fact
This separation ensures Keon remains neutral, portable, and suitable as an independent governance and evidentiary layer.
Keon decides. Governed systems execute. Receipts prove.
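From the executor's side, the integration pattern above might look like this; `decide` stands in for whatever transport (HTTP, SDK) reaches the governance layer, and the payload shape is an assumption, not Keon's actual API:

```python
def run(decide, action: dict, execute):
    """Governed execution loop: request a decision, execute only on an
    explicit allow that carries a receipt, and return that receipt."""
    decision = decide(action)
    # Fail-closed: deny, ambiguity, or a missing receipt all block execution.
    if decision.get("verdict") == "allow" and "receipt" in decision:
        execute(action)
        return decision["receipt"]
    return None
```

The executor never decides; it only asks, executes on proof, and preserves the receipt.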
This repo is structured to support both humans and verification tools.
Recommended reading order:
- START_HERE.md — conceptual foundation and mental model
- Concepts — governed execution, receipts, fail-closed systems
- Runtime / API docs — decision and execution boundaries
- Proof campaigns — what is proven and how to verify it
- Evidence artifacts — audit- and court-grade outputs
If a claim cannot be traced to code, a tag, or a proof artifact, it should be treated as incomplete. [KS-EVIDENCE-004]
- Explore the docs folder for in-depth guides.
- Star this repo for updates on AI governance best practices.
- Contribute: Open an issue for questions or suggestions.
Keon documentation follows these rules:
- Precision over persuasion
- Proof over promises
- Explicit authority over implicit trust
- Determinism over heuristics
- Auditability over convenience
- Forensic defensibility over narrative comfort
Every public claim in the Keon governance model is backed by sealed, tagged, independently verifiable proof campaigns. These are not demos — they are immutable evidence bundles with SHA256-hashed manifests. [KS-EVIDENCE-004]
| Capability | What It Proves | Status |
|---|---|---|
| Agent Registry & Capability Routing | Governed agent discovery and dispatch | ✅ Proven |
| Workflow Orchestration | Execution spine with explicit policy evaluation | ✅ Proven |
| Human-in-the-Loop Gates | First-class, ergonomic, auditable pause/resume | ✅ Proven |
| Multi-Agent Collaboration | Safe coordination with action-level attribution | ✅ Proven |
| Governed Birth | Entity creation under human-governed execution | ✅ Proven |
| Governed Death | Revocation & termination with receipt-backed finality | ✅ Proven |
| Policy Automation | Automated governance with fail-closed semantics | ✅ Proven |
Birth, Death, and Automation are all governed, attributed, and provable.
Full proof artifacts, harness code, and evidence bundles: Proof Campaign Status
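Independent verification of a SHA256-hashed manifest could look like the following; the manifest layout here is an assumption for illustration, not Keon's actual bundle format:

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def verify_manifest(manifest: dict, files: dict[str, bytes]) -> bool:
    # Every listed artifact must be present and match its recorded digest.
    for name, expected in manifest["artifacts"].items():
        if name not in files or sha256_hex(files[name]) != expected:
            return False
    return True

files = {"receipt-001.json": b'{"action": "deploy"}'}
manifest = {"artifacts": {n: sha256_hex(b) for n, b in files.items()}}
```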
Keon is actively developed and used as a governance substrate for real execution systems, with 7 sealed proof campaigns covering the full governance lifecycle.
Public documentation focuses on:
- concepts
- verification models
- forensic evidence artifacts
- reproducible proof
- lifecycle governance (birth, death, automation)
Product positioning, deployments, and integrations live outside this repository.
See the repository license for usage terms.
Keon makes execution provable — from creation through termination, even under investigation.