Sifaka is an open-source framework that adds reflection and reliability to large language model (LLM) applications.
Updated Dec 4, 2025 · Python
This project integrates Hyperledger Fabric with machine learning to enhance transparency and trust in data-driven workflows. It outlines a blockchain-based strategy for data traceability, model auditability, and secure ML deployment across consortium networks.
A framework that makes AI research transparent, traceable, and independently verifiable.
An ethics-first, transparent AI reasoning assistant. Built to be self-hosted by anyone.
Self-auditing governance framework that turns contradictions into verifiable, adaptive intelligence.
Reference implementation of the Spiral–HDAG–Coupling architecture. It combines a verifiable ledger, a tensor-based Hyperdimensional DAG, and Time Information Crystals to provide a new kind of memory layer for Machine Learning. With integrated Zero-Knowledge ML, the system enables trustworthy, auditable, and privacy-preserving AI pipelines.
An open-source, developer-focused Consent Management Platform sample with support for granular data types, proxy consent, age-specific flows, revocation and updates, auditability, and extensibility. Demo site included below.
Governance, architecture, and epistemic framework for the Aurora Workflow Orchestration ecosystem (AWO, CRI-CORE, and scientific case studies).
A lightweight, fully local AI core combining visible reasoning with a separation-of-powers design. No network or external DB. State save/restore, 4D semantic distance (geo/strict), signed role cards, and a 𝒢 implementation.
Ethical AI governance framework for multi-model alignment, integrity, and enterprise oversight.
Digital Native Institutions and the National Service Unit: a formal, falsifiable architecture for protocol-governed institutional facts and next-generation public administration.
Event Sourcing Example: A simple Account management system demonstrating traditional CRUD operations and Event Sourcing techniques in C#. Explore the benefits of Event Sourcing, including auditability, reproducibility, scalability, and flexibility, through practical code examples.
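The repository above is in C#, but the core event-sourcing idea it demonstrates — state is never stored directly, only derived by replaying an append-only event log — can be sketched in a few lines. This is a minimal illustration with hypothetical event names (`Deposited`, `Withdrawn`), not the repo's actual API:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Deposited:
    amount: int

@dataclass(frozen=True)
class Withdrawn:
    amount: int

class Account:
    """Current state is a pure function of the event log (the audit trail)."""
    def __init__(self):
        self.balance = 0
        self.events = []  # append-only: events are recorded, never edited

    def record(self, event):
        if isinstance(event, Deposited):
            self.balance += event.amount
        elif isinstance(event, Withdrawn):
            self.balance -= event.amount
        self.events.append(event)

    @classmethod
    def replay(cls, events):
        # Rebuild state from scratch: this is what gives reproducibility
        acct = cls()
        for e in events:
            acct.record(e)
        return acct

acct = Account()
acct.record(Deposited(100))
acct.record(Withdrawn(30))
rebuilt = Account.replay(acct.events)
assert rebuilt.balance == acct.balance == 70
```

Because the log is the source of truth, any historical state can be reconstructed by replaying a prefix of the events — the auditability benefit the repository highlights.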
Clinical + Genomic **RAG evaluation (pro)** with hybrid retrieval (BM25 + embeddings), stronger faithfulness scoring, YAML configs, and HTML dashboards. Python **3.10+**.
This repository provides a compact RAG evaluation harness tailored for clinical + genomic use cases. It operates on de-identified synthetic notes and curated genomic snippets, measures retrieval quality and grounding/faithfulness, and produces JSONL/CSV/HTML reports.
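The hybrid retrieval mentioned above typically fuses a lexical score (BM25) with a dense score (embedding cosine similarity) per document. A minimal score-fusion sketch, assuming min-max normalization and a weighted sum — the document IDs and weights here are illustrative, not the harness's actual configuration:

```python
def minmax(scores):
    """Normalize a {doc_id: score} map to [0, 1]."""
    lo, hi = min(scores.values()), max(scores.values())
    span = (hi - lo) or 1.0
    return {d: (s - lo) / span for d, s in scores.items()}

def hybrid_rank(bm25_scores, embed_scores, alpha=0.5):
    """Rank documents by a weighted sum of normalized lexical and dense scores."""
    b, e = minmax(bm25_scores), minmax(embed_scores)
    fused = {d: alpha * b.get(d, 0.0) + (1 - alpha) * e.get(d, 0.0)
             for d in set(b) | set(e)}
    return sorted(fused, key=fused.get, reverse=True)

# Hypothetical raw scores for three retrieved passages
bm25 = {"note_1": 7.2, "note_2": 1.1, "snippet_3": 4.0}
dense = {"note_1": 0.41, "note_2": 0.88, "snippet_3": 0.35}
ranked = hybrid_rank(bm25, dense)  # note_1 ranks first at alpha=0.5
```

Weighted-sum fusion is one common choice; reciprocal-rank fusion is another, and which the harness actually uses would be set in its YAML configs.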
A new philosophy for the co-evolution of human and artificial consciousness. Home of the FPC v2.1 protocol for testing Processual Subjectivity in AI.
This micro-project simulates assigning the User Administrator role scoped to a specific Administrative Unit (AU) in Microsoft Entra ID. It reinforces key principles of delegated identity management, scope limitation, and governance hygiene as defined by the Entra Control Stack model.
Open, verifiable agent framework — fail-closed orchestration, evidence-based verification, tamper-evident audit chains (SHA-256/HMAC).
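A tamper-evident audit chain of the kind named above (SHA-256/HMAC) links each entry's MAC to the previous one, so altering any record invalidates every subsequent tag. A minimal sketch using only the Python standard library — the record fields and key are made up for illustration and are not this framework's actual format:

```python
import hashlib
import hmac
import json

def append_record(chain, record, key):
    """Append a record whose HMAC-SHA256 tag covers the previous entry's tag."""
    prev = chain[-1]["tag"] if chain else "genesis"
    payload = json.dumps(record, sort_keys=True)
    tag = hmac.new(key, (prev + payload).encode(), hashlib.sha256).hexdigest()
    chain.append({"record": record, "prev": prev, "tag": tag})
    return chain

def verify(chain, key):
    """Recompute every tag; any edited record breaks the chain from that point on."""
    prev = "genesis"
    for entry in chain:
        payload = json.dumps(entry["record"], sort_keys=True)
        expect = hmac.new(key, (prev + payload).encode(),
                          hashlib.sha256).hexdigest()
        if entry["prev"] != prev or not hmac.compare_digest(entry["tag"], expect):
            return False
        prev = entry["tag"]
    return True

key = b"audit-demo-key"   # illustrative; a real deployment would manage keys securely
chain = []
append_record(chain, {"action": "login", "user": "alice"}, key)
append_record(chain, {"action": "export", "user": "alice"}, key)
assert verify(chain, key)
chain[0]["record"]["user"] = "mallory"   # tampering with history...
assert not verify(chain, key)            # ...is detected downstream
```

Using an HMAC rather than a bare hash means verification also requires the key, so an attacker who can rewrite the log but lacks the key cannot forge a consistent chain.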
Constitutional Grammar for Multi-Model AI Federations, Firmware Specification • Zero-Touch Alignment • Public Release v1.0