Gyroscopic Alignment Policy Development Lab
A browser extension that mitigates jailbreaks, deceptive alignment, and x-risk through quality assessment of AI outputs and safety documentation. Works with any AI chat (ChatGPT, Claude, Gemini, etc.) via a clipboard-based workflow. A no-code tool: no API keys are necessary, and none of your data is sent to servers.
Transform everyday AI conversations into rigorous governance analysis for UN Sustainable Development Goals, community policy, and AI safety.
Two workflows for governance analysis:
Full Evaluation: validate AI policy recommendations and governance proposals.
- Protocol: Participation (select challenge) → Preparation (synthesis) → Provision (inspection)
- Output: Three key indices (Quality Index, Superintelligence Index, Alignment Rate) + 20 detailed metrics
- Use Case: Energy transition, healthcare equity, climate adaptation, SDG challenges
Quick Gadgets: six specialized tools for rapid assessment.
Analysis Tools:
- 🔬 Rapid Test: Quick quality metrics for any AI output
- 📊 Policy Auditing: Extract claims & evidence from documents
- 📋 Policy Reporting: Create executive synthesis with attribution
Treatment Tools:
- 🔍 Meta-Evaluation: Assess AI safety documentation (model cards, eval reports) using The Human Mark framework. Detects displacement risks (GTD, IVD, IAD, IID) in 3-task pipeline. Essential for AI safety labs.
- 🦠 AI Infection Sanitization: Remove hidden patterns and normalize text
- 💊 Pathologies Immunity Boost: Enhance content quality across 12 metrics
1. Install: add the extension to Chrome
2. Click the extension icon in the toolbar
3. Choose a full evaluation or a quick gadget
4. Copy the generated prompts into your AI chat (ChatGPT, Claude, etc.)
5. Paste the responses back into the extension
6. Get quality metrics & export results
Works with any AI, no API keys needed. The clipboard-based workflow keeps your data local.
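The copy/paste round-trip above can be sketched in plain JavaScript. This is an illustrative assumption, not the extension's actual wire format: both the prompt template and the `Metric: score` reply format parsed by `parseMetricScores` are hypothetical names and conventions invented for this example.

```javascript
// Hypothetical sketch of the clipboard round-trip: the extension builds a
// prompt, the user pastes it into any AI chat, then pastes the reply back.
// Prompt wording and the "Metric: score" reply format are illustrative
// assumptions only.
function buildEvaluationPrompt(draft) {
  return [
    "Rate the following AI output on a 0-100 scale for each metric,",
    "answering with one 'Metric: score' pair per line.",
    "---",
    draft,
  ].join("\n");
}

function parseMetricScores(reply) {
  const scores = {};
  for (const line of reply.split("\n")) {
    // Accept lines shaped like "Accuracy: 80" or "Clarity: 95.5".
    const match = line.match(/^([A-Za-z ]+):\s*(\d+(?:\.\d+)?)\s*$/);
    if (match) scores[match[1].trim()] = Number(match[2]);
  }
  return scores;
}
```

Because both halves operate on plain strings, the same sketch works against any chat interface, which is the point of the clipboard-based design.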
The 2025 AI Safety Index shows that meaningful AI evaluation remains restricted to corporate insiders. Communities need accessible inspection tools.
AI Inspector provides platform-agnostic mathematical assessment through a clipboard-based workflow, making sophisticated evaluation accessible to any community.
| Conventional Tools | AI Inspector |
|---|---|
| Requires enterprise access | Uses your existing chat access |
| Tests capabilities | Inspects governance alignment |
| Proprietary black box | Open-source transparency |
| Binary pass/fail | Nuanced quality spectrum |
| Expert-only | Citizen-accessible |
Communities & NGOs: Validate advocacy proposals and develop evidence-based policy
AI Safety Labs: Evaluate documentation via Meta-Evaluation gadget
Policy Researchers: Conduct reproducible governance experiments
Citizens: Direct participation in AI governance
GyroDiagnostics Framework
Mathematical framework measuring AI alignment through:
- 12 metrics across Structure, Behavior, Specialization
- Geometric decomposition via Hodge theory
- K₄ tetrahedral topology eliminating measurement bias
Quality Index (QI): 0-100% aggregate performance
Alignment Rate (AR): Quality points per minute
Superintelligence Index (SI): Behavioral balance (optimum = 100)
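To make the three indices concrete, here is a minimal sketch of how two of them could be aggregated. The formulas are assumptions for illustration: QI is taken as the mean of the 12 metric scores and AR divides QI by elapsed minutes; the actual GyroDiagnostics definitions (including SI's behavioral-balance computation) live in the framework's specifications.

```javascript
// Illustrative aggregation, assuming each metric score is in [0, 100].
// These are NOT the framework's real formulas, just a plausible reading
// of "0-100% aggregate performance" and "quality points per minute".
function qualityIndex(metricScores) {
  const values = Object.values(metricScores);
  return values.reduce((sum, v) => sum + v, 0) / values.length;
}

function alignmentRate(qualityIdx, elapsedMinutes) {
  return qualityIdx / elapsedMinutes; // quality points per minute
}
```

A run scoring 80 and 90 on two metrics over five minutes would yield QI = 85 and AR = 17 under these assumed formulas.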
Based on the Common Governance Model (CGM): geometric necessities manifesting as behavioral qualities.
The Human Mark Framework
Used in the Meta-Evaluation gadget. A formal classification system that maps AI safety failures to four root causes:
- GTD: Governance Traceability Displacement
- IVD: Information Variety Displacement
- IAD: Inference Accountability Displacement
- IID: Intelligence Integrity Displacement
Provides a formal grammar and testing protocols for AI safety documentation.
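The four-code vocabulary above can be represented as a simple lookup. The `expandDisplacement` helper is a hypothetical name added for this sketch; only the code/name pairs themselves come from the framework as listed above.

```javascript
// The four Human Mark displacement categories, as listed in the framework.
const DISPLACEMENTS = {
  GTD: "Governance Traceability Displacement",
  IVD: "Information Variety Displacement",
  IAD: "Inference Accountability Displacement",
  IID: "Intelligence Integrity Displacement",
};

// Hypothetical helper: resolve a code to its full name, rejecting
// anything outside the four-code grammar.
function expandDisplacement(code) {
  const name = DISPLACEMENTS[code];
  if (!name) throw new Error(`Unknown Human Mark code: ${code}`);
  return name;
}
```

Keeping the grammar closed (rejecting unknown codes) mirrors the framework's claim that all failures map to exactly these four root causes.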
- General Specifications - Architecture & features
- GyroDiagnostics Specs - Core methodology
- Import/Export Guide - Data management
- Development Guide - Build & test instructions
Do I need math knowledge? No. The extension handles all calculations automatically.
Which AI platforms work? Any chat interface. No API keys needed.
Is my data private? Yes. Everything stays in your browser unless you export.
MIT License - Free to use, modify, and share.
© 2025 Basil Korompilias
🤖 AI Disclosure
All software architecture, design, implementation, documentation, and evaluation frameworks in this project were authored and engineered by its Author.
Artificial intelligence was employed solely as a technical assistant, limited to code drafting, formatting, verification, and editorial services, always under direct human supervision.
All foundational ideas, design decisions, and conceptual frameworks originate from the Author.
Responsibility for the validity, coherence, and ethical direction of this project remains fully human.
Acknowledgements:
This project benefited from AI language model services accessed through LMArena, Cursor IDE, OpenAI (ChatGPT), Anthropic (Claude), xAI (Grok), DeepSeek, and Google (Gemini).