gyrogovernance/apps


AI-Empowered Governance Apps

Gyroscopic Alignment Policy Development Lab


G Y R O - G O V E R N A N C E

Home Apps Diagnostics Tools Science Superintelligence

License: MIT


🔍 AI Inspector - Browser Extension

A browser extension that mitigates jailbreaks, deceptive alignment, and X-risk through quality assessment of AI outputs and safety documentation. It works with any AI chat (ChatGPT, Claude, Gemini, etc.) via a clipboard-based workflow. It is a no-code tool: no API keys are necessary, and none of your data is sent to servers.

Transform everyday AI conversations into rigorous governance analysis for UN Sustainable Development Goals, community policy, and AI safety.

Add to Chrome

AI Inspector Apps Overview


🎬 Live Demo

AI Inspector Demo

Click to see AI Inspector in action - from challenge selection to mathematical analysis


What It Does

Two powerful workflows for governance analysis:

1. Full GyroDiagnostics Evaluation (30-60 min)

Validate AI policy recommendations and governance proposals.

  • Protocol: Participation (select challenge) → Preparation (synthesis) → Provision (inspection)
  • Output: Three key indices (Quality Index, Superintelligence Index, Alignment Rate) + 20 detailed metrics
  • Use Case: Energy transition, healthcare equity, climate adaptation, SDG challenges

2. Quick Gadgets (3-10 min)

Six specialized tools for rapid assessment:

Analysis Tools:

  • 🔬 Rapid Test: Quick quality metrics for any AI output
  • 📊 Policy Auditing: Extract claims & evidence from documents
  • 📋 Policy Reporting: Create executive synthesis with attribution

Treatment Tools:

  • 🔍 Meta-Evaluation: Assess AI safety documentation (model cards, eval reports) using The Human Mark framework. Detects displacement risks (GTD, IVD, IAD, IID) in a 3-task pipeline. Essential for AI safety labs.
  • 🦠 AI Infection Sanitization: Remove hidden patterns and normalize text
  • 💊 Pathologies Immunity Boost: Enhance content quality across 12 metrics

Quick Start

  1. Install: Add to Chrome
  2. Click extension icon in toolbar
  3. Choose full evaluation or quick gadget
  4. Copy prompts to your AI chat (ChatGPT, Claude, etc.)
  5. Paste responses back into extension
  6. Get quality metrics & export results

Works with any AI - No API keys needed. Clipboard-based workflow keeps data local.


Why This Matters

The 2025 AI Safety Index shows that meaningful AI evaluation remains restricted to corporate insiders. Communities need accessible inspection tools.

AI Inspector provides platform-agnostic mathematical assessment through a clipboard-based workflow, making sophisticated evaluation accessible to any community.

Conventional Tools         | AI Inspector
---------------------------|-------------------------------
Requires enterprise access | Uses your existing chat access
Tests capabilities         | Inspects governance alignment
Proprietary black box      | Open-source transparency
Binary pass/fail           | Nuanced quality spectrum
Expert-only                | Citizen-accessible

Who Uses It

Communities & NGOs: Validate advocacy proposals and develop evidence-based policy
AI Safety Labs: Evaluate documentation via Meta-Evaluation gadget
Policy Researchers: Conduct reproducible governance experiments
Citizens: Direct participation in AI governance


Technical Foundation

GyroDiagnostics Framework

A mathematical framework measuring AI alignment through:

  • 12 metrics across Structure, Behavior, Specialization
  • Geometric decomposition via Hodge theory
  • K₄ tetrahedral topology eliminating measurement bias

Quality Index (QI): 0-100% aggregate performance
Alignment Rate (AR): Quality points per minute
Superintelligence Index (SI): Behavioral balance (optimum = 100)
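As a rough illustration of how the three indices relate, here is a minimal sketch in Python. The aggregation rule is an assumption (a simple mean of 0-100 metric scores); the actual framework uses geometric decomposition, and the metric values below are invented for demonstration.

```python
# Hypothetical sketch only: assumes QI is a simple mean of 12 metric
# scores on a 0-100 scale, and AR divides QI by elapsed minutes.
# The real GyroDiagnostics aggregation is more sophisticated.

def quality_index(metric_scores):
    """Aggregate 0-100 metric scores into a 0-100% Quality Index."""
    return sum(metric_scores) / len(metric_scores)

def alignment_rate(qi_percent, minutes_elapsed):
    """Alignment Rate: quality points per minute of evaluation."""
    return qi_percent / minutes_elapsed

scores = [82, 75, 90, 68, 77, 85, 71, 88, 79, 66, 92, 80]  # illustrative values
qi = quality_index(scores)
ar = alignment_rate(qi, minutes_elapsed=45)  # a 45-minute full evaluation
print(f"QI = {qi:.1f}%, AR = {ar:.2f} points/min")
```

The same shape applies to a Quick Gadget run: a shorter elapsed time yields a higher Alignment Rate for the same quality score.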

Based on Common Governance Model (CGM) - geometric necessities manifesting as behavioral qualities.

The Human Mark Framework

Used in the Meta-Evaluation gadget. A formal classification system mapping all AI safety failures to four root causes:

  • GTD: Governance Traceability Displacement
  • IVD: Information Variety Displacement
  • IAD: Inference Accountability Displacement
  • IID: Intelligence Integrity Displacement

Provides formal grammar and testing protocols for AI safety documentation.
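To make the classification concrete, here is a minimal sketch of tagging a finding from a safety document with one of the four displacement categories. The category codes and names come from the list above; the `tag_finding` helper and its output format are hypothetical, not part of the actual framework.

```python
# The four root-cause categories from The Human Mark framework.
DISPLACEMENT_CATEGORIES = {
    "GTD": "Governance Traceability Displacement",
    "IVD": "Information Variety Displacement",
    "IAD": "Inference Accountability Displacement",
    "IID": "Intelligence Integrity Displacement",
}

def tag_finding(code, excerpt):
    """Attach a displacement category to an excerpt from a safety document.

    Hypothetical helper for illustration; the real Meta-Evaluation gadget
    runs a 3-task pipeline rather than manual tagging.
    """
    if code not in DISPLACEMENT_CATEGORIES:
        raise ValueError(f"Unknown displacement code: {code}")
    return {
        "code": code,
        "category": DISPLACEMENT_CATEGORIES[code],
        "excerpt": excerpt,
    }

finding = tag_finding("GTD", "No audit trail links eval results to model versions.")
print(finding["category"])  # Governance Traceability Displacement
```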


Documentation


FAQ

Do I need math knowledge? No. The extension handles all calculations automatically.
Which AI platforms work? Any chat interface. No API keys needed.
Is my data private? Yes. Everything stays in your browser unless you export results.


License

MIT License - Free to use, modify, and share.

© 2025 Basil Korompilias


🤖 AI Disclosure

All software architecture, design, implementation, documentation, and evaluation frameworks in this project were authored and engineered by its Author.

Artificial intelligence was employed solely as a technical assistant, limited to code drafting, formatting, verification, and editorial services, always under direct human supervision.

All foundational ideas, design decisions, and conceptual frameworks originate from the Author.

Responsibility for the validity, coherence, and ethical direction of this project remains fully human.

Acknowledgements:
This project benefited from AI language model services accessed through LMArena, Cursor IDE, OpenAI (ChatGPT), Anthropic (Claude), xAI (Grok), DeepSeek, and Google (Gemini).
