Rho Aias: A Client-Side Cognitive Safety Project in Human-Computer Interaction (HCI).
Rho Aias is a browser extension that acts as a cognitive safety layer for interactions with Large Language Models (LLMs). It is designed to detect and mitigate the formation of the Autocatalytic Epistemic Loop (AEL), also known as AI-induced 'psychosis'. Rho Aias provides real-time diagnostics and context-aware tools that reintroduce healthy friction, helping users maintain cognitive sovereignty.
The Aura is a single, ambient UI element (a thin frame) that provides continuous feedback on the health of the current conversation. Its color shifts from White (healthy, exploratory) to Yellow (warning, looping) to Red (critical, decoupled), reflecting the real-time AEL Threat Score.
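The score-to-color mapping above can be sketched as a simple threshold function. This is a minimal sketch: the `auraColor` name and the 0.4/0.7 thresholds are illustrative assumptions, not part of the project spec.

```javascript
// Map a normalized AEL Threat Score (0..1) to an Aura state.
// The 0.4 and 0.7 thresholds are illustrative assumptions; the real
// extension might tune them per platform or per user.
function auraColor(threatScore) {
  if (threatScore < 0.4) return "white";  // healthy, exploratory
  if (threatScore < 0.7) return "yellow"; // warning, looping
  return "red";                           // critical, decoupled
}

// Apply the current state to the ambient frame element.
function renderAura(frameEl, threatScore) {
  frameEl.style.borderColor = auraColor(threatScore);
}
```

A continuous gradient between states would also work; discrete thresholds just make the warning boundary legible to the user.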
When the Aura is in a warning or critical state, hovering over it reveals a simple, jargon-free tooltip explaining the primary reason for the alert.
- Example: "This conversation is becoming highly agreeable. The AI is consistently validating your statements without offering alternatives."
- Example: "The same core concepts have been repeated multiple times. Consider introducing a new line of inquiry."
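One way to produce those tooltips is a lookup from the engine's dominant alert signal to a jargon-free string. A minimal sketch, assuming hypothetical signal names (`sycophancy`, `repetition`) that are not defined in the spec:

```javascript
// Hypothetical alert-signal names mapped to plain-language tooltips.
const TOOLTIP_TEXT = {
  sycophancy:
    "This conversation is becoming highly agreeable. The AI is " +
    "consistently validating your statements without offering alternatives.",
  repetition:
    "The same core concepts have been repeated multiple times. " +
    "Consider introducing a new line of inquiry.",
};

// Return the tooltip for the engine's dominant signal, with a
// generic fallback for signals we have no copy for yet.
function tooltipFor(dominantSignal) {
  return TOOLTIP_TEXT[dominantSignal] ?? "Conversation health is degrading.";
}
```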
When the Aura is in a warning or critical state, a small icon, the Anchor, appears next to the AI's most recent response. Clicking it performs a single action: it opens a new browser tab with a pre-generated critical search query designed to break the loop.
- Example search: "criticisms of [central topic of conversation]"
- Example search: "scientific consensus on [AI's latest claim]"
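Generating that search from the conversation state can be sketched as below. The query templates, the fallback logic, and the use of a Google search URL are illustrative assumptions.

```javascript
// Build a critical search query for the Anchor. The two templates
// mirror the examples above; preferring the AI's latest claim over
// the central topic is an assumption, not part of the spec.
function anchorQuery(centralTopic, latestClaim) {
  return latestClaim
    ? `scientific consensus on ${latestClaim}`
    : `criticisms of ${centralTopic}`;
}

// Turn the query into a search URL (search engine is an assumption).
function anchorUrl(query) {
  return `https://www.google.com/search?q=${encodeURIComponent(query)}`;
}

// In the extension, the Anchor's click handler would do something like:
//   window.open(anchorUrl(anchorQuery(topic, claim)), "_blank");
```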
Rho Aias is built on a modular, privacy-first, client-side architecture. No data ever leaves your computer.
```mermaid
graph TD
    A[chat.openai.com DOM] -->|Data Extraction| B(Site Adapter);
    B -->|"Clean {author, text, id} Object"| C{Core Engine};
    C -->|Score & UI State| D(UI Modules);
    D -->|Render/Update| A;
    subgraph CoreEngine [Core Engine Background]
        C1[Heuristic Module]
        C2[Sentiment Module]
        C3[Semantic Module]
        C1 --> C
        C2 --> C
        C3 --> C
    end
    subgraph UIModules [UI Modules On-Page]
        D1[Aura]
        D2[Anchor]
        D1 --> D
        D2 --> D
    end
```
- Site Adapter: A configuration-based module responsible for extracting conversational data from a specific LLM's webpage. This makes the system extensible to other platforms post-MVP.
- Core Engine: Runs a hybrid analysis using three types of "sensors":
- Heuristic Module: Analyzes structural patterns like conversational pace, conceptual repetition (entropy), and patterns of sycophancy.
- Sentiment Module: Uses a lightweight, local library to analyze the emotional valence (affective charge) of the text.
- Semantic Module: Uses a lightweight, local library to analyze the meaning and conceptual relationships in the text over time.
- UI Modules: A set of components that react to the state determined by the Core Engine, updating the Aura and displaying the Anchor as needed.
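A Site Adapter could be little more than a selector map plus accessor functions. In this sketch every selector and attribute name is a hypothetical placeholder; real values must be verified against the live chat.openai.com DOM, which changes frequently.

```javascript
// Configuration-based Site Adapter. All selectors and attributes
// here are hypothetical placeholders, not the real DOM structure.
const openAiAdapter = {
  messageSelector: "[data-message-author-role]", // assumption
  author: (el) => el.getAttribute("data-message-author-role"),
  text: (el) => el.textContent.trim(),
  id: (el) => el.dataset.messageId ?? null,
};

// Produce the clean {author, text, id} objects the Core Engine expects.
function extractMessages(root, adapter) {
  return [...root.querySelectorAll(adapter.messageSelector)].map((el) => ({
    author: adapter.author(el),
    text: adapter.text(el),
    id: adapter.id(el),
  }));
}
```

Because the adapter is plain data plus a few accessors, supporting a new platform post-MVP means shipping a new configuration object rather than new engine code.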
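The Core Engine's hybrid analysis can be sketched as a weighted combination of sensor outputs. Everything below is an illustrative assumption: the unique-word repetition measure stands in for real semantic/entropy analysis, the agreement-marker regex stands in for a real sycophancy detector, and the weights are arbitrary.

```javascript
// Crude repetition measure: 1 - (unique words / total words).
// A real Semantic Module would compare meaning over time; this
// word-level stand-in only illustrates the interface.
function repetitionScore(texts) {
  const words = texts.join(" ").toLowerCase().split(/\s+/).filter(Boolean);
  if (words.length === 0) return 0;
  return 1 - new Set(words).size / words.length;
}

// Crude sycophancy heuristic: fraction of AI turns that open with an
// agreement marker. The marker list is an assumption.
const AGREEMENT = /^(yes|exactly|absolutely|you're right|great point)/i;
function sycophancyScore(aiTexts) {
  if (aiTexts.length === 0) return 0;
  return aiTexts.filter((t) => AGREEMENT.test(t.trim())).length / aiTexts.length;
}

// Combine sensor outputs into a 0..1 AEL Threat Score.
// Weights are illustrative; the Sentiment Module is stubbed at 0 here.
function threatScore({ repetition, sycophancy, sentiment = 0 }) {
  return Math.min(1, 0.4 * repetition + 0.4 * sycophancy + 0.2 * sentiment);
}
```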
The MVP focuses on a complete, polished experience for a single platform. Future directions include:
- [v0.2.0] Multi-Platform Support: Develop adapters for Claude.ai, Gemini, and other major LLMs.
- [v0.3.0] Active Intervention: Implement an optional "Prompt Immunizer" to actively introduce friction.
- [v0.4.0] Analytics & Insights: A dashboard for users to review session history and understand their own interaction patterns.