ECHO is a modular, privacy-first memory assistant built on the Internet Computer Protocol (ICP). It helps Alzheimer's patients with context-aware cues and supports caregivers with real-time alerts and memory anchoring.
- Frontend: Archana15-codes
- Backend: Kartik1446, Satyabrat2005
- IoT: GitGudScrubss
| Feature | Description |
|---|---|
| Smart Memory Queries | Answers patient questions using pre-set or AI-generated responses |
| On-chain Actor Logic (Motoko) | Secure, tamper-proof memory logic stored on ICP |
| Caregiver Anchoring | Caregivers can set names, identities, or routine facts for recall |
| Emotion Detection | Detects vocal stress using Whisper or pyAudioAnalysis |
| Dspy-AI Reasoning (Under Development) | LLM agent generates calming, empathetic, dynamic replies |
| Real-time Caregiver Alerts | Notifies family of critical behavior (e.g., wandering or distress) |
| agent-js Frontend | Lightweight HTML/JS interface that talks to the ICP canister |
| Inter-Canister Communication | Future support for caregiver ↔ patient sync architecture |
| Logging & Analytics | Optional logging of queries for pattern review and analytics |
| Modular Design | Easy to extend with face recognition, TTS, IoT, and more |
```
[Patient Device / Web Interface]
              ↓
    [agent-js → ICP Canister]
              ↓
  [MemoryAssistant.mo (Motoko)]
            ↓    ↑
  [ML API (FastAPI + Whisper)]
              ↓
[Empathetic Reply / Alert / Log]
```
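To make this flow concrete, here is a minimal, purely illustrative sketch of the frontend side: `ask` is a placeholder canister method name (not the project's actual interface), and `memoryAssistant` stands for the agent-js actor created in the local setup steps further down.

```javascript
// Illustrative flow only: "ask" is a placeholder canister method name, and
// memoryAssistant is the agent-js actor created in the setup steps below.
async function handlePatientQuery(memoryAssistant, questionText) {
  // 1. Patient device / web UI -> agent-js -> MemoryAssistant.mo on ICP
  const cue = await memoryAssistant.ask(questionText);

  // 2. Recorded audio could additionally be forwarded to the ML API
  //    (FastAPI + Whisper) for emotion detection; see the usage sketch below.

  // 3. The empathetic reply / alert / log entry is surfaced in the UI.
  return cue;
}
```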
| Layer | Technology | Role / Purpose |
|---|---|---|
| Frontend | HTML, CSS, JavaScript (agent-js) | Patient-facing UI and communication with the ICP canister |
| Smart Contracts | Motoko (on Internet Computer) | On-chain logic for memory queries, caregiver updates, and patient data |
| SDK / Bridge | agent-js (DFINITY JavaScript SDK) | Enables the frontend to communicate securely with the ICP backend (canister) |
| Dev Tools | DFX CLI, ICP Local Replica | Tooling to develop, test, and deploy Motoko canisters on local, test, or main networks |
| ML/AI Layer | Python, FastAPI, Whisper, pyAudioAnalysis, pydantic | Emotion detection from voice or text inputs; optional behavior inference |
| LLM Agent | Dspy.ai | Generates adaptive, empathetic responses using language models (LLM agent) |
| Hosting | Replit, Render, or ICP Mainnet | Hosts frontend, ML API, and canisters for demo or production environments |
To run ECHO locally, you will need:

- DFX SDK (for Motoko/ICP)
- Node.js (for the frontend/agent-js)
- Python 3.8+ (for backend-ml, optional)
```bash
git clone <repo-url>
cd Echo                  # for the whole website
# cd Echo_Backend_ml     # for the ML backend only
# cd Echo_Frontend_ml    # for the frontend only
dfx start --background
./deploy.sh
```

Then open `frontend/index.html` in your browser.
To run the optional ML backend:

```bash
cd backend-ml
pip install -r requirements.txt
uvicorn app:app --reload
```

To connect the frontend to the Motoko canister locally:
- Run `dfx generate memory_assistant`.
- Copy the generated JS file (e.g., `.dfx/local/canisters/memory_assistant/memory_assistant.js`) into the `frontend/` directory, or serve it via a local server.
- Add the following script tag to `index.html`, before any script that uses the canister: `<script src="./memory_assistant.js"></script>`
- Now the frontend can call the canister methods directly.
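For example, a minimal sketch of that call from `index.html`, assuming the generated bindings export `createActor` and `canisterId` (the exact exports depend on the dfx version) and assuming a canister method named `ask`:

```javascript
// Runs inside a <script type="module"> tag in frontend/index.html.
// Assumed exports from the generated bindings; adjust to what dfx actually emits.
import { createActor, canisterId } from "./memory_assistant.js";

const memoryAssistant = createActor(canisterId, {
  agentOptions: { host: "http://127.0.0.1:4943" }, // local replica started by `dfx start`
});

// "ask" is a placeholder method name for illustration.
const reply = await memoryAssistant.ask("Where am I?");
console.log(reply);
```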
- Enter queries like "Who is this?" or "Where am I?" in the frontend.
- Caregivers can set memory anchors (future feature).
- Emotion detection and alerts are stubbed for the demo (a sketch of the eventual ML API call follows below).
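The sketch below is one hedged guess at how the frontend could call the FastAPI emotion service once the stub is replaced; the `/analyze` route, the port, and the response shape are illustrative assumptions, not the actual backend-ml API.

```javascript
// Hypothetical call to the backend-ml service: the "/analyze" route and the
// response fields are placeholders, not the real FastAPI interface.
async function detectEmotion(audioBlob) {
  const form = new FormData();
  form.append("file", audioBlob, "query.wav");

  const res = await fetch("http://127.0.0.1:8000/analyze", {
    method: "POST",
    body: form,
  });
  if (!res.ok) throw new Error(`ML API error: ${res.status}`);

  // e.g. { emotion: "stressed", confidence: 0.82 } -- placeholder shape
  return res.json();
}
```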
- Add new Motoko canisters for caregiver sync.
- Integrate real ML models in `backend-ml/`.
- Expand the frontend for voice, TTS, or IoT integration.
| Component | Description |
|---|---|
| ESP32-S3-WROOM (U4) | Dual-core MCU with USB OTG, AI acceleration, and camera interface |
| OV2640 (CAM1) | 2MP camera module for capturing images and video |
| PAM8403 (U6) | Stereo audio amplifier for driving speakers |
| TP4056 (U1) | Li-Ion battery charger with status indicators |
| AMS1117-3.3 (U2) | Linear voltage regulator to supply 3.3V |
| MT3608 (U3) | Boost converter (optional power conditioning) |
| AO4407A (Q1) | P-Channel MOSFET used as a load switch |
| Li-Ion Battery (BT1) | 3.7V 600mAh rechargeable battery |
- Power Input
  - The system can be powered via USB-C (VBUS) or from a Li-Ion battery (BT1).
  - The TP4056 charges the battery and powers the system when USB is connected.
  - The AO4407A MOSFET acts as a switch to allow battery power when USB is disconnected.
- Voltage Regulation
  - The AMS1117-3.3 converts the battery voltage (~3.7V) to 3.3V to power the ESP32, camera, and other peripherals.
- ESP32-S3 Controller
  - Acts as the brain of the system, communicating with:
    - the camera module (OV2640) via the DVP interface (DATA[0–7], PCLK, XCLK, HREF, VSYNC);
    - the audio amplifier (PAM8403) for audio output;
    - the USB interface (for programming/data transfer).
- Camera
  - The OV2640 connects directly to the ESP32-S3 using a parallel 8-bit interface with control signals.
  - It is powered by 3.3V and has separate analog (AVDD), digital (DVDD), and I/O power domains.
- Audio Amplifier
  - The PAM8403 amplifies the stereo output signals (LOUT, ROUT) for speaker playback.
  - It is controlled via the SHDN and MUTE pins.
- Compact and power-efficient design
- Built-in USB charging via TP4056
- Battery-operated with automatic switching
- ESP32-S3 support for AI, image processing, and ML
- Integrated camera and speaker support
| Module | Voltage | Current (approx.) |
|---|---|---|
| ESP32-S3 | 3.3V | 120–240 mA |
| OV2640 | 3.3V | 60–100 mA |
| PAM8403 | 5V | 100–200 mA |
| Total Peak | – | ~500 mA |
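As a rough sanity check, the upper estimates sum to about 240 mA + 100 mA + 200 mA ≈ 540 mA, in line with the ~500 mA peak figure; at that sustained draw, the 3.7V 600mAh battery would last on the order of an hour (600 mAh ÷ ~500 mA ≈ 1.2 h) before needing a recharge through the TP4056.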
This project is open-source and licensed under the MIT License. You are free to use, modify, and distribute with attribution.





