FloatChat-AI 🌊🤖


The Intelligent Interface for Indian Ocean ARGO Float Data.

FloatChat-AI lets you explore the depths of the Indian Ocean using real-time data from the ARGO float network. It combines a modern interactive dashboard with an AI-powered streaming RAG chatbot to make oceanographic data accessible, understandable, and actionable.

[Screenshots: Dashboard, Chat, Map, and Charts views (see the screenshots/ folder)]




✨ Key Features

🧠 AI-Powered Streaming Chat (RAG)

  • Real-time Streaming — Tokens arrive word-by-word (like ChatGPT) via Ollama streaming, so you never wait for the full response.
  • Context-Aware RAG — Uses FAISS vector embeddings + SentenceTransformers to retrieve relevant float data for every query.
  • Local & Cloud AI — Supports Ollama (Llama 3.2, FREE) for local inference or OpenAI GPT-4o-mini for cloud-based reasoning.
  • Auto Pre-warming — The Ollama model is pre-loaded into RAM at server startup for instant first responses.
  • Chart Intent Detection — Automatically detects when a visualization is needed and renders the right chart.
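Chart intent detection can be implemented with simple keyword matching. The sketch below is a hypothetical illustration (the names CHART_KEYWORDS and detect_chart_intent are not from the repository; the actual detection logic in ai_service.py may differ):

```python
# Hypothetical sketch of chart-intent detection; names are illustrative,
# not taken from the repository's code.
CHART_KEYWORDS = {
    "temperature_profile": ("temperature profile", "temp vs depth"),
    "salinity_profile": ("salinity profile", "salinity vs depth"),
    "ts_diagram": ("t-s diagram", "water mass"),
    "depth_time": ("depth-time", "over time"),
}

def detect_chart_intent(message):
    """Return a chart type if the message asks for a visualization, else None."""
    text = message.lower()
    for chart_type, phrases in CHART_KEYWORDS.items():
        if any(phrase in text for phrase in phrases):
            return chart_type
    return None
```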

📊 Interactive Data Dashboard

  • Scientifically Verified Data — Only real, confirmed ARGO floats from the Indian Ocean (Arabian Sea, Bay of Bengal, Equatorial, Southern IO).
  • Live Map — Interactive Leaflet map showing real-time float positions and drift trajectories.
  • Advanced Visualization:
    • Temperature & salinity depth profiles
    • T-S diagrams for water mass analysis
    • Depth-time sections
    • Regional surface statistics (0–10 m)

πŸ›‘οΈ Robust Backend

  • Data Pipeline — Automated fetching and validation of float profiles from the Argovis 2.0 API.
  • Vector Search — Automatic FAISS embedding generation for new float data.
  • SQLite Database — Efficient local storage for metadata and profiles.
  • Fallback Mode — Structured data responses even when no AI is configured.
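The vector-search step amounts to ranking stored float summaries by embedding similarity. This is a minimal pure-Python stand-in for what FAISS does efficiently at scale (cosine similarity over dense vectors); in the real pipeline the vectors come from SentenceTransformers embeddings:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length dense vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def retrieve_top_k(query_vec, doc_vecs, docs, k=2):
    """Return the k documents whose embeddings are most similar to the query.
    FAISS performs this same nearest-neighbour search over large indexes."""
    ranked = sorted(zip(doc_vecs, docs),
                    key=lambda pair: cosine(query_vec, pair[0]),
                    reverse=True)
    return [doc for _, doc in ranked[:k]]
```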

πŸ› οΈ Technology Stack

Layer        Technology                                       Purpose
Frontend     React 19, Recharts, React-Leaflet                UI, charts, interactive maps
Backend      Flask 3.1, SQLAlchemy, Gunicorn                  REST API, ORM, production server
AI / RAG     FAISS, SentenceTransformers (all-MiniLM-L6-v2)   Vector search & embeddings
LLM          Ollama (Llama 3.2) / OpenAI (GPT-4o-mini)        Conversational AI
Data Source  Argovis 2.0 API                                  Scientific ARGO float observations
Database     SQLite                                           Local persistent storage
DevOps       Docker, Dev Containers, GitHub Codespaces        Reproducible environments

🚀 Quick Start

Prerequisites

Tool     Version  Required
Python   3.10+    Yes
Node.js  18+      Yes
Ollama   latest   Recommended (for free AI)
Git      any      Yes

Option 1: Automated Setup ⚡ (Recommended)

git clone https://github.com/Rohitgautam02/Floatchat-AI.git
cd Floatchat-AI
chmod +x setup.sh
./setup.sh

The interactive script will:

  1. Install all Python & Node.js dependencies
  2. Let you choose AI mode (Ollama / OpenAI / Basic)
  3. Create the SQLite database
  4. Fetch sample ARGO data from the Indian Ocean
  5. Show you how to start the app

Option 2: Manual Setup

# 1. Clone the repository
git clone https://github.com/Rohitgautam02/Floatchat-AI.git
cd Floatchat-AI

# 2. Backend setup
cd backend
pip install -r requirements.txt
cp .env.example .env          # then edit .env with your AI settings

# 3. Initialize database and fetch ocean data
python fetch_argovis.py

# 4. Start the backend (Terminal 1)
python app.py                  # runs on http://localhost:5000

# 5. Frontend setup (Terminal 2)
cd ../frontend
npm install
npm start                      # opens http://localhost:3000

Option 3: Docker Compose

git clone https://github.com/Rohitgautam02/Floatchat-AI.git
cd Floatchat-AI
cp backend/.env.example backend/.env   # edit .env with your settings
docker compose up --build

Open http://localhost:3000 — the frontend proxies API calls to the backend automatically.

Windows-Specific Setup

On Windows, the setup.sh script requires Git Bash or WSL. Alternatively, follow the manual setup:

# PowerShell
git clone https://github.com/Rohitgautam02/Floatchat-AI.git
cd Floatchat-AI

# Backend
cd backend
pip install -r requirements.txt
Copy-Item .env.example .env    # edit .env with notepad
python fetch_argovis.py
python app.py

# Frontend (new PowerShell window)
cd frontend
npm install
npm start

🐳 Dev Container / GitHub Codespaces

This project includes a full Dev Container configuration for one-click development.

GitHub Codespaces (Easiest)

  1. Go to github.com/Rohitgautam02/Floatchat-AI
  2. Click Code → Codespaces → Create codespace on main
  3. Wait for the container to build (~2–3 minutes)
  4. The post-create script auto-installs all dependencies
  5. Open two terminals:
    # Terminal 1 — Backend
    cd backend && python app.py

    # Terminal 2 — Frontend
    cd frontend && npm start
  6. Codespaces will auto-forward ports 5000 & 3000

Note: Ollama cannot run inside Codespaces. Use AI_MODE=openai with an API key in backend/.env, or use Basic Mode.

VS Code Dev Container (Local)

  1. Install Docker Desktop and the Dev Containers extension
  2. Open the repo folder in VS Code
  3. Press Ctrl+Shift+P → Dev Containers: Reopen in Container
  4. All dependencies install automatically via postCreateCommand
  5. For Ollama support, install Ollama on your host machine (not in the container) and set:
    OLLAMA_URL=http://host.docker.internal:11434

🤖 AI Configuration

FloatChat-AI gives you three options — choose what fits your needs:

Option     Cost            Speed           Setup
🆓 Ollama  Free forever    ~5 tok/s (CPU)  5 min local install
💳 OpenAI  ~$0.001 / chat  Very fast       2 min (API key)
📊 Basic   Free            Instant         None

🆓 Ollama — Free Local AI (Recommended)

# Install Ollama (one-time)
# Linux / macOS:
curl -fsSL https://ollama.ai/install.sh | sh
# Windows: download from https://ollama.com/download

# Pull the model (~2 GB download)
ollama pull llama3.2

# Start Ollama (keep running in background)
ollama serve

In backend/.env:

AI_MODE=ollama
OLLAMA_URL=http://localhost:11434
OLLAMA_MODEL=llama3.2

Performance tip: The backend pre-warms Ollama at startup (loads model into RAM). The first chat after a cold start may take a few extra seconds, but all subsequent responses stream in real-time.

💳 OpenAI — Cloud AI

Get an API key from platform.openai.com/api-keys.

In backend/.env:

AI_MODE=openai
OPENAI_API_KEY=sk-your-actual-key-here
OPENAI_MODEL=gpt-4o-mini

📊 Basic Mode — No AI

No configuration needed. You'll get structured data tables, charts, and statistics without AI conversations. Upgrade anytime by editing .env.


📂 Project Structure

Floatchat-AI/
├── .devcontainer/
│   ├── devcontainer.json       # Dev Container / Codespaces config
│   └── setup.sh                # Post-create setup for Codespaces
├── .vscode/
│   └── settings.json           # VS Code workspace settings
├── backend/
│   ├── app.py                  # Flask application & API routes
│   ├── fetch_argovis.py        # ARGO data fetcher (Argovis 2.0 API)
│   ├── requirements.txt        # Python dependencies
│   ├── .env.example            # Environment variable template
│   ├── services/
│   │   ├── ai_service.py       # LLM integration (Ollama streaming / OpenAI)
│   │   ├── rag_service.py      # FAISS vector search & RAG retrieval
│   │   └── data_service.py     # Database queries, stats & chart data
│   ├── db/
│   │   ├── models.py           # SQLAlchemy models (ArgoRecord, FloatMetadata)
│   │   └── session.py          # Database session factory
│   └── data/
│       ├── argo_indian_ocean.csv
│       └── faiss_index/        # FAISS vector index + documents
├── frontend/
│   ├── package.json
│   ├── public/
│   │   └── index.html
│   └── src/
│       ├── App.jsx             # Main app with routing
│       ├── App.css             # Ocean-themed dark mode styles
│       ├── index.jsx           # React entry point
│       └── components/
│           ├── ChatPanel.jsx   # AI chat with real-time streaming
│           ├── Dashboard.jsx   # Data dashboard with stats cards
│           ├── Charts.jsx      # Recharts visualizations
│           ├── MapView.jsx     # Leaflet interactive map
│           └── Navbar.jsx      # Navigation bar
├── screenshots/                # App screenshots
├── setup.sh                    # Interactive one-click setup
├── docker-compose.yml          # Docker Compose multi-service
├── Dockerfile                  # Multi-stage Docker build
├── CONTRIBUTING.md             # Contribution guidelines
├── LICENSE                     # MIT License
└── README.md                   # This file

📡 API Reference

All endpoints are served from http://localhost:5000.

Chat Endpoints

Method  Endpoint          Description
POST    /api/chat         Send message, get full JSON response
POST    /api/chat/stream  Send message, get streaming NDJSON tokens

Request body:

{ "message": "What is the average ocean temperature?" }

/api/chat response:

{
  "reply": "Based on 885,570 measurements...",
  "chart_type": "temperature_profile"
}

/api/chat/stream response (newline-delimited JSON):

{"token": "Based ", "done": false, "chart_type": "temperature_profile"}
{"token": "on ", "done": false, "chart_type": "temperature_profile"}
...
{"token": "", "done": true, "chart_type": "temperature_profile"}
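A client consumes this stream by parsing each NDJSON line and concatenating tokens until done is true. A minimal sketch (the helper name assemble_stream is illustrative; with the requests library you would feed it response.iter_lines(decode_unicode=True) from a stream=True POST, which is an assumption about your client, not the repository's code):

```python
import json

def assemble_stream(ndjson_lines):
    """Fold /api/chat/stream NDJSON lines into (full_reply, chart_type)."""
    tokens, chart_type = [], None
    for line in ndjson_lines:
        if not line.strip():
            continue  # skip any blank keep-alive lines
        event = json.loads(line)
        chart_type = event.get("chart_type", chart_type)
        tokens.append(event.get("token", ""))
        if event.get("done"):
            break
    return "".join(tokens), chart_type
```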

Data Endpoints

Method  Endpoint                                   Description
GET     /                                          Health check & setup status
GET     /api/data/stats                            Dataset statistics overview
GET     /api/data/floats                           All float metadata + positions
GET     /api/data/floats/<wmo_id>                  Single float details
GET     /api/data/floats/<wmo_id>/trajectory       Float trajectory data
GET     /api/data/charts/temp-profile?platform=X   Temperature vs depth
GET     /api/data/charts/sal-profile?platform=X    Salinity vs depth
GET     /api/data/charts/ts-diagram?platform=X     T-S diagram data
GET     /api/data/charts/depth-time?platform=X     Depth-time section
POST    /api/data/query                            Query records with filters
POST    /api/data/export                           Export filtered data as CSV

βš™οΈ Environment Variables

All configuration lives in backend/.env. Copy from the template:

cp backend/.env.example backend/.env

Variable          Default                      Description
AI_MODE           ollama                       AI provider: ollama, openai, or leave unset for basic
OLLAMA_URL        http://localhost:11434       Ollama server URL
OLLAMA_MODEL      llama3.2                     Ollama model name
OPENAI_API_KEY    —                            OpenAI API key (only for AI_MODE=openai)
OPENAI_MODEL      gpt-4o-mini                  OpenAI model name
DATABASE_URL      sqlite:///./db/argo_data.db  SQLite database path
FLASK_ENV         development                  development or production
FLASK_SECRET_KEY  your-secret-key-here         Flask session secret
PORT              5000                         Backend server port
FRONTEND_URL      http://localhost:3000        Frontend URL for CORS
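Putting it together, a minimal backend/.env for free local AI might look like this (only documented variables are used; the secret key is a placeholder you should change):

```
AI_MODE=ollama
OLLAMA_URL=http://localhost:11434
OLLAMA_MODEL=llama3.2
DATABASE_URL=sqlite:///./db/argo_data.db
FLASK_ENV=development
FLASK_SECRET_KEY=change-me-to-a-random-string
PORT=5000
FRONTEND_URL=http://localhost:3000
```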

🧪 Scientific Verification

This project adheres to strict oceanographic data quality standards:

  • Boundary Checks — Floats drifting outside 20°E–120°E longitude are automatically filtered.
  • Surface Statistics — "Surface Temperature" uses only the top 10 m of the water column (scientifically accepted mixed-layer definition).
  • Verified Sources — All data come from the Argovis 2.0 API, cross-checked against official float indices.
  • No Synthetic Data — Every record is a real ARGO float profile from the Indian Ocean.
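The surface-statistics rule reduces to a depth filter applied before averaging. A minimal sketch (the function name and record shape are illustrative, not the repository's API):

```python
def surface_mean(records, max_depth_m=10.0):
    """Average a measured variable over the top 10 m of the water column,
    matching the surface-statistics definition above.
    records: iterable of (depth_m, value) pairs from one or more profiles."""
    surface_values = [value for depth, value in records
                      if 0.0 <= depth <= max_depth_m]
    if not surface_values:
        return None  # no measurements shallow enough to count as "surface"
    return sum(surface_values) / len(surface_values)
```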

Tracked Regions

Region                   Latitude     Longitude
Arabian Sea              5°N – 30°N   45°E – 78°E
Bay of Bengal            5°N – 25°N   78°E – 100°E
Equatorial Indian Ocean  10°S – 10°N  40°E – 100°E
Southern Indian Ocean    35°S – 10°S  20°E – 120°E
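These four boxes can be expressed as an inclusive bounding-box lookup, with southern latitudes negative and eastern longitudes positive. This is an illustrative sketch rather than the repository's code; note that the Arabian Sea and Equatorial boxes overlap between 5°N and 10°N, so match order matters:

```python
# Tracked-region bounding boxes: (name, lat_min, lat_max, lon_min, lon_max).
# Order matters where boxes overlap: the first match wins.
REGIONS = [
    ("Arabian Sea",               5.0,  30.0,  45.0,  78.0),
    ("Bay of Bengal",             5.0,  25.0,  78.0, 100.0),
    ("Equatorial Indian Ocean", -10.0,  10.0,  40.0, 100.0),
    ("Southern Indian Ocean",   -35.0, -10.0,  20.0, 120.0),
]

def classify_region(lat, lon):
    """Return the first tracked region containing (lat, lon), or None if the
    float has drifted outside all tracked boxes (and would be filtered)."""
    for name, lat_min, lat_max, lon_min, lon_max in REGIONS:
        if lat_min <= lat <= lat_max and lon_min <= lon <= lon_max:
            return name
    return None
```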

🤝 Contributing

Contributions are welcome! Please see CONTRIBUTING.md for detailed guidelines.

Quick version:

  1. Fork the repo
  2. Create a feature branch: git checkout -b feature/amazing-feature
  3. Commit changes: git commit -m 'Add amazing feature'
  4. Push: git push origin feature/amazing-feature
  5. Open a Pull Request

📄 License

Distributed under the MIT License. See LICENSE for details.


πŸ™ Acknowledgments

  • Argovis — API for accessing ARGO float data
  • CSIRO & INCOIS — Deploying Indian Ocean ARGO floats
  • Ollama — Democratizing local AI inference
  • FAISS — Efficient vector similarity search
  • SentenceTransformers — State-of-the-art text embeddings

Made with 🌊 for ocean science
