A comprehensive IoT-based water quality monitoring system with AI-powered insights, developed as a Sejong University Capstone project.
- 🌊 Real-Time Monitoring: Live telemetry data (pH, temperature, dissolved oxygen)
- 🤖 AI-Powered Analysis: Machine learning-based water quality prediction
- 💬 AI Assistant: Veronica - conversational AI assistant for aquarium management
- 📊 Interactive Dashboard: Charts, gauges, and real-time visualizations
- 🔐 Multi-Provider Auth: Email/Password, Google, and Kakao OAuth
- 📱 Responsive Design: Works on desktop, tablet, and mobile devices
- 📤 Data Export: CSV/JSON export functionality
- 🎥 Camera Integration: Live camera feed support
- 🍽️ Feeder Control: Automated feeding system control
- Frontend: Next.js 15, React 19, TypeScript, Tailwind CSS
- Backend: Next.js API Routes, Express.js, Socket.IO
- Database: PostgreSQL with Prisma ORM
- AI/ML: FastAPI (Python), Ollama (LLM)
- Hardware: Arduino with sensors (pH, temperature, DO)
- Authentication: NextAuth.js (Credentials, Google, Kakao)
- Node.js 18+ and pnpm
- PostgreSQL database
- Python 3.8+ (for AI service)
- Arduino hardware (optional - mock data available)
```bash
git clone https://github.com/azizbekdevuz/fishlinic.git
cd fishlinic

# Install Node.js dependencies
pnpm install

# Install Python dependencies (for AI service)
cd ai-service
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate
pip install -r requirements.txt
cd ..
```

Create a `.env` file in the root directory:
```env
# Database
DATABASE_URL="postgresql://user:password@localhost:5432/fishlinic"

# NextAuth
AUTH_SECRET="your-secret-key-here"
NEXTAUTH_URL="http://localhost:3000"

# OAuth Providers (optional)
GOOGLE_CLIENT_ID="your-google-client-id"
GOOGLE_CLIENT_SECRET="your-google-client-secret"
KAKAO_CLIENT_ID="your-kakao-client-id"
KAKAO_CLIENT_SECRET="your-kakao-client-secret"

# Hardware (optional)
SERIAL_PATH="auto"  # or "COM3" on Windows, "/dev/ttyACM0" on Linux
SERIAL_BAUD=9600

# AI Service
AI_BASE_URL="http://localhost:8000"
OLLAMA_URL="http://localhost:11434"

# WebSocket
NEXT_PUBLIC_WS_URL="http://localhost:4000"
```

Set up the database:

```bash
# Generate Prisma client
pnpm db:generate

# Push schema to database
pnpm db:push

# (Optional) Open Prisma Studio
pnpm db:studio
```

Option A: Run all services together

```bash
pnpm dev
```

Option B: Run services separately
Terminal 1 - Web App:
```bash
pnpm dev:ui
```

Terminal 2 - Mock Server (Serial Bridge):

```bash
pnpm dev:bridge
```

Terminal 3 - AI Service:

```bash
cd ai-service
source venv/bin/activate  # On Windows: venv\Scripts\activate
python main.py
```

Once everything is running, the services are available at:

- Web App: http://localhost:3000
- Mock Server API: http://localhost:4000
- AI Service API: http://localhost:8000
- Prisma Studio: http://localhost:5555 (if running)
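As a quick sanity check, you can probe the URLs above from Python; any HTTP response at all (even a 404) means the service is up and listening. This snippet is just a convenience, not part of the project:

```python
import requests

# Ports as configured in .env; any response (even a 404) means the service is listening
services = {
    "Web App": "http://localhost:3000",
    "Mock Server API": "http://localhost:4000",
    "AI Service API": "http://localhost:8000",
}

for name, url in services.items():
    try:
        resp = requests.get(url, timeout=3)
        print(f"{name}: reachable (HTTP {resp.status_code})")
    except requests.RequestException:
        print(f"{name}: not reachable - is it running?")
```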
Project structure:

```
fishlinic/
├── app/              # Next.js application
│   ├── api/          # API routes
│   ├── components/   # React components
│   ├── dashboard/    # Dashboard page
│   ├── vassistant/   # AI assistant page
│   └── ...
├── mock-server/      # Serial bridge server
├── ai-service/       # Python AI service
├── arduino-code/     # Arduino firmware
├── prisma/           # Database schema
└── public/           # Static assets
```
Available scripts:

- `pnpm dev` - Run all services (web + bridge)
- `pnpm dev:ui` - Run the Next.js web app only
- `pnpm dev:bridge` - Run the mock server only
- `pnpm db:generate` - Generate the Prisma client
- `pnpm db:push` - Push the schema to the database
- `pnpm db:migrate` - Run database migrations
- `pnpm db:studio` - Open Prisma Studio
- `pnpm build` - Build for production
- `pnpm start` - Start the production server
- `pnpm start:prod:all` - Start all production services
To set up the Arduino hardware:

- Upload `arduino-code/Working_Fishlinic_Code/Working_Fishlinic_Code.ino` to your Arduino
- Connect the sensors:
  - pH sensor
  - Temperature sensor
  - Dissolved oxygen sensor
- Connect the Arduino to your computer via USB
- Set `SERIAL_PATH` in `.env` (or use "auto" for auto-detection)

If no hardware is connected, the system automatically generates mock data for testing.
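To verify the sensor stream outside the dashboard, you can read the serial port directly with pyserial (`pip install pyserial`; not a project dependency). The port path below and the assumption that the firmware prints one reading per line are guesses, so adjust them to your setup:

```python
import serial  # pip install pyserial

# Match SERIAL_PATH and SERIAL_BAUD from .env ("COM3" on Windows, "/dev/ttyACM0" on Linux)
with serial.Serial("/dev/ttyACM0", baudrate=9600, timeout=2) as port:
    for _ in range(10):
        raw = port.readline().decode("utf-8", errors="replace").strip()
        # The exact payload format depends on the Arduino sketch; just print the raw lines
        print(raw or "(no data received)")
```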
Typical usage:

- Sign up or sign in at `/auth/signin`
- Verify your email (if required)
- Access the dashboard at `/dashboard`
- View real-time telemetry data, charts, and gauges
- Navigate to `/vassistant`
- Click "Initiate Assistant"
- Ask questions about your aquarium
- Request water quality reports
- Profile: `/profile`
- Settings: `/settings`
- Account: `/account`
- `GET /api/telemetry/latest` - Get the latest telemetry data
- `POST /api/telemetry/save` - Save telemetry data
- `GET /api/telemetry/export` - Export telemetry data
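A minimal sketch of calling the telemetry routes above with Python's requests library. The response schema and the `format` query parameter on the export route are not documented here, so treat them as assumptions and inspect the JSON your instance returns:

```python
import requests

BASE = "http://localhost:3000"  # or your deployed dashboard URL

# Fetch the latest reading and print the raw JSON (field names vary by deployment)
latest = requests.get(f"{BASE}/api/telemetry/latest").json()
print(latest)

# Export historical data; the "format" parameter is an assumption - check the route handler
export = requests.get(f"{BASE}/api/telemetry/export", params={"format": "csv"})
with open("telemetry.csv", "wb") as f:
    f.write(export.content)
```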
- `POST /api/assistant/initiate` - Initialize the AI assistant
- `POST /api/assistant/ask` - Ask the assistant a question
- `GET /api/assistant/status` - Get the assistant status
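A hedged sketch of the assistant flow above. The request bodies are assumptions (an empty JSON body for initiate, a `question` field for ask), and these routes may require an authenticated session cookie:

```python
import requests

BASE = "http://localhost:3000"
session = requests.Session()  # reuse cookies if your instance requires a signed-in session

# Start an assistant session (empty body is an assumption)
session.post(f"{BASE}/api/assistant/initiate", json={})

# Check that the assistant reports ready
print(session.get(f"{BASE}/api/assistant/status").json())

# Ask a question (the "question" field name is an assumption)
answer = session.post(
    f"{BASE}/api/assistant/ask",
    json={"question": "How is the water quality today?"},
)
print(answer.json())
```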
- `POST /api/auth/signup` - User registration
- `POST /api/auth/verification/generate` - Generate a verification token
- `POST /api/auth/verification/complete` - Complete verification
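For completeness, a sketch of the signup route; the field names (`email`, `password`, `name`) are assumptions, so check the route handler for the exact schema before relying on this:

```python
import requests

BASE = "http://localhost:3000"

# Field names are assumptions; adjust to match the signup route's validation schema
resp = requests.post(
    f"{BASE}/api/auth/signup",
    json={"email": "user@example.com", "password": "a-strong-password", "name": "Test User"},
)
print(resp.status_code, resp.json())
```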
The feeder API is publicly accessible with rate limiting. All endpoints support CORS and can be called from external services (e.g., Python virtual assistant).
Base URL: https://your-dashboard.vercel.app/api/feeder (production) or http://localhost:3000/api/feeder (development)
Rate Limits:
- Maximum 3 requests per 10 seconds
- Maximum 5 requests per 30 minutes
- Maximum 15 requests per 1 minute
CORS: All origins are allowed by default (configurable via the `ALLOWED_ORIGINS` environment variable)
Trigger an immediate feed operation.
Endpoint: POST /api/feeder/feed
Request Body:
```json
{
  "duration": 2,   // Number of cycles (1-5)
  "source": "api"  // Optional: source identifier
}
```

Response (200 OK):

```json
{
  "status": "ok",
  "action": "feed",
  "cycles": 2,
  "timestamp": "2025-01-15T10:30:00.000Z",
  "hardware": {
    "main": true,
    "secondary": true
  }
}
```

Error Responses:

- `400 Bad Request`: Invalid duration (must be 1-5)
- `429 Too Many Requests`: Rate limit exceeded
- `503 Service Unavailable`: Feeder hardware not connected
Example (Python):
```python
import requests

response = requests.post(
    "https://your-dashboard.vercel.app/api/feeder/feed",
    json={"duration": 2, "source": "python-va"},
    headers={"Content-Type": "application/json"},
)
print(response.json())
```

Get all scheduled feeding times.
Endpoint: GET /api/feeder/schedule
Response (200 OK):
```json
{
  "status": "ok",
  "schedules": [
    {
      "id": "uuid-here",
      "name": "Morning Feed",
      "cron": "07:30",
      "duration": 2,
      "next_run": "2025-01-16T07:30:00.000Z"
    }
  ]
}
```

Example (Python):

```python
import requests

response = requests.get("https://your-dashboard.vercel.app/api/feeder/schedule")
schedules = response.json()["schedules"]
for s in schedules:
    print(f"{s['name']} at {s['cron']}")
```

Create a new scheduled feed.
Endpoint: POST /api/feeder/schedule
Request Body:
```json
{
  "name": "Evening Feed",  // Optional
  "cron": "18:00",         // Time in HH:MM format
  "duration": 3            // Number of cycles (1-5)
}
```

Response (200 OK):

```json
{
  "status": "ok",
  "id": "uuid-here",
  "name": "Evening Feed",
  "cron": "18:00",
  "cycles": 3
}
```

Error Responses:

- `400 Bad Request`: Invalid cron format or duration
- `429 Too Many Requests`: Rate limit exceeded
Example (Python):
```python
import requests

response = requests.post(
    "https://your-dashboard.vercel.app/api/feeder/schedule",
    json={
        "name": "Evening Feed",
        "cron": "18:00",
        "duration": 3,
    },
)
print(response.json())
```

Remove a scheduled feed.
Endpoint: DELETE /api/feeder/schedule/{id}
Response (200 OK):
```json
{
  "status": "ok",
  "deleted": "uuid-here"
}
```

Error Responses:

- `404 Not Found`: Schedule not found
- `429 Too Many Requests`: Rate limit exceeded

Example (Python):

```python
import requests

schedule_id = "uuid-here"
response = requests.delete(
    f"https://your-dashboard.vercel.app/api/feeder/schedule/{schedule_id}"
)
print(response.json())
```

Get the overall feeder system status, including hardware connection and schedules.
Endpoint: GET /api/feeder/status
Response (200 OK):
```json
{
  "device": "fish-feeder",
  "hardware": {
    "connected": true,
    "main": true,
    "secondary": true
  },
  "last_feed": {
    "timestamp": "2025-01-15T10:30:00.000Z",
    "source": "api",
    "success": true,
    "details": "duration=2s"
  },
  "schedules": [
    {
      "id": "uuid-here",
      "name": "Morning Feed",
      "cron": "07:30",
      "duration": 2,
      "next_run": "2025-01-16T07:30:00.000Z"
    }
  ]
}
```

Example (Python):

```python
import requests

response = requests.get("https://your-dashboard.vercel.app/api/feeder/status")
status = response.json()
print(f"Hardware connected: {status['hardware']['connected']}")
print(f"Last feed: {status['last_feed']['timestamp']}")
```

Get only the hardware connection status.
Endpoint: GET /api/feeder/feed-status
Response (200 OK):
```json
{
  "connected": true,
  "main": true,
  "secondary": true,
  "mockMode": false
}
```

Example (Python):

```python
import requests

response = requests.get("https://your-dashboard.vercel.app/api/feeder/feed-status")
hw = response.json()
if hw["connected"]:
    print("Feeder hardware is ready")
else:
    print("Feeder hardware not connected")
```

All endpoints return consistent error responses:
```json
{
  "status": "error",
  "error": "Error message here",
  "retryAfter": 60  // For rate limit errors (seconds)
}
```

HTTP Status Codes:

- `200`: Success
- `400`: Bad Request (invalid parameters)
- `429`: Too Many Requests (rate limit exceeded)
- `503`: Service Unavailable (hardware not connected)
- `500`: Internal Server Error
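Clients that hit the rate limit can back off using the `retryAfter` value. A minimal sketch; the helper name, retry count, and 10-second fallback are illustrative, not part of the API:

```python
import time
import requests

def feed_with_retry(base_url, cycles=2, max_attempts=3):
    """Call the feed endpoint, backing off when a 429 rate-limit response is returned."""
    for _ in range(max_attempts):
        resp = requests.post(f"{base_url}/api/feeder/feed", json={"duration": cycles})
        if resp.status_code != 429:
            return resp.json()
        # Use the advertised retryAfter, falling back to 10 seconds if it is missing
        time.sleep(resp.json().get("retryAfter", 10))
    raise RuntimeError("Feeder is still rate-limited after several attempts")

# Example usage against a local dev server:
# print(feed_with_retry("http://localhost:3000"))
```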
To customize CORS origins, set the `ALLOWED_ORIGINS` environment variable:

```env
# Allow all origins (default)
ALLOWED_ORIGINS="*"

# Allow specific origins
ALLOWED_ORIGINS="https://example.com,https://another.com,http://localhost:3000"
```

If the serial connection to the Arduino fails:

- Ensure the Arduino Serial Monitor is closed
- Check the COM port in Device Manager (Windows) or under `/dev/` (Linux/Mac)
- Verify that the baud rate matches (default: 9600)
- Try setting `SERIAL_PATH` explicitly in `.env` (the snippet below can help find the right port)
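If you are unsure which port the Arduino is on, pyserial can list the candidates (`pip install pyserial`; not part of this repo):

```python
# pip install pyserial
from serial.tools import list_ports

# Print every serial port the OS currently exposes; the Arduino usually shows up
# as COM3 (Windows) or /dev/ttyACM0 / /dev/ttyUSB0 (Linux)
for port in list_ports.comports():
    print(port.device, "-", port.description)
```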
If the database connection fails:

- Verify `DATABASE_URL` is correct
- Ensure PostgreSQL is running
- Run `pnpm db:push` to sync the schema
If the AI assistant is not responding:

- Check that the Python service is running on port 8000
- Verify `AI_BASE_URL` in `.env`
- Check the AI service logs for errors
If real-time updates are not arriving:

- Ensure the mock server is running on port 4000
- Verify `NEXT_PUBLIC_WS_URL` in `.env`
- Check the CORS settings in the mock server (a quick connectivity check is sketched below)
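To confirm the Socket.IO bridge is reachable independently of the dashboard, a minimal client check is possible with python-socketio (`pip install "python-socketio[client]"`; this package and the default connection path are assumptions, not project dependencies):

```python
import socketio  # pip install "python-socketio[client]"

sio = socketio.Client()

@sio.event
def connect():
    print("Connected to the serial bridge on port 4000")

# Use the same URL as NEXT_PUBLIC_WS_URL
sio.connect("http://localhost:4000")
sio.sleep(2)   # give the handshake a moment
sio.disconnect()
```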
This is a Capstone project, but contributions are welcome! Please:
- Fork the repository
- Create a feature branch
- Make your changes
- Submit a pull request
This project was developed for educational purposes as part of the Sejong University Capstone Design Course.
- Developer: Azizbek Arzikulov
- Institution: Sejong University
- Course: Capstone Design Course 2025
Other team members:
- Leader: Tran Dai Viet Hung
- Hardware Engineer: Azizjon Kamoliddinov
- AI Software Development Team: Nomungerel Mijiddor & Phyo Thiri Khaing
For detailed architecture and technical documentation, see architecture.md.
© 2025 Team Fishlinic - Sejong University Capstone Project