Privacy-First • Powerful • Portable
Run powerful AI models completely offline on your Android device. No internet required. No subscriptions. Complete privacy.
Download • Features • Documentation • Community
nano.ai is a comprehensive AI ecosystem for Android that puts powerful AI capabilities directly in your pocket, completely offline. Chat with LLMs, generate images, use voice AI, and inject custom knowledge, all without an internet connection.
- Privacy First - Your data never leaves your device in offline mode
- Fully Featured - Chat, images, voice, and custom knowledge in one app
- Hybrid Mode - Switch between offline and cloud models seamlessly
- Free & Open - No subscriptions, no paywalls, Apache 2.0 licensed
- Extensible - Plugin system for unlimited customization
- Offline Models: Run GGUF models (Llama, Mistral, Gemma, Phi) locally
- Cloud Access: Connect to 100+ models via OpenRouter (GPT-4, Claude, Gemini)
- Smart Streaming: Real-time token generation with context preservation
- Conversation History: Full chat persistence with SQLite storage
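The streaming-with-context behavior can be sketched in a few lines. This is a minimal illustration, not nano.ai's actual API: the class and method names are hypothetical, and in the real app tokens would arrive from llama.cpp via JNI rather than from a list.

```kotlin
// Hypothetical sketch: emit tokens to the UI as they arrive while
// appending them to a running context, so the next turn sees the
// whole conversation. Trims oldest text past a context budget.
class ChatSession(private val maxContextChars: Int = 8192) {
    private val context = StringBuilder()

    fun stream(tokens: List<String>): Sequence<String> = sequence {
        for (token in tokens) {
            context.append(token)   // preserve context as tokens arrive
            yield(token)            // surface each token in real time
        }
        // Drop the oldest characters if the budget is exceeded.
        if (context.length > maxContextChars) {
            context.delete(0, context.length - maxContextChars)
        }
    }

    fun contextSnapshot(): String = context.toString()
}
```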
- Stable Diffusion 1.5: Generate images completely offline
- Censored & Uncensored: Choose the model that fits your needs
- Mobile Optimized: Runs on phones with 6GB+ RAM
- Fast Generation: 30-90 seconds depending on device
- Text-to-Speech: 11 professional voices (American & British accents)
- Speech-to-Text: Offline Whisper-powered recognition
- No Network Latency: All processing happens on-device
- Hands-Free: Perfect for driving or multitasking
- Custom Data-Packs: Inject Wikipedia, docs, notes into AI context
- No Retraining: Add knowledge without model retraining
- Dynamic Mounting: Attach/detach knowledge bases on the fly
- Universal Support: Works with both local and cloud models
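The "no retraining" trick is simply context injection: mounted data-pack text is placed ahead of the user's question in the prompt. A minimal sketch, with hypothetical names (`DataPack`, `buildPrompt` are illustrative, not nano.ai's real types):

```kotlin
// Hypothetical sketch of data-pack mounting: mounted packs are
// prepended to the prompt as reference material, so the model gains
// knowledge without any fine-tuning or retraining.
data class DataPack(val name: String, val content: String)

fun buildPrompt(mounted: List<DataPack>, userQuestion: String): String {
    val knowledge = mounted.joinToString("\n\n") { pack ->
        "### ${pack.name}\n${pack.content}"
    }
    return buildString {
        if (knowledge.isNotEmpty()) {
            appendLine("Use the following reference material when answering:")
            appendLine(knowledge)
            appendLine()
        }
        append("Question: $userQuestion")
    }
}
```

Detaching a pack is then just rebuilding the prompt without it, which is why mounting works for both local and cloud models.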
- Web Search: Real-time information retrieval
- Web Scraper: Extract content from any URL
- DataHub: Manage custom knowledge bases
- Document Viewer: Analyze PDFs and text files
- Extensible: Build your own plugins
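A plugin system like this typically boils down to a small contract the host app invokes. The interface below is a hypothetical sketch of what such a contract could look like; nano.ai's real plugin API may differ.

```kotlin
// Hypothetical plugin contract: each plugin exposes a name and a
// run() entry point that the host app calls with user input.
interface NanoPlugin {
    val name: String
    fun run(input: String): String
}

// Example plugin: a trivial stand-in for a web-scraper helper that
// extracts the host portion of a URL.
class HostExtractorPlugin : NanoPlugin {
    override val name = "host-extractor"
    override fun run(input: String): String =
        input.removePrefix("https://")
             .removePrefix("http://")
             .substringBefore('/')
}
```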
| Chat Interface | Model Selection | Settings |
|---|---|---|
| Multi-modal AI conversations | 100+ models available | Complete customization |
Screenshots coming soon
Download the latest APK from Releases and install on Android 8.0+ devices.
```bash
# Clone repository
git clone https://github.com/NextGenXplorer/Nano.AI-Native.git
cd nano.ai

# Build with Gradle
./gradlew assembleDebug

# Install on connected device
./gradlew installDebug
```
1. **Load a Chat Model**
   - Download a GGUF model from HuggingFace
   - Recommended: `Llama-3-8B-Q4_K_M.gguf` (4.5GB)
   - Import via Settings → Local Models
2. **Enable Image Generation**
   - Download the Stable Diffusion 1.5 model
   - Import via Settings → Image Models
3. **Activate Voice AI**
   - TTS voices included by default
   - Download Whisper for STT via Settings → Voice Models
- Get API key from OpenRouter.ai
- Enter the key in Settings → API Configuration
- Access 100+ cloud models instantly
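OpenRouter exposes an OpenAI-style chat-completions endpoint (`POST https://openrouter.ai/api/v1/chat/completions` with an `Authorization: Bearer <key>` header). The sketch below builds the request body by hand to stay dependency-free; the app itself uses Retrofit + OkHttp, and the function name here is illustrative.

```kotlin
// Minimal sketch of an OpenRouter chat-completions request body.
// Model IDs use the "provider/model" form, e.g. "openai/gpt-4o".
fun chatRequestBody(model: String, userMessage: String): String {
    // Escape backslashes and quotes so the hand-built JSON stays valid.
    val escaped = userMessage
        .replace("\\", "\\\\")
        .replace("\"", "\\\"")
    return """{"model":"$model","messages":[{"role":"user","content":"$escaped"}]}"""
}
```

Because the schema matches OpenAI's, the same body works for any of the 100+ models by swapping the `model` string.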
Minimum:
- Android 8.0+ (API 26)
- 4GB RAM
- 2GB storage

Recommended:
- Android 10+
- 6GB+ RAM (8GB preferred)
- Snapdragon 8 Gen 1 or equivalent
- 5GB+ storage

Optimal:
- Android 11+
- 12GB+ RAM
- Snapdragon 8 Gen 3 or equivalent
- 10GB+ storage
Core Technologies:
- Language: Kotlin + C++
- UI: Jetpack Compose
- Local Inference: llama.cpp (GGUF models)
- Image Generation: Stable Diffusion C++
- Voice: Sherpa-ONNX (TTS/STT)
- Cloud API: Retrofit + OkHttp
- Database: Room (SQLite)
- Async: Kotlin Coroutines + Flow
Performance:
- Quantized model support (Q4_K_M, Q5_K_S)
- Context caching
- Memory-mapped loading
- NPU acceleration (where available)
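Quantization is what makes these model sizes mobile-friendly: a GGUF file's size is roughly parameters × bits-per-weight ÷ 8. Q4_K_M averages roughly 4.8 bits per weight (an assumption; the exact figure varies by architecture), which is why an 8B-parameter model lands near the 4.5GB quoted above.

```kotlin
// Back-of-envelope size estimate for a quantized GGUF model:
// parameters * bits-per-weight / 8 bytes, reported in GB.
fun estimatedSizeGb(parameters: Double, bitsPerWeight: Double): Double =
    parameters * bitsPerWeight / 8.0 / 1e9

// e.g. estimatedSizeGb(8.0e9, 4.8) ≈ 4.8 GB, versus ~16 GB at
// full 16-bit precision.
```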
We welcome contributions! Here's how to get started:
1. Fork the repository
2. Create a feature branch (`git checkout -b feature/amazing-feature`)
3. Commit your changes (`git commit -m 'Add amazing feature'`)
4. Push to the branch (`git push origin feature/amazing-feature`)
5. Open a Pull Request
- Bug fixes and device compatibility
- Documentation improvements
- Testing on various devices
- New plugin development
- Internationalization
Distributed under the Apache 2.0 License. See LICENSE for details.
What you can do:
- Commercial use
- Modification
- Distribution
- Private use
- Patent use

Requirements:
- Include license and copyright notice
- Document changes made
nano.ai is built on the shoulders of giants:
- llama.cpp - Efficient LLM inference
- Sherpa-ONNX - Offline speech synthesis
- Stable Diffusion - Image generation
- OpenRouter - Unified API for cloud models
- Jetpack Compose - Modern Android UI
- Issues - Report bugs or request features
- Discussions - Ask questions and share ideas
- Email: support@nano.ai
Q: Will this drain my battery?
A: Local inference is power-intensive. For extended use, keep your device plugged in. Cloud mode uses minimal battery.
Q: How big are the model files?
A: GGUF models: 0.5GB-8GB. Stable Diffusion: ~2GB. Voice models: 50-500MB.
Q: Is my data really private?
A: In offline mode, absolutely nothing leaves your device. Verify in our open-source code.
Q: Can I use my own API keys?
A: Yes! Bring your own OpenRouter key. You control costs and usage.
Q: Does it support iOS?
A: Not currently. Android only due to platform constraints.
Built with ❤️ by NextGenXplorer
Privacy-first AI for everyone, everywhere
If nano.ai empowers your AI journey, please ⭐ star the repository!
Download • Report Bug • Request Feature
Powered by llama.cpp • Sherpa-ONNX • Stable Diffusion • OpenRouter • Jetpack Compose