noAI is a lightweight macOS app that turns Ollama-powered language models into a coding and creativity companion.
Think of it as a local-first AI agent where you can spin up different models (Llama, Gemma, code-specialized LLMs, etc.) and use them for tasks like:
- Coding help — write, debug, or refactor code with context-aware suggestions
- Model switching — easily swap between multiple Ollama models depending on your use case
- Creativity and brainstorming — not just for coding, but also writing, ideation, or general Q&A
- Local-first privacy — runs on your machine via Ollama, so your prompts and data stay private
It’s meant to feel like a “coding buddy” with a bit of personality: quick, local, customizable, and completely offline.
- Multiple Model Support: Run and swap between LLMs like Llama, Gemma, and DeepSeek directly on macOS
- LoRA Fine-tuning: Load lightweight adapters to specialize a model on your own data
- RAG (Retrieval-Augmented Generation): Connect your agent to local files, docs, or databases for grounded answers
- MCP (Model Context Protocol): Extend your agent with tools, APIs, or external systems for custom workflows
- Thread Management: Create, delete, rename, and organize chat threads
- Export Options: Save chats as JSON or Markdown for later reference
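As one example of what you can do with a JSON export, a small script can convert a thread back to Markdown. The schema below (`title`, `messages`, `role`, `content`) is a hypothetical illustration for the sketch, not necessarily the app's actual export format:

```python
def thread_to_markdown(thread: dict) -> str:
    """Render one exported chat thread (assumed schema) as Markdown."""
    lines = [f"# {thread.get('title', 'Untitled thread')}", ""]
    for msg in thread.get("messages", []):
        role = msg.get("role", "user").capitalize()
        lines.append(f"**{role}:** {msg.get('content', '')}")
        lines.append("")
    return "\n".join(lines)

# Hypothetical exported thread for demonstration
example = {
    "title": "Refactoring help",
    "messages": [
        {"role": "user", "content": "Can you simplify this loop?"},
        {"role": "assistant", "content": "Sure, use a list comprehension."},
    ],
}
print(thread_to_markdown(example))
```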
1. **Install Ollama**

   Follow the official installation guide: https://github.com/ollama/ollama. On macOS, you can use Homebrew:

   ```sh
   brew install ollama
   ```

2. **Run the Ollama server**

   Start the Ollama daemon (it runs the models locally):

   ```sh
   ollama serve
   ```

   By default it listens on `http://127.0.0.1:11434`.

3. **Pull a model**

   Example pulling the Llama 3 model:

   ```sh
   ollama pull llama3
   ```

   You can also pull other models, e.g. `gemma`, `codellama`, or any from Ollama's model library.
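To sanity-check the setup before launching noAI, you can query Ollama's HTTP API directly; `GET /api/tags` lists the models pulled onto your machine. A minimal Python sketch using only the standard library:

```python
import json
import urllib.request

OLLAMA_URL = "http://127.0.0.1:11434"

def parse_model_names(tags_response: dict) -> list[str]:
    """Extract model names from an /api/tags JSON response."""
    return [m["name"] for m in tags_response.get("models", [])]

def list_local_models(base_url: str = OLLAMA_URL) -> list[str]:
    """Ask the running Ollama server which models are available locally."""
    with urllib.request.urlopen(f"{base_url}/api/tags") as resp:
        return parse_model_names(json.load(resp))

if __name__ == "__main__":
    try:
        print("Available models:", list_local_models())
    except OSError:
        # Connection refused usually means the daemon isn't running
        print("Ollama server not reachable; is `ollama serve` running?")
```

If the list is empty, pull a model first; noAI can only offer models that Ollama already has.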
noAI can also generate images in dedicated “Image Generation” threads using Apple’s Core ML Stable Diffusion models.
Go to Apple’s Core ML Stable Diffusion collection on Hugging Face:
Pick a model you like (e.g. Stable Diffusion 1.5 or 2.1) and download / clone it.
For each model, you’ll end up with a folder structure similar to:
```
coreml-stable-diffusion-*-*/        # model root
├── original/
│   ├── compiled/                   <-- ✅ this is the folder you need
│   └── packages/
└── split_einsum/
    ├── compiled/
    └── packages/
```
For noAI you only need the original/compiled folder for the model you want to use.
- ✅ Required: `original/compiled/` (this is what the app points at)
- ❌ Optional to keep or remove: the `packages/` folders, or the unused variant (`split_einsum/`) if you want to save disk space
- Open noAI.
- In the sidebar, expand Settings.
- In the Image generation section:
- Click “Choose folder…”.
- Select the `original/compiled` folder of the Core ML SD model you downloaded.
  (For example: `.../coreml-stable-diffusion-v1-5/coreml-stable-diffusion-v1-5/original/compiled`.)
- The status should change to something like “Stable Diffusion ready”.
The app will remember this folder using a security-scoped bookmark, so you don’t need to reselect it every launch.
- In the sidebar header, click New ▾ → New image generation.
- This creates a special image thread:
- The composer shows an “Image generation” pill (no model selector).
- Every prompt you send will:
- Add your text message.
- Generate an image locally with Stable Diffusion.
- Display the image as a bubble in the chat.
Your local LLM chat threads and image-generation threads are completely separate, so you never have to think about which LLM is “linked” to image generation: image threads always use the selected Core ML Stable Diffusion model.
Ollama Cloud models let you run large models (*:cloud) on Ollama’s hosted hardware while still using your local noAI app.
From your terminal:
```sh
ollama signin
```

Follow the instructions to sign in or create an account in your browser.
Once signed in, you can pull a cloud model directly:
```sh
ollama pull gpt-oss:120b-cloud
```

Cloud models use the `:cloud` suffix (for example, `gpt-oss:120b-cloud`). They appear in your model list just like local models, but when you run them, the heavy lifting happens on Ollama's servers.
You have two options to get started:
- Go to the Releases page.
- Download the latest `.dmg`.
- Drag noAI.app into your Applications folder.
- On first launch, macOS may warn you since the app isn’t signed. Right-click → Open to approve.
1. Clone this repository:

   ```sh
   git clone https://github.com/NodleCode/noai
   cd noai
   ```

2. Open the Xcode project and build the app:

   ```sh
   open noai.xcodeproj
   ```

3. Run the app in Xcode, or build and install it on your system.
The app will automatically try to connect to `http://127.0.0.1:11434` (you can adjust this in settings).
Start a new thread and chat with a model. Messages stream in real-time.
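Under the hood, Ollama streams responses as newline-delimited JSON, which is what makes token-by-token streaming possible. A rough Python sketch of a streaming client against the documented `/api/chat` endpoint (error handling omitted for brevity):

```python
import json
import urllib.request

def extract_content(line: bytes):
    """Parse one NDJSON line from /api/chat and yield its text fragment, if any."""
    chunk = json.loads(line)
    if not chunk.get("done"):  # the final chunk has done=true and no new text
        yield chunk.get("message", {}).get("content", "")

def stream_chat(model: str, prompt: str,
                base_url: str = "http://127.0.0.1:11434"):
    """Yield assistant text fragments from Ollama's streaming chat endpoint."""
    payload = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": True,
    }).encode()
    req = urllib.request.Request(
        f"{base_url}/api/chat",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        for line in resp:  # Ollama streams one JSON object per line
            yield from extract_content(line)

# Usage (requires a running Ollama server and a pulled model):
# for token in stream_chat("llama3", "Say hello in one word."):
#     print(token, end="", flush=True)
```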
Swap between Ollama models like Llama, Gemma, or Code LLMs.

Organize your work with multiple chat threads. Easily rename or delete them.
Save any thread as JSON or Markdown for later use. Export all threads in one go.
- Open an issue if you find a bug or have a feature request.
- Submit a pull request if you’d like to add or improve functionality.
- Ollama – local-first LLM runtime
MIT License – see LICENSE for details.


