DevOps & Cloud AI Assistant
Expert guidance for Kubernetes, Terraform, Docker, and multi-cloud deployments.
Runs locally with Ollama for complete privacy.
- DevOps Expertise - Specialized in Kubernetes, Terraform, Docker, CI/CD, and cloud platforms
- 58 Built-in Tools - DevOps, security scanning, file operations, git, web search
- Multi-Agent System - 7 specialized agents for complex tasks
- Privacy-first - Runs fully local with Ollama, your data never leaves your machine
- No Account Required - Open source, just install and use
```bash
# macOS / Linux
brew install ollama
# Or download from https://ollama.ai

ollama pull gemma3:27b   # Recommended (16GB+ RAM)
ollama pull gemma3:12b   # For limited hardware
```

Quick install (recommended):

```bash
curl -fsSL https://code.tara.vision/install.sh | bash
```

Homebrew (macOS / Linux):

```bash
brew install tara-vision/tap/taracode
```

Go install:

```bash
go install github.com/tara-vision/taracode@latest
```

Manual download: download binaries from GitHub Releases.
```
cd your-project
taracode
> /init   # Initialize project features
```

That's it! Start asking questions about your infrastructure.
Let the AI watch your screen and catch errors before you do:
```
> /watch this    # Capture and analyze all screens now
> /watch start   # Start continuous monitoring
> /watch stop    # Stop monitoring
```

7 specialized agents work together on complex tasks:
| Agent | Specialty |
|---|---|
| Planner | Task decomposition and dependency analysis |
| Coder | Code generation and editing |
| Tester | Test execution and output analysis |
| Reviewer | Code review and quality checks |
| DevOps | Infrastructure and deployment operations |
| Security | Security scanning and vulnerability analysis |
| Diagnostics | Failure analysis and root cause detection |
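The routing idea behind these agents can be sketched in a few lines (a hypothetical illustration; the matching logic below is an assumption, not taracode's actual implementation):

```python
# Hypothetical specialty-based router mirroring the agent table above.
# The naive keyword matching is an assumption for illustration only.
AGENTS = ["planner", "coder", "tester", "reviewer",
          "devops", "security", "diagnostics"]

def route(prompt, forced=None):
    """Pick an agent: an explicit choice (/agent use NAME) wins,
    otherwise fall back to a naive keyword match on the prompt."""
    if forced:
        return forced
    lowered = prompt.lower()
    for name in AGENTS:
        if name in lowered:
            return name
    return "planner"  # default: let the planner decompose the task

print(route("run a security scan on this image"))   # security
print(route("anything at all", forced="reviewer"))  # reviewer
```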
```
> /agent list           # List all agents
> /agent use security   # Route next prompt to a specific agent
```

Plan and execute multi-step tasks with checkpoints:
```
> /task "Add authentication to the API"
> /task "Deploy to production with blue-green strategy"
> /task templates   # List built-in templates
```

Remember project-specific knowledge across sessions:

```
> /remember We use PostgreSQL for production databases
> /remember Always run tests before pushing #workflow
> /memory search database
```

| Category | Tools |
|---|---|
| Kubernetes | kubectl get/apply/delete/describe/logs/exec, helm list/install |
| Terraform | init, plan, apply, destroy, output, state |
| Docker | build, ps, logs, compose, exec |
| AWS | AWS CLI, ECS/EKS operations |
| Azure | Azure CLI (az), AKS operations |
| GCP | gcloud CLI, GKE operations |
| Security | trivy, gitleaks, SAST, tfsec, kubesec, dependency audit |
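Tool execution is typically gated by permission checks (see the /permissions command). A minimal sketch of what an allowlist gate could look like (the prefixes and policy here are assumptions, not taracode's real rules):

```python
# Illustrative permission gate: read-only commands pass automatically,
# destructive ones are blocked pending approval. Both lists are assumptions.
READ_ONLY = ("kubectl get", "kubectl describe", "terraform plan", "docker ps")
DESTRUCTIVE = ("kubectl delete", "terraform destroy", "terraform apply")

def permitted(command):
    """Return True only for commands matching a read-only prefix."""
    if command.startswith(DESTRUCTIVE):
        return False
    return command.startswith(READ_ONLY)

print(permitted("kubectl get pods -n prod"))         # True
print(permitted("terraform destroy -auto-approve"))  # False
```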
Full DevSecOps capabilities with audit logging:
```
> /mode security   # Switch to security mode

# Security scanning
> Scan this image for vulnerabilities: nginx:latest
> Check for secrets in the current directory
> Run a SAST scan on the codebase
```

| Command | Description |
|---|---|
| /init | Initialize project |
| /mode | Switch mode (devops, security) |
| /model | Switch between models |
| /task | Execute multi-step tasks |
| /agent | Manage specialized agents |
| /watch | Screen monitoring |
| /memory | Project memory management |
| /permissions | Tool permission controls |
| /audit | Security audit log |
| /history | File operation history |
| /undo | Undo file modifications |
| /diff | Show session changes |
| /tools | List available tools |
| /upgrade | Check for and install updates |
| /context | Context window budget breakdown |
| /compact | Force conversation compaction |
| /stats | Session statistics |
| /hosts | Multi-host status (v2.0) |
| /help | Show help |
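All of the commands above share one shape, a slash-prefixed name followed by optional arguments, so a client-side parser is tiny (an illustrative sketch, not taracode's actual parser):

```python
def parse_slash(line):
    """Split '/agent use security' into ('agent', ['use', 'security']).
    Plain prompts (no leading '/') come back as (None, line) untouched.
    Illustrative sketch only; taracode's real parser may differ."""
    if not line.startswith("/"):
        return None, line
    parts = line[1:].split()
    if not parts:          # a bare "/" is not a command
        return None, line
    return parts[0], parts[1:]

print(parse_slash("/agent use security"))  # ('agent', ['use', 'security'])
print(parse_slash("/help"))                # ('help', [])
```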
Create ~/.taracode/config.yaml:
```yaml
# Single Host (simple setup)
host: http://localhost:11434

# Multi-Host Setup (v2.0) - for multiple Ollama servers
hosts:
  primary:
    url: http://gpu-server:11434
    models: [gemma3:27b, qwen2.5-coder:32b]
    priority: 1
  local:
    url: http://localhost:11434
    fallback: primary   # Use primary if local is down
    priority: 2
default_host: primary

# Model generation options (v2.0.4)
model:
  temperature: 0.7   # Sampling randomness (0.0-2.0)
  top_p: 0.9         # Nucleus sampling threshold (0.0-1.0)
  num_predict: 0     # Max tokens per response (0 = model default)

# Search
search:
  primary: duckduckgo
  fallback: searxng
  brave_api_key: ""   # Optional: Brave Search API

# Memory
memory:
  enabled: true
  auto_capture: true

# Per-agent host assignment
agents:
  coder:
    host: primary
    model: qwen2.5-coder:32b
  reviewer:
    host: local
    model: llama3.2:3b
```

See config.example.yaml for all options.
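The fallback semantics in the multi-host config above can be sketched as follows; how taracode actually probes reachability is an assumption here:

```python
# Mirror of the hosts section above: `local` falls back to `primary`
# when it is unreachable. The probing mechanism is an assumption.
HOSTS = {
    "primary": {"url": "http://gpu-server:11434", "priority": 1},
    "local": {"url": "http://localhost:11434", "priority": 2,
              "fallback": "primary"},
}

def resolve(name, is_up):
    """Return the URL for `name`; `is_up(url)` reports reachability."""
    host = HOSTS[name]
    if is_up(host["url"]):
        return host["url"]
    fallback = host.get("fallback")
    if fallback and is_up(HOSTS[fallback]["url"]):
        return HOSTS[fallback]["url"]
    raise RuntimeError("no reachable host for " + name)

# Simulate the local server being down: requests fall back to primary.
up = {"http://gpu-server:11434"}
print(resolve("local", lambda url: url in up))  # http://gpu-server:11434
```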
| Backend | Setup | Notes |
|---|---|---|
| Ollama | brew install ollama | Recommended, easiest setup |
| vLLM | Self-hosted | For production deployments |
| llama.cpp | Self-hosted | Lightweight option |
```bash
make deps      # Install dependencies
make build     # Build binary
make test      # Run tests
make install   # Install to /usr/local/bin
```

See CONTRIBUTING.md for development guidelines.
Contributions are welcome! Please read our Contributing Guide and Code of Conduct.
For security issues, please see our Security Policy.
MIT License - see LICENSE for details.
Built with ❤️ by Tara Vision · Created by Dejan Stefanoski