# taracode

**DevOps & Cloud AI Assistant**
Expert guidance for Kubernetes, Terraform, Docker, and multi-cloud deployments.
Runs locally with Ollama for complete privacy.



## Why taracode?

- **DevOps Expertise** - Specialized in Kubernetes, Terraform, Docker, CI/CD, and cloud platforms
- **58 Built-in Tools** - DevOps, security scanning, file operations, Git, web search
- **Multi-Agent System** - 7 specialized agents for complex tasks
- **Privacy-First** - Runs fully locally with Ollama; your data never leaves your machine
- **No Account Required** - Open source; just install and use

## Quick Start

### 1. Install Ollama

```bash
# macOS / Linux
brew install ollama

# Or download from https://ollama.ai
```

### 2. Pull a Model

```bash
ollama pull gemma3:27b    # Recommended (16GB+ RAM)
ollama pull gemma3:12b    # For limited hardware
```
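Before moving on, it can help to confirm the pull completed. A minimal check, assuming `ollama` is on your PATH:

```shell
# Verify the pulled models are available locally (assumes ollama is on PATH).
if command -v ollama >/dev/null 2>&1; then
  ollama list            # lists each downloaded model with its size
else
  echo "ollama not found on PATH"
fi
```

`ollama list` should show the model you just pulled; if it is missing, re-run the pull before starting taracode.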

### 3. Install taracode

**Quick install (recommended):**

```bash
curl -fsSL https://code.tara.vision/install.sh | bash
```

**Homebrew (macOS / Linux):**

```bash
brew install tara-vision/tap/taracode
```

**Go install:**

```bash
go install github.com/tara-vision/taracode@latest
```

**Manual download:** download binaries from GitHub Releases.

### 4. Run

```bash
cd your-project
taracode
> /init    # Initialize project features
```

That's it! Start asking questions about your infrastructure.

## Features

### Screen Monitoring (`/watch`)

Let the AI watch your screen and catch errors before you do:

```text
> /watch this          # Capture and analyze all screens now
> /watch start         # Start continuous monitoring
> /watch stop          # Stop monitoring
```

### Multi-Agent System

Seven specialized agents work together on complex tasks:

| Agent | Specialty |
| --- | --- |
| Planner | Task decomposition and dependency analysis |
| Coder | Code generation and editing |
| Tester | Test execution and output analysis |
| Reviewer | Code review and quality checks |
| DevOps | Infrastructure and deployment operations |
| Security | Security scanning and vulnerability analysis |
| Diagnostics | Failure analysis and root cause detection |

```text
> /agent list          # List all agents
> /agent use security  # Route the next prompt to a specific agent
```

### Autonomous Task Execution (`/task`)

Plan and execute multi-step tasks with checkpoints:

```text
> /task "Add authentication to the API"
> /task "Deploy to production with blue-green strategy"
> /task templates      # List built-in templates
```

### Project Memory

Retain project-specific knowledge across sessions:

```text
> /remember We use PostgreSQL for production databases
> /remember Always run tests before pushing #workflow
> /memory search database
```

### DevOps Tools

| Category | Tools |
| --- | --- |
| Kubernetes | `kubectl` get/apply/delete/describe/logs/exec, `helm` list/install |
| Terraform | init, plan, apply, destroy, output, state |
| Docker | build, ps, logs, compose, exec |
| AWS | `aws` CLI, ECS, EKS operations |
| Azure | `az` CLI, AKS operations |
| GCP | `gcloud` CLI, GKE operations |
| Security | trivy, gitleaks, SAST, tfsec, kubesec, dependency audit |

### Security Mode

Full DevSecOps capabilities with audit logging:

```text
> /mode security       # Switch to security mode

# Security scanning
> Scan this image for vulnerabilities: nginx:latest
> Check for secrets in the current directory
> Run a SAST scan on the codebase
```
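The scans above wrap standard open-source scanners; for comparison, the equivalent direct CLI invocations look roughly like the sketch below (assuming trivy and gitleaks are installed, with guards so it degrades gracefully when they are not):

```shell
# Run the scanners that back taracode's security mode directly.
run_scans() {
  echo "trivy:"
  if command -v trivy >/dev/null 2>&1; then
    trivy image nginx:latest          # image vulnerability scan
  else
    echo "  not installed"
  fi
  echo "gitleaks:"
  if command -v gitleaks >/dev/null 2>&1; then
    gitleaks detect --source .        # secret scan of the current directory
  else
    echo "  not installed"
  fi
}
run_scans
```

Running the tools directly is useful for CI pipelines; inside taracode, the natural-language prompts above reach the same scanners with results captured in the audit log.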

## Commands

| Command | Description |
| --- | --- |
| `/init` | Initialize project |
| `/mode` | Switch mode (devops, security) |
| `/model` | Switch between models |
| `/task` | Execute multi-step tasks |
| `/agent` | Manage specialized agents |
| `/watch` | Screen monitoring |
| `/memory` | Project memory management |
| `/permissions` | Tool permission controls |
| `/audit` | Security audit log |
| `/history` | File operation history |
| `/undo` | Undo file modifications |
| `/diff` | Show session changes |
| `/tools` | List available tools |
| `/upgrade` | Check for and install updates |
| `/context` | Context window budget breakdown |
| `/compact` | Force conversation compaction |
| `/stats` | Session statistics |
| `/hosts` | Multi-host status (v2.0) |
| `/help` | Show help |

## Configuration

Create `~/.taracode/config.yaml`:

```yaml
# Single host (simple setup)
host: http://localhost:11434

# Multi-host setup (v2.0) - for multiple Ollama servers
hosts:
  primary:
    url: http://gpu-server:11434
    models: [ gemma3:27b, qwen2.5-coder:32b ]
    priority: 1
  local:
    url: http://localhost:11434
    fallback: primary      # Use primary if local is down
    priority: 2
default_host: primary

# Model generation options (v2.0.4)
model:
  temperature: 0.7     # Sampling randomness (0.0-2.0)
  top_p: 0.9           # Nucleus sampling threshold (0.0-1.0)
  num_predict: 0       # Max tokens per response (0 = model default)

# Search
search:
  primary: duckduckgo
  fallback: searxng
  brave_api_key: ""    # Optional: Brave Search API

# Memory
memory:
  enabled: true
  auto_capture: true

# Per-agent host assignment
agents:
  coder:
    host: primary
    model: qwen2.5-coder:32b
  reviewer:
    host: local
    model: llama3.2:3b
```

See `config.example.yaml` for all options.
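Before committing host URLs to `config.yaml`, each Ollama endpoint can be probed for reachability via Ollama's standard `/api/tags` endpoint. A minimal sketch, assuming `curl` is available; the URLs are the example values from the config above:

```shell
# Probe each candidate Ollama host before writing it into config.yaml.
# URLs mirror the example config; replace them with your own servers.
check_host() {
  url="$1"
  if curl -fsS --max-time 3 "$url/api/tags" >/dev/null 2>&1; then
    echo "ok: $url"
  else
    echo "unreachable: $url"
  fi
}
check_host "http://localhost:11434"
check_host "http://gpu-server:11434"
```

A host that reports `unreachable` will only ever be used through its `fallback` entry, so it is worth fixing connectivity before relying on the multi-host setup.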

## Supported LLM Backends

| Backend | Setup | Notes |
| --- | --- | --- |
| Ollama | `brew install ollama` | Recommended; easiest setup |
| vLLM | Self-hosted | For production deployments |
| llama.cpp | Self-hosted | Lightweight option |

## Development

```bash
make deps      # Install dependencies
make build     # Build binary
make test      # Run tests
make install   # Install to /usr/local/bin
```

See `CONTRIBUTING.md` for development guidelines.

## Contributing

Contributions are welcome! Please read our Contributing Guide and Code of Conduct.

## Security

For security issues, please see our Security Policy.

## License

MIT License - see LICENSE for details.


Built with ❤️ by Tara Vision · Created by Dejan Stefanoski
