Form filling:

```bash
$ nlp2cmd -r "otwórz https://www.prototypowanie.pl/kontakt/ i wypelnij formularz i wyslij"
🚀 Run Mode: otwórz https://www.prototypowanie.pl/kontakt/ i wypelnij formularz i wyslij
```

```yaml
dsl: auto
query: otwórz https://www.prototypowanie.pl/kontakt/ i wypelnij formularz i wyslij
status: success
confidence: 1.0
generated_command: playwright open https://www.prototypowanie.pl/kontakt/ && fill_form && submit
```

When a generated command does not work in the given context, the tool reacts intelligently and proposes one that does.

| Project | Description |
|---|---|
| NLP2CMD + App2Schema | Automatic schema generation from any applications, services |
Natural Language to Domain-Specific Commands - Production-ready framework for transforming natural language into DSL commands with full safety, validation, and observability.
```bash
# Install with all dependencies
pip install nlp2cmd[all]

# Set up the external dependencies cache (Playwright browsers)
nlp2cmd cache auto-setup

# Start using with enhanced output
nlp2cmd "uruchom usługę nginx"
```
```bash
systemctl start nginx
```

```yaml
dsl: auto
query: uruchom usługę nginx
status: success
confidence: 1.0
generated_command: systemctl start nginx
```

```bash
$ nlp2cmd "znajdź pliki większe niż 100MB"
find . -type f -size +100MB

$ nlp2cmd "pokaż pliki użytkownika większe niż 50GB"
find $HOME -type f -size +50GB
```

- SQL - Natural language to SQL queries
- Shell - System commands and file operations
- Docker - Container management
- Kubernetes - K8s orchestration
- Browser - Web automation and search (Google, GitHub, Amazon)
- DQL - Domain Query Language
- Polish Language Support - Native Polish NLP with spaCy (87%+ accuracy)
- Fuzzy Matching - Typo tolerance with rapidfuzz
- Lemmatization - Word form normalization
- Priority Intent Detection - Smart command classification
- Enhanced Entity Extraction - Time, size, username, path detection
- Time-based Search - `znajdź pliki zmodyfikowane ostatnie 7 dni`
- Size-based Filtering - `znajdź pliki większe niż 100MB`
- Combined Filters - `znajdź pliki .log większe niż 10MB starsze niż 30 dni`
- User Directory Operations - `pokaż pliki użytkownika` → `find $HOME -type f`
- Username-specific Paths - `pokaż foldery użytkownika root` → `ls -la /root`
- APT Installation - `zainstaluj vlc` → `sudo apt-get install vlc`
- Multi-variant Support - Polish and English package commands
- Cross-platform Ready - OS detection and appropriate commands
- Pattern Matching - Multi-word keyword detection
- Confidence Scoring - Intent detection reliability
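To illustrate how typo tolerance and keyword detection can fit together, here is a minimal sketch. It is not the nlp2cmd implementation: the typo table, keyword sets, and function names are hypothetical, and the stdlib `difflib` stands in for rapidfuzz's fuzzy matching.

```python
import difflib

# Hypothetical tables -- the real nlp2cmd keyword tables are far larger.
TYPO_CORRECTIONS = {"dokcer": "docker", "kubernets": "kubernetes"}
DOMAIN_KEYWORDS = {
    "docker": {"docker", "container", "kontener"},
    "kubernetes": {"kubectl", "kubernetes", "deployment"},
    "shell": {"find", "pliki", "files"},
}

def normalize(text: str) -> list[str]:
    """Lowercase, tokenize, and apply known typo corrections."""
    return [TYPO_CORRECTIONS.get(w, w) for w in text.lower().split()]

def detect_domain(text: str, fuzzy_cutoff: float = 0.85) -> str:
    """Return the best-matching domain; fuzzy matching is only a fallback."""
    words = normalize(text)
    # Exact keyword match first (fast path).
    for domain, keywords in DOMAIN_KEYWORDS.items():
        if keywords & set(words):
            return domain
    # Fuzzy fallback for typos the correction table misses.
    for word in words:
        for domain, keywords in DOMAIN_KEYWORDS.items():
            if difflib.get_close_matches(word, keywords, n=1, cutoff=fuzzy_cutoff):
                return domain
    return "unknown"  # final fallback -- never None

print(detect_domain("dokcer ps"))  # docker
```

The correction table handles the common "dokcer" → "docker" case before fuzzy matching is ever consulted, which keeps the hot path cheap.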
NLP2CMD now features beautiful syntax-highlighted output with clean bash codeblocks:
```bash
# Natural language query
nlp2cmd "znajdź pliki zmodyfikowane ostatnie 7 dni"

# Output with syntax highlighting
find . -type f -mtime -7
```

```yaml
dsl: auto
query: znajdź pliki zmodyfikowane ostatnie 7 dni
status: success
confidence: 1.0
generated_command: find . -type f -mtime -7
```

- 🎨 Syntax Highlighting - Rich syntax highlighting for bash, SQL, and YAML
- 📋 Clean Codeblocks - No more complex Rich panels, just clean markdown-style blocks
- 🌍 Multi-language Support - Full Polish language support with 87%+ accuracy
- ⚡ Instant Feedback - Real-time command generation with confidence scores
- 🇵🇱 Polish - Native support with lemmatization and fuzzy matching
- 🇬🇧 English - Full English language support
- 🔀 Mixed - Seamless handling of mixed-language queries
```bash
# Extract schema from any website
nlp2cmd web-schema extract https://example.com

# Fill forms automatically
nlp2cmd -r "otwórz https://www.prototypowanie.pl/kontakt/ i wypełnij formularz i wyślij"

# Manage interaction history
nlp2cmd web-schema history --stats
```
```bash
# Configure service settings
nlp2cmd config-service --host 0.0.0.0 --port 8000 --debug

# Start HTTP API service
nlp2cmd service --host 0.0.0.0 --port 8000 --workers 4

# Test the API
curl -X POST http://localhost:8000/query \
  -H "Content-Type: application/json" \
  -d '{"query": "list files in current directory", "dsl": "shell"}'
```

- 🌐 HTTP API - RESTful API with FastAPI + Pydantic
- ⚙️ Configuration Management - Environment variables + .env support
- 🔧 Flexible Deployment - Host/port configuration, CORS, workers
- 📊 Real-time Processing - Sub-second API responses
- 🛡️ Type Safety - Full Pydantic model validation
- 📝 Auto-documentation - Automatic OpenAPI/Swagger docs at `/docs`
```bash
# External dependencies cache management
nlp2cmd cache info        # Show cache status
nlp2cmd cache auto-setup  # Install and configure
nlp2cmd cache clear       # Clear cache if needed
```

┌─────────────────┐
│ User Query │
└────────┬────────┘
│
▼
┌─────────────────┐
│ NLP Layer │ → Intent + Entities + Confidence
└────────┬────────┘
│
▼
┌─────────────────┐
│ Intent Router │ → Domain + Intent Classification
└────────┬────────┘
│
▼
┌─────────────────┐
│ Entity Extractor│ → Time, Size, Username, Path
└────────┬────────┘
│
▼
┌─────────────────┐
│ Command Generator│ → Domain-specific Commands
└────────┬────────┘
│
▼
┌─────────────────┐
│ Safety Validator│ → Command Safety Check
└────────┬────────┘
│
▼
┌─────────────────┐
│ Execution │ → Run Command with Confirmation
└─────────────────┘
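The stages above can be sketched end to end. This is a toy illustration, not the framework's code: the detection rule, the command template table, and the safety token list are all hypothetical stand-ins for the real NLP layer, generator, and validator.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    domain: str
    intent: str
    entities: dict
    confidence: float

def detect(query: str) -> Detection:
    """NLP layer + intent router + entity extractor, collapsed into one toy rule."""
    if "nginx" in query and ("uruchom" in query or "start" in query):
        return Detection("shell", "service_start", {"service": "nginx"}, 0.95)
    return Detection("unknown", "unknown", {}, 0.0)

def generate(d: Detection) -> str:
    """Command generator: map (domain, intent) + entities to a command template."""
    templates = {("shell", "service_start"): "systemctl start {service}"}
    return templates.get((d.domain, d.intent), "").format(**d.entities)

def is_safe(command: str) -> bool:
    """Safety validator: block obviously destructive patterns."""
    return bool(command) and not any(tok in command for tok in ("rm -rf", "mkfs", "dd if="))

d = detect("uruchom usługę nginx")
cmd = generate(d)
if is_safe(cmd):
    print(cmd)  # systemctl start nginx
```

Each stage only consumes the previous stage's output, which is what lets the real pipeline attach confidence scores and safety checks between detection and execution.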
- Shell Operations: 90%+ (files, processes, packages)
- Package Management: 100% (apt install, zainstaluj)
- User File Operations: 100% (user directory detection)
- Advanced Find: 100% (size + age filtering)
- Web Search: 33% (Google, GitHub, Amazon)
- Overall System: 85%+ Production Ready
# Find files modified in last 7 days larger than 100MB
nlp2cmd "znajdź pliki większe niż 100MB zmodyfikowane ostatnie 7 dni"
# → find . -type f -size +100MB -mtime -7
# Search user's home directory for large files
nlp2cmd "pokaż pliki użytkownika większe niż 50GB"
# → find $HOME -type f -size +50GB
# Find specific file types with age filter
nlp2cmd "znajdź pliki .log większe niż 10MB starsze niż 2 dni"
# → find . -type f -name '*.log' -size +10MB -mtime +2

# List current user's files
nlp2cmd "pokaż pliki użytkownika"
# → find $HOME -type f
# List specific user's directory
nlp2cmd "pokaż foldery użytkownika root"
# → ls -la /root
# List files in user directory
nlp2cmd "listuj pliki w katalogu domowym"
# → ls -la .

# Install packages (Polish & English)
nlp2cmd "zainstaluj vlc"
# → sudo apt-get install vlc
nlp2cmd "apt install nginx"
# → sudo apt-get install nginx
nlp2cmd "install git"
# → sudo apt-get install git

# Search Google
nlp2cmd "wyszukaj w google python tutorial"
# → xdg-open 'https://www.google.com/search?q=w google python tutorial'
# Search GitHub
nlp2cmd "znajdź repozytorium nlp2cmd na github"
# → xdg-open 'https://github.com/search?q=nlp2cmd&type=repositories'
# Search Amazon
nlp2cmd "szukaj na amazon python books"
# → xdg-open 'https://www.amazon.com/s?k=python books'

Natural Language → System Commands with 85%+ accuracy and full safety validation.
- 🗣️ 6 DSL Adapters: SQL, Shell, Docker, Kubernetes, DQL (Doctrine), Browser
- 🧠 Polish NLP: Native Polish language support with 87%+ accuracy
- 🔍 Advanced Search: Time-based, size-based, and combined filtering
- 👤 User Operations: Username-specific directory operations
- 📦 Package Management: APT installation with Polish variants
- 🌐 Web Automation: Google, GitHub, Amazon search integration
- 🚀 Service Mode: HTTP API with FastAPI + Pydantic for integration
- ⚡ Real-time Processing: Sub-second command generation
- 🛡️ Safety Validation: Command safety checks and confirmation
- 📁 11 File Format Schemas: Dockerfile, docker-compose, K8s manifests, GitHub workflows, .env, and more
- 🛡️ Safety Policies: Allowlist-based action control, no eval/shell execution
- 🔄 Multi-step Plans: Support for `foreach` loops and variable references between steps
- 🌍 Polish NLP: Native Polish language support with lemmatization and fuzzy matching
- 💾 Smart Caching: External dependencies cache for Playwright browsers
- 🔀 Decision Router: Intelligently routes queries to direct execution or LLM planner
- 📋 Action Registry: Central registry of 19+ typed actions with full validation
- ⚡ Plan Executor: Executes multi-step plans with tracing, retry, and error handling
- 🤖 LLM Planner: Generates JSON plans constrained to allowed actions
- 📊 Result Aggregator: Multiple output formats (text, table, JSON, markdown)
- 🌐 Web Schema Engine: Browser automation with Playwright integration
- 💾 Cache Manager: Smart caching for external dependencies
- ✅ No direct LLM access to system
- ✅ Typed actions (no eval/shell)
- ✅ Allowlist of permitted actions
- ✅ Full plan validation before execution
- ✅ Traceable execution (trace_id per request)
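A minimal sketch of how an action allowlist can reject a plan before anything runs; the action names and validator function are illustrative, not the framework's real API.

```python
# Hypothetical allowlist -- the real registry holds 19+ typed actions.
ALLOWED_ACTIONS = {"sql_select", "shell_find", "summarize_results"}

def validate_plan(steps: list[dict]) -> list[str]:
    """Return one error per step whose action is not on the allowlist.

    An empty list means the whole plan may proceed; any error aborts it
    before a single step executes.
    """
    errors = []
    for i, step in enumerate(steps):
        if step.get("action") not in ALLOWED_ACTIONS:
            errors.append(f"step {i}: action {step.get('action')!r} not permitted")
    return errors

plan = [
    {"action": "shell_find", "params": {"glob": "*.log"}},
    {"action": "shell_exec", "params": {"cmd": "rm -rf /"}},  # not allowlisted
]
print(validate_plan(plan))  # ["step 1: action 'shell_exec' not permitted"]
```

Because the LLM can only name actions, never emit code, a plan that strays outside the allowlist fails validation rather than reaching the system.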
| Document | Description |
|---|---|
| Documentation Hub | Entry point and navigation for all docs |
| Installation Guide | Setup instructions and installation options |
| User Guide | Complete usage tutorial and examples |
| CLI Reference | Comprehensive CLI documentation |
| Python API Guide | Detailed Python API usage |
| Examples Guide | Comprehensive examples overview |
| API Reference | Detailed API documentation |
| Thermodynamic Integration | Advanced optimization with Langevin dynamics |
| Thermodynamic Architecture | Deep technical architecture overview |
| Contributing Guide | Development guidelines and contribution process |
| Generation Module | DSL generation implementation details |
| Quick Fix Reference | Common issues and solutions |
| Keyword Detection Flow | Detailed keyword intent detection pipeline and fallback mechanisms |
| Enhanced NLP Integration | Advanced NLP libraries integration with semantic similarity and web schema context |
| Web Schema Guide | Browser automation and form filling |
| Cache Management Guide | External dependencies caching |
| Service Mode Guide | HTTP API service with FastAPI + Pydantic |
```bash
# Install with all dependencies (including service mode)
pip install nlp2cmd[all]

# Or install specific components
pip install nlp2cmd[browser,nlp]  # Web automation + Polish NLP
pip install nlp2cmd[sql,shell]    # Database + system commands
pip install nlp2cmd[service]      # Service mode (FastAPI + Pydantic)
```

NLP2CMD provides full shell emulation capabilities for system commands:
```bash
# Interactive shell mode
nlp2cmd --interactive --dsl shell
nlp2cmd> list files in current directory
nlp2cmd> find files larger than 100MB
nlp2cmd> show running processes
nlp2cmd> exit

# Single query mode
nlp2cmd --dsl shell --query "list files in current directory"
# Output: ls -la .

# Execute immediately (run mode)
nlp2cmd --run "list files in current directory" --auto-confirm
# Executes: ls -la . with real output

# Polish language support
nlp2cmd --dsl shell --query "znajdź pliki .log większe niż 10MB"
# Output: find . -type f -name "*.log" -size +10MB -exec ls -lh {} \;

# Process management
nlp2cmd --dsl shell --query "uruchom usługę nginx"
# Output: systemctl start nginx

nlp2cmd --dsl shell --query "pokaż procesy zużywające najwięcej pamięci"
# Output: ps aux --sort=-%mem | head -10
```

```bash
# Auto-setup Playwright browsers and cache
nlp2cmd cache auto-setup

# Manual setup
nlp2cmd cache install --package playwright
```

The fastest way to use NLP2CMD is through the command-line interface:
```bash
# Basic query
nlp2cmd --query "Pokaż użytkowników"

# Specific DSL
nlp2cmd --dsl sql --query "SELECT * FROM users WHERE city = 'Warsaw'"
nlp2cmd --dsl shell --query "Znajdź pliki .log większe niż 10MB"
nlp2cmd --dsl docker --query "Pokaż wszystkie kontenery"
nlp2cmd --dsl kubernetes --query "Skaluj deployment nginx do 3 replik"

# Web automation
nlp2cmd --dsl browser --query "otwórz https://example.com i wypełnij formularz"
nlp2cmd web-schema extract https://example.com
nlp2cmd web-schema history --stats

# With options
nlp2cmd --explain --query "Sprawdź status systemu"
nlp2cmd --auto-repair --query "Napraw konfigurację nginx"

# Interactive mode
nlp2cmd --interactive

# Cache management
nlp2cmd cache info
nlp2cmd cache auto-setup

# Environment analysis
nlp2cmd analyze-env
nlp2cmd analyze-env --output environment.json

# Service mode
nlp2cmd config-service --host 0.0.0.0 --port 8000 --debug
nlp2cmd service --host 0.0.0.0 --port 8000 --workers 4
nlp2cmd service --reload  # Development mode with auto-reload

# File validation and repair
nlp2cmd validate config.json
```
```bash
nlp2cmd repair docker-compose.yml --backup
```

A query with full diagnostics:

```bash
$ nlp2cmd "show user folders"
find ~ -maxdepth 1 -type d
```

```yaml
dsl: auto
query: show user folders
status: success
confidence: 1.0
generated_command: find ~ -maxdepth 1 -type d
errors: []
warnings: []
suggestions: []
clarification_questions: []
resource_metrics:
  time_ms: 1.6
  cpu_percent: 0.0
  memory_mb: 56.6
  energy_mj: 0.015
token_estimate:
  total: 1
  input: 1
  output: 0
  cost_usd: 2.0e-06
  model_tier: tiny
  tokens_per_ms: 0.625
  tokens_per_mj: 66.66666666666667
```

Run mode with LLM fallback:

```bash
$ nlp2cmd --run "list files in current directory" --auto-confirm
Generating command...
✗ Could not generate command with rule-based pipeline:
  # Unknown: could not detect domain for: list files in current directory
Attempting LLM fallback via LiteLLM...
✓ LLM fallback succeeded
Detected: shell/llm_fallback

$ ls
CHANGELOG.md  COMMIT_MESSAGE.md  CONTRIBUTING.md  Dockerfile  ENHANCED_README.md  INSTALLATION.md  ...
✓ Command completed in 13.4ms

$ nlp2cmd --dsl shell --query "uruchom usługę nginx"
systemctl start nginx

$ nlp2cmd --dsl shell --query "pokaż procesy zużywające najwięcej pamięci"
ps aux --sort=-%mem | head -10

$ nlp2cmd --dsl shell --query "znajdź pliki z rozszerzeniem .py"
find . -name "*.py" -type f
```
##### Service Mode Examples
```bash
# Configure service settings
$ nlp2cmd config-service --host 127.0.0.1 --port 8080 --debug --log-level debug
Configuration saved to .env
Current configuration:
host: 127.0.0.1
port: 8080
debug: True
log_level: debug
...
# Start the service
$ nlp2cmd service --host 0.0.0.0 --port 8000 --workers 4
2026-01-25 11:24:32,971 - nlp2cmd.service - INFO - Starting NLP2CMD service on 0.0.0.0:8000
INFO: Started server process [1493894]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)
# Test API endpoints (in another terminal)
$ curl http://localhost:8000/health
{"status": "healthy", "service": "nlp2cmd"}
$ curl -X POST http://localhost:8000/query \
-H "Content-Type: application/json" \
-d '{"query": "list files in current directory", "dsl": "shell"}'
{
"success": true,
"command": "ls -la",
"confidence": 0.95,
"domain": "shell",
"intent": "list_files",
"entities": {},
"explanation": "Generated by RuleBasedPipeline with confidence 0.95"
}
# Get service configuration
$ curl http://localhost:8000/config
{
"host": "0.0.0.0",
"port": 8000,
"debug": false,
"log_level": "info",
"cors_origins": ["*"],
"max_workers": 4,
"auto_execute": false,
"session_timeout": 3600
}
# Update configuration
$ curl -X POST http://localhost:8000/config \
-H "Content-Type: application/json" \
-d '{"log_level": "debug"}'
{
"message": "Configuration updated",
"config": {
"host": "0.0.0.0",
"port": 8000,
"debug": false,
"log_level": "debug",
...
}
}
# Python client example
$ python3 -c "
import requests
response = requests.post('http://localhost:8000/query', json={
'query': 'znajdź pliki .log większe niż 10MB',
'dsl': 'shell',
'explain': True
})
result = response.json()
print(f'Command: {result[\"command\"]}')
print(f'Confidence: {result[\"confidence\"]}')
print(f'Explanation: {result[\"explanation\"]}')
"
Command: find . -type f -name "*.log" -size +10MB -exec ls -lh {} \;
Confidence: 0.9
Explanation: Generated by RuleBasedPipeline with confidence 0.90
```
$ nlp2cmd --dsl docker --query "Pokaż wszystkie kontenery"
docker ps -a
📊 ⏱️ Time: 2.2ms | 💻 CPU: 0.0% | 🧠 RAM: 55.2MB (0.1%) | ⚡ Energy: 0.019mJ
$ nlp2cmd web-schema extract https://httpbin.org/forms/post
✓ Schema extracted successfully
📊 Extracted Elements: 12 inputs, 1 button, 1 form
$ nlp2cmd cache info
╭─────────────────────────────── Cache Overview ───────────────────────────────╮
│ Cache Directory: /home/tom/github/wronai/nlp2cmd/.cache/external │
│ Total Size: 0.0 MB │
│ Cached Packages: 0 │
╰──────────────────────────────────────────────────────────────────────────────╯
No packages cached

```python
from nlp2cmd import (
    DecisionRouter,
    RoutingDecision,
    PlanExecutor,
    ExecutionPlan,
    PlanStep,
    ResultAggregator,
    OutputFormat,
    get_registry,
)

# Initialize components
router = DecisionRouter()
executor = PlanExecutor()
aggregator = ResultAggregator()

# Route a query
routing = router.route(
    intent="select",
    entities={"table": "users"},
    text="show all users",
    confidence=0.9,
)

if routing.decision == RoutingDecision.DIRECT:
    # Simple query - direct execution
    plan = ExecutionPlan(steps=[
        PlanStep(action="sql_select", params={"table": "users"})
    ])
else:
    # Complex query - use the LLM Planner
    from nlp2cmd import LLMPlanner
    planner = LLMPlanner(llm_client=your_llm_client)
    result = planner.plan(intent="select", entities={}, text="...")
    plan = result.plan

# Execute and format results
exec_result = executor.execute(plan)
output = aggregator.aggregate(exec_result, format=OutputFormat.TABLE)
print(output.data)
```

```python
from nlp2cmd.generation import HybridThermodynamicGenerator

generator = HybridThermodynamicGenerator()

# Simple query → DSL generation
result = await generator.generate("Pokaż użytkowników")
# → {'source': 'dsl', 'result': HybridResult(...)}

# Optimization → thermodynamic sampling
result = await generator.generate("Zoptymalizuj przydzielanie zasobów")
# → {'source': 'thermodynamic', 'result': ThermodynamicResult(...)}
```

```python
# Define a multi-step plan
plan = ExecutionPlan(steps=[
    PlanStep(
        action="shell_find",
        params={"glob": "*.log"},
        store_as="log_files",
    ),
    PlanStep(
        action="shell_count_pattern",
        foreach="log_files",  # Iterate over results
        params={"file": "$item", "pattern": "ERROR"},
        store_as="error_counts",
    ),
    PlanStep(
        action="summarize_results",
        params={"data": "$error_counts"},
    ),
])

# Execute with tracing
result = executor.execute(plan)
print(f"Trace ID: {result.trace_id}")
print(f"Duration: {result.total_duration_ms}ms")
```

```python
from nlp2cmd import NLP2CMD, SQLAdapter

# Initialize with the SQL adapter
nlp = NLP2CMD(adapter=SQLAdapter(dialect="postgresql"))

# Transform natural language to SQL
result = nlp.transform("Pokaż wszystkich użytkowników z Warszawy")
print(result.command)  # SELECT * FROM users WHERE city = 'Warszawa';
```

NLP2CMD uses a robust multi-layered detection pipeline that ensures reliable intent recognition even with typos, variations, or missing dependencies:
- Text Normalization - Polish diacritics, typo corrections, optional lemmatization
- Fast Path Detection - Quick browser/search queries
- SQL Context Detection - Identify SQL keywords
- SQL DROP Detection - High-priority dangerous operations
- Docker Detection - Explicit Docker commands
- Kubernetes Detection - K8s-specific commands
- Service Restart Detection - Service management priority
- Priority Intents - Configured high-priority patterns
- General Pattern Matching - Full keyword matching with confidence scoring
- Fuzzy Matching - Optional rapidfuzz for typos (85% threshold)
- Final Fallback - Always returns `unknown/unknown`
✅ Always works - Final fallback ensures no method returns None
✅ Graceful degradation - Missing dependencies don't break the pipeline
✅ Typo tolerance - Built-in corrections + optional fuzzy matching
✅ Performance optimized - Fast path and priority checks first
✅ Safety first - Dangerous operations get highest priority
Input: "dokcer ps" (typo)
1. Normalization: "dokcer" → "docker"
2. Pattern matching: "docker ps" → docker/list
Result: ✅ Works without fuzzy matching
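The ordered fallback chain above can be sketched as a priority list of detectors. The detector functions here are trivial stand-ins for the real ones; only the structure — try each in order, and guarantee a final `unknown/unknown` result — mirrors the pipeline.

```python
# Hypothetical detectors, each returning (domain, intent) or None.
def fast_path(text: str):
    return ("browser", "search") if "wyszukaj" in text else None

def sql_context(text: str):
    return ("sql", "select") if "select" in text.lower() else None

def pattern_match(text: str):
    return ("docker", "list") if "docker ps" in text else None

DETECTORS = [fast_path, sql_context, pattern_match]  # priority order

def detect(text: str) -> tuple[str, str]:
    """Try each detector in priority order; never return None."""
    for detector in DETECTORS:
        hit = detector(text)
        if hit is not None:
            return hit
    return ("unknown", "unknown")  # final fallback

print(detect("docker ps"))   # ('docker', 'list')
print(detect("asdfgh"))      # ('unknown', 'unknown')
```

Putting the cheap, high-precision detectors first is what gives the pipeline both its speed (fast path) and its safety ordering (dangerous SQL patterns checked before generic matching).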
```python
from nlp2cmd import get_registry

registry = get_registry()

# List all domains
print(registry.list_domains())
# ['sql', 'shell', 'docker', 'kubernetes', 'utility']

# List actions by domain
print(registry.list_actions(domain="sql"))
# ['sql_select', 'sql_insert', 'sql_update', 'sql_delete', 'sql_aggregate']

# Get destructive actions (require confirmation)
print(registry.get_destructive_actions())
# ['sql_insert', 'sql_update', 'sql_delete', 'docker_run', ...]

# Generate an LLM prompt with available actions
prompt = registry.to_llm_prompt(domain="sql")
```

| DSL | Adapter | Status |
|---|---|---|
| SQL (PostgreSQL, MySQL, SQLite) | `SQLAdapter` | ✅ Stable |
| Shell (Bash, Zsh) | `ShellAdapter` | ✅ Stable |
| DQL (Doctrine) | `DQLAdapter` | ✅ Stable |
| Docker / Docker Compose | `DockerAdapter` | ✅ Stable |
| Kubernetes | `KubernetesAdapter` | ✅ Stable |
- Dockerfile
- docker-compose.yml
- Kubernetes manifests (Deployment, Service, Ingress, ConfigMap)
- SQL migrations
- .env files
- nginx.conf
- GitHub Actions workflows
- Prisma Schema
- Terraform (.tf)
- .editorconfig
- package.json
```python
from nlp2cmd import ResultAggregator, OutputFormat

aggregator = ResultAggregator()

# Text format (default)
result = aggregator.aggregate(exec_result, format=OutputFormat.TEXT)

# ASCII table
result = aggregator.aggregate(exec_result, format=OutputFormat.TABLE)

# JSON (for programmatic use)
result = aggregator.aggregate(exec_result, format=OutputFormat.JSON)

# Markdown (for documentation)
result = aggregator.aggregate(exec_result, format=OutputFormat.MARKDOWN)

# Summary (for dashboards)
result = aggregator.aggregate(exec_result, format=OutputFormat.SUMMARY)
```

The framework enforces safety at multiple levels:
- Action Allowlist: Only registered actions can be executed
- Parameter Validation: Full type checking and constraints
- Plan Validation: All plans validated before execution
- No Code Generation: LLM only produces JSON plans, not executable code
- Destructive Action Marking: Actions that modify state are flagged
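Parameter validation can be sketched with a typed parameter object that rejects bad values at construction time, before execution. This is a plain-dataclass stand-in for the framework's Pydantic models; the `ShellFindParams` type and its constraints are hypothetical.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ShellFindParams:
    """Typed parameters for a hypothetical shell_find action."""
    glob: str
    max_depth: int = 5

    def __post_init__(self):
        # Constraints checked up front -- invalid params never reach an executor.
        if not self.glob or "/" in self.glob:
            raise ValueError("glob must be a bare file pattern")
        if not (1 <= self.max_depth <= 32):
            raise ValueError("max_depth out of range")

ok = ShellFindParams(glob="*.log")          # valid: constructs fine
try:
    ShellFindParams(glob="../etc/*")        # invalid: path traversal in pattern
except ValueError as e:
    print("rejected:", e)
```

Failing at construction means a malformed plan step is reported as a validation error rather than discovered mid-execution.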
```bash
# Run all tests
pytest tests/ -v

# Run E2E tests for service mode
python3 run_e2e_tests.py

# Run specific component tests
pytest tests/unit/test_router.py -v
pytest tests/unit/test_registry.py -v
pytest tests/unit/test_executor.py -v

# Service mode E2E tests
pytest tests/e2e/ -m service

# With coverage
pytest --cov=nlp2cmd --cov-report=html
```

Based on Whitelam (2025), "Generative thermodynamic computing", the framework now includes thermodynamic optimization for complex constraint-satisfaction problems.
- Langevin Dynamics Sampling: Natural evolution from noise to structured solutions
- Energy-Based Models: Domain-specific constraint functions
- Hybrid Routing: Automatic selection between DSL generation and thermodynamic optimization
- Energy Efficiency: 50-70% reduction vs pure LLM inference
```bash
nlp2cmd "Zaplanuj 5 zadań w 10 slotach z ograniczeniami" --explain
```

This command triggers thermodynamic optimization for constraint-satisfaction problems and outputs results in YAML format, including energy estimates and scheduling solutions.
- Scheduling: Task scheduling with deadlines and constraints
- Resource Allocation: Optimal distribution under capacity limits
- Planning: Multi-step planning with constraint satisfaction
- Optimization: General constrained optimization problems
See Thermodynamic Integration for detailed documentation.
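For intuition, overdamped Langevin dynamics on a toy one-dimensional energy function looks like the sketch below: the sampler starts from noise and anneals toward a low-energy solution. The quadratic energy here is purely illustrative, far simpler than the domain energy models described above, and the step sizes are arbitrary.

```python
import math
import random

def energy(x: float) -> float:
    """Toy energy: the minimum at x = 3 stands in for a constraint-satisfying solution."""
    return (x - 3.0) ** 2

def grad_energy(x: float) -> float:
    return 2.0 * (x - 3.0)

def langevin_sample(steps: int = 2000, step_size: float = 0.01,
                    temperature: float = 0.05, seed: int = 0) -> float:
    """Overdamped Langevin dynamics: gradient descent plus thermal noise.

    x_{t+1} = x_t - dt * dE/dx + sqrt(2 * dt * T) * N(0, 1)
    """
    rng = random.Random(seed)
    x = rng.gauss(0.0, 2.0)  # start from pure noise
    noise_scale = math.sqrt(2.0 * step_size * temperature)
    for _ in range(steps):
        x += -step_size * grad_energy(x) + noise_scale * rng.gauss(0.0, 1.0)
    return x

x = langevin_sample()
print(round(x, 2))  # close to the energy minimum at 3.0
```

Running several such chains in parallel and voting by final energy is one way to realize the "parallel sampling with energy-based voting" mentioned in the changelog.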
- Shell Commands Demo - Complete CLI usage examples
- Simple Demo - Python API + Shell concepts
- Complete Examples - Full Python API examples
- DSL Commands Demo - Direct DSL generation examples
- Basic SQL - Simple SQL queries
- Shell Commands - Common shell operations
- Docker Management - Container operations
- Kubernetes - K8s cluster management
- End-to-End Demo - Complete workflow
- Log Analysis Pipeline - Data processing
- Infrastructure Health - System monitoring
- Configuration Validation - File validation
- DevOps Automation - IT operations
- Data Science & ML - Data workflows
- Healthcare - Medical applications
- Finance & Trading - Financial operations
- Smart Cities - Urban management
- CLI Reference - Complete CLI documentation
- Python API Guide - Detailed Python API usage
- Examples Guide - Comprehensive examples overview
See Examples README for all available examples.
```
nlp2cmd/
├── src/nlp2cmd/
│   ├── __init__.py          # Main exports
│   ├── core.py              # Core NLP2CMD class
│   ├── router/              # Decision Router
│   ├── registry/            # Action Registry
│   ├── executor/            # Plan Executor
│   ├── planner/             # LLM Planner
│   ├── aggregator/          # Result Aggregator
│   ├── adapters/            # DSL Adapters (SQL, Shell, Docker, K8s, DQL)
│   ├── schemas/             # File Format Schemas
│   ├── feedback/            # Feedback Loop
│   ├── environment/         # Environment Analyzer
│   └── validators/          # Validators
├── tests/
│   ├── unit/                # Unit tests (~150 tests)
│   └── integration/         # Integration tests
├── examples/
│   ├── architecture/        # End-to-end demos
│   ├── sql/                 # SQL examples
│   ├── shell/               # Shell examples
│   └── docker/              # Docker examples
└── docs/                    # Documentation
```
- NEW: Thermodynamic optimization using Whitelam's generative framework
- Langevin dynamics for constraint satisfaction problems
- 50-70% energy reduction vs pure LLM inference
- Hybrid router: DSL generation + thermodynamic optimization
- Domain-specific energy models (scheduling, allocation, planning)
- Parallel sampling with energy-based voting
- New architecture: LLM as Planner + Typed Actions
- Decision Router for intelligent query routing
- Action Registry with 19+ typed actions
- Plan Executor with foreach, conditions, and retry
- Result Aggregator with multiple output formats
- Full observability (trace_id, duration tracking)
- 150+ tests
- Initial release
- 5 DSL adapters
- 11 file format schemas
- Safety policies
- Feedback loop
Apache License - see LICENSE for details.
- Whitelam, S. (2025) "Generative thermodynamic computing" - Theoretical foundation for thermodynamic optimization
- spaCy - NLP processing
- Anthropic Claude - LLM integration
- Rich - Terminal formatting



