Mike Morgan edited this page Jan 11, 2026

Frequently Asked Questions

Comprehensive FAQ covering general questions, technical details, AI capabilities, and business/licensing questions.


Table of Contents

  1. General Questions
  2. Technical Questions
  3. AI Capabilities
  4. Installation and Setup
  5. Performance and Optimization
  6. Security and Privacy
  7. Business and Licensing
  8. Troubleshooting

General Questions

What is Cortex Linux?

Cortex Linux is an AI-native operating system that integrates a reasoning engine (Sapiens 0.27B) directly into the kernel and system services. Unlike traditional Linux distributions that require external API calls for AI functionality, Cortex provides on-device reasoning with zero API costs.

How is Cortex Linux different from Ubuntu/Debian with AI tools?

Cortex Linux:

  • AI engine integrated at the OS level
  • Zero API costs (on-device processing)
  • Kernel-level optimizations for AI workloads
  • Built-in system services for AI operations

Ubuntu/Debian + AI Tools:

  • AI capabilities via external APIs
  • Per-request API costs
  • Standard Linux kernel
  • Application-level AI integration

Is Cortex Linux free?

Yes, Cortex Linux is open source and free to use. It is distributed under the GNU General Public License v3.0 (GPL-3.0). The Sapiens reasoning engine components have separate licensing; see the LICENSE files in the repository.

What are the system requirements?

Minimum:

  • 2GB RAM, 10GB disk space, x86_64 or ARM64 CPU

Recommended:

  • 4GB+ RAM, 20GB+ disk space, 4+ CPU cores

See Technical Specifications - Hardware Requirements for detailed requirements.

Can I use Cortex Linux in production?

Yes, Cortex Linux 1.0.0 is production-ready. However, we recommend:

  • Testing in a staging environment first
  • Reviewing Security and Compliance documentation
  • Implementing proper backup and monitoring
  • Staying updated with security patches

Is there commercial support available?

Commercial support options are being developed. For now, community support is available through:

  • GitHub Issues
  • Discord community
  • Documentation wiki

Technical Questions

What Linux distribution is Cortex Linux based on?

Cortex Linux is based on Debian 12 (Bookworm) / Ubuntu 22.04 LTS, with custom kernel enhancements for AI workloads.

What kernel version does Cortex Linux use?

Cortex Linux uses Linux kernel 6.1.0+ with custom enhancements for AI-aware scheduling and memory management.

Can I install Cortex Linux alongside another OS?

Yes, Cortex Linux can be installed in a dual-boot configuration. The installer supports manual partitioning for dual-boot setups.

Does Cortex Linux support containers?

Yes, Cortex Linux supports Docker and Podman. However, note that:

  • AI engine requires access to host resources
  • Some kernel features may not be available in containers
  • For full functionality, consider bare metal or VM deployment

Can I run Cortex Linux in a virtual machine?

Yes, Cortex Linux runs well in virtual machines:

  • VirtualBox 6.0+
  • VMware Workstation/Player 15+
  • QEMU/KVM
  • Hyper-V

See Installation Guide - VirtualBox for details.

What programming languages are supported for integration?

  • Python: Full SDK available
  • Bash: CLI and shell integration
  • C/C++: C API for system integration
  • Go: HTTP API client libraries
  • Rust: CLI tool source available
  • HTTP/REST: Any language with HTTP client

See AI Integration Guide for examples.
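Since any language with an HTTP client can talk to the service, a small shell helper is enough to get started. A minimal sketch, with assumptions: the port matches the one checked in the troubleshooting section below, but the /v1/reason path and JSON payload shape are illustrative, not documented API; check your AI Integration Guide for the real endpoint.

```shell
# Minimal helper for calling the Cortex AI HTTP API from the shell.
# ASSUMPTIONS: service on localhost:8080 (the port checked in the
# troubleshooting section); the /v1/reason path and payload shape are
# examples only -- confirm them against the AI Integration Guide.
CORTEX_URL="${CORTEX_URL:-http://localhost:8080/v1/reason}"

cortex_query() {
  # Build the JSON request body for a prompt.
  local prompt="$1"
  printf '{"prompt": "%s"}' "$prompt"
}

# Usage (requires a running service):
#   cortex_query "Why is my disk full?" | \
#     curl -s -X POST "$CORTEX_URL" -H "Content-Type: application/json" -d @-
```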

How do I update Cortex Linux?

# Update package lists
sudo apt update

# Upgrade system packages
sudo apt upgrade

# Upgrade Cortex AI specifically
sudo apt install --only-upgrade cortex-ai

# Reboot if kernel was updated
sudo reboot

Can I customize the kernel?

Yes, the kernel source is available. See Developer Documentation - Kernel Modifications for details.


AI Capabilities

What AI model does Cortex Linux use?

Cortex Linux uses the Sapiens 0.27B (270 million parameters) reasoning engine, optimized for on-device inference.

How accurate is the AI compared to ChatGPT?

Cortex Linux (Sapiens 0.27B):

  • Sudoku solving: 55%
  • Code debugging: 72%
  • Architecture planning: 68%

ChatGPT (GPT-3.5/GPT-4):

  • Generally 85-95% on similar tasks

Trade-off: Cortex Linux prioritizes privacy, cost ($0 vs $0.002-$0.06 per request), and latency (156ms vs 1200ms) over maximum accuracy.

See Comparison Pages - Performance Benchmarks for detailed comparisons.
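To make the cost trade-off concrete, here is back-of-envelope shell arithmetic using the per-request prices quoted above; the monthly request volume is an example figure, not a benchmark.

```shell
# Monthly API cost at the per-request prices quoted above ($0.002-$0.06).
# requests_per_month is an example workload, not a measured figure.
requests_per_month=1000000
low=$(awk -v n="$requests_per_month" 'BEGIN { printf "%.0f", n * 0.002 }')
high=$(awk -v n="$requests_per_month" 'BEGIN { printf "%.0f", n * 0.06 }')
echo "Hosted API: \$${low}-\$${high}/month; Cortex Linux: \$0 in API fees"
# -> Hosted API: $2000-$60000/month; Cortex Linux: $0 in API fees
```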

Can I use my own AI model?

Custom model support is planned for version 2.0.0 (Q4 2024). Currently, only the Sapiens 0.27B model is supported.

Does Cortex Linux work offline?

Yes, Cortex Linux works completely offline. All AI processing happens on-device with no network requirements (except for initial installation and updates).

How fast is the AI response time?

  • Simple queries: 50-100ms
  • Medium queries: 100-200ms
  • Complex queries: 200-400ms
  • Average: 156ms

See Technical Specifications - Performance Benchmarks for detailed metrics.

What tasks can Cortex AI perform?

  • Reasoning: General question answering and problem solving
  • Planning: Task and project planning
  • Debugging: Error analysis and code debugging
  • Optimization: Code and configuration optimization
  • Documentation: Documentation generation
  • Analysis: System and log analysis

See Use Cases and Tutorials for examples.

Can I fine-tune the model?

Fine-tuning support is planned for version 2.0.0. Currently, the model cannot be fine-tuned.

Does Cortex Linux support GPU acceleration?

GPU acceleration (CUDA, ROCm) is planned for version 2.0.0. Currently, only CPU inference is supported.


Installation and Setup

How do I install Cortex Linux?

See Installation Guide for complete instructions. Quick start:

  1. Download ISO from releases.cortexlinux.org
  2. Create bootable USB or use in VM
  3. Boot from installation media
  4. Follow installation wizard
  5. Reboot into installed system

Can I install Cortex Linux on a Mac?

Yes, you can run Cortex Linux on Mac using:

  • VirtualBox: Free virtualization
  • VMware Fusion: Commercial option
  • UTM: Free alternative
  • Parallels: Commercial option

See Installation Guide - VirtualBox.

Can I install Cortex Linux on Windows?

Yes, using virtualization:

  • VirtualBox: Free
  • VMware Workstation/Player: Commercial
  • Hyper-V: Built into Windows Pro/Enterprise
  • WSL2: Not recommended (limited kernel features)

How do I deploy Cortex Linux in the cloud?

Cortex Linux can be deployed on:

  • AWS EC2: Use custom AMI or install from ISO
  • DigitalOcean: Use custom image or install from ISO
  • Google Cloud: Use custom image or install from ISO
  • Azure: Use custom image or install from ISO

See Installation Guide - Cloud Deployment for details.

What should I do after installation?

  1. Update system: sudo apt update && sudo apt upgrade
  2. Configure AI service: Check /etc/cortex-ai/config.yaml
  3. Test installation: cortex-ai reason "Test query"
  4. Review security: Security and Compliance
  5. Read documentation: AI Integration Guide

See Installation Guide - Post-Installation Setup.
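The checklist above can be scripted as a quick sanity check. A sketch: the `cortex-ai` CLI and config path come from this page; the `check` helper and its labels are illustrative.

```shell
# Post-install sanity check mirroring the steps above. The cortex-ai CLI
# and /etc/cortex-ai/config.yaml path are from this page; the helper
# function itself is just an example pattern.
check() {
  # Print PASS/FAIL for a labelled test command.
  local label="$1"; shift
  if "$@" >/dev/null 2>&1; then echo "PASS: $label"; else echo "FAIL: $label"; fi
}

check "cortex-ai CLI installed"  command -v cortex-ai
check "AI config present"        test -f /etc/cortex-ai/config.yaml
check "AI service active"        systemctl is-active cortex-ai
```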


Performance and Optimization

How can I improve performance?

  1. CPU: Use more CPU cores (up to ~8 cores)
  2. Memory: Increase RAM for larger batches
  3. Storage: Use SSD/NVMe for faster model loading
  4. Configuration: Tune num_threads and batch_size

See Technical Specifications - Performance Tuning.
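As a sketch, the tuning knobs named above might look like this in the config file; `num_threads` and `batch_size` are the parameters listed above, but the exact key nesting is an assumption, so check your own /etc/cortex-ai/config.yaml.

```yaml
# /etc/cortex-ai/config.yaml -- sketch only; key nesting is an assumption,
# num_threads and batch_size are the parameters named above.
ai:
  num_threads: 8   # match your physical core count (gains flatten past ~8)
  batch_size: 4    # larger batches trade per-request latency for throughput
```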

What is the maximum throughput?

  • Single-threaded: 6.2 requests/second
  • 4 threads: 18.5 requests/second
  • 8 threads: 28.3 requests/second
  • With batching: 45+ requests/second
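The figures above support rough capacity planning: instances needed is the target rate divided by the per-instance rate, rounded up. A quick shell calculation (the target rate is an example):

```shell
# Capacity planning from the throughput figures above:
# instances = ceil(target_rps / per_instance_rps).
target_rps=100          # desired aggregate requests/second (example)
per_instance_rps=28.3   # one 8-thread instance, per the figures above

instances=$(awk -v t="$target_rps" -v p="$per_instance_rps" \
  'BEGIN { printf "%d", (t / p == int(t / p)) ? t / p : int(t / p) + 1 }')
echo "$instances"   # 100 / 28.3 -> 4 instances
```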

How much memory does Cortex AI use?

  • Base: ~200MB (AI engine)
  • Per request: +10MB
  • Total (idle): ~245MB
  • Total (active): ~245MB + 13MB per concurrent request
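From the active-load figure above, total resident memory scales linearly with concurrency. A quick estimate (using the ~13MB-per-concurrent-request figure; treat the result as a rough budget, not a guarantee):

```shell
# Memory estimate from the figures above: ~245MB idle plus ~13MB per
# concurrent request. Treat as a rough budget for sizing a host.
concurrent=16
total_mb=$((245 + 13 * concurrent))
echo "${total_mb}MB"   # 245 + 13*16 = 453MB
```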

Can I run multiple instances for load balancing?

Yes, multiple Cortex Linux instances can be load-balanced using nginx, HAProxy, or cloud load balancers. See Technical Specifications - Scalability.
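For the nginx option, a minimal upstream sketch looks like this; the instance addresses are placeholders and the port matches the one checked in the troubleshooting section below.

```nginx
# Minimal nginx load-balancing sketch. Instance addresses are examples --
# replace them with your own hosts; port 8080 matches the troubleshooting
# section of this page.
upstream cortex_ai {
    least_conn;              # route each request to the least-busy instance
    server 10.0.0.11:8080;
    server 10.0.0.12:8080;
}

server {
    listen 80;
    location / {
        proxy_pass http://cortex_ai;
    }
}
```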

Why is my system slow?

Common causes:

  • Insufficient RAM (check with free -h)
  • CPU throttling (check with cpupower frequency-info)
  • Disk I/O bottlenecks (check with iostat)
  • Too many concurrent requests

See Installation Guide - Troubleshooting.


Security and Privacy

Is my data private?

Yes, all AI processing happens on-device. No data is transmitted to external servers. See Security and Compliance - Data Privacy.

Does Cortex Linux send telemetry?

No, telemetry is disabled by default. You can verify in /etc/cortex-ai/config.yaml:

telemetry:
  enabled: false

Is Cortex Linux secure for production use?

Yes, Cortex Linux includes:

  • Process isolation
  • SELinux/AppArmor support
  • API key authentication
  • Rate limiting
  • Audit logging
  • Regular security updates

See Security and Compliance for details.

How do I enable API authentication?

Edit /etc/cortex-ai/config.yaml:

security:
  api_key_required: true
  api_key: "your-secure-api-key"

Then restart the service:

sudo systemctl restart cortex-ai
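With the key configured, clients must send it on every request. A sketch, with a loud assumption: the `X-API-Key` header name is a guess, not documented here, so confirm it in the API documentation before relying on it.

```shell
# Build the auth header for API calls. ASSUMPTION: the service reads the
# key from an X-API-Key header -- confirm the real header name in the API
# documentation before using this.
auth_header() {
  printf 'X-API-Key: %s' "$1"
}

# Usage (requires a running service):
#   curl -s http://localhost:8080/ -H "$(auth_header 'your-secure-api-key')"
```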

Is Cortex Linux GDPR compliant?

Yes, Cortex Linux is designed for GDPR compliance:

  • On-device processing (no data transmission)
  • No persistent storage of personal data
  • Configurable data retention
  • Encryption support

See Security and Compliance - GDPR Compliance.

Can I use Cortex Linux in air-gapped environments?

Yes, Cortex Linux works completely offline. No network connection is required for AI operations (only for initial installation and updates).


Business and Licensing

What license is Cortex Linux under?

Cortex Linux is distributed under the GNU General Public License v3.0 (GPL-3.0). The Sapiens reasoning engine components have separate licensing; see the LICENSE files in the repository.

Can I use Cortex Linux commercially?

Yes, GPL-3.0 allows commercial use. However, if you modify and distribute Cortex Linux, you must also distribute your modifications under GPL-3.0.

Do I need to pay for API usage?

No, there are no API costs. All processing happens on-device with zero per-request fees.

What are the total costs?

Cortex Linux:

  • Software: $0 (open source)
  • API costs: $0
  • Infrastructure: Your server costs only

Comparison: See Comparison Pages - Cost Analysis for detailed cost comparisons.

Is commercial support available?

Commercial support options are being developed. For enterprise needs, contact: enterprise@cortexlinux.org

Can I contribute to Cortex Linux?

Yes! We welcome contributions. See Developer Documentation - Contributing Guidelines.


Troubleshooting

The AI service won't start

Check service status:

sudo systemctl status cortex-ai

View logs:

journalctl -u cortex-ai -n 50

Common issues:

  • Insufficient memory
  • Missing model files
  • Configuration errors
  • Permission issues

See Installation Guide - Troubleshooting.

API returns "connection refused"

Check if service is running:

systemctl status cortex-ai

Check if port is listening:

sudo ss -tlnp | grep 8080    # or: sudo netstat -tlnp | grep 8080

Check firewall:

sudo ufw status

Verify configuration:

cat /etc/cortex-ai/config.yaml

AI responses are slow

Check system resources:

htop
free -h

Optimize configuration:

  • Increase num_threads (match CPU cores)
  • Enable caching
  • Reduce max_tokens if not needed

See Technical Specifications - Performance Tuning.
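The three optimizations above might look like this as a config fragment; the knobs are the ones listed, but the exact key names and nesting are assumptions, so check them against your own /etc/cortex-ai/config.yaml.

```yaml
# /etc/cortex-ai/config.yaml -- sketch; exact key names and nesting are
# assumptions, but the knobs are the ones listed above.
ai:
  num_threads: 8      # match physical CPU cores
  max_tokens: 256     # lower if your responses don't need long output
cache:
  enabled: true       # reuse results for repeated queries
```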

Installation fails

Common causes:

  • Corrupted ISO (verify checksums)
  • Insufficient disk space
  • Hardware incompatibility
  • BIOS/UEFI configuration

Solutions:

  • Re-download ISO and verify checksum
  • Ensure 20GB+ free disk space
  • Check hardware compatibility
  • Disable Secure Boot (if needed)

See Installation Guide - Troubleshooting.
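For the checksum step, `sha256sum -c` verifies a download against a published manifest in one command; the manifest filename below is an example, so use the names from the release page.

```shell
# Re-download and verify the ISO before retrying the install. The manifest
# filename is an example; use the one published on the release page.
verify_iso() {
  # Checks every file listed in a SHA256SUMS-style manifest that is
  # present in the current directory; exits non-zero on any mismatch.
  sha256sum -c --ignore-missing "$1"
}

# Usage: verify_iso SHA256SUMS   (with the .iso in the same directory)
```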

Model files are missing

Check model location:

ls -lh /usr/lib/cortex-ai/models/

Reinstall AI engine:

sudo apt install --reinstall cortex-ai

Verify package:

dpkg -L cortex-ai | grep model

Permission denied errors

Check file permissions:

ls -l /usr/lib/cortex-ai/
ls -l /etc/cortex-ai/
ls -l /var/log/cortex-ai/

Fix permissions (if needed):

sudo chown -R cortex-ai:cortex-ai /var/log/cortex-ai/
sudo chmod 750 /var/log/cortex-ai/

High memory usage

Check memory usage:

free -h
ps aux | grep cortex-ai

Reduce memory allocation: Edit /etc/cortex-ai/config.yaml:

ai:
  max_memory_mb: 512  # Reduce if needed

Check for memory leaks:

valgrind --leak-check=full cortex-ai reason "test"

Getting Help

Where can I get help?

  1. Documentation: This wiki
  2. GitHub Issues: github.com/cortexlinux/cortex/issues
  3. Community: Discord, Matrix
  4. Reddit: r/cortexlinux

How do I report a bug?

  1. Check existing issues: Search GitHub Issues first
  2. Create issue: github.com/cortexlinux/cortex/issues/new
  3. Include information:
    • Cortex Linux version
    • System information
    • Steps to reproduce
    • Error messages/logs

How do I report a security vulnerability?

Email: security@cortexlinux.org (encrypted preferred)

Do not report security issues in public forums. See Security and Compliance - Vulnerability Reporting.



Last updated: 2024
