MIND is a Rust-first language and runtime for building intelligent systems with auditable foundations. It blends declarative tensor algebra, static shape inference, automatic differentiation, and MLIR/LLVM lowering in a compact toolchain that scales from research prototypes to production.
This repository contains the open-core stack: the MIND language, type system, compiler front-end, IR, and MLIR lowering passes. Production-grade runtime backends for CPU, GPU, and accelerators live in the private mind-runtime repository. Functions in src/exec/* marked with todo!() or unimplemented!() are runtime hooks that the proprietary backend fulfills.
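As a concrete picture of what such a hook looks like, here is a minimal sketch; the function name and signature are hypothetical illustrations, not actual symbols from src/exec/*:

```rust
// Illustrative stub in the style of the open-core runtime hooks.
// The name `exec_matmul_hook` is hypothetical, not a real mind symbol.
fn exec_matmul_hook() -> Vec<f32> {
    // In the Community Edition this body stays a placeholder; the
    // proprietary mind-runtime backend supplies the real kernel.
    todo!("fulfilled by the private mind-runtime backend")
}
```

Calling such a hook without the proprietary backend panics with the todo!() message, which is why these paths are only exercised in mind-runtime builds.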
```shell
git clone https://github.com/cputer/mind.git
cd mind
cargo run -- eval "let x: Tensor[f32,(2,3)] = 0; x + 1"
```

Explore the full language tour and runtime guides in /docs.
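For readers coming from Rust, the static shape discipline in the snippet above can be approximated with const generics: a (2, 3) tensor becomes a distinct compile-time type, and the scalar broadcast in `x + 1` is an elementwise map. This is an illustrative analogy, not MIND's implementation:

```rust
// Rust analogy for MIND's static shapes (illustrative only): the shape
// (R, C) is part of the type, so shape confusion fails to compile.
#[derive(Clone, Debug, PartialEq)]
struct Tensor<const R: usize, const C: usize> {
    data: Vec<f32>, // row-major, length R * C
}

impl<const R: usize, const C: usize> Tensor<R, C> {
    fn zeros() -> Self {
        Tensor { data: vec![0.0; R * C] }
    }

    // Scalar broadcast, like `x + 1` in the MIND snippet above.
    fn add_scalar(&self, s: f32) -> Self {
        Tensor { data: self.data.iter().map(|v| v + s).collect() }
    }
}
```

Here `Tensor::<2, 3>::zeros().add_scalar(1.0)` produces a (2, 3) tensor of ones, while code that confuses a `Tensor<2, 3>` with a `Tensor<3, 2>` is rejected at compile time because they are different types.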
The mindc binary provides a deterministic source→IR→MLIR pipeline suitable
for demos and snapshot tests:
```shell
cargo run --bin mindc -- examples/hello_tensor.mind --emit-ir
cargo run --bin mindc -- examples/hello_tensor.mind --func main --autodiff --emit-grad-ir
cargo run --features "mlir-lowering autodiff" --bin mindc -- examples/hello_tensor.mind --func main --autodiff --emit-mlir
```

MLIR emission requires the mlir-lowering feature. Autodiff support is experimental and currently focused on single-output entry points.
The crate exposes the Core v1 GPU profile and a --target=gpu flag in mindc.
CPU remains the only implemented target, but the GPU contract (enums, error
model, and GPUBackend trait) is treated as stable for downstream runtimes.
Selecting the GPU target returns a structured "no backend available for target gpu" error.
See docs/gpu.md for the device/target model and current
status.
- Type System — ranks, shapes, polymorphism, and effect tracking.
- Shapes — broadcasting, reductions, and shape-preserving tensor transforms.
- Autodiff — reverse-mode differentiation on the SSA IR.
- IR core — deterministic IR pipeline with verifier and printer.
- IR & MLIR — compiler pipeline from parser to MLIR dialects.
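As background for the Autodiff and IR documents, reverse-mode differentiation over an SSA-style value list can be sketched in a few lines: a forward pass materializes each value, then a backward sweep over the same instructions accumulates adjoints. This is a toy model, not MIND's actual IR:

```rust
// Toy reverse-mode AD over a flat SSA-like instruction list.
// Each instruction's result is value i; operands refer to earlier values.
#[derive(Clone, Copy)]
enum Op {
    Input,
    Add(usize, usize),
    Mul(usize, usize),
}

// Forward pass: evaluate every instruction, keeping all intermediates.
fn forward(tape: &[Op], inputs: &[f64]) -> Vec<f64> {
    let mut vals = Vec::with_capacity(tape.len());
    let mut next_input = 0;
    for op in tape {
        let v = match *op {
            Op::Input => {
                let v = inputs[next_input];
                next_input += 1;
                v
            }
            Op::Add(a, b) => vals[a] + vals[b],
            Op::Mul(a, b) => vals[a] * vals[b],
        };
        vals.push(v);
    }
    vals
}

// Backward sweep: seed the output adjoint with 1 and propagate in reverse.
fn backward(tape: &[Op], vals: &[f64]) -> Vec<f64> {
    let mut grads = vec![0.0; tape.len()];
    *grads.last_mut().unwrap() = 1.0;
    for (i, op) in tape.iter().enumerate().rev() {
        let g = grads[i];
        match *op {
            Op::Input => {}
            Op::Add(a, b) => {
                grads[a] += g;
                grads[b] += g;
            }
            Op::Mul(a, b) => {
                grads[a] += g * vals[b];
                grads[b] += g * vals[a];
            }
        }
    }
    grads
}
```

For f(x, y) = (x + y) * x encoded as [Input, Input, Add(0, 1), Mul(2, 0)] with inputs (2, 3), the forward pass yields 10 and the backward sweep yields gradients (7, 2).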
MIND's combination of static shape inference, reverse-mode autodiff, ultra-low-latency compilation, and deterministic execution makes it uniquely suited for real-time neural signal processing and brain-computer interface (BCI) applications:
- Real-time Neural Decoding — Sub-millisecond inference for invasive BCI systems (Neuralink-style implants, ECoG arrays)
- Multi-channel Time-Series — Native tensor operations for Channel × Time × Batch neural data
- On-device Adaptation — Gradient-based decoder optimization directly on implanted devices using autodiff
- Reproducible Research — Deterministic builds critical for FDA-regulated medical devices and neuroscience studies
- Edge Deployment — Deploy to resource-constrained BCI hardware (ARM Cortex-M, RISC-V) with minimal runtime overhead
See Phase 13 in the roadmap for planned neuroscience-specific standard library modules, data format support, and benchmarks.
MIND's design also supports machine learning research, embedded AI, and safety-critical intelligent systems requiring auditable execution and reproducible builds.
MIND Core v1 follows the public contract in mind-spec Core v1. The stability
model, SemVer policy, and CLI guarantees are documented in
docs/versioning.md.
The /docs/benchmarks.md report covers baseline compiler/runtime performance, regression tracking, and methodology.
Upcoming milestones and release planning live in /docs/roadmap.md.
MIND follows an open-core dual licensing model maintained by STARGA Inc.
- **Community Edition** (this repository): the language, core compiler, and runtime found here are provided under the Apache License, Version 2.0. See LICENSE for the full text.
- **Enterprise & SaaS Offerings**: enterprise-only features, hosted “MIND Cloud” services, and proprietary extensions are available under a separate commercial license from STARGA Inc. These components are not covered by the Apache License and are not present in this repository.

Commercial and trademark terms are summarized in LICENSE-COMMERCIAL and governed by separate agreements with STARGA Inc.
For commercial licensing, OEM partnerships, or large-scale deployments, please contact:
info@star.ga or legal@star.ga.
Looking for implementation details? Start in /docs and join the conversation in mind-runtime and mind-spec.