MIND logo

MIND — Native Language for Intelligent Systems


Overview

MIND is a Rust-first language and runtime for building intelligent systems with auditable foundations. It blends declarative tensor algebra, static shape inference, automatic differentiation, and MLIR/LLVM lowering in a compact toolchain that scales from research prototypes to production.

Open-core vs proprietary runtime

This repository contains the open-core stack: the MIND language, type system, compiler front-end, IR, and MLIR lowering passes. Production-grade runtime backends for CPU, GPU, and accelerators live in the private mind-runtime repository. Functions in src/exec/* marked with todo!() or unimplemented!() are runtime hooks that the proprietary backend fulfills.
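The split can be pictured with a small sketch of the hook pattern (names and signatures here are illustrative, not the actual src/exec/* API):

```rust
// Hypothetical sketch of the open-core hook pattern: the open
// repository declares the entry point, and the proprietary
// mind-runtime backend supplies the body. All names here are
// illustrative, not the actual src/exec/* API.

/// Minimal tensor handle used only by this sketch.
pub struct Tensor {
    pub data: Vec<f32>,
    pub shape: Vec<usize>,
}

/// Open-core declaration: shipped here as a stub, fulfilled by
/// the commercial backend.
pub fn launch_kernel(_input: &Tensor) -> Tensor {
    // In the open repository this is deliberately unimplemented;
    // calling it without the proprietary backend panics.
    todo!("provided by the proprietary mind-runtime backend")
}
```

Calling such a hook without the commercial backend panics with the `todo!` message, which is how the open-core crate signals the boundary.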

Quick Start

git clone https://github.com/cputer/mind.git
cd mind
cargo run -- eval "let x: Tensor[f32,(2,3)] = 0; x + 1"

Explore the full language tour and runtime guides in /docs.

CLI / Compiler Driver

The mindc binary provides a deterministic source→IR→MLIR pipeline suitable for demos and snapshot tests:

cargo run --bin mindc -- examples/hello_tensor.mind --emit-ir
cargo run --bin mindc -- examples/hello_tensor.mind --func main --autodiff --emit-grad-ir
cargo run --features "mlir-lowering autodiff" --bin mindc -- examples/hello_tensor.mind --func main --autodiff --emit-mlir

MLIR emission requires the mlir-lowering feature. Autodiff support is experimental and currently focused on single-output entry points.

GPU backend profile

The crate exposes the Core v1 GPU profile and a --target=gpu flag in mindc. CPU remains the only implemented target, but the GPU contract (enums, error model, and GPUBackend trait) is treated as stable for downstream runtimes. Selecting the GPU target returns a structured "no backend available for target gpu" error. See docs/gpu.md for the device/target model and current status.
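The stable-contract idea can be sketched as follows (the enum, error type, and function here are illustrative; the real `GPUBackend` trait and error model live in the crate and docs/gpu.md):

```rust
// Illustrative sketch of a target-selection contract in the
// spirit described above: selecting an unimplemented target
// yields a structured error, not a panic. Names are assumptions,
// not the crate's actual API.

#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum Target {
    Cpu,
    Gpu,
}

#[derive(Debug, PartialEq, Eq)]
pub enum BackendError {
    /// Returned when no runtime implements the requested target.
    NoBackend { target: Target },
}

/// Resolve a target to an available backend. Only CPU is
/// implemented in this sketch, mirroring the open-core crate.
pub fn select_backend(target: Target) -> Result<&'static str, BackendError> {
    match target {
        Target::Cpu => Ok("cpu-interpreter"),
        Target::Gpu => Err(BackendError::NoBackend { target }),
    }
}
```

Keeping the error structured (rather than a free-form string) is what lets downstream runtimes match on it and substitute their own backend.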

Core Concepts

  • Type System — ranks, shapes, polymorphism, and effect tracking.
  • Shapes — broadcasting, reductions, and shape-preserving tensor transforms.
  • Autodiff — reverse-mode differentiation on the SSA IR.
  • IR core — deterministic IR pipeline with verifier and printer.
  • IR & MLIR — compiler pipeline from parser to MLIR dialects.
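For a flavor of what static shape inference over broadcasts involves, here is a minimal sketch of right-aligned, NumPy-style broadcast-shape computation (shown for illustration; MIND's actual broadcasting semantics are defined in its shapes documentation):

```rust
// Minimal sketch of right-aligned broadcast-shape inference.
// NumPy-style rules are assumed for illustration; MIND's actual
// semantics are specified in its shape docs.

/// Compute the broadcast shape of two shapes, or None if they
/// are incompatible. Dimensions align from the right; a size of
/// 1 broadcasts against any size, and missing leading
/// dimensions are treated as size 1.
fn broadcast_shape(a: &[usize], b: &[usize]) -> Option<Vec<usize>> {
    let rank = a.len().max(b.len());
    let mut out = Vec::with_capacity(rank);
    for i in 0..rank {
        let da = if i < rank - a.len() { 1 } else { a[i - (rank - a.len())] };
        let db = if i < rank - b.len() { 1 } else { b[i - (rank - b.len())] };
        match (da, db) {
            (x, y) if x == y => out.push(x),
            (1, y) => out.push(y),
            (x, 1) => out.push(x),
            _ => return None, // incompatible dimensions: report a shape error
        }
    }
    Some(out)
}
```

Running this at compile time rather than at runtime is what lets a shape mismatch surface as a type error instead of a kernel failure.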

Applications

Neuroscience & Brain-Computer Interfaces

MIND's combination of static shape inference, reverse-mode autodiff, low-latency compilation, and deterministic execution makes it well suited for real-time neural signal processing and brain-computer interface (BCI) applications:

  • Real-time Neural Decoding — Sub-millisecond inference for invasive BCI systems (Neuralink-style implants, ECoG arrays)
  • Multi-channel Time-Series — Native tensor operations for Channel × Time × Batch neural data
  • On-device Adaptation — Gradient-based decoder optimization directly on implanted devices using autodiff
  • Reproducible Research — Deterministic builds critical for FDA-regulated medical devices and neuroscience studies
  • Edge Deployment — Deploy to resource-constrained BCI hardware (ARM Cortex-M, RISC-V) with minimal runtime overhead
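As a concrete flavor of the multi-channel time-series workloads above, here is a small sketch of a causal per-channel smoothing filter over a row-major channels × time buffer (plain Rust for illustration; this is not MIND code, and the layout is an assumption):

```rust
// Illustrative sketch: causal moving-average smoothing applied
// independently to each channel of row-major (channels × time)
// neural data. Plain Rust for flavor; not MIND's actual API.

/// Apply a causal moving-average filter of width `w` to each
/// channel. `data` holds `channels * time` samples, channel-major.
fn moving_average(data: &[f32], channels: usize, time: usize, w: usize) -> Vec<f32> {
    let mut out = vec![0.0; data.len()];
    for c in 0..channels {
        for t in 0..time {
            // Window covers at most the last `w` samples of this channel.
            let start = t.saturating_sub(w - 1);
            let window = &data[c * time + start..=c * time + t];
            out[c * time + t] = window.iter().sum::<f32>() / window.len() as f32;
        }
    }
    out
}
```

In a shape-checked language the `channels * time` invariant would be carried in the tensor type rather than by convention, which is the point of the static shape inference described above.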

See Phase 13 in the roadmap for planned neuroscience-specific standard library modules, data format support, and benchmarks.

Other Applications

MIND's design also supports machine learning research, embedded AI, and safety-critical intelligent systems requiring auditable execution and reproducible builds.

Stability & Versioning

MIND Core v1 follows the public contract in mind-spec Core v1. The stability model, SemVer policy, and CLI guarantees are documented in docs/versioning.md.

Architecture

Benchmarks

The /docs/benchmarks.md report covers baseline compiler/runtime performance, regression tracking, and methodology.

Roadmap

Upcoming milestones and release planning live in /docs/roadmap.md.

Licensing

MIND follows an open-core dual licensing model maintained by STARGA Inc.

  • Community Edition (this repository)
    The language, core compiler, and runtime found here are provided under the Apache License, Version 2.0.
    See LICENSE for the full text.

  • Enterprise & SaaS Offerings
    Enterprise-only features, hosted “MIND Cloud” services, and proprietary extensions are available under a separate commercial license from STARGA Inc. These components are not covered by the Apache License and are not present in this repository.
    Commercial and trademark terms are summarized in LICENSE-COMMERCIAL and governed by separate agreements with STARGA Inc.

For commercial licensing, OEM partnerships, or large-scale deployments, please contact: info@star.ga or legal@star.ga.


Looking for implementation details? Start in /docs and join the conversation in mind-runtime and mind-spec.
