TVB-Optim is a JAX-based framework for brain network simulation and gradient-based optimization.
- Gradient-based optimization: fit thousands of parameters using automatic differentiation through the entire simulation pipeline.
- Performance: JAX-powered with seamless scaling to GPUs and TPUs.
- Flexible & extensible: build models with Network Dynamics, a composable framework for whole-brain modeling; existing TVB workflows are supported via TVB-O.
- Intuitive parameter control: mark values for optimization with `Parameter()`, and define exploration spaces with `Axes` for automatic parallel evaluation via JAX `vmap`/`pmap`.
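The two core ideas above, differentiating through a simulation and evaluating a parameter axis in parallel, can be sketched in plain JAX. Everything here (the toy two-node network, `simulate`, the parameter grid) is illustrative and not part of the TVB-Optim API:

```python
import jax
import jax.numpy as jnp

def simulate(G, x0=jnp.array([0.1, 0.2]), steps=200, dt=0.1):
    """Toy two-node network integrated with Euler steps (illustrative only)."""
    W = jnp.array([[0.0, 1.0], [1.0, 0.0]])  # fixed coupling weights

    def step(x, _):
        dx = -x + G * W @ jnp.tanh(x)
        return x + dt * dx, x

    _, traj = jax.lax.scan(step, x0, None, length=steps)
    return traj

def loss(G, target=0.5):
    # Differentiable scalar objective defined on the simulated trajectory
    return (jnp.mean(simulate(G)) - target) ** 2

# Gradient of the loss through the whole simulation loop
g = jax.grad(loss)(0.5)

# Parallel evaluation over an exploration axis of coupling strengths
Gs = jnp.linspace(0.0, 1.0, 8)
losses = jax.vmap(loss)(Gs)  # losses has shape (8,)
```

TVB-Optim's `Parameter()` and `Axes` abstractions automate this pattern over full network models; the sketch only shows the underlying JAX mechanics.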
Requires Python 3.11 or above.

```shell
# Using uv (recommended)
uv pip install tvboptim

# Using pip
pip install tvboptim
```

```python
import jax.numpy as jnp
import optax

from tvboptim.experimental.network_dynamics import Network, solve, prepare
from tvboptim.experimental.network_dynamics.dynamics.tvb import ReducedWongWang
from tvboptim.experimental.network_dynamics.coupling import LinearCoupling
from tvboptim.experimental.network_dynamics.graph import DenseDelayGraph
from tvboptim.observations.tvb_monitors import Bold
from tvboptim.observations import compute_fc, rmse
from tvboptim.optim import OptaxOptimizer

# Assumes `weights`, `delays`, `target_fc`, and the `Heun` integrator
# are defined or imported elsewhere (e.g. loaded from an empirical dataset).

# Build brain network model
network = Network(
    dynamics=ReducedWongWang(),
    coupling={'delayed': LinearCoupling(incoming_states="S", G=0.5)},
    graph=DenseDelayGraph(weights, delays),
)

# Run simulation
result = solve(network, Heun(), t0=0.0, t1=60_000.0, dt=1.0)

# Optimize coupling strength to match empirical functional connectivity
simulator, params = prepare(network, Heun(), t0=0.0, t1=60_000.0, dt=1.0)
bold_monitor = Bold(history=result, period=720.0)

def loss(params):
    predicted_fc = compute_fc(bold_monitor(simulator(params)))
    return rmse(predicted_fc, target_fc)

opt = OptaxOptimizer(loss, optax.adam(learning_rate=0.03))
final_params, history = opt.run(params, max_steps=50)
```

See the full example with visualization in the documentation or run it directly in Google Colab:
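Under the hood, the FC-fitting loss is ordinary differentiable JAX code. Here is a minimal self-contained sketch: `compute_fc` as a Pearson correlation matrix and `rmse` are assumed stand-ins for the library helpers, with a toy linear "simulator" and plain gradient descent in place of `OptaxOptimizer`:

```python
import jax
import jax.numpy as jnp

def compute_fc(bold):
    """Functional connectivity as the Pearson correlation matrix over regions."""
    return jnp.corrcoef(bold, rowvar=False)  # bold has shape (time, regions)

def rmse(a, b):
    return jnp.sqrt(jnp.mean((a - b) ** 2))

# Synthetic stand-ins for simulated and empirical BOLD series
base = jax.random.normal(jax.random.PRNGKey(0), (500, 4))
noise = jax.random.normal(jax.random.PRNGKey(1), (500, 4))
target_fc = compute_fc(base + 0.8 * noise)  # "empirical" FC with true G = 0.8

def loss(params):
    bold = base + params["G"] * noise  # toy differentiable "simulator"
    return rmse(compute_fc(bold), target_fc)

params = {"G": jnp.array(0.1)}
for _ in range(50):  # plain gradient descent instead of optax.adam
    grads = jax.grad(loss)(params)
    params = jax.tree_util.tree_map(lambda p, g: p - 0.1 * g, params, grads)
# params["G"] should move from 0.1 toward the generating value 0.8
```

The real pipeline differs only in what sits inside `loss`: a full network simulation and BOLD monitor instead of the toy linear model.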
- Network Dynamics - Build differentiable brain network models in TVB-Optim
- Parameters & Optimization - Gradient-based parameter inference
- API Reference - Complete API documentation
We welcome contributions and questions from the community!
- Report Issues: Open an issue
- Ask Questions: Start a discussion
- Contribute Code: Open a pull request
If you use TVB-Optim in your research, please cite:
```bibtex
@article{2025tvboptim,
  title={Fast and Easy Whole-Brain Network Model Parameter Estimation with Automatic Differentiation},
  author={Pille, Marius and Martin, Leon and Richter, Emilius and Perdikis, Dionysios and Schirner, Michael and Ritter, Petra},
  journal={bioRxiv},
  year={2025},
  doi={10.1101/2025.11.18.689003}
}
```

Copyright © 2025 Charité Universitätsmedizin Berlin
