TVB-Optim

JAX-based framework for brain network simulation and gradient-based optimization.

Key Features

  • Gradient-based optimization: Fit thousands of parameters using automatic differentiation through the entire simulation pipeline
  • Performance: JAX-powered with seamless GPU/TPU scaling
  • Flexible & extensible: Build models with Network Dynamics, a composable framework for whole-brain modeling; existing TVB workflows are supported via TVB-O.
  • Intuitive parameter control: Mark values for optimization with Parameter(); define exploration spaces with Axes for automatic parallel evaluation via JAX vmap/pmap (see the sketch after this list).
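
For example, marking a single value as optimizable is a one-liner. A minimal sketch, assuming Parameter wraps an initial value and is importable from the package root (the import path is an assumption, not confirmed API):

from tvboptim.experimental.network_dynamics.coupling import LinearCoupling
from tvboptim import Parameter  # import path assumed

# Wrap the global coupling strength so the optimizer treats it as a free parameter
coupling = LinearCoupling(incoming_states="S", G=Parameter(0.5))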

Installation

Requires Python 3.11 or later.

# Using uv (recommended)
uv pip install tvboptim

# Using pip
pip install tvboptim
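
To check that the install worked (assuming the package exposes __version__, as most PyPI packages do):

python -c "import tvboptim; print(tvboptim.__version__)"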

Quick Example

import jax.numpy as jnp
from tvboptim.experimental.network_dynamics import Network, solve, prepare
from tvboptim.experimental.network_dynamics.dynamics.tvb import ReducedWongWang
from tvboptim.experimental.network_dynamics.coupling import LinearCoupling
from tvboptim.experimental.network_dynamics.graph import DenseDelayGraph
from tvboptim.experimental.network_dynamics.integrators import Heun  # used below; import path assumed
from tvboptim.observations.tvb_monitors import Bold
from tvboptim.observations import compute_fc, rmse
from tvboptim.optim import OptaxOptimizer
import optax

# Toy structural connectivity; substitute empirical weight/delay matrices
n_nodes = 76
weights = jnp.ones((n_nodes, n_nodes))
delays = jnp.zeros((n_nodes, n_nodes))

# Build brain network model
network = Network(
    dynamics=ReducedWongWang(),
    coupling={'delayed': LinearCoupling(incoming_states="S", G=0.5)},
    graph=DenseDelayGraph(weights, delays)
)

# Run simulation
result = solve(network, Heun(), t0=0.0, t1=60_000.0, dt=1.0)

# Optimize coupling strength to match empirical functional connectivity
simulator, params = prepare(network, Heun(), t0=0.0, t1=60_000.0, dt=1.0)
bold_monitor = Bold(history=result, period=720.0)  # BOLD sampled every 720 ms

# target_fc: empirical functional connectivity, shape (n_nodes, n_nodes)
target_fc = jnp.eye(n_nodes)  # placeholder; substitute your empirical FC

def loss(params):
    predicted_fc = compute_fc(bold_monitor(simulator(params)))
    return rmse(predicted_fc, target_fc)

opt = OptaxOptimizer(loss, optax.adam(learning_rate=0.03))
final_params, history = opt.run(params, max_steps=50)
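
Once optimization finishes, the fitted parameters can be passed back through the same simulator, e.g. to check how close the fitted functional connectivity gets to the target. A sketch using only the names defined above:

# Re-simulate with the optimized parameters and recompute FC
fitted_fc = compute_fc(bold_monitor(simulator(final_params)))
print(rmse(fitted_fc, target_fc))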

See the full example with visualization in the documentation, or run it directly in Google Colab.

Contributing

We welcome contributions and questions from the community!

Citation

If you use TVB-Optim in your research, please cite:

@article{2025tvboptim,
  title={Fast and Easy Whole-Brain Network Model Parameter Estimation with Automatic Differentiation},
  author={Pille, Marius and Martin, Leon and Richter, Emilius and Perdikis, Dionysios and Schirner, Michael and Ritter, Petra},
  journal={bioRxiv},
  year={2025},
  doi={10.1101/2025.11.18.689003}
}

Copyright © 2025 Charité Universitätsmedizin Berlin