This is the reference implementation of our NeurIPS 2023 paper Add and Thin: Diffusion for Temporal Point Processes.
If you build upon this work, please cite our paper as follows:
```bibtex
@inproceedings{luedke2023add,
  title={Add and Thin: Diffusion for Temporal Point Processes},
  author={David L{\"u}dke and Marin Bilo{\v{s}} and Oleksandr Shchur and Marten Lienen and Stephan G{\"u}nnemann},
  booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
  year={2023},
  url={https://openreview.net/forum?id=tn9Dldam9L}
}
```
The code has been tested on a cluster of Linux nodes using SLURM. We cannot guarantee that the code will work correctly if the requirements below are not met.
To properly install and run our code we recommend using a virtual environment (e.g., created via `pyenv-virtualenv` or `conda`).
The entire installation process consists of 3 steps. You can skip step 0 at your own "risk".
**Step 0: create the environment.** In the following we show how to do so via `pyenv` and `pyenv-virtualenv`. The steps are the following:
- install `pyenv` (if you don't have it yet) by following the original guidelines;
- install the correct Python version:

  ```bash
  pyenv install 3.10.4
  ```

- create a virtual environment with the correct version of Python:

  ```bash
  pyenv virtualenv 3.10.4 add_thin
  ```
**Step 1: download the code.** This step allows you to download the code to your machine, move into the correct directory, and (optionally) activate the environment. The steps are the following:
- clone the repository:

  ```bash
  git clone https://github.com/davecasp/add-thin.git
  ```

- change into the repository:

  ```bash
  cd add-thin
  ```

- (optional) activate the environment:

  ```bash
  pyenv activate add_thin
  ```
**Step 2: install the packages.** All the required packages are defined in the `pyproject.toml` file and can be easily installed via pip as follows:

```bash
pip install -e .
```

Configuring experiments and running code for Add-Thin is done via [hydra](https://hydra.cc/docs/intro/). If you are unfamiliar with how hydra works, please check out the documentation.
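If you are new to hydra, you can inspect the fully composed configuration for a run without starting training by using hydra's standard `--cfg` flag (the contents of the printed config depend on this repository's config files):

```bash
# Print the composed job configuration and exit (standard hydra flag).
./train.py --cfg job
```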
To run Add-Thin with the tuned hyperparameters for different datasets:

```bash
./train.py -m --config-name config_name
```

where `config_name` should be `density_experiments_[1-4]` or `forecast_experiments_[1-4]`. All seeds and datasets are scheduled as a grid search via the multirun flag (`-m`).
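For example, to launch the first density experiment configuration:

```bash
# -m (multirun) schedules every seed/dataset combination in the config.
./train.py -m --config-name density_experiments_1
```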
To run Add-Thin with your own parameters:

```bash
./train.py
```

where you are expected to set the parameter values either in the default configs or via command-line flags.
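For example, a minimal sketch of a command-line override (`data.name` is a key used by the sweep command below; any other parameter names in the default configs may differ):

```bash
# Override a config value directly on the command line (hydra syntax).
./train.py data.name=taxi
```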
To run a hyperparameter sweep over the learning rate and number of mixture components:

```bash
./train.py -m --config-name hyperparameter_sweep data.name=data_set
```

where `data_set` can be any of the dataset names: `hawkes2`, `reddit_politics_submissions`, `reddit_askscience_comments`, `yelp_mississauga`, `yelp_airport`, `taxi`, `nonstationary_renewal`, `pubg`, `twitter`, `stationary_renewal`, `self_correcting`, `nonstationary_poisson`, `hawkes1`.
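For example, to sweep over the `hawkes1` dataset:

```bash
./train.py -m --config-name hyperparameter_sweep data.name=hawkes1
```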
A trained model can be evaluated against the test set via either the density notebook or the forecast notebook.
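Assuming the notebooks are included in the repository (the exact filenames and locations are not specified here), they can be opened with a standard Jupyter session started from the repository root:

```bash
# Start Jupyter from the repository root and open the density or
# forecast notebook from the browser UI (notebook paths may vary).
jupyter notebook
```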