pip install torch pybind11 pyyaml numpy
pip install rpg_vid2e/esim_py/
pip install rpg_vid2e/esim_torch/

The code assumes the following data structure:
ENV_ROOT
|
+-- ENV_0
|   |
|   +-- Data_easy
|       |
|       +-- P000
|       |   |
|       |   +-- image_lcam_front                 # 1000 Hz images
|       |   |   |
|       |   |   +-- 000000_lcam_front.png
|       |   |   +-- ...
|       |   |
|       |   +-- events                           # output folder for event data
|       |   |   |
|       |   |   +-- hf_time_pose_lcam_front.txt  # timestamp file
|       |   |   +-- hf_pose_lcam_front.txt       # pose file
|       |   |
|       |   +-- event_generation_config.yaml     # configuration file (detailed below)
|       |
|       +-- P001
|
+-- ENV_1
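Before launching generation, it can help to sanity-check this layout. A minimal sketch (the helper name and the specific checks are illustrative, not part of the repository):

```python
from pathlib import Path

def check_trajectory(env_root: str, env: str, data_folder: str, traj: str) -> None:
    """Warn about missing inputs for one trajectory, e.g. ENV_0/Data_easy/P000."""
    traj_dir = Path(env_root) / env / data_folder / traj
    image_dir = traj_dir / "image_lcam_front"
    config = traj_dir / "event_generation_config.yaml"

    if not image_dir.is_dir() or not any(image_dir.glob("*_lcam_front.png")):
        print(f"missing input images: {image_dir}")
    if not config.is_file():
        print(f"missing config: {config}")

check_trajectory("ENV_ROOT", "ENV_0", "Data_easy", "P000")
```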
An example of the config file: event_generation_config.yaml
contrast_threshold_negative: 0.15
contrast_threshold_positive: 0.18
refractory_period_ns: 50000
thresh_random_sigma: 0.03
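These fields set the simulator parameters per trajectory. Below is a minimal sketch of reading the file with PyYAML; the interpretation of thresh_random_sigma as the standard deviation of per-pixel Gaussian jitter on the thresholds, and the sensor resolution, are assumptions rather than facts taken from the script:

```python
import numpy as np
import yaml

with open("ENV_ROOT/ENV_0/Data_easy/P000/event_generation_config.yaml") as f:
    cfg = yaml.safe_load(f)

ct_pos = cfg["contrast_threshold_positive"]  # e.g. 0.18
ct_neg = cfg["contrast_threshold_negative"]  # e.g. 0.15
refractory_ns = cfg["refractory_period_ns"]  # e.g. 50000 ns = 50 us

# Assumption: per-pixel Gaussian jitter on the nominal thresholds.
sigma = cfg["thresh_random_sigma"]
H, W = 480, 640  # hypothetical sensor resolution
ct_pos_map = np.clip(np.random.normal(ct_pos, sigma, (H, W)), 0.01, None)
ct_neg_map = np.clip(np.random.normal(ct_neg, sigma, (H, W)), 0.01, None)
```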
To generate events for a single environment, run:

python esim_torch/scripts/generate_events.py --task gen_tartanair --dataset_dir ENV_ROOT --env_name ENV_0 --data_folder_name Data_easy --events_output_dir_name events_output --folder_exist_action skip

For data generation using multiple threads, run:

python esim_torch/scripts/generate_events_batch.py --env_names AbandonedCableAutoExposure AncientTownsAutoExposure CastleFortressAutoExposure --code_directory PATH_TO_rpg_vid2e --data_directory ENV_ROOT --events_output_dir_name events_output
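The batch script's internals are not shown here; a minimal sketch of the same idea using Python's standard library (the worker count is arbitrary, and the flags mirror the single-environment call above):

```python
import subprocess
from concurrent.futures import ThreadPoolExecutor

ENVS = ["AbandonedCableAutoExposure", "AncientTownsAutoExposure", "CastleFortressAutoExposure"]

def generate(env_name: str) -> int:
    # One generate_events.py process per environment; threads just supervise them.
    cmd = [
        "python", "esim_torch/scripts/generate_events.py",
        "--task", "gen_tartanair",
        "--dataset_dir", "ENV_ROOT",
        "--env_name", env_name,
        "--data_folder_name", "Data_easy",
        "--events_output_dir_name", "events_output",
        "--folder_exist_action", "skip",
    ]
    return subprocess.run(cmd).returncode

with ThreadPoolExecutor(max_workers=3) as pool:
    print(list(pool.map(generate, ENVS)))
```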
This repository contains code that implements video-to-events conversion as described in Gehrig et al. CVPR'20, as well as the dataset used there. The paper can be found here.

If you use this code in an academic context, please cite the following work:
Daniel Gehrig, Mathias Gehrig, Javier Hidalgo-Carrió, Davide Scaramuzza, "Video to Events: Recycling Video Datasets for Event Cameras", The Conference on Computer Vision and Pattern Recognition (CVPR), 2020
@InProceedings{Gehrig_2020_CVPR,
author = {Daniel Gehrig and Mathias Gehrig and Javier Hidalgo-Carri\'o and Davide Scaramuzza},
title = {Video to Events: Recycling Video Datasets for Event Cameras},
booktitle = {{IEEE} Conf. Comput. Vis. Pattern Recog. (CVPR)},
month = {June},
year = {2020}
}

- We now support frame interpolation done by FILM.
- We release a web app and interactive demo which converts your webcam stream into events. Try it out here.
- We now also release new python bindings for esim with GPU support. Details are here.
Try out the interactive demo and webcam support here.
The synthetic N-Caltech101 dataset, as well as the video sequences used for event conversion, can be found here. For each sample of each class it contains events in the form class/image_%04d.npz and images in the form class/image_%04d/images/image_%05d.png, as well as the corresponding image timestamps in class/image_%04d/timestamps.txt.
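A minimal sketch of loading one sample under this layout (the class name and sample index are hypothetical):

```python
from pathlib import Path
import numpy as np

sample = Path("N-Caltech101") / "airplanes"  # hypothetical class directory

events = np.load(sample / "image_0001.npz")                    # events for one sample
stamps = np.loadtxt(sample / "image_0001" / "timestamps.txt")  # image timestamps
frames = sorted((sample / "image_0001" / "images").glob("*.png"))

print(events.files, len(stamps), len(frames))
```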
Clone the repo recursively with submodules
git clone git@github.com:uzh-rpg/rpg_vid2e.git --recursive

First, download the FILM checkpoint and move it to the repository root:
wget https://rpg.ifi.uzh.ch/data/VID2E/pretrained_models.zip -O /tmp/temp.zip
unzip /tmp/temp.zip -d rpg_vid2e/
rm -rf /tmp/temp.zip

Make sure to install the following:
* Anaconda Python 3.9
* CUDA Toolkit 11.2.1
* cuDNN 8.1.0
conda create --name vid2e python=3.9
conda activate vid2e
pip install -r rpg_vid2e/requirements.txt
conda install -y -c conda-forge pybind11 matplotlib
conda install -y pytorch torchvision torchaudio cudatoolkit=11.3 -c pytorch

Build the python bindings for ESIM:
pip install rpg_vid2e/esim_py/

Build the python bindings with GPU support:
pip install rpg_vid2e/esim_torch/

This package provides code for adaptive upsampling with frame interpolation based on Super-SloMo.
Consult the README for detailed instructions and examples.
This package exposes python bindings for ESIM which can be used within a training loop.
For detailed instructions and examples, consult the README.
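As a sketch, constructing a simulator and generating events from a folder of images might look as follows; the exact constructor signature is documented in the esim_py README, so treat the argument values here as placeholders:

```python
import esim_py

# Positive/negative contrast thresholds, refractory period (s),
# log epsilon for numerical stability, and whether to use log intensity.
esim = esim_py.EventSimulator(0.2, 0.2, 1e-4, 1e-3, True)

# Events from numbered images in a folder plus a timestamps file (seconds).
events = esim.generateFromFolder("path/to/images", "path/to/timestamps.txt")

# Each row of the returned array is one event: x, y, t, polarity.
print(events.shape)
```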
This package exposes python bindings for ESIM with GPU support.
For detailed instructions and examples, consult the README.
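A sketch of GPU event generation from log-intensity frames and nanosecond timestamps; the class name and tensor conventions follow the esim_torch README, but treat the shapes and values as illustrative:

```python
import torch
import esim_torch

esim = esim_torch.ESIM(contrast_threshold_neg=0.2,
                       contrast_threshold_pos=0.2,
                       refractory_period_ns=0)

# 10 grayscale log-intensity frames with int64 timestamps in nanoseconds.
log_images = torch.randn(10, 180, 240).cuda()
timestamps_ns = (torch.arange(10) * 1_000_000).cuda()

events = esim.forward(log_images, timestamps_ns)  # dict of tensors: x, y, t, p
print(events["x"].shape)
```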
To run an example, first upsample the example videos:
device=cpu
# device=cuda:0
python upsampling/upsample.py --input_dir=example/original --output_dir=example/upsampled --device=$device
This will write the upsampled videos to the example/upsampled folder. To generate events, use
python esim_torch/generate_events.py --input_dir=example/upsampled \
--output_dir=example/events \
--contrast_threshold_neg=0.2 \
--contrast_threshold_pos=0.2 \
--refractory_period_ns=0
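The generated events land under example/events. A minimal sketch for inspecting the output files (the .npz layout is an assumption; the stored arrays are typically x, y, t, p):

```python
import glob
import numpy as np

# Walk all generated event files and print their array names and sizes.
for path in sorted(glob.glob("example/events/**/*.npz", recursive=True)):
    ev = np.load(path)
    print(path, {k: ev[k].shape for k in ev.files})
```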