The official PyTorch implementation of Efficient Human Pose Estimation via 3D Event Point Cloud. We propose a novel 3D event point cloud based paradigm for human pose estimation and achieve efficient results on the DHP19 dataset.
Project page and paper.
We tested the project with the following dependencies:
- pytorch == 1.8.0+cu111
- torchvision == 0.9.0+cu111
- numpy == 1.19.2
- opencv-python == 4.4.0
- h5py == 3.3.0
- Windows 10 or Ubuntu 18.04
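To verify that an environment matches the versions listed above, a minimal sanity-check script (not part of the repository; the `+cu111` CUDA suffix is stripped before comparison):

```python
import importlib

def parse_version(v):
    """Split a version string like '1.8.0+cu111' into integer parts (CUDA tag dropped)."""
    return tuple(int(p) for p in v.split("+")[0].split("."))

# Import names mapped to the minimum versions listed above
# (opencv-python is imported as `cv2`).
REQUIRED = {
    "torch": "1.8.0",
    "torchvision": "0.9.0",
    "numpy": "1.19.2",
    "cv2": "4.4.0",
    "h5py": "3.3.0",
}

def meets(installed, required):
    """Return True if the installed version is at least the required one."""
    return parse_version(installed) >= parse_version(required)

if __name__ == "__main__":
    for mod, req in REQUIRED.items():
        try:
            m = importlib.import_module(mod)
            status = "OK" if meets(m.__version__, req) else "too old"
            print(f"{mod}: {m.__version__} ({status})")
        except ImportError:
            print(f"{mod}: not installed")
```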
Download the DHP19 dataset and generate the data following DHP19EPC.
Your workspace will look like this (note: change the data paths in the code to your own paths):
├── DHP19EPC_dataset # Stores test/train data
| ├─ ... # MeanLabel and LastLabel
├── EventPointPose # This repository
| ├─ checkpoints # Checkpoints and debug images
| ├─ dataset # Dataset
| ├─ DHP19EPC # To generate data for DHP19EPC_dataset
| ├─ evaluate # Evaluate model and save gif/mp4
| ├─ logs # Training logs
| ├─ models # Models
| ├─ P_matrices # Matrices in DHP19
| ├─ results # Stores results and our pretrained models
| ├─ srcimg # Source images
| ├─ tools # Utility functions
| ├─ main.py # Train/evaluate the model
cd ./EventPointPose
# train MeanLabel
python main.py --train_batch_size=16 --epochs=30 --num_points=2048 --model PointNet --name PointNet-2048 --cuda_num 0
# train LastLabel
python main.py --train_batch_size=16 --epochs=30 --num_points=2048 --model PointNet --name PointNet-2048-last --cuda_num 0 --label last
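The `--num_points=2048` argument above fixes the number of events fed to the point cloud model. As a rough illustration of the idea (not the repository's actual implementation; the `(x, y, t)` event layout and function name are assumptions), a variable-length event stream can be randomly subsampled, or padded by resampling, to a fixed-size point cloud:

```python
import random

def sample_fixed_size(events, num_points=2048, seed=0):
    """Sample or pad a variable-length list of events to exactly num_points.

    `events` is a list of (x, y, t) tuples; this layout is illustrative
    and not necessarily the format used by the repository.
    """
    rng = random.Random(seed)
    if len(events) >= num_points:
        # More events than needed: random subsampling without replacement.
        return rng.sample(events, num_points)
    # Too few events: pad by resampling existing events with replacement.
    return events + [rng.choice(events) for _ in range(num_points - len(events))]
```

For example, `cloud = sample_fixed_size(raw_events, num_points=2048)` always yields 2048 points regardless of how many events the time window contained.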
You can evaluate your model and output GIFs as well as videos following this doc.
Our pretrained models from the paper can be found here: Baidu Cloud or Google Drive. They can also be found on the GitHub Releases tab.
If you find our project helpful in your research, please cite with:
@inproceedings{chen2022EPP,
title={Efficient Human Pose Estimation via 3D Event Point Cloud},
author={Chen, Jiaan and Shi, Hao and Ye, Yaozu and Yang, Kailun and Sun, Lei and Wang, Kaiwei},
booktitle={2022 International Conference on 3D Vision (3DV)},
year={2022}
}
For any questions, feel free to e-mail us at chenjiaan@zju.edu.cn or haoshi@zju.edu.cn, and we will do our best to help you. =)
Thanks to these repositories:

