
Efficient Human Pose Estimation via 3D Event Point Cloud

The official PyTorch implementation of Efficient Human Pose Estimation via 3D Event Point Cloud. We propose a novel 3D event point cloud-based paradigm for human pose estimation and achieve efficient results on the DHP19 dataset.

Project page and paper.

Dependencies

We tested the project with the following dependencies:

  • pytorch == 1.8.0+cu111
  • torchvision == 0.9.0+cu111
  • numpy == 1.19.2
  • opencv-python == 4.4.0
  • h5py == 3.3.0
  • Windows 10 or Ubuntu 18.04
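
A minimal install sketch (the exact wheel tags for your CUDA version and platform may differ; adjust as needed):

# tested PyTorch build (CUDA 11.1 wheels)
pip install torch==1.8.0+cu111 torchvision==0.9.0+cu111 -f https://download.pytorch.org/whl/torch_stable.html
# remaining Python dependencies (pin opencv-python to a 4.4.0.x build to match the tested version)
pip install numpy==1.19.2 opencv-python h5py==3.3.0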

Getting started

Dataset preparation

Download the DHP19 dataset and generate the event point cloud data following DHP19EPC.
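
A rough sketch of the workflow (the exact conversion scripts and arguments are documented in the DHP19EPC folder):

git clone https://github.com/itami-lidar/EventPointPose.git
cd EventPointPose/DHP19EPC
# follow the instructions in this folder to convert the raw DHP19 recordings
# into the DHP19EPC_dataset folder shown in the hierarchy below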

Folder Hierarchy

Your workspace will look like this (note: change the data paths in the code to your own paths):

├── DHP19EPC_dataset               # Store test/train data
|   ├─ ...                         # MeanLabel and LastLabel
├── EventPointPose                 # This repository
|   ├─ checkpoints                 # Checkpoints and debug images
|   ├─ dataset                     # Dataset
|   ├─ DHP19EPC                    # To generate data for DHP19EPC_dataset
|   ├─ evaluate                    # Evaluate model and save gif/mp4
|   ├─ logs                        # Training logs
|   ├─ models                      # Models
|   ├─ P_matrices                  # Matrices in DHP19
|   ├─ results                     # Store results or our pretrained models
|   ├─ srcimg                      # Source images
|   ├─ tools                       # Utility functions
|   ├─ main.py                     # train/eval model

Train model

cd ./EventPointPose

# train MeanLabel
python main.py --train_batch_size=16 --epochs=30 --num_points=2048 --model PointNet --name PointNet-2048 --cuda_num 0

# train LastLabel
python main.py --train_batch_size=16 --epochs=30 --num_points=2048 --model PointNet --name PointNet-2048-last --cuda_num 0 --label last
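
Judging from the commands above, --num_points sets how many event points are sampled per example, --label switches between the MeanLabel (default) and LastLabel ground truth, and --cuda_num selects the GPU. The same entry point should accept the other backbones in models/; a sketch (the exact --model strings are assumptions, check models/ and main.py):

# e.g. a DGCNN backbone trained with MeanLabel (model name is an assumption)
python main.py --train_batch_size=16 --epochs=30 --num_points=2048 --model DGCNN --name DGCNN-2048 --cuda_num 0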

Evaluate model

You can evaluate your model and export GIFs as well as videos by following this doc.
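
A hypothetical invocation, purely as a sketch (the real script names and flags are in the evaluate/ folder and the linked doc; nothing here is confirmed):

cd ./evaluate
# hypothetical script and arguments, replace with the ones from the evaluation doc
python evaluate.py --model PointNet --num_points=2048 --cuda_num 0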

Pretrained Model

Our pretrained models from the paper can be found on Baidu Cloud or Google Drive. They are also available in the GitHub Releases tab.
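
Since the results folder is meant to store our pretrained models (see the hierarchy above), a placement sketch (the archive name from the release is an assumption):

cd ./EventPointPose
# archive name is an assumption, use the file you actually downloaded
unzip pretrained_models.zip -d ./results/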

Video

Publication

If you find our project helpful in your research, please cite it as:

@inproceedings{chen2022EPP,
  title={Efficient Human Pose Estimation via 3D Event Point Cloud},
  author={Chen, Jiaan and Shi, Hao and Ye, Yaozu and Yang, Kailun and Sun, Lei and Wang, Kaiwei},
  booktitle={2022 International Conference on 3D Vision (3DV)},
  year={2022}
}

For any questions, feel free to e-mail us at chenjiaan@zju.edu.cn or haoshi@zju.edu.cn, and we will do our best to help you. =)

Acknowledgement

Thanks to these repositories:

DHP19, Simple baseline pose, SimDR, DGCNN, Point-Trans
