This repository presents a simple approach to human tracking and trajectory prediction based on human keypoint detection and a Kalman filter.
CUDA-compatible video card: this repository was tested on an RTX 2080 Ti.
Please check `docker/Dockerfile`.
The sole task of this project is to demonstrate how human detection and linear Kalman filtering can be combined for tracking and trajectory prediction. Here are some examples:
Every *n* seconds, future trajectories for the head or neck are predicted and drawn on the current frame.
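As an illustration of that prediction step, here is a minimal constant-velocity extrapolation sketch. The function name and signature are assumptions for illustration; in the project, the position and velocity come from the Kalman filter state.

```python
def predict_trajectory(x, y, vx, vy, horizon_s, fps):
    """Extrapolate future (x, y) positions for `horizon_s` seconds,
    one point per frame, assuming constant velocity in pixels/frame."""
    n_frames = int(horizon_s * fps)
    return [(x + vx * t, y + vy * t) for t in range(1, n_frames + 1)]

# A head at (100, 50) moving 2 px/frame right and 1 px/frame down,
# predicted 2 seconds ahead at 16 FPS, yields 32 future points.
points = predict_trajectory(100.0, 50.0, 2.0, 1.0, horizon_s=2.0, fps=16)
```

The predicted points are what gets drawn as the trajectory line on the current frame.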
To reproduce these results, follow the steps in the next sections.
First of all, install Docker:

```shell
sudo apt install docker.io
```
After that, install nvidia-docker v2.0:

```shell
sudo apt-get install nvidia-docker2
sudo pkill -SIGHUP dockerd
```
Then build the Docker image:

```shell
cd docker
sudo docker build . -t sawseen/pytorch_cv:pose_forecaster
cd ..
```

Or alternatively use make:

```shell
make build
```

If you want to track and predict trajectories based on head detection, you need to download the checkpoint and put it into the `checkpoints/` folder of the current project.
The settings are located in the `configs/config.yaml` file:
```yaml
MAIN:
  SEED: 42
  HEAD_DETECTION: True
DETECTING:
  HEAD_CONFIG_FILE_PATH: './configs/head_cascade_rcnn_dconv_c3-c5_r50_fpn_1x.py'
  HEAD_CHECKPOINT_FILE_PATH: './models/cascade_epoch_11.pth'
  SCORE_THRESHOLD: 0.75
  NMS_IOU_THRESHOLD: 0.2
TRACKING:
  STATE_NOISE: 1000.0
  R_SCALE: 5.0
  Q_VAR: 100.0
  IOU_THRESHOLD: 0.1
  MAX_MISSES: 6
  MIN_HITS: 2
OUTPUT_VIDEO:
  CYCLE_LEN: 2
  BLOB_SIZE: 4
  LINE_WIDTH: 8
  FPS: 16
  MIN_AGE_FOR_TRAJECTORY: 12
  DRAW_BOX: True
  COMPRESS: False
```
- `MAIN` section consists of `SEED` for reproducibility and `HEAD_DETECTION`, which selects the model trained on human heads.
- `DETECTING` describes how detections are made. `HEAD_CONFIG_FILE_PATH` and `HEAD_CHECKPOINT_FILE_PATH` are required for the head detection model. `SCORE_THRESHOLD` is a detection threshold, while `NMS_IOU_THRESHOLD` is an intersection-over-union threshold for non-maximum suppression.
- `TRACKING` is responsible for tracking parameters. `STATE_NOISE`, `R_SCALE`, and `Q_VAR` are Kalman filter parameters. `IOU_THRESHOLD` is an intersection-over-union threshold for deciding whether a detection and the predicted state of a current tracker are matched. `MAX_MISSES` tells how long a tracker lives without any matched detections. `MIN_HITS` is the minimum number of frames before a tracker is drawn.
- `OUTPUT_VIDEO` specifies parameters of the output video. `CYCLE_LEN` is the period of the predicted trajectory in seconds. `BLOB_SIZE` and `LINE_WIDTH` are the sizes, in pixels, of the blob around the human anchor point and of the trajectory line. `FPS` is the desired frame rate of the output video. `MIN_AGE_FOR_TRAJECTORY` is the minimum tracker age in seconds before its trajectory is drawn. `DRAW_BOX` states whether to draw bounding boxes around objects. You can use `COMPRESS` to enable output video compression.
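To illustrate how the tracking parameters could enter the filter, here is a minimal per-axis constant-velocity Kalman filter sketch. The class name and the exact noise model are assumptions for illustration, not the project's implementation: here `STATE_NOISE` seeds the initial covariance, `Q_VAR` sets the process noise, and `R_SCALE` sets the measurement noise variance.

```python
class Kalman1D:
    """Illustrative constant-velocity Kalman filter for one coordinate axis
    (run one instance per axis). Not the project's actual implementation."""

    def __init__(self, z0, state_noise=1000.0, r_scale=5.0, q_var=100.0):
        self.x = [z0, 0.0]                    # state: [position, velocity]
        self.P = [[state_noise, 0.0],         # state covariance
                  [0.0, state_noise]]
        self.R = r_scale                      # measurement noise variance
        self.Q = q_var                        # process noise variance

    def predict(self):
        # x = F x with F = [[1, 1], [0, 1]] (one-frame transition)
        self.x = [self.x[0] + self.x[1], self.x[1]]
        # P = F P F^T + Q (process noise added on the diagonal for brevity)
        p00, p01 = self.P[0]
        p10, p11 = self.P[1]
        self.P = [[p00 + p01 + p10 + p11 + self.Q, p01 + p11],
                  [p10 + p11, p11 + self.Q]]
        return self.x[0]

    def update(self, z):
        # Measurement is position only: H = [1, 0]
        y = z - self.x[0]                     # innovation
        s = self.P[0][0] + self.R             # innovation variance
        k0 = self.P[0][0] / s                 # Kalman gain (position)
        k1 = self.P[1][0] / s                 # Kalman gain (velocity)
        self.x = [self.x[0] + k0 * y, self.x[1] + k1 * y]
        # P = (I - K H) P
        p00, p01 = self.P[0]
        self.P = [[(1 - k0) * p00, (1 - k0) * p01],
                  [self.P[1][0] - k1 * p00, self.P[1][1] - k1 * p01]]

kf = Kalman1D(z0=10.0)
for z in [12.0, 14.1, 15.9]:      # noisy position measurements, one per frame
    kf.predict()
    kf.update(z)
```

With a large `STATE_NOISE` the filter trusts the first measurements almost completely, then gradually settles on a velocity estimate; the `predict` step is what produces the future trajectory points between detections.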
To start working in the container, run the following commands:

```shell
make run
make exec
```

Currently `demo.py` works only with videos in the current folder.
To process your video, launch `demo.py` with the appropriate arguments:

```shell
python demo.py input_video.mp4 --output-video output_video.mp4 --prediction-length 2.0 --head-detection --draw-boxes
```

Where:

- `output-video`: output video file name,
- `prediction-length`: length of trajectories in seconds (overrides the config parameter),
- `head-detection`: flag whether to detect heads (overrides the config parameter),
- `draw-boxes`: flag whether to draw bounding boxes around objects.
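For reference, a CLI with these arguments could be built with `argparse` roughly as follows. This is a hypothetical reconstruction, not the actual parser in `demo.py`, and the defaults shown are assumptions:

```python
import argparse

def build_parser():
    """Hypothetical reconstruction of the CLI described above."""
    parser = argparse.ArgumentParser(
        description='Track people and predict their trajectories.')
    parser.add_argument('input_video',
                        help='input video file in the current folder')
    parser.add_argument('--output-video', default='output_video.mp4',
                        help='output video file name')
    parser.add_argument('--prediction-length', type=float, default=None,
                        help='trajectory length in seconds (overrides the config)')
    parser.add_argument('--head-detection', action='store_true',
                        help='detect heads instead of body keypoints')
    parser.add_argument('--draw-boxes', action='store_true',
                        help='draw bounding boxes around objects')
    return parser

# argparse exposes dashed options as underscored attributes,
# e.g. --prediction-length becomes args.prediction_length.
args = build_parser().parse_args(
    ['input_video.mp4', '--output-video', 'out.mp4',
     '--prediction-length', '2.0', '--head-detection'])
```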
After you have finished working, stop and remove the container:

```shell
make stop
```
