Nvidia's trt_pose Inference

This repo contains inference code for NVIDIA's trt_pose pose estimation and is aimed at beginners who want to try it out.

[Sample inference output image]

If you run into any issues, please let us know.

Getting Started

To get started with trt_pose, follow these steps.

Step 1 - Install trt_pose and torch2trt

Please follow the installation instructions in NVIDIA's official trt_pose repository and in torch2trt, an easy-to-use PyTorch-to-TensorRT converter.

Move the pretrained models into the tasks/human_pose/pretrained directory.
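
For reference, installing both packages typically looks like this (check the official READMEs for current prerequisites such as PyTorch and torchvision; the exact commands may have changed):

git clone https://github.com/NVIDIA-AI-IOT/torch2trt
cd torch2trt
sudo python setup.py install --plugins

git clone https://github.com/NVIDIA-AI-IOT/trt_pose
cd trt_pose
sudo python setup.py install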

Step 2 - Convert the model to a TensorRT model

If you have problems installing torch2trt, you can skip this step.

cd trt_pose/tasks/human_pose
python convert_trt.py --model pretrained/densenet121_baseline_att_256x256_B_epoch_160.pth --json human_pose.json --size 256

You can find the input image size in the name of the model file (for example, 256x256 in densenet121_baseline_att_256x256_B_epoch_160.pth).
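
For context, here is a minimal sketch of what the conversion does, following torch2trt's standard workflow (the actual convert_trt.py may differ in details; the model name and paths below are taken from the command above):

import json
import torch
import torch2trt
import trt_pose.models

# Read the pose topology to size the model's output layers.
with open('human_pose.json', 'r') as f:
    human_pose = json.load(f)
num_parts = len(human_pose['keypoints'])
num_links = len(human_pose['skeleton'])

# Build the network and load the pretrained PyTorch weights.
model = trt_pose.models.densenet121_baseline_att(num_parts, 2 * num_links).cuda().eval()
model.load_state_dict(torch.load('pretrained/densenet121_baseline_att_256x256_B_epoch_160.pth'))

# Trace with a dummy input of the expected size (256x256) to build the TensorRT engine.
data = torch.zeros((1, 3, 256, 256)).cuda()
model_trt = torch2trt.torch2trt(model, [data], fp16_mode=True, max_workspace_size=1 << 25)

# Save the optimized weights; this is the *_trt.pth file used in Step 3.
torch.save(model_trt.state_dict(), 'pretrained/densenet121_baseline_att_256x256_B_epoch_160_trt.pth')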

Step 3 - Run the inference code

There are two modes to test: running on an input video or using your webcam.

If you completed Step 2, run this:

python inference.py --trt_model pretrained/densenet121_baseline_att_256x256_B_epoch_160_trt.pth --json human_pose.json --size 256 --video_input test.mkv

If you skipped Step 2, run this:

python inference.py --trt_model pretrained/densenet121_baseline_att_256x256_B_epoch_160.pth --json human_pose.json --size 256 --video_input test.mkv
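
For context, a TensorRT-converted checkpoint is loaded with torch2trt's TRTModule rather than the usual model class. The sketch below shows the idea for the webcam case (illustrative only, not the exact contents of inference.py; the preprocessing constants are the standard ImageNet values used by trt_pose):

import cv2
import torch
from torch2trt import TRTModule

# Load the TensorRT-optimized weights saved in Step 2.
model_trt = TRTModule()
model_trt.load_state_dict(torch.load('pretrained/densenet121_baseline_att_256x256_B_epoch_160_trt.pth'))

mean = torch.tensor([0.485, 0.456, 0.406]).cuda().view(3, 1, 1)
std = torch.tensor([0.229, 0.224, 0.225]).cuda().view(3, 1, 1)

cap = cv2.VideoCapture(0)  # webcam mode; pass a file path instead for video input
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # Resize to the model's input size and convert OpenCV's BGR to RGB.
    image = cv2.cvtColor(cv2.resize(frame, (256, 256)), cv2.COLOR_BGR2RGB)
    tensor = torch.from_numpy(image).permute(2, 0, 1).float().div(255.0)
    tensor = ((tensor.cuda() - mean) / std).unsqueeze(0)
    cmap, paf = model_trt(tensor)  # confidence maps and part affinity fields
    # Keypoint decoding (trt_pose.parse_objects.ParseObjects) and drawing are omitted here.
cap.release()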

The sample image above was generated from the video at https://www.youtube.com/watch?v=YzcawvDGe4Y.
