FastSAM-with-video-on-NVIDIA-Jetson

This is a demo for deploying FastSAM on an NVIDIA Jetson device, with support for video input.

This has been tested on a reComputer Jetson J4011; however, any NVIDIA Jetson device should work for deploying this demo.

Installation

  • Step 1: Flash the JetPack OS to the reComputer Jetson device (Refer to here).

  • Step 2: Access the terminal of the Jetson device, then install and upgrade pip

sudo apt update
sudo apt install -y python3-pip
pip3 install --upgrade pip
  • Step 3: Clone the following repo
git clone https://github.com/CASIA-IVA-Lab/FastSAM
  • Step 4: Open requirements.txt
cd FastSAM
vi requirements.txt
  • Step 5: Comment out the following lines. In vi, press i first to enter insert mode, make the edit, then press ESC and type :wq to save and quit
# torch>=1.7.0
# torchvision>=0.8.1

Note: torch and torchvision are excluded here because they will be installed separately in Step 8. If you prefer not to edit the file by hand, see the sketch below.
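The same edit can also be scripted as a non-interactive alternative to vi. This is a small convenience sketch, not part of the original steps; the filename comment_out_torch.py is just a suggestion, and it assumes you run it from inside the FastSAM directory:

# comment_out_torch.py: hypothetical helper; run from the FastSAM directory.
# Comments out the torch and torchvision pins in requirements.txt.
with open('requirements.txt') as f:
    lines = f.readlines()

with open('requirements.txt', 'w') as f:
    for line in lines:
        # Prefix the two pinned packages with '#' so pip skips them
        if line.startswith(('torch>=', 'torchvision>=')):
            f.write('# ' + line)
        else:
            f.write(line)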

  • Step 6: Install the necessary packages
pip install -r requirements.txt
  • Step 7: Install CLIP
pip install git+https://github.com/openai/CLIP.git
  • Step 8: Install PyTorch and Torchvision (Refer to here). A quick sanity check for this step follows Step 9 below.

  • Step 9: Clone this demo.

git clone https://github.com/yuyoujiang/FastSAM-with-video-on-NVIDIA-Jetson.git
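Before moving on, it is worth confirming that Step 8 picked up GPU-enabled builds. A minimal sanity check, assuming torch and torchvision installed successfully:

import torch
import torchvision

print(torch.__version__)          # should report the Jetson (CUDA) build you installed
print(torchvision.__version__)
print(torch.cuda.is_available())  # should print True on a correctly flashed Jetson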

Prepare The Model File

The pretrained models are PyTorch models, and you can use them directly for inference on the Jetson device. However, for better inference speed, you can convert them to TensorRT-optimized models by following the instructions below.

  • Step 1: Download model weights in PyTorch format (Refer to here).

  • Step 2: Create a new Python script and enter the following code. Save and execute the file.

from ultralytics import YOLO

model = YOLO('FastSAM-s.pt')  # load the pretrained FastSAM-s weights
# TensorRT FP32 export
# model.export(format='engine', device='0', imgsz=640)
# TensorRT FP16 export
model.export(format='engine', device='0', imgsz=640, half=True)

Tip: Refer to here to learn more about yolo export.
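Once the export finishes, the resulting FastSAM-s.engine file can be loaded back through the same ultralytics API for a quick smoke test. In this sketch, test.jpg is just a placeholder for any image on disk:

from ultralytics import YOLO

# Load the TensorRT engine produced by the export step
model = YOLO('FastSAM-s.engine')

# Run a single inference to confirm the engine works
results = model('test.jpg', imgsz=640)
print(results[0].speed)  # per-stage timings in milliseconds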

Let's Run It!

For a video file

python3 Inference_video.py --model_path <path to model> --img_path <path to input video> --imgsz 640

For a webcam

python3 Inference_video.py --model_path <path to model> --img_path <id of camera> --imgsz 640
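Here --img_path takes either a video file path or an integer camera id (for example, 0 for the default camera). Conceptually, Inference_video.py runs a loop like the one below; this is only an illustrative sketch built on OpenCV and ultralytics, not the demo's actual code:

import cv2
from ultralytics import YOLO

model = YOLO('FastSAM-s.engine')

# For a webcam, pass the integer camera id (e.g. 0); for a file, pass its path
cap = cv2.VideoCapture(0)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = model(frame, imgsz=640)      # segment everything in the frame
    annotated = results[0].plot()          # draw the predicted masks on the frame
    cv2.imshow('FastSAM', annotated)
    if cv2.waitKey(1) & 0xFF == ord('q'):  # press q to quit
        break
cap.release()
cv2.destroyAllWindows()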

References

https://github.com/ultralytics/
https://github.com/CASIA-IVA-Lab/FastSAM
https://wiki.seeedstudio.com
