Grounded-Segment-Anything

We plan to create a very interesting demo by combining Grounding DINO and Segment Anything! Right now this is a simple, small project; we will keep improving it and creating more interesting demos.

Why this project?

  • Segment Anything is a strong segmentation model, but it needs prompts (such as boxes or points) to generate masks.
  • Grounding DINO is a strong zero-shot detector that can generate high-quality boxes and labels from free-form text.
  • Combining the two models makes it possible to detect and segment everything with text inputs!
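The two-stage idea behind Grounded-SAM can be sketched in a few lines of Python. Note that `detect` and `segment` below are hypothetical placeholder callables standing in for GroundingDINO and SAM; the real models take more arguments and this is only an illustration of the data flow, not the repo's actual API:

```python
def grounded_segment(image, text_prompt, detect, segment):
    """Two-stage pipeline sketch: text -> boxes (detector), boxes -> masks (segmenter).

    `detect` and `segment` are placeholders for GroundingDINO and SAM.
    """
    boxes, labels = detect(image, text_prompt)      # zero-shot detection from free-form text
    masks = [segment(image, box) for box in boxes]  # one mask per detected box
    return list(zip(labels, boxes, masks))
```

The text prompt drives detection only; segmentation then runs purely on the detected boxes.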

Grounded-SAM

Grounded-SAM + Stable-Diffusion Inpainting

Imagine space

Some possible avenues for future work ...

  • Automatic image generation to construct new datasets.
  • Stronger foundation models with segmentation pre-training.
  • Collaboration with (Chat-)GPT.
  • A whole pipeline for automatically labeling images (with boxes and masks) and generating new images.

More Examples

🔥 What's New

  • 🆕 Check out our related human-face-edit branch here. We'll keep updating this branch with more interesting features. Here are some examples:

📑 Catalog

  • GroundingDINO Demo
  • GroundingDINO + Segment-Anything Demo
  • GroundingDINO + Segment-Anything + Diffusion Demo
  • Huggingface Demo
  • Colab demo

📖 Notebook Demo

See our notebook file as an example.

🛠️ Installation

The code requires python>=3.8, as well as pytorch>=1.7 and torchvision>=0.8. Please follow the instructions here to install both PyTorch and TorchVision dependencies. Installing both PyTorch and TorchVision with CUDA support is strongly recommended.
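A minimal, dependency-free sanity check for the Python version requirement can look like the following; the `meets_min_version` helper is our own illustration, not part of the repo:

```python
import sys

def meets_min_version(actual, required):
    """Return True if version tuple `actual` satisfies the minimum `required`."""
    return tuple(actual) >= tuple(required)

# The repo requires python>=3.8; analogous checks apply to torch/torchvision.
print(meets_min_version(sys.version_info[:2], (3, 8)))
```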

Install Segment Anything:

python -m pip install -e segment_anything

Install GroundingDINO:

python -m pip install -e GroundingDINO

Install diffusers:

pip install --upgrade diffusers[torch]

The following optional dependencies are necessary for mask post-processing, saving masks in COCO format, the example notebooks, and exporting the model in ONNX format. jupyter is also required to run the example notebooks.

pip install opencv-python pycocotools matplotlib onnxruntime onnx ipykernel

More details can be found in the Segment Anything and GroundingDINO installation instructions.

🏃 Run GroundingDINO Demo

  • Download the checkpoint for groundingdino:
cd Grounded-Segment-Anything

wget https://github.com/IDEA-Research/GroundingDINO/releases/download/v0.1.0-alpha/groundingdino_swint_ogc.pth
  • Run demo
export CUDA_VISIBLE_DEVICES=0
python grounding_dino_demo.py \
  --config GroundingDINO/groundingdino/config/GroundingDINO_SwinT_OGC.py \
  --grounded_checkpoint groundingdino_swint_ogc.pth \
  --input_image assets/demo1.jpg \
  --output_dir "outputs" \
  --box_threshold 0.3 \
  --text_threshold 0.25 \
  --text_prompt "bear" \
  --device "cuda"
  • The model prediction visualization will be saved in output_dir as follows:
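Roughly speaking, --box_threshold keeps a predicted box only when its best text-token score exceeds the threshold, while --text_threshold controls which tokens are kept for the phrase label. A simplified NumPy sketch of the box filtering step (assuming sigmoid-activated scores; `filter_boxes` is an illustrative helper, not a function from the repo):

```python
import numpy as np

def filter_boxes(logits, boxes, box_threshold=0.3):
    """Keep boxes whose best per-token score exceeds box_threshold.

    logits: (num_boxes, num_tokens) sigmoid-activated scores.
    boxes:  (num_boxes, 4) predicted boxes.
    """
    scores = logits.max(axis=1)   # best matching text token per box
    keep = scores > box_threshold
    return boxes[keep], scores[keep]
```

Lowering --box_threshold yields more (but noisier) detections; raising it keeps only high-confidence boxes.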

🏃‍♂️ Run Grounded-Segment-Anything Demo

  • Download the checkpoint for segment-anything and grounding-dino:
cd Grounded-Segment-Anything

wget https://dl.fbaipublicfiles.com/segment_anything/sam_vit_h_4b8939.pth
wget https://github.com/IDEA-Research/GroundingDINO/releases/download/v0.1.0-alpha/groundingdino_swint_ogc.pth
  • Run demo
export CUDA_VISIBLE_DEVICES=0
python grounded_sam_demo.py \
  --config GroundingDINO/groundingdino/config/GroundingDINO_SwinT_OGC.py \
  --grounded_checkpoint groundingdino_swint_ogc.pth \
  --sam_checkpoint sam_vit_h_4b8939.pth \
  --input_image assets/demo1.jpg \
  --output_dir "outputs" \
  --box_threshold 0.3 \
  --text_threshold 0.25 \
  --text_prompt "bear" \
  --device "cuda"
  • The model prediction visualization will be saved in output_dir as follows:
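One detail the demo handles internally: GroundingDINO predicts boxes as normalized (cx, cy, w, h), while SAM's predictor expects pixel-space (x1, y1, x2, y2) boxes as prompts. A sketch of that conversion (the helper name is ours, not the repo's):

```python
import numpy as np

def cxcywh_norm_to_xyxy(boxes, img_w, img_h):
    """Convert normalized (cx, cy, w, h) boxes to pixel (x1, y1, x2, y2)."""
    boxes = np.asarray(boxes, dtype=float)
    cx, cy, bw, bh = boxes.T
    x1 = (cx - bw / 2) * img_w
    y1 = (cy - bh / 2) * img_h
    x2 = (cx + bw / 2) * img_w
    y2 = (cy + bh / 2) * img_h
    return np.stack([x1, y1, x2, y2], axis=1)
```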

⛷️ Run Grounded-Segment-Anything + Inpainting Demo

export CUDA_VISIBLE_DEVICES=0
python grounded_sam_inpainting_demo.py \
  --config GroundingDINO/groundingdino/config/GroundingDINO_SwinT_OGC.py \
  --grounded_checkpoint groundingdino_swint_ogc.pth \
  --sam_checkpoint sam_vit_h_4b8939.pth \
  --input_image assets/inpaint_demo.jpg \
  --output_dir "outputs" \
  --box_threshold 0.3 \
  --text_threshold 0.25 \
  --det_prompt "bench" \
  --inpaint_prompt "A sofa, high quality, detailed" \
  --device "cuda"
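Inpainting pipelines such as Stable Diffusion's generally take a single binary mask marking the region to repaint, so the per-object masks from SAM need to be merged first. A minimal sketch of that step (the `merge_masks` helper is illustrative, not the repo's function):

```python
import numpy as np

def merge_masks(masks):
    """Union per-object boolean masks into one uint8 mask (255 = region to repaint)."""
    merged = np.zeros(masks[0].shape, dtype=bool)
    for m in masks:
        merged |= np.asarray(m, dtype=bool)
    return merged.astype(np.uint8) * 255
```

The resulting array can then be wrapped as an image and passed to the inpainting model alongside --inpaint_prompt.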

🏌️ Run Grounded-Segment-Anything + Inpainting Gradio APP

python gradio_app.py

💘 Acknowledgements

Citation

If you find this project helpful for your research, please consider citing the following BibTeX entry.

@article{kirillov2023segany,
  title={Segment Anything}, 
  author={Kirillov, Alexander and Mintun, Eric and Ravi, Nikhila and Mao, Hanzi and Rolland, Chloe and Gustafson, Laura and Xiao, Tete and Whitehead, Spencer and Berg, Alexander C. and Lo, Wan-Yen and Doll{\'a}r, Piotr and Girshick, Ross},
  journal={arXiv:2304.02643},
  year={2023}
}

@inproceedings{ShilongLiu2023GroundingDM,
  title={Grounding DINO: Marrying DINO with Grounded Pre-Training for Open-Set Object Detection},
  author={Shilong Liu and Zhaoyang Zeng and Tianhe Ren and Feng Li and Hao Zhang and Jie Yang and Chunyuan Li and Jianwei Yang and Hang Su and Jun Zhu and Lei Zhang},
  year={2023}
}