Track-Anything is a flexible and interactive tool for video object tracking and segmentation, based on Segment Anything, XMem, and E2FGVI.
[CVPR2019] Fast Online Object Tracking and Segmentation: A Unifying Approach
An open-source project dedicated to tracking and segmenting any objects in videos, either automatically or interactively. The primary algorithms are the Segment Anything Model (SAM) for key-frame segmentation and Associating Objects with Transformers (AOT) for efficient tracking and propagation.
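Several projects in this list share the same two-stage interactive pattern: a promptable segmenter produces a mask on a key frame, and a tracker propagates that mask through the rest of the clip. A minimal sketch of that control flow is below; `segment_keyframe` and `propagate` are hypothetical stand-ins for the SAM and AOT/XMem calls, not these projects' actual APIs.

```python
import numpy as np

def segment_keyframe(frame: np.ndarray, click_xy: tuple) -> np.ndarray:
    """Stand-in for an interactive SAM call: a promptable segmenter would
    return a mask for the clicked object. This toy version just marks a
    small square around the click (hypothetical, for illustration only)."""
    mask = np.zeros(frame.shape[:2], dtype=bool)
    x, y = click_xy
    mask[max(0, y - 2):y + 3, max(0, x - 2):x + 3] = True
    return mask

def propagate(mask: np.ndarray, n_frames: int) -> list:
    """Stand-in for a propagation tracker such as AOT or XMem: carries the
    key-frame mask through the clip. A real tracker would update the mask
    per frame using appearance/memory features; this copy is a placeholder."""
    return [mask.copy() for _ in range(n_frames)]

# Interactive loop: click once on the key frame, then track the rest.
frames = [np.zeros((8, 8, 3), dtype=np.uint8) for _ in range(4)]
key_mask = segment_keyframe(frames[0], click_xy=(4, 4))
masks = propagate(key_mask, len(frames))
```

The design point is the decoupling itself: the segmenter only needs to be right on user-selected key frames, while the tracker handles temporal consistency, so either component can be swapped independently (e.g. SAM + XMem vs. SAM + AOT).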
[ECCV 2022] XMem: Long-Term Video Object Segmentation with an Atkinson-Shiffrin Memory Model
[CVPR'23] Universal Instance Perception as Object Discovery and Retrieval
[ICCV 2023] Tracking Anything with Decoupled Video Segmentation
[CVPR2024 Highlight] GLEE: General Object Foundation Model for Images and Videos at Scale
SAM-PT: Extending SAM to zero-shot video segmentation with point-based tracking.
[ECCV'22 Oral] Towards Grand Unification of Object Tracking
[CVPR 2024 Highlight] Putting the Object Back Into Video Object Segmentation
[NeurIPS 2021] Rethinking Space-Time Networks with Improved Memory Coverage for Efficient Video Object Segmentation
[CVPR 2021] Modular Interactive Video Object Segmentation: Interaction-to-Mask, Propagation and Difference-Aware Fusion. Semi-supervised VOS as well!
[NeurIPS'21] Unified tracking framework with a single appearance model. It supports Single Object Tracking (SOT), Video Object Segmentation (VOS), Multi-Object Tracking (MOT), Multi-Object Tracking and Segmentation (MOTS), Pose Tracking, Video Instance Segmentation (VIS), and class-agnostic MOT (e.g. TAO dataset).
The official implementation of CFBI(+): Collaborative Video Object Segmentation by (Multi-scale) Foreground-Background Integration.
See More, Know More: Unsupervised Video Object Segmentation with Co-Attention Siamese Networks (CVPR19)
[ICCV 2023] MOSE: A New Dataset for Video Object Segmentation in Complex Scenes
🔖 Curated list of video object segmentation (VOS) papers, datasets, and projects.
PyTorch re-implementation of DeepMask
Learning Unsupervised Video Object Segmentation through Visual Attention (CVPR19, PAMI20)
Zero-shot Video Object Segmentation via Attentive Graph Neural Networks (ICCV2019 Oral)