Commit f7f40f0

Merge pull request #1127 from roboflow/develop
`supervision-0.20.0` release
2 parents 55f93a8 + c5f92a5 commit f7f40f0

98 files changed, +5,706 −753 lines


.github/workflows/docs.yml renamed to .github/workflows/publish-dev-docs.yml

+7 −1

@@ -4,12 +4,18 @@ on:
   push:
     branches:
       - develop
+  workflow_dispatch:
+
+concurrency:
+  group: ${{ github.workflow }}-${{ github.event_name == 'push' && github.ref}}
+  cancel-in-progress: true

 permissions:
   contents: write
   pages: write
   pull-requests: write

+
 jobs:
   deploy:
     runs-on: ubuntu-latest
@@ -23,7 +29,7 @@ jobs:
         with:
           python-version: '3.10'
       - name: 📦 Install mkdocs-material
-        run: pip install "mkdocs-material[all]"
+        run: pip install "mkdocs-material"
       - name: 📦 Install mkdocstrings[python]
         run: pip install "mkdocstrings[python]"
       - name: 📦 Install mkdocs-material[imaging]
+55

@@ -0,0 +1,55 @@
+name: Supervision Release Documentation Workflow 📚
+on:
+  workflow_dispatch:
+
+concurrency:
+  group: ${{ github.workflow }}-${{ github.event_name == 'push' && github.ref}}
+  cancel-in-progress: true
+
+permissions:
+  contents: write
+  pages: write
+  pull-requests: write
+
+
+jobs:
+  doc-build-deploy:
+    runs-on: ubuntu-latest
+    strategy:
+      matrix:
+        python-version: ["3.10"]
+    steps:
+      - name: 🛎️ Checkout
+        uses: actions/checkout@v4
+        with:
+          fetch-depth: 0
+          ref: ${{ github.head_ref }}
+
+      - name: 🐍 Set up Python
+        uses: actions/setup-python@v5
+        with:
+          python-version: '3.10'
+      - name: 📦 Install mkdocs-material
+        run: pip install "mkdocs-material"
+      - name: 📦 Install mkdocstrings[python]
+        run: pip install "mkdocstrings[python]"
+      - name: 📦 Install mkdocs-material[imaging]
+        run: pip install "mkdocs-material[imaging]"
+      - name: 📦 Install mike
+        run: pip install "mike"
+      - name: 📦 Install mkdocs-git-revision-date-localized-plugin
+        run: pip install "mkdocs-git-revision-date-localized-plugin"
+      - name: 📦 Install JupyterLab
+        run: pip install jupyterlab
+      - name: 📦 Install mkdocs-jupyter
+        run: pip install mkdocs-jupyter
+      - name: 📦 Install mkdocs-git-committers-plugin-2
+        run: pip install mkdocs-git-committers-plugin-2
+      - name: ⚙️ Configure git for github-actions 👷
+        run: |
+          git config --global user.name "github-actions[bot]"
+          git config --global user.email "41898282+github-actions[bot]@users.noreply.github.com"
+      - name: 🚀 Deploy MkDoc-Material 📚
+        run: |
+          latest_tag=$(git describe --tags `git rev-list --tags --max-count=1`)
+          MKDOCS_GIT_COMMITTERS_APIKEY=${{ secrets.GITHUB_TOKEN }} mike deploy --push --update-aliases $latest_tag latest

.github/workflows/publish-test.yml

+8 −6

@@ -6,22 +6,24 @@ on:
       - '[0-9]+.[0-9]+[0-9]+.[0-9]+b[0-9]'
       - '[0-9]+.[0-9]+[0-9]+.[0-9]+rc[0-9]'

-  # Allows you to run this workflow manually from the Actions tab
   workflow_dispatch:

 jobs:
   build-n-publish:
     name: Build and publish to PyPI
     runs-on: ubuntu-latest
-
+    strategy:
+      matrix:
+        python-version: ["3.10"]
     steps:
-      - name: Checkout source
+      - name: 🛎️ Checkout
        uses: actions/checkout@v4
-
-      - name: 🐍 Set up Python 3.8 environment for build
+        with:
+          ref: ${{ github.head_ref }}
+      - name: 🐍 Set up Python ${{ matrix.python-version }}
        uses: actions/setup-python@v5
        with:
-          python-version: "3.8"
+          python-version: ${{ matrix.python-version }}

       - name: 🏗️ Build source and wheel distributions
         run: |

.github/workflows/publish.yml

+1 −2

@@ -4,15 +4,14 @@ on:
     tags:
       - '[0-9]+.[0-9]+[0-9]+.[0-9]'

-  # Allows you to run this workflow manually from the Actions tab
   workflow_dispatch:

 jobs:
   build:
     runs-on: ubuntu-latest
     strategy:
       matrix:
-        python-version: [3.8]
+        python-version: ["3.10"]
     steps:
       - name: 🛎️ Checkout
         uses: actions/checkout@v4

.pre-commit-config.yaml

+2 −3

@@ -7,7 +7,7 @@ ci:

 repos:
   - repo: https://github.com/pre-commit/pre-commit-hooks
-    rev: v4.5.0
+    rev: v4.6.0
    hooks:
      - id: end-of-file-fixer
      - id: trailing-whitespace
@@ -27,7 +27,6 @@ repos:
      - id: mixed-line-ending


-
  - repo: https://github.com/PyCQA/bandit
    rev: '1.7.8'
    hooks:
@@ -46,7 +45,7 @@ repos:


  - repo: https://github.com/astral-sh/ruff-pre-commit
-    rev: v0.3.2
+    rev: v0.4.1
    hooks:
      - id: ruff
        args: [--fix, --exit-non-zero-on-fix]

README.md

+20 −17

@@ -18,10 +18,10 @@
 [![downloads](https://img.shields.io/pypi/dm/supervision)](https://pypistats.org/packages/supervision)
 [![license](https://img.shields.io/pypi/l/supervision)](https://github.com/roboflow/supervision/blob/main/LICENSE.md)
 [![python-version](https://img.shields.io/pypi/pyversions/supervision)](https://badge.fury.io/py/supervision)
-[![Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/roboflow/supervision/blob/main/demo.ipynb)
-[![Gradio](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue)](https://huggingface.co/spaces/Roboflow/Annotators)
-[![Discord](https://img.shields.io/discord/1159501506232451173)](https://discord.gg/GbfgXGJ8Bk)
-[![Built with Material for MkDocs](https://img.shields.io/badge/Material_for_MkDocs-526CFE?logo=MaterialForMkDocs&logoColor=white)](https://squidfunk.github.io/mkdocs-material/)
+[![colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/roboflow/supervision/blob/main/demo.ipynb)
+[![gradio](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue)](https://huggingface.co/spaces/Roboflow/Annotators)
+[![discord](https://img.shields.io/discord/1159501506232451173)](https://discord.gg/GbfgXGJ8Bk)
+[![built-with-material-for-mkdocs](https://img.shields.io/badge/Material_for_MkDocs-526CFE?logo=MaterialForMkDocs&logoColor=white)](https://squidfunk.github.io/mkdocs-material/)
 </div>

 ## 👋 hello
@@ -39,7 +39,7 @@ Pip install the supervision package in a
 pip install supervision
 ```

-Read more about desktop, headless, and local installation in our [guide](https://roboflow.github.io/supervision/).
+Read more about conda, mamba, and installing from source in our [guide](https://roboflow.github.io/supervision/).

 ## 🔥 quickstart

@@ -71,16 +71,15 @@ len(detections)
 ```python
 import cv2
 import supervision as sv
-from inference.models.utils import get_roboflow_model
+from inference import get_model

 image = cv2.imread(...)
-model = get_roboflow_model(model_id="yolov8s-640", api_key=<ROBOFLOW API KEY>)
+model = get_model(model_id="yolov8s-640", api_key=<ROBOFLOW API KEY>)
 result = model.infer(image)[0]
 detections = sv.Detections.from_inference(result)

 len(detections)
-# 5
-
+# 5
 ```

 </details>
@@ -217,19 +216,23 @@ len(dataset)

 ## 🎬 tutorials

+Want to learn how to use Supervision? Explore our [how-to guides](https://supervision.roboflow.com/develop/how_to/detect_and_annotate/), [end-to-end examples](https://github.com/roboflow/supervision/tree/develop/examples), and [cookbooks](https://supervision.roboflow.com/develop/cookbooks/)!
+
+<br/>
+
 <p align="left">
-<a href="https://youtu.be/uWP6UjDeZvY" title="Speed Estimation & Vehicle Tracking | Computer Vision | Open Source"><img src="https://github.com/SkalskiP/SkalskiP/assets/26109316/61a444c8-b135-48ce-b979-2a5ab47c5a91" alt="Speed Estimation & Vehicle Tracking | Computer Vision | Open Source" width="300px" align="left" /></a>
-<a href="https://youtu.be/uWP6UjDeZvY" title="Speed Estimation & Vehicle Tracking | Computer Vision | Open Source"><strong>Speed Estimation & Vehicle Tracking | Computer Vision | Open Source</strong></a>
-<div><strong>Created: 11 Jan 2024</strong> | <strong>Updated: 11 Jan 2024</strong></div>
-<br/> Learn how to track and estimate the speed of vehicles using YOLO, ByteTrack, and Roboflow Inference. This comprehensive tutorial covers object detection, multi-object tracking, filtering detections, perspective transformation, speed estimation, visualization improvements, and more.</p>
+<a href="https://youtu.be/hAWpsIuem10" title="Dwell Time Analysis with Computer Vision | Real-Time Stream Processing"><img src="https://github.com/SkalskiP/SkalskiP/assets/26109316/a742823d-c158-407d-b30f-063a5d11b4e1" alt="Dwell Time Analysis with Computer Vision | Real-Time Stream Processing" width="300px" align="left" /></a>
+<a href="https://youtu.be/hAWpsIuem10" title="Dwell Time Analysis with Computer Vision | Real-Time Stream Processing"><strong>Dwell Time Analysis with Computer Vision | Real-Time Stream Processing</strong></a>
+<div><strong>Created: 5 Apr 2024</strong></div>
+<br/>Learn how to use computer vision to analyze wait times and optimize processes. This tutorial covers object detection, tracking, and calculating time spent in designated zones. Use these techniques to improve customer experience in retail, traffic management, or other scenarios.</p>

 <br/>

 <p align="left">
-<a href="https://youtu.be/4Q3ut7vqD5o" title="Traffic Analysis with YOLOv8 and ByteTrack - Vehicle Detection and Tracking"><img src="https://github.com/roboflow/supervision/assets/26109316/54afdf1c-218c-4451-8f12-627fb85f1682" alt="Traffic Analysis with YOLOv8 and ByteTrack - Vehicle Detection and Tracking" width="300px" align="left" /></a>
-<a href="https://youtu.be/4Q3ut7vqD5o" title="Traffic Analysis with YOLOv8 and ByteTrack - Vehicle Detection and Tracking"><strong>Traffic Analysis with YOLOv8 and ByteTrack - Vehicle Detection and Tracking</strong></a>
-<div><strong>Created: 6 Sep 2023</strong> | <strong>Updated: 6 Sep 2023</strong></div>
-<br/> In this video, we explore real-time traffic analysis using YOLOv8 and ByteTrack to detect and track vehicles on aerial images. Harnessing the power of Python and Supervision, we delve deep into assigning cars to specific entry zones and understanding their direction of movement. By visualizing their paths, we gain insights into traffic flow across bustling roundabouts... </p>
+<a href="https://youtu.be/uWP6UjDeZvY" title="Speed Estimation & Vehicle Tracking | Computer Vision | Open Source"><img src="https://github.com/SkalskiP/SkalskiP/assets/26109316/61a444c8-b135-48ce-b979-2a5ab47c5a91" alt="Speed Estimation & Vehicle Tracking | Computer Vision | Open Source" width="300px" align="left" /></a>
+<a href="https://youtu.be/uWP6UjDeZvY" title="Speed Estimation & Vehicle Tracking | Computer Vision | Open Source"><strong>Speed Estimation & Vehicle Tracking | Computer Vision | Open Source</strong></a>
+<div><strong>Created: 11 Jan 2024</strong></div>
+<br/>Learn how to track and estimate the speed of vehicles using YOLO, ByteTrack, and Roboflow Inference. This comprehensive tutorial covers object detection, multi-object tracking, filtering detections, perspective transformation, speed estimation, visualization improvements, and more.</p>

 ## 💜 built with supervision
docs/assets.md

+2 −2

@@ -19,13 +19,13 @@ as an extra within the Supervision package.
 ```

 <div class="md-typeset">
-<h2>download_assets</h2>
+<h2><a href="#supervision.assets.downloader.download_assets.download_assets">download_assets</a></h2>
 </div>

 :::supervision.assets.downloader.download_assets

 <div class="md-typeset">
-<h2>VideoAssets</h2>
+<h2><a href="#supervision.assets.downloader.download_assets.VideoAssets">VideoAssets</a></h2>
 </div>

 :::supervision.assets.list.VideoAssets
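A minimal sketch of the assets API this page documents, assuming the `assets` extra is installed (`pip install "supervision[assets]"`):

```python
from supervision.assets import VideoAssets, download_assets

# download a sample video into the working directory (if not already present)
# and get back its file name for use with video utilities
video_path = download_assets(VideoAssets.VEHICLES)
```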

docs/changelog.md

+65
@@ -1,3 +1,68 @@
+### 0.20.0 <small>April 24, 2024</small>
+
+- Added [#1128](https://github.com/roboflow/supervision/pull/1128): [`sv.KeyPoints`](/0.20.0/keypoint/core/#supervision.keypoint.core.KeyPoints) to provide initial support for pose estimation and broader keypoint detection models.
+
+- Added [#1128](https://github.com/roboflow/supervision/pull/1128): [`sv.EdgeAnnotator`](/0.20.0/keypoint/annotators/#supervision.keypoint.annotators.EdgeAnnotator) and [`sv.VertexAnnotator`](/0.20.0/keypoint/annotators/#supervision.keypoint.annotators.VertexAnnotator) to enable rendering of results from keypoint detection models.
+
+    ```python
+    import cv2
+    import supervision as sv
+    from ultralytics import YOLO
+
+    image = cv2.imread(<SOURCE_IMAGE_PATH>)
+    model = YOLO('yolov8l-pose')
+
+    result = model(image, verbose=False)[0]
+    keypoints = sv.KeyPoints.from_ultralytics(result)
+
+    edge_annotators = sv.EdgeAnnotator(color=sv.Color.GREEN, thickness=5)
+    annotated_image = edge_annotators.annotate(image.copy(), keypoints)
+    ```
+
+- Changed [#1037](https://github.com/roboflow/supervision/pull/1037): [`sv.LabelAnnotator`](/0.20.0/annotators/#supervision.annotators.core.LabelAnnotator) by adding an additional `corner_radius` argument that allows for rounding the corners of the bounding box.
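The entry above ships without a snippet; here is a minimal sketch, assuming the argument is accepted exactly as named in the changelog:

```python
import supervision as sv

image = ...
detections = sv.Detections(...)

# corner_radius as named in the entry above; the value is illustrative
label_annotator = sv.LabelAnnotator(corner_radius=10)
annotated_image = label_annotator.annotate(scene=image.copy(), detections=detections)
```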
+
+- Changed [#1109](https://github.com/roboflow/supervision/pull/1109): [`sv.PolygonZone`](/0.20.0/detection/tools/polygon_zone/#supervision.detection.tools.polygon_zone.PolygonZone) such that the `frame_resolution_wh` argument is no longer required to initialize `sv.PolygonZone`.
+
+    !!! failure "Deprecated"
+
+        The `frame_resolution_wh` parameter in `sv.PolygonZone` is deprecated and will be removed in `supervision-0.24.0`.
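A minimal sketch of the relaxed initialization described above, with illustrative coordinates:

```python
import numpy as np
import supervision as sv

# the zone is now defined by its polygon alone; frame_resolution_wh is not needed
polygon = np.array([[100, 100], [400, 100], [400, 400], [100, 400]])
zone = sv.PolygonZone(polygon=polygon)

detections = sv.Detections(...)
in_zone = zone.trigger(detections=detections)  # boolean array, one entry per detection
```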
+
+- Changed [#1084](https://github.com/roboflow/supervision/pull/1084): [`sv.get_polygon_center`](/0.20.0/utils/geometry/#supervision.geometry.core.utils.get_polygon_center) to calculate a more accurate polygon centroid.
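A quick illustration of the utility, assuming it is re-exported at the package top level like the other geometry helpers:

```python
import numpy as np
import supervision as sv

polygon = np.array([[0, 0], [10, 0], [10, 10], [0, 10]])
center = sv.get_polygon_center(polygon=polygon)  # a point near (5, 5) for this square
```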
+
+- Changed [#1069](https://github.com/roboflow/supervision/pull/1069): [`sv.Detections.from_transformers`](/0.20.0/detection/core/#supervision.detection.core.Detections.from_transformers) by adding support for Transformers segmentation models and extracting class name values.
+
+    ```python
+    import torch
+    import supervision as sv
+    from PIL import Image
+    from transformers import DetrImageProcessor, DetrForSegmentation
+
+    processor = DetrImageProcessor.from_pretrained("facebook/detr-resnet-50-panoptic")
+    model = DetrForSegmentation.from_pretrained("facebook/detr-resnet-50-panoptic")
+
+    image = Image.open(<SOURCE_IMAGE_PATH>)
+    inputs = processor(images=image, return_tensors="pt")
+
+    with torch.no_grad():
+        outputs = model(**inputs)
+
+    width, height = image.size
+    target_size = torch.tensor([[height, width]])
+    results = processor.post_process_segmentation(
+        outputs=outputs, target_sizes=target_size)[0]
+    detections = sv.Detections.from_transformers(results, id2label=model.config.id2label)
+
+    mask_annotator = sv.MaskAnnotator()
+    label_annotator = sv.LabelAnnotator(text_position=sv.Position.CENTER)
+
+    annotated_image = mask_annotator.annotate(
+        scene=image, detections=detections)
+    annotated_image = label_annotator.annotate(
+        scene=annotated_image, detections=detections)
+    ```
+
+- Fixed [#787](https://github.com/roboflow/supervision/pull/787): [`sv.ByteTrack.update_with_detections`](/0.20.0/trackers/#supervision.tracker.byte_tracker.core.ByteTrack.update_with_detections) which was removing segmentation masks while tracking. Now, `ByteTrack` can be used alongside segmentation models.
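A short sketch of the fixed behavior, assuming an Ultralytics segmentation checkpoint (`yolov8n-seg.pt`) as the mask source:

```python
import cv2
import supervision as sv
from ultralytics import YOLO

model = YOLO('yolov8n-seg.pt')
tracker = sv.ByteTrack()

frame = cv2.imread(<SOURCE_IMAGE_PATH>)
result = model(frame, verbose=False)[0]
detections = sv.Detections.from_ultralytics(result)

tracked = tracker.update_with_detections(detections)
assert tracked.mask is not None  # segmentation masks now survive tracking
```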
+
 ### 0.19.0 <small>March 15, 2024</small>

 - Added [#818](https://github.com/roboflow/supervision/pull/818): [`sv.CSVSink`](/0.19.0/detection/tools/save_detections/#supervision.detection.tools.csv_sink.CSVSink) allowing for the straightforward saving of image, video, or stream inference results in a `.csv` file.
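For context, a minimal sketch of the `sv.CSVSink` usage referenced in the 0.19.0 entry above; the file name and custom field are illustrative:

```python
import supervision as sv

with sv.CSVSink("detections.csv") as sink:
    for frame_index, frame in enumerate(
        sv.get_video_frames_generator(<SOURCE_VIDEO_PATH>)
    ):
        detections = sv.Detections(...)  # run your model on the frame here
        sink.append(detections, custom_data={"frame_index": frame_index})
```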

docs/deprecated.md

+2 −1

@@ -12,7 +12,8 @@ These features are phased out due to better alternatives or potential issues in
 - The method `Color.green()` is deprecated and will be removed in `supervision-0.22.0`. Use the constant `Color.GREEN` instead.
 - The method `Color.blue()` is deprecated and will be removed in `supervision-0.22.0`. Use the constant `Color.BLUE` instead.
 - The method [`ColorPalette.default()`](draw/color.md/#supervision.draw.color.ColorPalette.default) is deprecated and will be removed in `supervision-0.22.0`. Use the constant [`ColorPalette.DEFAULT`](draw/color.md/#supervision.draw.color.ColorPalette.DEFAULT) instead.
-- `BoxAnnotator` is deprecated and will be removed in `supervision-0.22.0`. Use [`BoundingBoxAnnotator`](annotators.md/#supervision.annotators.core.BoundingBoxAnnotator) and [`LabelAnnotator`](annotators.md/#supervision.annotators.core.LabelAnnotator) instead.
+- `BoxAnnotator` is deprecated and will be removed in `supervision-0.22.0`. Use [`BoundingBoxAnnotator`](detection/annotators.md/#supervision.annotators.core.BoundingBoxAnnotator) and [`LabelAnnotator`](detection/annotators.md/#supervision.annotators.core.LabelAnnotator) instead.
 - The method [`FPSMonitor.__call__`](utils/video.md/#supervision.utils.video.FPSMonitor.__call__) is deprecated and will be removed in `supervision-0.22.0`. Use the attribute [`FPSMonitor.fps`](utils/video.md/#supervision.utils.video.FPSMonitor.fps) instead.
 - The `track_buffer`, `track_thresh`, and `match_thresh` parameters in [`ByteTrack`](trackers.md/#supervision.tracker.byte_tracker.core.ByteTrack) are deprecated and will be removed in `supervision-0.23.0`. Use `lost_track_buffer`, `track_activation_threshold`, and `minimum_matching_threshold` instead.
 - The `triggering_position` parameter in [`sv.PolygonZone`](detection/tools/polygon_zone.md/#supervision.detection.tools.polygon_zone.PolygonZone) is deprecated and will be removed in `supervision-0.23.0`. Use `triggering_anchors` instead.
+- The `frame_resolution_wh` parameter in [`sv.PolygonZone`](detection/tools/polygon_zone.md/#supervision.detection.tools.polygon_zone.PolygonZone) is deprecated and will be removed in `supervision-0.24.0`.

docs/annotators.md renamed to docs/detection/annotators.md

+10 −3

@@ -260,15 +260,22 @@ status: new
 === "Label"

     ```python
-        import supervision as sv
+    import supervision as sv

     image = ...
     detections = sv.Detections(...)

+    labels = [
+        f"{class_name} {confidence:.2f}"
+        for class_name, confidence
+        in zip(detections['class_name'], detections.confidence)
+    ]
+
     label_annotator = sv.LabelAnnotator(text_position=sv.Position.CENTER)
     annotated_frame = label_annotator.annotate(
         scene=image.copy(),
-        detections=detections
+        detections=detections,
+        labels=labels
     )
     ```

@@ -281,7 +288,7 @@ status: new
 === "Crop"

     ```python
-        import supervision as sv
+    import supervision as sv

     image = ...
     detections = sv.Detections(...)

docs/detection/metrics.md

+17

@@ -0,0 +1,17 @@
+---
+comments: true
+---
+
+# Metrics
+
+<div class="md-typeset">
+<h2><a href="#supervision.metrics.detection.ConfusionMatrix">ConfusionMatrix</a></h2>
+</div>
+
+:::supervision.metrics.detection.ConfusionMatrix
+
+<div class="md-typeset">
+<h2><a href="#supervision.metrics.detection.MeanAveragePrecision">MeanAveragePrecision</a></h2>
+</div>
+
+:::supervision.metrics.detection.MeanAveragePrecision
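A hedged sketch of how these two metrics are typically driven, assuming a YOLO-format dataset on disk and any detection model wrapped in a callback:

```python
import supervision as sv
from ultralytics import YOLO

model = YOLO('yolov8s.pt')

# load ground-truth annotations to benchmark against
dataset = sv.DetectionDataset.from_yolo(
    images_directory_path=<IMAGES_DIRECTORY_PATH>,
    annotations_directory_path=<ANNOTATIONS_DIRECTORY_PATH>,
    data_yaml_path=<DATA_YAML_PATH>,
)

def callback(image):
    result = model(image, verbose=False)[0]
    return sv.Detections.from_ultralytics(result)

confusion_matrix = sv.ConfusionMatrix.benchmark(dataset=dataset, callback=callback)
mean_average_precision = sv.MeanAveragePrecision.benchmark(dataset=dataset, callback=callback)
print(mean_average_precision.map50_95)
```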
