Changes from all commits
49 commits
fa16539
Miscellaneous Fixes (#21289)
NickM-27 Dec 15, 2025
39af856
feat: add train classification download weights file endpoint (#21294)
ZhaiSoul Dec 15, 2025
f543d0a
Fix layout shift with camera filter (#21298)
issy Dec 15, 2025
818cccb
Settings page layout shift - follow up (#21300)
issy Dec 15, 2025
e7d0477
Miscellaneous Fixes (0.17 beta) (#21301)
hawkeye217 Dec 16, 2025
c292cd2
Align node versions used in GHA PR workflow (#21302)
issy Dec 17, 2025
78eace2
Miscellaneous Fixes (0.17 Beta) (#21320)
NickM-27 Dec 17, 2025
3edfd90
consider anonymous user authenticated (#21335)
blakeblackshear Dec 17, 2025
13957fe
classification i18n fix (#21331)
ZhaiSoul Dec 17, 2025
ae009b9
Miscellaneous Fixes (0.17 beta) (#21336)
hawkeye217 Dec 17, 2025
074b060
fix: temp directory is only created when there are review_items. (#21…
ZhaiSoul Dec 18, 2025
6a0e31d
Add object classification attributes to Tracked Object Details (#21348)
hawkeye217 Dec 18, 2025
e636449
Miscellaneous fixes (0.17 beta) (#21350)
NickM-27 Dec 18, 2025
60052e5
Miscellaneous Fixes (0.17 beta) (#21355)
hawkeye217 Dec 20, 2025
8a4d5f3
fix: fix system enrichments view classification i18n (#21366)
ZhaiSoul Dec 20, 2025
54f4af3
Miscellaneous fixes (#21373)
NickM-27 Dec 21, 2025
f74df04
fix: fix password setting overlay time i18n (#21387)
ZhaiSoul Dec 22, 2025
f862ef5
Add Scrypted - Frigate bridge plugin information (#21365)
apocaliss92 Dec 22, 2025
a4ece9d
Miscellaneous Fixes (0.17 beta) (#21396)
hawkeye217 Dec 24, 2025
bb3991f
Translated using Weblate (Turkish)
weblate Dec 24, 2025
8fb413c
Translated using Weblate (Latvian)
weblate Dec 24, 2025
d7e10df
Translated using Weblate (German)
weblate Dec 24, 2025
50a5e40
Translated using Weblate (Danish)
weblate Dec 24, 2025
1be7c56
Update translation files
weblate Dec 24, 2025
4ae3c97
Translated using Weblate (Estonian)
weblate Dec 24, 2025
3242968
Translated using Weblate (Russian)
weblate Dec 24, 2025
f94aa0f
Translated using Weblate (Romanian)
weblate Dec 24, 2025
2522a10
Translated using Weblate (Ukrainian)
weblate Dec 24, 2025
29bcb7f
Translated using Weblate (Japanese)
weblate Dec 24, 2025
525cc5b
Translated using Weblate (Catalan)
weblate Dec 24, 2025
5978020
Translated using Weblate (Czech)
weblate Dec 24, 2025
a109461
Translated using Weblate (Croatian)
weblate Dec 24, 2025
aa9dbbb
Translated using Weblate (Hebrew)
weblate Dec 24, 2025
bd2382d
Added translation using Weblate (Malayalam)
weblate Dec 24, 2025
d2aa2a0
Translated using Weblate (Polish)
weblate Dec 24, 2025
bfc2859
Translated using Weblate (Italian)
weblate Dec 24, 2025
5d960aa
Update translation files
weblate Dec 24, 2025
225c5f0
Translated using Weblate (Dutch)
weblate Dec 24, 2025
57d344a
Translated using Weblate (French)
weblate Dec 24, 2025
f34e220
Translated using Weblate (Swedish)
weblate Dec 24, 2025
edeb47a
Translated using Weblate (Persian)
weblate Dec 24, 2025
b54cb21
Update translation files
weblate Dec 24, 2025
a2e98dc
Update translation files
weblate Dec 24, 2025
ca0e53f
Update translation files
weblate Dec 24, 2025
e20b324
Update translation files
weblate Dec 24, 2025
3c5eb1a
Miscellaneous fixes (0.17 beta) (#21431)
NickM-27 Dec 26, 2025
3655b92
fix: additional proxy headers for complete support of oauth2-proxy (#…
hofq Dec 27, 2025
e2a1208
Miscellaneous fixes (0.17 Beta) (#21443)
NickM-27 Dec 29, 2025
865bbbd
feat(genai): Support multiple GenAI providers and per-camera configur…
wozz Dec 29, 2025
8 changes: 4 additions & 4 deletions .github/workflows/pull_request.yml
@@ -19,9 +19,9 @@ jobs:
- uses: actions/checkout@v6
with:
persist-credentials: false
-      - uses: actions/setup-node@master
+      - uses: actions/setup-node@v6
with:
-          node-version: 16.x
+          node-version: 20.x
- run: npm install
working-directory: ./web
- name: Lint
@@ -35,7 +35,7 @@ jobs:
- uses: actions/checkout@v6
with:
persist-credentials: false
-      - uses: actions/setup-node@master
+      - uses: actions/setup-node@v6
with:
node-version: 20.x
- run: npm install
@@ -78,7 +78,7 @@ jobs:
uses: actions/checkout@v6
with:
persist-credentials: false
-      - uses: actions/setup-node@master
+      - uses: actions/setup-node@v6
with:
node-version: 20.x
- name: Install devcontainer cli
9 changes: 5 additions & 4 deletions README_CN.md
@@ -4,14 +4,14 @@

# Frigate NVR™ - 一个具有实时目标检测的本地 NVR

-[English](https://github.com/blakeblackshear/frigate) | \[简体中文\]
-
-[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
-
<a href="https://hosted.weblate.org/engage/frigate-nvr/-/zh_Hans/">
<img src="https://hosted.weblate.org/widget/frigate-nvr/-/zh_Hans/svg-badge.svg" alt="翻译状态" />
</a>

+[English](https://github.com/blakeblackshear/frigate) | \[简体中文\]
+
+[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
+
一个完整的本地网络视频录像机(NVR),专为[Home Assistant](https://www.home-assistant.io)设计,具备 AI 目标/物体检测功能。使用 OpenCV 和 TensorFlow 在本地为 IP 摄像头执行实时物体检测。

强烈推荐使用 GPU 或者 AI 加速器(例如[Google Coral 加速器](https://coral.ai/products/) 或者 [Hailo](https://hailo.ai/)等)。它们的运行效率远远高于现在的顶级 CPU,并且功耗也极低。
@@ -38,6 +38,7 @@
## 协议

本项目采用 **MIT 许可证**授权。

**代码部分**:本代码库中的源代码、配置文件和文档均遵循 [MIT 许可证](LICENSE)。您可以自由使用、修改和分发这些代码,但必须保留原始版权声明。

**商标部分**:“Frigate”名称、“Frigate NVR”品牌以及 Frigate 的 Logo 为 **Frigate LLC 的商标**,**不在** MIT 许可证覆盖范围内。
12 changes: 11 additions & 1 deletion docker/main/Dockerfile
@@ -237,8 +237,18 @@ ENV PYTHONWARNINGS="ignore:::numpy.core.getlimits"
# Set HailoRT to disable logging
ENV HAILORT_LOGGER_PATH=NONE

-# TensorFlow error only
+# TensorFlow C++ logging suppression (must be set before import)
+# TF_CPP_MIN_LOG_LEVEL: 0=all, 1=INFO+, 2=WARNING+, 3=ERROR+ (we use 3 for errors only)
ENV TF_CPP_MIN_LOG_LEVEL=3
# Suppress verbose logging from TensorFlow C++ code
ENV TF_CPP_MIN_VLOG_LEVEL=3
# Disable oneDNN optimization messages ("optimized with oneDNN...")
ENV TF_ENABLE_ONEDNN_OPTS=0
# Suppress AutoGraph verbosity during conversion
ENV AUTOGRAPH_VERBOSITY=0
# Google Logging (GLOG) suppression for TensorFlow components
ENV GLOG_minloglevel=3
ENV GLOG_logtostderr=0

ENV PATH="/usr/local/go2rtc/bin:/usr/local/tempio/bin:/usr/local/nginx/sbin:${PATH}"
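The ordering caveat in the comments above ("must be set before import") can be illustrated with a short Python sketch: TensorFlow reads these variables once when it is first imported, so setting them afterwards has no effect, which is why the Dockerfile exports them process-wide via `ENV`.

```python
import os

# These must be in the environment before the first
# `import tensorflow` anywhere in the process; TF reads them
# once at import time. Exporting them via ENV in the Dockerfile
# guarantees that ordering for every child process.
os.environ["TF_CPP_MIN_LOG_LEVEL"] = "3"  # errors only
os.environ["GLOG_minloglevel"] = "3"

# import tensorflow as tf  # logging level is already fixed here
print(os.environ["TF_CPP_MIN_LOG_LEVEL"])
```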

8 changes: 5 additions & 3 deletions docker/main/rootfs/etc/s6-overlay/s6-rc.d/go2rtc/run
@@ -55,7 +55,7 @@ function setup_homekit_config() {

if [[ ! -f "${config_path}" ]]; then
echo "[INFO] Creating empty HomeKit config file..."
-    echo '{}' > "${config_path}"
+    echo 'homekit: {}' > "${config_path}"
fi

# Convert YAML to JSON for jq processing
@@ -70,12 +70,14 @@ function setup_homekit_config() {
jq '
# Keep only the homekit section if it exists, otherwise empty object
if has("homekit") then {homekit: .homekit} else {homekit: {}} end
-  ' "${temp_json}" > "${cleaned_json}" 2>/dev/null || echo '{"homekit": {}}' > "${cleaned_json}"
+  ' "${temp_json}" > "${cleaned_json}" 2>/dev/null || {
+    echo '{"homekit": {}}' > "${cleaned_json}"
+  }

# Convert back to YAML and write to the config file
yq eval -P "${cleaned_json}" > "${config_path}" 2>/dev/null || {
echo "[WARNING] Failed to convert cleaned config to YAML, creating minimal config"
-    echo '{"homekit": {}}' > "${config_path}"
+    echo 'homekit: {}' > "${config_path}"
}

# Clean up temp files
@@ -18,6 +18,10 @@ proxy_set_header X-Forwarded-User $http_x_forwarded_user;
proxy_set_header X-Forwarded-Groups $http_x_forwarded_groups;
proxy_set_header X-Forwarded-Email $http_x_forwarded_email;
proxy_set_header X-Forwarded-Preferred-Username $http_x_forwarded_preferred_username;
+proxy_set_header X-Auth-Request-User $http_x_auth_request_user;
+proxy_set_header X-Auth-Request-Groups $http_x_auth_request_groups;
+proxy_set_header X-Auth-Request-Email $http_x_auth_request_email;
+proxy_set_header X-Auth-Request-Preferred-Username $http_x_auth_request_preferred_username;
proxy_set_header X-authentik-username $http_x_authentik_username;
proxy_set_header X-authentik-groups $http_x_authentik_groups;
proxy_set_header X-authentik-email $http_x_authentik_email;
@@ -3,14 +3,16 @@ id: object_classification
title: Object Classification
---

-Object classification allows you to train a custom MobileNetV2 classification model to run on tracked objects (persons, cars, animals, etc.) to identify a finer category or attribute for that object.
+Object classification allows you to train a custom MobileNetV2 classification model to run on tracked objects (persons, cars, animals, etc.) to identify a finer category or attribute for that object. Classification results are visible in the Tracked Object Details pane in Explore, through the `frigate/tracked_object_details` MQTT topic, in Home Assistant sensors via the official Frigate integration, or through the event endpoints in the HTTP API.
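As one way to consume these results, the sketch below parses a hypothetical message from the `frigate/tracked_object_details` topic. The payload fields shown are assumptions for illustration, not a documented schema; inspect real messages on your broker for the actual structure.

```python
import json

# Hypothetical payload for frigate/tracked_object_details -- the
# field names below are illustrative assumptions, not a documented
# schema.
raw = json.dumps({
    "camera": "driveway",
    "label": "cat",
    "sub_label": "Leo",
    "attributes": {"our_cats": "Leo"},
})

details = json.loads(raw)
if details.get("sub_label"):
    # e.g. "driveway: cat -> Leo"
    print(f"{details['camera']}: {details['label']} -> {details['sub_label']}")
```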

## Minimum System Requirements

Object classification models are lightweight and run very fast on CPU. Inference should be usable on virtually any machine that can run Frigate.

Training the model briefly uses a high amount of system resources, typically for 1–3 minutes per training run. On lower-power devices, training may take longer.

A CPU with AVX instructions is required for training and inference.

## Classes

Classes are the categories your model will learn to distinguish between. Each class represents a distinct visual category that the model will predict.
@@ -31,9 +33,15 @@ For object classification:
- Example: `cat` → `Leo`, `Charlie`, `None`.

- **Attribute**:
-  - Added as metadata to the object (visible in /events): `<model_name>: <predicted_value>`.
+  - Added as metadata to the object, visible in the Tracked Object Details pane in Explore, `frigate/events` MQTT messages, and the HTTP API response as `<model_name>: <predicted_value>`.
- Ideal when multiple attributes can coexist independently.
-  - Example: Detecting if a `person` in a construction yard is wearing a helmet or not.
+  - Example: Detecting if a `person` in a construction yard is wearing a helmet or not, and if they are wearing a yellow vest or not.

:::note

A tracked object can only have a single sub label. If you are using Triggers or Face Recognition and you configure an object classification model for `person` using the sub label type, your sub label may not be assigned correctly as it depends on which enrichment completes its analysis first. Consider using the `attribute` type instead.

:::

## Assignment Requirements

@@ -73,24 +81,50 @@ classification:
classification_type: sub_label # or: attribute
```

An optional config, `save_attempts`, can be set as a key under the model name. This defines the number of classification attempts to save in the Recent Classifications tab. For object classification models, the default is 200.
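As a sketch of where this key sits, assuming the model-name nesting used in the example above (the model name `my_model` is a placeholder, and the exact surrounding keys should be checked against the full configuration reference):

```yaml
classification:
  # ... model name and other keys as in the example above ...
  my_model:
    classification_type: sub_label
    save_attempts: 300 # keep more history than the default of 200
```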

## Training the model

Creating and training the model is done within the Frigate UI using the `Classification` page. The process consists of two steps:

### Step 1: Name and Define

-Enter a name for your model, select the object label to classify (e.g., `person`, `dog`, `car`), choose the classification type (sub label or attribute), and define your classes. Include a `none` class for objects that don't fit any specific category.
+Enter a name for your model, select the object label to classify (e.g., `person`, `dog`, `car`), choose the classification type (sub label or attribute), and define your classes. Frigate will automatically include a `none` class for objects that don't fit any specific category.

For example: To classify your two cats, create a model named "Our Cats" and create two classes, "Charlie" and "Leo". A third class, "none", will be created automatically for other neighborhood cats that are not your own.

### Step 2: Assign Training Examples

The system will automatically generate example images from detected objects matching your selected label. You'll be guided through each class one at a time to select which images represent that class. Any images not assigned to a specific class will automatically be assigned to `none` when you complete the last class. Once all images are processed, training will begin automatically.

When choosing which objects to classify, start with a small number of visually distinct classes and ensure your training samples match camera viewpoints and distances typical for those objects.

If examples for some of your classes do not appear in the grid, you can continue configuring the model without them. New images will begin to appear in the Recent Classifications view. When your missing classes are seen, classify them from this view and retrain your model.

### Improving the Model

- **Problem framing**: Keep classes visually distinct and relevant to the chosen object types.
- **Data collection**: Use the model’s Recent Classifications tab to gather balanced examples across times of day, weather, and distances.
- **Preprocessing**: Ensure examples reflect object crops similar to Frigate’s boxes; keep the subject centered.
- **Labels**: Keep label names short and consistent; include a `none` class if you plan to ignore uncertain predictions for sub labels.
- **Threshold**: Tune `threshold` per model to reduce false assignments. Start at `0.8` and adjust based on validation.
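The threshold guidance above can be sketched as follows. This is illustrative only, not Frigate's actual assignment logic: predictions below the configured threshold, or predictions of `none`, result in no sub label being assigned.

```python
THRESHOLD = 0.8  # starting point suggested above; tune per model


def assign_sub_label(probabilities):
    """Return the winning class, or None for uncertain/none results."""
    best = max(probabilities, key=probabilities.get)
    if best == "none" or probabilities[best] < THRESHOLD:
        return None  # uncertain or explicitly "none": no sub label
    return best


print(assign_sub_label({"Leo": 0.92, "Charlie": 0.05, "none": 0.03}))  # Leo
print(assign_sub_label({"Leo": 0.55, "Charlie": 0.40, "none": 0.05}))  # None
```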

## Debugging Classification Models

To troubleshoot issues with object classification models, enable debug logging to see detailed information about classification attempts, scores, and consensus calculations.

Enable debug logs for classification models by adding `frigate.data_processing.real_time.custom_classification: debug` to your `logger` configuration. These logs are verbose, so only keep this enabled when necessary. Restart Frigate after this change.

```yaml
logger:
default: info
logs:
frigate.data_processing.real_time.custom_classification: debug
```

The debug logs will show:

- Classification probabilities for each attempt
- Whether scores meet the threshold requirement
- Consensus calculations and when assignments are made
- Object classification history and weighted scores
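To help interpret the consensus lines in the debug output, the sketch below shows one way per-attempt scores could be combined into a weighted verdict. This is illustrative only; the real logic lives in `frigate.data_processing.real_time.custom_classification` and may differ.

```python
from collections import defaultdict


def weighted_consensus(attempts):
    """Combine (class, score) attempts into one weighted verdict.

    Illustrative only -- not Frigate's actual consensus algorithm.
    """
    totals = defaultdict(float)
    for predicted_class, score in attempts:
        totals[predicted_class] += score  # weight each vote by its score
    return max(totals, key=totals.get)


history = [("Leo", 0.9), ("Charlie", 0.6), ("Leo", 0.8)]
print(weighted_consensus(history))  # Leo (1.7 vs 0.6)
```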
@@ -3,14 +3,16 @@ id: state_classification
title: State Classification
---

-State classification allows you to train a custom MobileNetV2 classification model on a fixed region of your camera frame(s) to determine a current state. The model can be configured to run on a schedule and/or when motion is detected in that region.
+State classification allows you to train a custom MobileNetV2 classification model on a fixed region of your camera frame(s) to determine a current state. The model can be configured to run on a schedule and/or when motion is detected in that region. Classification results are available through the `frigate/<camera_name>/classification/<model_name>` MQTT topic and in Home Assistant sensors via the official Frigate integration.
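For example, a manually configured Home Assistant MQTT sensor could read this topic directly. The camera name `driveway` and model name `gate_state` below are illustrative assumptions; substitute your own, and verify the payload format against your broker.

```yaml
mqtt:
  sensor:
    - name: "Driveway gate state"
      state_topic: "frigate/driveway/classification/gate_state"
```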

## Minimum System Requirements

State classification models are lightweight and run very fast on CPU. Inference should be usable on virtually any machine that can run Frigate.

Training the model briefly uses a high amount of system resources, typically for 1–3 minutes per training run. On lower-power devices, training may take longer.

A CPU with AVX instructions is required for training and inference.

## Classes

Classes are the different states an area on your camera can be in. Each class represents a distinct visual state that the model will learn to recognize.
@@ -46,6 +48,8 @@ classification:
crop: [0, 180, 220, 400]
```

An optional config, `save_attempts`, can be set as a key under the model name. This defines the number of classification attempts to save in the Recent Classifications tab. For state classification models, the default is 100.

## Training the model

Creating and training the model is done within the Frigate UI using the `Classification` page. The process consists of three steps:
@@ -70,3 +74,34 @@ Once some images are assigned, training will begin automatically.
- **Data collection**: Use the model's Recent Classifications tab to gather balanced examples across times of day and weather.
- **When to train**: Focus on cases where the model is entirely incorrect or flips between states when it should not. There's no need to train additional images when the model is already working consistently.
- **Selecting training images**: Images scoring below 100% due to new conditions (e.g., first snow of the year, seasonal changes) or variations (e.g., objects temporarily in view, insects at night) are good candidates for training, as they represent scenarios different from the default state. Training these lower-scoring images that differ from existing training data helps prevent overfitting. Avoid training large quantities of images that look very similar, especially if they already score 100% as this can lead to overfitting.

## Debugging Classification Models

To troubleshoot issues with state classification models, enable debug logging to see detailed information about classification attempts, scores, and state verification.

Enable debug logs for classification models by adding `frigate.data_processing.real_time.custom_classification: debug` to your `logger` configuration. These logs are verbose, so only keep this enabled when necessary. Restart Frigate after this change.

```yaml
logger:
default: info
logs:
frigate.data_processing.real_time.custom_classification: debug
```

The debug logs will show:

- Classification probabilities for each attempt
- Whether scores meet the threshold requirement
- State verification progress (consecutive detections needed)
- When state changes are published

### Recent Classifications

For state classification, images are only added to recent classifications under specific circumstances:

- **First detection**: The first classification attempt for a camera is always saved
- **State changes**: Images are saved when the detected state differs from the current verified state
- **Pending verification**: Images are saved when there's a pending state change being verified (requires 3 consecutive identical states)
- **Low confidence**: Images with scores below 100% are saved even if the state matches the current state (useful for training)

Images are **not** saved when the state is stable (detected state matches current state) **and** the score is 100%. This prevents unnecessary storage of redundant high-confidence classifications.
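The save conditions above can be restated as a single decision function. This is an illustrative restatement of the rules as written, not Frigate's actual implementation:

```python
from typing import Optional


def should_save_attempt(
    detected_state: str,
    score: float,
    current_state: Optional[str],
    pending_state: Optional[str],
) -> bool:
    """Illustrative restatement of the save rules above."""
    if current_state is None:
        return True   # first detection for the camera: always saved
    if detected_state != current_state:
        return True   # differs from the verified state: saved
    if pending_state is not None:
        return True   # a pending change is being verified: saved
    if score < 1.0:
        return True   # low-confidence match: saved for training
    return False      # stable state at 100% confidence: not saved


print(should_save_attempt("open", 1.0, "open", None))   # False
print(should_save_attempt("open", 0.93, "open", None))  # True
```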