Remove all AGPL licensed YOLO references from Frigate (#10716)

* Remove yolov8 support from Frigate

* Remove automatic build

* Formatting and remove yolov5

* Formatting

---------

Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
Blake Blackshear committed by GitHub
parent 13b4e5ff41
commit b65aa640c9
18 changed files (changed-line counts in parentheses):
1. .github/workflows/ci.yml (18)
2. docker/rockchip/Dockerfile (9)
3. docs/docs/configuration/object_detectors.md (105)
4. docs/docs/configuration/reference.md (2)
5. docs/docs/frigate/hardware.md (11)
6. frigate/detectors/detector_config.py (2)
7. frigate/detectors/plugins/openvino.py (41)
8. frigate/detectors/plugins/rknn.py (94)
9. frigate/events/audio.py (6)
10. frigate/http.py (30)
11. frigate/ptz/autotrack.py (18)
12. frigate/ptz/onvif.py (24)
13. frigate/test/test_ffmpeg_presets.py (54)
14. migrations/003_create_recordings_table.py (1)
15. migrations/011_update_indexes.py (1)
16. migrations/017_update_indexes.py (1)
17. migrations/019_create_regions_table.py (1)
18. migrations/020_update_index_recordings.py (1)

@@ -79,15 +79,15 @@ jobs:
rpi.tags=${{ steps.setup.outputs.image-name }}-rpi
*.cache-from=type=registry,ref=${{ steps.setup.outputs.cache-name }}-arm64
*.cache-to=type=registry,ref=${{ steps.setup.outputs.cache-name }}-arm64,mode=max
- name: Build and push RockChip build
uses: docker/bake-action@v3
with:
push: true
targets: rk
files: docker/rockchip/rk.hcl
set: |
rk.tags=${{ steps.setup.outputs.image-name }}-rk
*.cache-from=type=gha
#- name: Build and push RockChip build
# uses: docker/bake-action@v3
# with:
# push: true
# targets: rk
# files: docker/rockchip/rk.hcl
# set: |
# rk.tags=${{ steps.setup.outputs.image-name }}-rk
# *.cache-from=type=gha
jetson_jp4_build:
runs-on: ubuntu-latest
name: Jetson Jetpack 4

@@ -12,8 +12,8 @@ RUN pip3 wheel --wheel-dir=/rk-wheels -c /requirements-wheels.txt -r /requiremen
FROM deps AS rk-deps
ARG TARGETARCH
RUN --mount=type=bind,from=rk-wheels,source=/rk-wheels,target=/deps/rk-wheels \
pip3 install -U /deps/rk-wheels/*.whl
RUN --mount=type=bind,from=rk-wheels,source=/rk-wheels,target=/deps/rk-wheels \
pip3 install -U /deps/rk-wheels/*.whl
WORKDIR /opt/frigate/
COPY --from=rootfs / /
@@ -21,10 +21,7 @@ COPY --from=rootfs / /
ADD https://github.com/MarcA711/rknpu2/releases/download/v1.5.2/librknnrt_rk356x.so /usr/lib/
ADD https://github.com/MarcA711/rknpu2/releases/download/v1.5.2/librknnrt_rk3588.so /usr/lib/
ADD https://github.com/MarcA711/rknn-models/releases/download/v1.5.2-rk3562/yolov8n-320x320-rk3562.rknn /models/rknn/
ADD https://github.com/MarcA711/rknn-models/releases/download/v1.5.2-rk3566/yolov8n-320x320-rk3566.rknn /models/rknn/
ADD https://github.com/MarcA711/rknn-models/releases/download/v1.5.2-rk3568/yolov8n-320x320-rk3568.rknn /models/rknn/
ADD https://github.com/MarcA711/rknn-models/releases/download/v1.5.2-rk3588/yolov8n-320x320-rk3588.rknn /models/rknn/
# TODO: models removed; support for other models may need to be added back in
RUN rm -rf /usr/lib/btbn-ffmpeg/bin/ffmpeg
RUN rm -rf /usr/lib/btbn-ffmpeg/bin/ffprobe

@@ -13,7 +13,7 @@ The CPU detector type runs a TensorFlow Lite model utilizing the CPU without har
:::tip
If you do not have GPU or Edge TPU hardware, using the [OpenVINO Detector](#openvino-detector) is often more efficient than using the CPU detector.
If you do not have GPU or Edge TPU hardware, using the [OpenVINO Detector](#openvino-detector) is often more efficient than using the CPU detector.
:::
@@ -131,7 +131,7 @@ model:
labelmap_path: /openvino-model/coco_91cl_bkgr.txt
```
This detector also supports some YOLO variants: YOLOX, YOLOv5, and YOLOv8 specifically. Other YOLO variants are not officially supported/tested. Frigate does not come with any yolo models preloaded, so you will need to supply your own models. This detector has been verified to work with the [yolox_tiny](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/yolox-tiny) model from Intel's Open Model Zoo. You can follow [these instructions](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/yolox-tiny#download-a-model-and-convert-it-into-openvino-ir-format) to retrieve the OpenVINO-compatible `yolox_tiny` model. Make sure that the model input dimensions match the `width` and `height` parameters, and `model_type` is set accordingly. See [Full Configuration Reference](/configuration/reference.md) for a list of possible `model_type` options. Below is an example of how `yolox_tiny` can be used in Frigate:
This detector also supports YOLOX. Other YOLO variants are not officially supported/tested. Frigate does not come with any yolo models preloaded, so you will need to supply your own models. This detector has been verified to work with the [yolox_tiny](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/yolox-tiny) model from Intel's Open Model Zoo. You can follow [these instructions](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/yolox-tiny#download-a-model-and-convert-it-into-openvino-ir-format) to retrieve the OpenVINO-compatible `yolox_tiny` model. Make sure that the model input dimensions match the `width` and `height` parameters, and `model_type` is set accordingly. See [Full Configuration Reference](/configuration/reference.md) for a list of possible `model_type` options. Below is an example of how `yolox_tiny` can be used in Frigate:
```yaml
detectors:
@@ -302,104 +302,3 @@ Replace `<your_codeproject_ai_server_ip>` and `<port>` with the IP address and p
To verify that the integration is working correctly, start Frigate and observe the logs for any error messages related to CodeProject.AI. Additionally, you can check the Frigate web interface to see if the objects detected by CodeProject.AI are being displayed and tracked properly.
# Community Supported Detectors
## Rockchip RKNN-Toolkit-Lite2
This detector is only available if one of the following Rockchip SoCs is used:
- RK3588/RK3588S
- RK3568
- RK3566
- RK3562
These SoCs come with an NPU that greatly speeds up detection.
### Setup
Use a Frigate docker image with the `-rk` suffix and enable privileged mode by adding the `--privileged` flag to your docker run command or `privileged: true` to your `docker-compose.yml` file.
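A minimal `docker-compose.yml` sketch satisfying both requirements (the image tag and volume paths are illustrative assumptions, not the only valid values):
```yaml
services:
  frigate:
    image: ghcr.io/blakeblackshear/frigate:stable-rk # image with the -rk suffix
    privileged: true # required for NPU access
    volumes:
      - ./config:/config
      - ./storage:/media/frigate
```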
### Configuration
This `config.yml` shows all relevant options for configuring the detector and explains them. All values shown are the default values (except for one). Lines that are required to use the detector are labeled as required; all other lines are optional.
```yaml
detectors: # required
rknn: # required
type: rknn # required
# core mask for npu
core_mask: 0
model: # required
# name of yolov8 model or path to your own .rknn model file
# possible values are:
# - default-yolov8n
# - default-yolov8s
# - default-yolov8m
# - default-yolov8l
# - default-yolov8x
# - /config/model_cache/rknn/your_custom_model.rknn
path: default-yolov8n
# width and height of detection frames
width: 320
height: 320
# pixel format of detection frame
# default value is rgb but yolo models usually use bgr format
input_pixel_format: bgr # required
# shape of detection frame
input_tensor: nhwc
```
Explanation of the rknn-specific options:
- **core mask** controls which cores of your NPU should be used. This option applies only to SoCs with a multicore NPU (at the time of writing, only the RK3588/S). The easiest way is to pass the value as a binary number. To do so, use the prefix `0b` and write a `0` to disable a core and a `1` to enable a core, where the last digit corresponds to core0, the second to last to core1, and so on. You also have to use the cores in ascending order (so you can't use core0 and core2, but you can use core0 and core1); a small validity check is sketched after this list. Enabling more cores can reduce inference time, especially when using bigger models (see the section below). Examples:
  - `core_mask: 0b000` or just `core_mask: 0` lets the NPU decide which cores should be used. This is the default and recommended value.
  - `core_mask: 0b001` uses only core0.
  - `core_mask: 0b011` uses core0 and core1.
  - `core_mask: 0b110` uses core1 and core2. **This does not work**, since core0 is disabled.
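A minimal sketch of that validity rule in plain Python (this helper is hypothetical and not part of Frigate):
```python
def valid_core_mask(mask: int) -> bool:
    # Cores must be enabled in ascending order starting at core0, so a valid
    # mask is 0 or a contiguous run of 1-bits anchored at bit 0 (0b1, 0b11, 0b111).
    # Exactly these masks satisfy mask & (mask + 1) == 0.
    return mask & (mask + 1) == 0

assert valid_core_mask(0b000)      # let the NPU decide: allowed
assert valid_core_mask(0b011)      # core0 and core1: allowed
assert not valid_core_mask(0b110)  # core1 and core2 without core0: rejected
```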
### Choosing a model
There are five default yolov8 models that differ in size and therefore in NPU load. In ascending order, with the smallest and least computationally intensive model first:
| Model | Size in MB |
| ------- | ---------- |
| yolov8n | 9 |
| yolov8s | 25 |
| yolov8m | 54 |
| yolov8l | 90 |
| yolov8x | 136 |
:::tip
You can get the load of your NPU with the following command:
```bash
$ cat /sys/kernel/debug/rknpu/load
>> NPU load: Core0: 0%, Core1: 0%, Core2: 0%,
```
:::
- By default the rknn detector uses the yolov8n model (`model: path: default-yolov8n`). This model comes with the image, so no steps beyond those mentioned above are necessary.
- If you want to use a more precise model, you can pass `default-yolov8s`, `default-yolov8m`, `default-yolov8l` or `default-yolov8x` as `model: path:` option.
- If the model does not exist, it will be automatically downloaded to `/config/model_cache/rknn`.
- If your server has no internet connection, you can download the model from [this GitHub repository](https://github.com/MarcA711/rknn-models/releases) using another device and place it in `/config/model_cache/rknn` on your system.
- Finally, you can also provide your own model. Note that only yolov8 models are currently supported. Moreover, you will need to convert your model to the rknn format using `rknn-toolkit2` on an x86 machine (a conversion sketch follows the example below). Afterwards, you can place your `.rknn` model file in the `/config/model_cache/rknn` directory on your system. Then you need to pass the path to your model using the `path` option of your `model` block like this:
```yaml
model:
path: /config/model_cache/rknn/my-rknn-model.rknn
```
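The conversion step itself happens outside Frigate. A rough sketch of the `rknn-toolkit2` flow on an x86 machine, assuming a hypothetical ONNX export named `my-model.onnx`, an RK3588 target, and illustrative normalization values:
```python
from rknn.api import RKNN  # provided by rknn-toolkit2 (x86 only)

rknn = RKNN()
# Bake preprocessing into the converted model; these normalization values
# assume 0-255 RGB input and are an assumption, not a requirement.
rknn.config(
    mean_values=[[0, 0, 0]],
    std_values=[[255, 255, 255]],
    target_platform="rk3588",
)
rknn.load_onnx(model="my-model.onnx")  # hypothetical ONNX model file
rknn.build(do_quantization=False)
rknn.export_rknn("my-rknn-model.rknn")
rknn.release()
```
The resulting `.rknn` file then goes in `/config/model_cache/rknn` and is referenced via `model: path:` as shown above.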
:::tip
When you have a multicore NPU, you can enable all cores to reduce inference times. You should consider activating all cores if you use a larger model like yolov8l. If your NPU has 3 cores (like the RK3588/S SoCs), you can enable all 3 cores using:
```yaml
detectors:
rknn:
type: rknn
core_mask: 0b111
```
:::

@@ -80,7 +80,7 @@ model:
# Valid values are nhwc or nchw (default: shown below)
input_tensor: nhwc
# Optional: Object detection model type, currently only used with the OpenVINO detector
# Valid values are ssd, yolox, yolov5, or yolov8 (default: shown below)
# Valid values are ssd or yolox (default: shown below)
model_type: ssd
# Optional: Label name modifications. These are merged into the standard labelmap.
labelmap:

@@ -95,17 +95,6 @@ Frigate supports all Jetson boards, from the inexpensive Jetson Nano to the powe
Inference speed will vary depending on the YOLO model, Jetson platform, and Jetson nvpmodel (GPU/DLA/EMC clock speed). It is typically 20-40 ms for most models. The DLA is more efficient than the GPU but not faster, so using the DLA will reduce power consumption but slightly increase inference time.
#### Rockchip SoC
Frigate supports SBCs with the following Rockchip SoCs:
- RK3566/RK3568
- RK3588/RK3588S
- RV1103/RV1106
- RK3562
With the yolov8n model on an Orange Pi 5 Plus (RK3588 SoC), inference speeds vary between 20 and 25 ms.
## What does Frigate use the CPU for and what does it use a detector for? (ELI5 Version)
This is taken from a [user question on reddit](https://www.reddit.com/r/homeassistant/comments/q8mgau/comment/hgqbxh5/?utm_source=share&utm_medium=web2x&context=3). Modified slightly for clarity.

@@ -30,8 +30,6 @@ class InputTensorEnum(str, Enum):
class ModelTypeEnum(str, Enum):
ssd = "ssd"
yolox = "yolox"
yolov5 = "yolov5"
yolov8 = "yolov8"
class ModelConfig(BaseModel):

@@ -131,44 +131,3 @@ class OvDetector(DetectionApi):
object_detected[6], object_detected[5], object_detected[:4]
)
return detections
elif self.ov_model_type == ModelTypeEnum.yolov8:
out_tensor = infer_request.get_output_tensor()
results = out_tensor.data[0]
output_data = np.transpose(results)
scores = np.max(output_data[:, 4:], axis=1)
if len(scores) == 0:
return np.zeros((20, 6), np.float32)
scores = np.expand_dims(scores, axis=1)
# add scores to the last column
dets = np.concatenate((output_data, scores), axis=1)
# filter out lines with scores below threshold
dets = dets[dets[:, -1] > 0.5, :]
# limit to top 20 scores, descending order
ordered = dets[dets[:, -1].argsort()[::-1]][:20]
detections = np.zeros((20, 6), np.float32)
for i, object_detected in enumerate(ordered):
detections[i] = self.process_yolo(
np.argmax(object_detected[4:-1]),
object_detected[-1],
object_detected[:4],
)
return detections
elif self.ov_model_type == ModelTypeEnum.yolov5:
out_tensor = infer_request.get_output_tensor()
output_data = out_tensor.data[0]
# filter out lines with scores below threshold
conf_mask = (output_data[:, 4] >= 0.5).squeeze()
output_data = output_data[conf_mask]
# limit to top 20 scores, descending order
ordered = output_data[output_data[:, 4].argsort()[::-1]][:20]
detections = np.zeros((20, 6), np.float32)
for i, object_detected in enumerate(ordered):
detections[i] = self.process_yolo(
np.argmax(object_detected[5:]),
object_detected[4],
object_detected[:4],
)
return detections

@@ -1,10 +1,7 @@
import logging
import os.path
import urllib.request
from typing import Literal
import numpy as np
try:
from hide_warnings import hide_warnings
except: # noqa: E722
@@ -24,14 +21,6 @@ DETECTOR_KEY = "rknn"
supported_socs = ["rk3562", "rk3566", "rk3568", "rk3588"]
yolov8_suffix = {
"default-yolov8n": "n",
"default-yolov8s": "s",
"default-yolov8m": "m",
"default-yolov8l": "l",
"default-yolov8x": "x",
}
class RknnDetectorConfig(BaseDetectorConfig):
type: Literal[DETECTOR_KEY]
@@ -73,35 +62,12 @@ class Rknn(DetectionApi):
elif "rk3588" in soc:
os.rename("/usr/lib/librknnrt_rk3588.so", "/usr/lib/librknnrt.so")
self.model_path = config.model.path or "default-yolov8n"
self.core_mask = config.core_mask
self.height = config.model.height
self.width = config.model.width
if self.model_path in yolov8_suffix:
if self.model_path == "default-yolov8n":
self.model_path = "/models/rknn/yolov8n-320x320-{soc}.rknn".format(
soc=soc
)
else:
model_suffix = yolov8_suffix[self.model_path]
self.model_path = (
"/config/model_cache/rknn/yolov8{suffix}-320x320-{soc}.rknn".format(
suffix=model_suffix, soc=soc
)
)
os.makedirs("/config/model_cache/rknn", exist_ok=True)
if not os.path.isfile(self.model_path):
logger.info(
"Downloading yolov8{suffix} model.".format(suffix=model_suffix)
)
urllib.request.urlretrieve(
"https://github.com/MarcA711/rknn-models/releases/download/v1.5.2-{soc}/yolov8{suffix}-320x320-{soc}.rknn".format(
soc=soc, suffix=model_suffix
),
self.model_path,
)
if True:
os.makedirs("/config/model_cache/rknn", exist_ok=True)
if (config.model.width != 320) or (config.model.height != 320):
logger.error(
@@ -137,60 +103,12 @@
"Error initializing rknn runtime. Do you run docker in privileged mode?"
)
def __del__(self):
self.rknn.release()
def postprocess(self, results):
"""
Processes yolov8 output.
Args:
results: array with shape: (1, 84, n, 1) where n depends on yolov8 model size (for 320x320 model n=2100)
Returns:
detections: array with shape (20, 6) with 20 rows of (class, confidence, y_min, x_min, y_max, x_max)
"""
results = np.transpose(results[0, :, :, 0]) # array shape (2100, 84)
scores = np.max(
results[:, 4:], axis=1
) # array shape (2100,); max confidence of each row
# drop rows with scores below the 0.4 threshold
filtered_arg = np.argwhere(scores > 0.4)
results = results[filtered_arg[:, 0]]
scores = scores[filtered_arg[:, 0]]
num_detections = len(scores)
if num_detections == 0:
return np.zeros((20, 6), np.float32)
if num_detections > 20:
top_arg = np.argpartition(scores, -20)[-20:]
results = results[top_arg]
scores = scores[top_arg]
num_detections = 20
classes = np.argmax(results[:, 4:], axis=1)
boxes = np.transpose(
np.vstack(
(
(results[:, 1] - 0.5 * results[:, 3]) / self.height,
(results[:, 0] - 0.5 * results[:, 2]) / self.width,
(results[:, 1] + 0.5 * results[:, 3]) / self.height,
(results[:, 0] + 0.5 * results[:, 2]) / self.width,
)
)
)
detections = np.zeros((20, 6), np.float32)
detections[:num_detections, 0] = classes
detections[:num_detections, 1] = scores
detections[:num_detections, 2:] = boxes
return detections
raise Exception(
"RKNN does not currently support any models. Please see the docs for more info."
)
def __del__(self):
self.rknn.release()
@hide_warnings
def inference(self, tensor_input):

@@ -256,9 +256,9 @@ class AudioEventMaintainer(threading.Thread):
def handle_detection(self, label: str, score: float) -> None:
if self.detections.get(label):
self.detections[label][
"last_detection"
] = datetime.datetime.now().timestamp()
self.detections[label]["last_detection"] = (
datetime.datetime.now().timestamp()
)
else:
self.inter_process_communicator.queue.put(
(f"{self.config.name}/audio/{label}", "ON")

@@ -700,9 +700,9 @@ def event_snapshot(id):
else:
response.headers["Cache-Control"] = "no-store"
if download:
response.headers[
"Content-Disposition"
] = f"attachment; filename=snapshot-{id}.jpg"
response.headers["Content-Disposition"] = (
f"attachment; filename=snapshot-{id}.jpg"
)
return response
@@ -889,9 +889,9 @@ def event_clip(id):
if download:
response.headers["Content-Disposition"] = "attachment; filename=%s" % file_name
response.headers["Content-Length"] = os.path.getsize(clip_path)
response.headers[
"X-Accel-Redirect"
] = f"/clips/{file_name}" # nginx: https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_ignore_headers
response.headers["X-Accel-Redirect"] = (
f"/clips/{file_name}" # nginx: https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_ignore_headers
)
return response
@@ -1187,9 +1187,9 @@ def config():
config["plus"] = {"enabled": current_app.plus_api.is_active()}
for detector, detector_config in config["detectors"].items():
detector_config["model"][
"labelmap"
] = current_app.frigate_config.model.merged_labelmap
detector_config["model"]["labelmap"] = (
current_app.frigate_config.model.merged_labelmap
)
return jsonify(config)
@@ -1596,9 +1596,9 @@ def get_recordings_storage_usage():
total_mb = recording_stats["total"]
camera_usages: dict[
str, dict
] = current_app.storage_maintainer.calculate_camera_usages()
camera_usages: dict[str, dict] = (
current_app.storage_maintainer.calculate_camera_usages()
)
for camera_name in camera_usages.keys():
if camera_usages.get(camera_name, {}).get("usage"):
@@ -1785,9 +1785,9 @@ def recording_clip(camera_name, start_ts, end_ts):
if download:
response.headers["Content-Disposition"] = "attachment; filename=%s" % file_name
response.headers["Content-Length"] = os.path.getsize(path)
response.headers[
"X-Accel-Redirect"
] = f"/cache/{file_name}" # nginx: https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_ignore_headers
response.headers["X-Accel-Redirect"] = (
f"/cache/{file_name}" # nginx: https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_ignore_headers
)
return response

@@ -297,12 +297,12 @@ class PtzAutoTracker:
self.ptz_metrics[camera][
"ptz_max_zoom"
].value = camera_config.onvif.autotracking.movement_weights[1]
self.intercept[
camera
] = camera_config.onvif.autotracking.movement_weights[2]
self.move_coefficients[
camera
] = camera_config.onvif.autotracking.movement_weights[3:]
self.intercept[camera] = (
camera_config.onvif.autotracking.movement_weights[2]
)
self.move_coefficients[camera] = (
camera_config.onvif.autotracking.movement_weights[3:]
)
else:
camera_config.onvif.autotracking.enabled = False
self.ptz_metrics[camera]["ptz_autotracker_enabled"].value = False
@@ -566,9 +566,9 @@ class PtzAutoTracker:
) ** self.zoom_factor[camera]
if "original_target_box" not in self.tracked_object_metrics[camera]:
self.tracked_object_metrics[camera][
"original_target_box"
] = self.tracked_object_metrics[camera]["target_box"]
self.tracked_object_metrics[camera]["original_target_box"] = (
self.tracked_object_metrics[camera]["target_box"]
)
(
self.tracked_object_metrics[camera]["valid_velocity"],

@@ -123,9 +123,9 @@ class OnvifController:
logger.debug(f"Onvif config for {camera_name}: {ptz_config}")
service_capabilities_request = ptz.create_type("GetServiceCapabilities")
self.cams[camera_name][
"service_capabilities_request"
] = service_capabilities_request
self.cams[camera_name]["service_capabilities_request"] = (
service_capabilities_request
)
fov_space_id = next(
(
@@ -241,9 +241,9 @@ class OnvifController:
supported_features.append("zoom-r")
try:
# get camera's zoom limits from onvif config
self.cams[camera_name][
"relative_zoom_range"
] = ptz_config.Spaces.RelativeZoomTranslationSpace[0]
self.cams[camera_name]["relative_zoom_range"] = (
ptz_config.Spaces.RelativeZoomTranslationSpace[0]
)
except Exception:
if (
self.config.cameras[camera_name].onvif.autotracking.zooming
@@ -260,9 +260,9 @@ class OnvifController:
supported_features.append("zoom-a")
try:
# get camera's zoom limits from onvif config
self.cams[camera_name][
"absolute_zoom_range"
] = ptz_config.Spaces.AbsoluteZoomPositionSpace[0]
self.cams[camera_name]["absolute_zoom_range"] = (
ptz_config.Spaces.AbsoluteZoomPositionSpace[0]
)
self.cams[camera_name]["zoom_limits"] = configs.ZoomLimits
except Exception:
if self.config.cameras[camera_name].onvif.autotracking.zooming:
@@ -279,9 +279,9 @@
and configs.DefaultRelativePanTiltTranslationSpace is not None
):
supported_features.append("pt-r-fov")
self.cams[camera_name][
"relative_fov_range"
] = ptz_config.Spaces.RelativePanTiltTranslationSpace[fov_space_id]
self.cams[camera_name]["relative_fov_range"] = (
ptz_config.Spaces.RelativePanTiltTranslationSpace[fov_space_id]
)
self.cams[camera_name]["features"] = supported_features

@@ -45,9 +45,9 @@ class TestFfmpegPresets(unittest.TestCase):
assert self.default_ffmpeg == frigate_config.dict(exclude_unset=True)
def test_ffmpeg_hwaccel_preset(self):
self.default_ffmpeg["cameras"]["back"]["ffmpeg"][
"hwaccel_args"
] = "preset-rpi-64-h264"
self.default_ffmpeg["cameras"]["back"]["ffmpeg"]["hwaccel_args"] = (
"preset-rpi-64-h264"
)
frigate_config = FrigateConfig(**self.default_ffmpeg)
frigate_config.cameras["back"].create_ffmpeg_cmds()
assert "preset-rpi-64-h264" not in (
@@ -58,9 +58,9 @@ class TestFfmpegPresets(unittest.TestCase):
)
def test_ffmpeg_hwaccel_not_preset(self):
self.default_ffmpeg["cameras"]["back"]["ffmpeg"][
"hwaccel_args"
] = "-other-hwaccel args"
self.default_ffmpeg["cameras"]["back"]["ffmpeg"]["hwaccel_args"] = (
"-other-hwaccel args"
)
frigate_config = FrigateConfig(**self.default_ffmpeg)
frigate_config.cameras["back"].create_ffmpeg_cmds()
assert "-other-hwaccel args" in (
@@ -68,9 +68,9 @@ class TestFfmpegPresets(unittest.TestCase):
)
def test_ffmpeg_hwaccel_scale_preset(self):
self.default_ffmpeg["cameras"]["back"]["ffmpeg"][
"hwaccel_args"
] = "preset-nvidia-h264"
self.default_ffmpeg["cameras"]["back"]["ffmpeg"]["hwaccel_args"] = (
"preset-nvidia-h264"
)
self.default_ffmpeg["cameras"]["back"]["detect"] = {
"height": 1920,
"width": 2560,
@@ -89,9 +89,9 @@
def test_default_ffmpeg_input_arg_preset(self):
frigate_config = FrigateConfig(**self.default_ffmpeg)
self.default_ffmpeg["cameras"]["back"]["ffmpeg"][
"input_args"
] = "preset-rtsp-generic"
self.default_ffmpeg["cameras"]["back"]["ffmpeg"]["input_args"] = (
"preset-rtsp-generic"
)
frigate_preset_config = FrigateConfig(**self.default_ffmpeg)
frigate_config.cameras["back"].create_ffmpeg_cmds()
frigate_preset_config.cameras["back"].create_ffmpeg_cmds()
@@ -102,9 +102,9 @@
)
def test_ffmpeg_input_preset(self):
self.default_ffmpeg["cameras"]["back"]["ffmpeg"][
"input_args"
] = "preset-rtmp-generic"
self.default_ffmpeg["cameras"]["back"]["ffmpeg"]["input_args"] = (
"preset-rtmp-generic"
)
frigate_config = FrigateConfig(**self.default_ffmpeg)
frigate_config.cameras["back"].create_ffmpeg_cmds()
assert "preset-rtmp-generic" not in (
@@ -135,9 +135,9 @@
)
def test_ffmpeg_output_record_preset(self):
self.default_ffmpeg["cameras"]["back"]["ffmpeg"]["output_args"][
"record"
] = "preset-record-generic-audio-aac"
self.default_ffmpeg["cameras"]["back"]["ffmpeg"]["output_args"]["record"] = (
"preset-record-generic-audio-aac"
)
frigate_config = FrigateConfig(**self.default_ffmpeg)
frigate_config.cameras["back"].create_ffmpeg_cmds()
assert "preset-record-generic-audio-aac" not in (
@@ -148,9 +148,9 @@
)
def test_ffmpeg_output_record_not_preset(self):
self.default_ffmpeg["cameras"]["back"]["ffmpeg"]["output_args"][
"record"
] = "-some output"
self.default_ffmpeg["cameras"]["back"]["ffmpeg"]["output_args"]["record"] = (
"-some output"
)
frigate_config = FrigateConfig(**self.default_ffmpeg)
frigate_config.cameras["back"].create_ffmpeg_cmds()
assert "-some output" in (
@@ -158,9 +158,9 @@
)
def test_ffmpeg_output_rtmp_preset(self):
self.default_ffmpeg["cameras"]["back"]["ffmpeg"]["output_args"][
"rtmp"
] = "preset-rtmp-jpeg"
self.default_ffmpeg["cameras"]["back"]["ffmpeg"]["output_args"]["rtmp"] = (
"preset-rtmp-jpeg"
)
frigate_config = FrigateConfig(**self.default_ffmpeg)
frigate_config.cameras["back"].create_ffmpeg_cmds()
assert "preset-rtmp-jpeg" not in (
@@ -171,9 +171,9 @@
)
def test_ffmpeg_output_rtmp_not_preset(self):
self.default_ffmpeg["cameras"]["back"]["ffmpeg"]["output_args"][
"rtmp"
] = "-some output"
self.default_ffmpeg["cameras"]["back"]["ffmpeg"]["output_args"]["rtmp"] = (
"-some output"
)
frigate_config = FrigateConfig(**self.default_ffmpeg)
frigate_config.cameras["back"].create_ffmpeg_cmds()
assert "-some output" in (

@@ -20,6 +20,7 @@ Some examples (model - class or model name)::
> migrator.add_default(model, field_name, default)
"""
import peewee as pw
SQL = pw.SQL

@@ -20,6 +20,7 @@ Some examples (model - class or model name)::
> migrator.add_default(model, field_name, default)
"""
import peewee as pw
SQL = pw.SQL

@@ -20,6 +20,7 @@ Some examples (model - class or model name)::
> migrator.add_default(model, field_name, default)
"""
import peewee as pw
SQL = pw.SQL

@@ -20,6 +20,7 @@ Some examples (model - class or model name)::
> migrator.add_default(model, field_name, default)
"""
import peewee as pw
SQL = pw.SQL

@@ -20,6 +20,7 @@ Some examples (model - class or model name)::
> migrator.add_default(model, field_name, default)
"""
import peewee as pw
SQL = pw.SQL
