Mirror of https://github.com/blakeblackshear/frigate.git
Synced 2025-09-10 17:51:45 +02:00
Compare commits
58 Commits
037c4d1cc0 1613499218 205fdf3ae3 f46f8a2160 880902cdd7 c5ed95ec52
751de141d5 0eb441fe50 7566aecb0b 60714a733e d7f7cd7be1 6591210050
7e7b3288a8 fe3eb24dfe e664cb2285 62047c80d5 198e53bd42 f7ed8b4cab
e9dc30235b 16b7f7f6e7 281c461647 667c302a7d ad694f5511 fa6956c46e
b5aa1b2c21 f62feeb50c 0dda37ac43 4fcb1ea7ac 4347402fcc 4fe246f472
7cf439e010 8a01643acf 664a6fd0cb 2b185a1105 75e33d8a56 8f4b5b4bdb
95cea06dd3 ec2543c23f d27e8c1bbf 353ee1228c ba20b61c43 b45f642868
9ed7ccab75 ceced7cc91 1db26cb41e 6840415b6c 06539c925c addb4e6891
fb290c411b 89db960c05 2cde58037d d1be614a10 93c7c8c518 c83a35d090
d31a4e3443 334b6670e1 b5067c07f8 21e9b2f2ce
.github/DISCUSSION_TEMPLATE/report-a-bug.yml (vendored, 5 changes)

@@ -6,7 +6,7 @@ body:
   value: |
     Use this form to submit a reproducible bug in Frigate or Frigate's UI.

-    Before submitting your bug report, please [search the discussions][discussions], look at recent open and closed [pull requests][prs], read the [official Frigate documentation][docs], and read the [Frigate FAQ][faq] pinned at the Discussion page to see if your bug has already been fixed by the developers or reported by the community.
+    Before submitting your bug report, please ask the AI with the "Ask AI" button on the [official documentation site][ai] about your issue, [search the discussions][discussions], look at recent open and closed [pull requests][prs], read the [official Frigate documentation][docs], and read the [Frigate FAQ][faq] pinned at the Discussion page to see if your bug has already been fixed by the developers or reported by the community.

     **If you are unsure if your issue is actually a bug or not, please submit a support request first.**

@@ -14,6 +14,7 @@ body:
   [prs]: https://www.github.com/blakeblackshear/frigate/pulls
   [docs]: https://docs.frigate.video
   [faq]: https://github.com/blakeblackshear/frigate/discussions/12724
+  [ai]: https://docs.frigate.video
 - type: checkboxes
   attributes:
     label: Checklist

@@ -26,6 +27,8 @@ body:
   - label: I have tried a different browser to see if it is related to my browser.
     required: true
   - label: I have tried reproducing the issue in [incognito mode](https://www.computerworld.com/article/1719851/how-to-go-incognito-in-chrome-firefox-safari-and-edge.html) to rule out problems with any third party extensions or plugins I have installed.
+  - label: I have asked the AI at https://docs.frigate.video about my issue.
+    required: true
 - type: textarea
   id: description
   attributes:

Makefile (2 changes)

@@ -1,7 +1,7 @@
 default_target: local

 COMMIT_HASH := $(shell git log -1 --pretty=format:"%h"|tail -1)
-VERSION = 0.16.0
+VERSION = 0.16.2
 IMAGE_REPO ?= ghcr.io/blakeblackshear/frigate
 GITHUB_REF_NAME ?= $(shell git rev-parse --abbrev-ref HEAD)
 BOARDS= #Initialized empty

@@ -152,7 +152,7 @@ ARG TARGETARCH
 # Use a separate container to build wheels to prevent build dependencies in final image
 RUN apt-get -qq update \
     && apt-get -qq install -y \
-    apt-transport-https wget \
+    apt-transport-https wget unzip \
     && apt-get -qq update \
     && apt-get -qq install -y \
     python3.11 \

@@ -2,18 +2,25 @@

 set -euxo pipefail

-SQLITE3_VERSION="96c92aba00c8375bc32fafcdf12429c58bd8aabfcadab6683e35bbb9cdebf19e" # 3.46.0
+SQLITE3_VERSION="3.46.1"
 PYSQLITE3_VERSION="0.5.3"

-# Fetch the source code for the latest release of Sqlite.
+# Fetch the pre-built sqlite amalgamation instead of building from source
 if [[ ! -d "sqlite" ]]; then
-    wget https://www.sqlite.org/src/tarball/sqlite.tar.gz?r=${SQLITE3_VERSION} -O sqlite.tar.gz
-    tar xzf sqlite.tar.gz
-    cd sqlite/
-    LIBS="-lm" ./configure --disable-tcl --enable-tempstore=always
-    make sqlite3.c
+    mkdir sqlite
+    cd sqlite
+
+    # Download the pre-built amalgamation from sqlite.org
+    # For SQLite 3.46.1, the amalgamation version is 3460100
+    SQLITE_AMALGAMATION_VERSION="3460100"
+
+    wget https://www.sqlite.org/2024/sqlite-amalgamation-${SQLITE_AMALGAMATION_VERSION}.zip -O sqlite-amalgamation.zip
+    unzip sqlite-amalgamation.zip
+    mv sqlite-amalgamation-${SQLITE_AMALGAMATION_VERSION}/* .
+    rmdir sqlite-amalgamation-${SQLITE_AMALGAMATION_VERSION}
+    rm sqlite-amalgamation.zip
+
     cd ../
-    rm sqlite.tar.gz
 fi

 # Grab the pysqlite3 source code.

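The `3460100` constant above follows SQLite's documented amalgamation naming scheme: release X.Y.Z maps to the seven-digit number XYYZZ00. A small sketch of that mapping (the helper name is illustrative, not part of the script):

```python
def amalgamation_version(release: str) -> str:
    """Map an SQLite release like '3.46.1' to the seven-digit
    number used in amalgamation download URLs, e.g. '3460100'."""
    major, minor, patch = (int(part) for part in release.split("."))
    return f"{major}{minor:02d}{patch:02d}00"

assert amalgamation_version("3.46.1") == "3460100"
```
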
@@ -57,9 +57,16 @@ fi

 # arch specific packages
 if [[ "${TARGETARCH}" == "amd64" ]]; then
+    # Install non-free version of i965 driver
+    sed -i -E "/^Components: main$/s/main/main contrib non-free non-free-firmware/" "/etc/apt/sources.list.d/debian.sources" \
+        && apt-get -qq update \
+        && apt-get install --no-install-recommends --no-install-suggests -y i965-va-driver-shaders \
+        && sed -i -E "/^Components: main contrib non-free non-free-firmware$/s/main contrib non-free non-free-firmware/main/" "/etc/apt/sources.list.d/debian.sources" \
+        && apt-get update
+
     # install amd / intel-i965 driver packages
     apt-get -qq install --no-install-recommends --no-install-suggests -y \
-        i965-va-driver intel-gpu-tools onevpl-tools \
+        intel-gpu-tools onevpl-tools \
         libva-drm2 \
         mesa-va-drivers radeontop

@@ -144,7 +144,14 @@ WEB Digest Algorithm - MD5

 ### Reolink Cameras

-Reolink has older cameras (ex: 410 & 520) as well as newer camera (ex: 520a & 511wa) which support different subsets of options. In both cases using the http stream is recommended.
+Reolink has many different camera models with inconsistently supported features and behavior. The below table shows a summary of various features and recommendations.
+
+| Camera Resolution | Camera Generation         | Recommended Stream Type           | Additional Notes                                                        |
+| ----------------- | ------------------------- | --------------------------------- | ----------------------------------------------------------------------- |
+| 5MP or lower      | All                       | http-flv                          | Stream is h264                                                          |
+| 6MP or higher     | Latest (ex: Duo3, CX-8##) | http-flv with ffmpeg 8.0, or rtsp | This uses the new http-flv-enhanced over H265 which requires ffmpeg 8.0 |
+| 6MP or higher     | Older (ex: RLC-8##)       | rtsp                              |                                                                         |

-Frigate works much better with newer reolink cameras that are setup with the below options:
+If available, recommended settings are:

@@ -157,12 +164,6 @@ According to [this discussion](https://github.com/blakeblackshear/frigate/issues
 Cameras connected via a Reolink NVR can be connected with the http stream, use `channel[0..15]` in the stream url for the additional channels.
 The setup of main stream can be also done via RTSP, but isn't always reliable on all hardware versions. The example configuration is working with the oldest HW version RLN16-410 device with multiple types of cameras.

-:::warning
-
-The below configuration only works for reolink cameras with stream resolution of 5MP or lower, 8MP+ cameras need to use RTSP as http-flv is not supported in this case.
-
-:::

 ```yaml
 go2rtc:
   streams:

@@ -259,7 +260,7 @@ To use a USB camera (webcam) with Frigate, the recommendation is to use go2rtc's
 go2rtc:
   streams:
     usb_camera:
-      - "ffmpeg:device?video=0&video_size=1024x576#video=h264"
+      - "ffmpeg:device?video=0&video_size=1024x576#video=h264"

 cameras:
   usb_camera:

@@ -107,10 +107,7 @@ This list of working and non-working PTZ cameras is based on user feedback.
 | Hanwha XNP-6550RH            | ✅ | ❌ |                                                                 |
 | Hikvision                    | ✅ | ❌ | Incomplete ONVIF support (MoveStatus won't update even on latest firmware) - reported with HWP-N4215IH-DE and DS-2DE3304W-DE, but likely others |
 | Hikvision DS-2DE3A404IWG-E/W | ✅ | ✅ |                                                                 |
-| Reolink 511WA                | ✅ | ❌ | Zoom only                                                       |
-| Reolink E1 Pro               | ✅ | ❌ |                                                                 |
-| Reolink E1 Zoom              | ✅ | ❌ |                                                                 |
-| Reolink RLC-823A 16x         | ✅ | ❌ |                                                                 |
+| Reolink                      | ✅ | ❌ |                                                                 |
 | Speco O8P32X                 | ✅ | ❌ |                                                                 |
 | Sunba 405-D20X               | ✅ | ❌ | Incomplete ONVIF support reported on original, and 4k models. All models are suspected incompatable. |
 | Tapo                         | ✅ | ❌ | Many models supported, ONVIF Service Port: 2020                 |

|
@ -21,8 +21,7 @@ See [the hwaccel docs](/configuration/hardware_acceleration_video.md) for more i
|
||||
| preset-nvidia | Nvidia GPU | |
|
||||
| preset-jetson-h264 | Nvidia Jetson with h264 stream | |
|
||||
| preset-jetson-h265 | Nvidia Jetson with h265 stream | |
|
||||
| preset-rk-h264 | Rockchip MPP with h264 stream | Use image with \*-rk suffix and privileged mode |
|
||||
| preset-rk-h265 | Rockchip MPP with h265 stream | Use image with \*-rk suffix and privileged mode |
|
||||
| preset-rkmpp | Rockchip MPP | Use image with \*-rk suffix and privileged mode |
|
||||
|
||||
### Input Args Presets
|
||||
|
||||
|
@@ -99,6 +99,12 @@ genai:
   model: gemini-1.5-flash
 ```

+:::note
+
+To use a different Gemini-compatible API endpoint, set the `GEMINI_BASE_URL` environment variable to your provider's API URL.
+
+:::
+
 ## OpenAI

 OpenAI does not have a free tier for their API. With the release of gpt-4o, pricing has been reduced and each generation should cost fractions of a cent if you choose to go this route.

@@ -9,7 +9,6 @@ It is highly recommended to use a GPU for hardware acceleration video decoding i

 Depending on your system, these parameters may not be compatible. More information on hardware accelerated decoding for ffmpeg can be found here: https://trac.ffmpeg.org/wiki/HWAccelIntro

 # Object Detection

 ## Raspberry Pi 3/4

@@ -229,7 +228,7 @@ Additional configuration is needed for the Docker container to be able to access
 services:
   frigate:
     ...
-    image: ghcr.io/blakeblackshear/frigate:stable
+    image: ghcr.io/blakeblackshear/frigate:stable-tensorrt
     deploy: # <------------- Add this section
       resources:
         reservations:

@@ -247,7 +246,7 @@ docker run -d \
   --name frigate \
   ...
   --gpus=all \
-  ghcr.io/blakeblackshear/frigate:stable
+  ghcr.io/blakeblackshear/frigate:stable-tensorrt
 ```

 ### Setup Decoder

@@ -251,3 +251,7 @@ Note that disabling a camera through the config file (`enabled: False`) removes
 6. **I have unmuted some cameras on my dashboard, but I do not hear sound. Why?**

    If your camera is streaming (as indicated by a red dot in the upper right, or if it has been set to continuous streaming mode), your browser may be blocking audio until you interact with the page. This is an intentional browser limitation. See [this article](https://developer.mozilla.org/en-US/docs/Web/Media/Autoplay_guide#autoplay_availability). Many browsers have a whitelist feature to change this behavior.

+7. **My camera streams have lots of visual artifacts / distortion.**
+
+   Some cameras don't include the hardware to support multiple connections to the high resolution stream, and this can cause unexpected behavior. In this case it is recommended to [restream](./restream.md) the high resolution stream so that it can be used for live view and recordings.

@@ -29,6 +29,7 @@ Frigate supports multiple different detectors that work on different types of ha

 - [ONNX](#onnx): TensorRT will automatically be detected and used as a detector in the `-tensorrt` Frigate image when a supported ONNX model is configured.

 **Nvidia Jetson**

+- [TensortRT](#nvidia-tensorrt-detector): TensorRT can run on Jetson devices, using one of many default models.
 - [ONNX](#onnx): TensorRT will automatically be detected and used as a detector in the `-tensorrt-jp6` Frigate image when a supported ONNX model is configured.

@@ -325,6 +326,12 @@ The YOLO detector has been designed to support YOLOv3, YOLOv4, YOLOv7, and YOLOv

 :::

+:::warning
+
+If you are using a Frigate+ YOLOv9 model, you should not define any of the below `model` parameters in your config except for `path`. See [the Frigate+ model docs](/plus/first_model#step-3-set-your-model-id-in-the-config) for more information on setting up your model.
+
+:::
+
 After placing the downloaded onnx model in your config folder, you can use the following configuration:

 ```yaml

@@ -440,14 +447,13 @@ Also AMD/ROCm does not "officially" support integrated GPUs. It still does work

 For the rocm frigate build there is some automatic detection:

 - gfx90c -> 9.0.0
 - gfx1031 -> 10.3.0
 - gfx1103 -> 11.0.0

-If you have something else you might need to override the `HSA_OVERRIDE_GFX_VERSION` at Docker launch. Suppose the version you want is `9.0.0`, then you should configure it from command line as:
+If you have something else you might need to override the `HSA_OVERRIDE_GFX_VERSION` at Docker launch. Suppose the version you want is `10.0.0`, then you should configure it from command line as:

 ```bash
-$ docker run -e HSA_OVERRIDE_GFX_VERSION=9.0.0 \
+$ docker run -e HSA_OVERRIDE_GFX_VERSION=10.0.0 \
 ...
 ```

@@ -458,7 +464,7 @@ services:
   frigate:

     environment:
-      HSA_OVERRIDE_GFX_VERSION: "9.0.0"
+      HSA_OVERRIDE_GFX_VERSION: "10.0.0"
 ```

 Figuring out what version you need can be complicated as you can't tell the chipset name and driver from the AMD brand name.

@@ -534,6 +540,12 @@ There is no default model provided, the following formats are supported:

 [YOLO-NAS](https://github.com/Deci-AI/super-gradients/blob/master/YOLONAS.md) models are supported, but not included by default. See [the models section](#downloading-yolo-nas-model) for more information on downloading the YOLO-NAS model for use in Frigate.

+:::warning
+
+If you are using a Frigate+ YOLO-NAS model, you should not define any of the below `model` parameters in your config except for `path`. See [the Frigate+ model docs](/plus/first_model#step-3-set-your-model-id-in-the-config) for more information on setting up your model.
+
+:::
+
 After placing the downloaded onnx model in your config folder, you can use the following configuration:

 ```yaml

@@ -561,6 +573,12 @@ The YOLO detector has been designed to support YOLOv3, YOLOv4, YOLOv7, and YOLOv

 :::

+:::warning
+
+If you are using a Frigate+ YOLOv9 model, you should not define any of the below `model` parameters in your config except for `path`. See [the Frigate+ model docs](/plus/first_model#step-3-set-your-model-id-in-the-config) for more information on setting up your model.
+
+:::
+
 After placing the downloaded onnx model in your config folder, you can use the following configuration:

 ```yaml

@@ -1031,23 +1049,25 @@ python3 yolo_to_onnx.py -m yolov7-320

 #### YOLOv9

-YOLOv9 model can be exported as ONNX using the command below. You can copy and paste the whole thing to your terminal and execute, altering `MODEL_SIZE=t` in the first line to the [model size](https://github.com/WongKinYiu/yolov9#performance) you would like to convert (available sizes are `t`, `s`, `m`, `c`, and `e`).
+YOLOv9 model can be exported as ONNX using the command below. You can copy and paste the whole thing to your terminal and execute, altering `MODEL_SIZE=t` and `IMG_SIZE=320` in the first line to the [model size](https://github.com/WongKinYiu/yolov9#performance) you would like to convert (available model sizes are `t`, `s`, `m`, `c`, and `e`, common image sizes are `320` and `640`).

 ```sh
-docker build . --build-arg MODEL_SIZE=t --output . -f- <<'EOF'
+docker build . --build-arg MODEL_SIZE=t --build-arg IMG_SIZE=320 --output . -f- <<'EOF'
 FROM python:3.11 AS build
 RUN apt-get update && apt-get install --no-install-recommends -y libgl1 && rm -rf /var/lib/apt/lists/*
 COPY --from=ghcr.io/astral-sh/uv:0.8.0 /uv /bin/
 WORKDIR /yolov9
 ADD https://github.com/WongKinYiu/yolov9.git .
 RUN uv pip install --system -r requirements.txt
-RUN uv pip install --system onnx onnxruntime onnx-simplifier>=0.4.1
+RUN uv pip install --system onnx==1.18.0 onnxruntime onnx-simplifier>=0.4.1
 ARG MODEL_SIZE
+ARG IMG_SIZE
 ADD https://github.com/WongKinYiu/yolov9/releases/download/v0.1/yolov9-${MODEL_SIZE}-converted.pt yolov9-${MODEL_SIZE}.pt
 RUN sed -i "s/ckpt = torch.load(attempt_download(w), map_location='cpu')/ckpt = torch.load(attempt_download(w), map_location='cpu', weights_only=False)/g" models/experimental.py
-RUN python3 export.py --weights ./yolov9-${MODEL_SIZE}.pt --imgsz 320 --simplify --include onnx
+RUN python3 export.py --weights ./yolov9-${MODEL_SIZE}.pt --imgsz ${IMG_SIZE} --simplify --include onnx
 FROM scratch
 ARG MODEL_SIZE
-COPY --from=build /yolov9/yolov9-${MODEL_SIZE}.onnx /
+ARG IMG_SIZE
+COPY --from=build /yolov9/yolov9-${MODEL_SIZE}.onnx /yolov9-${MODEL_SIZE}-${IMG_SIZE}.onnx
 EOF
 ```

|
@ -438,7 +438,7 @@ record:
|
||||
# Optional: Number of minutes to wait between cleanup runs (default: shown below)
|
||||
# This can be used to reduce the frequency of deleting recording segments from disk if you want to minimize i/o
|
||||
expire_interval: 60
|
||||
# Optional: Sync recordings with disk on startup and once a day (default: shown below).
|
||||
# Optional: Two-way sync recordings database with disk on startup and once a day (default: shown below).
|
||||
sync_recordings: False
|
||||
# Optional: Retention settings for recording
|
||||
retain:
|
||||
|
@@ -99,6 +99,7 @@ In real-world deployments, even with multiple cameras running concurrently, Frig
 | Name             | Hailo‑8 Inference Time | Hailo‑8L Inference Time |
 | ---------------- | ---------------------- | ----------------------- |
 | ssd mobilenet v1 | ~ 6 ms                 | ~ 10 ms                 |
+| yolov9-tiny      |                        | 320: 18ms               |
 | yolov6n          | ~ 7 ms                 | ~ 11 ms                 |

 ### Google Coral TPU

@@ -131,17 +132,19 @@ More information is available [in the detector docs](/configuration/object_detec

 Inference speeds vary greatly depending on the CPU or GPU used, some known examples of GPU inference times are below:

-| Name           | MobileNetV2 Inference Time | YOLO-NAS Inference Time   | RF-DETR Inference Time | Notes                              |
-| -------------- | -------------------------- | ------------------------- | ---------------------- | ---------------------------------- |
-| Intel HD 530   | 15 - 35 ms                 |                           |                        | Can only run one detector instance |
-| Intel HD 620   | 15 - 25 ms                 | 320: ~ 35 ms              |                        |                                    |
-| Intel HD 630   | ~ 15 ms                    | 320: ~ 30 ms              |                        |                                    |
-| Intel UHD 730  | ~ 10 ms                    | 320: ~ 19 ms 640: ~ 54 ms |                        |                                    |
-| Intel UHD 770  | ~ 15 ms                    | 320: ~ 20 ms 640: ~ 46 ms |                        |                                    |
-| Intel N100     | ~ 15 ms                    | 320: ~ 25 ms              |                        | Can only run one detector instance |
-| Intel Iris XE  | ~ 10 ms                    | 320: ~ 18 ms 640: ~ 50 ms |                        |                                    |
-| Intel Arc A380 | ~ 6 ms                     | 320: ~ 10 ms 640: ~ 22 ms | 336: 20 ms 448: 27 ms  |                                    |
-| Intel Arc A750 | ~ 4 ms                     | 320: ~ 8 ms               |                        |                                    |
+| Name           | MobileNetV2 Inference Time | YOLOv9                                            | YOLO-NAS Inference Time   | RF-DETR Inference Time | Notes                              |
+| -------------- | -------------------------- | ------------------------------------------------- | ------------------------- | ---------------------- | ---------------------------------- |
+| Intel HD 530   | 15 - 35 ms                 |                                                   |                           |                        | Can only run one detector instance |
+| Intel HD 620   | 15 - 25 ms                 |                                                   | 320: ~ 35 ms              |                        |                                    |
+| Intel HD 630   | ~ 15 ms                    |                                                   | 320: ~ 30 ms              |                        |                                    |
+| Intel UHD 730  | ~ 10 ms                    |                                                   | 320: ~ 19 ms 640: ~ 54 ms |                        |                                    |
+| Intel UHD 770  | ~ 15 ms                    | t-320: ~ 16 ms s-320: ~ 20 ms s-640: ~ 40 ms      | 320: ~ 20 ms 640: ~ 46 ms |                        |                                    |
+| Intel N100     | ~ 15 ms                    | s-320: 30 ms                                      | 320: ~ 25 ms              |                        | Can only run one detector instance |
+| Intel N150     | ~ 15 ms                    | t-320: 16 ms s-320: 24 ms                         |                           |                        |                                    |
+| Intel Iris XE  | ~ 10 ms                    | s-320: 12 ms s-640: 30 ms                         | 320: ~ 18 ms 640: ~ 50 ms |                        |                                    |
+| Intel Arc A310 | ~ 5 ms                     | t-320: 7 ms t-640: 11 ms s-320: 8 ms s-640: 15 ms | 320: ~ 8 ms 640: ~ 14 ms  |                        |                                    |
+| Intel Arc A380 | ~ 6 ms                     |                                                   | 320: ~ 10 ms 640: ~ 22 ms | 336: 20 ms 448: 27 ms  |                                    |
+| Intel Arc A750 | ~ 4 ms                     |                                                   | 320: ~ 8 ms               |                        |                                    |

 ### TensorRT - Nvidia GPU

@@ -166,12 +169,13 @@ There are improved capabilities in newer GPU architectures that TensorRT can ben
 Inference speeds will vary greatly depending on the GPU and the model used.
 `tiny` variants are faster than the equivalent non-tiny model, some known examples are below:

-| Name      | YOLOv9 Inference Time | YOLO-NAS Inference Time   | RF-DETR Inference Time |
-| --------- | --------------------- | ------------------------- | ---------------------- |
-| RTX 3050  | t-320: 15 ms          | 320: ~ 10 ms 640: ~ 16 ms | Nano-320: ~ 12 ms      |
-| RTX 3070  | t-320: 11 ms          | 320: ~ 8 ms 640: ~ 14 ms  | Nano-320: ~ 9 ms       |
-| RTX A4000 |                       | 320: ~ 15 ms              |                        |
-| Tesla P40 |                       | 320: ~ 105 ms             |                        |
+| Name      | YOLOv9 Inference Time     | YOLO-NAS Inference Time   | RF-DETR Inference Time |
+| --------- | ------------------------- | ------------------------- | ---------------------- |
+| GTX 1070  | s-320: 16 ms              | 320: 14 ms                |                        |
+| RTX 3050  | t-320: 15 ms s-320: 17 ms | 320: ~ 10 ms 640: ~ 16 ms | Nano-320: ~ 12 ms      |
+| RTX 3070  | t-320: 11 ms s-320: 13 ms | 320: ~ 8 ms 640: ~ 14 ms  | Nano-320: ~ 9 ms       |
+| RTX A4000 |                           | 320: ~ 15 ms              |                        |
+| Tesla P40 |                           | 320: ~ 105 ms             |                        |

 ### ROCm - AMD GPU

@@ -179,7 +183,7 @@ With the [rocm](../configuration/object_detectors.md#amdrocm-gpu-detector) detec
 | Name      | YOLOv9 Inference Time | YOLO-NAS Inference Time   |
 | --------- | --------------------- | ------------------------- |
-| AMD 780M  | ~ 14 ms               | 320: ~ 25 ms 640: ~ 50 ms |
+| AMD 780M  | 320: ~ 14 ms          | 320: ~ 25 ms 640: ~ 50 ms |
 | AMD 8700G |                       | 320: ~ 20 ms 640: ~ 40 ms |

 ## Community Supported Detectors

@@ -43,7 +43,7 @@ The following ports are used by Frigate and can be mapped via docker as required
 | `8971` | Authenticated UI and API access without TLS. Reverse proxies should use this port. |
 | `5000` | Internal unauthenticated UI and API access. Access to this port should be limited. Intended to be used within the docker network for services that integrate with Frigate. |
 | `8554` | RTSP restreaming. By default, these streams are unauthenticated. Authentication can be configured in go2rtc section of config. |
-| `8555` | WebRTC connections for low latency live views. |
+| `8555` | WebRTC connections for cameras with two-way talk support. |

 #### Common Docker Compose storage configurations

docs/docs/frigate/planning_setup.md (new file, 74 lines)

@@ -0,0 +1,74 @@
---
id: planning_setup
title: Planning a New Installation
---

Choosing the right hardware for your Frigate NVR setup is important for optimal performance and a smooth experience. This guide will walk you through the key considerations, focusing on the number of cameras and the hardware required for efficient object detection.

## Key Considerations

### Number of Cameras and Simultaneous Activity

The most fundamental factor in your hardware decision is the number of cameras you plan to use. However, it's not just about the raw count; it's also about how many of those cameras are likely to see activity and require object detection simultaneously.

When motion is detected in a camera's feed, regions of that frame are sent to your chosen [object detection hardware](/configuration/object_detectors).

- **Low Simultaneous Activity (1-6 cameras with occasional motion)**: If you have a few cameras in areas with infrequent activity (e.g., a seldom-used backyard, a quiet interior), the demand on your object detection hardware will be lower. A single, entry-level AI accelerator will suffice.
- **Moderate Simultaneous Activity (6-12 cameras with some overlapping motion)**: For setups with more cameras, especially in areas like a busy street or a property with multiple access points, it's more likely that several cameras will capture activity at the same time. This increases the load on your object detection hardware, requiring more processing power.
- **High Simultaneous Activity (12+ cameras or highly active zones)**: Large installations or scenarios where many cameras frequently capture activity (e.g., busy street with overview, identification, dedicated LPR cameras, etc.) will necessitate robust object detection capabilities. You'll likely need multiple entry-level AI accelerators or a more powerful single unit such as a discrete GPU.
- **Commercial Installations (40+ cameras)**: Commercial installations or scenarios where a substantial number of cameras capture activity (e.g., a commercial property, an active public space) will necessitate robust object detection capabilities. You'll likely need a modern discrete GPU.

### Video Decoding

Modern CPUs with integrated GPUs (Intel Quick Sync, AMD VCN) or dedicated GPUs can significantly offload video decoding from the main CPU, freeing up resources. This is highly recommended, especially for multiple cameras.

:::tip

For commercial installations it is important to verify the number of supported concurrent streams on your GPU; many consumer GPUs max out at ~20 concurrent camera streams.

:::

## Hardware Considerations

### Object Detection

There are many different hardware options for object detection depending on priorities and available hardware. See [the recommended hardware page](./hardware.md#detectors) for more specifics on what hardware is recommended for object detection.

### Storage

Storage is an important consideration when planning a new installation. To get a more precise estimate of your storage requirements, you can use an IP camera storage calculator. Websites like [IPConfigure Storage Calculator](https://calculator.ipconfigure.com/) can help you determine the necessary disk space based on your camera settings.

#### SSDs (Solid State Drives)

SSDs are an excellent choice for Frigate, offering high speed and responsiveness. The older concern that SSDs would quickly "wear out" from constant video recording is largely no longer valid for modern consumer and enterprise-grade SSDs.

- Longevity: Modern SSDs are designed with advanced wear-leveling algorithms and significantly higher "Terabytes Written" (TBW) ratings than earlier models. For typical home NVR use, a good quality SSD will likely outlast the useful life of your NVR hardware itself.
- Performance: SSDs excel at handling the numerous small write operations that occur during continuous video recording and can significantly improve the responsiveness of the Frigate UI and clip retrieval.
- Silence and Efficiency: SSDs produce no noise and consume less power than traditional HDDs.

#### HDDs (Hard Disk Drives)

Traditional Hard Disk Drives (HDDs) remain a great and often more cost-effective option for long-term video storage, especially for larger setups where raw capacity is prioritized.

- Cost-Effectiveness: HDDs offer the best cost per gigabyte, making them ideal for storing many days, weeks, or months of continuous footage.
- Capacity: HDDs are available in much larger capacities than most consumer SSDs, which is beneficial for extensive video archives.
- NVR-Rated Drives: If choosing an HDD, consider drives specifically designed for surveillance (NVR) use, such as Western Digital Purple or Seagate SkyHawk. These drives are engineered for 24/7 operation and continuous write workloads, offering improved reliability compared to standard desktop drives.

#### Determining Your Storage Needs

The amount of storage you need will depend on several factors:

- Number of Cameras: More cameras naturally require more space.
- Resolution and Framerate: Higher resolution (e.g., 4K) and higher framerate (e.g., 30fps) streams consume significantly more storage.
- Recording Method: Continuous recording uses the most space. Motion-only recording or object-triggered recording can save space, but may miss some footage.
- Retention Period: How many days, weeks, or months of footage do you want to keep?

#### Network Storage (NFS/SMB)

While supported, using network-attached storage (NAS) for recordings can introduce latency and network dependency considerations. For optimal performance and reliability, it is generally recommended to have local storage for your Frigate recordings. If using a NAS, ensure your network connection to it is robust and fast (Gigabit Ethernet at minimum) and that the NAS itself can handle the continuous write load.

### RAM (Memory)

- **Basic Minimum: 4GB RAM**: This is generally sufficient for a very basic Frigate setup with a few cameras and a dedicated object detection accelerator, without running any enrichments. Performance might be tight, especially with higher resolution streams or numerous detections.
- **Minimum for Enrichments: 8GB RAM**: If you plan to utilize Frigate's enrichment features (e.g., facial recognition, license plate recognition, or other AI models that run alongside standard object detection), 8GB of RAM should be considered the minimum. Enrichments require additional memory to load and process their respective models and data.
- **Recommended: 16GB RAM**: For most users, especially those with many cameras (8+) or who plan to heavily leverage enrichments, 16GB of RAM is highly recommended. This provides ample headroom for smooth operation, reduces the likelihood of swapping to disk (which can impact performance), and allows for future expansion.

@@ -5,7 +5,7 @@ title: Updating

 # Updating Frigate

-The current stable version of Frigate is **0.15.0**. The release notes and any breaking changes for this version can be found on the [Frigate GitHub releases page](https://github.com/blakeblackshear/frigate/releases/tag/v0.15.0).
+The current stable version of Frigate is **0.16.1**. The release notes and any breaking changes for this version can be found on the [Frigate GitHub releases page](https://github.com/blakeblackshear/frigate/releases/tag/v0.16.1).

 Keeping Frigate up to date ensures you benefit from the latest features, performance improvements, and bug fixes. The update process varies slightly depending on your installation method (Docker, Home Assistant Addon, etc.). Below are instructions for the most common setups.

@@ -33,21 +33,21 @@ If you’re running Frigate via Docker (recommended method), follow these steps:
 2. **Update and Pull the Latest Image**:

   - If using Docker Compose:
-    - Edit your `docker-compose.yml` file to specify the desired version tag (e.g., `0.15.0` instead of `0.14.1`). For example:
+    - Edit your `docker-compose.yml` file to specify the desired version tag (e.g., `0.16.1` instead of `0.15.2`). For example:
      ```yaml
      services:
        frigate:
-         image: ghcr.io/blakeblackshear/frigate:0.15.0
+         image: ghcr.io/blakeblackshear/frigate:0.16.1
      ```
    - Then pull the image:
      ```bash
-     docker pull ghcr.io/blakeblackshear/frigate:0.15.0
+     docker pull ghcr.io/blakeblackshear/frigate:0.16.1
      ```
    - **Note for `stable` Tag Users**: If your `docker-compose.yml` uses the `stable` tag (e.g., `ghcr.io/blakeblackshear/frigate:stable`), you don’t need to update the tag manually. The `stable` tag always points to the latest stable release after pulling.
  - If using `docker run`:
-   - Pull the image with the appropriate tag (e.g., `0.15.0`, `0.15.0-tensorrt`, or `stable`):
+   - Pull the image with the appropriate tag (e.g., `0.16.1`, `0.16.1-tensorrt`, or `stable`):
     ```bash
-    docker pull ghcr.io/blakeblackshear/frigate:0.15.0
+    docker pull ghcr.io/blakeblackshear/frigate:0.16.1
     ```

 3. **Start the Container**:

@@ -105,8 +105,8 @@ If an update causes issues:
 1. Stop Frigate.
 2. Restore your backed-up config file and database.
 3. Revert to the previous image version:
-   - For Docker: Specify an older tag (e.g., `ghcr.io/blakeblackshear/frigate:0.14.1`) in your `docker run` command.
-   - For Docker Compose: Edit your `docker-compose.yml`, specify the older version tag (e.g., `ghcr.io/blakeblackshear/frigate:0.14.1`), and re-run `docker compose up -d`.
+   - For Docker: Specify an older tag (e.g., `ghcr.io/blakeblackshear/frigate:0.15.2`) in your `docker run` command.
+   - For Docker Compose: Edit your `docker-compose.yml`, specify the older version tag (e.g., `ghcr.io/blakeblackshear/frigate:0.15.2`), and re-run `docker compose up -d`.
   - For Home Assistant: Reinstall the previous addon version manually via the repository if needed and restart the addon.
 4. Verify the old version is running again.

@@ -15,10 +15,10 @@ At a high level, there are five processing steps that could be applied to a came
 %%{init: {"themeVariables": {"edgeLabelBackground": "transparent"}}}%%

 flowchart LR
-    Feed(Feed\nacquisition) --> Decode(Video\ndecoding)
-    Decode --> Motion(Motion\ndetection)
-    Motion --> Object(Object\ndetection)
-    Feed --> Recording(Recording\nand\nvisualization)
+    Feed(Feed acquisition) --> Decode(Video decoding)
+    Decode --> Motion(Motion detection)
+    Motion --> Object(Object detection)
+    Feed --> Recording(Recording and visualization)
     Motion --> Recording
     Object --> Recording
 ```

@@ -114,7 +114,7 @@ section.

 ## Next steps

 1. If the stream you added to go2rtc is also used by Frigate for the `record` or `detect` role, you can migrate your config to pull from the RTSP restream to reduce the number of connections to your camera as shown [here](/configuration/restream#reduce-connections-to-camera).
-2. You may also prefer to [setup WebRTC](/configuration/live#webrtc-extra-configuration) for slightly lower latency than MSE. Note that WebRTC only supports h264 and specific audio formats and may require opening ports on your router.
+2. You can [set up WebRTC](/configuration/live#webrtc-extra-configuration) if your camera supports two-way talk. Note that WebRTC only supports specific audio formats and may require opening ports on your router.

 ## Important considerations

@@ -185,6 +185,26 @@ For clips to be castable to media devices, audio is required and may need to be

+<a name="api"></a>
+
+## Camera API
+
+To disable a camera dynamically
+
+```
+action: camera.turn_off
+data: {}
+target:
+  entity_id: camera.back_deck_cam # your Frigate camera entity ID
+```
+
+To enable a camera that has been disabled dynamically
+
+```
+action: camera.turn_on
+data: {}
+target:
+  entity_id: camera.back_deck_cam # your Frigate camera entity ID
+```
+
 ## Notification API

 Many people do not want to expose Frigate to the web, so the integration creates some public API endpoints that can be used for notifications.

@@ -34,6 +34,12 @@ Model IDs are not secret values and can be shared freely. Access to your model i

 :::

+:::tip
+
+When setting the plus model id, all other fields should be removed as these are configured automatically with the Frigate+ model config
+
+:::
+
 ## Step 4: Adjust your object filters for higher scores

 Frigate+ models generally have much higher scores than the default model provided in Frigate. You will likely need to increase your `threshold` and `min_score` values. Here is an example of how these values can be refined, but you should expect these to evolve as your model improves. For more information about how `threshold` and `min_score` are related, see the docs on [object filters](../configuration/object_filters.md#object-scores).

@@ -11,34 +11,51 @@ Information on how to integrate Frigate+ with Frigate can be found in the [integ

 ## Available model types

-There are two model types offered in Frigate+, `mobiledet` and `yolonas`. Both of these models are object detection models and are trained to detect the same set of labels [listed below](#available-label-types).
+There are three model types offered in Frigate+, `mobiledet`, `yolonas`, and `yolov9`. All of these models are object detection models and are trained to detect the same set of labels [listed below](#available-label-types).

 Not all model types are supported by all detectors, so it's important to choose a model type to match your detector as shown in the table under [supported detector types](#supported-detector-types). You can test model types for compatibility and speed on your hardware by using the base models.

-| Model Type  | Description                                                                                                                                  |
-| ----------- | -------------------------------------------------------------------------------------------------------------------------------------------- |
-| `mobiledet` | Based on the same architecture as the default model included with Frigate. Runs on Google Coral devices and CPUs.                             |
-| `yolonas`   | A newer architecture that offers slightly higher accuracy and improved detection of small objects. Runs on Intel, NVidia GPUs, and AMD GPUs.  |
+| Model Type  | Description                                                                                                                                                                                                                        |
+| ----------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| `mobiledet` | Based on the same architecture as the default model included with Frigate. Runs on Google Coral devices and CPUs.                                                                                                                   |
+| `yolonas`   | A newer architecture that offers slightly higher accuracy and improved detection of small objects. Runs on Intel, NVidia GPUs, and AMD GPUs.                                                                                        |
+| `yolov9`    | A leading SOTA (state of the art) object detection model with similar performance to yolonas, but on a wider range of hardware options. Runs on Intel, NVidia GPUs, AMD GPUs, Hailo, MemryX\*, Apple Silicon\*, and Rockchip NPUs.  |
+
+_\* Support coming in 0.17_
+
+### YOLOv9 Details
+
+YOLOv9 models are available in `s` and `t` sizes. When requesting a `yolov9` model, you will be prompted to choose a size. If you are unsure what size to choose, you should perform some tests with the base models to find the performance level that suits you. The `s` size is most similar to the current `yolonas` models in terms of inference times and accuracy, and a good place to start is the `320x320` resolution model for `yolov9s`.
+
+:::info
+
+When switching to YOLOv9, you may need to adjust your thresholds for some objects.
+
+:::
+
+#### Hailo Support
+
+If you have a Hailo device, you will need to specify the hardware you have when submitting a model request because they are not cross compatible. Please test using the available base models before submitting your model request.
+
+#### Rockchip (RKNN) Support
+
+For 0.16, YOLOv9 onnx models will need to be manually converted. First, you will need to configure Frigate to use the model id for your YOLOv9 onnx model so it downloads the model to your `model_cache` directory. From there, you can follow the [documentation](/configuration/object_detectors.md#converting-your-own-onnx-model-to-rknn-format) to convert it. Automatic conversion is coming in 0.17.

 ## Supported detector types

-Currently, Frigate+ models support CPU (`cpu`), Google Coral (`edgetpu`), OpenVino (`openvino`), and ONNX (`onnx`) detectors.
-
-:::warning
-
-Using Frigate+ models with `onnx` is only available with Frigate 0.15 and later.
-
-:::
+Currently, Frigate+ models support CPU (`cpu`), Google Coral (`edgetpu`), OpenVino (`openvino`), ONNX (`onnx`), Hailo (`hailo8l`), and Rockchip\* (`rknn`) detectors.

 | Hardware                                                                          | Recommended Detector Type | Recommended Model Type |
 | --------------------------------------------------------------------------------- | ------------------------- | ---------------------- |
 | [CPU](/configuration/object_detectors.md#cpu-detector-not-recommended)             | `cpu`                     | `mobiledet`            |
 | [Coral (all form factors)](/configuration/object_detectors.md#edge-tpu-detector)   | `edgetpu`                 | `mobiledet`            |
-| [Intel](/configuration/object_detectors.md#openvino-detector)                      | `openvino`                | `yolonas`              |
-| [NVidia GPU](/configuration/object_detectors#onnx)\*                               | `onnx`                    | `yolonas`              |
-| [AMD ROCm GPU](/configuration/object_detectors#amdrocm-gpu-detector)\*             | `rocm`                    | `yolonas`              |
+| [Intel](/configuration/object_detectors.md#openvino-detector)                      | `openvino`                | `yolov9`               |
+| [NVidia GPU](/configuration/object_detectors#onnx)                                 | `onnx`                    | `yolov9`               |
+| [AMD ROCm GPU](/configuration/object_detectors#amdrocm-gpu-detector)               | `onnx`                    | `yolov9`               |
+| [Hailo8/Hailo8L/Hailo8R](/configuration/object_detectors#hailo-8)                  | `hailo8l`                 | `yolov9`               |
+| [Rockchip NPU](/configuration/object_detectors#rockchip-platform)\*                | `rknn`                    | `yolov9`               |

-_\* Requires Frigate 0.15_
+_\* Requires manual conversion in 0.16. Automatic conversion coming in 0.17._

 ## Improving your model

@@ -7,6 +7,7 @@ const sidebars: SidebarsConfig = {
   Frigate: [
     'frigate/index',
     'frigate/hardware',
+    'frigate/planning_setup',
     'frigate/installation',
     'frigate/updating',
     'frigate/camera_setup',

docs/static/frigate-api.yaml (vendored, 8 changes)

@@ -1759,6 +1759,10 @@ paths:
       - name: include_thumbnails
         in: query
         required: false
+        description: >
+          Deprecated. Thumbnail data is no longer included in the response.
+          Use the /api/events/:event_id/thumbnail.:extension endpoint instead.
+        deprecated: true
         schema:
           anyOf:
             - type: integer

@@ -1973,6 +1977,10 @@ paths:
       - name: include_thumbnails
         in: query
         required: false
+        description: >
+          Deprecated. Thumbnail data is no longer included in the response.
+          Use the /api/events/:event_id/thumbnail.:extension endpoint instead.
+        deprecated: true
        schema:
           anyOf:
             - type: integer

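Given the deprecation above, a client that previously relied on `include_thumbnails` would now fetch thumbnails from the dedicated endpoint. A sketch using Python's `requests` (the base URL and event id are placeholders, and `jpg` is one of the extensions the endpoint accepts):

```python
import requests

FRIGATE_URL = "http://frigate.local:5000"  # placeholder base URL (internal API port)
event_id = "1725978000.123456-abc123"      # placeholder event id

# Thumbnail data is no longer embedded in /api/events responses,
# so request it from /api/events/:event_id/thumbnail.:extension instead.
resp = requests.get(f"{FRIGATE_URL}/api/events/{event_id}/thumbnail.jpg", timeout=10)
resp.raise_for_status()

with open("thumbnail.jpg", "wb") as f:
    f.write(resp.content)
```
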
@@ -20,7 +20,7 @@ from fastapi.encoders import jsonable_encoder
 from fastapi.params import Depends
 from fastapi.responses import JSONResponse, PlainTextResponse, StreamingResponse
 from markupsafe import escape
-from peewee import operator
+from peewee import SQL, operator
 from pydantic import ValidationError

 from frigate.api.auth import require_role

@@ -685,7 +685,14 @@ def plusModels(request: Request, filterByCurrentModelDetector: bool = False):
 @router.get("/recognized_license_plates")
 def get_recognized_license_plates(split_joined: Optional[int] = None):
     try:
-        events = Event.select(Event.data).distinct()
+        query = (
+            Event.select(
+                SQL("json_extract(data, '$.recognized_license_plate') AS plate")
+            )
+            .where(SQL("json_extract(data, '$.recognized_license_plate') IS NOT NULL"))
+            .distinct()
+        )
+        recognized_license_plates = [row[0] for row in query.tuples()]
     except Exception:
         return JSONResponse(
             content=(

@@ -694,14 +701,6 @@ def get_recognized_license_plates(split_joined: Optional[int] = None):
             status_code=404,
         )

-    recognized_license_plates = []
-    for e in events:
-        if e.data is not None and "recognized_license_plate" in e.data:
-            recognized_license_plates.append(e.data["recognized_license_plate"])
-
-    while None in recognized_license_plates:
-        recognized_license_plates.remove(None)
-
     if split_joined:
         original_recognized_license_plates = recognized_license_plates.copy()
         for recognized_license_plate in original_recognized_license_plates:

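The rewritten query pushes JSON extraction, NULL filtering, and de-duplication into SQLite instead of materializing every event row in Python. A self-contained sketch of the same `json_extract` pattern against an illustrative schema (requires an SQLite build with the JSON1 functions, which Python's bundled `sqlite3` normally includes):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE event (id TEXT, data TEXT)")  # illustrative schema
con.executemany(
    "INSERT INTO event VALUES (?, ?)",
    [
        ("a", '{"recognized_license_plate": "ABC123"}'),
        ("b", '{"score": 0.9}'),                          # no plate: filtered out
        ("c", '{"recognized_license_plate": "ABC123"}'),  # duplicate: collapsed
    ],
)

rows = con.execute(
    "SELECT DISTINCT json_extract(data, '$.recognized_license_plate') AS plate "
    "FROM event "
    "WHERE json_extract(data, '$.recognized_license_plate') IS NOT NULL"
).fetchall()
print([row[0] for row in rows])  # ['ABC123']
```
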
@@ -214,7 +214,7 @@ async def register_face(request: Request, name: str, file: UploadFile):
     )

     context: EmbeddingsContext = request.app.embeddings
-    result = context.register_face(name, await file.read())
+    result = None if context is None else context.register_face(name, await file.read())

     if not isinstance(result, dict):
         return JSONResponse(

@@ -1,6 +1,6 @@
 from typing import Optional

-from pydantic import BaseModel
+from pydantic import BaseModel, Field

 DEFAULT_TIME_RANGE = "00:00,24:00"

@@ -21,7 +21,14 @@ class EventsQueryParams(BaseModel):
     has_clip: Optional[int] = None
     has_snapshot: Optional[int] = None
     in_progress: Optional[int] = None
-    include_thumbnails: Optional[int] = 1
+    include_thumbnails: Optional[int] = Field(
+        1,
+        description=(
+            "Deprecated. Thumbnail data is no longer included in the response. "
+            "Use the /api/events/:event_id/thumbnail.:extension endpoint instead."
+        ),
+        deprecated=True,
+    )
     favorites: Optional[int] = None
     min_score: Optional[float] = None
     max_score: Optional[float] = None

@@ -40,7 +47,14 @@ class EventsSearchQueryParams(BaseModel):
     query: Optional[str] = None
     event_id: Optional[str] = None
     search_type: Optional[str] = "thumbnail"
-    include_thumbnails: Optional[int] = 1
+    include_thumbnails: Optional[int] = Field(
+        1,
+        description=(
+            "Deprecated. Thumbnail data is no longer included in the response. "
+            "Use the /api/events/:event_id/thumbnail.:extension endpoint instead."
+        ),
+        deprecated=True,
+    )
     limit: Optional[int] = 50
     cameras: Optional[str] = "all"
     labels: Optional[str] = "all"

@@ -10,6 +10,11 @@ class Extension(str, Enum):
     jpg = "jpg"
     jpeg = "jpeg"

+    def get_mime_type(self) -> str:
+        if self in (Extension.jpg, Extension.jpeg):
+            return "image/jpeg"
+        return f"image/{self.value}"
+

 class MediaLatestFrameQueryParams(BaseModel):
     bbox: Optional[int] = None

@@ -724,15 +724,24 @@ def events_search(request: Request, params: EventsSearchQueryParams = Depends())

     if (sort is None or sort == "relevance") and search_results:
         processed_events.sort(key=lambda x: x.get("search_distance", float("inf")))
-    elif min_score is not None and max_score is not None and sort == "score_asc":
+    elif sort == "score_asc":
         processed_events.sort(key=lambda x: x["data"]["score"])
-    elif min_score is not None and max_score is not None and sort == "score_desc":
+    elif sort == "score_desc":
         processed_events.sort(key=lambda x: x["data"]["score"], reverse=True)
-    elif min_speed is not None and max_speed is not None and sort == "speed_asc":
-        processed_events.sort(key=lambda x: x["data"]["average_estimated_speed"])
-    elif min_speed is not None and max_speed is not None and sort == "speed_desc":
-        processed_events.sort(
-            key=lambda x: x["data"]["average_estimated_speed"], reverse=True
-        )
+    elif sort == "speed_asc":
+        processed_events.sort(
+            key=lambda x: (
+                x["data"].get("average_estimated_speed") is None,
+                x["data"].get("average_estimated_speed"),
+            )
+        )
+    elif sort == "speed_desc":
+        processed_events.sort(
+            key=lambda x: (
+                x["data"].get("average_estimated_speed") is None,
+                x["data"].get("average_estimated_speed", float("-inf")),
+            ),
+            reverse=True,
+        )
     elif sort == "date_asc":
         processed_events.sort(key=lambda x: x["start_time"])

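The tuple sort key above is what makes the speed sorts safe when some events have no `average_estimated_speed`: the boolean `is None` flag is compared first, so events with a speed (flag `False`) always order before events without one (flag `True`), and Python never attempts an ordering comparison between `None` and a float. A standalone illustration of the pattern:

```python
events = [
    {"data": {"average_estimated_speed": 12.5}},
    {"data": {}},  # no speed recorded for this event
    {"data": {"average_estimated_speed": 3.2}},
]

# False sorts before True, so events with a real speed come first;
# entries without a speed compare equal on the flag and keep stable order.
events.sort(
    key=lambda x: (
        x["data"].get("average_estimated_speed") is None,
        x["data"].get("average_estimated_speed"),
    )
)
print([e["data"].get("average_estimated_speed") for e in events])  # [3.2, 12.5, None]
```
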
@@ -142,15 +142,13 @@ def latest_frame(
         "regions": params.regions,
     }
     quality = params.quality
-    mime_type = extension

-    if extension == "png":
+    if extension == Extension.png:
         quality_params = None
-    elif extension == "webp":
+    elif extension == Extension.webp:
         quality_params = [int(cv2.IMWRITE_WEBP_QUALITY), quality]
-    else:
+    else:  # jpg or jpeg
         quality_params = [int(cv2.IMWRITE_JPEG_QUALITY), quality]
-        mime_type = "jpeg"

     if camera_name in request.app.frigate_config.cameras:
         frame = frame_processor.get_current_frame(camera_name, draw_options)

@@ -193,18 +191,21 @@ def latest_frame(

         frame = cv2.resize(frame, dsize=(width, height), interpolation=cv2.INTER_AREA)

-        _, img = cv2.imencode(f".{extension}", frame, quality_params)
+        _, img = cv2.imencode(f".{extension.value}", frame, quality_params)
         return Response(
             content=img.tobytes(),
-            media_type=f"image/{mime_type}",
+            media_type=extension.get_mime_type(),
             headers={
-                "Content-Type": f"image/{mime_type}",
                 "Cache-Control": "no-store"
                 if not params.store
                 else "private, max-age=60",
             },
         )
-    elif camera_name == "birdseye" and request.app.frigate_config.birdseye.restream:
+    elif (
+        camera_name == "birdseye"
+        and request.app.frigate_config.birdseye.enabled
+        and request.app.frigate_config.birdseye.restream
+    ):
         frame = cv2.cvtColor(
             frame_processor.get_current_frame(camera_name),
             cv2.COLOR_YUV2BGR_I420,

@@ -215,12 +216,11 @@ def latest_frame(

         frame = cv2.resize(frame, dsize=(width, height), interpolation=cv2.INTER_AREA)

-        _, img = cv2.imencode(f".{extension}", frame, quality_params)
+        _, img = cv2.imencode(f".{extension.value}", frame, quality_params)
         return Response(
             content=img.tobytes(),
-            media_type=f"image/{mime_type}",
+            media_type=extension.get_mime_type(),
             headers={
-                "Content-Type": f"image/{mime_type}",
                 "Cache-Control": "no-store"
                 if not params.store
                 else "private, max-age=60",

@@ -749,7 +749,10 @@ def vod_hour(year_month: str, day: int, hour: int, camera_name: str, tz_name: st
     "/vod/event/{event_id}",
     description="Returns an HLS playlist for the specified object. Append /master.m3u8 or /index.m3u8 for HLS playback.",
 )
-def vod_event(event_id: str):
+def vod_event(
+    event_id: str,
+    padding: int = Query(0, description="Padding to apply to the vod."),
+):
     try:
         event: Event = Event.get(Event.id == event_id)
     except DoesNotExist:

@@ -772,32 +775,23 @@ def vod_event(event_id: str):
             status_code=404,
         )

-    clip_path = os.path.join(CLIPS_DIR, f"{event.camera}-{event.id}.mp4")
-
-    if not os.path.isfile(clip_path):
-        end_ts = (
-            datetime.now().timestamp() if event.end_time is None else event.end_time
-        )
-        vod_response = vod_ts(event.camera, event.start_time, end_ts)
-        # If the recordings are not found and the event started more than 5 minutes ago, set has_clip to false
-        if (
-            event.start_time < datetime.now().timestamp() - 300
-            and type(vod_response) is tuple
-            and len(vod_response) == 2
-            and vod_response[1] == 404
-        ):
-            Event.update(has_clip=False).where(Event.id == event_id).execute()
-        return vod_response
-
-    duration = int((event.end_time - event.start_time) * 1000)
-    return JSONResponse(
-        content={
-            "cache": True,
-            "discontinuity": False,
-            "durations": [duration],
-            "sequences": [{"clips": [{"type": "source", "path": clip_path}]}],
-        }
-    )
+    end_ts = (
+        datetime.now().timestamp()
+        if event.end_time is None
+        else (event.end_time + padding)
+    )
+    vod_response = vod_ts(event.camera, event.start_time - padding, end_ts)
+
+    # If the recordings are not found and the event started more than 5 minutes ago, set has_clip to false
+    if (
+        event.start_time < datetime.now().timestamp() - 300
+        and type(vod_response) is tuple
+        and len(vod_response) == 2
+        and vod_response[1] == 404
+    ):
+        Event.update(has_clip=False).where(Event.id == event_id).execute()
+
+    return vod_response

 @router.get(

@ -878,7 +872,7 @@ def event_snapshot(
|
||||
def event_thumbnail(
|
||||
request: Request,
|
||||
event_id: str,
|
||||
extension: str,
|
||||
extension: Extension,
|
||||
max_cache_age: int = Query(
|
||||
2592000, description="Max cache age in seconds. Default 30 days in seconds."
|
||||
),
|
||||
@ -903,7 +897,7 @@ def event_thumbnail(
|
||||
if event_id in camera_state.tracked_objects:
|
||||
tracked_obj = camera_state.tracked_objects.get(event_id)
|
||||
if tracked_obj is not None:
|
||||
thumbnail_bytes = tracked_obj.get_thumbnail(extension)
|
||||
thumbnail_bytes = tracked_obj.get_thumbnail(extension.value)
|
||||
except Exception:
|
||||
return JSONResponse(
|
||||
content={"success": False, "message": "Event not found"},
|
||||
@ -931,23 +925,21 @@ def event_thumbnail(
|
||||
)
|
||||
|
||||
quality_params = None
|
||||
|
||||
if extension == "jpg" or extension == "jpeg":
|
||||
if extension in (Extension.jpg, Extension.jpeg):
|
||||
quality_params = [int(cv2.IMWRITE_JPEG_QUALITY), 70]
|
||||
elif extension == "webp":
|
||||
elif extension == Extension.webp:
|
||||
quality_params = [int(cv2.IMWRITE_WEBP_QUALITY), 60]
|
||||
|
||||
_, img = cv2.imencode(f".{extension}", thumbnail, quality_params)
|
||||
_, img = cv2.imencode(f".{extension.value}", thumbnail, quality_params)
|
||||
thumbnail_bytes = img.tobytes()
|
||||
|
||||
return Response(
|
||||
thumbnail_bytes,
|
||||
media_type=f"image/{extension}",
|
||||
media_type=extension.get_mime_type(),
|
||||
headers={
|
||||
"Cache-Control": f"private, max-age={max_cache_age}"
|
||||
if event_complete
|
||||
else "no-store",
|
||||
"Content-Type": f"image/{extension}",
|
||||
},
|
||||
)
|
||||
|
||||
@ -1158,7 +1150,11 @@ def event_snapshot_clean(request: Request, event_id: str, download: bool = False
|
||||
|
||||
|
||||
@router.get("/events/{event_id}/clip.mp4")
|
||||
def event_clip(request: Request, event_id: str):
|
||||
def event_clip(
|
||||
request: Request,
|
||||
event_id: str,
|
||||
padding: int = Query(0, description="Padding to apply to clip."),
|
||||
):
|
||||
try:
|
||||
event: Event = Event.get(Event.id == event_id)
|
||||
except DoesNotExist:
|
||||
@@ -1171,8 +1167,12 @@ def event_clip(request: Request, event_id: str):
            content={"success": False, "message": "Clip not available"}, status_code=404
        )

    end_ts = datetime.now().timestamp() if event.end_time is None else event.end_time
    return recording_clip(request, event.camera, event.start_time, end_ts)
    end_ts = (
        datetime.now().timestamp()
        if event.end_time is None
        else event.end_time + padding
    )
    return recording_clip(request, event.camera, event.start_time - padding, end_ts)


@router.get("/events/{event_id}/preview.gif")
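Reviewer note: both clip endpoints now widen the requested window symmetrically by `padding` seconds. A small standalone sketch of the bounds logic, assuming `end_time` is `None` while an event is still in progress:

from datetime import datetime
from typing import Optional


def padded_clip_bounds(
    start_time: float, end_time: Optional[float], padding: int
) -> tuple[float, float]:
    """Apply padding on both sides; fall back to 'now' for in-progress events."""
    end_ts = datetime.now().timestamp() if end_time is None else end_time + padding
    return (start_time - padding, end_ts)


# 5 seconds of extra context on each side of a finished event
assert padded_clip_bounds(100.0, 200.0, 5) == (95.0, 205.0)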
@@ -1598,7 +1598,7 @@ def label_thumbnail(request: Request, camera_name: str, label: str):
    try:
        event_id = event_query.scalar()

        return event_thumbnail(request, event_id, 60)
        return event_thumbnail(request, event_id, Extension.jpg, 60)
    except DoesNotExist:
        frame = np.zeros((175, 175, 3), np.uint8)
        ret, jpg = cv2.imencode(".jpg", frame, [int(cv2.IMWRITE_JPEG_QUALITY), 70])
@@ -250,6 +250,7 @@ class FrigateApp:
            and not genai_cameras
            and not self.config.lpr.enabled
            and not self.config.face_recognition.enabled
            and not self.config.classification.bird.enabled
        ):
            return
@@ -61,6 +61,7 @@ class FfmpegConfig(FrigateBaseModel):
    retry_interval: float = Field(
        default=10.0,
        title="Time in seconds to wait before FFmpeg retries connecting to the camera.",
        gt=0.0,
    )
    apple_compatibility: bool = Field(
        default=False,
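Reviewer note: `gt=0.0` turns a zero or negative retry interval into a config validation error instead of a silent misconfiguration. A minimal standalone sketch of the same pydantic pattern:

from pydantic import BaseModel, Field, ValidationError


class FfmpegRetryDemo(BaseModel):
    # mirrors the constraint added above: the interval must be strictly positive
    retry_interval: float = Field(default=10.0, gt=0.0)


try:
    FfmpegRetryDemo(retry_interval=0)
except ValidationError as e:
    print(e)  # reports that retry_interval must be greater than 0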
@@ -158,6 +158,13 @@ class ModelConfig(BaseModel):
        self.input_pixel_format = model_info["pixelFormat"]
        self.model_type = model_info["type"]

        if model_info.get("inputDataType"):
            self.input_dtype = model_info["inputDataType"]

        # RKNN always uses NHWC
        if detector == "rknn":
            self.input_tensor = InputTensorEnum.nhwc

        # generate list of attribute labels
        self.attributes_map = {
            **model_info.get("attributes", DEFAULT_ATTRIBUTE_LABEL_MAP),
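Reviewer note: the RKNN special case pins the input layout to NHWC regardless of what the model metadata says. For reference, moving a batch between the two layouts is a single transpose; a quick numpy sketch:

import numpy as np

# NCHW: (batch, channels, height, width) -> NHWC: (batch, height, width, channels)
nchw = np.zeros((1, 3, 320, 320), dtype=np.uint8)
nhwc = np.transpose(nchw, (0, 2, 3, 1))
assert nhwc.shape == (1, 320, 320, 3)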
@@ -40,10 +40,15 @@ class GenAIClient:
        event: Event,
    ) -> Optional[str]:
        """Generate a description for the frame."""
        prompt = camera_config.genai.object_prompts.get(
            event.label,
            camera_config.genai.prompt,
        ).format(**model_to_dict(event))
        try:
            prompt = camera_config.genai.object_prompts.get(
                event.label,
                camera_config.genai.prompt,
            ).format(**model_to_dict(event))
        except KeyError as e:
            logger.error(f"Invalid key in GenAI prompt: {e}")
            return None

        logger.debug(f"Sending images to genai provider with prompt: {prompt}")
        return self._send(prompt, thumbnails)
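Reviewer note: the try/except matters because `str.format` raises `KeyError` for any placeholder that is not a field of the event dict, so one typo in a user-supplied prompt template would otherwise crash description generation. A quick demonstration of the failure mode and the guard (field names here are illustrative):

import logging

logger = logging.getLogger(__name__)

event_fields = {"label": "person", "camera": "driveway"}


def render_prompt(template: str) -> str | None:
    try:
        return template.format(**event_fields)
    except KeyError as e:
        logger.error(f"Invalid key in GenAI prompt: {e}")
        return None


assert render_prompt("Describe the {label} on {camera}") is not None
assert render_prompt("Describe the {labell}") is None  # typo -> logged, not raised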
@@ -369,12 +369,13 @@ class PtzAutoTracker:
        logger.info(f"Camera calibration for {camera} in progress")

        # zoom levels test
        self.zoom_time[camera] = 0

        if (
            self.config.cameras[camera].onvif.autotracking.zooming
            != ZoomingModeEnum.disabled
        ):
            logger.info(f"Calibration for {camera} in progress: 0% complete")
            self.zoom_time[camera] = 0

            for i in range(2):
                # absolute move to 0 - fully zoomed out
@@ -1329,7 +1330,11 @@ class PtzAutoTracker:

        if camera_config.onvif.autotracking.enabled:
            if not self.autotracker_init[camera]:
                self._autotracker_setup(camera_config, camera)
                future = asyncio.run_coroutine_threadsafe(
                    self._autotracker_setup(camera_config, camera), self.onvif.loop
                )
                # Wait for the coroutine to complete
                future.result()

        if self.calibrating[camera]:
            logger.debug(f"{camera}: Calibrating camera")
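Reviewer note: `asyncio.run_coroutine_threadsafe` is the standard way to call into an event loop that lives on another thread (here, the ONVIF controller's dedicated loop) and block until the coroutine finishes. A self-contained sketch of the pattern; the coroutine body is a stand-in:

import asyncio
import threading


async def autotracker_setup() -> str:
    await asyncio.sleep(0.1)  # stand-in for async ONVIF calls
    return "ready"


loop = asyncio.new_event_loop()
threading.Thread(target=loop.run_forever, daemon=True).start()

# schedule the coroutine on the other thread's loop and wait for the result
future = asyncio.run_coroutine_threadsafe(autotracker_setup(), loop)
print(future.result())  # "ready"
loop.call_soon_threadsafe(loop.stop)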
@@ -1476,7 +1481,8 @@ class PtzAutoTracker:
        self.tracked_object[camera] = None
        self.tracked_object_history[camera].clear()

        self.ptz_metrics[camera].motor_stopped.wait()
        while not self.ptz_metrics[camera].motor_stopped.is_set():
            await self.onvif.get_camera_status(camera)
        logger.debug(
            f"{camera}: Time is {self.ptz_metrics[camera].frame_time.value}, returning to preset: {autotracker_config.return_preset}"
        )
@@ -1486,7 +1492,7 @@ class PtzAutoTracker:
        )

        # update stored zoom level from preset
        if not self.ptz_metrics[camera].motor_stopped.is_set():
        while not self.ptz_metrics[camera].motor_stopped.is_set():
            await self.onvif.get_camera_status(camera)

        self.ptz_metrics[camera].tracking_active.clear()
@@ -48,6 +48,8 @@ class OnvifController:
        self.config = config
        self.ptz_metrics = ptz_metrics

        self.status_locks: dict[str, asyncio.Lock] = {}

        # Create a dedicated event loop and run it in a separate thread
        self.loop = asyncio.new_event_loop()
        self.loop_thread = threading.Thread(target=self._run_event_loop, daemon=True)
@@ -59,6 +61,7 @@ class OnvifController:
                continue
            if cam.onvif.host:
                self.camera_configs[cam_name] = cam
                self.status_locks[cam_name] = asyncio.Lock()

        asyncio.run_coroutine_threadsafe(self._init_cameras(), self.loop)
@@ -764,105 +767,110 @@ class OnvifController:
        return False

    async def get_camera_status(self, camera_name: str) -> None:
        if camera_name not in self.cams.keys():
            logger.error(f"ONVIF is not configured for {camera_name}")
            return

        if not self.cams[camera_name]["init"]:
            if not await self._init_onvif(camera_name):
        async with self.status_locks[camera_name]:
            if camera_name not in self.cams.keys():
                logger.error(f"ONVIF is not configured for {camera_name}")
                return

        status_request = self.cams[camera_name]["status_request"]
        try:
            status = await self.cams[camera_name]["ptz"].GetStatus(status_request)
        except Exception:
            pass  # We're unsupported, that'll be reported in the next check.
            if not self.cams[camera_name]["init"]:
                if not await self._init_onvif(camera_name):
                    return

        try:
            pan_tilt_status = getattr(status.MoveStatus, "PanTilt", None)
            zoom_status = getattr(status.MoveStatus, "Zoom", None)
            status_request = self.cams[camera_name]["status_request"]
            try:
                status = await self.cams[camera_name]["ptz"].GetStatus(status_request)
            except Exception:
                pass  # We're unsupported, that'll be reported in the next check.

            # if it's not an attribute, see if MoveStatus even exists in the status result
            if pan_tilt_status is None:
                pan_tilt_status = getattr(status, "MoveStatus", None)
            try:
                pan_tilt_status = getattr(status.MoveStatus, "PanTilt", None)
                zoom_status = getattr(status.MoveStatus, "Zoom", None)

            # we're unsupported
            if pan_tilt_status is None or pan_tilt_status not in [
                "IDLE",
                "MOVING",
            ]:
                raise Exception
        except Exception:
            logger.warning(
                f"Camera {camera_name} does not support the ONVIF GetStatus method. Autotracking will not function correctly and must be disabled in your config."
                # if it's not an attribute, see if MoveStatus even exists in the status result
                if pan_tilt_status is None:
                    pan_tilt_status = getattr(status, "MoveStatus", None)

                # we're unsupported
                if pan_tilt_status is None or pan_tilt_status not in [
                    "IDLE",
                    "MOVING",
                ]:
                    raise Exception
            except Exception:
                logger.warning(
                    f"Camera {camera_name} does not support the ONVIF GetStatus method. Autotracking will not function correctly and must be disabled in your config."
                )
            return

        logger.debug(
            f"{camera_name}: Pan/tilt status: {pan_tilt_status}, Zoom status: {zoom_status}"
        )
                return

            logger.debug(
                f"{camera_name}: Pan/tilt status: {pan_tilt_status}, Zoom status: {zoom_status}"
            )
            if pan_tilt_status == "IDLE" and (
                zoom_status is None or zoom_status == "IDLE"
            ):
                self.cams[camera_name]["active"] = False
                if not self.ptz_metrics[camera_name].motor_stopped.is_set():
                    self.ptz_metrics[camera_name].motor_stopped.set()

        if pan_tilt_status == "IDLE" and (zoom_status is None or zoom_status == "IDLE"):
            self.cams[camera_name]["active"] = False
            if not self.ptz_metrics[camera_name].motor_stopped.is_set():
                self.ptz_metrics[camera_name].motor_stopped.set()
                    logger.debug(
                        f"{camera_name}: PTZ stop time: {self.ptz_metrics[camera_name].frame_time.value}"
                    )

                    self.ptz_metrics[camera_name].stop_time.value = self.ptz_metrics[
                        camera_name
                    ].frame_time.value
            else:
                self.cams[camera_name]["active"] = True
                if self.ptz_metrics[camera_name].motor_stopped.is_set():
                    self.ptz_metrics[camera_name].motor_stopped.clear()

                    logger.debug(
                        f"{camera_name}: PTZ start time: {self.ptz_metrics[camera_name].frame_time.value}"
                    )

                    self.ptz_metrics[camera_name].start_time.value = self.ptz_metrics[
                        camera_name
                    ].frame_time.value
                    self.ptz_metrics[camera_name].stop_time.value = 0

            if (
                self.config.cameras[camera_name].onvif.autotracking.zooming
                != ZoomingModeEnum.disabled
            ):
                # store absolute zoom level as 0 to 1 interpolated from the values of the camera
                self.ptz_metrics[camera_name].zoom_level.value = numpy.interp(
                    round(status.Position.Zoom.x, 2),
                    [
                        self.cams[camera_name]["absolute_zoom_range"]["XRange"]["Min"],
                        self.cams[camera_name]["absolute_zoom_range"]["XRange"]["Max"],
                    ],
                    [0, 1],
                )
                logger.debug(
                    f"{camera_name}: PTZ stop time: {self.ptz_metrics[camera_name].frame_time.value}"
                    f"{camera_name}: Camera zoom level: {self.ptz_metrics[camera_name].zoom_level.value}"
                )

            # some hikvision cams won't update MoveStatus, so warn if it hasn't changed
            if (
                not self.ptz_metrics[camera_name].motor_stopped.is_set()
                and not self.ptz_metrics[camera_name].reset.is_set()
                and self.ptz_metrics[camera_name].start_time.value != 0
                and self.ptz_metrics[camera_name].frame_time.value
                > (self.ptz_metrics[camera_name].start_time.value + 10)
                and self.ptz_metrics[camera_name].stop_time.value == 0
            ):
                logger.debug(
                    f"Start time: {self.ptz_metrics[camera_name].start_time.value}, Stop time: {self.ptz_metrics[camera_name].stop_time.value}, Frame time: {self.ptz_metrics[camera_name].frame_time.value}"
                )
                # set the stop time so we don't come back into this again and spam the logs
                self.ptz_metrics[camera_name].stop_time.value = self.ptz_metrics[
                    camera_name
                ].frame_time.value
        else:
            self.cams[camera_name]["active"] = True
            if self.ptz_metrics[camera_name].motor_stopped.is_set():
                self.ptz_metrics[camera_name].motor_stopped.clear()

            logger.debug(
                f"{camera_name}: PTZ start time: {self.ptz_metrics[camera_name].frame_time.value}"
                logger.warning(
                    f"Camera {camera_name} is still in ONVIF 'MOVING' status."
                )

            self.ptz_metrics[camera_name].start_time.value = self.ptz_metrics[
                camera_name
            ].frame_time.value
            self.ptz_metrics[camera_name].stop_time.value = 0

            if (
                self.config.cameras[camera_name].onvif.autotracking.zooming
                != ZoomingModeEnum.disabled
            ):
                # store absolute zoom level as 0 to 1 interpolated from the values of the camera
                self.ptz_metrics[camera_name].zoom_level.value = numpy.interp(
                    round(status.Position.Zoom.x, 2),
                    [
                        self.cams[camera_name]["absolute_zoom_range"]["XRange"]["Min"],
                        self.cams[camera_name]["absolute_zoom_range"]["XRange"]["Max"],
                    ],
                    [0, 1],
                )
                logger.debug(
                    f"{camera_name}: Camera zoom level: {self.ptz_metrics[camera_name].zoom_level.value}"
                )

            # some hikvision cams won't update MoveStatus, so warn if it hasn't changed
            if (
                not self.ptz_metrics[camera_name].motor_stopped.is_set()
                and not self.ptz_metrics[camera_name].reset.is_set()
                and self.ptz_metrics[camera_name].start_time.value != 0
                and self.ptz_metrics[camera_name].frame_time.value
                > (self.ptz_metrics[camera_name].start_time.value + 10)
                and self.ptz_metrics[camera_name].stop_time.value == 0
            ):
                logger.debug(
                    f"Start time: {self.ptz_metrics[camera_name].start_time.value}, Stop time: {self.ptz_metrics[camera_name].stop_time.value}, Frame time: {self.ptz_metrics[camera_name].frame_time.value}"
                )
                # set the stop time so we don't come back into this again and spam the logs
                self.ptz_metrics[camera_name].stop_time.value = self.ptz_metrics[
                    camera_name
                ].frame_time.value
            logger.warning(f"Camera {camera_name} is still in ONVIF 'MOVING' status.")

    def close(self) -> None:
        """Gracefully shut down the ONVIF controller."""
        if not hasattr(self, "loop") or self.loop.is_closed():
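Reviewer note: the whole body of `get_camera_status` now runs under a per-camera `asyncio.Lock` (created per camera in `__init__`), so the `while not motor_stopped` polling loops added earlier cannot issue overlapping `GetStatus` requests for the same camera. A stripped-down sketch of that pattern; the class and the sleep are stand-ins:

import asyncio


class StatusPoller:
    def __init__(self, cameras: list[str]) -> None:
        # one lock per camera, created up front like status_locks above
        self.locks = {cam: asyncio.Lock() for cam in cameras}

    async def get_camera_status(self, camera: str) -> None:
        async with self.locks[camera]:
            # only one in-flight status request per camera;
            # concurrent callers queue here instead of stacking requests
            await asyncio.sleep(0.05)  # stand-in for the ONVIF GetStatus call


async def main() -> None:
    poller = StatusPoller(["front"])
    # two concurrent polls of the same camera are serialized by the lock
    await asyncio.gather(
        poller.get_camera_status("front"),
        poller.get_camera_status("front"),
    )


asyncio.run(main())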
@@ -66,7 +66,7 @@ def sync_recordings(limited: bool) -> None:

    if float(len(recordings_to_delete)) / max(1, recordings.count()) > 0.5:
        logger.warning(
            f"Deleting {(float(len(recordings_to_delete)) / recordings.count()):2f}% of recordings DB entries, could be due to configuration error. Aborting..."
            f"Deleting {(len(recordings_to_delete) / max(1, recordings.count()) * 100):.2f}% of recordings DB entries, could be due to configuration error. Aborting..."
        )
        return False
@@ -106,7 +106,7 @@ def sync_recordings(limited: bool) -> None:

    if float(len(files_to_delete)) / max(1, len(files_on_disk)) > 0.5:
        logger.debug(
            f"Deleting {(float(len(files_to_delete)) / len(files_on_disk)):2f}% of recordings DB entries, could be due to configuration error. Aborting..."
            f"Deleting {(len(files_to_delete) / max(1, len(files_on_disk)) * 100):.2f}% of recordings DB entries, could be due to configuration error. Aborting..."
        )
        return False
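Reviewer note: the old log lines had three problems at once: `:2f` is a width-2 format spec rather than `:.2f` with two decimal places, the value was a 0-to-1 fraction being labeled as a percent, and the divisor could be zero. The format-spec half of that, demonstrated:

deleted, total = 3, 4

ratio = deleted / max(1, total)
print(f"{ratio:2f}%")           # '0.750000%' -> wrong: '2f' is just a field width of 2
print(f"{(ratio * 100):.2f}%")  # '75.00%'    -> fixed spec, scaled to a true percent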
@@ -301,7 +301,7 @@ def get_intel_gpu_stats(intel_gpu_device: Optional[str]) -> Optional[dict[str, str]]:
        "-o",
        "-",
        "-s",
        "1",
        "1000",  # Intel changed this from seconds to milliseconds in 2024+ versions
    ]

    if intel_gpu_device:
@@ -131,10 +131,7 @@ export default function SearchFilterGroup({
  );

  const availableSortTypes = useMemo(() => {
    const sortTypes = ["date_asc", "date_desc"];
    if (filter?.min_score || filter?.max_score) {
      sortTypes.push("score_desc", "score_asc");
    }
    const sortTypes = ["date_asc", "date_desc", "score_desc", "score_asc"];
    if (filter?.min_speed || filter?.max_speed) {
      sortTypes.push("speed_desc", "speed_asc");
    }
@@ -332,7 +332,9 @@ export default function GeneralSettings({ className }: GeneralSettingsProps) {
  <Portal>
    <SubItemContent
      className={
        isDesktop ? "" : "w-[92%] rounded-lg md:rounded-2xl"
        isDesktop
          ? ""
          : "scrollbar-container max-h-[75dvh] w-[92%] overflow-y-scroll rounded-lg md:rounded-2xl"
      }
    >
      <span tabIndex={0} className="sr-only" />
@@ -433,137 +433,139 @@ function CustomTimeSelector({
  className={`mt-3 flex items-center rounded-lg bg-secondary text-secondary-foreground ${isDesktop ? "mx-8 gap-2 px-2" : "pl-2"}`}
>
  <FaCalendarAlt />
  <Popover
    open={startOpen}
    onOpenChange={(open) => {
      if (!open) {
        setStartOpen(false);
      }
    }}
  >
    <PopoverTrigger asChild>
      <Button
        className={`text-primary ${isDesktop ? "" : "text-xs"}`}
        aria-label={t("export.time.start.title")}
        variant={startOpen ? "select" : "default"}
        size="sm"
        onClick={() => {
          setStartOpen(true);
          setEndOpen(false);
        }}
      >
        {formattedStart}
      </Button>
    </PopoverTrigger>
    <PopoverContent className="flex flex-col items-center">
      <TimezoneAwareCalendar
        timezone={config?.ui.timezone}
        selectedDay={new Date(startTime * 1000)}
        onSelect={(day) => {
          if (!day) {
            return;
          }

          setRange({
            before: endTime,
            after: day.getTime() / 1000 + 1,
          });
        }}
      />
      <SelectSeparator className="bg-secondary" />
      <input
        className="text-md mx-4 w-full border border-input bg-background p-1 text-secondary-foreground hover:bg-accent hover:text-accent-foreground dark:[color-scheme:dark]"
        id="startTime"
        type="time"
        value={startClock}
        step={isIOS ? "60" : "1"}
        onChange={(e) => {
          const clock = e.target.value;
          const [hour, minute, second] = isIOS
            ? [...clock.split(":"), "00"]
            : clock.split(":");

          const start = new Date(startTime * 1000);
          start.setHours(
            parseInt(hour),
            parseInt(minute),
            parseInt(second ?? 0),
            0,
          );
          setRange({
            before: endTime,
            after: start.getTime() / 1000,
          });
        }}
      />
    </PopoverContent>
  </Popover>
  <FaArrowRight className="size-4 text-primary" />
  <Popover
    open={endOpen}
    onOpenChange={(open) => {
      if (!open) {
        setEndOpen(false);
      }
    }}
  >
    <PopoverTrigger asChild>
      <Button
        className={`text-primary ${isDesktop ? "" : "text-xs"}`}
        aria-label={t("export.time.end.title")}
        variant={endOpen ? "select" : "default"}
        size="sm"
        onClick={() => {
          setEndOpen(true);
  <div className="flex flex-wrap items-center">
    <Popover
      open={startOpen}
      onOpenChange={(open) => {
        if (!open) {
          setStartOpen(false);
        }}
      >
        {formattedEnd}
      </Button>
    </PopoverTrigger>
    <PopoverContent className="flex flex-col items-center">
      <TimezoneAwareCalendar
        timezone={config?.ui.timezone}
        selectedDay={new Date(endTime * 1000)}
        onSelect={(day) => {
          if (!day) {
            return;
          }
        }
      }}
    >
      <PopoverTrigger asChild>
        <Button
          className={`text-primary ${isDesktop ? "" : "text-xs"}`}
          aria-label={t("export.time.start.title")}
          variant={startOpen ? "select" : "default"}
          size="sm"
          onClick={() => {
            setStartOpen(true);
            setEndOpen(false);
          }}
        >
          {formattedStart}
        </Button>
      </PopoverTrigger>
      <PopoverContent className="flex flex-col items-center">
        <TimezoneAwareCalendar
          timezone={config?.ui.timezone}
          selectedDay={new Date(startTime * 1000)}
          onSelect={(day) => {
            if (!day) {
              return;
            }

            setRange({
              after: startTime,
              before: day.getTime() / 1000,
            });
          }}
        />
        <SelectSeparator className="bg-secondary" />
        <input
          className="text-md mx-4 w-full border border-input bg-background p-1 text-secondary-foreground hover:bg-accent hover:text-accent-foreground dark:[color-scheme:dark]"
          id="startTime"
          type="time"
          value={endClock}
          step={isIOS ? "60" : "1"}
          onChange={(e) => {
            const clock = e.target.value;
            const [hour, minute, second] = isIOS
              ? [...clock.split(":"), "00"]
              : clock.split(":");
            setRange({
              before: endTime,
              after: day.getTime() / 1000 + 1,
            });
          }}
        />
        <SelectSeparator className="bg-secondary" />
        <input
          className="text-md mx-4 w-full border border-input bg-background p-1 text-secondary-foreground hover:bg-accent hover:text-accent-foreground dark:[color-scheme:dark]"
          id="startTime"
          type="time"
          value={startClock}
          step={isIOS ? "60" : "1"}
          onChange={(e) => {
            const clock = e.target.value;
            const [hour, minute, second] = isIOS
              ? [...clock.split(":"), "00"]
              : clock.split(":");

            const end = new Date(endTime * 1000);
            end.setHours(
              parseInt(hour),
              parseInt(minute),
              parseInt(second ?? 0),
              0,
            );
            setRange({
              before: end.getTime() / 1000,
              after: startTime,
            });
          }}
        />
      </PopoverContent>
    </Popover>
            const start = new Date(startTime * 1000);
            start.setHours(
              parseInt(hour),
              parseInt(minute),
              parseInt(second ?? 0),
              0,
            );
            setRange({
              before: endTime,
              after: start.getTime() / 1000,
            });
          }}
        />
      </PopoverContent>
    </Popover>
    <FaArrowRight className="size-4 text-primary" />
    <Popover
      open={endOpen}
      onOpenChange={(open) => {
        if (!open) {
          setEndOpen(false);
        }
      }}
    >
      <PopoverTrigger asChild>
        <Button
          className={`text-primary ${isDesktop ? "" : "text-xs"}`}
          aria-label={t("export.time.end.title")}
          variant={endOpen ? "select" : "default"}
          size="sm"
          onClick={() => {
            setEndOpen(true);
            setStartOpen(false);
          }}
        >
          {formattedEnd}
        </Button>
      </PopoverTrigger>
      <PopoverContent className="flex flex-col items-center">
        <TimezoneAwareCalendar
          timezone={config?.ui.timezone}
          selectedDay={new Date(endTime * 1000)}
          onSelect={(day) => {
            if (!day) {
              return;
            }

            setRange({
              after: startTime,
              before: day.getTime() / 1000,
            });
          }}
        />
        <SelectSeparator className="bg-secondary" />
        <input
          className="text-md mx-4 w-full border border-input bg-background p-1 text-secondary-foreground hover:bg-accent hover:text-accent-foreground dark:[color-scheme:dark]"
          id="startTime"
          type="time"
          value={endClock}
          step={isIOS ? "60" : "1"}
          onChange={(e) => {
            const clock = e.target.value;
            const [hour, minute, second] = isIOS
              ? [...clock.split(":"), "00"]
              : clock.split(":");

            const end = new Date(endTime * 1000);
            end.setHours(
              parseInt(hour),
              parseInt(minute),
              parseInt(second ?? 0),
              0,
            );
            setRange({
              before: end.getTime() / 1000,
              after: startTime,
            });
          }}
        />
      </PopoverContent>
    </Popover>
  </div>
</div>
);
}
@@ -42,6 +42,7 @@ import {
  CommandList,
} from "@/components/ui/command";
import { LuCheck } from "react-icons/lu";
import ActivityIndicator from "@/components/indicators/activity-indicator";

type SearchFilterDialogProps = {
  config?: FrigateConfig;
@@ -64,6 +65,9 @@ export default function SearchFilterDialog({
  const { t } = useTranslation(["components/filter"]);
  const [currentFilter, setCurrentFilter] = useState(filter ?? {});
  const { data: allSubLabels } = useSWR(["sub_labels", { split_joined: 1 }]);
  const { data: allRecognizedLicensePlates } = useSWR<string[]>(
    "recognized_license_plates",
  );

  useEffect(() => {
    if (filter) {
@@ -130,6 +134,7 @@ export default function SearchFilterDialog({
    }
  />
  <RecognizedLicensePlatesFilterContent
    allRecognizedLicensePlates={allRecognizedLicensePlates}
    recognizedLicensePlates={currentFilter.recognized_license_plate}
    setRecognizedLicensePlates={(plate) =>
      setCurrentFilter({
@@ -875,6 +880,7 @@ export function SnapshotClipFilterContent({
}

type RecognizedLicensePlatesFilterContentProps = {
  allRecognizedLicensePlates: string[] | undefined;
  recognizedLicensePlates: string[] | undefined;
  setRecognizedLicensePlates: (
    recognizedLicensePlates: string[] | undefined,
@@ -882,18 +888,12 @@ type RecognizedLicensePlatesFilterContentProps = {
};

export function RecognizedLicensePlatesFilterContent({
  allRecognizedLicensePlates,
  recognizedLicensePlates,
  setRecognizedLicensePlates,
}: RecognizedLicensePlatesFilterContentProps) {
  const { t } = useTranslation(["components/filter"]);

  const { data: allRecognizedLicensePlates, error } = useSWR<string[]>(
    "recognized_license_plates",
    {
      revalidateOnFocus: false,
    },
  );

  const [selectedRecognizedLicensePlates, setSelectedRecognizedLicensePlates] =
    useState<string[]>(recognizedLicensePlates || []);
  const [inputValue, setInputValue] = useState("");
@@ -923,7 +923,7 @@ export function RecognizedLicensePlatesFilterContent({
    }
  };

  if (!allRecognizedLicensePlates || allRecognizedLicensePlates.length === 0) {
  if (allRecognizedLicensePlates && allRecognizedLicensePlates.length === 0) {
    return null;
  }
@@ -947,15 +947,12 @@ export function RecognizedLicensePlatesFilterContent({
  <div className="overflow-x-hidden">
    <DropdownMenuSeparator className="mb-3" />
    <div className="mb-3 text-lg">{t("recognizedLicensePlates.title")}</div>
    {error ? (
      <p className="text-sm text-red-500">
        {t("recognizedLicensePlates.loadFailed")}
      </p>
    ) : !allRecognizedLicensePlates ? (
      <p className="text-sm text-muted-foreground">
        {t("recognizedLicensePlates.loading")}
      </p>
    ) : (
    {allRecognizedLicensePlates == undefined ? (
      <div className="flex flex-col items-center justify-center text-sm text-muted-foreground">
        <ActivityIndicator className="mb-3 mr-2 size-5" />
        <p>{t("recognizedLicensePlates.loading")}</p>
      </div>
    ) : allRecognizedLicensePlates.length == 0 ? null : (
      <>
        <Command
          className="border border-input bg-background"
@@ -1010,11 +1007,11 @@ export function RecognizedLicensePlatesFilterContent({
          ))}
        </div>
      )}
      <p className="mt-1 text-sm text-muted-foreground">
        {t("recognizedLicensePlates.selectPlatesFromList")}
      </p>
    </>
  )}
  <p className="mt-1 text-sm text-muted-foreground">
    {t("recognizedLicensePlates.selectPlatesFromList")}
  </p>
  </div>
);
}
@@ -1,4 +1,10 @@
import React, { useState, useRef, useEffect, useCallback } from "react";
import React, {
  useState,
  useRef,
  useEffect,
  useCallback,
  useMemo,
} from "react";
import { useVideoDimensions } from "@/hooks/use-video-dimensions";
import HlsVideoPlayer from "./HlsVideoPlayer";
import ActivityIndicator from "../indicators/activity-indicator";
@@ -89,6 +95,12 @@ export function GenericVideoPlayer({
    },
  );

  const hlsSource = useMemo(() => {
    return {
      playlist: source,
    };
  }, [source]);

  return (
    <div ref={containerRef} className="relative flex h-full w-full flex-col">
      <div className="relative flex flex-grow items-center justify-center">
@@ -107,7 +119,7 @@ export function GenericVideoPlayer({
        >
          <HlsVideoPlayer
            videoRef={videoRef}
            currentSource={source}
            currentSource={hlsSource}
            hotKeys
            visible
            frigateControls={false}
@@ -28,17 +28,22 @@ const unsupportedErrorCodes = [
  MediaError.MEDIA_ERR_DECODE,
];

export interface HlsSource {
  playlist: string;
  startPosition?: number;
}

type HlsVideoPlayerProps = {
  videoRef: MutableRefObject<HTMLVideoElement | null>;
  containerRef?: React.MutableRefObject<HTMLDivElement | null>;
  visible: boolean;
  currentSource: string;
  currentSource: HlsSource;
  hotKeys: boolean;
  supportsFullscreen: boolean;
  fullscreen: boolean;
  frigateControls?: boolean;
  inpointOffset?: number;
  onClipEnded?: () => void;
  onClipEnded?: (currentTime: number) => void;
  onPlayerLoaded?: () => void;
  onTimeUpdate?: (time: number) => void;
  onPlaying?: () => void;
@@ -113,18 +118,28 @@ export default function HlsVideoPlayer({
    const currentPlaybackRate = videoRef.current.playbackRate;

    if (!useHlsCompat) {
      videoRef.current.src = currentSource;
      videoRef.current.src = currentSource.playlist;
      videoRef.current.load();
      return;
    }

    if (!hlsRef.current) {
      hlsRef.current = new Hls();
      hlsRef.current.attachMedia(videoRef.current);
    }

    hlsRef.current.loadSource(currentSource);
    hlsRef.current = new Hls({
      maxBufferLength: 10,
      maxBufferSize: 20 * 1000 * 1000,
      startPosition: currentSource.startPosition,
    });
    hlsRef.current.attachMedia(videoRef.current);
    hlsRef.current.loadSource(currentSource.playlist);
    videoRef.current.playbackRate = currentPlaybackRate;

    return () => {
      // we must destroy the hlsRef every time the source changes
      // so that we can create a new HLS instance with startPosition
      // set at the optimal point in time
      if (hlsRef.current) {
        hlsRef.current.destroy();
      }
    };
  }, [videoRef, hlsRef, useHlsCompat, currentSource]);

  // state handling
@@ -374,7 +389,11 @@ export default function HlsVideoPlayer({
          }
        }
      }}
      onEnded={onClipEnded}
      onEnded={() => {
        if (onClipEnded) {
          onClipEnded(getVideoTime() ?? 0);
        }
      }}
      onError={(e) => {
        if (
          !hlsRef.current &&
@@ -164,7 +164,7 @@ export default function JSMpegPlayer({
    statsIntervalRef.current = setInterval(() => {
      const currentTimestamp = Date.now();
      const timeDiff = (currentTimestamp - lastTimestampRef.current) / 1000; // in seconds
      const bitrate = (bytesReceivedRef.current * 8) / timeDiff / 1000; // in kbps
      const bitrate = bytesReceivedRef.current / timeDiff / 1000; // in kBps

      setStats?.({
        streamType: "jsmpeg",
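Reviewer note: the stats fixes in this and the following hunks are all the same unit correction. The UI label reports kilobytes per second, so the computation drops the `* 8` bits conversion and divides bytes by 1000. The arithmetic, spelled out:

bytes_received = 250_000  # bytes seen in the sampling window
seconds = 2.0

kbps = (bytes_received * 8) / seconds / 1000  # kilobits per second
kBps = bytes_received / seconds / 1000        # kilobytes per second

assert kbps == 1000.0  # 1 Mbit/s
assert kBps == 125.0   # the same traffic, expressed as 125 kB/s
assert kbps == kBps * 8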
@@ -80,7 +80,7 @@ export default function LivePlayer({

  const [stats, setStats] = useState<PlayerStatsType>({
    streamType: "-",
    bandwidth: 0, // in kbps
    bandwidth: 0, // in kBps
    latency: undefined, // in seconds
    totalFrames: 0,
    droppedFrames: undefined,
@@ -338,7 +338,7 @@ function MSEPlayer({
      // console.debug("VideoRTC.buffer", b.byteLength, bufLen);
    } else {
      try {
        sb?.appendBuffer(data);
        sb?.appendBuffer(data as ArrayBuffer);
      } catch (e) {
        // no-op
      }
@@ -592,7 +592,7 @@ function MSEPlayer({
    const now = Date.now();
    const bytesLoaded = totalBytesLoaded.current;
    const timeElapsed = (now - lastTimestamp) / 1000; // seconds
    const bandwidth = (bytesLoaded - lastLoadedBytes) / timeElapsed / 1024; // kbps
    const bandwidth = (bytesLoaded - lastLoadedBytes) / timeElapsed / 1000; // kBps

    lastLoadedBytes = bytesLoaded;
    lastTimestamp = now;
@@ -17,7 +17,7 @@ export function PlayerStats({ stats, minimal }: PlayerStatsProps) {
      </p>
      <p>
        <span className="text-white/70">{t("stats.bandwidth.title")}</span>{" "}
        <span className="text-white">{stats.bandwidth.toFixed(2)} kbps</span>
        <span className="text-white">{stats.bandwidth.toFixed(2)} kBps</span>
      </p>
      {stats.latency != undefined && (
        <p>
@@ -66,7 +66,7 @@ export function PlayerStats({ stats, minimal }: PlayerStatsProps) {
      </div>
      <div className="flex flex-col items-center gap-1">
        <span className="text-white/70">{t("stats.bandwidth.short")}</span>{" "}
        <span className="text-white">{stats.bandwidth.toFixed(2)} kBps</span>
      </div>
      {stats.latency != undefined && (
        <div className="hidden flex-col items-center gap-1 md:flex">
@@ -266,7 +266,7 @@ export default function WebRtcPlayer({
      const bitrate =
        timeDiff > 0
          ? (bytesReceived - lastBytesReceived) / timeDiff / 1000
          : 0; // in kbps
          : 0; // in kBps

      setStats?.({
        streamType: "WebRTC",
@@ -6,7 +6,7 @@ import { Recording } from "@/types/record";
import { Preview } from "@/types/preview";
import PreviewPlayer, { PreviewController } from "../PreviewPlayer";
import { DynamicVideoController } from "./DynamicVideoController";
import HlsVideoPlayer from "../HlsVideoPlayer";
import HlsVideoPlayer, { HlsSource } from "../HlsVideoPlayer";
import { TimeRange } from "@/types/timeline";
import ActivityIndicator from "@/components/indicators/activity-indicator";
import { VideoResolutionType } from "@/types/live";
@@ -14,6 +14,7 @@ import axios from "axios";
import { cn } from "@/lib/utils";
import { useTranslation } from "react-i18next";
import { calculateInpointOffset } from "@/utils/videoUtil";
import { isFirefox } from "react-device-detect";

/**
 * Dynamically switches between video playback and scrubbing preview player.
@@ -98,9 +99,10 @@ export default function DynamicVideoPlayer({
  const [isLoading, setIsLoading] = useState(false);
  const [isBuffering, setIsBuffering] = useState(false);
  const [loadingTimeout, setLoadingTimeout] = useState<NodeJS.Timeout>();
  const [source, setSource] = useState(
    `${apiHost}vod/${camera}/start/${timeRange.after}/end/${timeRange.before}/master.m3u8`,
  );
  const [source, setSource] = useState<HlsSource>({
    playlist: `${apiHost}vod/${camera}/start/${timeRange.after}/end/${timeRange.before}/master.m3u8`,
    startPosition: startTimestamp ? timeRange.after - startTimestamp : 0,
  });

  // start at correct time
@@ -184,9 +186,28 @@ export default function DynamicVideoPlayer({
        playerRef.current.autoplay = !isScrubbing;
      }

      setSource(
        `${apiHost}vod/${camera}/start/${recordingParams.after}/end/${recordingParams.before}/master.m3u8`,
      );
      let startPosition = undefined;

      if (startTimestamp) {
        const inpointOffset = calculateInpointOffset(
          recordingParams.after,
          (recordings || [])[0],
        );
        const idealStartPosition = Math.max(
          0,
          startTimestamp - timeRange.after - inpointOffset,
        );

        if (idealStartPosition >= recordings[0].start_time - timeRange.after) {
          startPosition = idealStartPosition;
        }
      }

      setSource({
        playlist: `${apiHost}vod/${camera}/start/${recordingParams.after}/end/${recordingParams.before}/master.m3u8`,
        startPosition,
      });

      setLoadingTimeout(setTimeout(() => setIsLoading(true), 1000));

      controller.newPlayback({
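Reviewer note: the new `startPosition` is plain arithmetic: the desired absolute timestamp minus the start of the requested range, corrected by the inpoint offset of the first recording segment, and only used when the target actually falls inside the available recordings. A sketch of that calculation under those assumptions (function and parameter names are illustrative):

def compute_start_position(
    start_timestamp: float,
    range_after: float,
    inpoint_offset: float,
    first_recording_start: float,
) -> float | None:
    """Seconds into the playlist to start at, or None to start at the beginning."""
    ideal = max(0.0, start_timestamp - range_after - inpoint_offset)
    # only usable if the target lies within recordings that actually exist
    if ideal >= first_recording_start - range_after:
        return ideal
    return None


# jump 90s into a range whose first recorded segment starts 30s in
assert compute_start_position(1090.0, 1000.0, 0.0, 1030.0) == 90.0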
@@ -203,6 +224,33 @@ export default function DynamicVideoPlayer({
    [recordingParams, recordings],
  );

  const onValidateClipEnd = useCallback(
    (currentTime: number) => {
      if (!onClipEnded || !controller || !recordings) {
        return;
      }

      if (!isFirefox) {
        onClipEnded();
      }

      // Firefox has a bug where clipEnded can be called prematurely due to buffering
      // we need to validate if the current play-point is truly at the end of available recordings

      const lastRecordingTime = recordings.at(-1)?.start_time;

      if (
        !lastRecordingTime ||
        controller.getProgress(currentTime) < lastRecordingTime
      ) {
        return;
      }

      onClipEnded();
    },
    [onClipEnded, controller, recordings],
  );

  return (
    <>
      <HlsVideoPlayer
@@ -216,7 +264,7 @@ export default function DynamicVideoPlayer({
        inpointOffset={inpointOffset}
        onTimeUpdate={onTimeUpdate}
        onPlayerLoaded={onPlayerLoaded}
        onClipEnded={onClipEnded}
        onClipEnded={onValidateClipEnd}
        onPlaying={() => {
          if (isScrubbing) {
            playerRef.current?.pause();
@@ -1,5 +1,5 @@
import { CameraConfig, FrigateConfig } from "@/types/frigateConfig";
import { useCallback, useEffect, useState } from "react";
import { useCallback, useEffect, useState, useMemo } from "react";
import useSWR from "swr";
import { LivePlayerMode, LiveStreamMetadata } from "@/types/live";
@@ -8,9 +8,68 @@ export default function useCameraLiveMode(
  windowVisible: boolean,
) {
  const { data: config } = useSWR<FrigateConfig>("config");
  const { data: allStreamMetadata } = useSWR<{

  // Get comma-separated list of restreamed stream names for SWR key
  const restreamedStreamsKey = useMemo(() => {
    if (!cameras || !config) return null;

    const streamNames = new Set<string>();
    cameras.forEach((camera) => {
      const isRestreamed = Object.keys(config.go2rtc.streams || {}).includes(
        Object.values(camera.live.streams)[0],
      );

      if (isRestreamed) {
        Object.values(camera.live.streams).forEach((streamName) => {
          streamNames.add(streamName);
        });
      }
    });

    return streamNames.size > 0
      ? Array.from(streamNames).sort().join(",")
      : null;
  }, [cameras, config]);

  const streamsFetcher = useCallback(async (key: string) => {
    const streamNames = key.split(",");

    const metadataPromises = streamNames.map(async (streamName) => {
      try {
        const response = await fetch(`/api/go2rtc/streams/${streamName}`, {
          priority: "low",
        });

        if (response.ok) {
          const data = await response.json();
          return { streamName, data };
        }
        return { streamName, data: null };
      } catch (error) {
        // eslint-disable-next-line no-console
        console.error(`Failed to fetch metadata for ${streamName}:`, error);
        return { streamName, data: null };
      }
    });

    const results = await Promise.allSettled(metadataPromises);

    const metadata: { [key: string]: LiveStreamMetadata } = {};
    results.forEach((result) => {
      if (result.status === "fulfilled" && result.value.data) {
        metadata[result.value.streamName] = result.value.data;
      }
    });

    return metadata;
  }, []);

  const { data: allStreamMetadata = {} } = useSWR<{
    [key: string]: LiveStreamMetadata;
  }>(config ? "go2rtc/streams" : null, { revalidateOnFocus: false });
  }>(restreamedStreamsKey, streamsFetcher, {
    revalidateOnFocus: false,
    dedupingInterval: 10000,
  });

  const [preferredLiveModes, setPreferredLiveModes] = useState<{
    [key: string]: LivePlayerMode;
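Reviewer note: the fetcher above fans out one request per stream and tolerates individual failures, so a single unreachable stream cannot sink the whole metadata map. The same shape in Python terms, with `asyncio.gather(..., return_exceptions=True)` standing in for `Promise.allSettled` and the fetch itself stubbed out:

import asyncio


async def fetch_stream_metadata(stream_name: str) -> tuple[str, dict | None]:
    try:
        await asyncio.sleep(0.01)  # stand-in for GET /api/go2rtc/streams/<name>
        return stream_name, {"name": stream_name}
    except Exception:
        return stream_name, None  # one failed stream must not sink the others


async def fetch_all(stream_names: list[str]) -> dict[str, dict]:
    results = await asyncio.gather(
        *(fetch_stream_metadata(name) for name in stream_names),
        return_exceptions=True,  # collect failures instead of raising
    )
    metadata: dict[str, dict] = {}
    for item in results:
        if isinstance(item, BaseException):
            continue
        name, data = item
        if data is not None:
            metadata[name] = data
    return metadata


print(asyncio.run(fetch_all(["front", "back"])))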
@@ -17,7 +17,7 @@ export function useVideoDimensions(
  });

  const videoAspectRatio = useMemo(() => {
    return videoResolution.width / videoResolution.height;
    return videoResolution.width / videoResolution.height || 16 / 9;
  }, [videoResolution]);

  const containerAspectRatio = useMemo(() => {
@@ -25,8 +25,8 @@ export function useVideoDimensions(
  }, [containerWidth, containerHeight]);

  const videoDimensions = useMemo(() => {
    if (!containerWidth || !containerHeight || !videoAspectRatio)
      return { width: "100%", height: "100%" };
    if (!containerWidth || !containerHeight)
      return { aspectRatio: "16 / 9", width: "100%" };
    if (containerAspectRatio > videoAspectRatio) {
      const height = containerHeight;
      const width = height * videoAspectRatio;
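Reviewer note: the hook compares container and video aspect ratios and sizes the element so the video fits without distortion, falling back to 16:9 until real dimensions are known. The fitting rule itself, as a standalone sketch:

def fit_video(
    container_w: float, container_h: float, video_w: float, video_h: float
) -> tuple[float, float]:
    """Largest w x h with the video's aspect ratio that fits in the container."""
    video_ar = (video_w / video_h) if video_h else 16 / 9  # fallback like the hook
    container_ar = container_w / container_h
    if container_ar > video_ar:
        # container is wider than the video: height is the limit (pillarbox)
        return container_h * video_ar, container_h
    # container is narrower: width is the limit (letterbox)
    return container_w, container_w / video_ar


assert fit_video(1920, 1080, 1280, 720) == (1920.0, 1080.0)
assert fit_video(1000, 1000, 1600, 900) == (1000.0, 562.5)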
@@ -73,7 +73,11 @@ export default function Settings() {

  const isAdmin = useIsAdmin();

  const allowedViewsForViewer: SettingsType[] = ["ui", "debug"];
  const allowedViewsForViewer: SettingsType[] = [
    "ui",
    "debug",
    "notifications",
  ];
  const visibleSettingsViews = !isAdmin
    ? allowedViewsForViewer
    : allSettingsViews;
@@ -164,7 +168,7 @@ export default function Settings() {
  useSearchEffect("page", (page: string) => {
    if (allSettingsViews.includes(page as SettingsType)) {
      // Restrict viewer to UI settings
      if (!isAdmin && !["ui", "debug"].includes(page)) {
      if (!isAdmin && !allowedViewsForViewer.includes(page as SettingsType)) {
        setPage("ui");
      } else {
        setPage(page as SettingsType);
@@ -200,7 +204,7 @@ export default function Settings() {
  onValueChange={(value: SettingsType) => {
    if (value) {
      // Restrict viewer navigation
      if (!isAdmin && !["ui", "debug"].includes(value)) {
      if (!isAdmin && !allowedViewsForViewer.includes(value)) {
        setPageToggle("ui");
      } else {
        setPageToggle(value);
@@ -390,7 +390,6 @@ export default function FrigatePlusSettingsView({
  className="cursor-pointer"
  value={id}
  disabled={
    model.type != config.model.model_type ||
    !model.supportedDetectors.includes(
      Object.values(config.detectors)[0]
        .type,
@@ -46,6 +46,8 @@ import { Alert, AlertDescription, AlertTitle } from "@/components/ui/alert";
import { Trans, useTranslation } from "react-i18next";
import { useDateLocale } from "@/hooks/use-date-locale";
import { useDocDomain } from "@/hooks/use-doc-domain";
import { useIsAdmin } from "@/hooks/use-is-admin";
import { cn } from "@/lib/utils";

const NOTIFICATION_SERVICE_WORKER = "notifications-worker.js";
@@ -64,6 +66,10 @@ export default function NotificationView({
  const { t } = useTranslation(["views/settings"]);
  const { getLocaleDocUrl } = useDocDomain();

  // roles

  const isAdmin = useIsAdmin();

  const { data: config, mutate: updateConfig } = useSWR<FrigateConfig>(
    "config",
    {
@@ -380,7 +386,11 @@ export default function NotificationView({
    <div className="flex size-full flex-col md:flex-row">
      <Toaster position="top-center" closeButton={true} />
      <div className="scrollbar-container order-last mb-10 mt-2 flex h-full w-full flex-col overflow-y-auto rounded-lg border-[1px] border-secondary-foreground bg-background_alt p-2 md:order-none md:mb-0 md:mr-2 md:mt-0">
        <div className="grid w-full grid-cols-1 gap-4 md:grid-cols-2">
        <div
          className={cn(
            isAdmin && "grid w-full grid-cols-1 gap-4 md:grid-cols-2",
          )}
        >
          <div className="col-span-1">
            <Heading as="h3" className="my-2">
              {t("notification.notificationSettings.title")}
@@ -403,138 +413,151 @@ export default function NotificationView({
  </div>
</div>

<Form {...form}>
  <form
    onSubmit={form.handleSubmit(onSubmit)}
    className="mt-2 space-y-6"
  >
    <FormField
      control={form.control}
      name="email"
      render={({ field }) => (
        <FormItem>
          <FormLabel>{t("notification.email.title")}</FormLabel>
          <FormControl>
            <Input
              className="text-md w-full border border-input bg-background p-2 hover:bg-accent hover:text-accent-foreground dark:[color-scheme:dark] md:w-72"
              placeholder={t("notification.email.placeholder")}
              {...field}
            />
          </FormControl>
          <FormDescription>
            {t("notification.email.desc")}
          </FormDescription>
          <FormMessage />
        </FormItem>
      )}
    />
{isAdmin && (
  <Form {...form}>
    <form
      onSubmit={form.handleSubmit(onSubmit)}
      className="mt-2 space-y-6"
    >
      <FormField
        control={form.control}
        name="email"
        render={({ field }) => (
          <FormItem>
            <FormLabel>{t("notification.email.title")}</FormLabel>
            <FormControl>
              <Input
                className="text-md w-full border border-input bg-background p-2 hover:bg-accent hover:text-accent-foreground dark:[color-scheme:dark] md:w-72"
                placeholder={t("notification.email.placeholder")}
                {...field}
              />
            </FormControl>
            <FormDescription>
              {t("notification.email.desc")}
            </FormDescription>
            <FormMessage />
          </FormItem>
        )}
      />

    <FormField
      control={form.control}
      name="cameras"
      render={({ field }) => (
        <FormItem>
          {allCameras && allCameras?.length > 0 ? (
            <>
              <div className="mb-2">
                <FormLabel className="flex flex-row items-center text-base">
                  {t("notification.cameras.title")}
                </FormLabel>
              </div>
              <div className="max-w-md space-y-2 rounded-lg bg-secondary p-4">
                <FormField
                  control={form.control}
                  name="allEnabled"
                  render={({ field }) => (
      <FormField
        control={form.control}
        name="cameras"
        render={({ field }) => (
          <FormItem>
            {allCameras && allCameras?.length > 0 ? (
              <>
                <div className="mb-2">
                  <FormLabel className="flex flex-row items-center text-base">
                    {t("notification.cameras.title")}
                  </FormLabel>
                </div>
                <div className="max-w-md space-y-2 rounded-lg bg-secondary p-4">
                  <FormField
                    control={form.control}
                    name="allEnabled"
                    render={({ field }) => (
                      <FilterSwitch
                        label={t("cameras.all.title", {
                          ns: "components/filter",
                        })}
                        isChecked={field.value}
                        onCheckedChange={(checked) => {
                          setChangedValue(true);
                          if (checked) {
                            form.setValue("cameras", []);
                          }
                          field.onChange(checked);
                        }}
                      />
                    )}
                  />
                  {allCameras?.map((camera) => (
                    <FilterSwitch
                      label={t("cameras.all.title", {
                        ns: "components/filter",
                      })}
                      isChecked={field.value}
                      key={camera.name}
                      label={camera.name.replaceAll("_", " ")}
                      isChecked={field.value?.includes(
                        camera.name,
                      )}
                      onCheckedChange={(checked) => {
                        setChangedValue(true);
                        let newCameras;
                        if (checked) {
                          form.setValue("cameras", []);
                          newCameras = [
                            ...field.value,
                            camera.name,
                          ];
                        } else {
                          newCameras = field.value?.filter(
                            (value) => value !== camera.name,
                          );
                        }
                        field.onChange(checked);
                        field.onChange(newCameras);
                        form.setValue("allEnabled", false);
                      }}
                    />
                  )}
                />
                {allCameras?.map((camera) => (
                  <FilterSwitch
                    key={camera.name}
                    label={camera.name.replaceAll("_", " ")}
                    isChecked={field.value?.includes(camera.name)}
                    onCheckedChange={(checked) => {
                      setChangedValue(true);
                      let newCameras;
                      if (checked) {
                        newCameras = [
                          ...field.value,
                          camera.name,
                        ];
                      } else {
                        newCameras = field.value?.filter(
                          (value) => value !== camera.name,
                        );
                      }
                      field.onChange(newCameras);
                      form.setValue("allEnabled", false);
                    }}
                  />
                  ))}
                ))}
              </div>
            </>
          ) : (
            <div className="font-normal text-destructive">
              {t("notification.cameras.noCameras")}
            </div>
              </>
            ) : (
              <div className="font-normal text-destructive">
                {t("notification.cameras.noCameras")}
              </div>
            )}
          )}

          <FormMessage />
          <FormDescription>
            {t("notification.cameras.desc")}
          </FormDescription>
        </FormItem>
      )}
    />

    <div className="flex w-full flex-row items-center gap-2 pt-2 md:w-[50%]">
      <Button
        className="flex flex-1"
        aria-label={t("button.cancel", { ns: "common" })}
        onClick={onCancel}
        type="button"
      >
        {t("button.cancel", { ns: "common" })}
      </Button>
      <Button
        variant="select"
        disabled={isLoading}
        className="flex flex-1"
        aria-label={t("button.save", { ns: "common" })}
        type="submit"
      >
        {isLoading ? (
          <div className="flex flex-row items-center gap-2">
            <ActivityIndicator />
            <span>{t("button.saving", { ns: "common" })}</span>
          </div>
        ) : (
          t("button.save", { ns: "common" })
            <FormMessage />
            <FormDescription>
              {t("notification.cameras.desc")}
            </FormDescription>
          </FormItem>
        )}
      </Button>
    </div>
  </form>
</Form>
      />

      <div className="flex w-full flex-row items-center gap-2 pt-2 md:w-[50%]">
        <Button
          className="flex flex-1"
          aria-label={t("button.cancel", { ns: "common" })}
          onClick={onCancel}
          type="button"
        >
          {t("button.cancel", { ns: "common" })}
        </Button>
        <Button
          variant="select"
          disabled={isLoading}
          className="flex flex-1"
          aria-label={t("button.save", { ns: "common" })}
          type="submit"
        >
          {isLoading ? (
            <div className="flex flex-row items-center gap-2">
              <ActivityIndicator />
              <span>{t("button.saving", { ns: "common" })}</span>
            </div>
          ) : (
            t("button.save", { ns: "common" })
          )}
        </Button>
      </div>
    </form>
  </Form>
)}
</div>

<div className="col-span-1">
  <div className="mt-4 gap-2 space-y-6">
    <div className="flex flex-col gap-2 md:max-w-[50%]">
      <Separator className="my-2 flex bg-secondary md:hidden" />
      <Heading as="h4" className="my-2">
    <div
      className={cn(
        isAdmin && "flex flex-col gap-2 md:max-w-[50%]",
      )}
    >
      <Separator
        className={cn(
          "my-2 flex bg-secondary",
          isAdmin && "md:hidden",
        )}
      />
      <Heading as="h4" className={cn(isAdmin ? "my-2" : "my-4")}>
        {t("notification.deviceSpecific")}
      </Heading>
      <Button
@@ -580,7 +603,7 @@ export default function NotificationView({
          ? t("notification.unregisterDevice")
          : t("notification.registerDevice")}
      </Button>
      {registration != null && registration.active && (
      {isAdmin && registration != null && registration.active && (
        <Button
          aria-label={t("notification.sendTestNotification")}
          onClick={() => sendTestNotification("notification_test")}
@@ -590,7 +613,7 @@ export default function NotificationView({
      )}
    </div>
  </div>
  {notificationCameras.length > 0 && (
  {isAdmin && notificationCameras.length > 0 && (
    <div className="mt-4 gap-2 space-y-6">
      <div className="space-y-3">
        <Separator className="my-2 flex bg-secondary" />