Merge pull request #11419 from blakeblackshear/dev

0.14 Release
This commit is contained in:
Blake Blackshear 2024-08-08 08:43:29 -05:00 committed by GitHub
commit 8a099b4ae5
No known key found for this signature in database
GPG Key ID: B5690EEEBB952194
614 changed files with 52678 additions and 30122 deletions

View File

@ -0,0 +1,168 @@
rtmp
edgetpu
labelmap
rockchip
jetson
rocm
vaapi
CUDA
hwaccel
RTSP
Hikvision
Dahua
Amcrest
Reolink
Loryta
Beelink
Celeron
vaapi
blakeblackshear
workdir
onvif
autotracking
openvino
tflite
deepstack
codeproject
udev
tailscale
restream
restreaming
webrtc
ssdlite
mobilenet
mosquitto
datasheet
Jellyfin
Radeon
libva
Ubiquiti
Unifi
Tapo
Annke
autotracker
autotracked
variations
ONVIF
traefik
devcontainer
rootfs
ffprobe
autotrack
logpipe
imread
imwrite
imencode
imutils
thresholded
timelapse
ultrafast
sleeptime
radeontop
vainfo
tmpfs
homography
websockets
LIBAVFORMAT
NTSC
onnxruntime
fourcc
radeonsi
paho
imagestream
jsonify
cgroups
sysconf
memlimit
gpuload
nvml
setproctitle
psutil
Kalman
frontdoor
namedtuples
zeep
fflags
probesize
wallclock
rknn
socs
pydantic
shms
imdecode
colormap
webui
mse
jsmpeg
unreviewed
Chromecast
Swipeable
flac
scroller
cmdline
toggleable
bottombar
opencv
apexcharts
buildx
mqtt
rawvideo
defragment
Norfair
subclassing
yolo
tensorrt
blackshear
stylelint
HACS
homeassistant
hass
castable
mobiledet
framebuffer
mjpeg
substream
codeowner
noninteractive
restreamed
mountpoint
fstype
OWASP
iotop
letsencrypt
fullchain
lsusb
iostat
usermod
balena
passwordless
debconf
dpkg
poweroff
surveillance
qnap
homekit
colorspace
quantisation
skylake
Cuvid
foscam
onnx
numpy
protobuf
aarch
amdgpu
chipset
referer
mpegts
webp
authelia
authentik
unichip
rebranded
udevadm
automations
unraid
hideable
healthcheck
keepalive

View File

@ -10,10 +10,14 @@
"features": {
"ghcr.io/devcontainers/features/common-utils:1": {}
},
"forwardPorts": [5000, 5001, 5173, 1935, 8554, 8555],
"forwardPorts": [8971, 5000, 5001, 5173, 8554, 8555],
"portsAttributes": {
"8971": {
"label": "External NGINX",
"onAutoForward": "silent"
},
"5000": {
"label": "NGINX",
"label": "Internal NGINX",
"onAutoForward": "silent"
},
"5001": {
@ -24,10 +28,6 @@
"label": "Vite Server",
"onAutoForward": "silent"
},
"1935": {
"label": "RTMP",
"onAutoForward": "silent"
},
"8554": {
"label": "gortc RTSP",
"onAutoForward": "silent"

View File

@ -0,0 +1,83 @@
title: "[Bug]: "
labels: ["bug", "triage"]
body:
- type: textarea
id: description
attributes:
label: Describe the problem you are having
validations:
required: true
- type: textarea
id: steps
attributes:
label: Steps to reproduce
validations:
required: true
- type: input
id: version
attributes:
label: Version
description: Visible on the System page in the Web UI
validations:
required: true
- type: textarea
id: config
attributes:
label: Frigate config file
description: This will be automatically formatted into code, so no need for backticks.
render: yaml
validations:
required: true
- type: textarea
id: logs
attributes:
label: Relevant log output
description: Please copy and paste any relevant log output. This will be automatically formatted into code, so no need for backticks.
render: shell
validations:
required: true
- type: dropdown
id: os
attributes:
label: Operating system
options:
- HassOS
- Debian
- Other Linux
- Proxmox
- UNRAID
- Windows
- Other
validations:
required: true
- type: dropdown
id: install-method
attributes:
label: Install method
options:
- HassOS Addon
- Docker Compose
- Docker CLI
validations:
required: true
- type: dropdown
id: network
attributes:
label: Network connection
options:
- Wired
- Wireless
- Mixed
validations:
required: true
- type: input
id: camera
attributes:
label: Camera make and model
description: Dahua, Hikvision, Amcrest, Reolink, etc. and the model number
validations:
required: true
- type: textarea
id: other
attributes:
label: Any other information that may be helpful

View File

@ -1,8 +1,5 @@
name: Camera Support Request
description: Support for setting up cameras in Frigate
title: "[Camera Support]: "
labels: ["support", "triage"]
assignees: []
body:
- type: textarea
id: description
@ -14,7 +11,7 @@ body:
id: version
attributes:
label: Version
description: Visible on the Debug page in the Web UI
description: Visible on the System page in the Web UI
validations:
required: true
- type: textarea
@ -72,14 +69,14 @@ body:
validations:
required: true
- type: dropdown
id: coral
id: object-detector
attributes:
label: Coral version
label: Object Detector
options:
- USB
- PCIe
- M.2
- Dev Board
- Coral
- OpenVino
- TensorRT
- RKNN
- Other
- CPU (no coral)
validations:

View File

@ -1,8 +1,5 @@
name: Config Support Request
description: Support for Frigate configuration
title: "[Config Support]: "
labels: ["support", "triage"]
assignees: []
body:
- type: textarea
id: description
@ -14,7 +11,7 @@ body:
id: version
attributes:
label: Version
description: Visible on the Debug page in the Web UI
description: Visible on the System page in the Web UI
validations:
required: true
- type: textarea
@ -64,14 +61,14 @@ body:
validations:
required: true
- type: dropdown
id: coral
id: object-detector
attributes:
label: Coral version
label: Object Detector
options:
- USB
- PCIe
- M.2
- Dev Board
- Coral
- OpenVino
- TensorRT
- RKNN
- Other
- CPU (no coral)
validations:

View File

@ -1,8 +1,5 @@
name: Detector Support Request
description: Support for setting up an object detector in Frigate (Coral, OpenVINO, TensorRT, etc.)
title: "[Detector Support]: "
labels: ["support", "triage"]
assignees: []
body:
- type: textarea
id: description
@ -14,7 +11,7 @@ body:
id: version
attributes:
label: Version
description: Visible on the Debug page in the Web UI
description: Visible on the System page in the Web UI
validations:
required: true
- type: textarea
@ -66,14 +63,14 @@ body:
validations:
required: true
- type: dropdown
id: coral
id: object-detector
attributes:
label: Coral version
label: Object Detector
options:
- USB
- PCIe
- M.2
- Dev Board
- Coral
- OpenVino
- TensorRT
- RKNN
- Other
- CPU (no coral)
validations:

View File

@ -1,8 +1,5 @@
name: General Support Request
description: General support request for Frigate
title: "[Support]: "
labels: ["support", "triage"]
assignees: []
body:
- type: textarea
id: description
@ -14,7 +11,7 @@ body:
id: version
attributes:
label: Version
description: Visible on the Debug page in the Web UI
description: Visible on the System page in the Web UI
validations:
required: true
- type: textarea
@ -72,14 +69,14 @@ body:
validations:
required: true
- type: dropdown
id: coral
id: object-detector
attributes:
label: Coral version
label: Object Detector
options:
- USB
- PCIe
- M.2
- Dev Board
- Coral
- OpenVino
- TensorRT
- RKNN
- Other
- CPU (no coral)
validations:

View File

@ -1,8 +1,5 @@
name: Hardware Acceleration Support Request
description: Support for setting up GPU hardware acceleration in Frigate
title: "[HW Accel Support]: "
labels: ["support", "triage"]
assignees: []
body:
- type: textarea
id: description
@ -14,7 +11,7 @@ body:
id: version
attributes:
label: Version
description: Visible on the Debug page in the Web UI
description: Visible on the System page in the Web UI
validations:
required: true
- type: textarea

View File

@ -0,0 +1,9 @@
title: "[Question]: "
labels: ["question"]
body:
- type: textarea
id: description
attributes:
label: "What is your question:"
validations:
required: true

.github/FUNDING.yml vendored (1 line changed)
View File

@ -1,3 +1,4 @@
github:
- blakeblackshear
- NickM-27
- hawkeye217

View File

@ -1 +1,8 @@
blank_issues_enabled: false
contact_links:
- name: Frigate Support
url: https://github.com/blakeblackshear/frigate/discussions/new/choose
about: Get support for setting up or troubleshooting Frigate.
- name: Frigate Bug Report
url: https://github.com/blakeblackshear/frigate/discussions/new/choose
about: Report a specific UI or backend bug.

View File

@ -34,5 +34,7 @@ updates:
directory: "/docs"
schedule:
interval: daily
allow:
- dependency-name: "@docusaurus/*"
open-pull-requests-limit: 10
target-branch: dev

View File

@ -37,16 +37,6 @@ jobs:
target: frigate
tags: ${{ steps.setup.outputs.image-name }}-amd64
cache-from: type=registry,ref=${{ steps.setup.outputs.cache-name }}-amd64
- name: Build and push TensorRT (x86 GPU)
uses: docker/bake-action@v4
with:
push: true
targets: tensorrt
files: docker/tensorrt/trt.hcl
set: |
tensorrt.tags=${{ steps.setup.outputs.image-name }}-tensorrt
*.cache-from=type=registry,ref=${{ steps.setup.outputs.cache-name }}-amd64
*.cache-to=type=registry,ref=${{ steps.setup.outputs.cache-name }}-amd64,mode=max
arm64_build:
runs-on: ubuntu-latest
name: ARM Build
@ -79,15 +69,15 @@ jobs:
rpi.tags=${{ steps.setup.outputs.image-name }}-rpi
*.cache-from=type=registry,ref=${{ steps.setup.outputs.cache-name }}-arm64
*.cache-to=type=registry,ref=${{ steps.setup.outputs.cache-name }}-arm64,mode=max
#- name: Build and push RockChip build
# uses: docker/bake-action@v3
# with:
# push: true
# targets: rk
# files: docker/rockchip/rk.hcl
# set: |
# rk.tags=${{ steps.setup.outputs.image-name }}-rk
# *.cache-from=type=gha
- name: Build and push Rockchip build
uses: docker/bake-action@v3
with:
push: true
targets: rk
files: docker/rockchip/rk.hcl
set: |
rk.tags=${{ steps.setup.outputs.image-name }}-rk
*.cache-from=type=gha
jetson_jp4_build:
runs-on: ubuntu-latest
name: Jetson Jetpack 4
@ -140,6 +130,82 @@ jobs:
tensorrt.tags=${{ steps.setup.outputs.image-name }}-tensorrt-jp5
*.cache-from=type=registry,ref=${{ steps.setup.outputs.cache-name }}-jp5
*.cache-to=type=registry,ref=${{ steps.setup.outputs.cache-name }}-jp5,mode=max
amd64_extra_builds:
runs-on: ubuntu-latest
name: AMD64 Extra Build
needs:
- amd64_build
steps:
- name: Check out code
uses: actions/checkout@v4
- name: Set up QEMU and Buildx
id: setup
uses: ./.github/actions/setup
with:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
- name: Build and push TensorRT (x86 GPU)
env:
COMPUTE_LEVEL: "50 60 70 80 90"
uses: docker/bake-action@v4
with:
push: true
targets: tensorrt
files: docker/tensorrt/trt.hcl
set: |
tensorrt.tags=${{ steps.setup.outputs.image-name }}-tensorrt
*.cache-from=type=registry,ref=${{ steps.setup.outputs.cache-name }}-amd64
*.cache-to=type=registry,ref=${{ steps.setup.outputs.cache-name }}-amd64,mode=max
#- name: AMD/ROCm general build
# env:
# AMDGPU: gfx
# HSA_OVERRIDE: 0
# uses: docker/bake-action@v3
# with:
# push: true
# targets: rocm
# files: docker/rocm/rocm.hcl
# set: |
# rocm.tags=${{ steps.setup.outputs.image-name }}-rocm
# *.cache-from=type=gha
#- name: AMD/ROCm gfx900
# env:
# AMDGPU: gfx900
# HSA_OVERRIDE: 1
# HSA_OVERRIDE_GFX_VERSION: 9.0.0
# uses: docker/bake-action@v3
# with:
# push: true
# targets: rocm
# files: docker/rocm/rocm.hcl
# set: |
# rocm.tags=${{ steps.setup.outputs.image-name }}-rocm-gfx900
# *.cache-from=type=gha
#- name: AMD/ROCm gfx1030
# env:
# AMDGPU: gfx1030
# HSA_OVERRIDE: 1
# HSA_OVERRIDE_GFX_VERSION: 10.3.0
# uses: docker/bake-action@v3
# with:
# push: true
# targets: rocm
# files: docker/rocm/rocm.hcl
# set: |
# rocm.tags=${{ steps.setup.outputs.image-name }}-rocm-gfx1030
# *.cache-from=type=gha
#- name: AMD/ROCm gfx1100
# env:
# AMDGPU: gfx1100
# HSA_OVERRIDE: 1
# HSA_OVERRIDE_GFX_VERSION: 11.0.0
# uses: docker/bake-action@v3
# with:
# push: true
# targets: rocm
# files: docker/rocm/rocm.hcl
# set: |
# rocm.tags=${{ steps.setup.outputs.image-name }}-rocm-gfx1100
# *.cache-from=type=gha
# The majority of users running arm64 are rpi users, so the rpi
# build should be the primary arm64 image
assemble_default_build:
@ -154,16 +220,16 @@ jobs:
with:
string: ${{ github.repository }}
- name: Log in to the Container registry
uses: docker/login-action@343f7c4344506bcbf9b4de18042ae17996df046d
uses: docker/login-action@0d4c9c5ea7693da7b068278f7b52bda2a190a446
with:
registry: ghcr.io
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Create short sha
run: echo "SHORT_SHA=${GITHUB_SHA::7}" >> $GITHUB_ENV
- uses: int128/docker-manifest-create-action@v1
- uses: int128/docker-manifest-create-action@v2
with:
tags: ghcr.io/${{ steps.lowercaseRepo.outputs.lowercase }}:${{ github.ref_name }}-${{ env.SHORT_SHA }}
suffixes: |
-amd64
-rpi
sources: |
ghcr.io/${{ steps.lowercaseRepo.outputs.lowercase }}:${{ github.ref_name }}-${{ env.SHORT_SHA }}-amd64
ghcr.io/${{ steps.lowercaseRepo.outputs.lowercase }}:${{ github.ref_name }}-${{ env.SHORT_SHA }}-rpi

View File

@ -11,7 +11,7 @@ jobs:
steps:
- name: Get Dependabot metadata
id: metadata
uses: dependabot/fetch-metadata@v1
uses: dependabot/fetch-metadata@v2
with:
github-token: ${{ secrets.GITHUB_TOKEN }}
- name: Enable auto-merge for Dependabot PRs

View File

@ -51,12 +51,12 @@ jobs:
- uses: actions/checkout@v4
- uses: actions/setup-node@master
with:
node-version: 16.x
node-version: 20.x
- run: npm install
working-directory: ./web
- name: Test
run: npm run test
working-directory: ./web
# - name: Test
# run: npm run test
# working-directory: ./web
python_checks:
runs-on: ubuntu-latest
@ -65,7 +65,7 @@ jobs:
- name: Check out the repository
uses: actions/checkout@v4
- name: Set up Python ${{ env.DEFAULT_PYTHON }}
uses: actions/setup-python@v5.0.0
uses: actions/setup-python@v5.1.0
with:
python-version: ${{ env.DEFAULT_PYTHON }}
- name: Install requirements

View File

@ -16,22 +16,32 @@ jobs:
with:
string: ${{ github.repository }}
- name: Log in to the Container registry
uses: docker/login-action@343f7c4344506bcbf9b4de18042ae17996df046d
uses: docker/login-action@0d4c9c5ea7693da7b068278f7b52bda2a190a446
with:
registry: ghcr.io
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Create tag variables
run: |
BRANCH=$([[ "${{ github.ref_name }}" =~ ^v[0-9]+\.[0-9]+\.[0-9]+$ ]] && echo "master" || echo "dev")
BRANCH=dev
echo "BRANCH=${BRANCH}" >> $GITHUB_ENV
echo "BASE=ghcr.io/${{ steps.lowercaseRepo.outputs.lowercase }}" >> $GITHUB_ENV
echo "BUILD_TAG=${BRANCH}-${GITHUB_SHA::7}" >> $GITHUB_ENV
echo "CLEAN_VERSION=$(echo ${GITHUB_REF##*/} | tr '[:upper:]' '[:lower:]' | sed 's/^[v]//')" >> $GITHUB_ENV
- name: Tag and push the main image
run: |
VERSION_TAG=${BASE}:${CLEAN_VERSION}
STABLE_TAG=${BASE}:stable
PULL_TAG=${BASE}:${BUILD_TAG}
docker run --rm -v $HOME/.docker/config.json:/config.json quay.io/skopeo/stable:latest copy --authfile /config.json --multi-arch all docker://${PULL_TAG} docker://${VERSION_TAG}
for variant in standard-arm64 tensorrt tensorrt-jp4 tensorrt-jp5 rk; do
docker run --rm -v $HOME/.docker/config.json:/config.json quay.io/skopeo/stable:latest copy --authfile /config.json --multi-arch all docker://${PULL_TAG}-${variant} docker://${VERSION_TAG}-${variant}
done
# stable tag
if [[ "${BRANCH}" == "master" ]]; then
docker run --rm -v $HOME/.docker/config.json:/config.json quay.io/skopeo/stable:latest copy --authfile /config.json --multi-arch all docker://${PULL_TAG} docker://${STABLE_TAG}
for variant in standard-arm64 tensorrt tensorrt-jp4 tensorrt-jp5 rk; do
docker run --rm -v $HOME/.docker/config.json:/config.json quay.io/skopeo/stable:latest copy --authfile /config.json --multi-arch all docker://${PULL_TAG}-${variant} docker://${STABLE_TAG}-${variant}
done
fi
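As an aside, the CLEAN_VERSION derivation above (take the last segment of the ref, lowercase it, strip a single leading "v") can be sketched in Python; the ref value is a hypothetical example, not taken from this workflow run:

import re

github_ref = "refs/tags/v0.14.0"  # hypothetical example value
ref_name = github_ref.rsplit("/", 1)[-1]             # ${GITHUB_REF##*/}
clean_version = re.sub(r"^v", "", ref_name.lower())  # tr + sed 's/^[v]//'
print(clean_version)  # -> 0.14.0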

View File

@ -24,3 +24,18 @@ jobs:
operations-per-run: 120
- name: Print outputs
run: echo ${{ join(steps.stale.outputs.*, ',') }}
clean_ghcr:
name: Delete outdated dev container images
runs-on: ubuntu-latest
steps:
- name: Delete old images
uses: snok/container-retention-policy@v2
with:
image-names: dev-*
cut-off: 60 days ago UTC
keep-at-least: 5
account-type: personal
token: ${{ secrets.GITHUB_TOKEN }}
token-type: github-token

.gitignore vendored (2 lines changed)
View File

@ -8,7 +8,6 @@ config/*
!config/*.example
models
*.mp4
*.ts
*.db
*.csv
frigate/version.py
@ -18,3 +17,4 @@ web/coverage
core
!/web/**/*.ts
.idea/*
.ipynb_checkpoints

View File

@ -2,5 +2,5 @@
/docker/tensorrt/ @madsciencetist @NateMeyer
/docker/tensorrt/*arm64* @madsciencetist
/docker/tensorrt/*jetson* @madsciencetist
/docker/rockchip/ @MarcA711
/docker/rocm/ @harakas

View File

@ -1,7 +1,7 @@
default_target: local
COMMIT_HASH := $(shell git log -1 --pretty=format:"%h"|tail -1)
VERSION = 0.13.2
VERSION = 0.14.0
IMAGE_REPO ?= ghcr.io/blakeblackshear/frigate
GITHUB_REF_NAME ?= $(shell git rev-parse --abbrev-ref HEAD)
CURRENT_UID := $(shell id -u)

View File

@ -29,18 +29,22 @@ If you would like to make a donation to support development, please use [Github
## Screenshots
Integration into Home Assistant
### Live dashboard
<div>
<a href="docs/static/img/media_browser.png"><img src="docs/static/img/media_browser.png" height=400></a>
<a href="docs/static/img/notification.png"><img src="docs/static/img/notification.png" height=400></a>
<img width="800" alt="Live dashboard" src="https://github.com/blakeblackshear/frigate/assets/569905/5e713cb9-9db5-41dc-947a-6937c3bc376e">
</div>
Also comes with a builtin UI:
### Streamlined review workflow
<div>
<a href="docs/static/img/home-ui.png"><img src="docs/static/img/home-ui.png" height=400></a>
<a href="docs/static/img/camera-ui.png"><img src="docs/static/img/camera-ui.png" height=400></a>
<img width="800" alt="Streamlined review workflow" src="https://github.com/blakeblackshear/frigate/assets/569905/6fed96e8-3b18-40e5-9ddc-31e6f3c9f2ff">
</div>
![Events](docs/static/img/events-ui.png)
### Multi-camera scrubbing
<div>
<img width="800" alt="Multi-camera scrubbing" src="https://github.com/blakeblackshear/frigate/assets/569905/d6788a15-0eeb-4427-a8d4-80b93cae3d74">
</div>
### Built-in mask and zone editor
<div>
<img width="800" alt="Multi-camera scrubbing" src="https://github.com/blakeblackshear/frigate/assets/569905/d7885fc3-bfe6-452f-b7d0-d957cb3e31f5">
</div>

cspell.json (new file, 21 lines)
View File

@ -0,0 +1,21 @@
{
"version": "0.2",
"ignorePaths": [
"Dockerfile",
"Dockerfile.*",
"CMakeLists.txt",
"*.db",
"node_modules",
"__pycache__",
"dist"
],
"language": "en",
"dictionaryDefinitions": [
{
"name": "frigate-dictionary",
"path": "./.cspell/frigate-dictionary.txt",
"addWords": true
}
],
"dictionaries": ["frigate-dictionary"]
}

View File

@ -33,15 +33,18 @@ RUN --mount=type=tmpfs,target=/tmp --mount=type=tmpfs,target=/var/cache/apt \
FROM scratch AS go2rtc
ARG TARGETARCH
WORKDIR /rootfs/usr/local/go2rtc/bin
ADD --link --chmod=755 "https://github.com/AlexxIT/go2rtc/releases/download/v1.8.4/go2rtc_linux_${TARGETARCH}" go2rtc
ADD --link --chmod=755 "https://github.com/AlexxIT/go2rtc/releases/download/v1.9.2/go2rtc_linux_${TARGETARCH}" go2rtc
FROM wget AS tempio
ARG TARGETARCH
RUN --mount=type=bind,source=docker/main/install_tempio.sh,target=/deps/install_tempio.sh \
/deps/install_tempio.sh
####
#
# OpenVino Support
#
# 1. Download and convert a model from Intel's Public Open Model Zoo
# 2. Build libUSB without udev to handle NCS2 enumeration
#
####
# Download and Convert OpenVino model
@ -51,17 +54,24 @@ ARG DEBIAN_FRONTEND
# Install OpenVino Runtime and Dev library
COPY docker/main/requirements-ov.txt /requirements-ov.txt
RUN apt-get -qq update \
&& apt-get -qq install -y wget python3 python3-distutils \
&& apt-get -qq install -y wget python3 python3-dev python3-distutils gcc pkg-config libhdf5-dev \
&& wget -q https://bootstrap.pypa.io/get-pip.py -O get-pip.py \
&& python3 get-pip.py "pip" \
&& pip install -r /requirements-ov.txt
# Get OpenVino Model
RUN mkdir /models \
&& cd /models && omz_downloader --name ssdlite_mobilenet_v2 \
&& cd /models && omz_converter --name ssdlite_mobilenet_v2 --precision FP16
RUN --mount=type=bind,source=docker/main/build_ov_model.py,target=/build_ov_model.py \
mkdir /models && cd /models \
&& wget http://download.tensorflow.org/models/object_detection/ssdlite_mobilenet_v2_coco_2018_05_09.tar.gz \
&& tar -xvf ssdlite_mobilenet_v2_coco_2018_05_09.tar.gz \
&& python3 /build_ov_model.py
####
#
# Coral Compatibility
#
# Builds libusb without udev. Needed for Synology and other devices with a USB Coral
####
# libUSB - No Udev
FROM wget as libusb-build
ARG TARGETARCH
@ -97,11 +107,12 @@ RUN wget -qO edgetpu_model.tflite https://github.com/google-coral/test_data/raw/
RUN wget -qO cpu_model.tflite https://github.com/google-coral/test_data/raw/release-frogfish/ssdlite_mobiledet_coco_qat_postprocess.tflite
COPY labelmap.txt .
# Copy OpenVino model
COPY --from=ov-converter /models/public/ssdlite_mobilenet_v2/FP16 openvino-model
COPY --from=ov-converter /models/ssdlite_mobilenet_v2.xml openvino-model/
COPY --from=ov-converter /models/ssdlite_mobilenet_v2.bin openvino-model/
RUN wget -q https://github.com/openvinotoolkit/open_model_zoo/raw/master/data/dataset_classes/coco_91cl_bkgr.txt -O openvino-model/coco_91cl_bkgr.txt && \
sed -i 's/truck/car/g' openvino-model/coco_91cl_bkgr.txt
# Get Audio Model and labels
RUN wget -qO cpu_audio_model.tflite https://tfhub.dev/google/lite-model/yamnet/classification/tflite/1?lite-format=tflite
RUN wget -qO - https://www.kaggle.com/api/v1/models/google/yamnet/tfLite/classification-tflite/1/download | tar xvz && mv 1.tflite cpu_audio_model.tflite
COPY audio-labelmap.txt .
@ -159,6 +170,7 @@ FROM scratch AS deps-rootfs
COPY --from=nginx /usr/local/nginx/ /usr/local/nginx/
COPY --from=go2rtc /rootfs/ /
COPY --from=libusb-build /usr/local/lib /usr/local/lib
COPY --from=tempio /rootfs/ /
COPY --from=s6-overlay /rootfs/ /
COPY --from=models /rootfs/ /
COPY docker/main/rootfs/ /
@ -176,7 +188,7 @@ ARG APT_KEY_DONT_WARN_ON_DANGEROUS_USAGE=DontWarn
ENV NVIDIA_VISIBLE_DEVICES=all
ENV NVIDIA_DRIVER_CAPABILITIES="compute,video,utility"
ENV PATH="/usr/lib/btbn-ffmpeg/bin:/usr/local/go2rtc/bin:/usr/local/nginx/sbin:${PATH}"
ENV PATH="/usr/lib/btbn-ffmpeg/bin:/usr/local/go2rtc/bin:/usr/local/tempio/bin:/usr/local/nginx/sbin:${PATH}"
# Install dependencies
RUN --mount=type=bind,source=docker/main/install_deps.sh,target=/deps/install_deps.sh \
@ -191,12 +203,13 @@ COPY --from=deps-rootfs / /
RUN ldconfig
EXPOSE 5000
EXPOSE 1935
EXPOSE 8554
EXPOSE 8555/tcp 8555/udp
# Configure logging to prepend timestamps, log to stdout, keep 0 archives and rotate at 10MB
ENV S6_LOGGING_SCRIPT="T 1 n0 s10000000 T"
# Do not fail on long-running download scripts
ENV S6_CMD_WAIT_FOR_SERVICES_MAXTIME=0
ENTRYPOINT ["/init"]
CMD []
@ -232,12 +245,14 @@ RUN apt-get update \
RUN --mount=type=bind,source=./docker/main/requirements-dev.txt,target=/workspace/frigate/requirements-dev.txt \
pip3 install -r requirements-dev.txt
HEALTHCHECK NONE
CMD ["sleep", "infinity"]
# Frigate web build
# This should be architecture agnostic, so speed up the build on multiarch by not using QEMU.
FROM --platform=$BUILDPLATFORM node:16 AS web-build
FROM --platform=$BUILDPLATFORM node:20 AS web-build
WORKDIR /work
COPY web/package.json web/package-lock.json ./

View File

@ -5,7 +5,8 @@ set -euxo pipefail
NGINX_VERSION="1.25.3"
VOD_MODULE_VERSION="1.31"
SECURE_TOKEN_MODULE_VERSION="1.5"
RTMP_MODULE_VERSION="1.2.2"
SET_MISC_MODULE_VERSION="v0.33"
NGX_DEVEL_KIT_VERSION="v0.3.3"
cp /etc/apt/sources.list /etc/apt/sources.list.d/sources-src.list
sed -i 's|deb http|deb-src http|g' /etc/apt/sources.list.d/sources-src.list
@ -49,10 +50,16 @@ mkdir /tmp/nginx-secure-token-module
wget https://github.com/kaltura/nginx-secure-token-module/archive/refs/tags/${SECURE_TOKEN_MODULE_VERSION}.tar.gz
tar -zxf ${SECURE_TOKEN_MODULE_VERSION}.tar.gz -C /tmp/nginx-secure-token-module --strip-components=1
rm ${SECURE_TOKEN_MODULE_VERSION}.tar.gz
mkdir /tmp/nginx-rtmp-module
wget -nv https://github.com/arut/nginx-rtmp-module/archive/refs/tags/v${RTMP_MODULE_VERSION}.tar.gz
tar -zxf v${RTMP_MODULE_VERSION}.tar.gz -C /tmp/nginx-rtmp-module --strip-components=1
rm v${RTMP_MODULE_VERSION}.tar.gz
mkdir /tmp/ngx_devel_kit
wget https://github.com/vision5/ngx_devel_kit/archive/refs/tags/${NGX_DEVEL_KIT_VERSION}.tar.gz
tar -zxf ${NGX_DEVEL_KIT_VERSION}.tar.gz -C /tmp/ngx_devel_kit --strip-components=1
rm ${NGX_DEVEL_KIT_VERSION}.tar.gz
mkdir /tmp/nginx-set-misc-module
wget https://github.com/openresty/set-misc-nginx-module/archive/refs/tags/${SET_MISC_MODULE_VERSION}.tar.gz
tar -zxf ${SET_MISC_MODULE_VERSION}.tar.gz -C /tmp/nginx-set-misc-module --strip-components=1
rm ${SET_MISC_MODULE_VERSION}.tar.gz
cd /tmp/nginx
@ -60,10 +67,13 @@ cd /tmp/nginx
--with-file-aio \
--with-http_sub_module \
--with-http_ssl_module \
--with-http_auth_request_module \
--with-http_realip_module \
--with-threads \
--add-module=../ngx_devel_kit \
--add-module=../nginx-set-misc-module \
--add-module=../nginx-vod-module \
--add-module=../nginx-secure-token-module \
--add-module=../nginx-rtmp-module \
--with-cc-opt="-O3 -Wno-error=implicit-fallthrough"
make CC="ccache gcc" -j$(nproc) && make install

View File

@ -0,0 +1,11 @@
import openvino as ov
from openvino.tools import mo
ov_model = mo.convert_model(
"/models/ssdlite_mobilenet_v2_coco_2018_05_09/frozen_inference_graph.pb",
compress_to_fp16=True,
transformations_config="/usr/local/lib/python3.9/dist-packages/openvino/tools/mo/front/tf/ssd_v2_support.json",
tensorflow_object_detection_api_pipeline_config="/models/ssdlite_mobilenet_v2_coco_2018_05_09/pipeline.config",
reverse_input_channels=True,
)
ov.save_model(ov_model, "/models/ssdlite_mobilenet_v2.xml")
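As a usage sketch (not part of this commit), the saved IR can be loaded with the OpenVINO runtime; the 300x300 NHWC input shape is an assumption based on ssdlite_mobilenet_v2, not taken from this diff:

import numpy as np
import openvino as ov

core = ov.Core()
model = core.read_model("/models/ssdlite_mobilenet_v2.xml")
compiled = core.compile_model(model, "CPU")
# Dummy frame; the real detector feeds camera frames resized to the model input.
frame = np.zeros((1, 300, 300, 3), dtype=np.uint8)
detections = compiled([frame])[compiled.output(0)]
print(detections.shape)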

View File

@ -40,7 +40,7 @@ apt-get -qq install --no-install-recommends --no-install-suggests -y \
# btbn-ffmpeg -> amd64
if [[ "${TARGETARCH}" == "amd64" ]]; then
mkdir -p /usr/lib/btbn-ffmpeg
wget -qO btbn-ffmpeg.tar.xz "https://github.com/BtbN/FFmpeg-Builds/releases/download/autobuild-2022-07-31-12-37/ffmpeg-n5.1-2-g915ef932a3-linux64-gpl-5.1.tar.xz"
wget -qO btbn-ffmpeg.tar.xz "https://github.com/NickM-27/FFmpeg-Builds/releases/download/autobuild-2022-07-31-12-37/ffmpeg-n5.1-2-g915ef932a3-linux64-gpl-5.1.tar.xz"
tar -xf btbn-ffmpeg.tar.xz -C /usr/lib/btbn-ffmpeg --strip-components 1
rm -rf btbn-ffmpeg.tar.xz /usr/lib/btbn-ffmpeg/doc /usr/lib/btbn-ffmpeg/bin/ffplay
fi
@ -48,7 +48,7 @@ fi
# ffmpeg -> arm64
if [[ "${TARGETARCH}" == "arm64" ]]; then
mkdir -p /usr/lib/btbn-ffmpeg
wget -qO btbn-ffmpeg.tar.xz "https://github.com/BtbN/FFmpeg-Builds/releases/download/autobuild-2022-07-31-12-37/ffmpeg-n5.1-2-g915ef932a3-linuxarm64-gpl-5.1.tar.xz"
wget -qO btbn-ffmpeg.tar.xz "https://github.com/NickM-27/FFmpeg-Builds/releases/download/autobuild-2022-07-31-12-37/ffmpeg-n5.1-2-g915ef932a3-linuxarm64-gpl-5.1.tar.xz"
tar -xf btbn-ffmpeg.tar.xz -C /usr/lib/btbn-ffmpeg --strip-components 1
rm -rf btbn-ffmpeg.tar.xz /usr/lib/btbn-ffmpeg/doc /usr/lib/btbn-ffmpeg/bin/ffplay
fi

docker/main/install_tempio.sh (new executable file, 16 lines)
View File

@ -0,0 +1,16 @@
#!/bin/bash
set -euxo pipefail
tempio_version="2021.09.0"
if [[ "${TARGETARCH}" == "amd64" ]]; then
arch="amd64"
elif [[ "${TARGETARCH}" == "arm64" ]]; then
arch="aarch64"
fi
mkdir -p /rootfs/usr/local/tempio/bin
wget -q -O /rootfs/usr/local/tempio/bin/tempio "https://github.com/home-assistant/tempio/releases/download/${tempio_version}/tempio_${arch}"
chmod 755 /rootfs/usr/local/tempio/bin/tempio

View File

@ -1,5 +1,3 @@
numpy
# Openvino Library - Custom built with MYRIAD support
openvino @ https://github.com/NateMeyer/openvino-wheels/releases/download/multi-arch_2022.3.1/openvino-2022.3.1-1-cp39-cp39-manylinux_2_31_x86_64.whl; platform_machine == 'x86_64'
openvino @ https://github.com/NateMeyer/openvino-wheels/releases/download/multi-arch_2022.3.1/openvino-2022.3.1-1-cp39-cp39-linux_aarch64.whl; platform_machine == 'aarch64'
openvino-dev[tensorflow2] @ https://github.com/NateMeyer/openvino-wheels/releases/download/multi-arch_2022.3.1/openvino_dev-2022.3.1-1-py3-none-any.whl
tensorflow
openvino-dev>=2024.0.0

View File

@ -1,29 +1,32 @@
click == 8.1.*
Flask == 2.3.*
Flask == 3.0.*
Flask_Limiter == 3.7.*
imutils == 0.5.*
matplotlib == 3.7.*
joserfc == 0.11.*
markupsafe == 2.1.*
mypy == 1.6.1
numpy == 1.23.*
numpy == 1.26.*
onvif_zeep == 0.2.12
opencv-python-headless == 4.7.0.*
paho-mqtt == 1.6.*
opencv-python-headless == 4.9.0.*
paho-mqtt == 2.1.*
pandas == 2.2.*
peewee == 3.17.*
peewee_migrate == 1.12.*
psutil == 5.9.*
pydantic == 1.10.*
pydantic == 2.7.*
git+https://github.com/fbcotter/py3nvml#egg=py3nvml
PyYAML == 6.0.*
pytz == 2023.3.post1
pytz == 2024.1
pyzmq == 26.0.*
ruamel.yaml == 0.18.*
tzlocal == 5.2
types-PyYAML == 6.0.*
requests == 2.31.*
types-requests == 2.31.*
scipy == 1.11.*
requests == 2.32.*
types-requests == 2.32.*
scipy == 1.13.*
norfair == 2.2.*
setproctitle == 1.3.*
ws4py == 0.5.*
unidecode == 1.3.*
# Openvino Library - Custom built with MYRIAD support
openvino @ https://github.com/NateMeyer/openvino-wheels/releases/download/multi-arch_2022.3.1/openvino-2022.3.1-1-cp39-cp39-manylinux_2_31_x86_64.whl; platform_machine == 'x86_64'
openvino @ https://github.com/NateMeyer/openvino-wheels/releases/download/multi-arch_2022.3.1/openvino-2022.3.1-1-cp39-cp39-linux_aarch64.whl; platform_machine == 'aarch64'
onnxruntime == 1.18.*
openvino == 2024.1.*

View File

@ -0,0 +1 @@
certsync

View File

@ -0,0 +1 @@
certsync-pipeline

View File

@ -0,0 +1,4 @@
#!/command/with-contenv bash
# shellcheck shell=bash
exec logutil-service /dev/shm/logs/certsync

View File

@ -0,0 +1 @@
longrun

View File

@ -0,0 +1,30 @@
#!/command/with-contenv bash
# shellcheck shell=bash
# Take down the S6 supervision tree when the service fails
set -o errexit -o nounset -o pipefail
# Logs should be sent to stdout so that s6 can collect them
declare exit_code_container
exit_code_container=$(cat /run/s6-linux-init-container-results/exitcode)
readonly exit_code_container
readonly exit_code_service="${1}"
readonly exit_code_signal="${2}"
readonly service="CERTSYNC"
echo "[INFO] Service ${service} exited with code ${exit_code_service} (by signal ${exit_code_signal})"
if [[ "${exit_code_service}" -eq 256 ]]; then
if [[ "${exit_code_container}" -eq 0 ]]; then
echo $((128 + exit_code_signal)) >/run/s6-linux-init-container-results/exitcode
fi
if [[ "${exit_code_signal}" -eq 15 ]]; then
exec /run/s6/basedir/bin/halt
fi
elif [[ "${exit_code_service}" -ne 0 ]]; then
if [[ "${exit_code_container}" -eq 0 ]]; then
echo "${exit_code_service}" >/run/s6-linux-init-container-results/exitcode
fi
exec /run/s6/basedir/bin/halt
fi
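For reference, an exit_code_service of 256 is how s6 reports that the process died from a signal rather than exiting on its own; the container exit code then becomes the conventional 128 + signal number. A sketch of the mapping (illustrative only):

def container_exit_code(exit_code_service: int, exit_code_signal: int) -> int:
    # 256 means "killed by a signal"; surface 128 + signal, per convention.
    if exit_code_service == 256:
        return 128 + exit_code_signal
    return exit_code_service  # normal exit codes pass through unchanged

print(container_exit_code(256, 15))  # -> 143 (SIGTERM)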

View File

@ -0,0 +1 @@
certsync-log

View File

@ -0,0 +1,58 @@
#!/command/with-contenv bash
# shellcheck shell=bash
# Start the CERTSYNC service
set -o errexit -o nounset -o pipefail
# Logs should be sent to stdout so that s6 can collect them
echo "[INFO] Starting certsync..."
lefile="/etc/letsencrypt/live/frigate/fullchain.pem"
tls_enabled=`python3 /usr/local/nginx/get_tls_settings.py | jq -r .enabled`
while true
do
if [[ "$tls_enabled" == 'false' ]]; then
sleep 9999
continue
fi
if [ ! -e $lefile ]
then
echo "[ERROR] TLS certificate does not exist: $lefile"
fi
leprint=`openssl x509 -in $lefile -fingerprint -noout 2>&1 || echo 'failed'`
case "$leprint" in
*Fingerprint*)
;;
*)
echo "[ERROR] Missing fingerprint from $lefile"
;;
esac
liveprint=`echo | openssl s_client -showcerts -connect 127.0.0.1:8971 2>&1 | openssl x509 -fingerprint 2>&1 | grep -i fingerprint || echo 'failed'`
case "$liveprint" in
*Fingerprint*)
;;
*)
echo "[ERROR] Missing fingerprint from current nginx TLS cert"
;;
esac
if [[ "$leprint" != "failed" && "$liveprint" != "failed" && "$leprint" != "$liveprint" ]]
then
echo "[INFO] Reloading nginx to refresh TLS certificate"
echo "$lefile: $leprint"
/usr/local/nginx/sbin/nginx -s reload
fi
sleep 60
done
exit 0
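The loop's fingerprint comparison can be illustrated in Python, under the same assumptions the script makes (the leaf certificate is the first PEM block in fullchain.pem and nginx serves TLS locally on port 8971):

import hashlib
import ssl
import subprocess

LEFILE = "/etc/letsencrypt/live/frigate/fullchain.pem"
END = "-----END CERTIFICATE-----"

def fingerprint(pem: str) -> str:
    # Hash the DER form of the cert, like `openssl x509 -fingerprint`.
    return hashlib.sha256(ssl.PEM_cert_to_DER_cert(pem)).hexdigest()

file_leaf = open(LEFILE).read().split(END)[0] + END + "\n"  # first cert in the chain
live_leaf = ssl.get_server_certificate(("127.0.0.1", 8971))

if fingerprint(file_leaf) != fingerprint(live_leaf):
    # Same effect as the script above: pick up the renewed certificate.
    subprocess.run(["/usr/local/nginx/sbin/nginx", "-s", "reload"], check=True)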

View File

@ -0,0 +1 @@
30000

View File

@ -0,0 +1 @@
longrun

View File

@ -16,8 +16,8 @@ function migrate_db_path() {
if [[ -f "${config_file_yaml}" ]]; then
config_file="${config_file_yaml}"
elif [[ ! -f "${config_file}" ]]; then
echo "[ERROR] Frigate config file not found"
return 1
# Frigate will create the config file on startup
return 0
fi
unset config_file_yaml

View File

@ -4,7 +4,7 @@
set -o errexit -o nounset -o pipefail
dirs=(/dev/shm/logs/frigate /dev/shm/logs/go2rtc /dev/shm/logs/nginx)
dirs=(/dev/shm/logs/frigate /dev/shm/logs/go2rtc /dev/shm/logs/nginx /dev/shm/logs/certsync)
mkdir -p "${dirs[@]}"
chown nobody:nogroup "${dirs[@]}"

View File

@ -0,0 +1,5 @@
#!/usr/bin/env bash
set -e
# Wait for PID file to exist.
while ! test -f /run/nginx.pid; do sleep 1; done

View File

@ -0,0 +1 @@
3

View File

@ -8,6 +8,84 @@ set -o errexit -o nounset -o pipefail
echo "[INFO] Starting NGINX..."
# Taken from https://github.com/felipecrs/cgroup-scripts/commits/master/get_cpus.sh
function get_cpus() {
local quota=""
local period=""
if [ -f /sys/fs/cgroup/cgroup.controllers ]; then
if [ -f /sys/fs/cgroup/cpu.max ]; then
read -r quota period </sys/fs/cgroup/cpu.max
if [ "$quota" = "max" ]; then
quota=""
period=""
fi
else
echo "[WARN] /sys/fs/cgroup/cpu.max not found. Falling back to /proc/cpuinfo." >&2
fi
else
if [ -f /sys/fs/cgroup/cpu/cpu.cfs_quota_us ] && [ -f /sys/fs/cgroup/cpu/cpu.cfs_period_us ]; then
quota=$(cat /sys/fs/cgroup/cpu/cpu.cfs_quota_us)
period=$(cat /sys/fs/cgroup/cpu/cpu.cfs_period_us)
if [ "$quota" = "-1" ]; then
quota=""
period=""
fi
else
echo "[WARN] /sys/fs/cgroup/cpu/cpu.cfs_quota_us or /sys/fs/cgroup/cpu/cpu.cfs_period_us not found. Falling back to /proc/cpuinfo." >&2
fi
fi
local cpus
if [ -n "${quota}" ] && [ -n "${period}" ]; then
cpus=$((quota / period))
if [ "$cpus" -eq 0 ]; then
cpus=1
fi
else
cpus=$(grep -c ^processor /proc/cpuinfo)
fi
printf '%s' "$cpus"
}
function set_worker_processes() {
# Capture number of assigned CPUs to calculate worker processes
local cpus
cpus=$(get_cpus)
if [[ "${cpus}" -gt 4 ]]; then
cpus=4
fi
# we need to catch any errors because sed will fail if the user has bind-mounted a custom nginx file
sed -i "s/worker_processes auto;/worker_processes ${cpus};/" /usr/local/nginx/conf/nginx.conf || true
}
set_worker_processes
# ensure the directory for ACME challenges exists
mkdir -p /etc/letsencrypt/www
# Create self-signed certs if needed
letsencrypt_path=/etc/letsencrypt/live/frigate
mkdir -p $letsencrypt_path
if [ ! \( -f "$letsencrypt_path/privkey.pem" -a -f "$letsencrypt_path/fullchain.pem" \) ]; then
echo "[INFO] No TLS certificate found. Generating a self signed certificate..."
openssl req -new -newkey rsa:4096 -days 365 -nodes -x509 \
-subj "/O=FRIGATE DEFAULT CERT/CN=*" \
-keyout "$letsencrypt_path/privkey.pem" -out "$letsencrypt_path/fullchain.pem" 2>/dev/null
fi
# build templates for optional TLS support
python3 /usr/local/nginx/get_tls_settings.py | \
tempio -template /usr/local/nginx/templates/listen.gotmpl \
-out /usr/local/nginx/conf/listen.conf
# Replace the bash process with the NGINX process, redirecting stderr to stdout
exec 2>&1
exec nginx
exec \
s6-notifyoncheck -t 30000 -n 1 \
nginx
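The same quota-to-worker calculation, as a Python sketch for readability (cgroup v2 path only; it falls back to the processor count like the script above):

import os

def get_cpus() -> int:
    # cgroup v2: cpu.max contains "<quota> <period>" or "max <period>".
    try:
        quota, period = open("/sys/fs/cgroup/cpu.max").read().split()
        if quota != "max":
            return max(1, int(quota) // int(period))
    except FileNotFoundError:
        pass
    # Fallback, equivalent to counting ^processor lines in /proc/cpuinfo.
    return os.cpu_count() or 1

workers = min(get_cpus(), 4)  # nginx worker_processes is capped at 4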

View File

@ -0,0 +1,80 @@
0 person
1 bicycle
2 car
3 motorcycle
4 airplane
5 car
6 train
7 car
8 boat
9 traffic light
10 fire hydrant
11 stop sign
12 parking meter
13 bench
14 bird
15 cat
16 dog
17 horse
18 sheep
19 cow
20 elephant
21 bear
22 zebra
23 giraffe
24 backpack
25 umbrella
26 handbag
27 tie
28 suitcase
29 frisbee
30 skis
31 snowboard
32 sports ball
33 kite
34 baseball bat
35 baseball glove
36 skateboard
37 surfboard
38 tennis racket
39 bottle
40 wine glass
41 cup
42 fork
43 knife
44 spoon
45 bowl
46 banana
47 apple
48 sandwich
49 orange
50 broccoli
51 carrot
52 hot dog
53 pizza
54 donut
55 cake
56 chair
57 couch
58 potted plant
59 bed
60 dining table
61 toilet
62 tv
63 laptop
64 mouse
65 remote
66 keyboard
67 cell phone
68 microwave
69 oven
70 toaster
71 sink
72 refrigerator
73 book
74 clock
75 vase
76 scissors
77 teddy bear
78 hair drier
79 toothbrush

View File

@ -0,0 +1,91 @@
0 person
1 bicycle
2 car
3 motorcycle
4 airplane
5 bus
6 train
7 car
8 boat
9 traffic light
10 fire hydrant
11 street sign
12 stop sign
13 parking meter
14 bench
15 bird
16 cat
17 dog
18 horse
19 sheep
20 cow
21 elephant
22 bear
23 zebra
24 giraffe
25 hat
26 backpack
27 umbrella
28 shoe
29 eye glasses
30 handbag
31 tie
32 suitcase
33 frisbee
34 skis
35 snowboard
36 sports ball
37 kite
38 baseball bat
39 baseball glove
40 skateboard
41 surfboard
42 tennis racket
43 bottle
44 plate
45 wine glass
46 cup
47 fork
48 knife
49 spoon
50 bowl
51 banana
52 apple
53 sandwich
54 orange
55 broccoli
56 carrot
57 hot dog
58 pizza
59 donut
60 cake
61 chair
62 couch
63 potted plant
64 bed
65 mirror
66 dining table
67 window
68 desk
69 toilet
70 door
71 tv
72 laptop
73 mouse
74 remote
75 keyboard
76 cell phone
77 microwave
78 oven
79 toaster
80 sink
81 refrigerator
82 blender
83 book
84 clock
85 vase
86 scissors
87 teddy bear
88 hair drier
89 toothbrush
90 hair brush

View File

@ -21,9 +21,9 @@ FRIGATE_ENV_VARS = {k: v for k, v in os.environ.items() if k.startswith("FRIGATE_")}
if os.path.isdir("/run/secrets"):
for secret_file in os.listdir("/run/secrets"):
if secret_file.startswith("FRIGATE_"):
FRIGATE_ENV_VARS[secret_file] = Path(
os.path.join("/run/secrets", secret_file)
).read_text()
FRIGATE_ENV_VARS[secret_file] = (
Path(os.path.join("/run/secrets", secret_file)).read_text().strip()
)
config_file = os.environ.get("CONFIG_FILE", "/config/config.yml")
@ -32,13 +32,16 @@ config_file_yaml = config_file.replace(".yml", ".yaml")
if os.path.isfile(config_file_yaml):
config_file = config_file_yaml
with open(config_file) as f:
try:
with open(config_file) as f:
raw_config = f.read()
if config_file.endswith((".yaml", ".yml")):
if config_file.endswith((".yaml", ".yml")):
config: dict[str, any] = yaml.safe_load(raw_config)
elif config_file.endswith(".json"):
elif config_file.endswith(".json"):
config: dict[str, any] = json.loads(raw_config)
except FileNotFoundError:
config: dict[str, any] = {}
go2rtc_config: dict[str, any] = config.get("go2rtc", {})
@ -109,23 +112,9 @@ if int(os.environ["LIBAVFORMAT_VERSION_MAJOR"]) < 59:
"rtsp": "-fflags nobuffer -flags low_delay -stimeout 5000000 -user_agent go2rtc/ffmpeg -rtsp_transport tcp -i {input}"
}
elif go2rtc_config["ffmpeg"].get("rtsp") is None:
go2rtc_config["ffmpeg"][
"rtsp"
] = "-fflags nobuffer -flags low_delay -stimeout 5000000 -user_agent go2rtc/ffmpeg -rtsp_transport tcp -i {input}"
# add hardware acceleration presets for rockchip devices
# may be removed if frigate uses a go2rtc version that includes these presets
if go2rtc_config.get("ffmpeg") is None:
go2rtc_config["ffmpeg"] = {
"h264/rk": "-c:v h264_rkmpp_encoder -g 50 -bf 0",
"h265/rk": "-c:v hevc_rkmpp_encoder -g 50 -bf 0",
}
else:
if go2rtc_config["ffmpeg"].get("h264/rk") is None:
go2rtc_config["ffmpeg"]["h264/rk"] = "-c:v h264_rkmpp_encoder -g 50 -bf 0"
if go2rtc_config["ffmpeg"].get("h265/rk") is None:
go2rtc_config["ffmpeg"]["h265/rk"] = "-c:v hevc_rkmpp_encoder -g 50 -bf 0"
go2rtc_config["ffmpeg"]["rtsp"] = (
"-fflags nobuffer -flags low_delay -stimeout 5000000 -user_agent go2rtc/ffmpeg -rtsp_transport tcp -i {input}"
)
for name in go2rtc_config.get("streams", {}):
stream = go2rtc_config["streams"][name]
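The .strip() added above matters because files under /run/secrets typically end with a newline; without it, the newline would be embedded wherever the variable is substituted. A hedged sketch (assuming the stream templates are expanded with str.format, as the loop at the end of this hunk suggests; all values are hypothetical):

secret_file_contents = "hunter2\n"  # hypothetical Docker secret, trailing newline included
env = {"FRIGATE_RTSP_PASSWORD": secret_file_contents.strip()}

stream = "rtsp://admin:{FRIGATE_RTSP_PASSWORD}@192.168.0.10:554/cam1"
print(stream.format(**env))  # newline-free RTSP URL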

View File

@ -0,0 +1,43 @@
set $upstream_auth http://127.0.0.1:5001/auth;
## Virtual endpoint created by nginx to forward auth requests.
location /auth {
## Essential Proxy Configuration
internal;
proxy_pass $upstream_auth;
## Headers
# First strip out all the request headers
# Note: This is important to ensure that upgrade requests for secure
# websockets don't cause the backend to fail
proxy_pass_request_headers off;
# Pass info about the request
proxy_set_header X-Original-Method $request_method;
proxy_set_header X-Original-URL $scheme://$http_host$request_uri;
proxy_set_header X-Server-Port $server_port;
proxy_set_header Content-Length "";
# Pass along auth related info
proxy_set_header Authorization $http_authorization;
proxy_set_header Cookie $http_cookie;
proxy_set_header X-CSRF-TOKEN "1";
# include headers from common auth proxies
include proxy_trusted_headers.conf;
## Basic Proxy Configuration
proxy_pass_request_body off;
proxy_next_upstream error timeout invalid_header http_500 http_502 http_503; # Timeout if the real server is dead
proxy_redirect http:// $scheme://;
proxy_http_version 1.1;
proxy_cache_bypass $cookie_session;
proxy_no_cache $cookie_session;
proxy_buffers 4 32k;
client_body_buffer_size 128k;
## Advanced Proxy Configuration
send_timeout 5m;
proxy_read_timeout 240;
proxy_send_timeout 240;
proxy_connect_timeout 240;
}
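What nginx expects back from this /auth subrequest is only a status code: a 2xx response lets the original request through, anything else blocks it. A minimal Flask sketch of that contract (illustrative only, not Frigate's actual /auth handler):

from flask import Flask, request

app = Flask(__name__)

@app.route("/auth")
def auth():
    # Illustrative check: accept if an upstream proxy identified the user.
    if request.headers.get("Remote-User"):
        return "", 200
    return "", 401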

View File

@ -0,0 +1,22 @@
## Send a subrequest to verify if the user is authenticated and has permission to access the resource.
auth_request /auth;
## Save the upstream metadata response headers from Authelia to variables.
auth_request_set $user $upstream_http_remote_user;
auth_request_set $groups $upstream_http_remote_groups;
auth_request_set $name $upstream_http_remote_name;
auth_request_set $email $upstream_http_remote_email;
## Inject the metadata response headers from the variables into the request made to the backend.
proxy_set_header Remote-User $user;
proxy_set_header Remote-Groups $groups;
proxy_set_header Remote-Email $email;
proxy_set_header Remote-Name $name;
## Refresh the cookie as needed
auth_request_set $auth_cookie $upstream_http_set_cookie;
add_header Set-Cookie $auth_cookie;
## Pass the location header back up if it exists
auth_request_set $redirection_url $upstream_http_location;
add_header Location $redirection_url;

View File

@ -0,0 +1,4 @@
upstream go2rtc {
server 127.0.0.1:1984;
keepalive 1024;
}

View File

@ -56,13 +56,10 @@ http {
keepalive 1024;
}
upstream go2rtc {
server 127.0.0.1:1984;
keepalive 1024;
}
include go2rtc_upstream.conf;
server {
listen 5000;
include listen.conf;
# vod settings
vod_base_url '';
@ -95,7 +92,10 @@ http {
gzip on;
gzip_types application/vnd.apple.mpegurl;
include auth_location.conf;
location /vod/ {
include auth_request.conf;
aio threads;
vod hls;
@ -107,6 +107,7 @@ http {
}
location /stream/ {
include auth_request.conf;
add_header Cache-Control "no-store";
expires off;
@ -121,12 +122,14 @@ http {
}
location /clips/ {
include auth_request.conf;
types {
video/mp4 mp4;
image/jpeg jpg;
}
expires 7d;
add_header Cache-Control "public";
autoindex on;
root /media/frigate;
}
@ -137,6 +140,7 @@ http {
}
location /recordings/ {
include auth_request.conf;
types {
video/mp4 mp4;
}
@ -147,6 +151,7 @@ http {
}
location /exports/ {
include auth_request.conf;
types {
video/mp4 mp4;
}
@ -157,17 +162,20 @@ http {
}
location /ws {
include auth_request.conf;
proxy_pass http://mqtt_ws/;
include proxy.conf;
}
location /live/jsmpeg/ {
include auth_request.conf;
proxy_pass http://jsmpeg/;
include proxy.conf;
}
# frigate lovelace card uses this path
location /live/mse/api/ws {
include auth_request.conf;
limit_except GET {
deny all;
}
@ -176,6 +184,7 @@ http {
}
location /live/webrtc/api/ws {
include auth_request.conf;
limit_except GET {
deny all;
}
@ -185,6 +194,7 @@ http {
# pass through go2rtc player
location /live/webrtc/webrtc.html {
include auth_request.conf;
limit_except GET {
deny all;
}
@ -194,6 +204,7 @@ http {
# frontend uses this to fetch the version
location /api/go2rtc/api {
include auth_request.conf;
limit_except GET {
deny all;
}
@ -203,6 +214,7 @@ http {
# integration uses this to add webrtc candidate
location /api/go2rtc/webrtc {
include auth_request.conf;
limit_except POST {
deny all;
}
@ -210,13 +222,15 @@ http {
include proxy.conf;
}
location ~* /api/.*\.(jpg|jpeg|png)$ {
location ~* /api/.*\.(jpg|jpeg|png|webp|gif)$ {
include auth_request.conf;
rewrite ^/api/(.*)$ $1 break;
proxy_pass http://frigate_api;
include proxy.conf;
}
location /api/ {
include auth_request.conf;
add_header Cache-Control "no-store";
expires off;
proxy_pass http://frigate_api/;
@ -231,27 +245,38 @@ http {
add_header X-Cache-Status $upstream_cache_status;
location /api/vod/ {
include auth_request.conf;
proxy_pass http://frigate_api/vod/;
include proxy.conf;
proxy_cache off;
}
location /api/login {
auth_request off;
rewrite ^/api(/.*)$ $1 break;
proxy_pass http://frigate_api;
include proxy.conf;
}
location /api/stats {
include auth_request.conf;
access_log off;
rewrite ^/api/(.*)$ $1 break;
rewrite ^/api(/.*)$ $1 break;
proxy_pass http://frigate_api;
include proxy.conf;
}
location /api/version {
include auth_request.conf;
access_log off;
rewrite ^/api/(.*)$ $1 break;
rewrite ^/api(/.*)$ $1 break;
proxy_pass http://frigate_api;
include proxy.conf;
}
}
location / {
# do not require auth for static assets
add_header Cache-Control "no-store";
expires off;
@ -273,22 +298,7 @@ http {
sub_filter_once off;
root /opt/frigate/web;
try_files $uri $uri/ /index.html;
}
}
}
rtmp {
server {
listen 1935;
chunk_size 4096;
allow publish 127.0.0.1;
deny publish all;
allow play all;
application live {
live on;
record off;
meta copy;
try_files $uri $uri.html $uri/ /index.html;
}
}
}

View File

@ -1,4 +1,26 @@
proxy_http_version 1.1;
## Headers
proxy_set_header Host $host;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "Upgrade";
proxy_set_header Host $host;
proxy_set_header X-Original-URL $scheme://$http_host$request_uri;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Forwarded-Host $http_host;
proxy_set_header X-Forwarded-URI $request_uri;
proxy_set_header X-Forwarded-Ssl on;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Real-IP $remote_addr;
## Basic Proxy Configuration
client_body_buffer_size 128k;
proxy_next_upstream error timeout invalid_header http_500 http_502 http_503; ## Timeout if the real server is dead.
proxy_redirect http:// $scheme://;
proxy_http_version 1.1;
proxy_cache_bypass $cookie_session;
proxy_no_cache $cookie_session;
proxy_buffers 64 256k;
## Advanced Proxy Configuration
send_timeout 5m;
proxy_read_timeout 360;
proxy_send_timeout 360;
proxy_connect_timeout 360;

View File

@ -0,0 +1,25 @@
# Header used to validate reverse proxy trust
proxy_set_header X-Proxy-Secret $http_x_proxy_secret;
# these headers will be copied to the /auth request and are available
# to be mapped in the config to Frigate's remote-user header
# List of headers sent by common authentication proxies:
# - Authelia
# - Traefik forward auth
# - oauth2_proxy
# - Authentik
proxy_set_header Remote-User $http_remote_user;
proxy_set_header Remote-Groups $http_remote_groups;
proxy_set_header Remote-Email $http_remote_email;
proxy_set_header Remote-Name $http_remote_name;
proxy_set_header X-Forwarded-User $http_x_forwarded_user;
proxy_set_header X-Forwarded-Groups $http_x_forwarded_groups;
proxy_set_header X-Forwarded-Email $http_x_forwarded_email;
proxy_set_header X-Forwarded-Preferred-Username $http_x_forwarded_preferred_username;
proxy_set_header X-authentik-username $http_x_authentik_username;
proxy_set_header X-authentik-groups $http_x_authentik_groups;
proxy_set_header X-authentik-email $http_x_authentik_email;
proxy_set_header X-authentik-name $http_x_authentik_name;
proxy_set_header X-authentik-uid $http_x_authentik_uid;

View File

@ -0,0 +1,28 @@
"""Prints the tls config as json to stdout."""
import json
import os
import yaml
config_file = os.environ.get("CONFIG_FILE", "/config/config.yml")
# Check if we can use .yaml instead of .yml
config_file_yaml = config_file.replace(".yml", ".yaml")
if os.path.isfile(config_file_yaml):
config_file = config_file_yaml
try:
with open(config_file) as f:
raw_config = f.read()
if config_file.endswith((".yaml", ".yml")):
config: dict[str, any] = yaml.safe_load(raw_config)
elif config_file.endswith(".json"):
config: dict[str, any] = json.loads(raw_config)
except FileNotFoundError:
config: dict[str, any] = {}
tls_config: dict[str, any] = config.get("tls", {"enabled": True})
print(json.dumps(tls_config))
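Note the default: with no tls section in the config, the script reports TLS as enabled. The defaulting is simply:

import json

config = {}  # a parsed config with no "tls" section
print(json.dumps(config.get("tls", {"enabled": True})))  # -> {"enabled": true}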

View File

@ -0,0 +1,33 @@
# intended for internal traffic, not protected by auth
listen 5000;
{{ if not .enabled }}
# intended for external traffic, protected by auth
listen 8971;
{{ else }}
# intended for external traffic, protected by auth
listen 8971 ssl;
ssl_certificate /etc/letsencrypt/live/frigate/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/frigate/privkey.pem;
# generated 2024-06-01, Mozilla Guideline v5.7, nginx 1.25.3, OpenSSL 1.1.1w, modern configuration, no OCSP
# https://ssl-config.mozilla.org/#server=nginx&version=1.25.3&config=modern&openssl=1.1.1w&ocsp=false&guideline=5.7
ssl_session_timeout 1d;
ssl_session_cache shared:MozSSL:10m; # about 40000 sessions
ssl_session_tickets off;
# modern configuration
ssl_protocols TLSv1.3;
ssl_prefer_server_ciphers off;
# HSTS (ngx_http_headers_module is required) (63072000 seconds)
add_header Strict-Transport-Security "max-age=63072000" always;
# ACME challenge location
location /.well-known/acme-challenge/ {
default_type "text/plain";
root /etc/letsencrypt/www;
}
{{ end }}

View File

@ -9,7 +9,7 @@ COPY docker/rockchip/requirements-wheels-rk.txt /requirements-wheels-rk.txt
RUN sed -i "/https:\/\//d" /requirements-wheels.txt
RUN pip3 wheel --wheel-dir=/rk-wheels -c /requirements-wheels.txt -r /requirements-wheels-rk.txt
FROM deps AS rk-deps
FROM deps AS rk-frigate
ARG TARGETARCH
RUN --mount=type=bind,from=rk-wheels,source=/rk-wheels,target=/deps/rk-wheels \
@ -18,12 +18,9 @@ RUN --mount=type=bind,from=rk-wheels,source=/rk-wheels,target=/deps/rk-wheels \
WORKDIR /opt/frigate/
COPY --from=rootfs / /
ADD https://github.com/MarcA711/rknpu2/releases/download/v1.5.2/librknnrt_rk356x.so /usr/lib/
ADD https://github.com/MarcA711/rknpu2/releases/download/v1.5.2/librknnrt_rk3588.so /usr/lib/
# TODO: models were removed; support for other models may need to be added back in
ADD https://github.com/MarcA711/rknn-toolkit2/releases/download/v2.0.0/librknnrt.so /usr/lib/
RUN rm -rf /usr/lib/btbn-ffmpeg/bin/ffmpeg
RUN rm -rf /usr/lib/btbn-ffmpeg/bin/ffprobe
ADD --chmod=111 https://github.com/MarcA711/Rockchip-FFmpeg-Builds/releases/download/6.0-1/ffmpeg /usr/lib/btbn-ffmpeg/bin/
ADD --chmod=111 https://github.com/MarcA711/Rockchip-FFmpeg-Builds/releases/download/6.0-1/ffprobe /usr/lib/btbn-ffmpeg/bin/
ADD --chmod=111 https://github.com/MarcA711/Rockchip-FFmpeg-Builds/releases/download/6.1-5/ffmpeg /usr/lib/btbn-ffmpeg/bin/
ADD --chmod=111 https://github.com/MarcA711/Rockchip-FFmpeg-Builds/releases/download/6.1-5/ffprobe /usr/lib/btbn-ffmpeg/bin/

View File

@ -1,2 +1 @@
hide-warnings == 0.17
rknn-toolkit-lite2 @ https://github.com/MarcA711/rknn-toolkit2/releases/download/v1.5.2/rknn_toolkit_lite2-1.5.2-cp39-cp39-linux_aarch64.whl
rknn-toolkit-lite2 @ https://github.com/MarcA711/rknn-toolkit2/releases/download/v2.0.0/rknn_toolkit_lite2-2.0.0b0-cp39-cp39-linux_aarch64.whl

View File

@ -1,9 +1,3 @@
target wget {
dockerfile = "docker/main/Dockerfile"
platforms = ["linux/arm64"]
target = "wget"
}
target wheels {
dockerfile = "docker/main/Dockerfile"
platforms = ["linux/arm64"]
@ -25,7 +19,6 @@ target rootfs {
target rk {
dockerfile = "docker/rockchip/Dockerfile"
contexts = {
wget = "target:wget",
wheels = "target:wheels",
deps = "target:deps",
rootfs = "target:rootfs"

docker/rocm/Dockerfile (new file, 106 lines)
View File

@ -0,0 +1,106 @@
# syntax=docker/dockerfile:1.4
# https://askubuntu.com/questions/972516/debian-frontend-environment-variable
ARG DEBIAN_FRONTEND=noninteractive
ARG ROCM=5.7.3
ARG AMDGPU=gfx900
ARG HSA_OVERRIDE_GFX_VERSION
ARG HSA_OVERRIDE
#######################################################################
FROM ubuntu:focal as rocm
ARG ROCM
RUN apt-get update && apt-get -y upgrade
RUN apt-get -y install gnupg wget
RUN mkdir --parents --mode=0755 /etc/apt/keyrings
RUN wget https://repo.radeon.com/rocm/rocm.gpg.key -O - | gpg --dearmor | tee /etc/apt/keyrings/rocm.gpg > /dev/null
COPY docker/rocm/rocm.list /etc/apt/sources.list.d/
COPY docker/rocm/rocm-pin-600 /etc/apt/preferences.d/
RUN apt-get update
RUN apt-get -y install --no-install-recommends migraphx
RUN apt-get -y install --no-install-recommends migraphx-dev
RUN mkdir -p /opt/rocm-dist/opt/rocm-$ROCM/lib
RUN cd /opt/rocm-$ROCM/lib && cp -dpr libMIOpen*.so* libamd*.so* libhip*.so* libhsa*.so* libmigraphx*.so* librocm*.so* librocblas*.so* /opt/rocm-dist/opt/rocm-$ROCM/lib/
RUN cd /opt/rocm-dist/opt/ && ln -s rocm-$ROCM rocm
RUN mkdir -p /opt/rocm-dist/etc/ld.so.conf.d/
RUN echo /opt/rocm/lib|tee /opt/rocm-dist/etc/ld.so.conf.d/rocm.conf
#######################################################################
FROM --platform=linux/amd64 debian:11 as debian-base
RUN apt-get update && apt-get -y upgrade
RUN apt-get -y install --no-install-recommends libelf1 libdrm2 libdrm-amdgpu1 libnuma1 kmod
RUN apt-get -y install python3
#######################################################################
# ROCm does not come with migraphx wrappers for python 3.9, so we build it here
FROM debian-base as debian-build
ARG ROCM
COPY --from=rocm /opt/rocm-$ROCM /opt/rocm-$ROCM
RUN ln -s /opt/rocm-$ROCM /opt/rocm
RUN apt-get -y install g++ cmake
RUN apt-get -y install python3-pybind11 python3.9-distutils python3-dev
WORKDIR /opt/build
COPY docker/rocm/migraphx .
RUN mkdir build && cd build && cmake .. && make install
#######################################################################
FROM deps AS deps-prelim
# need this to install libnuma1
RUN apt-get update
# no upgrade?!?!
RUN apt-get -y install libnuma1
WORKDIR /opt/frigate/
COPY --from=rootfs / /
COPY docker/rocm/rootfs/ /
#######################################################################
FROM scratch AS rocm-dist
ARG ROCM
ARG AMDGPU
COPY --from=rocm /opt/rocm-$ROCM/bin/rocminfo /opt/rocm-$ROCM/bin/migraphx-driver /opt/rocm-$ROCM/bin/
COPY --from=rocm /opt/rocm-$ROCM/share/miopen/db/*$AMDGPU* /opt/rocm-$ROCM/share/miopen/db/
COPY --from=rocm /opt/rocm-$ROCM/lib/rocblas/library/*$AMDGPU* /opt/rocm-$ROCM/lib/rocblas/library/
COPY --from=rocm /opt/rocm-dist/ /
COPY --from=debian-build /opt/rocm/lib/migraphx.cpython-39-x86_64-linux-gnu.so /opt/rocm-$ROCM/lib/
#######################################################################
FROM deps-prelim AS rocm-prelim-hsa-override0
ENV HSA_ENABLE_SDMA=0
COPY --from=rocm-dist / /
RUN ldconfig
#######################################################################
FROM rocm-prelim-hsa-override0 as rocm-prelim-hsa-override1
ARG HSA_OVERRIDE_GFX_VERSION
ENV HSA_OVERRIDE_GFX_VERSION=$HSA_OVERRIDE_GFX_VERSION
#######################################################################
FROM rocm-prelim-hsa-override$HSA_OVERRIDE as rocm-deps
# Request yolov8 download at startup
ENV DOWNLOAD_YOLOV8=1

View File

@ -0,0 +1,26 @@
cmake_minimum_required(VERSION 3.1)
set(CMAKE_CXX_STANDARD 17)
set(CMAKE_CXX_STANDARD_REQUIRED ON)
set(CMAKE_CXX_EXTENSIONS OFF)
if(NOT CMAKE_BUILD_TYPE)
set(CMAKE_BUILD_TYPE Release)
endif()
SET(CMAKE_INSTALL_RPATH_USE_LINK_PATH TRUE)
project(migraphx_py)
include_directories(/opt/rocm/include)
find_package(pybind11 REQUIRED)
pybind11_add_module(migraphx migraphx_py.cpp)
target_link_libraries(migraphx PRIVATE /opt/rocm/lib/libmigraphx.so /opt/rocm/lib/libmigraphx_tf.so /opt/rocm/lib/libmigraphx_onnx.so)
install(TARGETS migraphx
COMPONENT python
LIBRARY DESTINATION /opt/rocm/lib
)
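Once installed into /opt/rocm/lib, the wrapper is importable from Python. A minimal usage sketch (assuming a GPU target and an ONNX model; the model path is hypothetical):

import migraphx

prog = migraphx.parse_onnx("/models/yolov8n.onnx")  # hypothetical model path
prog.compile(migraphx.get_target("gpu"))
print(prog.get_parameter_shapes())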

View File

@ -0,0 +1,582 @@
/*
* The MIT License (MIT)
*
* Copyright (c) 2015-2022 Advanced Micro Devices, Inc. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
* copies of the Software, and to permit persons to whom the Software is
* furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
* AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/
#include <pybind11/pybind11.h>
#include <pybind11/stl.h>
#include <pybind11/numpy.h>
#include <migraphx/program.hpp>
#include <migraphx/instruction_ref.hpp>
#include <migraphx/operation.hpp>
#include <migraphx/quantization.hpp>
#include <migraphx/generate.hpp>
#include <migraphx/instruction.hpp>
#include <migraphx/ref/target.hpp>
#include <migraphx/stringutils.hpp>
#include <migraphx/tf.hpp>
#include <migraphx/onnx.hpp>
#include <migraphx/load_save.hpp>
#include <migraphx/register_target.hpp>
#include <migraphx/json.hpp>
#include <migraphx/make_op.hpp>
#include <migraphx/op/common.hpp>
#ifdef HAVE_GPU
#include <migraphx/gpu/hip.hpp>
#endif
using half = half_float::half;
namespace py = pybind11;
#ifdef __clang__
#define MIGRAPHX_PUSH_UNUSED_WARNING \
_Pragma("clang diagnostic push") \
_Pragma("clang diagnostic ignored \"-Wused-but-marked-unused\"")
#define MIGRAPHX_POP_WARNING _Pragma("clang diagnostic pop")
#else
#define MIGRAPHX_PUSH_UNUSED_WARNING
#define MIGRAPHX_POP_WARNING
#endif
#define MIGRAPHX_PYBIND11_MODULE(...) \
MIGRAPHX_PUSH_UNUSED_WARNING \
PYBIND11_MODULE(__VA_ARGS__) \
MIGRAPHX_POP_WARNING
#define MIGRAPHX_PYTHON_GENERATE_SHAPE_ENUM(x, t) .value(#x, migraphx::shape::type_t::x)
namespace migraphx {
migraphx::value to_value(py::kwargs kwargs);
migraphx::value to_value(py::list lst);
template <class T, class F>
void visit_py(T x, F f)
{
if(py::isinstance<py::kwargs>(x))
{
f(to_value(x.template cast<py::kwargs>()));
}
else if(py::isinstance<py::list>(x))
{
f(to_value(x.template cast<py::list>()));
}
else if(py::isinstance<py::bool_>(x))
{
f(x.template cast<bool>());
}
else if(py::isinstance<py::int_>(x) or py::hasattr(x, "__index__"))
{
f(x.template cast<int>());
}
else if(py::isinstance<py::float_>(x))
{
f(x.template cast<float>());
}
else if(py::isinstance<py::str>(x))
{
f(x.template cast<std::string>());
}
else if(py::isinstance<migraphx::shape::dynamic_dimension>(x))
{
f(migraphx::to_value(x.template cast<migraphx::shape::dynamic_dimension>()));
}
else
{
MIGRAPHX_THROW("VISIT_PY: Unsupported data type!");
}
}
migraphx::value to_value(py::list lst)
{
migraphx::value v = migraphx::value::array{};
for(auto val : lst)
{
visit_py(val, [&](auto py_val) { v.push_back(py_val); });
}
return v;
}
migraphx::value to_value(py::kwargs kwargs)
{
migraphx::value v = migraphx::value::object{};
for(auto arg : kwargs)
{
auto&& key = py::str(arg.first);
auto&& val = arg.second;
visit_py(val, [&](auto py_val) { v[key] = py_val; });
}
return v;
}
} // namespace migraphx
namespace pybind11 {
namespace detail {
template <>
struct npy_format_descriptor<half>
{
static std::string format()
{
// following: https://docs.python.org/3/library/struct.html#format-characters
return "e";
}
static constexpr auto name() { return _("half"); }
};
} // namespace detail
} // namespace pybind11
template <class F>
void visit_type(const migraphx::shape& s, F f)
{
s.visit_type(f);
}
template <class T, class F>
void visit(const migraphx::raw_data<T>& x, F f)
{
x.visit(f);
}
template <class F>
void visit_types(F f)
{
migraphx::shape::visit_types(f);
}
template <class T>
py::buffer_info to_buffer_info(T& x)
{
migraphx::shape s = x.get_shape();
assert(s.type() != migraphx::shape::tuple_type);
if(s.dynamic())
MIGRAPHX_THROW("MIGRAPHX PYTHON: dynamic shape argument passed to to_buffer_info");
auto strides = s.strides();
std::transform(
strides.begin(), strides.end(), strides.begin(), [&](auto i) { return i * s.type_size(); });
py::buffer_info b;
visit_type(s, [&](auto as) {
// migraphx uses int8_t data to store the bool type, so we need to
// explicitly specify the data type as bool for python
if(s.type() == migraphx::shape::bool_type)
{
b = py::buffer_info(x.data(),
as.size(),
py::format_descriptor<bool>::format(),
s.ndim(),
s.lens(),
strides);
}
else
{
b = py::buffer_info(x.data(),
as.size(),
py::format_descriptor<decltype(as())>::format(),
s.ndim(),
s.lens(),
strides);
}
});
return b;
}
migraphx::shape to_shape(const py::buffer_info& info)
{
migraphx::shape::type_t t;
std::size_t n = 0;
visit_types([&](auto as) {
if(info.format == py::format_descriptor<decltype(as())>::format() or
(info.format == "l" and py::format_descriptor<decltype(as())>::format() == "q") or
(info.format == "L" and py::format_descriptor<decltype(as())>::format() == "Q"))
{
t = as.type_enum();
n = sizeof(as());
}
else if(info.format == "?" and py::format_descriptor<decltype(as())>::format() == "b")
{
t = migraphx::shape::bool_type;
n = sizeof(bool);
}
});
if(n == 0)
{
MIGRAPHX_THROW("MIGRAPHX PYTHON: Unsupported data type " + info.format);
}
auto strides = info.strides;
std::transform(strides.begin(), strides.end(), strides.begin(), [&](auto i) -> std::size_t {
return n > 0 ? i / n : 0;
});
// scalar support
if(info.shape.empty())
{
return migraphx::shape{t};
}
else
{
return migraphx::shape{t, info.shape, strides};
}
}
MIGRAPHX_PYBIND11_MODULE(migraphx, m)
{
py::class_<migraphx::shape> shape_cls(m, "shape");
shape_cls
.def(py::init([](py::kwargs kwargs) {
auto v = migraphx::to_value(kwargs);
auto t = migraphx::shape::parse_type(v.get("type", "float"));
if(v.contains("dyn_dims"))
{
auto dyn_dims =
migraphx::from_value<std::vector<migraphx::shape::dynamic_dimension>>(
v.at("dyn_dims"));
return migraphx::shape(t, dyn_dims);
}
auto lens = v.get<std::size_t>("lens", {1});
if(v.contains("strides"))
return migraphx::shape(t, lens, v.at("strides").to_vector<std::size_t>());
else
return migraphx::shape(t, lens);
}))
.def("type", &migraphx::shape::type)
.def("lens", &migraphx::shape::lens)
.def("strides", &migraphx::shape::strides)
.def("ndim", &migraphx::shape::ndim)
.def("elements", &migraphx::shape::elements)
.def("bytes", &migraphx::shape::bytes)
.def("type_string", &migraphx::shape::type_string)
.def("type_size", &migraphx::shape::type_size)
.def("dyn_dims", &migraphx::shape::dyn_dims)
.def("packed", &migraphx::shape::packed)
.def("transposed", &migraphx::shape::transposed)
.def("broadcasted", &migraphx::shape::broadcasted)
.def("standard", &migraphx::shape::standard)
.def("scalar", &migraphx::shape::scalar)
.def("dynamic", &migraphx::shape::dynamic)
.def("__eq__", std::equal_to<migraphx::shape>{})
.def("__ne__", std::not_equal_to<migraphx::shape>{})
.def("__repr__", [](const migraphx::shape& s) { return migraphx::to_string(s); });
py::enum_<migraphx::shape::type_t>(shape_cls, "type_t")
MIGRAPHX_SHAPE_VISIT_TYPES(MIGRAPHX_PYTHON_GENERATE_SHAPE_ENUM);
py::class_<migraphx::shape::dynamic_dimension>(shape_cls, "dynamic_dimension")
.def(py::init<>())
.def(py::init<std::size_t, std::size_t>())
.def(py::init<std::size_t, std::size_t, std::set<std::size_t>>())
.def_readwrite("min", &migraphx::shape::dynamic_dimension::min)
.def_readwrite("max", &migraphx::shape::dynamic_dimension::max)
.def_readwrite("optimals", &migraphx::shape::dynamic_dimension::optimals)
.def("is_fixed", &migraphx::shape::dynamic_dimension::is_fixed);
py::class_<migraphx::argument>(m, "argument", py::buffer_protocol())
.def_buffer([](migraphx::argument& x) -> py::buffer_info { return to_buffer_info(x); })
.def(py::init([](py::buffer b) {
py::buffer_info info = b.request();
return migraphx::argument(to_shape(info), info.ptr);
}))
.def("get_shape", &migraphx::argument::get_shape)
.def("data_ptr",
[](migraphx::argument& x) { return reinterpret_cast<std::uintptr_t>(x.data()); })
.def("tolist",
[](migraphx::argument& x) {
py::list l{x.get_shape().elements()};
visit(x, [&](auto data) { l = py::cast(data.to_vector()); });
return l;
})
.def("__eq__", std::equal_to<migraphx::argument>{})
.def("__ne__", std::not_equal_to<migraphx::argument>{})
.def("__repr__", [](const migraphx::argument& x) { return migraphx::to_string(x); });
py::class_<migraphx::target>(m, "target");
py::class_<migraphx::instruction_ref>(m, "instruction_ref")
.def("shape", [](migraphx::instruction_ref i) { return i->get_shape(); })
.def("op", [](migraphx::instruction_ref i) { return i->get_operator(); });
py::class_<migraphx::module, std::unique_ptr<migraphx::module, py::nodelete>>(m, "module")
.def("print", [](const migraphx::module& mm) { std::cout << mm << std::endl; })
.def(
"add_instruction",
[](migraphx::module& mm,
const migraphx::operation& op,
std::vector<migraphx::instruction_ref>& args,
std::vector<migraphx::module*>& mod_args) {
return mm.add_instruction(op, args, mod_args);
},
py::arg("op"),
py::arg("args"),
py::arg("mod_args") = std::vector<migraphx::module*>{})
.def(
"add_literal",
[](migraphx::module& mm, py::buffer data) {
py::buffer_info info = data.request();
auto literal_shape = to_shape(info);
return mm.add_literal(literal_shape, reinterpret_cast<char*>(info.ptr));
},
py::arg("data"))
.def(
"add_parameter",
[](migraphx::module& mm, const std::string& name, const migraphx::shape shape) {
return mm.add_parameter(name, shape);
},
py::arg("name"),
py::arg("shape"))
.def(
"add_return",
[](migraphx::module& mm, std::vector<migraphx::instruction_ref>& args) {
return mm.add_return(args);
},
py::arg("args"))
.def("__repr__", [](const migraphx::module& mm) { return migraphx::to_string(mm); });
py::class_<migraphx::program>(m, "program")
.def(py::init([]() { return migraphx::program(); }))
.def("get_parameter_names", &migraphx::program::get_parameter_names)
.def("get_parameter_shapes", &migraphx::program::get_parameter_shapes)
.def("get_output_shapes", &migraphx::program::get_output_shapes)
.def("is_compiled", &migraphx::program::is_compiled)
.def(
"compile",
[](migraphx::program& p,
const migraphx::target& t,
bool offload_copy,
bool fast_math,
bool exhaustive_tune) {
migraphx::compile_options options;
options.offload_copy = offload_copy;
options.fast_math = fast_math;
options.exhaustive_tune = exhaustive_tune;
p.compile(t, options);
},
py::arg("t"),
py::arg("offload_copy") = true,
py::arg("fast_math") = true,
py::arg("exhaustive_tune") = false)
.def("get_main_module", [](const migraphx::program& p) { return p.get_main_module(); })
.def(
"create_module",
[](migraphx::program& p, const std::string& name) { return p.create_module(name); },
py::arg("name"))
.def("run",
[](migraphx::program& p, py::dict params) {
migraphx::parameter_map pm;
for(auto x : params)
{
std::string key = x.first.cast<std::string>();
py::buffer b = x.second.cast<py::buffer>();
py::buffer_info info = b.request();
pm[key] = migraphx::argument(to_shape(info), info.ptr);
}
return p.eval(pm);
})
.def("run_async",
[](migraphx::program& p,
py::dict params,
std::uintptr_t stream,
std::string stream_name) {
migraphx::parameter_map pm;
for(auto x : params)
{
std::string key = x.first.cast<std::string>();
py::buffer b = x.second.cast<py::buffer>();
py::buffer_info info = b.request();
pm[key] = migraphx::argument(to_shape(info), info.ptr);
}
migraphx::execution_environment exec_env{
migraphx::any_ptr(reinterpret_cast<void*>(stream), stream_name), true};
return p.eval(pm, exec_env);
})
.def("sort", &migraphx::program::sort)
.def("print", [](const migraphx::program& p) { std::cout << p << std::endl; })
.def("__eq__", std::equal_to<migraphx::program>{})
.def("__ne__", std::not_equal_to<migraphx::program>{})
.def("__repr__", [](const migraphx::program& p) { return migraphx::to_string(p); });
py::class_<migraphx::operation> op(m, "op");
op.def(py::init([](const std::string& name, py::kwargs kwargs) {
migraphx::value v = migraphx::value::object{};
if(kwargs)
{
v = migraphx::to_value(kwargs);
}
return migraphx::make_op(name, v);
}))
.def("name", &migraphx::operation::name);
py::enum_<migraphx::op::pooling_mode>(op, "pooling_mode")
.value("average", migraphx::op::pooling_mode::average)
.value("max", migraphx::op::pooling_mode::max)
.value("lpnorm", migraphx::op::pooling_mode::lpnorm);
py::enum_<migraphx::op::rnn_direction>(op, "rnn_direction")
.value("forward", migraphx::op::rnn_direction::forward)
.value("reverse", migraphx::op::rnn_direction::reverse)
.value("bidirectional", migraphx::op::rnn_direction::bidirectional);
m.def(
"argument_from_pointer",
[](const migraphx::shape shape, const int64_t address) {
return migraphx::argument(shape, reinterpret_cast<void*>(address));
},
py::arg("shape"),
py::arg("address"));
m.def(
"parse_tf",
[](const std::string& filename,
bool is_nhwc,
unsigned int batch_size,
std::unordered_map<std::string, std::vector<std::size_t>> map_input_dims,
std::vector<std::string> output_names) {
return migraphx::parse_tf(
filename, migraphx::tf_options{is_nhwc, batch_size, map_input_dims, output_names});
},
"Parse tf protobuf (default format is nhwc)",
py::arg("filename"),
py::arg("is_nhwc") = true,
py::arg("batch_size") = 1,
py::arg("map_input_dims") = std::unordered_map<std::string, std::vector<std::size_t>>(),
py::arg("output_names") = std::vector<std::string>());
m.def(
"parse_onnx",
[](const std::string& filename,
unsigned int default_dim_value,
migraphx::shape::dynamic_dimension default_dyn_dim_value,
std::unordered_map<std::string, std::vector<std::size_t>> map_input_dims,
std::unordered_map<std::string, std::vector<migraphx::shape::dynamic_dimension>>
map_dyn_input_dims,
bool skip_unknown_operators,
bool print_program_on_error,
int64_t max_loop_iterations) {
migraphx::onnx_options options;
options.default_dim_value = default_dim_value;
options.default_dyn_dim_value = default_dyn_dim_value;
options.map_input_dims = map_input_dims;
options.map_dyn_input_dims = map_dyn_input_dims;
options.skip_unknown_operators = skip_unknown_operators;
options.print_program_on_error = print_program_on_error;
options.max_loop_iterations = max_loop_iterations;
return migraphx::parse_onnx(filename, options);
},
"Parse onnx file",
py::arg("filename"),
py::arg("default_dim_value") = 0,
py::arg("default_dyn_dim_value") = migraphx::shape::dynamic_dimension{1, 1},
py::arg("map_input_dims") = std::unordered_map<std::string, std::vector<std::size_t>>(),
py::arg("map_dyn_input_dims") =
std::unordered_map<std::string, std::vector<migraphx::shape::dynamic_dimension>>(),
py::arg("skip_unknown_operators") = false,
py::arg("print_program_on_error") = false,
py::arg("max_loop_iterations") = 10);
m.def(
"parse_onnx_buffer",
[](const std::string& onnx_buffer,
unsigned int default_dim_value,
migraphx::shape::dynamic_dimension default_dyn_dim_value,
std::unordered_map<std::string, std::vector<std::size_t>> map_input_dims,
std::unordered_map<std::string, std::vector<migraphx::shape::dynamic_dimension>>
map_dyn_input_dims,
bool skip_unknown_operators,
bool print_program_on_error) {
migraphx::onnx_options options;
options.default_dim_value = default_dim_value;
options.default_dyn_dim_value = default_dyn_dim_value;
options.map_input_dims = map_input_dims;
options.map_dyn_input_dims = map_dyn_input_dims;
options.skip_unknown_operators = skip_unknown_operators;
options.print_program_on_error = print_program_on_error;
return migraphx::parse_onnx_buffer(onnx_buffer, options);
},
"Parse onnx file",
py::arg("filename"),
py::arg("default_dim_value") = 0,
py::arg("default_dyn_dim_value") = migraphx::shape::dynamic_dimension{1, 1},
py::arg("map_input_dims") = std::unordered_map<std::string, std::vector<std::size_t>>(),
py::arg("map_dyn_input_dims") =
std::unordered_map<std::string, std::vector<migraphx::shape::dynamic_dimension>>(),
py::arg("skip_unknown_operators") = false,
py::arg("print_program_on_error") = false);
m.def(
"load",
[](const std::string& name, const std::string& format) {
migraphx::file_options options;
options.format = format;
return migraphx::load(name, options);
},
"Load MIGraphX program",
py::arg("filename"),
py::arg("format") = "msgpack");
m.def(
"save",
[](const migraphx::program& p, const std::string& name, const std::string& format) {
migraphx::file_options options;
options.format = format;
return migraphx::save(p, name, options);
},
"Save MIGraphX program",
py::arg("p"),
py::arg("filename"),
py::arg("format") = "msgpack");
m.def("get_target", &migraphx::make_target);
m.def("create_argument", [](const migraphx::shape& s, const std::vector<double>& values) {
if(values.size() != s.elements())
MIGRAPHX_THROW("Values and shape elements do not match");
migraphx::argument a{s};
a.fill(values.begin(), values.end());
return a;
});
m.def("generate_argument", &migraphx::generate_argument, py::arg("s"), py::arg("seed") = 0);
m.def("fill_argument", &migraphx::fill_argument, py::arg("s"), py::arg("value"));
m.def("quantize_fp16",
&migraphx::quantize_fp16,
py::arg("prog"),
py::arg("ins_names") = std::vector<std::string>{"all"});
m.def("quantize_int8",
&migraphx::quantize_int8,
py::arg("prog"),
py::arg("t"),
py::arg("calibration") = std::vector<migraphx::parameter_map>{},
py::arg("ins_names") = std::vector<std::string>{"dot", "convolution"});
#ifdef HAVE_GPU
m.def("allocate_gpu", &migraphx::gpu::allocate_gpu, py::arg("s"), py::arg("host") = false);
m.def("to_gpu", &migraphx::gpu::to_gpu, py::arg("arg"), py::arg("host") = false);
m.def("from_gpu", &migraphx::gpu::from_gpu);
m.def("gpu_sync", [] { migraphx::gpu::gpu_sync(); });
#endif
#ifdef VERSION_INFO
m.attr("__version__") = VERSION_INFO;
#else
m.attr("__version__") = "dev";
#endif
}
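For reference, here is a minimal sketch of how the resulting `migraphx` Python module can be driven, assuming the extension has been built and an ONNX file named `model.onnx` is available (both are assumptions, not part of this diff); `"ref"` is the CPU reference target bound above:

```python
import migraphx

# Parse the model and compile it for the CPU reference target
prog = migraphx.parse_onnx("model.onnx")
prog.compile(migraphx.get_target("ref"))

# Fill every input parameter with generated data, then evaluate
params = {}
for name, shape in prog.get_parameter_shapes().items():
    params[name] = migraphx.generate_argument(shape, seed=0)

outputs = prog.run(params)
print(outputs[0].tolist())
```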

docker/rocm/rocm-pin-600 Normal file

@ -0,0 +1,3 @@
Package: *
Pin: release o=repo.radeon.com
Pin-Priority: 600

docker/rocm/rocm.hcl Normal file

@ -0,0 +1,38 @@
variable "AMDGPU" {
default = "gfx900"
}
variable "ROCM" {
default = "5.7.3"
}
variable "HSA_OVERRIDE_GFX_VERSION" {
default = ""
}
variable "HSA_OVERRIDE" {
default = "1"
}
target deps {
dockerfile = "docker/main/Dockerfile"
platforms = ["linux/amd64"]
target = "deps"
}
target rootfs {
dockerfile = "docker/main/Dockerfile"
platforms = ["linux/amd64"]
target = "rootfs"
}
target rocm {
dockerfile = "docker/rocm/Dockerfile"
contexts = {
deps = "target:deps",
rootfs = "target:rootfs"
}
platforms = ["linux/amd64"]
args = {
AMDGPU = AMDGPU,
ROCM = ROCM,
HSA_OVERRIDE_GFX_VERSION = HSA_OVERRIDE_GFX_VERSION,
HSA_OVERRIDE = HSA_OVERRIDE
}
}

docker/rocm/rocm.list Normal file

@ -0,0 +1 @@
deb [arch=amd64 signed-by=/etc/apt/keyrings/rocm.gpg] https://repo.radeon.com/rocm/apt/5.7.3 focal main

docker/rocm/rocm.mk Normal file

@ -0,0 +1,17 @@
BOARDS += rocm
# AMD/ROCm is chunky, so we build a couple of smaller images for specific chipsets
ROCM_CHIPSETS:=gfx900:9.0.0 gfx1030:10.3.0 gfx1100:11.0.0
local-rocm: version
$(foreach chipset,$(ROCM_CHIPSETS),AMDGPU=$(word 1,$(subst :, ,$(chipset))) HSA_OVERRIDE_GFX_VERSION=$(word 2,$(subst :, ,$(chipset))) HSA_OVERRIDE=1 docker buildx bake --load --file=docker/rocm/rocm.hcl --set rocm.tags=frigate:latest-rocm-$(word 1,$(subst :, ,$(chipset))) rocm;)
unset HSA_OVERRIDE_GFX_VERSION && HSA_OVERRIDE=0 AMDGPU=gfx docker buildx bake --load --file=docker/rocm/rocm.hcl --set rocm.tags=frigate:latest-rocm rocm
build-rocm: version
$(foreach chipset,$(ROCM_CHIPSETS),AMDGPU=$(word 1,$(subst :, ,$(chipset))) HSA_OVERRIDE_GFX_VERSION=$(word 2,$(subst :, ,$(chipset))) HSA_OVERRIDE=1 docker buildx bake --file=docker/rocm/rocm.hcl --set rocm.tags=$(IMAGE_REPO):${GITHUB_REF_NAME}-$(COMMIT_HASH)-rocm-$(chipset) rocm;)
unset HSA_OVERRIDE_GFX_VERSION && HSA_OVERRIDE=0 AMDGPU=gfx docker buildx bake --file=docker/rocm/rocm.hcl --set rocm.tags=$(IMAGE_REPO):${GITHUB_REF_NAME}-$(COMMIT_HASH)-rocm rocm
push-rocm: build-rocm
$(foreach chipset,$(ROCM_CHIPSETS),AMDGPU=$(word 1,$(subst :, ,$(chipset))) HSA_OVERRIDE_GFX_VERSION=$(word 2,$(subst :, ,$(chipset))) HSA_OVERRIDE=1 docker buildx bake --push --file=docker/rocm/rocm.hcl --set rocm.tags=$(IMAGE_REPO):${GITHUB_REF_NAME}-$(COMMIT_HASH)-rocm-$(chipset) rocm;)
unset HSA_OVERRIDE_GFX_VERSION && HSA_OVERRIDE=0 AMDGPU=gfx docker buildx bake --push --file=docker/rocm/rocm.hcl --set rocm.tags=$(IMAGE_REPO):${GITHUB_REF_NAME}-$(COMMIT_HASH)-rocm rocm
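As a usage sketch, the targets above can be invoked like this (assumes `docker buildx` is configured on the host):

```shell
# Build the chipset-specific images plus the generic fallback image locally
make local-rocm

# Or bake a single chipset image directly (gfx1030 shown as an example)
AMDGPU=gfx1030 HSA_OVERRIDE_GFX_VERSION=10.3.0 HSA_OVERRIDE=1 \
  docker buildx bake --load --file=docker/rocm/rocm.hcl \
  --set rocm.tags=frigate:latest-rocm-gfx1030 rocm
```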


@ -0,0 +1,20 @@
#!/command/with-contenv bash
# shellcheck shell=bash
# Compile YoloV8 ONNX files into ROCm MIGraphX files
OVERRIDE=$(cd /opt/frigate && python3 -c 'import frigate.detectors.plugins.rocm as rocm; print(rocm.auto_override_gfx_version())')
if ! test -z "$OVERRIDE"; then
echo "Using HSA_OVERRIDE_GFX_VERSION=${OVERRIDE}"
export HSA_OVERRIDE_GFX_VERSION=$OVERRIDE
fi
for onnx in /config/model_cache/yolov8/*.onnx
do
mxr="${onnx%.onnx}.mxr"
if ! test -f "$mxr"; then
echo "processing $onnx into $mxr"
/opt/rocm/bin/migraphx-driver compile "$onnx" --optimize --gpu --enable-offload-copy --binary -o "$mxr"
fi
done


@ -0,0 +1 @@
oneshot


@ -0,0 +1 @@
/etc/s6-overlay/s6-rc.d/compile-rocm-models/run


@ -8,6 +8,8 @@ ARG TRT_BASE=nvcr.io/nvidia/tensorrt:23.03-py3
# Build TensorRT-specific library
FROM ${TRT_BASE} AS trt-deps
ARG COMPUTE_LEVEL
RUN apt-get update \
&& apt-get install -y git build-essential cuda-nvcc-* cuda-nvtx-* libnvinfer-dev libnvinfer-plugin-dev libnvparsers-dev libnvonnxparsers-dev \
&& rm -rf /var/lib/apt/lists/*


@ -11,7 +11,7 @@ git clone --depth 1 https://github.com/NateMeyer/tensorrt_demos.git -b condition
if [ ! -e /usr/local/cuda ]; then
ln -s /usr/local/cuda-* /usr/local/cuda
fi
cd ./tensorrt_demos/plugins && make all -j$(nproc)
cd ./tensorrt_demos/plugins && make all -j$(nproc) computes="${COMPUTE_LEVEL:-}"
cp libyolo_layer.so /usr/local/lib/libyolo_layer.so
# Store yolo scripts for later conversion


@ -10,12 +10,16 @@ variable "SLIM_BASE" {
variable "TRT_BASE" {
default = null
}
variable "COMPUTE_LEVEL" {
default = ""
}
target "_build_args" {
args = {
BASE_IMAGE = BASE_IMAGE,
SLIM_BASE = SLIM_BASE,
TRT_BASE = TRT_BASE
TRT_BASE = TRT_BASE,
COMPUTE_LEVEL = COMPUTE_LEVEL
}
platforms = ["linux/${ARCH}"]
}


@ -2,7 +2,7 @@ BOARDS += trt
JETPACK4_BASE ?= timongentzsch/l4t-ubuntu20-opencv:latest # L4T 32.7.1 JetPack 4.6.1
JETPACK5_BASE ?= nvcr.io/nvidia/l4t-tensorrt:r8.5.2-runtime # L4T 35.3.1 JetPack 5.1.1
X86_DGPU_ARGS := ARCH=amd64
X86_DGPU_ARGS := ARCH=amd64 COMPUTE_LEVEL="50 60 70 80 90"
JETPACK4_ARGS := ARCH=arm64 BASE_IMAGE=$(JETPACK4_BASE) SLIM_BASE=$(JETPACK4_BASE) TRT_BASE=$(JETPACK4_BASE)
JETPACK5_ARGS := ARCH=arm64 BASE_IMAGE=$(JETPACK5_BASE) SLIM_BASE=$(JETPACK5_BASE) TRT_BASE=$(JETPACK5_BASE)


@ -80,6 +80,14 @@ model:
input_pixel_format: "bgr"
```
#### `labelmap`
:::warning
If the labelmap is customized then the labels used for alerts will need to be adjusted as well. See [alert labels](../configuration/review.md#restricting-alerts-to-specific-labels) for more info.
:::
The labelmap can be customized to your needs. A common reason to do this is to combine multiple object types that are easily confused, such as car/truck, when you don't need to be as granular. By default, truck is renamed to car because they are often confused. You cannot add new object types, but you can change the names of existing objects in the model.
```yaml
@ -96,7 +104,7 @@ model:
Note that if you rename objects in the labelmap, you will also need to update your `objects -> track` list as well.
:::caution
:::warning
Some labels have special handling and modifications can disable functionality.
@ -106,7 +114,53 @@ Some labels have special handling and modifications can disable functionality.
:::
## Custom ffmpeg build
## Network Configuration
Changes to Frigate's internal network configuration can be made by bind mounting nginx.conf into the container. For example:
```yaml
services:
frigate:
container_name: frigate
...
volumes:
...
- /path/to/your/nginx.conf:/usr/local/nginx/conf/nginx.conf
```
### Enabling IPv6
IPv6 is disabled by default. To enable IPv6, `listen.gotmpl` needs to be bind mounted with IPv6 enabled. For example:
```
{{ if not .enabled }}
# intended for external traffic, protected by auth
listen 8971;
{{ else }}
# intended for external traffic, protected by auth
listen 8971 ssl;
# intended for internal traffic, not protected by auth
listen 5000;
```
becomes
```
{{ if not .enabled }}
# intended for external traffic, protected by auth
listen [::]:8971 ipv6only=off;
{{ else }}
# intended for external traffic, protected by auth
listen [::]:8971 ipv6only=off ssl;
# intended for internal traffic, not protected by auth
listen [::]:5000 ipv6only=off;
```
## Custom Dependencies
### Custom ffmpeg build
Included with Frigate is a build of ffmpeg that works for the vast majority of users. However, some hardware setups have incompatibilities with the included build. In this case, a docker volume mapping can be used to overwrite the included ffmpeg build with one that works for your specific hardware setup.
@ -118,9 +172,9 @@ To do this:
NOTE: The folder that is mapped from the host needs to be the folder that contains `/bin`. So if the full structure is `/home/appdata/frigate/custom-ffmpeg/bin/ffmpeg` then `/home/appdata/frigate/custom-ffmpeg` needs to be mapped to `/usr/lib/btbn-ffmpeg`.
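A docker compose sketch matching the note above might look like this (the host path is only an example):

```yaml
services:
  frigate:
    volumes:
      - /home/appdata/frigate/custom-ffmpeg:/usr/lib/btbn-ffmpeg:ro
```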
## Custom go2rtc version
### Custom go2rtc version
Frigate currently includes go2rtc v1.8.4, there may be certain cases where you want to run a different version of go2rtc.
Frigate currently includes go2rtc v1.9.4, but there may be certain cases where you want to run a different version of go2rtc.
To do this:


@ -0,0 +1,132 @@
---
id: authentication
title: Authentication
---
# Authentication
Frigate stores user information in its database. Password hashes are generated using industry standard PBKDF2-SHA256 with 600,000 iterations. Upon successful login, a JWT token is issued with an expiration date and set as a cookie. The cookie is refreshed as needed automatically. This JWT token can also be passed in the Authorization header as a bearer token.
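For example, a script could call the API with a previously issued token (the hostname and endpoint below are illustrative assumptions):

```shell
# Pass the JWT as a bearer token when calling the authenticated API port
curl -H "Authorization: Bearer ${FRIGATE_JWT}" https://frigate.example.com:8971/api/config
```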
Users are managed in the UI under Settings > Users.
The following ports are available to access the Frigate web UI.
| Port | Description |
| ------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| `8971` | Authenticated UI and API. Reverse proxies should use this port. |
| `5000` | Internal unauthenticated UI and API access. Access to this port should be limited. Intended to be used within the docker network for services that integrate with Frigate and do not support authentication. |
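As a sketch, a reverse proxy would forward traffic to the authenticated port; the server name and upstream address below are assumptions:

```
# Minimal nginx reverse proxy sketch targeting Frigate's authenticated port
server {
    listen 443 ssl;
    server_name frigate.example.com;

    location / {
        proxy_pass http://192.168.1.10:8971;
        # preserve the client IP so trusted_proxies and rate limiting behave correctly
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
    }
}
```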
## Onboarding
On startup, an admin user and password are generated and printed in the logs. It is recommended to set a new password for the admin account after logging in for the first time under Settings > Users.
## Resetting admin password
In the event that you are locked out of your instance, you can tell Frigate to reset the admin password and print it in the logs on next startup using the `reset_admin_password` setting in your config file.
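For example (a minimal sketch; the option lives under the `auth` section):

```yaml
auth:
  # print a freshly generated admin password in the logs on next startup
  reset_admin_password: true
```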
## Login failure rate limiting
In order to limit the risk of brute force attacks, rate limiting is available for login failures. This is implemented with Flask-Limiter, and the string notation for valid values is available in [the documentation](https://flask-limiter.readthedocs.io/en/stable/configuration.html#rate-limit-string-notation).
For example, `1/second;5/minute;20/hour` will rate limit the login endpoint when failures occur more than:
- 1 time per second
- 5 times per minute
- 20 times per hour
Restarting Frigate will reset the rate limits.
If you are running Frigate behind a proxy, you will want to set `trusted_proxies` or these rate limits will apply to the upstream proxy IP address. This means that a brute force attack will rate limit login attempts from other devices and could temporarily lock you out of your instance. In order to ensure rate limits only apply to the actual IP address where the requests are coming from, you will need to list the upstream networks that you want to trust. These trusted proxies are checked against the `X-Forwarded-For` header when looking for the IP address where the request originated.
If you are running a reverse proxy in the same docker compose file as Frigate, here is an example of how your auth config might look:
```yaml
auth:
failed_login_rate_limit: "1/second;5/minute;20/hour"
trusted_proxies:
- 172.18.0.0/16 # <---- this is the subnet for the internal docker compose network
```
## JWT Token Secret
The JWT token secret needs to be kept secure. Anyone with this secret can generate valid JWT tokens to authenticate with Frigate. This should be a cryptographically random string of at least 64 characters.
You can generate a suitable secret using Python's `secrets` library with the following command:
```shell
python3 -c 'import secrets; print(secrets.token_hex(64))'
```
Frigate looks for a JWT token secret in the following order:
1. An environment variable named `FRIGATE_JWT_SECRET`
2. A docker secret named `FRIGATE_JWT_SECRET` in `/run/secrets/`
3. A `jwt_secret` option from the Home Assistant Addon options
4. A `.jwt_secret` file in the config directory
If no secret is found on startup, Frigate generates one and stores it in a `.jwt_secret` file in the config directory.
Changing the secret will invalidate current tokens.
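For example, the secret can be provided through the environment variable in a compose file (the value shown is a placeholder, not a real secret):

```yaml
services:
  frigate:
    environment:
      # generate with: python3 -c 'import secrets; print(secrets.token_hex(64))'
      FRIGATE_JWT_SECRET: "<64+ character random hex string>"
```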
## Proxy configuration
Frigate can be configured to leverage features of common upstream authentication proxies such as Authelia, Authentik, oauth2_proxy, or traefik-forward-auth.
If you are leveraging the authentication of an upstream proxy, you likely want to disable Frigate's authentication. Optionally, if communication between the reverse proxy and Frigate is over an untrusted network, you should set an `auth_secret` in the `proxy` config and configure the proxy to send the secret value as a header named `X-Proxy-Secret`. Assuming this is an untrusted network, you will also want to [configure a real TLS certificate](tls.md) to ensure the traffic can't simply be sniffed to steal the secret.
Here is an example of how to disable Frigate's authentication and also ensure the requests come only from your known proxy.
```yaml
auth:
enabled: False
proxy:
auth_secret: <some random long string>
```
You can use the following code to generate a random secret.
```shell
python3 -c 'import secrets; print(secrets.token_hex(64))'
```
### Header mapping
If you have disabled Frigate's authentication and your proxy supports passing a header with the authenticated username, you can use the `header_map` config to specify the header name so it is passed to Frigate. For example, the following will map the `X-Forwarded-User` value. Header names are not case sensitive.
```yaml
proxy:
...
header_map:
user: x-forwarded-user
```
Note that only the following headers are permitted by default:
```
Remote-User
Remote-Groups
Remote-Email
Remote-Name
X-Forwarded-User
X-Forwarded-Groups
X-Forwarded-Email
X-Forwarded-Preferred-Username
X-authentik-username
X-authentik-groups
X-authentik-email
X-authentik-name
X-authentik-uid
```
If you would like to add more options, you can overwrite the default file with a docker bind mount at `/usr/local/nginx/conf/proxy_trusted_headers.conf`. Reference the source code for the default file formatting.
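A hedged compose sketch for that bind mount (the host path is an example):

```yaml
services:
  frigate:
    volumes:
      - /path/to/proxy_trusted_headers.conf:/usr/local/nginx/conf/proxy_trusted_headers.conf:ro
```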
Future versions of Frigate may leverage group and role headers for authorization in Frigate as well.
### Login page redirection
Frigate gracefully performs login page redirection that should work with most authentication proxies. If your reverse proxy returns a `Location` header on `401`, `302`, or `307` unauthorized responses, Frigate's frontend will automatically detect it and redirect to that URL.
### Custom logout URL
If your reverse proxy has a dedicated logout URL, you can specify it using the `logout_url` config option. This will update the `Logout` link in the UI.
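For example (the URL is an assumption; use your proxy's actual logout endpoint):

```yaml
proxy:
  logout_url: https://auth.example.com/logout
```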


@ -159,7 +159,7 @@ This is often caused by the same reason as above - the `MoveStatus` ONVIF parame
### I'm seeing this error in the logs: "Autotracker: motion estimator couldn't get transformations". What does this mean?
To maintain object tracking during PTZ moves, Frigate tracks the motion of your camera based on the details of the frame. If you are seeing this message, it could mean that your `zoom_factor` may be set too high, the scene around your detected object does not have enough details (like hard edges or color variatons), or your camera's shutter speed is too slow and motion blur is occurring. Try reducing `zoom_factor`, finding a way to alter the scene around your object, or changing your camera's shutter speed.
To maintain object tracking during PTZ moves, Frigate tracks the motion of your camera based on the details of the frame. If you are seeing this message, it could mean that your `zoom_factor` may be set too high, the scene around your detected object does not have enough details (like hard edges or color variations), or your camera's shutter speed is too slow and motion blur is occurring. Try reducing `zoom_factor`, finding a way to alter the scene around your object, or changing your camera's shutter speed.
### Calibration seems to have completed, but the camera is not actually moving to track my object. Why?


@ -1,15 +1,20 @@
# Birdseye
Birdseye allows a heads-up view of your cameras to see what is going on around your property / space without having to watch all cameras that may have nothing happening. Birdseye allows specific modes that intelligently show and disappear based on what you care about.
In addition to Frigate's Live camera dashboard, Birdseye offers a portable heads-up view of your cameras so you can see what is going on around your property / space without having to watch all cameras that may have nothing happening. Birdseye offers specific modes that intelligently show and hide cameras based on what you care about.
Birdseye can be viewed by adding the "Birdseye" camera to a Camera Group in the Web UI. Add a Camera Group by pressing the "+" icon on the Live page, and choose "Birdseye" as one of the cameras.
Birdseye can also be used in HomeAssistant dashboards, cast to media devices, etc.
## Birdseye Behavior
### Birdseye Modes
Birdseye offers different modes to customize which cameras show under which circumstances.
- **continuous:** All cameras are always included
- **motion:** Cameras that have detected motion within the last 30 seconds are included
- **objects:** Cameras that have tracked an active object within the last 30 seconds are included
- **continuous:** All cameras are always included
- **motion:** Cameras that have detected motion within the last 30 seconds are included
- **objects:** Cameras that have tracked an active object within the last 30 seconds are included
### Custom Birdseye Icon
@ -79,7 +84,7 @@ cameras:
order: 2
```
*Note*: Cameras are sorted by default using their name to ensure a constant view inside Birdseye.
_Note_: Cameras are sorted by default using their name to ensure a constant view inside Birdseye.
### Birdseye Cameras


@ -69,16 +69,12 @@ cameras:
ffmpeg:
output_args:
record: -f segment -segment_time 10 -segment_format mp4 -reset_timestamps 1 -strftime 1 -c:v copy -tag:v hvc1 -bsf:v hevc_mp4toannexb -c:a aac
rtmp: -c:v copy -c:a aac -f flv
inputs:
- path: rtsp://user:password@camera-ip:554/H264/ch1/main/av_stream # <----- Update for your camera
roles:
- detect
- record
- rtmp
rtmp:
enabled: False # <-- RTMP should be disabled if your stream is not H264
detect:
width: # <- optional, by default Frigate tries to automatically detect resolution
height: # <- optional, by default Frigate tries to automatically detect resolution
@ -108,7 +104,7 @@ According to [this discussion](https://github.com/blakeblackshear/frigate/issues
Cameras connected via a Reolink NVR can be connected with the http stream, use `channel[0..15]` in the stream url for the additional channels.
The setup of the main stream can also be done via RTSP, but isn't always reliable on all hardware versions. The example configuration works with the oldest HW version RLN16-410 device with multiple types of cameras.
:::caution
:::warning
The below configuration only works for reolink cameras with stream resolution of 5MP or lower, 8MP+ cameras need to use RTSP as http-flv is not supported in this case.
@ -179,15 +175,14 @@ go2rtc:
- rtspx://192.168.1.1:7441/abcdefghijk
```
[See the go2rtc docs for more information](https://github.com/AlexxIT/go2rtc/tree/v1.8.4#source-rtsp)
[See the go2rtc docs for more information](https://github.com/AlexxIT/go2rtc/tree/v1.9.4#source-rtsp)
In the Unifi 2.0 update Unifi Protect Cameras had a change in audio sample rate which causes issues for ffmpeg. The input rate needs to be set for record and rtmp if used directly with unifi protect.
In the Unifi 2.0 update Unifi Protect Cameras had a change in audio sample rate which causes issues for ffmpeg. The input rate needs to be set for record if used directly with unifi protect.
```yaml
ffmpeg:
output_args:
record: preset-record-ubiquiti
rtmp: preset-rtmp-ubiquiti # recommend using go2rtc instead
```
### TP-Link VIGI Cameras


@ -12,11 +12,10 @@ A camera is enabled by default but can be temporarily disabled by using `enabled
Each role can only be assigned to one input per camera. The options for roles are as follows:
| Role | Description |
| -------- | ---------------------------------------------------------------------------------------- |
| -------- | ----------------------------------------------------------------------------------- |
| `detect` | Main feed for object detection. [docs](object_detectors.md) |
| `record` | Saves segments of the video feed based on configuration settings. [docs](record.md) |
| `audio` | Feed for audio based detection. [docs](audio_detectors.md) |
| `rtmp` | Deprecated: Broadcast as an RTMP feed for other services to consume. [docs](restream.md) |
```yaml
mqtt:
@ -29,7 +28,6 @@ cameras:
- path: rtsp://viewer:{FRIGATE_RTSP_PASSWORD}@10.0.10.10:554/cam/realmonitor?channel=1&subtype=2
roles:
- detect
- rtmp # <- deprecated, recommend using restream instead
- path: rtsp://viewer:{FRIGATE_RTSP_PASSWORD}@10.0.10.10:554/live
roles:
- record
@ -52,7 +50,7 @@ For camera model specific settings check the [camera specific](camera_specific.m
## Setting up camera PTZ controls
:::caution
:::warning
Not every PTZ supports ONVIF, which is the standard protocol Frigate uses to communicate with your camera. Check the [official list of ONVIF conformant products](https://www.onvif.org/conformant-products/), your camera documentation, or camera manufacturer's website to ensure your PTZ supports ONVIF. Also, ensure your camera is running the latest firmware.
@ -81,7 +79,7 @@ This list of working and non-working PTZ cameras is based on user feedback.
| Brand or specific camera | PTZ Controls | Autotracking | Notes |
| ------------------------ | :----------: | :----------: | ----------------------------------------------------------------------------------------------------------------------------------------------- |
| Amcrest | ✅ | ✅ | ⛔️ Generally, Amcrest should work, but some older models (like the common IP2M-841) don't support auto tracking |
| Amcrest | ✅ | ✅ | ⛔️ Generally, Amcrest should work, but some older models (like the common IP2M-841) don't support autotracking |
| Amcrest ASH21 | ❌ | ❌ | No ONVIF support |
| Ctronics PTZ | ✅ | ❌ | |
| Dahua | ✅ | ✅ | |
@ -93,10 +91,26 @@ This list of working and non-working PTZ cameras is based on user feedback.
| Reolink E1 Zoom | ✅ | ❌ | |
| Reolink RLC-823A 16x | ✅ | ❌ | |
| Sunba 405-D20X | ✅ | ❌ | |
| Tapo C200 | ✅ | ❌ | Incomplete ONVIF support |
| Tapo C210 | ✅ | ❌ | Incomplete ONVIF support, ONVIF Service Port: 2020 |
| Tapo C220 | ✅ | ❌ | Incomplete ONVIF support, ONVIF Service Port: 2020 |
| Tapo C225 | ✅ | ❌ | Incomplete ONVIF support, ONVIF Service Port: 2020 |
| Tapo C520WS | ✅ | ❌ | Incomplete ONVIF support, ONVIF Service Port: 2020 |
| Tapo | ✅ | ❌ | Many models supported, ONVIF Service Port: 2020 |
| Uniview IPC672LR-AX4DUPK | ✅ | ❌ | Firmware says FOV relative movement is supported, but camera doesn't actually move when sending ONVIF commands |
| Vikylin PTZ-2804X-I2 | ❌ | ❌ | Incomplete ONVIF support |
## Setting up camera groups
:::tip
It is recommended to set up camera groups using the UI.
:::
Cameras can be grouped together and assigned a name and icon, this allows them to be reviewed and filtered together. There will always be the default group for all cameras.
```yaml
camera_groups:
front:
cameras:
- driveway_cam
- garage_cam
icon: LuCar
order: 0
```


@ -18,13 +18,11 @@ See [the hwaccel docs](/configuration/hardware_acceleration.md) for more info on
| preset-vaapi | Intel & AMD VAAPI | Check hwaccel docs to ensure correct driver is chosen |
| preset-intel-qsv-h264 | Intel QSV with h264 stream | If issues occur recommend using vaapi preset instead |
| preset-intel-qsv-h265 | Intel QSV with h265 stream | If issues occur recommend using vaapi preset instead |
| preset-nvidia-h264 | Nvidia GPU with h264 stream | |
| preset-nvidia-h265 | Nvidia GPU with h265 stream | |
| preset-nvidia-mjpeg | Nvidia GPU with mjpeg stream | Recommend restreaming mjpeg and using nvidia-h264 |
| preset-nvidia | Nvidia GPU | |
| preset-jetson-h264 | Nvidia Jetson with h264 stream | |
| preset-jetson-h265 | Nvidia Jetson with h265 stream | |
| preset-rk-h264 | Rockchip MPP with h264 stream | Use image with *-rk suffix and privileged mode |
| preset-rk-h265 | Rockchip MPP with h265 stream | Use image with *-rk suffix and privileged mode |
| preset-rk-h264 | Rockchip MPP with h264 stream | Use image with \*-rk suffix and privileged mode |
| preset-rk-h265 | Rockchip MPP with h265 stream | Use image with \*-rk suffix and privileged mode |
### Input Args Presets
@ -44,7 +42,7 @@ See [the camera specific docs](/configuration/camera_specific.md) for more info
| preset-rtsp-udp | RTSP Stream via UDP | Use when camera is UDP only |
| preset-rtsp-blue-iris | Blue Iris RTSP Stream | Use when consuming a stream from Blue Iris |
:::caution
:::warning
It is important to be mindful of input args when using restream because you can have a mix of protocols. `http` and `rtmp` presets cannot be used with `rtsp` streams. For example, when using a reolink cam with the rtsp restream as a source for record the preset-http-reolink will cause a crash. In this case presets will need to be set at the stream level. See the example below.
@ -74,7 +72,7 @@ cameras:
Output args presets help make the config more readable and handle use cases for different types of streams to ensure consistent recordings.
| Preset | Usage | Other Notes |
| -------------------------------- | --------------------------------- | --------------------------------------------- |
| -------------------------------- | --------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------ |
| preset-record-generic | Record WITHOUT audio | This is the default when nothing is specified |
| preset-record-generic-audio-copy | Record WITH original audio | Use this to enable audio in recordings |
| preset-record-generic-audio-aac | Record WITH transcoded aac audio | Use this to transcode to aac audio. If your source is already aac, use preset-record-generic-audio-copy instead to avoid re-encoding |


@ -5,7 +5,9 @@ title: Hardware Acceleration
# Hardware Acceleration
It is recommended to update your configuration to enable hardware accelerated decoding in ffmpeg. Depending on your system, these parameters may not be compatible. More information on hardware accelerated decoding for ffmpeg can be found here: https://trac.ffmpeg.org/wiki/HWAccelIntro
It is highly recommended to use a GPU for hardware acceleration in Frigate. Some types of hardware acceleration are detected and used automatically, but you may need to update your configuration to enable hardware accelerated decoding in ffmpeg.
Depending on your system, these parameters may not be compatible. More information on hardware accelerated decoding for ffmpeg can be found here: https://trac.ffmpeg.org/wiki/HWAccelIntro
# Officially Supported
@ -360,15 +362,15 @@ that NVDEC/NVDEC1 are in use.
## Rockchip platform
Hardware accelerated video de-/encoding is supported on all Rockchip SoCs.
Hardware accelerated video de-/encoding is supported on all Rockchip SoCs using [Nyanmisaka's FFmpeg 6.1 Fork](https://github.com/nyanmisaka/ffmpeg-rockchip) based on [Rockchip's mpp library](https://github.com/rockchip-linux/mpp).
### Setup
### Prerequisites
Use a Frigate docker image with the `-rk` suffix and enable privileged mode by adding the `--privileged` flag to your docker run command or `privileged: true` to your `docker-compose.yml` file.
Make sure to follow the [Rockchip specific installation instructions](/frigate/installation#rockchip-platform).
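A minimal `docker run` sketch for a Rockchip board (the image tag and config path are examples):

```shell
docker run -d \
  --name frigate \
  --privileged \
  -v /path/to/config:/config \
  ghcr.io/blakeblackshear/frigate:stable-rk
```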
### Configuration
Add one of the following ffmpeg presets to your `config.yaml` to enable hardware acceleration:
Add one of the following FFmpeg presets to your `config.yaml` to enable hardware video processing:
```yaml
# if you try to decode a h264 encoded stream
@ -385,29 +387,3 @@ ffmpeg:
Make sure that your SoC supports hardware acceleration for your input stream. For example, if your camera streams with h265 encoding and a 4k resolution, your SoC must be able to de- and encode h265 with a 4k resolution or higher. If you are unsure whether your SoC meets the requirements, take a look at the datasheet.
:::
### go2rtc presets for hardware accelerated transcoding
If your input stream is to be transcoded using hardware acceleration, there are these presets for go2rtc: `h264/rk` and `h265/rk`. You can use them this way:
```
go2rtc:
streams:
Cam_h264: ffmpeg:rtsp://username:password@192.168.1.123/av_stream/ch0#video=h264/rk
Cam_h265: ffmpeg:rtsp://username:password@192.168.1.123/av_stream/ch0#video=h265/rk
```
:::warning
The go2rtc docs may suggest the following configuration:
```
go2rtc:
streams:
Cam_h264: ffmpeg:rtsp://username:password@192.168.1.123/av_stream/ch0#video=h264#hardware=rk
Cam_h265: ffmpeg:rtsp://username:password@192.168.1.123/av_stream/ch0#video=h265#hardware=rk
```
However, this does not currently work.
:::


@ -25,7 +25,7 @@ cameras:
## VSCode Configuration Schema
VSCode supports JSON schemas for automatically validating configuration files. You can enable this feature by adding `# yaml-language-server: $schema=http://frigate_host:5000/api/config/schema.json` to the beginning of the configuration file. Replace `frigate_host` with the IP address or hostname of your Frigate server. If you're using both VSCode and Frigate as an add-on, you should use `ccab4aaf-frigate` instead. Make sure to expose port `5000` for the Web Interface when accessing the config from VSCode on another machine.
VSCode supports JSON schemas for automatically validating configuration files. You can enable this feature by adding `# yaml-language-server: $schema=http://frigate_host:5000/api/config/schema.json` to the beginning of the configuration file. Replace `frigate_host` with the IP address or hostname of your Frigate server. If you're using both VSCode and Frigate as an add-on, you should use `ccab4aaf-frigate` instead. Make sure to expose the internal unauthenticated port `5000` when accessing the config from VSCode on another machine.
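For instance, the first lines of a config file could look like this (the IP addresses are examples):

```yaml
# yaml-language-server: $schema=http://192.168.1.10:5000/api/config/schema.json
mqtt:
  host: 192.168.1.5
```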
## Environment Variable Substitution
@ -113,7 +113,7 @@ cameras:
- detect
motion:
mask:
- 0,461,3,0,1919,0,1919,843,1699,492,1344,458,1346,336,973,317,869,375,866,432
- 0.000,0.427,0.002,0.000,0.999,0.000,0.999,0.781,0.885,0.456,0.700,0.424,0.701,0.311,0.507,0.294,0.453,0.347,0.451,0.400
```
### Standalone Intel Mini PC with USB Coral
@ -167,7 +167,7 @@ cameras:
- detect
motion:
mask:
- 0,461,3,0,1919,0,1919,843,1699,492,1344,458,1346,336,973,317,869,375,866,432
- 0.000,0.427,0.002,0.000,0.999,0.000,0.999,0.781,0.885,0.456,0.700,0.424,0.701,0.311,0.507,0.294,0.453,0.347,0.451,0.400
```
### Home Assistant integrated Intel Mini PC with OpenVino
@ -232,5 +232,5 @@ cameras:
- detect
motion:
mask:
- 0,461,3,0,1919,0,1919,843,1699,492,1344,458,1346,336,973,317,869,375,866,432
- 0.000,0.427,0.002,0.000,0.999,0.000,0.999,0.781,0.885,0.456,0.700,0.424,0.701,0.311,0.507,0.294,0.453,0.347,0.451,0.400
```


@ -3,15 +3,17 @@ id: live
title: Live View
---
Frigate has different live view options, some of which require the bundled `go2rtc` to be configured as shown in the [step by step guide](/guides/configuring_go2rtc).
Frigate intelligently displays your camera streams on the Live view dashboard. To conserve bandwidth and resources, camera images update once per minute when no detectable activity is occurring. As soon as any motion is detected, cameras seamlessly switch to a live stream.
## Live View Options
## Live View technologies
Live view options can be selected while viewing the live stream. The options are:
Frigate intelligently uses three different streaming technologies to display your camera streams on the dashboard and the single camera view, switching between available modes based on network bandwidth, player errors, or required features like two-way talk. The highest quality and fluency of the Live view requires the bundled `go2rtc` to be configured as shown in the [step by step guide](/guides/configuring_go2rtc).
The jsmpeg live view will use more browser and client GPU resources. Using go2rtc is highly recommended and will provide a superior experience.
| Source | Latency | Frame Rate | Resolution | Audio | Requires go2rtc | Other Limitations |
| ------ | ------- | ------------------------------------- | -------------- | ---------------------------- | --------------- | ------------------------------------------------ |
| jsmpeg | low | same as `detect -> fps`, capped at 10 | same as detect | no | no | none |
| ------ | ------- | ------------------------------------- | ---------- | ---------------------------- | --------------- | ------------------------------------------------------------------------------------ |
| jsmpeg | low | same as `detect -> fps`, capped at 10 | 720p | no | no | resolution is configurable, but go2rtc is recommended if you want higher resolutions |
| mse | low | native | native | yes (depends on audio codec) | yes | iPhone requires iOS 17.1+, Firefox is h.264 only |
| webrtc | lowest | native | native | yes (depends on audio codec) | yes | requires extra config, doesn't support h.265 |
@ -79,7 +81,7 @@ WebRTC works by creating a TCP or UDP connection on port `8555`. However, it req
- stun:8555
```
- For access through Tailscale, the Frigate system's Tailscale IP must be added as a WebRTC candidate. Tailscale IPs all start with `100.`, and are reserved within the `100.0.0.0/8` CIDR block.
- For access through Tailscale, the Frigate system's Tailscale IP must be added as a WebRTC candidate. Tailscale IPs all start with `100.`, and are reserved within the `100.64.0.0/10` CIDR block.
:::tip


@ -5,7 +5,9 @@ title: Masks
## Motion masks
Motion masks are used to prevent unwanted types of motion from triggering detection. Try watching the debug feed with `Motion Boxes` enabled to see what may be regularly detected as motion. For example, you want to mask out your timestamp, the sky, rooftops, etc. Keep in mind that this mask only prevents motion from being detected and does not prevent objects from being detected if object detection was started due to motion in unmasked areas. Motion is also used during object tracking to refine the object detection area in the next frame. Over masking will make it more difficult for objects to be tracked. To see this effect, create a mask, and then watch the video feed with `Motion Boxes` enabled again.
Motion masks are used to prevent unwanted types of motion from triggering detection. Try watching the Debug feed (Settings --> Debug) with `Motion Boxes` enabled to see what may be regularly detected as motion. For example, you want to mask out your timestamp, the sky, rooftops, etc. Keep in mind that this mask only prevents motion from being detected and does not prevent objects from being detected if object detection was started due to motion in unmasked areas. Motion is also used during object tracking to refine the object detection area in the next frame. _Over-masking will make it more difficult for objects to be tracked._
See [further clarification](#further-clarification) below on why you may not want to use a motion mask.
## Object filter masks
@ -20,32 +22,30 @@ Object filter masks can be used to filter out stubborn false positives in fixed
To create a poly mask:
1. Visit the Web UI
1. Click the camera you wish to create a mask for
1. Select "Debug" at the top
1. Expand the "Options" below the video feed
1. Click "Mask & Zone creator"
1. Click "Add" on the type of mask or zone you would like to create
1. Click on the camera's latest image to create a masked area. The yaml representation will be updated in real-time
1. When you've finished creating your mask, click "Copy" and paste the contents into your config file and restart Frigate
2. Click/tap the gear icon and open "Settings"
3. Select "Mask / zone editor"
4. At the top right, select the camera you wish to create a mask or zone for
5. Click the plus icon under the type of mask or zone you would like to create
6. Click on the camera's latest image to create the points for a masked area. Click the first point again to close the polygon.
7. When you've finished creating your mask, press Save.
8. Restart Frigate to apply your changes.
Example of a finished row corresponding to the below example image:
Your config file will be updated with the relative coordinates of the mask/zone:
```yaml
motion:
mask: "0,461,3,0,1919,0,1919,843,1699,492,1344,458,1346,336,973,317,869,375,866,432"
mask: "0.000,0.427,0.002,0.000,0.999,0.000,0.999,0.781,0.885,0.456,0.700,0.424,0.701,0.311,0.507,0.294,0.453,0.347,0.451,0.400"
```
Multiple masks can be listed.
Multiple masks can be listed in your config.
```yaml
motion:
mask:
- 458,1346,336,973,317,869,375,866,432
- 0,461,3,0,1919,0,1919,843,1699,492,1344
- 0.239,1.246,0.175,0.901,0.165,0.805,0.195,0.802
- 0.000,0.427,0.002,0.000,0.999,0.000,0.999,0.781,0.885,0.456
```
![poly](/img/example-mask-poly-min.png)
### Further Clarification
This is a response to a [question posed on reddit](https://www.reddit.com/r/homeautomation/comments/ppxdve/replacing_my_doorbell_with_a_security_camera_a_6/hd876w4?utm_source=share&utm_medium=web2x&context=3):
@ -78,7 +78,7 @@ It is, but the definition of "unnecessary" varies. I want to ignore areas of mot
> For me, giving my masks ANY padding results in a lot of people detection I'm not interested in. I live in the city and catch a lot of the sidewalk on my camera. People walk by my front door all the time and the margin between the sidewalk and actually walking onto my stoop is very thin, so I basically have everything but the exact contours of my stoop masked out. This results in very tidy detections but this info keeps throwing me off. Am I just overthinking it?
This is what `required_zones` are for. You should define a zone (remember this is evaluated based on the bottom center of the bounding box) and make it required to save snapshots and clips (now events in 0.9.0). You can also use this in your conditions for a notification.
This is what `required_zones` are for. You should define a zone (remember this is evaluated based on the bottom center of the bounding box) and make it required to save snapshots and clips (previously events in 0.9.0 to 0.13.0 and review items in 0.14.0 and later). You can also use this in your conditions for a notification.
> Maybe my specific situation just warrants this. I've just been having a hard time understanding the relevance of this information - it seems to be that it's exactly what would be expected when "masking out" an area of ANY image.


@ -21,9 +21,7 @@ First, mask areas with regular motion not caused by the objects you want to dete
## Prepare For Testing
The easiest way to tune motion detection is to do it live, have one window / screen open with the frigate debug view and motion boxes enabled with another window / screen open allowing for configuring the motion settings. It is recommended to use Home Assistant or MQTT as they offer live configuration of some motion settings meaning that Frigate does not need to be restarted when values are changed.
In Home Assistant the `Improve Contrast`, `Contour Area`, and `Threshold` configuration entities are disabled by default but can easily be enabled and used to tune live, otherwise MQTT can be used.
The easiest way to tune motion detection is to use the Frigate UI under Settings > Motion Tuner. This screen allows the changing of motion detection values live to easily see the immediate effect on what is detected as motion.
## Tuning Motion Detection During The Day


@ -81,6 +81,15 @@ detectors:
device: ""
```
### Single PCIE/M.2 Coral
```yaml
detectors:
coral:
type: edgetpu
device: pci
```
### Multiple PCIE/M.2 Corals
```yaml
@ -109,9 +118,13 @@ detectors:
The OpenVINO detector type runs an OpenVINO IR model on AMD and Intel CPUs, Intel GPUs and Intel VPU hardware. To configure an OpenVINO detector, set the `"type"` attribute to `"openvino"`.
The OpenVINO device to be used is specified using the `"device"` attribute according to the naming conventions in the [Device Documentation](https://docs.openvino.ai/latest/openvino_docs_OV_UG_Working_with_devices.html). Other supported devices could be `AUTO`, `CPU`, `GPU`, `MYRIAD`, etc. If not specified, the default OpenVINO device will be selected by the `AUTO` plugin.
The OpenVINO device to be used is specified using the `"device"` attribute according to the naming conventions in the [Device Documentation](https://docs.openvino.ai/2024/openvino-workflow/running-inference/inference-devices-and-modes.html). The most common devices are `CPU` and `GPU`. Currently, there is a known issue with using `AUTO`. For backwards compatibility, Frigate will attempt to use `GPU` if `AUTO` is set in your configuration.
OpenVINO is supported on 6th Gen Intel platforms (Skylake) and newer. It will also run on AMD CPUs despite having no official support for it. A supported Intel platform is required to use the `GPU` device with OpenVINO. The `MYRIAD` device may be run on any platform, including Arm devices. For detailed system requirements, see [OpenVINO System Requirements](https://www.intel.com/content/www/us/en/developer/tools/openvino-toolkit/system-requirements.html)
OpenVINO is supported on 6th Gen Intel platforms (Skylake) and newer. It will also run on AMD CPUs despite having no official support for it. A supported Intel platform is required to use the `GPU` device with OpenVINO. For detailed system requirements, see [OpenVINO System Requirements](https://docs.openvino.ai/2024/about-openvino/release-notes-openvino/system-requirements.html)
### Supported Models
#### SSDLite MobileNet v2
An OpenVINO model is provided in the container at `/openvino-model/ssdlite_mobilenet_v2.xml` and is used by this detector type by default. The model comes from Intel's Open Model Zoo [SSDLite MobileNet V2](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/ssdlite_mobilenet_v2) and is converted to an FP16 precision IR model. Use the model configuration shown below when using the OpenVINO detector with the default model.
@ -119,66 +132,52 @@ An OpenVINO model is provided in the container at `/openvino-model/ssdlite_mobil
detectors:
ov:
type: openvino
device: AUTO
model:
path: /openvino-model/ssdlite_mobilenet_v2.xml
device: GPU
model:
width: 300
height: 300
input_tensor: nhwc
input_pixel_format: bgr
path: /openvino-model/ssdlite_mobilenet_v2.xml
labelmap_path: /openvino-model/coco_91cl_bkgr.txt
```
This detector also supports YOLOX. Other YOLO variants are not officially supported/tested. Frigate does not come with any yolo models preloaded, so you will need to supply your own models. This detector has been verified to work with the [yolox_tiny](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/yolox-tiny) model from Intel's Open Model Zoo. You can follow [these instructions](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/yolox-tiny#download-a-model-and-convert-it-into-openvino-ir-format) to retrieve the OpenVINO-compatible `yolox_tiny` model. Make sure that the model input dimensions match the `width` and `height` parameters, and `model_type` is set accordingly. See [Full Configuration Reference](/configuration/reference.md) for a list of possible `model_type` options. Below is an example of how `yolox_tiny` can be used in Frigate:
#### YOLOX
This detector also supports YOLOX. Frigate does not come with any YOLOX models preloaded, so you will need to supply your own models.
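A minimal sketch of a YOLOX config, assuming a `yolox_tiny` model converted to OpenVINO IR format (the model path, input size, and labelmap are placeholders that must match the model you supply):

```yaml
detectors:
  ov:
    type: openvino
    device: GPU

model:
  model_type: yolox
  width: 416 # must match your model's input size
  height: 416 # must match your model's input size
  input_tensor: nchw
  input_pixel_format: bgr
  path: /config/yolox_tiny.xml # hypothetical path to your own model
  labelmap_path: /labelmap/coco-80.txt # assumes an 80-class COCO labelmap
```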
#### YOLO-NAS
[YOLO-NAS](https://github.com/Deci-AI/super-gradients/blob/master/YOLONAS.md) models are supported, but not included by default. You can build and download a compatible model with pre-trained weights using [this notebook](https://github.com/blakeblackshear/frigate/blob/dev/notebooks/YOLO_NAS_Pretrained_Export.ipynb) [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/blakeblackshear/frigate/blob/dev/notebooks/YOLO_NAS_Pretrained_Export.ipynb).
:::warning
The pre-trained YOLO-NAS weights from DeciAI are subject to their license and can't be used commercially. For more information, see: https://docs.deci.ai/super-gradients/latest/LICENSE.YOLONAS.html
:::
The input image size in this notebook is set to 320x320. This results in lower CPU usage and faster inference times without impacting detection performance in most cases, due to the way Frigate crops video frames to areas of interest before running detection. The notebook and config can be updated to 640x640 if desired.
After placing the downloaded onnx model in your config folder, you can use the following configuration:
```yaml
detectors:
ov:
type: openvino
device: AUTO
model:
path: /path/to/yolox_tiny.xml
device: GPU
model:
width: 416
height: 416
model_type: yolonas
width: 320 # <--- should match whatever was set in notebook
height: 320 # <--- should match whatever was set in notebook
input_tensor: nchw
input_pixel_format: bgr
model_type: yolox
labelmap_path: /path/to/coco_80cl.txt
path: /config/yolo_nas_s.onnx
labelmap_path: /labelmap/coco-80.txt
```
### Intel NCS2 VPU and Myriad X Setup
Intel produces a neural net inference acceleration chip called Myriad X. This chip was sold in their Neural Compute Stick 2 (NCS2), which has been discontinued. If intending to use the MYRIAD device for acceleration, additional setup is required to pass through the USB device. The host needs a udev rule installed to handle the NCS2 device.
```bash
sudo usermod -a -G users "$(whoami)"
cat <<EOF > 97-myriad-usbboot.rules
SUBSYSTEM=="usb", ATTRS{idProduct}=="2485", ATTRS{idVendor}=="03e7", GROUP="users", MODE="0666", ENV{ID_MM_DEVICE_IGNORE}="1"
SUBSYSTEM=="usb", ATTRS{idProduct}=="f63b", ATTRS{idVendor}=="03e7", GROUP="users", MODE="0666", ENV{ID_MM_DEVICE_IGNORE}="1"
EOF
sudo cp 97-myriad-usbboot.rules /etc/udev/rules.d/
sudo udevadm control --reload-rules
sudo udevadm trigger
```
Additionally, the Frigate docker container needs to run with the following configuration:
```bash
--device-cgroup-rule='c 189:* rmw' -v /dev/bus/usb:/dev/bus/usb
```
or in your compose file:
```yml
device_cgroup_rules:
- "c 189:* rmw"
volumes:
- /dev/bus/usb:/dev/bus/usb
```
Note that the labelmap uses a subset of the complete COCO label set that has only 80 objects.
## NVidia TensorRT Detector
@ -261,7 +260,7 @@ frigate:
### Configuration Parameters
The TensorRT detector can be selected by specifying `tensorrt` as the model type. The GPU will need to be passed through to the docker container using the same methods described in the [Hardware Acceleration](hardware_acceleration.md#nvidia-gpu) section. If you pass through multiple GPUs, you can select which GPU is used for a detector with the `device` configuration parameter. The `device` parameter is an integer value of the GPU index, as shown by `nvidia-smi` within the container.
The TensorRT detector can be selected by specifying `tensorrt` as the model type. The GPU will need to be passed through to the docker container using the same methods described in the [Hardware Acceleration](hardware_acceleration.md#nvidia-gpus) section. If you pass through multiple GPUs, you can select which GPU is used for a detector with the `device` configuration parameter. The `device` parameter is an integer value of the GPU index, as shown by `nvidia-smi` within the container.
The TensorRT detector uses `.trt` model files that are located in `/config/model_cache/tensorrt` by default. These model path and dimensions used will depend on which model you have generated.
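For example, here is a sketch assuming a single GPU and a generated `yolov7-320.trt` model (the model name and dimensions are assumptions; use whatever you generated):

```yaml
detectors:
  tensorrt:
    type: tensorrt
    device: 0 # GPU index as shown by nvidia-smi in the container

model:
  path: /config/model_cache/tensorrt/yolov7-320.trt # assumed generated model file
  input_tensor: nchw
  input_pixel_format: rgb
  width: 320
  height: 320
```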
@ -302,3 +301,88 @@ Replace `<your_codeproject_ai_server_ip>` and `<port>` with the IP address and p
To verify that the integration is working correctly, start Frigate and observe the logs for any error messages related to CodeProject.AI. Additionally, you can check the Frigate web interface to see if the objects detected by CodeProject.AI are being displayed and tracked properly.
# Community Supported Detectors
## Rockchip platform
Hardware accelerated object detection is supported on the following SoCs:
- RK3562
- RK3566
- RK3568
- RK3576
- RK3588
This implementation uses Rockchip's [RKNN-Toolkit2](https://github.com/airockchip/rknn-toolkit2/), version v2.0.0.beta0. Currently, only [Yolo-NAS](https://github.com/Deci-AI/super-gradients/blob/master/YOLONAS.md) is supported as an object detection model.
### Prerequisites
Make sure to follow the [Rockchip-specific installation instructions](/frigate/installation#rockchip-platform).
### Configuration
This `config.yml` shows all relevant options for configuring the detector and explains them. All values shown are the default values (except for two). Lines that are required to use the detector are labeled as required; all other lines are optional.
```yaml
detectors: # required
rknn: # required
type: rknn # required
# number of NPU cores to use
# 0 means choose automatically
# increase for better performance if you have a multicore NPU e.g. set to 3 on rk3588
num_cores: 0
model: # required
# name of model (will be automatically downloaded) or path to your own .rknn model file
# possible values are:
# - deci-fp16-yolonas_s
# - deci-fp16-yolonas_m
# - deci-fp16-yolonas_l
# - /config/model_cache/your_custom_model.rknn
path: deci-fp16-yolonas_s
# width and height of detection frames
width: 320
height: 320
# pixel format of detection frame
# default value is rgb but yolo models usually use bgr format
input_pixel_format: bgr # required
# shape of detection frame
input_tensor: nhwc
# needs to be adjusted to model, see below
labelmap_path: /labelmap.txt # required
```
The correct labelmap must be loaded for each model. If you use a custom model (see notes below), you must make sure to provide the correct labelmap. The table below lists the correct paths for the bundled models:
| `path` | `labelmap_path` |
| --------------------- | --------------------- |
| deci-fp16-yolonas\_\* | /labelmap/coco-80.txt |
### Choosing a model
:::warning
The pre-trained YOLO-NAS weights from DeciAI are subject to their license and can't be used commercially. For more information, see: https://docs.deci.ai/super-gradients/latest/LICENSE.YOLONAS.html
:::
The inference time was determined on an rk3588 with 3 NPU cores.
| Model | Size in MB | Inference time in ms |
| ------------------- | ---------- | -------------------- |
| deci-fp16-yolonas_s | 24 | 25 |
| deci-fp16-yolonas_m | 62 | 35 |
| deci-fp16-yolonas_l | 81 | 45 |
:::tip
You can get the load of your NPU with the following command:
```bash
$ cat /sys/kernel/debug/rknpu/load
>> NPU load: Core0: 0%, Core1: 0%, Core2: 0%,
```
:::
- All models are automatically downloaded and stored in the folder `config/model_cache/rknn_cache`. After upgrading Frigate, you should remove older models to free up space.
- You can also provide your own `.rknn` model. Do not save your own models in the `rknn_cache` folder; store them directly in the `model_cache` folder or another subfolder. To convert a model to `.rknn` format, see `rknn-toolkit2` (requires an x86 machine). Note that post-processing is only implemented for the supported models.


@ -36,7 +36,7 @@ False positives can also be reduced by filtering a detection based on its shape.
### Object Area
`min_area` and `max_area` filter on the area of an object's bounding box in pixels and can be used to reduce false positives that are outside the range of expected sizes. For example, when a leaf is detected as a dog or when a large tree is detected as a person, these can be reduced by adding a `min_area` / `max_area` filter. The recordings timeline can be used to determine the area of the bounding box in that frame by selecting a timeline item then mousing over or tapping the red box.
`min_area` and `max_area` filter on the area of an object's bounding box in pixels and can be used to reduce false positives that are outside the range of expected sizes. For example, when a leaf is detected as a dog or when a large tree is detected as a person, these can be reduced by adding a `min_area` / `max_area` filter.
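A minimal sketch, assuming a filter for the `dog` label (the pixel values are placeholders to tune for your own camera resolution):

```yaml
objects:
  filters:
    dog:
      min_area: 5000 # ignore matches with a bounding box smaller than 5000 px
      max_area: 100000 # ignore matches with a bounding box larger than 100000 px
```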
### Object Proportions


@ -159,11 +159,14 @@ Using Frigate UI, HomeAssistant, or MQTT, cameras can be automated to only recor
## How do I export recordings?
The export page in the Frigate WebUI allows for exporting real time clips with a designated start and stop time as well as exporting a time-lapse for a designated start and stop time. These exports can take a while so it is important to leave the file until it is no longer in progress.
Footage can be exported from Frigate by right-clicking (desktop) or long pressing (mobile) on a review item in the Review pane or by clicking the Export button in the History view. Exported footage is then organized and searchable through the Export view, accessible from the main navigation bar.
### Time-lapse export
Time lapse exporting is available only via the [HTTP API](../integrations/api.md#post-apiexportcamerastartstart-timestampendend-timestamp).
When exporting a time-lapse the default speed-up is 25x with 30 FPS. This means that every 25 seconds of (real-time) recording is condensed into 1 second of time-lapse video (always without audio) with a smoothness of 30 FPS.
To configure the speed-up factor, the frame rate and further custom settings, the configuration parameter `timelapse_args` can be used. The below configuration example would change the time-lapse speed to 60x (for fitting 1 hour of recording into 1 minute of time-lapse) with 25 FPS:
```yaml
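# Sketch of the example referenced above (keys assumed per the reference config):
# 60x speed-up at 25 FPS, i.e. 1 hour of recording becomes 1 minute of time-lapse
record:
  export:
    timelapse_args: "-vf setpts=PTS/60 -r 25"
```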


@ -5,7 +5,7 @@ title: Full Reference Config
### Full configuration reference:
:::caution
:::warning
It is not recommended to copy this full configuration file. Only specify values that are different from the defaults. Configuration options and default values may change in future versions.
@ -63,6 +63,59 @@ database:
# The path to store the SQLite DB (default: shown below)
path: /config/frigate.db
# Optional: TLS configuration
tls:
# Optional: Enable TLS for port 8971 (default: shown below)
enabled: True
# Optional: Proxy configuration
proxy:
# Optional: Mapping for headers from upstream proxies. Only used if Frigate's auth
# is disabled.
# NOTE: Many authentication proxies pass a header downstream with the authenticated
# user name. Not all values are supported. It must be a whitelisted header.
# See the docs for more info.
header_map:
user: x-forwarded-user
# Optional: Url for logging out a user. This sets the location of the logout url in
# the UI.
logout_url: /api/logout
# Optional: Auth secret that is checked against the X-Proxy-Secret header sent from
# the proxy. If not set, all requests are trusted regardless of origin.
auth_secret: None
# Optional: Authentication configuration
auth:
# Optional: Enable authentication
enabled: True
# Optional: Reset the admin user password on startup (default: shown below)
# New password is printed in the logs
reset_admin_password: False
# Optional: Cookie to store the JWT token for native auth (default: shown below)
cookie_name: frigate_token
# Optional: Set secure flag on cookie. (default: shown below)
# NOTE: This should be set to True if you are using TLS
cookie_secure: False
# Optional: Session length in seconds (default: shown below)
session_length: 86400 # 24 hours
# Optional: Refresh time in seconds (default: shown below)
# When the session is going to expire in less time than this setting,
# it will be refreshed back to the session_length.
refresh_time: 43200 # 12 hours
# Optional: Rate limiting for login failures to help prevent brute force
# login attacks (default: shown below)
# See the docs for more information on valid values
failed_login_rate_limit: None
# Optional: Trusted proxies for determining IP address to rate limit
# NOTE: This is only used for rate limiting login attempts and does not bypass
# authentication. See the authentication docs for more details.
trusted_proxies: []
# Optional: Number of hashing iterations for user passwords
# As of Feb 2023, OWASP recommends 600000 iterations for PBKDF2-SHA256
# NOTE: changing this value will not automatically update password hashes, you
# will need to change each user password for it to apply
hash_iterations: 600000
# Optional: model modifications
model:
# Optional: path to the model (default: automatic based on detector)
@ -80,7 +133,7 @@ model:
# Valid values are nhwc or nchw (default: shown below)
input_tensor: nhwc
# Optional: Object detection model type, currently only used with the OpenVINO detector
# Valid values are ssd, yolox (default: shown below)
# Valid values are ssd, yolox, yolonas (default: shown below)
model_type: ssd
# Optional: Label name modifications. These are merged into the standard labelmap.
labelmap:
@ -159,9 +212,9 @@ birdseye:
ffmpeg:
# Optional: global ffmpeg args (default: shown below)
global_args: -hide_banner -loglevel warning -threads 2
# Optional: global hwaccel args (default: shown below)
# Optional: global hwaccel args (default: auto detect)
# NOTE: See hardware acceleration docs for your specific device
hwaccel_args: []
hwaccel_args: "auto"
# Optional: global input args (default: shown below)
input_args: preset-rtsp-generic
# Optional: global output args
@ -170,8 +223,6 @@ ffmpeg:
detect: -threads 2 -f rawvideo -pix_fmt yuv420p
# Optional: output args for record streams (default: shown below)
record: preset-record-generic
# Optional: output args for rtmp streams (default: shown below)
rtmp: preset-rtmp-generic
# Optional: Time in seconds to wait before ffmpeg retries connecting to the camera. (default: shown below)
# If set too low, frigate will retry a connection to the camera's stream too frequently, using up the limited streams some cameras can allow at once
# If set too high, then if a ffmpeg crash or camera stream timeout occurs, you could potentially lose up to a maximum of retry_interval second(s) of footage
@ -239,7 +290,7 @@ objects:
# Optional: mask to prevent all object types from being detected in certain areas (default: no mask)
# Checks based on the bottom center of the bounding box of the object.
# NOTE: This mask is COMBINED with the object type specific mask below
mask: 0,0,1000,0,1000,200,0,200
mask: 0.000,0.000,0.781,0.000,0.781,0.278,0.000,0.278
# Optional: filters to reduce false positives for specific object types
filters:
person:
@ -257,7 +308,29 @@ objects:
threshold: 0.7
# Optional: mask to prevent this object type from being detected in certain areas (default: no mask)
# Checks based on the bottom center of the bounding box of the object
mask: 0,0,1000,0,1000,200,0,200
mask: 0.000,0.000,0.781,0.000,0.781,0.278,0.000,0.278
# Optional: Review configuration
# NOTE: Can be overridden at the camera level
review:
# Optional: alerts configuration
alerts:
# Optional: labels that qualify as an alert (default: shown below)
labels:
- car
- person
# Optional: required zones for an object to be marked as an alert (default: none)
required_zones:
- driveway
# Optional: detections configuration
detections:
# Optional: labels that qualify as a detection (default: all labels that are tracked / listened to)
labels:
- car
- person
# Optional: required zones for an object to be marked as a detection (default: none)
required_zones:
- driveway
# Optional: Motion configuration
# NOTE: Can be overridden at the camera level
@ -291,7 +364,7 @@ motion:
frame_height: 100
# Optional: motion mask
# NOTE: see docs for more detailed info on creating masks
mask: 0,900,1080,900,1080,1920,0,1920
mask: 0.000,0.469,1.000,0.469,1.000,1.000,0.000,1.000
# Optional: improve contrast (default: shown below)
# Enables dynamic contrast improvement. This should help improve night detections at the cost of making motion detection more sensitive
# for daytime.
@ -333,6 +406,11 @@ record:
# The -r (framerate) dictates how smooth the output video is.
# So the args would be -vf setpts=0.02*PTS -r 30 in that case.
timelapse_args: "-vf setpts=0.04*PTS -r 30"
# Optional: Recording Preview Settings
preview:
# Optional: Quality of recording preview (default: shown below).
# Options are: very_low, low, medium, high, very_high
quality: medium
# Optional: Event recording settings
events:
# Optional: Number of seconds before the event to include (default: shown below)
@ -342,8 +420,6 @@ record:
# Optional: Objects to save recordings for. (default: all tracked objects)
objects:
- person
# Optional: Restrict recordings to objects that entered any of the listed zones (default: no required zones)
required_zones: []
# Optional: Retention settings for recordings of events
retain:
# Required: Default retention days (default: shown below)
@ -389,13 +465,6 @@ snapshots:
# Optional: quality of the encoded jpeg, 0-100 (default: shown below)
quality: 70
# Optional: RTMP configuration
# NOTE: RTMP is deprecated in favor of restream
# NOTE: Can be overridden at the camera level
rtmp:
# Optional: Enable the RTMP stream (default: False)
enabled: False
# Optional: Restream configuration
# Uses https://github.com/AlexxIT/go2rtc (v1.8.3)
go2rtc:
@ -452,14 +521,13 @@ cameras:
# Required: the path to the stream
# NOTE: path may include environment variables or docker secrets, which must begin with 'FRIGATE_' and be referenced in {}
- path: rtsp://viewer:{FRIGATE_RTSP_PASSWORD}@10.0.10.10:554/cam/realmonitor?channel=1&subtype=2
# Required: list of roles for this stream. valid values are: audio,detect,record,rtmp
# NOTICE: In addition to assigning the audio, record, and rtmp roles,
# Required: list of roles for this stream. valid values are: audio,detect,record
# NOTICE: In addition to assigning the audio, detect, and record roles
# they must also be enabled in the camera config.
roles:
- audio
- detect
- record
- rtmp
# Optional: stream specific global args (default: inherit)
# global_args:
# Optional: stream specific hwaccel args (default: inherit)
@ -490,9 +558,11 @@ cameras:
front_steps:
# Required: List of x,y coordinates to define the polygon of the zone.
# NOTE: Presence in a zone is evaluated only based on the bottom center of the object's bounding box.
coordinates: 545,1077,747,939,788,805
coordinates: 0.284,0.997,0.389,0.869,0.410,0.745
# Optional: Number of consecutive frames required for object to be considered present in the zone (default: shown below).
inertia: 3
# Optional: Number of seconds that an object must loiter to be considered in the zone (default: shown below)
loitering_time: 0
# Optional: List of objects that can trigger this zone (default: all tracked objects)
objects:
- person
@ -543,6 +613,9 @@ cameras:
user: admin
# Optional: password for login.
password: admin
# Optional: Ignores time synchronization mismatches between the camera and the server during authentication.
# Using NTP on both ends is recommended and this should only be set to True in a "safe" environment due to the security risk it represents.
ignore_time_mismatch: False
# Optional: PTZ camera object autotracking. Keeps a moving object in
# the center of the frame by automatically moving the PTZ camera.
autotracking:
@ -586,12 +659,8 @@ cameras:
# Optional
ui:
# Optional: Set the default live mode for cameras in the UI (default: shown below)
live_mode: mse
# Optional: Set a timezone to use in the UI (default: use browser local time)
# timezone: America/Denver
# Optional: Use an experimental recordings / camera view UI (default: shown below)
use_experimental: False
# Optional: Set the time format used.
# Options are browser, 12hour, or 24hour (default: shown below)
time_format: browser
@ -638,4 +707,19 @@ telemetry:
# Optional: Enable the latest version outbound check (default: shown below)
# NOTE: If you use the HomeAssistant integration, disabling this will prevent it from reporting new versions
version_check: True
# Optional: Camera groups (default: no groups are setup)
# NOTE: It is recommended to use the UI to setup camera groups
camera_groups:
# Required: Name of camera group
front:
# Required: list of cameras in the group
cameras:
- front_cam
- side_cam
- front_doorbell_cam
# Required: icon used for group
icon: car
# Required: index of this group
order: 0
```


@ -7,11 +7,11 @@ title: Restream
Frigate can restream your video feed as an RTSP feed for other applications such as Home Assistant to utilize it at `rtsp://<frigate_host>:8554/<camera_name>`. Port 8554 must be open. [This allows you to use a video feed for detection in Frigate and Home Assistant live view at the same time without having to make two separate connections to the camera](#reduce-connections-to-camera). The video feed is copied from the original video feed directly to avoid re-encoding. This feed does not include any annotation by Frigate.
Frigate uses [go2rtc](https://github.com/AlexxIT/go2rtc/tree/v1.8.4) to provide its restream and MSE/WebRTC capabilities. The go2rtc config is hosted at the `go2rtc` in the config, see [go2rtc docs](https://github.com/AlexxIT/go2rtc/tree/v1.8.4#configuration) for more advanced configurations and features.
Frigate uses [go2rtc](https://github.com/AlexxIT/go2rtc/tree/v1.9.4) to provide its restream and MSE/WebRTC capabilities. The go2rtc config is hosted in the `go2rtc` section of the config; see the [go2rtc docs](https://github.com/AlexxIT/go2rtc/tree/v1.9.4#configuration) for more advanced configurations and features.
:::note
You can access the go2rtc stream info at `http://frigate_ip:5000/api/go2rtc/streams` which can be helpful to debug as well as provide useful information about your camera streams.
You can access the go2rtc stream info at `/api/go2rtc/streams` which can be helpful to debug as well as provide useful information about your camera streams.
:::
@ -38,10 +38,6 @@ go2rtc:
**NOTE:** This does not apply to localhost requests; there is no need to provide credentials when using the restream as a source for Frigate cameras.
## RTMP (Deprecated)
In previous Frigate versions, RTMP was used for restreaming. RTMP has disadvantages, however, including incompatibility with H.265, high bitrates, and certain audio codecs. RTMP is deprecated, and it is recommended to use the built-in go2rtc config for restreaming.
## Reduce Connections To Camera
Some cameras only support one active connection, or you may just want to have a single connection open to the camera. The RTSP restream makes this possible.
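A minimal sketch of this pattern, assuming a camera named `front` and a hypothetical camera URL (Frigate consumes its own restream on localhost, so only one connection is made to the camera):

```yaml
go2rtc:
  streams:
    front:
      - rtsp://user:password@192.168.1.10:554/stream # hypothetical camera URL

cameras:
  front:
    ffmpeg:
      inputs:
        # Frigate reads from its own local restream instead of the camera
        - path: rtsp://127.0.0.1:8554/front
          input_args: preset-rtsp-restream
          roles:
            - detect
            - record
```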
@ -138,7 +134,7 @@ cameras:
## Advanced Restream Configurations
The [exec](https://github.com/AlexxIT/go2rtc/tree/v1.8.4#source-exec) source in go2rtc can be used for custom ffmpeg commands. An example is below:
The [exec](https://github.com/AlexxIT/go2rtc/tree/v1.9.4#source-exec) source in go2rtc can be used for custom ffmpeg commands. An example is below:
NOTE: The output will need to be passed with two curly braces `{{output}}`


@ -0,0 +1,77 @@
---
id: review
title: Review
---
The Review page of the Frigate UI is for quickly reviewing historical footage of interest from your cameras. _Review items_ are indicated on a vertical timeline and displayed as a grid of previews - bandwidth-optimized, low frame rate, low resolution videos. Hovering over or swiping a preview plays the video and marks it as reviewed. If more in-depth analysis is required, the preview can be clicked/tapped and the full frame rate, full resolution recording is displayed.
Review items are filterable by date, object type, and camera.
### Review items vs. events
In Frigate 0.13 and earlier versions, the UI presented "events". An event was synonymous with a tracked or detected object. In Frigate 0.14 and later, a review item is a time period where any number of tracked objects were active.
For example, consider a situation where two people walked past your house. One was walking a dog. At the same time, a car drove by on the street behind them.
In this scenario, Frigate 0.13 and earlier would show 4 events in the UI - one for each person, another for the dog, and yet another for the car. You would have had 4 separate videos to watch even though they would have all overlapped.
In 0.14 and later, all of that is bundled into a single review item whose start and end capture all of that activity. Review items for a single camera cannot overlap. Once you have watched that time period on that camera, it is marked as reviewed.
## Alerts and Detections
Not every segment of video captured by Frigate may be of the same level of interest to you. Video of people who enter your property may be a different priority than video of those walking by on the sidewalk. For this reason, Frigate 0.14 categorizes review items as _alerts_ and _detections_. By default, all person and car objects are considered alerts. You can refine the categorization of your review items by configuring required zones for them.
## Restricting alerts to specific labels
By default a review item will only be marked as an alert if a person or car is detected. This can be configured to include any object or audio label using the following config:
```yaml
# can be overridden at the camera level
review:
alerts:
labels:
- car
- cat
- dog
- person
- speech
```
## Restricting detections to specific labels
By default, all tracked objects that do not qualify as an alert qualify as a detection. However, detections can be further filtered to only include certain labels or certain zones.
Detections can be restricted to specific object or audio labels using the following config:
```yaml
# can be overridden at the camera level
review:
detections:
labels:
- bark
- dog
```
## Excluding a camera from alerts or detections
To exclude a specific camera from alerts or detections, simply provide an empty list to the alerts or detections field _at the camera level_.
For example, to exclude objects on the camera _gatecamera_ from any detections, include this in your config:
```yaml
cameras:
gatecamera:
review:
detections:
labels: []
```
## Restricting review items to specific zones
By default a review item will be created if any `review -> alerts -> labels` and `review -> detections -> labels` are detected anywhere in the camera frame. You will likely want to configure review items to only be created when the object enters an area of interest; [see the zone docs for more information](./zones.md#restricting-alerts-and-detections-to-specific-zones).
:::info
Because zones don't apply to audio, audio labels will always be marked as a detection by default.
:::


@ -3,6 +3,10 @@ id: snapshots
title: Snapshots
---
Frigate can save a snapshot image to `/media/frigate/clips` for each event named as `<camera>-<id>.jpg`.
Frigate can save a snapshot image to `/media/frigate/clips` for each object that is detected, named as `<camera>-<id>.jpg`. They are also accessible [via the API](../integrations/api.md#get-apieventsidsnapshotjpg).
For users with Frigate+ enabled, snapshots are accessible in the UI in the Frigate+ pane to allow for quick submission to the Frigate+ service.
To only save snapshots for objects that enter a specific zone, [see the zone docs](./zones.md#restricting-snapshots-to-specific-zones)
Snapshots sent via MQTT are configured in the [config file](https://docs.frigate.video/configuration/) under `cameras -> your_camera -> mqtt`
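A minimal sketch of a snapshot config, assuming a camera named `front` (the values shown are illustrative):

```yaml
cameras:
  front:
    snapshots:
      enabled: True
      bounding_box: True # draw the bounding box on the saved snapshot
      retain:
        default: 10 # days to retain snapshots
```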


@ -23,10 +23,6 @@ NOTE: There is no way to disable stationary object tracking with this value.
`threshold` is the number of frames an object needs to remain relatively still before it is considered stationary.
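A minimal sketch (the values are illustrative; at a detect fps of 5, a threshold of 50 frames is roughly 10 seconds of stillness):

```yaml
detect:
  stationary:
    # check stationary objects for movement once every 50 frames
    interval: 50
    # frames an object must remain still before being considered stationary
    threshold: 50
```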
## Handling stationary objects
In some cases, like a driveway, you may prefer to only have an event when a car is coming or going rather than a constant event while it sits stationary in the driveway. You can reference [this guide](../guides/parked_cars.md) for recommended approaches.
## Why does Frigate track stationary objects?
Frigate didn't always track stationary objects. In fact, it didn't even track objects at all initially.


@ -0,0 +1,59 @@
---
id: tls
title: TLS
---
# TLS
Frigate's integrated NGINX server supports TLS certificates. By default Frigate will generate a self-signed certificate that will be used for port 8971. Frigate is designed to make it easy to use whatever tool you prefer to manage certificates.
Frigate is often running behind a reverse proxy that manages TLS certificates for multiple services. You will likely need to set your reverse proxy to allow self-signed certificates, or you can disable TLS in Frigate's config. However, if you are running on a dedicated device that's separate from your proxy, or if you expose Frigate directly to the internet, you may want to configure TLS with valid certificates.
In many deployments, TLS will be unnecessary. It can be disabled in the config with the following yaml:
```yaml
tls:
enabled: False
```
## Certificates
TLS certificates can be mounted at `/etc/letsencrypt/live/frigate` using a bind mount or docker volume.
```yaml
frigate:
...
volumes:
- /path/to/your/certificate_folder:/etc/letsencrypt/live/frigate:ro
...
```
Within the folder, the private key is expected to be named `privkey.pem` and the certificate is expected to be named `fullchain.pem`.
Note that certbot uses symlinks, and those can't be followed by the container unless it has access to the targets as well, so if using certbot you'll also have to mount the `archive` folder for your domain, e.g.:
```yaml
frigate:
...
volumes:
- /etc/letsencrypt/live/frigate:/etc/letsencrypt/live/frigate:ro
- /etc/letsencrypt/archive/frigate:/etc/letsencrypt/archive/frigate:ro
...
```
Frigate automatically compares the fingerprint of the certificate at `/etc/letsencrypt/live/frigate/fullchain.pem` against the fingerprint of the TLS cert in NGINX every minute. If these differ, the NGINX config is reloaded to pick up the updated certificate.
If you issue Frigate valid certificates, you will likely want to run it on port 443 so it can be accessed without a port number (e.g. `https://your-frigate-domain.com`) by mapping 8971 to 443.
```yaml
frigate:
...
ports:
- "443:8971"
...
```
## ACME Challenge
Frigate also supports hosting the ACME challenge files for the HTTP challenge method if needed. The challenge files should be mounted at `/etc/letsencrypt/www`.
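A minimal sketch, assuming your challenge files live in a host webroot folder (the host path is a placeholder):

```yaml
frigate:
  ...
  volumes:
    - /path/to/your/webroot:/etc/letsencrypt/www:ro
  ...
```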


@ -1,15 +0,0 @@
---
id: user_interface
title: User Interface Configurations
---
### Experimental UI
While developing and testing new components, users may decide to opt-in to test potential new features on the front-end.
```yaml
ui:
use_experimental: true
```
Note that experimental changes may contain bugs or may be removed at any time in future releases of the software. These features are presented as-is, with no functional guarantee.


@ -10,20 +10,50 @@ For example, the cat in this image is currently in Zone 1, but **not** Zone 2.
Zones cannot have the same name as a camera. If desired, a single zone can include multiple cameras if you have multiple cameras covering the same area by configuring zones with the same name for each camera.
During testing, enable the Zones option for the debug feed so you can adjust as needed. The zone line will increase in thickness when any object enters the zone.
During testing, enable the Zones option for the Debug view of your camera (Settings --> Debug) so you can adjust as needed. The zone line will increase in thickness when any object enters the zone.
To create a zone, follow [the steps for a "Motion mask"](masks.md), but use the section of the web UI for creating a zone instead.
### Restricting events to specific zones
### Restricting alerts and detections to specific zones
Often you will only want events to be created when an object enters areas of interest. This is done using zones along with setting required_zones. Let's say you only want to be notified when an object enters your entire_yard zone, the config would be:
Often you will only want alerts to be created when an object enters areas of interest. This is done using zones along with setting required_zones. Let's say you only want an alert created when an object enters your entire_yard zone; the config would be:
```yaml
camera:
record:
events:
cameras:
name_of_your_camera:
review:
alerts:
required_zones:
- entire_yard
zones:
entire_yard:
coordinates: ...
```
You may also want detections to only be created when an object enters a secondary area of interest. This is done using zones along with setting required_zones. Let's say you want alerts when an object enters the inner area of the yard, but detections when an object enters the edge of the yard; the config would be:
```yaml
cameras:
name_of_your_camera:
review:
alerts:
required_zones:
- inner_yard
detections:
required_zones:
- edge_yard
zones:
edge_yard:
coordinates: ...
inner_yard:
coordinates: ...
```
### Restricting snapshots to specific zones
```yaml
cameras:
name_of_your_camera:
snapshots:
required_zones:
- entire_yard
@ -37,16 +67,8 @@ camera:
Sometimes you want to limit a zone to specific object types to have more granular control of when events/snapshots are saved. The following example will limit one zone to person objects and the other to cars.
```yaml
camera:
record:
events:
required_zones:
- entire_yard
- front_yard_street
snapshots:
required_zones:
- entire_yard
- front_yard_street
cameras:
name_of_your_camera:
zones:
entire_yard:
coordinates: ... (everywhere you want a person)
@ -60,12 +82,27 @@ camera:
Only car objects can trigger the `front_yard_street` zone and only person can trigger the `entire_yard`. You will get events for person objects that enter anywhere in the yard, and events for cars only if they enter the street.
### Zone Loitering
Sometimes objects are expected to be passing through a zone, but an object loitering in an area is unexpected. Zones can be configured to have a minimum loitering time before the object will be considered in the zone.
```yaml
cameras:
name_of_your_camera:
zones:
sidewalk:
loitering_time: 4 # unit is in seconds
objects:
- person
```
### Zone Inertia
Sometimes an object's bounding box may be slightly incorrect, and the bottom center of the bounding box is inside the zone while the object is not actually in the zone. Zone inertia helps guard against this by requiring an object's bounding box to be within the zone for multiple consecutive frames. This value can be configured:
```yaml
camera:
cameras:
name_of_your_camera:
zones:
front_yard:
inertia: 3
@ -76,10 +113,25 @@ camera:
There may also be cases where you expect an object to quickly enter and exit a zone, like when a car is pulling into the driveway, and you may want to have the object be considered present in the zone immediately:
```yaml
camera:
cameras:
name_of_your_camera:
zones:
driveway_entrance:
inertia: 1
objects:
- car
```
### Loitering Time
Zones support a `loitering_time` configuration which can be used to only consider an object as part of a zone if it loiters in the zone for the specified number of seconds. This can be used, for example, to create alerts for cars that stop on the street but not cars that just drive past your camera.
```yaml
cameras:
name_of_your_camera:
zones:
front_yard:
loitering_time: 5 # unit is in seconds
objects:
- person
```

Some files were not shown because too many files have changed in this diff