mirror of https://github.com/blakeblackshear/frigate.git
parent 390403d957
commit 43c623be25

.github/workflows/ci.yml (vendored): 2 changed lines

@@ -212,7 +212,7 @@ jobs:
       password: ${{ secrets.GITHUB_TOKEN }}
   - name: Create short sha
     run: echo "SHORT_SHA=${GITHUB_SHA::7}" >> $GITHUB_ENV
-  - uses: int128/docker-manifest-create-action@v1
+  - uses: int128/docker-manifest-create-action@v2
     with:
       tags: ghcr.io/${{ steps.lowercaseRepo.outputs.lowercase }}:${{ github.ref_name }}-${{ env.SHORT_SHA }}
       suffixes: |

@@ -123,8 +123,8 @@ or when using docker compose:

```yaml
services:
  frigate:
    ...
    environment:
      DOWNLOAD_YOLOV8: "1"
```

@@ -313,7 +313,7 @@ frigate:

### Configuration Parameters

-The TensorRT detector can be selected by specifying `tensorrt` as the model type. The GPU will need to be passed through to the docker container using the same methods described in the [Hardware Acceleration](hardware_acceleration.md#nvidia-gpu) section. If you pass through multiple GPUs, you can select which GPU is used for a detector with the `device` configuration parameter. The `device` parameter is an integer value of the GPU index, as shown by `nvidia-smi` within the container.
+The TensorRT detector can be selected by specifying `tensorrt` as the model type. The GPU will need to be passed through to the docker container using the same methods described in the [Hardware Acceleration](hardware_acceleration.md#nvidia-gpus) section. If you pass through multiple GPUs, you can select which GPU is used for a detector with the `device` configuration parameter. The `device` parameter is an integer value of the GPU index, as shown by `nvidia-smi` within the container.

The TensorRT detector uses `.trt` model files that are located in `/config/model_cache/tensorrt` by default. These model path and dimensions used will depend on which model you have generated.

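For orientation, a minimal sketch of how such a detector can be declared in a Frigate config, assuming a hypothetical YOLOv7 320x320 model file generated into the default cache path (adjust the path and dimensions to the model you actually built):

```yaml
detectors:
  tensorrt:
    type: tensorrt
    device: 0 # GPU index as reported by nvidia-smi inside the container

model:
  # hypothetical model file; the path and dimensions depend on the model you generated
  path: /config/model_cache/tensorrt/yolov7-320.trt
  width: 320
  height: 320
```
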
@@ -484,11 +484,10 @@ When using docker compose:

```yaml
services:
  frigate:
    ...
    devices:
      - /dev/dri
      - /dev/kfd
    ...
```

For reference on recommended settings see [running ROCm/pytorch in Docker](https://rocm.docs.amd.com/projects/install-on-linux/en/develop/how-to/3rd-party/pytorch-install.html#using-docker-with-pytorch-pre-installed).

@@ -503,9 +502,9 @@ For chipset specific frigate rocm builds this variable is already set automatically.

For the general rocm frigate build there is some automatic detection:

- gfx90c -> 9.0.0
- gfx1031 -> 10.3.0
- gfx1103 -> 11.0.0

If you have something else you might need to override the `HSA_OVERRIDE_GFX_VERSION` at Docker launch. Suppose the version you want is `9.0.0`, then you should configure it from command line as:

@@ -519,18 +518,18 @@ When using docker compose:

```yaml
services:
  frigate:
    ...
    environment:
      HSA_OVERRIDE_GFX_VERSION: "9.0.0"
```

Figuring out what version you need can be complicated as you can't tell the chipset name and driver from the AMD brand name.

- first make sure that rocm environment is running properly by running `/opt/rocm/bin/rocminfo` in the frigate container -- it should list both the CPU and the GPU with their properties
- find the chipset version you have (gfxNNN) from the output of the `rocminfo` (see below)
- use a search engine to query what `HSA_OVERRIDE_GFX_VERSION` you need for the given gfx name ("gfxNNN ROCm HSA_OVERRIDE_GFX_VERSION")
- override the `HSA_OVERRIDE_GFX_VERSION` with relevant value
- if things are not working check the frigate docker logs

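Pulling the two compose fragments above together, a sketch of a service that both passes the ROCm devices through and pins the override; the `9.0.0` value is only the example used above, so substitute the value that matches your gfx chipset:

```yaml
services:
  frigate:
    ...
    devices:
      - /dev/dri # GPU render nodes
      - /dev/kfd # ROCm compute interface
    environment:
      HSA_OVERRIDE_GFX_VERSION: "9.0.0" # example value from above; depends on your chipset
```
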
#### Figuring out if AMD/ROCm is working and found your GPU
@@ -566,8 +565,8 @@ or when using docker compose:

```yaml
services:
  frigate:
    ...
    environment:
      DOWNLOAD_YOLOV8: "1"
```

@@ -33,7 +33,6 @@ Fork [blakeblackshear/frigate-hass-integration](https://github.com/blakeblackshe

### Prerequisites

- [Frigate source code](#frigate-core-web-and-docs)
- GNU make
- Docker
- An extra detector (Coral, OpenVINO, etc.) is optional but recommended to simulate real world performance.

@@ -129,7 +128,6 @@ ffmpeg -c:v h264_qsv -re -stream_loop -1 -i https://streams.videolan.org/ffmpeg/

### Prerequisites

- [Frigate source code](#frigate-core-web-and-docs)
- All [core](#core) prerequisites _or_ another running Frigate instance locally available
- Node.js 20

@@ -188,7 +186,6 @@ npm run test

### Prerequisites

- [Frigate source code](#frigate-core-web-and-docs)
- Node.js 20

### Making changes

@@ -27,7 +27,7 @@ Motion masks prevent detection of [motion](#motion) in masked areas from trigger

### Object Mask

-Object filter masks drop any bounding boxes where the bottom center (overlap doesn't matter) is in the masked area. It forces them to be considered a [false positive](#false_positive) so that they are ignored.
+Object filter masks drop any bounding boxes where the bottom center (overlap doesn't matter) is in the masked area. It forces them to be considered a [false positive](#false-positive) so that they are ignored.

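As a hedged illustration of the mask being described, an object filter mask is set per object under a camera; the camera name and polygon coordinates below are hypothetical:

```yaml
cameras:
  front_door: # hypothetical camera name
    objects:
      filters:
        person:
          # hypothetical polygon; boxes whose bottom center falls inside it are dropped
          mask: 0,0,1000,0,1000,200,0,200
```
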
## Min Score
@@ -65,8 +65,8 @@ Home Assistant > Configuration > Integrations > Frigate > Options
```

| Option | Description |
-| ----------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
-| RTSP URL Template | A [jinja2](https://jinja.palletsprojects.com/) template that is used to override the standard RTSP stream URL (e.g. for use with reverse proxies). This option is only shown to users who have [advanced mode](https://www.home-assistant.io/blog/2019/07/17/release-96/#advanced-mode) enabled. See [RTSP streams](#streams) below. |
+| ----------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| RTSP URL Template | A [jinja2](https://jinja.palletsprojects.com/) template that is used to override the standard RTSP stream URL (e.g. for use with reverse proxies). This option is only shown to users who have [advanced mode](https://www.home-assistant.io/blog/2019/07/17/release-96/#advanced-mode) enabled. See [RTSP streams](#rtsp-stream) below. |

## Entities Provided

@@ -178,7 +178,7 @@ for how to set these.

#### API URLs

-When multiple Frigate instances are configured, [API](#api) URLs should include an
+When multiple Frigate instances are configured, [API](#notification-api) URLs should include an
identifier to tell Home Assistant which Frigate instance to refer to. The
identifier used is the MQTT `client_id` parameter included in the configuration,
and is used like so:

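The `client_id` referenced here lives in the MQTT section of the Frigate config; a minimal sketch with hypothetical values:

```yaml
mqtt:
  host: mqtt.example.com # hypothetical broker hostname
  client_id: frigate_house_a # this identifier distinguishes the instance in API URLs
```
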
docs/package-lock.json (generated, 955 changed lines): file diff suppressed because it is too large.
web/package-lock.json (generated, 532 changed lines): file diff suppressed because it is too large.

@@ -13,7 +13,8 @@ const variants = {
  },
  overlay: {
    active: "font-bold text-white bg-selected rounded-full",
-    inactive: "text-primary-white rounded-full bg-gradient-to-br from-gray-400 to-gray-500 bg-gray-500",
+    inactive:
+      "text-primary-white rounded-full bg-gradient-to-br from-gray-400 to-gray-500 bg-gray-500",
  },
};