upgrade deps (#10257)

* upgrade web deps

* docs deps

* actions deps
Blake Blackshear 2024-03-05 13:00:27 +00:00 committed by GitHub
parent 390403d957
commit 43c623be25
8 changed files with 814 additions and 740 deletions


@@ -212,7 +212,7 @@ jobs:
password: ${{ secrets.GITHUB_TOKEN }}
- name: Create short sha
run: echo "SHORT_SHA=${GITHUB_SHA::7}" >> $GITHUB_ENV
- uses: int128/docker-manifest-create-action@v1
- uses: int128/docker-manifest-create-action@v2
with:
tags: ghcr.io/${{ steps.lowercaseRepo.outputs.lowercase }}:${{ github.ref_name }}-${{ env.SHORT_SHA }}
suffixes: |
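As an aside, the `SHORT_SHA` used in the tag above is simply the first seven characters of the commit hash. A minimal, self-contained workflow sketch (job and image names are placeholders, not taken from this repository):

```yaml
jobs:
  example:
    runs-on: ubuntu-latest
    steps:
      # ${GITHUB_SHA::7} is bash substring expansion: the first 7 characters of the full commit SHA.
      - name: Create short sha
        run: echo "SHORT_SHA=${GITHUB_SHA::7}" >> $GITHUB_ENV
      # Later steps read it back through the environment.
      - name: Show the resulting tag
        run: echo "ghcr.io/example/frigate:${GITHUB_REF_NAME}-${SHORT_SHA}"
```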


@@ -123,9 +123,9 @@ or when using docker compose:
```yaml
services:
frigate:
...
environment:
DOWNLOAD_YOLOV8: "1"
---
environment:
DOWNLOAD_YOLOV8: "1"
```
When this variable is set, Frigate will fetch [yolov8.small.models.tar.gz](https://github.com/harakas/models/releases/download/yolov8.1-1.1/yolov8.small.models.tar.gz) at startup and extract it into the `/config/model_cache/yolov8/` directory.
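Because the archive is extracted under `/config`, mounting that directory from the host keeps the downloaded models across container restarts; a minimal compose sketch, assuming a host config path of `/path/to/frigate/config`:

```yaml
services:
  frigate:
    environment:
      DOWNLOAD_YOLOV8: "1"
    volumes:
      # Persist /config so /config/model_cache/yolov8/ is not re-downloaded on every start.
      - /path/to/frigate/config:/config
```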
@@ -313,7 +313,7 @@ frigate:
### Configuration Parameters
The TensorRT detector can be selected by specifying `tensorrt` as the model type. The GPU will need to be passed through to the docker container using the same methods described in the [Hardware Acceleration](hardware_acceleration.md#nvidia-gpu) section. If you pass through multiple GPUs, you can select which GPU is used for a detector with the `device` configuration parameter. The `device` parameter is an integer value of the GPU index, as shown by `nvidia-smi` within the container.
The TensorRT detector can be selected by specifying `tensorrt` as the model type. The GPU will need to be passed through to the docker container using the same methods described in the [Hardware Acceleration](hardware_acceleration.md#nvidia-gpus) section. If you pass through multiple GPUs, you can select which GPU is used for a detector with the `device` configuration parameter. The `device` parameter is an integer value of the GPU index, as shown by `nvidia-smi` within the container.
The TensorRT detector uses `.trt` model files that are located in `/config/model_cache/tensorrt` by default. The model path and dimensions used will depend on which model you have generated.
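Putting this together, a minimal sketch of the detector and model configuration (the `.trt` file name and dimensions are placeholders for whichever model you generated):

```yaml
detectors:
  tensorrt:
    type: tensorrt
    device: 0 # GPU index as reported by nvidia-smi inside the container

model:
  path: /config/model_cache/tensorrt/yolov7-320.trt
  width: 320
  height: 320
```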
@@ -484,11 +484,10 @@ When using docker compose:
```yaml
services:
frigate:
...
devices:
- /dev/dri
- /dev/kfd
...
---
devices:
- /dev/dri
- /dev/kfd
```
For reference on recommended settings see [running ROCm/pytorch in Docker](https://rocm.docs.amd.com/projects/install-on-linux/en/develop/how-to/3rd-party/pytorch-install.html#using-docker-with-pytorch-pre-installed).
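For a rough idea of how that guide's recommendations map onto compose, here is a sketch; the `group_add` and `security_opt` entries are assumptions taken from the generic ROCm Docker guidance, not documented Frigate requirements:

```yaml
services:
  frigate:
    devices:
      - /dev/dri
      - /dev/kfd
    group_add:
      - video # access to the GPU device nodes
    security_opt:
      - seccomp:unconfined # suggested by the ROCm Docker guide for memory mapping
```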
@@ -503,9 +502,9 @@ For chipset specific frigate rocm builds this variable is already set automatically.
For the general rocm frigate build there is some automatic detection:
- gfx90c -> 9.0.0
- gfx1031 -> 10.3.0
- gfx1103 -> 11.0.0
- gfx90c -> 9.0.0
- gfx1031 -> 10.3.0
- gfx1103 -> 11.0.0
If you have something else you might need to override the `HSA_OVERRIDE_GFX_VERSION` at Docker launch. Suppose the version you want is `9.0.0`, then you should configure it from command line as:
@@ -519,18 +518,18 @@ When using docker compose:
```yaml
services:
frigate:
...
environment:
HSA_OVERRIDE_GFX_VERSION: "9.0.0"
---
environment:
HSA_OVERRIDE_GFX_VERSION: "9.0.0"
```
Figuring out what version you need can be complicated as you can't tell the chipset name and driver from the AMD brand name.
- first make sure that rocm environment is running properly by running `/opt/rocm/bin/rocminfo` in the frigate container -- it should list both the CPU and the GPU with their properties
- find the chipset version you have (gfxNNN) from the output of the `rocminfo` (see below)
- use a search engine to query what `HSA_OVERRIDE_GFX_VERSION` you need for the given gfx name ("gfxNNN ROCm HSA_OVERRIDE_GFX_VERSION")
- override the `HSA_OVERRIDE_GFX_VERSION` with relevant value
- if things are not working check the frigate docker logs
- first make sure that rocm environment is running properly by running `/opt/rocm/bin/rocminfo` in the frigate container -- it should list both the CPU and the GPU with their properties
- find the chipset version you have (gfxNNN) from the output of the `rocminfo` (see below)
- use a search engine to query what `HSA_OVERRIDE_GFX_VERSION` you need for the given gfx name ("gfxNNN ROCm HSA_OVERRIDE_GFX_VERSION")
- override the `HSA_OVERRIDE_GFX_VERSION` with relevant value
- if things are not working check the frigate docker logs
#### Figuring out if AMD/ROCm is working and found your GPU
@@ -566,9 +565,9 @@ or when using docker compose:
```yaml
services:
frigate:
...
environment:
DOWNLOAD_YOLOV8: "1"
---
environment:
DOWNLOAD_YOLOV8: "1"
```
The download can also be triggered in regular frigate builds using that environment variable. The following files will be available under `/config/model_cache/yolov8/`:


@@ -33,7 +33,6 @@ Fork [blakeblackshear/frigate-hass-integration](https://github.com/blakeblackshe
### Prerequisites
- [Frigate source code](#frigate-core-web-and-docs)
- GNU make
- Docker
- An extra detector (Coral, OpenVINO, etc.) is optional but recommended to simulate real world performance.
@@ -129,7 +128,6 @@ ffmpeg -c:v h264_qsv -re -stream_loop -1 -i https://streams.videolan.org/ffmpeg/
### Prerequisites
- [Frigate source code](#frigate-core-web-and-docs)
- All [core](#core) prerequisites _or_ another running Frigate instance locally available
- Node.js 20
@@ -188,7 +186,6 @@ npm run test
### Prerequisites
- [Frigate source code](#frigate-core-web-and-docs)
- Node.js 20
### Making changes


@@ -27,7 +27,7 @@ Motion masks prevent detection of [motion](#motion) in masked areas from trigger
### Object Mask
Object filter masks drop any bounding boxes where the bottom center (overlap doesn't matter) is in the masked area. It forces them to be considered a [false positive](#false_positive) so that they are ignored.
Object filter masks drop any bounding boxes where the bottom center (overlap doesn't matter) is in the masked area. It forces them to be considered a [false positive](#false-positive) so that they are ignored.
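For a concrete illustration, an object filter mask is configured per object type under a camera; a minimal sketch with a made-up camera name and polygon:

```yaml
cameras:
  front_door:
    objects:
      filters:
        person:
          # A person box whose bottom-center point falls inside this polygon is treated as a false positive.
          mask: 0,0,1920,0,1920,200,0,200
```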
## Min Score


@@ -64,9 +64,9 @@ When configuring the integration, you will be asked for the `URL` of your Frigat
Home Assistant > Configuration > Integrations > Frigate > Options
```
| Option | Description |
| ----------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| RTSP URL Template | A [jinja2](https://jinja.palletsprojects.com/) template that is used to override the standard RTSP stream URL (e.g. for use with reverse proxies). This option is only shown to users who have [advanced mode](https://www.home-assistant.io/blog/2019/07/17/release-96/#advanced-mode) enabled. See [RTSP streams](#streams) below. |
| Option | Description |
| ----------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| RTSP URL Template | A [jinja2](https://jinja.palletsprojects.com/) template that is used to override the standard RTSP stream URL (e.g. for use with reverse proxies). This option is only shown to users who have [advanced mode](https://www.home-assistant.io/blog/2019/07/17/release-96/#advanced-mode) enabled. See [RTSP streams](#rtsp-stream) below. |
## Entities Provided
@@ -178,7 +178,7 @@ for how to set these.
#### API URLs
When multiple Frigate instances are configured, [API](#api) URLs should include an
When multiple Frigate instances are configured, [API](#notification-api) URLs should include an
identifier to tell Home Assistant which Frigate instance to refer to. The
identifier used is the MQTT `client_id` parameter included in the configuration,
and is used like so:
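That `client_id` is defined in Frigate's own MQTT configuration; a minimal sketch with placeholder values:

```yaml
mqtt:
  host: mqtt.example.com
  # The integration uses this client_id to address the correct Frigate instance in API URLs.
  client_id: frigate_backyard
```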

docs/package-lock.json (generated, 955 changed lines)

File diff suppressed because it is too large.

web/package-lock.json (generated, 532 changed lines)

File diff suppressed because it is too large.


@@ -13,7 +13,8 @@ const variants = {
},
overlay: {
active: "font-bold text-white bg-selected rounded-full",
inactive: "text-primary-white rounded-full bg-gradient-to-br from-gray-400 to-gray-500 bg-gray-500",
inactive:
"text-primary-white rounded-full bg-gradient-to-br from-gray-400 to-gray-500 bg-gray-500",
},
};