Add specific section about GPU in semantic search (#14685)
parent ab26aee8b2
commit d10fea6012
@@ -43,12 +43,6 @@ The text model is used to embed tracked object descriptions and perform searches
 
 Differently weighted CLIP models are available and can be selected by setting the `model_size` config option:
 
-:::tip
-
-The CLIP models are downloaded in ONNX format, which means they will be accelerated using GPU hardware when available. This depends on the Docker build that is used. See [the object detector docs](../configuration/object_detectors.md) for more information.
-
-:::
-
 ```yaml
 semantic_search:
   enabled: True
@@ -58,6 +52,18 @@ semantic_search:
 - Configuring the `large` model employs the full Jina model and will automatically run on the GPU if applicable.
 - Configuring the `small` model employs a quantized version of the model that uses much less RAM and runs faster on CPU, with a negligible difference in embedding quality.
 
+### GPU Acceleration
+
+The CLIP models are downloaded in ONNX format, and the `large` model can be accelerated using GPU hardware when available. This depends on the Docker build that is used; see [the object detector docs](../configuration/object_detectors.md) for more information.
+
+If the correct build is used for your GPU and the `large` model is configured, the GPU will be detected and used automatically.
+
+```yaml
+semantic_search:
+  enabled: True
+  model_size: large
+```
+
 ## Usage and Best Practices
 
 1. Semantic search is used in conjunction with the other filters available on the Search page. Use a combination of traditional filtering and semantic search for the best results.
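Note on the "depends on the Docker build" point in the added section: the GPU must also be exposed to the container. Below is a minimal docker-compose sketch, assuming an NVIDIA GPU and the `stable-tensorrt` image variant; the image tag and device passthrough for other hardware (AMD/ROCm, Intel/OpenVINO) will differ, so treat this as illustrative and consult the object detector docs linked above.

```yaml
# Illustrative sketch only: assumes an NVIDIA GPU and the TensorRT image
# variant; adjust the image tag and device passthrough for your hardware.
services:
  frigate:
    image: ghcr.io/blakeblackshear/frigate:stable-tensorrt
    deploy:
      resources:
        reservations:
          devices:
            # Reserve one NVIDIA GPU for the container
            - driver: nvidia
              count: 1
              capabilities: [gpu]
    volumes:
      # Frigate config directory, including the semantic_search section
      - ./config:/config
```

With a GPU-enabled build like this in place, setting `model_size: large` as shown in the added section should pick up the GPU automatically.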