diff --git a/docs/docs/configuration/semantic_search.md b/docs/docs/configuration/semantic_search.md
index 7f84fdf95..22c3ddfe9 100644
--- a/docs/docs/configuration/semantic_search.md
+++ b/docs/docs/configuration/semantic_search.md
@@ -43,12 +43,6 @@ The text model is used to embed tracked object descriptions and perform searches
 
 Differently weighted CLIP models are available and can be selected by setting the `model_size` config option:
 
-:::tip
-
-The CLIP models are downloaded in ONNX format, which means they will be accelerated using GPU hardware when available. This depends on the Docker build that is used. See [the object detector docs](../configuration/object_detectors.md) for more information.
-
-:::
-
 ```yaml
 semantic_search:
   enabled: True
@@ -58,6 +52,18 @@ semantic_search:
 
 - Configuring the `large` model employs the full Jina model and will automatically run on the GPU if applicable.
 - Configuring the `small` model employs a quantized version of the model that uses much less RAM and runs faster on CPU with a very negligible difference in embedding quality.
+### GPU Acceleration
+
+The CLIP models are downloaded in ONNX format, and the `large` model can be accelerated using GPU hardware when available. This depends on the Docker build that is used; see [the object detector docs](../configuration/object_detectors.md) for more information.
+
+If the correct build is used for your GPU and the `large` model is configured, the GPU will be detected and used automatically.
+
+```yaml
+semantic_search:
+  enabled: True
+  model_size: large
+```
+
 ## Usage and Best Practices
 
 1. Semantic search is used in conjunction with the other filters available on the Search page. Use a combination of traditional filtering and semantic search for the best results.