diff --git a/docs/docs/configuration/semantic_search.md b/docs/docs/configuration/semantic_search.md
index 61873478d..7b0a0bc91 100644
--- a/docs/docs/configuration/semantic_search.md
+++ b/docs/docs/configuration/semantic_search.md
@@ -19,7 +19,7 @@ For best performance, 16GB or more of RAM and a dedicated GPU are recommended.
 
 ## Configuration
 
-Semantic search is disabled by default, and must be enabled in your config file before it can be used. Semantic Search is a global configuration setting.
+Semantic Search is disabled by default and must be enabled in your config file before it can be used. Semantic Search is a global configuration setting.
 
 ```yaml
 semantic_search:
@@ -41,7 +41,7 @@ The vision model is able to embed both images and text into the same vector spac
 
 The text model is used to embed tracked object descriptions and perform searches against them. Descriptions can be created, viewed, and modified on the Search page when clicking on the gray tracked object chip at the top left of each review item. See [the Generative AI docs](/configuration/genai.md) for more information on how to automatically generate tracked object descriptions.
 
-Differently weighted CLIP models are available and can be selected by setting the `model_size` config option:
+Differently weighted CLIP models are available and can be selected by setting the `model_size` config option to `small` or `large`:
 
 ```yaml
 semantic_search:
@@ -50,12 +50,18 @@ semantic_search:
 ```
 
 - Configuring the `large` model employs the full Jina model and will automatically run on the GPU if applicable.
-- Configuring the `small` model employs a quantized version of the model that uses much less RAM and runs faster on CPU with a very negligible difference in embedding quality.
+- Configuring the `small` model employs a quantized version of the model that uses less RAM and runs on CPU with a negligible difference in embedding quality.
 
 ### GPU Acceleration
 
 The CLIP models are downloaded in ONNX format, and the `large` model can be accelerated using GPU hardware, when available. This depends on the Docker build that is used.
 
+```yaml
+semantic_search:
+  enabled: True
+  model_size: large
+```
+
 :::info
 
 If the correct build is used for your GPU and the `large` model is configured, then the GPU will be detected and used automatically.
@@ -63,26 +69,22 @@ If the correct build is used for your GPU and the `large` model is configured, t
 
 **NOTE:** Object detection and Semantic Search are independent features. If you want to use your GPU with Semantic Search, you must choose the appropriate Frigate Docker image for your GPU.
 
 - **AMD**
-  - ROCm will automatically be detected and used for semantic search in the `-rocm` Frigate image.
+
+  - ROCm will automatically be detected and used for Semantic Search in the `-rocm` Frigate image.
 
 - **Intel**
-  - OpenVINO will automatically be detected and used for semantic search in the default Frigate image.
+
+  - OpenVINO will automatically be detected and used for Semantic Search in the default Frigate image.
 
 - **Nvidia**
-  - Nvidia GPUs will automatically be detected and used for semantic search in the `-tensorrt` Frigate image.
-  - Jetson devices will automatically be detected and used for semantic search in the `-tensorrt-jp(4/5)` Frigate image.
+  - Nvidia GPUs will automatically be detected and used for Semantic Search in the `-tensorrt` Frigate image.
+  - Jetson devices will automatically be detected and used for Semantic Search in the `-tensorrt-jp(4/5)` Frigate image.
 
 :::
 
-```yaml
-semantic_search:
-  enabled: True
-  model_size: small
-```
-
 ## Usage and Best Practices
 
-1. Semantic search is used in conjunction with the other filters available on the Search page. Use a combination of traditional filtering and semantic search for the best results.
+1. Semantic Search is used in conjunction with the other filters available on the Search page. Use a combination of traditional filtering and Semantic Search for the best results.
 2. Use the thumbnail search type when searching for particular objects in the scene. Use the description search type when attempting to discern the intent of your object.
 3. Because of how the AI models Frigate uses have been trained, the comparison between text and image embedding distances generally means that with multi-modal (`thumbnail` and `description`) searches, results matching `description` will appear first, even if a `thumbnail` embedding may be a better match. Play with the "Search Type" setting to help find what you are looking for. Note that if you are generating descriptions for specific objects or zones only, this may cause search results to prioritize the objects with descriptions even if the ones without them are more relevant.
 4. Make your search language and tone closely match exactly what you're looking for. If you are using thumbnail search, **phrase your query as an image caption**. Searching for "red car" may not work as well as "red sedan driving down a residential street on a sunny day".
diff --git a/web/src/components/input/InputWithTags.tsx b/web/src/components/input/InputWithTags.tsx
index ff46375fd..e5b492bcc 100644
--- a/web/src/components/input/InputWithTags.tsx
+++ b/web/src/components/input/InputWithTags.tsx
@@ -463,9 +463,13 @@ export default function InputWithTags({
   }, [setFilters, resetSuggestions, setSearch, setInputFocused]);
 
   const handleClearSimilarity = useCallback(() => {
-    removeFilter("event_id", filters.event_id!);
-    removeFilter("search_type", "similarity");
-  }, [removeFilter, filters]);
+    const newFilters = { ...filters };
+    if (newFilters.event_id === filters.event_id) {
+      delete newFilters.event_id;
+    }
+    delete newFilters.search_type;
+    setFilters(newFilters);
+  }, [setFilters, filters]);
 
   const handleInputBlur = useCallback(
     (e: React.FocusEvent) => {
@@ -763,13 +767,15 @@ export default function InputWithTags({
                     ))
-                  : filterType !== "event_id" && (
+                  : !(filterType == "event_id" && isSimilaritySearch) && (
-                        {filterType.replaceAll("_", " ")}:{" "}
-                        {formatFilterValues(filterType, filterValues)}
+                        {filterType === "event_id"
+                          ? "Tracked Object ID"
+                          : filterType.replaceAll("_", " ")}
+                        : {formatFilterValues(filterType, filterValues)}
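
For context on the `handleClearSimilarity` change, here is a minimal standalone TypeScript sketch of the new clearing behavior; the `SearchFilter` shape and the `clearSimilarity` helper are hypothetical, simplified stand-ins for the component's real types. Because `newFilters` is a shallow copy of `filters`, the diff's guard `newFilters.event_id === filters.event_id` is always true, so the sketch deletes the key unconditionally:

```typescript
// Hypothetical, simplified stand-in for the component's real filter type.
type SearchFilter = {
  event_id?: string;
  search_type?: string[];
  cameras?: string[];
};

// Mirrors the new handleClearSimilarity body: copy the filters once,
// drop both similarity-related keys, and hand back a single new object
// (one state update instead of two sequential removeFilter() calls).
function clearSimilarity(filters: SearchFilter): SearchFilter {
  const newFilters = { ...filters };
  delete newFilters.event_id; // the reference tracked object
  delete newFilters.search_type; // the "similarity" search type
  return newFilters;
}

// Unrelated filters survive the clear (sample data for illustration):
const cleared = clearSimilarity({
  event_id: "1718212345.123456-abc123",
  search_type: ["similarity"],
  cameras: ["front_door"],
});
console.log(cleared); // { cameras: ["front_door"] }
```

Batching both deletions into a single `setFilters` call presumably also avoids the second of the two previous `removeFilter` calls reading stale filter state.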
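Similarly, the chip-label change in the second hunk reduces to this mapping (`chipLabel` is a hypothetical helper name used only for this sketch):

```typescript
// event_id chips get a friendly display name; every other filter key
// is prettified by swapping underscores for spaces.
function chipLabel(filterType: string): string {
  return filterType === "event_id"
    ? "Tracked Object ID"
    : filterType.replaceAll("_", " ");
}

console.log(chipLabel("event_id")); // "Tracked Object ID"
console.log(chipLabel("search_type")); // "search type"
```

The loosened render condition `!(filterType == "event_id" && isSimilaritySearch)` means an `event_id` chip is now hidden only during a similarity search, which is where the friendlier "Tracked Object ID" label comes into play.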