* fix API filter so event IDs are not matched as timestamps

* add padding on the platform-aware sheet for tablets

* docs tweaks

* tweaks
Josh Hawkins 2024-11-05 08:22:41 -06:00 committed by GitHub
parent 1fc4af9c86
commit 29ea7c53f2
4 changed files with 35 additions and 23 deletions


@@ -19,7 +19,7 @@ For best performance, 16GB or more of RAM and a dedicated GPU are recommended.

 ## Configuration

-Semantic search is disabled by default, and must be enabled in your config file before it can be used. Semantic Search is a global configuration setting.
+Semantic Search is disabled by default, and must be enabled in your config file before it can be used. Semantic Search is a global configuration setting.

 ```yaml
 semantic_search:
@@ -41,7 +41,7 @@ The vision model is able to embed both images and text into the same vector space

 The text model is used to embed tracked object descriptions and perform searches against them. Descriptions can be created, viewed, and modified on the Search page when clicking on the gray tracked object chip at the top left of each review item. See [the Generative AI docs](/configuration/genai.md) for more information on how to automatically generate tracked object descriptions.

-Differently weighted CLIP models are available and can be selected by setting the `model_size` config option:
+Differently weighted CLIP models are available and can be selected by setting the `model_size` config option as `small` or `large`:

 ```yaml
 semantic_search:
@@ -50,12 +50,18 @@ semantic_search:
 ```

 - Configuring the `large` model employs the full Jina model and will automatically run on the GPU if applicable.
-- Configuring the `small` model employs a quantized version of the model that uses much less RAM and runs faster on CPU with a very negligible difference in embedding quality.
+- Configuring the `small` model employs a quantized version of the model that uses less RAM and runs on CPU with a very negligible difference in embedding quality.

 ### GPU Acceleration

 The CLIP models are downloaded in ONNX format, and the `large` model can be accelerated using GPU hardware, when available. This depends on the Docker build that is used.

+```yaml
+semantic_search:
+  enabled: True
+  model_size: large
+```
+
 :::info

 If the correct build is used for your GPU and the `large` model is configured, then the GPU will be detected and used automatically.
@@ -63,26 +69,22 @@ If the correct build is used for your GPU and the `large` model is configured, then the GPU will be detected and used automatically.

 **NOTE:** Object detection and Semantic Search are independent features. If you want to use your GPU with Semantic Search, you must choose the appropriate Frigate Docker image for your GPU.

 - **AMD**
-  - ROCm will automatically be detected and used for semantic search in the `-rocm` Frigate image.
+  - ROCm will automatically be detected and used for Semantic Search in the `-rocm` Frigate image.
 - **Intel**
-  - OpenVINO will automatically be detected and used for semantic search in the default Frigate image.
+  - OpenVINO will automatically be detected and used for Semantic Search in the default Frigate image.
 - **Nvidia**
-  - Nvidia GPUs will automatically be detected and used for semantic search in the `-tensorrt` Frigate image.
-  - Jetson devices will automatically be detected and used for semantic search in the `-tensorrt-jp(4/5)` Frigate image.
+  - Nvidia GPUs will automatically be detected and used for Semantic Search in the `-tensorrt` Frigate image.
+  - Jetson devices will automatically be detected and used for Semantic Search in the `-tensorrt-jp(4/5)` Frigate image.

 :::

-```yaml
-semantic_search:
-  enabled: True
-  model_size: small
-```
-
 ## Usage and Best Practices

-1. Semantic search is used in conjunction with the other filters available on the Search page. Use a combination of traditional filtering and semantic search for the best results.
+1. Semantic Search is used in conjunction with the other filters available on the Search page. Use a combination of traditional filtering and Semantic Search for the best results.
 2. Use the thumbnail search type when searching for particular objects in the scene. Use the description search type when attempting to discern the intent of your object.
 3. Because of how the AI models Frigate uses have been trained, the comparison between text and image embedding distances generally means that with multi-modal (`thumbnail` and `description`) searches, results matching `description` will appear first, even if a `thumbnail` embedding may be a better match. Play with the "Search Type" setting to help find what you are looking for. Note that if you are generating descriptions for specific objects or zones only, this may cause search results to prioritize the objects with descriptions even if the ones without them are more relevant.
 4. Make your search language and tone closely match exactly what you're looking for. If you are using thumbnail search, **phrase your query as an image caption**. Searching for "red car" may not work as well as "red sedan driving down a residential street on a sunny day".


@@ -463,9 +463,13 @@ export default function InputWithTags({
   }, [setFilters, resetSuggestions, setSearch, setInputFocused]);

   const handleClearSimilarity = useCallback(() => {
-    removeFilter("event_id", filters.event_id!);
-    removeFilter("search_type", "similarity");
-  }, [removeFilter, filters]);
+    const newFilters = { ...filters };
+    if (newFilters.event_id === filters.event_id) {
+      delete newFilters.event_id;
+    }
+    delete newFilters.search_type;
+    setFilters(newFilters);
+  }, [setFilters, filters]);

   const handleInputBlur = useCallback(
     (e: React.FocusEvent) => {
@@ -763,13 +767,15 @@ export default function InputWithTags({
                   </button>
                 </span>
               ))
-            : filterType !== "event_id" && (
+            : !(filterType == "event_id" && isSimilaritySearch) && (
                 <span
                   key={filterType}
                   className="inline-flex items-center whitespace-nowrap rounded-full bg-green-100 px-2 py-0.5 text-sm capitalize text-green-800"
                 >
-                  {filterType.replaceAll("_", " ")}:{" "}
-                  {formatFilterValues(filterType, filterValues)}
+                  {filterType === "event_id"
+                    ? "Tracked Object ID"
+                    : filterType.replaceAll("_", " ")}
+                  : {formatFilterValues(filterType, filterValues)}
                   <button
                     onClick={() =>
                       removeFilter(
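In the first hunk above, `handleClearSimilarity` stops calling `removeFilter` twice and instead builds one new filter object and commits it with a single `setFilters`. A minimal sketch of why the atomic update is safer, assuming `removeFilter` derives the next state from the `filters` object captured in its closure (the component's real state shape is not shown in this diff):

```ts
// Sketch only — `SearchFilters` is a stand-in for the component's filter type.
type SearchFilters = { event_id?: string; search_type?: string[] };

function clearSimilarity(
  filters: SearchFilters,
  setFilters: (next: SearchFilters) => void,
) {
  // One copy, both deletions, one state update. Two separate remove calls
  // that each spread the same stale `filters` would let the second call
  // re-introduce the key the first one deleted.
  const newFilters = { ...filters };
  delete newFilters.event_id;
  delete newFilters.search_type;
  setFilters(newFilters);
}
```

The second hunk then keeps the `event_id` chip visible for similarity searches and relabels it "Tracked Object ID" instead of showing the raw key.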


@@ -16,7 +16,7 @@ import {
   PopoverContent,
   PopoverTrigger,
 } from "@/components/ui/popover";
-import { isDesktop, isMobileOnly } from "react-device-detect";
+import { isDesktop, isMobile } from "react-device-detect";
 import { useFormattedHour } from "@/hooks/use-date-utils";
 import FilterSwitch from "@/components/filter/FilterSwitch";
 import { Switch } from "@/components/ui/switch";
@@ -181,7 +181,7 @@ export default function SearchFilterDialog({
       content={content}
       contentClassName={cn(
         "w-auto lg:min-w-[275px] scrollbar-container h-full overflow-auto px-4",
-        isMobileOnly && "pb-20",
+        isMobile && "pb-20",
       )}
       open={open}
       onOpenChange={(open) => {
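The whole tablet fix is the `isMobileOnly` → `isMobile` swap: in `react-device-detect`, `isMobileOnly` is true only on phones, while `isMobile` is true on phones and tablets, so tablets previously fell through without the bottom padding. A small illustration of the distinction:

```ts
import { isMobile, isMobileOnly, isTablet } from "react-device-detect";

// isMobileOnly: phones only — a tablet reports false and got no padding.
// isMobile: phones OR tablets — a tablet now receives the bottom padding.
// e.g. on an iPad: isTablet === true, isMobileOnly === false, isMobile === true.
const sheetPadding = isMobile ? "pb-20" : "";
```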


@@ -65,10 +65,14 @@ export function useApiFilterArgs<
       const filter: { [key: string]: unknown } = {};

       rawParams.forEach((value, key) => {
+        const isValidNumber = /^-?\d+(\.\d+)?(?!.)/.test(value);
+        const isValidEventID = /^\d+\.\d+-[a-zA-Z0-9]+$/.test(value);
+
         if (
           value != "true" &&
           value != "false" &&
-          (/[^0-9,]/.test(value) || isNaN(parseFloat(value)))
+          !isValidNumber &&
+          !isValidEventID
         ) {
           filter[key] = value.includes(",") ? value.split(",") : [value];
         } else {
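The two new predicates make the distinction from the commit title explicit: `isValidNumber` accepts only a bare (optionally signed, optionally fractional) number, while `isValidEventID` encodes Frigate's `<unix timestamp>.<microseconds>-<alphanumeric suffix>` event ID shape. A quick standalone check with made-up values, which also shows why `parseFloat` alone cannot tell the two apart:

```ts
// Standalone copies of the new checks, exercised with hypothetical values.
const isValidNumber = (v: string) => /^-?\d+(\.\d+)?(?!.)/.test(v);
const isValidEventID = (v: string) => /^\d+\.\d+-[a-zA-Z0-9]+$/.test(v);

console.log(isValidNumber("1730818961.125125"));         // true  — plain timestamp
console.log(isValidEventID("1730818961.125125"));        // false — no "-suffix" part

console.log(isValidNumber("1730818961.125125-a1b2c3"));  // false — suffix fails (?!.)
console.log(isValidEventID("1730818961.125125-a1b2c3")); // true  — event ID shape

// parseFloat reads the leading number out of an event ID and returns a
// plausible-looking timestamp, so it is no basis for the distinction:
console.log(parseFloat("1730818961.125125-a1b2c3"));     // 1730818961.125125
```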