From 90140e7710c5cce10a96cb4936499bd56a8fece3 Mon Sep 17 00:00:00 2001
From: leccelecce <24962424+leccelecce@users.noreply.github.com>
Date: Mon, 21 Oct 2024 16:54:55 +0100
Subject: [PATCH] Ollama: minor docs tweak to specify command (#14482)

---
 docs/docs/configuration/genai.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/docs/docs/configuration/genai.md b/docs/docs/configuration/genai.md
index 1a9a125c6..cdaf0adbe 100644
--- a/docs/docs/configuration/genai.md
+++ b/docs/docs/configuration/genai.md
@@ -37,7 +37,7 @@ Parallel requests also come with some caveats. See the [Ollama documentation](ht
 
 ### Supported Models
 
-You must use a vision capable model with Frigate. Current model variants can be found [in their model library](https://ollama.com/library). At the time of writing, this includes `llava`, `llava-llama3`, `llava-phi3`, and `moondream`. Note that Frigate will not automatically download the model you specify in your config, you must download the model to your local instance of Ollama first.
+You must use a vision capable model with Frigate. Current model variants can be found [in their model library](https://ollama.com/library). At the time of writing, this includes `llava`, `llava-llama3`, `llava-phi3`, and `moondream`. Note that Frigate will not automatically download the model you specify in your config; you must download the model to your local instance of Ollama first, e.g. by running `ollama pull llava:7b` on your Ollama server or in its Docker container. The model specified in Frigate's config must match the downloaded model tag.
 
 :::note
 
@@ -52,7 +52,7 @@ genai:
   enabled: True
   provider: ollama
   base_url: http://localhost:11434
-  model: llava
+  model: llava:7b
 ```
 
 ## Google Gemini
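
For reference (not part of the patch itself), a minimal sketch of the pull-and-verify flow the updated docs describe, assuming a stock Ollama install; the Docker container name `ollama` below is an illustrative assumption, not something the patch specifies:

```bash
# Pull the exact model tag that Frigate's genai config will reference.
ollama pull llava:7b

# If Ollama runs in Docker, pull inside the container instead
# (container name "ollama" is an assumption for illustration).
docker exec -it ollama ollama pull llava:7b

# List local models to confirm the downloaded tag matches the `model:`
# value in Frigate's config (llava:7b, not just llava).
ollama list
```

Pinning the tag is the point of the patch: a bare `model: llava` resolves to Ollama's default `llava:latest`, which may not be the variant you actually downloaded, so specifying `llava:7b` in both the pull command and Frigate's config keeps the two in agreement.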