LPR improvements (#17549)

* auto select LPR model backend for inference

* docs update
Josh Hawkins 2025-04-05 11:03:17 -05:00 committed by GitHub
parent 7bfcf2040d
commit 348e728220
4 changed files with 30 additions and 7 deletions

View File

@@ -29,7 +29,7 @@ In the default mode, Frigate's LPR needs to first detect a `car` before it can r
 ## Minimum System Requirements
-License plate recognition works by running AI models locally on your system. The models are relatively lightweight and run on your CPU. At least 4GB of RAM is required.
+License plate recognition works by running AI models locally on your system. The models are relatively lightweight, and the backend that runs them on your CPU or GPU is selected automatically. At least 4GB of RAM is required.
 ## Configuration
@@ -280,7 +280,7 @@ By selecting the appropriate configuration, users can optimize their dedicated L
 - Disable the `improve_contrast` motion setting, especially if you are running LPR at night and the frame is mostly dark. This will prevent small pixel changes and smaller areas of motion from triggering license plate detection.
 - Ensure your camera's timestamp is covered with a motion mask so that it's not incorrectly detected as a license plate.
 - For non-Frigate+ users, you may need to change your camera settings for a clearer image or decrease your global `recognition_threshold` config if your plates are not being accurately recognized at night.
-- The secondary pipeline mode runs a local AI model on your CPU to detect plates. Increasing detect `fps` will increase CPU usage proportionally.
+- The secondary pipeline mode runs a local AI model on your CPU or GPU (auto-selected) to detect plates. Increasing detect `fps` will increase resource usage proportionally.
 ## FAQ
@@ -335,4 +335,4 @@ Use `match_distance` to allow small character mismatches. Alternatively, define
 ### Will LPR slow down my system?
-LPR runs on the CPU, so performance impact depends on your hardware. Ensure you have at least 4GB RAM and a capable CPU for optimal results. If you are running the Dedicated LPR Camera mode, resource usage will be higher compared to users who run a model that natively detects license plates. Tune your motion detection settings for your dedicated LPR camera so that the license plate detection model runs only when necessary.
+LPR's performance impact depends on your hardware. Ensure you have at least 4GB RAM and a capable CPU or GPU for optimal results. If you are running the Dedicated LPR Camera mode, resource usage will be higher compared to users who run a model that natively detects license plates. Tune your motion detection settings for your dedicated LPR camera so that the license plate detection model runs only when necessary.
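
The tuning knobs these docs mention map onto a handful of config keys. A minimal sketch of how they might look together, assuming Frigate's documented YAML layout; the camera name, fps value, threshold, and mask coordinates are illustrative placeholders, not recommended defaults:

```yaml
lpr:
  enabled: true
  # illustrative value; lower it only if plates are missed at night
  recognition_threshold: 0.85
cameras:
  dedicated_lpr_cam:  # hypothetical camera name
    detect:
      fps: 5  # higher fps raises CPU/GPU usage proportionally
    motion:
      improve_contrast: false  # avoid triggering on small nighttime pixel changes
      mask: 0.0,0.0,0.2,0.0,0.2,0.05,0.0,0.05  # hypothetical region covering the timestamp
```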

View File

@@ -789,9 +789,7 @@ class LicensePlateProcessingMixin:
         input_w = int(input_h * max_wh_ratio)
         # check for model-specific input width
-        model_input_w = self.model_runner.recognition_model.runner.ort.get_inputs()[
-            0
-        ].shape[3]
+        model_input_w = self.model_runner.recognition_model.runner.get_input_width()
         if isinstance(model_input_w, int) and model_input_w > 0:
             input_w = model_input_w
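
For context, here is a standalone sketch of what the `ort` path of the new helper does, assuming onnxruntime is installed and an NCHW recognition model sits at the path below; `recognition.onnx` is an illustrative placeholder, not a Frigate artifact name:

```python
# Probe an ONNX model's static input width (NCHW layout -> index 3),
# returning -1 when the dimension is dynamic, matching the diff's contract.
import onnxruntime as ort

sess = ort.InferenceSession("recognition.onnx", providers=["CPUExecutionProvider"])
shape = sess.get_inputs()[0].shape  # e.g. [1, 3, 48, 320]; dynamic dims appear as strings
width = shape[3] if len(shape) >= 4 and isinstance(shape[3], int) else -1
print(width)
```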

View File

@@ -108,7 +108,7 @@ class EmbeddingMaintainer(threading.Thread):
         # model runners to share between realtime and post processors
         if self.config.lpr.enabled:
-            lpr_model_runner = LicensePlateModelRunner(self.requestor)
+            lpr_model_runner = LicensePlateModelRunner(self.requestor, device="AUTO")
         # realtime processors
         self.realtime_processors: list[RealTimeProcessorApi] = []
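
How a `device="AUTO"` hint can translate into backend selection is not shown in this hunk. The sketch below is a hypothetical illustration using ONNX Runtime's provider list, not Frigate's actual selection logic:

```python
# Prefer any accelerated execution provider when device == "AUTO",
# falling back to CPU; assumes onnxruntime is installed.
import onnxruntime as ort

def providers_for(device: str) -> list[str]:
    available = ort.get_available_providers()
    if device == "CPU":
        return ["CPUExecutionProvider"]
    accelerated = [p for p in available if p != "CPUExecutionProvider"]
    return accelerated + ["CPUExecutionProvider"]

# usage (model path is illustrative):
# session = ort.InferenceSession("model.onnx", providers=providers_for("AUTO"))
print(providers_for("AUTO"))
```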

View File

@@ -65,6 +65,31 @@ class ONNXModelRunner:
         elif self.type == "ort":
             return [input.name for input in self.ort.get_inputs()]
 
+    def get_input_width(self):
+        """Get the input width of the model regardless of backend."""
+        if self.type == "ort":
+            return self.ort.get_inputs()[0].shape[3]
+        elif self.type == "ov":
+            input_info = self.interpreter.inputs
+            first_input = input_info[0]
+
+            try:
+                partial_shape = first_input.get_partial_shape()
+                # width dimension
+                if len(partial_shape) >= 4 and partial_shape[3].is_static:
+                    return partial_shape[3].get_length()
+
+                # width is dynamic or can't be determined
+                return -1
+            except Exception:
+                try:
+                    # some OpenVINO versions still expose a static shape directly
+                    input_shape = first_input.shape
+                    return input_shape[3] if len(input_shape) >= 4 else -1
+                except Exception:
+                    return -1
+        return -1
+
     def run(self, input: dict[str, Any]) -> Any:
         if self.type == "ov":
             infer_request = self.interpreter.create_infer_request()
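
A standalone sketch of the OpenVINO shape probing added above, assuming the `openvino` package is installed and a model exists at the illustrative path below; "AUTO" here is OpenVINO's built-in device-selection plugin:

```python
# Read a model, let OpenVINO's AUTO plugin pick a device, then probe the
# first input's width via its partial shape, returning -1 if dynamic.
import openvino as ov

core = ov.Core()
model = core.read_model("model.onnx")  # illustrative path
compiled = core.compile_model(model, "AUTO")
first_input = compiled.inputs[0]
ps = first_input.get_partial_shape()
width = ps[3].get_length() if len(ps) >= 4 and ps[3].is_static else -1
print(width)
```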