LPR improvements (#17289)

* config options

* processing in maintainer

* detect and process dedicated lpr plates

* create camera type, add manual event and save snapshot

* use const

* ensure lpr events are always detections, typing fixes

* docs

* docs tweaks

* add preprocessing and penalization for low confidence chars
Josh Hawkins 2025-03-23 14:30:48 -05:00 committed by GitHub
parent b7fcd41737
commit fa4643fddf
14 changed files with 706 additions and 194 deletions

View File

@ -3,16 +3,17 @@ id: license_plate_recognition
title: License Plate Recognition (LPR)
---
Frigate can recognize license plates on vehicles and automatically add the detected characters to the `recognized_license_plate` field or a known name as a `sub_label` to objects that are of type `car`. A common use case may be to read the license plates of cars pulling into a driveway or cars passing by on a street.
Frigate can recognize license plates on vehicles and automatically add the detected characters to the `recognized_license_plate` field or a known name as a `sub_label` to tracked objects of type `car`. A common use case may be to read the license plates of cars pulling into a driveway or cars passing by on a street.
LPR works best when the license plate is clearly visible to the camera. For moving vehicles, Frigate continuously refines the recognition process, keeping the most confident result. However, LPR does not run on stationary vehicles.
When a plate is recognized, the recognized name is:
- Added to the `car` tracked object as a `sub_label` (if known) or the `recognized_license_plate` field (if unknown)
- Viewable in the Review Item Details pane in Review and the Tracked Object Details pane in Explore.
- Added as a `sub_label` (if known) or the `recognized_license_plate` field (if unknown) to a tracked object.
- Viewable in the Review Item Details pane in Review (sub labels).
- Viewable in the Tracked Object Details pane in Explore (sub labels and recognized license plates).
- Filterable through the More Filters menu in Explore.
- Published via the `frigate/events` MQTT topic as a `sub_label` (known) or `recognized_license_plate` (unknown) for the tracked object.
- Published via the `frigate/events` MQTT topic as a `sub_label` (known) or `recognized_license_plate` (unknown) for the `car` tracked object.
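For example, plates published to `frigate/events` can be consumed with a few lines of Python. This is a minimal sketch assuming the `paho-mqtt` package; the broker address is a placeholder, and the exact payload field layout may differ between Frigate versions.

```python
import json

import paho.mqtt.client as mqtt


def on_message(client, userdata, msg):
    payload = json.loads(msg.payload)
    after = payload.get("after") or {}
    if after.get("label") != "car":
        return
    # a known plate arrives as a sub_label; an unknown one under
    # recognized_license_plate (payload layout may vary by Frigate version)
    plate = after.get("sub_label") or after.get("data", {}).get(
        "recognized_license_plate"
    )
    if plate:
        print(f"{after.get('camera')}: {plate}")


client = mqtt.Client()  # paho-mqtt 1.x constructor; 2.x adds a callback_api_version argument
client.on_message = on_message
client.connect("mqtt.local", 1883)  # placeholder broker address
client.subscribe("frigate/events")
client.loop_forever()
```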
## Model Requirements
@ -22,7 +23,7 @@ Users without a model that detects license plates can still run LPR. Frigate use
:::note
Frigate needs to first detect a `car` before it can recognize a license plate. If you're using a dedicated LPR camera or have a zoomed-in view, make sure the camera captures enough of the `car` for Frigate to detect it reliably.
In the default mode, Frigate's LPR needs to first detect a `car` before it can recognize a license plate. If you're using a dedicated LPR camera and have a zoomed-in view where a `car` will not be detected, you can still run LPR, but the configuration parameters will differ from the default mode. See the [Dedicated LPR Cameras](#dedicated-lpr-cameras) section below.
:::
@ -39,7 +40,17 @@ lpr:
enabled: True
```
Ensure that your camera is configured to detect objects of type `car`, and that a car is actually being detected by Frigate. Otherwise, LPR will not run.
You can also enable it for specific cameras only at the camera level:
```yaml
cameras:
driveway:
...
lpr:
enabled: True
```
For non-dedicated LPR cameras, ensure that your camera is configured to detect objects of type `car`, and that a car is actually being detected by Frigate. Otherwise, LPR will not run.
Like the other real-time processors in Frigate, license plate recognition runs on the camera stream defined by the `detect` role in your config. To ensure optimal performance, select a suitable resolution for this stream in your camera's firmware that fits your specific scene and requirements.
@ -78,6 +89,8 @@ Fine-tune the LPR feature using these optional parameters:
## Configuration Examples
These configuration parameters are available at the global level of your config. The only optional parameters that should be set at the camera level are `enabled` and `min_area`.
```yaml
lpr:
enabled: True
@ -110,6 +123,70 @@ lpr:
- "MN D3163"
```
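Detected plate text is matched against these known plates using regular expressions. A minimal sketch of how such a lookup could work; the `known_plates` mapping and `lookup_sub_label` helper are illustrative, not Frigate's internal code:

```python
import re

# hypothetical known_plates mapping mirroring the YAML above
known_plates = {
    "Josh": ["ABC-1234", "ABC-1235"],
    "Sara": ["XYZ-5678"],
    "Any MN Plate": [r"MN [A-Z]\d{4}"],
}


def lookup_sub_label(detected: str) -> str | None:
    # return the configured name if any pattern fully matches the detected text
    for name, patterns in known_plates.items():
        if any(re.fullmatch(pattern, detected) for pattern in patterns):
            return name
    return None


print(lookup_sub_label("MN D3163"))  # -> "Any MN Plate"
```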
## Dedicated LPR Cameras
Dedicated LPR cameras are single-purpose cameras with powerful optical zoom to capture license plates on distant vehicles, often with fine-tuned settings to capture plates at night.
Users with a dedicated LPR camera can run Frigate's LPR by specifying a camera type of `lpr` in the camera configuration. An example config for a dedicated LPR camera might look like this:
```yaml
# LPR global configuration
lpr:
enabled: True
min_area: 2000
min_plate_length: 4
# Dedicated LPR camera configuration
cameras:
dedicated_lpr_camera:
type: "lpr" # required to use dedicated LPR camera mode
lpr:
enabled: True
expire_time: 3 # optional, default shown
ffmpeg: ...
detect:
enabled: False # optional, disable Frigate's standard object detection pipeline
fps: 5
width: 1920
height: 1080
motion:
threshold: 30
contour_area: 80 # use an increased value here to tune out small motion changes
improve_contrast: false
mask: 0.704,0.007,0.709,0.052,0.989,0.055,0.993,0.001 # ensure your camera's timestamp is masked
record:
enabled: True # disable recording if you only want snapshots
detections:
enabled: True
retain:
default: 7
```
The camera-level `type` setting tells Frigate to treat your camera as a dedicated LPR camera. Setting this option bypasses Frigate's standard object detection pipeline so that a `car` does not need to be detected to run LPR. This dedicated LPR pipeline does not utilize defined zones or object masks, and the license plate detector is always run on the full frame whenever motion activity occurs. If a plate is found, a snapshot at the highest scoring moment is saved as a `car` object, visible in Explore and searchable by the recognized plate via Explore's More Filters.
An optional config variable for dedicated LPR cameras only, `expire_time`, can be specified under the `lpr` configuration at the camera level to change the time it takes for Frigate to consider a previously tracked plate as expired.
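Roughly, the processor reuses an existing plate event when a new read is textually similar and seen recently enough, and expires it otherwise. A simplified sketch of that bookkeeping, assuming the `Levenshtein` package; `match_or_create` and the event ID scheme are illustrative:

```python
import time

from Levenshtein import jaro_winkler  # the same library this commit imports

SIMILARITY_THRESHOLD = 0.8  # the processor's default similarity threshold
EXPIRE_TIME = 3  # seconds; matches the expire_time default above

tracked: dict[str, dict] = {}  # event_id -> {"plate": str, "last_seen": float}


def match_or_create(plate: str) -> str:
    now = time.time()
    # expire plates that have not been seen within expire_time
    for event_id in [
        i for i, d in tracked.items() if now - d["last_seen"] > EXPIRE_TIME
    ]:
        del tracked[event_id]
    # reuse the existing event when the new read is textually close enough
    for event_id, data in tracked.items():
        if jaro_winkler(data["plate"], plate) >= SIMILARITY_THRESHOLD:
            data["last_seen"] = now
            return event_id
    event_id = f"evt-{now}"  # hypothetical ID scheme; Frigate generates its own
    tracked[event_id] = {"plate": plate, "last_seen": now}
    return event_id
```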
:::note
When using `type: "lpr"` for a camera, a non-standard object detection pipeline is used. Any detected license plates on dedicated LPR cameras are treated similarly to manual events in Frigate. Note that for `car` objects with license plates:
- Review items will always be classified as a `detection`.
- Snapshots will always be saved.
- Tracked objects are retained according to your retain settings for `record` and `snapshots`.
- Zones and object masks cannot be used.
- The `frigate/events` MQTT topic will not publish tracked object updates, though `frigate/reviews` will if recordings are enabled.
:::
### Best practices for using Dedicated LPR camera mode
- Tune your motion detection and increase the `contour_area` until you see only larger motion boxes being created as cars pass through the frame (likely somewhere between 50 and 90 for a 1920x1080 detect stream). Increasing the `contour_area` filters out small areas of motion and prevents excessive resource use from looking for license plates in frames that don't have a car passing through them. A simplified illustration of this filtering appears after this list.
- Disable the `improve_contrast` motion setting, especially if you are running LPR at night and the frame is mostly dark. This will prevent small pixel changes and smaller areas of motion from triggering license plate detection.
- Ensure your camera's timestamp is covered with a motion mask so that it's not incorrectly detected as a license plate.
- While not strictly required, it may be beneficial to disable standard object detection on your dedicated LPR camera (`detect` --> `enabled: False`). If you've set the camera type to `"lpr"`, license plate detection will still be performed on the entire frame when motion occurs.
- If multiple tracked objects are being produced for the same license plate, you can increase the `expire_time` so that plates are not expired as quickly.
- You may need to change your camera settings for a clearer image or decrease your global `recognition_threshold` config if your plates are not being accurately recognized at night.
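To illustrate the first point, this is roughly what contour-area filtering does. It is a toy OpenCV sketch, not Frigate's actual motion detection code, and `significant_motion` is a hypothetical helper:

```python
import cv2
import numpy as np

CONTOUR_AREA = 80  # the tuned value from the example config above


def significant_motion(prev_gray: np.ndarray, gray: np.ndarray) -> bool:
    delta = cv2.absdiff(prev_gray, gray)
    thresh = cv2.threshold(delta, 30, 255, cv2.THRESH_BINARY)[1]
    contours, _ = cv2.findContours(
        thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE
    )
    # small contours (noise, shadows, the timestamp) fall below the cutoff
    return any(cv2.contourArea(c) >= CONTOUR_AREA for c in contours)
```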
## FAQ
### Why isn't my license plate being detected and recognized?
@ -118,14 +195,13 @@ Ensure that:
- Your camera has a clear, human-readable, well-lit view of the plate. If you can't read the plate, Frigate certainly won't be able to. This may require changing video size, quality, or frame rate settings on your camera, depending on your scene and how fast the vehicles are traveling.
- The plate is large enough in the image (try adjusting `min_area` or increasing the resolution of your camera's stream).
- A `car` is detected first, as LPR only runs on recognized vehicles.
If you are using a Frigate+ model or a custom model that detects license plates, ensure that `license_plate` is added to your list of objects to track.
If you are using the free model that ships with Frigate, you should _not_ add `license_plate` to the list of objects to track.
### Can I run LPR without detecting `car` objects?
No, Frigate requires a `car` to be detected first before recognizing a license plate.
In normal LPR mode, Frigate requires a `car` to be detected first before recognizing a license plate. If you have a dedicated LPR camera, you can change the camera `type` to `"lpr"` to use the Dedicated LPR Camera algorithm. This comes with important caveats, though. See the [Dedicated LPR Cameras](#dedicated-lpr-cameras) section above.
### How can I improve detection accuracy?
@ -150,4 +226,4 @@ Use `match_distance` to allow small character mismatches. Alternatively, define
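For instance, with `match_distance: 1` a single-character OCR slip still matches a known plate. A minimal sketch assuming the `Levenshtein` package; `matches_known_plate` is a hypothetical helper:

```python
from Levenshtein import distance  # the library Frigate uses for plate matching


def matches_known_plate(detected: str, known: str, match_distance: int) -> bool:
    # a read like "ABC-1224" still matches "ABC-1234" when match_distance >= 1
    return distance(detected, known) <= match_distance


print(matches_known_plate("ABC-1224", "ABC-1234", 1))  # True
```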
### Will LPR slow down my system?
LPR runs on the CPU, so performance impact depends on your hardware. Ensure you have at least 4GB RAM and a capable CPU for optimal results.
LPR runs on the CPU, so performance impact depends on your hardware. Ensure you have at least 4GB RAM and a capable CPU for optimal results. If you are running the Dedicated LPR Camera mode, resource usage will be higher compared to users who run a model that natively detects license plates. Tune your motion detection settings for your dedicated LPR camera so that the license plate detection model runs only when necessary.

View File

@ -562,6 +562,7 @@ face_recognition:
blur_confidence_filter: True
# Optional: Configuration for license plate recognition capability
# NOTE: enabled and min_area can be overridden at the camera level
lpr:
# Optional: Enable license plate recognition (default: shown below)
enabled: False
@ -656,6 +657,9 @@ cameras:
# If disabled: config is used but no live stream and no capture etc.
# Events/Recordings are still viewable.
enabled: True
# Optional: camera type used for some Frigate features (default: shown below)
# Options are "generic" and "lpr"
type: "generic"
# Required: ffmpeg settings for the camera
ffmpeg:
# Required: A list of input streams for the camera. See documentation for more information.

View File

@ -409,9 +409,13 @@ class CameraState:
self.previous_frame_id = frame_name
def save_manual_event_image(
self, event_id: str, label: str, draw: dict[str, list[dict]]
self,
frame: np.ndarray | None,
event_id: str,
label: str,
draw: dict[str, list[dict]],
) -> None:
img_frame = self.get_current_frame()
img_frame = frame if frame is not None else self.get_current_frame()
# write clean snapshot if enabled
if self.camera_config.snapshots.clean_copy:

View File

@ -11,6 +11,7 @@ class DetectionTypeEnum(str, Enum):
api = "api"
video = "video"
audio = "audio"
lpr = "lpr"
class DetectionPublisher(Publisher):

View File

@ -15,6 +15,8 @@ class EventMetadataTypeEnum(str, Enum):
regenerate_description = "regenerate_description"
sub_label = "sub_label"
recognized_license_plate = "recognized_license_plate"
lpr_event_create = "lpr_event_create"
save_lpr_snapshot = "save_lpr_snapshot"
class EventMetadataPublisher(Publisher):

View File

@ -1,4 +1,5 @@
import os
from enum import Enum
from typing import Optional
from pydantic import Field, PrivateAttr
@ -42,6 +43,11 @@ from .zone import ZoneConfig
__all__ = ["CameraConfig"]
class CameraTypeEnum(str, Enum):
generic = "generic"
lpr = "lpr"
class CameraConfig(FrigateBaseModel):
name: Optional[str] = Field(None, title="Camera name.", pattern=REGEX_CAMERA_NAME)
enabled: bool = Field(default=True, title="Enable camera.")
@ -102,6 +108,7 @@ class CameraConfig(FrigateBaseModel):
onvif: OnvifConfig = Field(
default_factory=OnvifConfig, title="Camera Onvif Configuration."
)
type: CameraTypeEnum = Field(default=CameraTypeEnum.generic, title="Camera Type")
ui: CameraUiConfig = Field(
default_factory=CameraUiConfig, title="Camera UI Modifications."
)

View File

@ -127,6 +127,11 @@ class LicensePlateRecognitionConfig(FrigateBaseModel):
class CameraLicensePlateRecognitionConfig(FrigateBaseModel):
enabled: bool = Field(default=False, title="Enable license plate recognition.")
expire_time: int = Field(
default=3,
title="Expire plates not seen after number of seconds (for dedicated LPR cameras only).",
gt=0,
)
min_area: int = Field(
default=1000,
title="Minimum area of license plate to begin running recognition.",

View File

@ -1,18 +1,26 @@
"""Handle processing images for face detection and recognition."""
import base64
import datetime
import logging
import math
import random
import re
import string
from typing import List, Optional, Tuple
import cv2
import numpy as np
from Levenshtein import distance
from Levenshtein import distance, jaro_winkler
from pyclipper import ET_CLOSEDPOLYGON, JT_ROUND, PyclipperOffset
from shapely.geometry import Polygon
from frigate.comms.event_metadata_updater import EventMetadataTypeEnum
from frigate.comms.event_metadata_updater import (
EventMetadataPublisher,
EventMetadataTypeEnum,
)
from frigate.config.camera.camera import CameraTypeEnum
from frigate.embeddings.onnx.lpr_embedding import LPR_EMBEDDING_SIZE
from frigate.util.image import area
logger = logging.getLogger(__name__)
@ -28,6 +36,8 @@ class LicensePlateProcessingMixin:
"license_plate" not in self.config.objects.all_objects
)
self.event_metadata_publisher = EventMetadataPublisher()
self.ctc_decoder = CTCDecoder()
self.batch_size = 6
@ -38,6 +48,9 @@ class LicensePlateProcessingMixin:
self.box_thresh = 0.6
self.mask_thresh = 0.6
# matching
self.similarity_threshold = 0.8
def _detect(self, image: np.ndarray) -> List[np.ndarray]:
"""
Detect possible license plates in the input image by first resizing and normalizing it,
@ -197,11 +210,8 @@ class LicensePlateProcessingMixin:
# set to True to write each cropped image for debugging
if False:
save_image = cv2.cvtColor(
plate_images[original_idx], cv2.COLOR_RGB2BGR
)
filename = f"debug/frames/plate_{original_idx}_{plate}_{area}.jpg"
cv2.imwrite(filename, save_image)
cv2.imwrite(filename, plate_images[original_idx])
license_plates[original_idx] = plate
average_confidences[original_idx] = average_confidence
@ -320,7 +330,7 @@ class LicensePlateProcessingMixin:
# Use pyclipper to shrink the polygon slightly based on the computed distance.
offset = PyclipperOffset()
offset.AddPath(points, JT_ROUND, ET_CLOSEDPOLYGON)
points = np.array(offset.Execute(distance * 1.75)).reshape((-1, 1, 2))
points = np.array(offset.Execute(distance * 1.5)).reshape((-1, 1, 2))
# get the minimum bounding box around the shrunken polygon.
box, min_side = self._get_min_boxes(points)
@ -624,6 +634,47 @@ class LicensePlateProcessingMixin:
assert image.shape[2] == input_shape[0], "Unexpected number of image channels."
# convert to grayscale
if image.shape[2] == 3:
gray = cv2.cvtColor(image, cv2.COLOR_RGB2GRAY)
else:
gray = image
# detect noise with Laplacian variance
laplacian = cv2.Laplacian(gray, cv2.CV_64F)
noise_variance = np.var(laplacian)
brightness = cv2.mean(gray)[0]
noise_threshold = 70
brightness_threshold = 150
is_noisy = (
noise_variance > noise_threshold and brightness < brightness_threshold
)
# apply bilateral filter and sharpening only if noisy
if is_noisy:
logger.debug(
f"Noise detected (variance: {noise_variance:.1f}, brightness: {brightness:.1f}) - denoising"
)
smoothed = cv2.bilateralFilter(gray, d=15, sigmaColor=100, sigmaSpace=100)
sharpening_kernel = np.array([[-1, -1, -1], [-1, 9, -1], [-1, -1, -1]])
processed = cv2.filter2D(smoothed, -1, sharpening_kernel)
else:
logger.debug(
f"No noise detected (variance: {noise_variance:.1f}, brightness: {brightness:.1f}) - skipping denoising and sharpening"
)
processed = gray
# apply CLAHE for contrast enhancement
grid_size = (
max(4, input_w // 40),
max(4, input_h // 40),
)
clahe = cv2.createCLAHE(clipLimit=1.5, tileGridSize=grid_size)
enhanced = clahe.apply(processed)
# Convert back to 3-channel for model compatibility
image = cv2.cvtColor(enhanced, cv2.COLOR_GRAY2RGB)
# dynamically adjust input width based on max_wh_ratio
input_w = int(input_h * max_wh_ratio)
@ -649,6 +700,13 @@ class LicensePlateProcessingMixin:
)
padded_image[:, :, :resized_w] = resized_image
if False:
current_time = int(datetime.datetime.now().timestamp() * 1000)
cv2.imwrite(
f"debug/frames/preprocessed_recognition_{current_time}.jpg",
image,
)
return padded_image
@staticmethod
@ -710,18 +768,38 @@ class LicensePlateProcessingMixin:
top_score = -1
top_box = None
img_h, img_w = input.shape[0], input.shape[1]
# Calculate resized dimensions and padding based on _preprocess_inputs
if img_w > img_h:
resized_h = int(((img_h / img_w) * LPR_EMBEDDING_SIZE) // 4 * 4)
resized_w = LPR_EMBEDDING_SIZE
x_offset = (LPR_EMBEDDING_SIZE - resized_w) // 2
y_offset = (LPR_EMBEDDING_SIZE - resized_h) // 2
scale_x = img_w / resized_w
scale_y = img_h / resized_h
else:
resized_w = int(((img_w / img_h) * LPR_EMBEDDING_SIZE) // 4 * 4)
resized_h = LPR_EMBEDDING_SIZE
x_offset = (LPR_EMBEDDING_SIZE - resized_w) // 2
y_offset = (LPR_EMBEDDING_SIZE - resized_h) // 2
scale_x = img_w / resized_w
scale_y = img_h / resized_h
# Loop over predictions
for prediction in predictions:
score = prediction[6]
if score >= confidence_threshold:
bbox = prediction[1:5]
# Scale boxes back to original image size
scale_x = input.shape[1] / 256
scale_y = input.shape[0] / 256
bbox[0] *= scale_x
bbox[1] *= scale_y
bbox[2] *= scale_x
bbox[3] *= scale_y
# Adjust for padding and scale to original image
bbox[0] = (bbox[0] - x_offset) * scale_x
bbox[1] = (bbox[1] - y_offset) * scale_y
bbox[2] = (bbox[2] - x_offset) * scale_x
bbox[3] = (bbox[3] - y_offset) * scale_y
if score > top_score:
top_score = score
top_box = bbox
if score > top_score:
top_score = score
@ -729,8 +807,8 @@ class LicensePlateProcessingMixin:
# Return the top scoring bounding box if found
if top_box is not None:
# expand box by 30% to help with OCR
expansion = (top_box[2:] - top_box[:2]) * 0.30
# expand box by 5% to help with OCR
expansion = (top_box[2:] - top_box[:2]) * 0.05
# Expand box
expanded_box = np.array(
@ -750,6 +828,7 @@ class LicensePlateProcessingMixin:
def _should_keep_previous_plate(
self, id, top_plate, top_char_confidences, top_area, avg_confidence
):
"""Determine if the previous plate should be kept over the current one."""
if id not in self.detected_license_plates:
return False
@ -764,68 +843,88 @@ class LicensePlateProcessingMixin:
)
# 1. Normalize metrics
# Length score - use relative comparison
# If lengths are equal, score is 0.5 for both
# If one is longer, it gets a higher score up to 1.0
max_length_diff = 4 # Maximum expected difference in plate lengths
# Length score: Equal lengths = 0.5, penalize extra characters if low confidence
length_diff = len(top_plate) - len(prev_plate)
curr_length_score = 0.5 + (
length_diff / (2 * max_length_diff)
) # Normalize to 0-1
curr_length_score = max(0, min(1, curr_length_score)) # Clamp to 0-1
prev_length_score = 1 - curr_length_score # Inverse relationship
max_length_diff = 3
curr_length_score = 0.5 + (length_diff / (2 * max_length_diff))
curr_length_score = max(0, min(1, curr_length_score))
prev_length_score = 0.5 - (length_diff / (2 * max_length_diff))
prev_length_score = max(0, min(1, prev_length_score))
# Area score (normalize based on max of current and previous)
# Adjust length score based on confidence of extra characters
conf_threshold = 0.75 # Minimum confidence for a character to be "trusted"
if len(top_plate) > len(prev_plate):
extra_conf = min(
top_char_confidences[len(prev_plate) :]
) # Lowest extra char confidence
if extra_conf < conf_threshold:
curr_length_score *= extra_conf / conf_threshold # Penalize if weak
elif len(prev_plate) > len(top_plate):
extra_conf = min(prev_char_confidences[len(top_plate) :])
if extra_conf < conf_threshold:
prev_length_score *= extra_conf / conf_threshold
# Area score: Normalize by max area
max_area = max(top_area, prev_area)
curr_area_score = top_area / max_area
prev_area_score = prev_area / max_area
curr_area_score = top_area / max_area if max_area > 0 else 0
prev_area_score = prev_area / max_area if max_area > 0 else 0
# Average confidence score (already normalized 0-1)
# Confidence scores
curr_conf_score = avg_confidence
prev_conf_score = prev_avg_confidence
# Character confidence comparison score
# Character confidence comparison (average over shared length)
min_length = min(len(top_plate), len(prev_plate))
if min_length > 0:
curr_char_conf = sum(top_char_confidences[:min_length]) / min_length
prev_char_conf = sum(prev_char_confidences[:min_length]) / min_length
else:
curr_char_conf = 0
prev_char_conf = 0
curr_char_conf = prev_char_conf = 0
# 2. Define weights
# Penalize any character below threshold
curr_min_conf = min(top_char_confidences) if top_char_confidences else 0
prev_min_conf = min(prev_char_confidences) if prev_char_confidences else 0
curr_conf_penalty = (
1.0 if curr_min_conf >= conf_threshold else (curr_min_conf / conf_threshold)
)
prev_conf_penalty = (
1.0 if prev_min_conf >= conf_threshold else (prev_min_conf / conf_threshold)
)
# 2. Define weights (boost confidence importance)
weights = {
"length": 0.4,
"area": 0.3,
"avg_confidence": 0.2,
"char_confidence": 0.1,
"length": 0.2,
"area": 0.2,
"avg_confidence": 0.35,
"char_confidence": 0.25,
}
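# Worked example (hypothetical numbers): current read "ABC1234" (avg conf
# 0.92, min char conf 0.85, area 4000) vs previous "ABC123" (avg conf 0.80,
# min char conf 0.60, area 3500). Length scores are 0.67 vs 0.33, area
# scores 1.0 vs 0.875, shared-length char conf 0.93 vs 0.80, so:
# curr = (0.67*0.2 + 1.0*0.2 + 0.92*0.35 + 0.93*0.25) * 1.0 ~= 0.89
# prev = (0.33*0.2 + 0.875*0.2 + 0.80*0.35 + 0.80*0.25) * (0.60/0.75) ~= 0.58
# The cleaner, higher-confidence current read replaces the previous plate.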
# 3. Calculate weighted scores
# 3. Calculate weighted scores with penalty
curr_score = (
curr_length_score * weights["length"]
+ curr_area_score * weights["area"]
+ curr_conf_score * weights["avg_confidence"]
+ curr_char_conf * weights["char_confidence"]
)
) * curr_conf_penalty
prev_score = (
prev_length_score * weights["length"]
+ prev_area_score * weights["area"]
+ prev_conf_score * weights["avg_confidence"]
+ prev_char_conf * weights["char_confidence"]
)
) * prev_conf_penalty
# 4. Log the comparison for debugging
# 4. Log the comparison
logger.debug(
f"Plate comparison - Current plate: {top_plate} (score: {curr_score:.3f}) vs "
f"Previous plate: {prev_plate} (score: {prev_score:.3f})\n"
f"Plate comparison - Current: {top_plate} (score: {curr_score:.3f}, min_conf: {curr_min_conf:.2f}) vs "
f"Previous: {prev_plate} (score: {prev_score:.3f}, min_conf: {prev_min_conf:.2f})\n"
f"Metrics - Length: {len(top_plate)} vs {len(prev_plate)} (scores: {curr_length_score:.2f} vs {prev_length_score:.2f}), "
f"Area: {top_area} vs {prev_area}, "
f"Avg Conf: {avg_confidence:.2f} vs {prev_avg_confidence:.2f}"
f"Avg Conf: {avg_confidence:.2f} vs {prev_avg_confidence:.2f}, "
f"Char Conf: {curr_char_conf:.2f} vs {prev_char_conf:.2f}"
)
# 5. Return True if we should keep the previous plate (i.e., if it scores higher)
# 5. Return True if previous plate scores higher
return prev_score > curr_score
def __update_yolov9_metrics(self, duration: float) -> None:
@ -842,57 +941,55 @@ class LicensePlateProcessingMixin:
"""
self.metrics.alpr_pps.value = (self.metrics.alpr_pps.value * 9 + duration) / 10
def lpr_process(self, obj_data: dict[str, any], frame: np.ndarray):
def _generate_plate_event(self, camera: str, plate: str, plate_score: float) -> str:
"""Generate a unique ID for a plate event based on camera and text."""
now = datetime.datetime.now().timestamp()
rand_id = "".join(random.choices(string.ascii_lowercase + string.digits, k=6))
event_id = f"{now}-{rand_id}"
self.event_metadata_publisher.publish(
EventMetadataTypeEnum.lpr_event_create,
(
now,
camera,
"car",
event_id,
True,
plate_score,
None,
plate,
),
)
return event_id
def lpr_process(
self, obj_data: dict[str, any], frame: np.ndarray, dedicated_lpr: bool = False
):
"""Look for license plates in image."""
if not self.config.cameras[obj_data["camera"]].lpr.enabled:
camera = obj_data if dedicated_lpr else obj_data["camera"]
current_time = int(datetime.datetime.now().timestamp())
if not self.config.cameras[camera].lpr.enabled:
return
id = obj_data["id"]
# don't run for non car objects
if obj_data.get("label") != "car":
logger.debug("Not a processing license plate for non car object.")
if not dedicated_lpr and self.config.cameras[camera].type == CameraTypeEnum.lpr:
return
# don't run for stationary car objects
if obj_data.get("stationary") == True:
logger.debug("Not a processing license plate for a stationary car object.")
return
# don't overwrite sub label for objects that have a sub label
# that is not a license plate
if obj_data.get("sub_label") and id not in self.detected_license_plates:
logger.debug(
f"Not processing license plate due to existing sub label: {obj_data.get('sub_label')}."
)
return
license_plate: Optional[dict[str, any]] = None
if self.requires_license_plate_detection:
logger.debug("Running manual license_plate detection.")
car_box = obj_data.get("box")
if not car_box:
return
if dedicated_lpr:
rgb = cv2.cvtColor(frame, cv2.COLOR_YUV2BGR_I420)
left, top, right, bottom = car_box
car = rgb[top:bottom, left:right]
# double the size of the car for better box detection
car = cv2.resize(car, (int(2 * car.shape[1]), int(2 * car.shape[0])))
# apply motion mask
rgb[self.config.cameras[obj_data].motion.mask == 0] = [0, 0, 0]
if WRITE_DEBUG_IMAGES:
current_time = int(datetime.datetime.now().timestamp())
cv2.imwrite(
f"debug/frames/car_frame_{current_time}.jpg",
car,
f"debug/frames/dedicated_lpr_masked_{current_time}.jpg",
rgb,
)
yolov9_start = datetime.datetime.now().timestamp()
license_plate = self._detect_license_plate(car)
license_plate = self._detect_license_plate(rgb)
logger.debug(
f"YOLOv9 LPD inference time: {(datetime.datetime.now().timestamp() - yolov9_start) * 1000:.2f} ms"
)
@ -901,107 +998,185 @@ class LicensePlateProcessingMixin:
)
if not license_plate:
logger.debug("Detected no license plates for car object.")
logger.debug("Detected no license plates in full frame.")
return
license_plate_area = max(
0,
(license_plate[2] - license_plate[0])
* (license_plate[3] - license_plate[1]),
license_plate_area = (license_plate[2] - license_plate[0]) * (
license_plate[3] - license_plate[1]
)
# check that license plate is valid
# double the value because we've doubled the size of the car
if (
license_plate_area
< self.config.cameras[obj_data["camera"]].lpr.min_area * 2
):
logger.debug("License plate is less than min_area")
if license_plate_area < self.lpr_config.min_area:
logger.debug("License plate area below minimum threshold.")
return
license_plate_frame = car[
license_plate[1] : license_plate[3], license_plate[0] : license_plate[2]
]
else:
# don't run for object without attributes
if not obj_data.get("current_attributes"):
logger.debug("No attributes to parse.")
return
attributes: list[dict[str, any]] = obj_data.get("current_attributes", [])
for attr in attributes:
if attr.get("label") != "license_plate":
continue
if license_plate is None or attr.get("score", 0.0) > license_plate.get(
"score", 0.0
):
license_plate = attr
# no license plates detected in this frame
if not license_plate:
return
license_plate_box = license_plate.get("box")
# check that license plate is valid
if (
not license_plate_box
or area(license_plate_box)
< self.config.cameras[obj_data["camera"]].lpr.min_area
):
logger.debug(f"Invalid license plate box {license_plate}")
return
license_plate_frame = cv2.cvtColor(frame, cv2.COLOR_YUV2BGR_I420)
# Expand the license_plate_box by 30%
box_array = np.array(license_plate_box)
expansion = (box_array[2:] - box_array[:2]) * 0.30
expanded_box = np.array(
[
license_plate_box[0] - expansion[0],
license_plate_box[1] - expansion[1],
license_plate_box[2] + expansion[0],
license_plate_box[3] + expansion[1],
]
).clip(0, [license_plate_frame.shape[1], license_plate_frame.shape[0]] * 2)
# Crop using the expanded box
license_plate_frame = license_plate_frame[
int(expanded_box[1]) : int(expanded_box[3]),
int(expanded_box[0]) : int(expanded_box[2]),
license_plate_frame = rgb[
license_plate[1] : license_plate[3],
license_plate[0] : license_plate[2],
]
# double the size of the license plate frame for better OCR
license_plate_frame = cv2.resize(
license_plate_frame,
(
int(2 * license_plate_frame.shape[1]),
int(2 * license_plate_frame.shape[0]),
),
)
if WRITE_DEBUG_IMAGES:
current_time = int(datetime.datetime.now().timestamp())
cv2.imwrite(
f"debug/frames/license_plate_frame_{current_time}.jpg",
# Double the size for better OCR
license_plate_frame = cv2.resize(
license_plate_frame,
(
int(2 * license_plate_frame.shape[1]),
int(2 * license_plate_frame.shape[0]),
),
)
start = datetime.datetime.now().timestamp()
else:
id = obj_data["id"]
# don't run for non car objects
if obj_data.get("label") != "car":
logger.debug("Not a processing license plate for non car object.")
return
# don't run for stationary car objects
if obj_data.get("stationary") == True:
logger.debug(
"Not a processing license plate for a stationary car object."
)
return
# don't overwrite sub label for objects that have a sub label
# that is not a license plate
if obj_data.get("sub_label") and id not in self.detected_license_plates:
logger.debug(
f"Not processing license plate due to existing sub label: {obj_data.get('sub_label')}."
)
return
license_plate: Optional[dict[str, any]] = None
if self.requires_license_plate_detection:
logger.debug("Running manual license_plate detection.")
car_box = obj_data.get("box")
if not car_box:
return
rgb = cv2.cvtColor(frame, cv2.COLOR_YUV2BGR_I420)
left, top, right, bottom = car_box
car = rgb[top:bottom, left:right]
# double the size of the car for better box detection
car = cv2.resize(car, (int(2 * car.shape[1]), int(2 * car.shape[0])))
if WRITE_DEBUG_IMAGES:
cv2.imwrite(
f"debug/frames/car_frame_{current_time}.jpg",
car,
)
yolov9_start = datetime.datetime.now().timestamp()
license_plate = self._detect_license_plate(car)
logger.debug(
f"YOLOv9 LPD inference time: {(datetime.datetime.now().timestamp() - yolov9_start) * 1000:.2f} ms"
)
self.__update_yolov9_metrics(
datetime.datetime.now().timestamp() - yolov9_start
)
if not license_plate:
logger.debug("Detected no license plates for car object.")
return
license_plate_area = max(
0,
(license_plate[2] - license_plate[0])
* (license_plate[3] - license_plate[1]),
)
# check that license plate is valid
# double the value because we've doubled the size of the car
if (
license_plate_area
< self.config.cameras[obj_data["camera"]].lpr.min_area * 2
):
logger.debug("License plate is less than min_area")
return
license_plate_frame = car[
license_plate[1] : license_plate[3],
license_plate[0] : license_plate[2],
]
else:
# don't run for object without attributes
if not obj_data.get("current_attributes"):
logger.debug("No attributes to parse.")
return
attributes: list[dict[str, any]] = obj_data.get(
"current_attributes", []
)
for attr in attributes:
if attr.get("label") != "license_plate":
continue
if license_plate is None or attr.get(
"score", 0.0
) > license_plate.get("score", 0.0):
license_plate = attr
# no license plates detected in this frame
if not license_plate:
return
license_plate_box = license_plate.get("box")
# check that license plate is valid
if (
not license_plate_box
or area(license_plate_box)
< self.config.cameras[obj_data["camera"]].lpr.min_area
):
logger.debug(f"Invalid license plate box {license_plate}")
return
license_plate_frame = cv2.cvtColor(frame, cv2.COLOR_YUV2BGR_I420)
# Expand the license_plate_box by 30%
box_array = np.array(license_plate_box)
expansion = (box_array[2:] - box_array[:2]) * 0.30
expanded_box = np.array(
[
license_plate_box[0] - expansion[0],
license_plate_box[1] - expansion[1],
license_plate_box[2] + expansion[0],
license_plate_box[3] + expansion[1],
]
).clip(
0, [license_plate_frame.shape[1], license_plate_frame.shape[0]] * 2
)
# Crop using the expanded box
license_plate_frame = license_plate_frame[
int(expanded_box[1]) : int(expanded_box[3]),
int(expanded_box[0]) : int(expanded_box[2]),
]
# double the size of the license plate frame for better OCR
license_plate_frame = cv2.resize(
license_plate_frame,
(
int(2 * license_plate_frame.shape[1]),
int(2 * license_plate_frame.shape[0]),
),
)
if WRITE_DEBUG_IMAGES:
cv2.imwrite(
f"debug/frames/license_plate_frame_{current_time}.jpg",
license_plate_frame,
)
# run detection, returns results sorted by confidence, best first
start = datetime.datetime.now().timestamp()
license_plates, confidences, areas = self._process_license_plate(
license_plate_frame
)
self.__update_lpr_metrics(datetime.datetime.now().timestamp() - start)
logger.debug(f"Text boxes: {license_plates}")
logger.debug(f"Confidences: {confidences}")
logger.debug(f"Areas: {areas}")
if license_plates:
for plate, confidence, text_area in zip(license_plates, confidences, areas):
avg_confidence = (
@ -1012,7 +1187,6 @@ class LicensePlateProcessingMixin:
f"Detected text: {plate} (average confidence: {avg_confidence:.2f}, area: {text_area} pixels)"
)
else:
# no plates found
logger.debug("No text detected")
return
@ -1027,6 +1201,46 @@ class LicensePlateProcessingMixin:
else 0
)
# Check against minimum confidence threshold
if avg_confidence < self.lpr_config.recognition_threshold:
logger.debug(
f"Average confidence {avg_confidence} is less than threshold ({self.lpr_config.recognition_threshold})"
)
return
# For LPR cameras, match or assign plate ID using Jaro-Winkler distance
if dedicated_lpr:
plate_id = None
for existing_id, data in self.detected_license_plates.items():
if (
data["camera"] == camera
and data["last_seen"] is not None
and current_time - data["last_seen"]
<= self.config.cameras[camera].lpr.expire_time
):
similarity = jaro_winkler(data["plate"], top_plate)
if similarity >= self.similarity_threshold:
plate_id = existing_id
logger.debug(
f"Matched plate {top_plate} to {data['plate']} (similarity: {similarity:.3f})"
)
break
if plate_id is None:
plate_id = self._generate_plate_event(
obj_data, top_plate, avg_confidence
)
logger.debug(
f"New plate event for dedicated LPR camera {plate_id}: {top_plate}"
)
else:
logger.debug(
f"Matched existing plate event for dedicated LPR camera {plate_id}: {top_plate}"
)
self.detected_license_plates[plate_id]["last_seen"] = current_time
id = plate_id
# Check if we have a previously detected plate for this ID
if id in self.detected_license_plates:
if self._should_keep_previous_plate(
@ -1035,13 +1249,6 @@ class LicensePlateProcessingMixin:
logger.debug("Keeping previous plate")
return
# Check against minimum confidence threshold
if avg_confidence < self.lpr_config.recognition_threshold:
logger.debug(
f"Average confidence {avg_confidence} is less than threshold ({self.lpr_config.recognition_threshold})"
)
return
# Determine subLabel based on known plates, use regex matching
# Default to the detected plate, use label name if there's a match
sub_label = next(
@ -1068,11 +1275,23 @@ class LicensePlateProcessingMixin:
(id, top_plate, avg_confidence),
)
if dedicated_lpr:
# save the best snapshot
logger.debug(f"Writing snapshot for {id}, {top_plate}, {current_time}")
frame_bgr = cv2.cvtColor(frame, cv2.COLOR_YUV2BGR_I420)
_, buffer = cv2.imencode(".jpg", frame_bgr)
self.sub_label_publisher.publish(
EventMetadataTypeEnum.save_lpr_snapshot,
(base64.b64encode(buffer).decode("ASCII"), id, camera),
)
self.detected_license_plates[id] = {
"plate": top_plate,
"char_confidences": top_char_confidences,
"area": top_area,
"obj_data": obj_data,
"camera": camera,
"last_seen": current_time if dedicated_lpr else None,
}
def handle_request(self, topic, request_data) -> dict[str, any] | None:

View File

@ -35,9 +35,14 @@ class LicensePlateRealTimeProcessor(LicensePlateProcessingMixin, RealTimeProcess
self.sub_label_publisher = sub_label_publisher
super().__init__(config, metrics)
def process_frame(self, obj_data: dict[str, any], frame: np.ndarray):
def process_frame(
self,
obj_data: dict[str, any],
frame: np.ndarray,
dedicated_lpr: bool = False,
):
"""Look for license plates in image."""
self.lpr_process(obj_data, frame)
self.lpr_process(obj_data, frame, dedicated_lpr)
def handle_request(self, topic, request_data) -> dict[str, any] | None:
return

View File

@ -1,6 +1,7 @@
"""Maintain embeddings in SQLite-vec."""
import base64
import datetime
import logging
import os
import threading
@ -13,6 +14,7 @@ import numpy as np
from peewee import DoesNotExist
from playhouse.sqliteq import SqliteQueueDatabase
from frigate.comms.detections_updater import DetectionSubscriber, DetectionTypeEnum
from frigate.comms.embeddings_updater import EmbeddingsRequestEnum, EmbeddingsResponder
from frigate.comms.event_metadata_updater import (
EventMetadataPublisher,
@ -26,6 +28,7 @@ from frigate.comms.recordings_updater import (
RecordingsDataTypeEnum,
)
from frigate.config import FrigateConfig
from frigate.config.camera.camera import CameraTypeEnum
from frigate.const import (
CLIPS_DIR,
UPDATE_EVENT_DESCRIPTION,
@ -97,6 +100,7 @@ class EmbeddingMaintainer(threading.Thread):
self.recordings_subscriber = RecordingsDataSubscriber(
RecordingsDataTypeEnum.recordings_available_through
)
self.detection_subscriber = DetectionSubscriber(DetectionTypeEnum.video)
self.embeddings_responder = EmbeddingsResponder()
self.frame_manager = SharedMemoryFrameManager()
@ -162,12 +166,15 @@ class EmbeddingMaintainer(threading.Thread):
self._process_requests()
self._process_updates()
self._process_recordings_updates()
self._process_dedicated_lpr()
self._expire_dedicated_lpr()
self._process_finalized()
self._process_event_metadata()
self.event_subscriber.stop()
self.event_end_subscriber.stop()
self.recordings_subscriber.stop()
self.detection_subscriber.stop()
self.event_metadata_publisher.stop()
self.event_metadata_subscriber.stop()
self.embeddings_responder.stop()
@ -317,6 +324,7 @@ class EmbeddingMaintainer(threading.Thread):
if (
recordings_available is not None
and event_id in self.detected_license_plates
and self.config.cameras[camera].type != "lpr"
):
processor.process_data(
{
@ -374,6 +382,26 @@ class EmbeddingMaintainer(threading.Thread):
if event_id in self.tracked_events:
del self.tracked_events[event_id]
def _expire_dedicated_lpr(self) -> None:
"""Remove plates not seen for longer than expiration timeout for dedicated lpr cameras."""
now = datetime.datetime.now().timestamp()
to_remove = []
for id, data in self.detected_license_plates.items():
last_seen = data.get("last_seen", 0)
if not last_seen:
continue
if now - last_seen > self.config.cameras[data["camera"]].lpr.expire_time:
to_remove.append(id)
for id in to_remove:
self.event_metadata_publisher.publish(
EventMetadataTypeEnum.manual_event_end,
(id, now),
)
self.detected_license_plates.pop(id)
def _process_recordings_updates(self) -> None:
"""Process recordings updates."""
while True:
@ -406,6 +434,42 @@ class EmbeddingMaintainer(threading.Thread):
event_id, RegenerateDescriptionEnum(source)
)
def _process_dedicated_lpr(self) -> None:
"""Process event updates"""
(topic, data) = self.detection_subscriber.check_for_update(timeout=0.01)
if topic is None:
return
camera, frame_name, _, _, motion_boxes, _ = data
if not camera or not self.config.lpr.enabled or len(motion_boxes) == 0:
return
camera_config = self.config.cameras[camera]
if camera_config.type != CameraTypeEnum.lpr:
return
try:
yuv_frame = self.frame_manager.get(
frame_name, camera_config.frame_shape_yuv
)
except FileNotFoundError:
yuv_frame = None
if yuv_frame is None:
logger.debug(
"Unable to process dedicated LPR update because frame is unavailable."
)
return
for processor in self.realtime_processors:
if isinstance(processor, LicensePlateRealTimeProcessor):
processor.process_frame(camera, yuv_frame, True)
self.frame_manager.close(frame_name)
def _create_thumbnail(self, yuv_frame, box, height=500) -> Optional[bytes]:
"""Return jpg thumbnail of a region of the frame."""
frame = cv2.cvtColor(yuv_frame, cv2.COLOR_YUV2BGR_I420)

View File

@ -278,6 +278,13 @@ class EventProcessor(threading.Thread):
"top_score": event_data["score"],
},
}
if event_data.get("recognized_license_plate") is not None:
event[Event.data]["recognized_license_plate"] = event_data[
"recognized_license_plate"
]
event[Event.data]["recognized_license_plate_score"] = event_data[
"score"
]
Event.insert(event).execute()
elif event_type == EventStateEnum.end:
event = {

View File

@ -577,7 +577,7 @@ class RecordingMaintainer(threading.Thread):
audio_detections,
)
)
elif topic == DetectionTypeEnum.api:
elif topic == DetectionTypeEnum.api or topic == DetectionTypeEnum.lpr:
continue
if frame_time < run_start - stale_frame_count_threshold:

View File

@ -513,7 +513,7 @@ class ReviewSegmentMaintainer(threading.Thread):
_,
audio_detections,
) = data
elif topic == DetectionTypeEnum.api:
elif topic == DetectionTypeEnum.api or topic == DetectionTypeEnum.lpr:
(
camera,
frame_time,
@ -572,13 +572,21 @@ class ReviewSegmentMaintainer(threading.Thread):
or audio in camera_config.review.detections.labels
) and camera_config.review.detections.enabled:
current_segment.audio.add(audio)
elif topic == DetectionTypeEnum.api:
elif topic == DetectionTypeEnum.api or topic == DetectionTypeEnum.lpr:
if manual_info["state"] == ManualEventState.complete:
current_segment.detections[manual_info["event_id"]] = (
manual_info["label"]
)
if self.config.cameras[camera].review.alerts.enabled:
if (
topic == DetectionTypeEnum.api
and self.config.cameras[camera].review.alerts.enabled
):
current_segment.severity = SeverityEnum.alert
elif (
topic == DetectionTypeEnum.lpr
and self.config.cameras[camera].review.detections.enabled
):
current_segment.severity = SeverityEnum.detection
current_segment.last_update = manual_info["end_time"]
elif manual_info["state"] == ManualEventState.start:
self.indefinite_events[camera][manual_info["event_id"]] = (
@ -587,8 +595,16 @@ class ReviewSegmentMaintainer(threading.Thread):
current_segment.detections[manual_info["event_id"]] = (
manual_info["label"]
)
if self.config.cameras[camera].review.alerts.enabled:
if (
topic == DetectionTypeEnum.api
and self.config.cameras[camera].review.alerts.enabled
):
current_segment.severity = SeverityEnum.alert
elif (
topic == DetectionTypeEnum.lpr
and self.config.cameras[camera].review.detections.enabled
):
current_segment.severity = SeverityEnum.detection
# temporarily make it so this event can not end
current_segment.last_update = sys.maxsize
@ -676,6 +692,34 @@ class ReviewSegmentMaintainer(threading.Thread):
logger.warning(
f"Manual event API has been called for {camera}, but alerts are disabled. This manual event will not appear as an alert."
)
elif topic == DetectionTypeEnum.lpr:
if self.config.cameras[camera].review.detections.enabled:
self.active_review_segments[camera] = PendingReviewSegment(
camera,
frame_time,
SeverityEnum.detection,
{manual_info["event_id"]: manual_info["label"]},
{},
[],
set(),
)
if manual_info["state"] == ManualEventState.start:
self.indefinite_events[camera][manual_info["event_id"]] = (
manual_info["label"]
)
# temporarily make it so this event can not end
self.active_review_segments[
camera
].last_update = sys.maxsize
elif manual_info["state"] == ManualEventState.complete:
self.active_review_segments[
camera
].last_update = manual_info["end_time"]
else:
logger.warning(
f"Dedicated LPR camera API has been called for {camera}, but detections are disabled. LPR events will not appear as a detection."
)
self.record_config_subscriber.stop()
self.review_config_subscriber.stop()

View File

@ -1,3 +1,4 @@
import base64
import datetime
import json
import logging
@ -7,6 +8,7 @@ from collections import defaultdict
from enum import Enum
from multiprocessing.synchronize import Event as MpEvent
import cv2
import numpy as np
from peewee import DoesNotExist
@ -394,6 +396,19 @@ class TrackedObjectProcessor(threading.Thread):
return True
def save_lpr_snapshot(self, payload: tuple) -> None:
# save the snapshot image
(frame, event_id, camera) = payload
img = cv2.imdecode(
np.frombuffer(base64.b64decode(frame), dtype=np.uint8),
cv2.IMREAD_COLOR,
)
self.camera_states[camera].save_manual_event_image(
img, event_id, "license_plate", {}
)
def create_manual_event(self, payload: tuple) -> None:
(
frame_time,
@ -409,7 +424,9 @@ class TrackedObjectProcessor(threading.Thread):
) = payload
# save the snapshot image
self.camera_states[camera_name].save_manual_event_image(event_id, label, draw)
self.camera_states[camera_name].save_manual_event_image(
None, event_id, label, draw
)
end_time = frame_time + duration if duration is not None else None
# send event to event maintainer
@ -456,6 +473,59 @@ class TrackedObjectProcessor(threading.Thread):
DetectionTypeEnum.api.value,
)
def create_lpr_event(self, payload: tuple) -> None:
(
frame_time,
camera_name,
label,
event_id,
include_recording,
score,
sub_label,
plate,
) = payload
# send event to event maintainer
self.event_sender.publish(
(
EventTypeEnum.api,
EventStateEnum.start,
camera_name,
"",
{
"id": event_id,
"label": label,
"sub_label": sub_label,
"score": score,
"camera": camera_name,
"start_time": frame_time
- self.config.cameras[camera_name].record.event_pre_capture,
"end_time": None,
"has_clip": self.config.cameras[camera_name].record.enabled
and include_recording,
"has_snapshot": True,
"type": "api",
"recognized_license_plate": plate,
"recognized_license_plate_score": score,
},
)
)
self.ongoing_manual_events[event_id] = camera_name
self.detection_publisher.publish(
(
camera_name,
frame_time,
{
"state": ManualEventState.start,
"label": f"{label}: {sub_label}" if sub_label else label,
"event_id": event_id,
"end_time": None,
},
),
DetectionTypeEnum.lpr.value,
)
def end_manual_event(self, payload: tuple) -> None:
(event_id, end_time) = payload
@ -560,6 +630,10 @@ class TrackedObjectProcessor(threading.Thread):
self.set_recognized_license_plate(
event_id, recognized_license_plate, score
)
elif topic.endswith(EventMetadataTypeEnum.lpr_event_create.value):
self.create_lpr_event(payload)
elif topic.endswith(EventMetadataTypeEnum.save_lpr_snapshot.value):
self.save_lpr_snapshot(payload)
elif topic.endswith(EventMetadataTypeEnum.manual_event_create.value):
self.create_manual_event(payload)
elif topic.endswith(EventMetadataTypeEnum.manual_event_end.value):