Mirror of https://github.com/blakeblackshear/frigate.git, synced 2025-07-26 13:47:03 +02:00

LPR tweaks (#17428)

* fix snapshot when using dedicated lpr
* enhancement and debugging config
* docs

This commit is contained in:
parent 3f1b4438e4
commit 37e0b9b904
@@ -87,6 +87,20 @@ Fine-tune the LPR feature using these optional parameters:
- For example, setting `match_distance: 1` allows a plate `ABCDE` to match `ABCBE` or `ABCD`.
- This parameter will _not_ operate on known plates that are defined as regular expressions. You should define the full string of your plate in `known_plates` in order to use `match_distance`.

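For illustration only (not part of the documentation diff above), a minimal sketch of how `match_distance` pairs with `known_plates`; the plate name and plate string are hypothetical:

```yaml
lpr:
  enabled: True
  # allow one character of difference when comparing a recognized plate to known_plates
  match_distance: 1
  known_plates:
    # hypothetical plate; use the full string so match_distance can apply (regex entries are skipped)
    my_car:
      - "ABCDE"
```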
### Image Enhancement

- **`enhancement`**: A value between **0 and 10** that adjusts the level of image enhancement applied to captured license plates before they are processed for recognition. This preprocessing step can sometimes improve accuracy but may also have the opposite effect.
- **Default:** `0` (no enhancement)
- Higher values increase contrast, sharpen details, and reduce noise, but excessive enhancement can blur or distort characters, actually making them much harder for Frigate to recognize.
- This setting is best adjusted **at the camera level** if running LPR on multiple cameras.
- If Frigate is already recognizing plates correctly, leave this setting at the default of `0`. However, if you're experiencing frequent character issues or incomplete plates and you can already easily read the plates yourself, try increasing the value gradually, starting at **5** and adjusting as needed. To preview how different enhancement levels affect your plates, use the `debug_save_plates` configuration option (see below).

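As an illustrative sketch (the camera name is hypothetical), a per-camera override of `enhancement`, which the reference config notes can be set at the camera level alongside `enabled` and `min_area`:

```yaml
cameras:
  driveway_cam:  # hypothetical camera name
    lpr:
      enabled: True
      # tune enhancement for this camera only; other cameras keep the global value
      enhancement: 5
```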
### Debugging

- **`debug_save_plates`**: Set to `True` to save images of the text detected on plates for debugging. These images are stored in `/media/frigate/clips/lpr`, organized into subdirectories by `<camera>/<event_id>`, and named based on the capture timestamp.
- These saved images are not full plates but rather the specific areas of text detected on the plates. It is normal for the text detection model to sometimes find multiple areas of text on the plate. Use them to analyze what text Frigate recognized and how image enhancement affects detection.
- **Note:** Frigate does **not** automatically delete these debug images. Once LPR is functioning correctly, you should disable this option and manually remove the saved files to free up storage.

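A sketch of a temporary debugging setup that combines the options above (the values are illustrative, not recommendations):

```yaml
lpr:
  enabled: True
  # preview how a given enhancement level affects the cropped text regions
  enhancement: 5
  # set back to False when done and clean up /media/frigate/clips/lpr manually
  debug_save_plates: True
```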
## Configuration Examples

These configuration parameters are available at the global level of your config. The only optional parameters that should be set at the camera level are `enabled`, `min_area`, and `enhancement`.
@@ -143,6 +157,7 @@ cameras:
    lpr:
      enabled: True
      expire_time: 3 # optional, default
+      enhancement: 3 # optional, enhance the image before trying to recognize characters
    ffmpeg: ...
    detect:
      enabled: False # optional, disable Frigate's standard object detection pipeline
@@ -151,7 +166,7 @@ cameras:
      height: 1080
    motion:
      threshold: 30
-      contour_area: 80 # use an increased value here to tune out small motion changes
+      contour_area: 60 # use an increased value here to tune out small motion changes
      improve_contrast: false
      mask: 0.704,0.007,0.709,0.052,0.989,0.055,0.993,0.001 # ensure your camera's timestamp is masked
    record:
@@ -197,6 +212,7 @@ Ensure that:

- Your camera has a clear, human-readable, well-lit view of the plate. If you can't read the plate's characters, Frigate certainly won't be able to, even if the model is recognizing a `license_plate`. This may require changing video size, quality, or frame rate settings on your camera, depending on your scene and how fast the vehicles are traveling.
- The plate is large enough in the image (try adjusting `min_area` or increasing the resolution of your camera's stream).
- Your `enhancement` level (if you've changed it from the default of `0`) is not too high. Too much enhancement applies too much denoising and can make the plate characters blurry and unreadable.

If you are using a Frigate+ model or a custom model that detects license plates, ensure that `license_plate` is added to your list of objects to track.
If you are using the free model that ships with Frigate, you should _not_ add `license_plate` to the list of objects to track.
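For the Frigate+ / custom-model case, a sketch of the relevant object tracking config (only add `license_plate` if your model actually detects it; do not add it for the free built-in model):

```yaml
objects:
  track:
    - car
    # only for Frigate+ or custom models that detect plates
    - license_plate
```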
@@ -229,6 +245,7 @@ Use `match_distance` to allow small character mismatches. Alternatively, define
- If you are using a Frigate+ model or a model that detects license plates, watch the debug view (Settings --> Debug) to ensure that `license_plate` is being detected with a `car`.
- Watch the debug view to see plates recognized in real time. For non-dedicated LPR cameras, the `car` label will change to the recognized plate when LPR is enabled and working.
- Adjust `detection_threshold` and `recognition_threshold` settings per the suggestions [above](#advanced-configuration).
- Enable `debug_save_plates` to save images of detected text on plates to the clips directory (`/media/frigate/clips/lpr`).
- Enable debug logs for LPR by adding `frigate.data_processing.common.license_plate: debug` to your `logger` configuration. These logs are _very_ verbose, so only enable this when necessary.
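A sketch of the logger setting referenced in the last bullet, assuming the standard `logger` section of the Frigate config; remove it once you are done debugging:

```yaml
logger:
  default: info
  logs:
    # very verbose; enable only while troubleshooting LPR
    frigate.data_processing.common.license_plate: debug
```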
@@ -562,7 +562,7 @@ face_recognition:
  blur_confidence_filter: True

# Optional: Configuration for license plate recognition capability
-# NOTE: enabled and min_area can be overridden at the camera level
+# NOTE: enabled, min_area, and enhancement can be overridden at the camera level
lpr:
  # Optional: Enable license plate recognition (default: shown below)
  enabled: False
@@ -580,6 +580,11 @@ lpr:
  match_distance: 1
  # Optional: Known plates to track (strings or regular expressions) (default: shown below)
  known_plates: {}
+  # Optional: Enhance the detected plate image with contrast adjustment and denoising (default: shown below)
+  # A value between 0 and 10. Higher values are not always better and may perform worse than lower values.
+  enhancement: 0
+  # Optional: Save plate images to /media/frigate/clips/lpr for debugging purposes (default: shown below)
+  debug_save_plates: False

# Optional: Configuration for AI generated tracked object descriptions
# WARNING: Depending on the provider, this will send thumbnails over the internet
@@ -126,6 +126,16 @@ class LicensePlateRecognitionConfig(FrigateBaseModel):
    known_plates: Optional[Dict[str, List[str]]] = Field(
        default={}, title="Known plates to track (strings or regular expressions)."
    )
+    enhancement: int = Field(
+        default=0,
+        title="Amount of contrast adjustment and denoising to apply to license plate images before recognition.",
+        ge=0,
+        le=10,
+    )
+    debug_save_plates: bool = Field(
+        default=False,
+        title="Save plates captured for LPR for debugging purposes.",
+    )


class CameraLicensePlateRecognitionConfig(FrigateBaseModel):
@@ -139,5 +149,11 @@ class CameraLicensePlateRecognitionConfig(FrigateBaseModel):
        default=1000,
        title="Minimum area of license plate to begin running recognition.",
    )
+    enhancement: int = Field(
+        default=0,
+        title="Amount of contrast adjustment and denoising to apply to license plate images before recognition.",
+        ge=0,
+        le=10,
+    )

    model_config = ConfigDict(extra="ignore", protected_namespaces=())
@@ -4,9 +4,11 @@ import base64
import datetime
import logging
import math
import os
import random
import re
import string
from pathlib import Path
from typing import List, Optional, Tuple

import cv2
@@ -20,6 +22,7 @@ from frigate.comms.event_metadata_updater import (
    EventMetadataTypeEnum,
)
from frigate.config.camera.camera import CameraTypeEnum
+from frigate.const import CLIPS_DIR
from frigate.embeddings.onnx.lpr_embedding import LPR_EMBEDDING_SIZE
from frigate.util.image import area

@@ -107,7 +110,7 @@ class LicensePlateProcessingMixin:
        return self._process_classification_output(images, outputs)

    def _recognize(
-        self, images: List[np.ndarray]
+        self, camera: string, images: List[np.ndarray]
    ) -> Tuple[List[str], List[List[float]]]:
        """
        Recognize the characters on the detected license plates using the recognition model.
@@ -137,7 +140,7 @@ class LicensePlateProcessingMixin:
            # preprocess the images based on the max aspect ratio
            for i in range(index, min(num_images, index + self.batch_size)):
                norm_image = self._preprocess_recognition_image(
-                    images[indices[i]], max_wh_ratio
+                    camera, images[indices[i]], max_wh_ratio
                )
                norm_image = norm_image[np.newaxis, :]
                norm_images.append(norm_image)
@@ -146,7 +149,7 @@ class LicensePlateProcessingMixin:
        return self.ctc_decoder(outputs)

    def _process_license_plate(
-        self, image: np.ndarray
+        self, camera: string, id: string, image: np.ndarray
    ) -> Tuple[List[str], List[float], List[int]]:
        """
        Complete pipeline for detecting, classifying, and recognizing license plates in the input image.
@@ -174,21 +177,37 @@ class LicensePlateProcessingMixin:
        boxes = self._sort_boxes(list(boxes))
        plate_images = [self._crop_license_plate(image, x) for x in boxes]

+        current_time = int(datetime.datetime.now().timestamp())
+
        if WRITE_DEBUG_IMAGES:
-            current_time = int(datetime.datetime.now().timestamp())
            for i, img in enumerate(plate_images):
                cv2.imwrite(
                    f"debug/frames/license_plate_cropped_{current_time}_{i + 1}.jpg",
                    img,
                )

+        if self.config.lpr.debug_save_plates:
+            logger.debug(f"{camera}: Saving plates for event {id}")
+
+            Path(os.path.join(CLIPS_DIR, f"lpr/{camera}/{id}")).mkdir(
+                parents=True, exist_ok=True
+            )
+
+            for i, img in enumerate(plate_images):
+                cv2.imwrite(
+                    os.path.join(
+                        CLIPS_DIR, f"lpr/{camera}/{id}/{current_time}_{i + 1}.jpg"
+                    ),
+                    img,
+                )
+
        # keep track of the index of each image for correct area calc later
        sorted_indices = np.argsort([x.shape[1] / x.shape[0] for x in plate_images])
        reverse_mapping = {
            idx: original_idx for original_idx, idx in enumerate(sorted_indices)
        }

-        results, confidences = self._recognize(plate_images)
+        results, confidences = self._recognize(camera, plate_images)

        if results:
            license_plates = [""] * len(plate_images)
@@ -606,7 +625,7 @@ class LicensePlateProcessingMixin:
        return images, results

    def _preprocess_recognition_image(
-        self, image: np.ndarray, max_wh_ratio: float
+        self, camera: string, image: np.ndarray, max_wh_ratio: float
    ) -> np.ndarray:
        """
        Preprocess an image for recognition by dynamically adjusting its width.
@@ -634,13 +653,38 @@ class LicensePlateProcessingMixin:
        else:
            gray = image

-        # apply CLAHE for contrast enhancement
-        grid_size = (
-            max(4, input_w // 40),
-            max(4, input_h // 40),
-        )
-        clahe = cv2.createCLAHE(clipLimit=1.5, tileGridSize=grid_size)
-        enhanced = clahe.apply(gray)
+        if self.config.cameras[camera].lpr.enhancement > 3:
+            # denoise using a configurable pixel neighborhood value
+            logger.debug(
+                f"{camera}: Denoising recognition image (level: {self.config.cameras[camera].lpr.enhancement})"
+            )
+            smoothed = cv2.bilateralFilter(
+                gray,
+                d=5 + self.config.cameras[camera].lpr.enhancement,
+                sigmaColor=10 * self.config.cameras[camera].lpr.enhancement,
+                sigmaSpace=10 * self.config.cameras[camera].lpr.enhancement,
+            )
+            sharpening_kernel = np.array([[-1, -1, -1], [-1, 9, -1], [-1, -1, -1]])
+            processed = cv2.filter2D(smoothed, -1, sharpening_kernel)
+        else:
+            processed = gray
+
+        if self.config.cameras[camera].lpr.enhancement > 0:
+            # apply CLAHE for contrast enhancement at any enhancement level above 0
+            logger.debug(
+                f"{camera}: Enhancing contrast for recognition image (level: {self.config.cameras[camera].lpr.enhancement})"
+            )
+            grid_size = (
+                max(4, input_w // 40),
+                max(4, input_h // 40),
+            )
+            clahe = cv2.createCLAHE(
+                clipLimit=2 if self.config.cameras[camera].lpr.enhancement > 5 else 1.5,
+                tileGridSize=grid_size,
+            )
+            enhanced = clahe.apply(processed)
+        else:
+            enhanced = processed

        # Convert back to 3-channel for model compatibility
        image = cv2.cvtColor(enhanced, cv2.COLOR_GRAY2RGB)
@@ -948,6 +992,8 @@ class LicensePlateProcessingMixin:
            return

        if dedicated_lpr:
            id = "dedicated-lpr"

            rgb = cv2.cvtColor(frame, cv2.COLOR_YUV2BGR_I420)

            # apply motion mask
@@ -1149,7 +1195,7 @@ class LicensePlateProcessingMixin:
        # run detection, returns results sorted by confidence, best first
        start = datetime.datetime.now().timestamp()
        license_plates, confidences, areas = self._process_license_plate(
-            license_plate_frame
+            camera, id, license_plate_frame
        )
        self.__update_lpr_metrics(datetime.datetime.now().timestamp() - start)

@@ -1257,9 +1303,10 @@ class LicensePlateProcessingMixin:
                f"{camera}: Writing snapshot for {id}, {top_plate}, {current_time}"
            )
            frame_bgr = cv2.cvtColor(frame, cv2.COLOR_YUV2BGR_I420)
+            _, encoded_img = cv2.imencode(".jpg", frame_bgr)
            self.sub_label_publisher.publish(
                EventMetadataTypeEnum.save_lpr_snapshot,
-                (base64.b64encode(frame_bgr).decode("ASCII"), id, camera),
+                (base64.b64encode(encoded_img).decode("ASCII"), id, camera),
            )

        self.detected_license_plates[id] = {