Final Documentation and debug update

OmriAx 2025-03-02 12:31:18 +02:00
parent 8b5a100530
commit 62afebf54f
3 changed files with 46 additions and 46 deletions

View File

@ -129,17 +129,22 @@ detectors:
type: edgetpu
device: pci
```
---
## Hailo-8
## Hailo-8 / Hailo-8L Detector
This detector is available for use with the Hailo-8 and Hailo-8L AI Acceleration Modules.
This detector is available for use with both Hailo-8 and Hailo-8L AI Acceleration Modules. The integration automatically detects your hardware architecture via the Hailo CLI and selects the appropriate default model if no custom model is specified.
See the [installation docs](../frigate/installation.md#hailo-8l) for information on configuring the hailo8.
See the [installation docs](../frigate/installation.md#hailo-8l) for information on configuring the Hailo hardware.
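For reference, the architecture detection amounts to querying the HailoRT command-line tool and parsing its output. Below is a minimal sketch of this approach, assuming `hailortcli` is installed; the exact output parsing is illustrative, not necessarily Frigate's implementation:
```python
import subprocess

def detect_hailo_arch() -> str | None:
    """Return 'hailo8' or 'hailo8l', or None if no Hailo device is found."""
    try:
        result = subprocess.run(
            ["hailortcli", "fw-control", "identify"],
            capture_output=True,
            text=True,
            check=True,
        )
    except (FileNotFoundError, subprocess.CalledProcessError):
        return None  # CLI not installed or no Hailo device present
    for line in result.stdout.splitlines():
        # Expect a line such as "Device Architecture: HAILO8L"
        if "Device Architecture" in line:
            return "hailo8l" if "HAILO8L" in line.upper() else "hailo8"
    return None
```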
### Configuration
#### YOLO (Recommended)
Use this configuration for YOLO-based models. When no custom model path or URL is provided, the detector checks for a cached model at `/config/model_cache/hailo` and automatically downloads the default model based on the detected hardware:
- **Hailo-8 hardware:** Uses `yolov8s.hef`
- **Hailo-8L hardware:** Uses `yolov6n.hef`
```yaml
detectors:
hailo8l:
@ -153,13 +158,15 @@ model:
input_pixel_format: rgb
input_dtype: int
model_type: hailoyolo
# The detector will automatically use the appropriate model:
# - YOLOv8s for Hailo-8L hardware
# - YOLOv8m for Hailo-8 hardware
# The detector automatically selects the default model based on your hardware:
# - For Hailo-8 hardware: YOLOv8s (default: yolov8s.hef)
# - For Hailo-8L hardware: YOLOv6n (default: yolov6n.hef)
```
#### SSD
For SSD-based models, provide the model path (or URL) to your compiled SSD model:
```yaml
detectors:
hailo8l:
@ -175,13 +182,9 @@ model:
path: /config/model_cache/h8l_cache/ssd_mobilenet_v1.hef
```
### Custom Models
#### Custom Models
The Hailo-8L detector supports all YOLO models that have been compiled for Hailo hardware and include post-processing. The detector automatically detects your hardware type (Hailo-8 or Hailo-8L) and uses the appropriate model.
#### Using a Custom URL
You can specify a custom URL to download a model directly:
The Hailo detector supports all YOLO models compiled for Hailo hardware that include post-processing. You can specify a custom URL to download your model directly. If provided, the detector will use the custom model instead of the default one.
```yaml
detectors:
@ -199,8 +202,11 @@ model:
model_type: hailoyolo
```
The detector will automatically handle different output formats from all supported YOLO variants. It's important to match the `model_type` with the actual model architecture for proper processing.
* Tested custom models: yolov5, yolov8, yolov9, yolov11
> **Note:**
> If both a model path and URL are provided, the detector will first check the local model path. If the file is not found, it will download the model from the URL.
>
> *Tested custom models include: yolov5, yolov8, yolov9, yolov11.*
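To make the fallback order concrete, here is a minimal sketch of the model-resolution logic described above (the constant and helper names are illustrative assumptions, not Frigate's actual API):
```python
import os
import urllib.request

CACHE_DIR = "/config/model_cache/hailo"
DEFAULT_MODELS = {"hailo8": "yolov8s.hef", "hailo8l": "yolov6n.hef"}

def resolve_model(path: str | None, url: str | None, arch: str) -> str:
    # 1. A configured local model path wins when the file exists.
    if path and os.path.isfile(path):
        return path
    # 2. Otherwise download from the custom URL, caching the result.
    if url:
        dest = os.path.join(CACHE_DIR, os.path.basename(url))
        if not os.path.isfile(dest):
            os.makedirs(CACHE_DIR, exist_ok=True)
            urllib.request.urlretrieve(url, dest)
        return dest
    # 3. Fall back to the per-architecture default model; Frigate
    #    downloads it on first use (download source omitted here).
    return os.path.join(CACHE_DIR, DEFAULT_MODELS[arch])
```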
## OpenVINO Detector

View File

@ -92,37 +92,31 @@ Inference speeds will vary greatly depending on the GPU and the model used.
With the [rocm](../configuration/object_detectors.md#amdrocm-gpu-detector) detector Frigate can take advantage of many discrete AMD GPUs.
### Hailo-8
### Hailo-8 Detector
Frigate supports the Hailo-8 and Hailo-8L AI Acceleration Modules on compatible hardware platforms, including the Raspberry Pi 5 with the PCIe hat from the AI kit. The Hailo accelerator provides dedicated hardware for efficiently running neural network inference.
Frigate supports both the Hailo-8 and Hailo-8L AI Acceleration Modules on compatible hardware platforms, including the Raspberry Pi 5 with the PCIe hat from the AI kit. The Hailo detector integration in Frigate automatically identifies your hardware type and selects the appropriate default model when a custom model isn't provided.
The inference time for the Hailo-8L chip is around 17-21 ms for the SSD MobileNet Version 1 model and 15-18 ms for YOLOv8s models. For the more powerful Hailo-8 chip, the YOLOv8m model has an inference time of approximately 12-15 ms.
**Default Model Configuration:**
- **Hailo-8L:** Default model is **YOLOv6n**.
- **Hailo-8:** Default model is **YOLOv8s**.
In real-world testing with 8 cameras running simultaneously, each camera maintained a detection rate of approximately 20-25 FPS, demonstrating the Hailo accelerator's capability to handle multiple video streams efficiently.
Additionally, the heavier **YOLOv8m** model has been tested on Hailo-8 hardware for users who require higher accuracy despite increased inference time.
Testing on x86 platforms has also been conducted with excellent results. The x86 implementation benefits from having two PCIe lanes available instead of one, resulting in improved FPS, higher throughput, and lower latency compared to the Raspberry Pi setup.
In real-world deployments, even with multiple cameras running concurrently, Frigate has demonstrated consistent performance. Testing on x86 platforms—with dual PCIe lanes—yields further improvements in FPS, throughput, and latency compared to the Raspberry Pi setup.
#### Supported Models
#### Supported Models & Inference Times
The Hailo-8L detector supports all YOLO variants that have been compiled for Hailo hardware with post-processing, including:
| Model Type | Hardware | Inference Time (RPi) | Inference Time (x86) | Resolution |
|--------------------|------------------|----------------------|----------------------|------------|
| SSD MobileNet V1 | Hailo-8L | 17-21 ms | 12-15 ms | 300×300 |
| SSD MobileNet V1 | Hailo-8 | 10-13 ms | | 300×300 |
| YOLOv6n (Default) | Hailo-8L | 16-20 ms | 10-13 ms | 640×640 |
| YOLOv8s (Default) | Hailo-8 | 15-19 ms | 12-18 ms | 640×640 |
| YOLOv8m (Tested) | Hailo-8 | 18-25 ms | 16-22 ms | 640×640 |
- YOLOv5
- YOLOv8
- Any YOLO variant with HailoRT post-processing
- SSD MobileNet v1
*Note: Inference times may vary based on system configuration and operating conditions.*
| Model Type | Hardware | Inference Time | Resolution |
|------------|----------|----------------|------------|
| SSD MobileNet V1 | Hailo-8L (RPi) | 17-21 ms | 300×300 |
| SSD MobileNet V1 | Hailo-8L (x86) | 12-15 ms | 300×300 |
| SSD MobileNet V1 | Hailo-8 (RPi) | 13-16 ms | 300×300 |
| YOLOv8s | Hailo-8L (RPi) | 15-18 ms | 640×640 |
| YOLOv8s | Hailo-8L (x86) | 10-13 ms | 640×640 |
| YOLOv8m | Hailo-8 (RPi) | 12-15 ms | 640×640 |
| YOLOv8m | Hailo-8 (x86) | 8-11 ms | 640×640 |
The detector automatically identifies your hardware type (Hailo-8 or Hailo-8L) and downloads the appropriate model. The Hailo detector keeps a persistent inference pipeline alive between detection calls, reducing per-frame overhead and providing fast, consistent performance with multiple cameras.
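Conceptually, the persistent pipeline follows a simple queue-and-worker pattern. The sketch below is a simplified illustration of that design, not the actual implementation:
```python
import queue
import threading

class PersistentPipeline:
    """Keep one inference worker alive across frames, so each detect()
    call costs two queue operations instead of a pipeline rebuild."""

    def __init__(self, infer_fn):
        self.input_queue: queue.Queue = queue.Queue()
        self.output_queue: queue.Queue = queue.Queue()
        self._infer = infer_fn  # e.g. a HailoRT inference callable
        threading.Thread(target=self._run, daemon=True).start()

    def _run(self):
        while True:
            frame = self.input_queue.get()
            if frame is None:  # sentinel value shuts the worker down
                break
            self.output_queue.put(self._infer(frame))

    def detect(self, frame):
        self.input_queue.put(frame)
        return self.output_queue.get()

    def close(self):
        self.input_queue.put(None)
```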
This documentation is part of Frigate's internal integration guide, ensuring users get optimal performance by automatically adapting to the available Hailo hardware.
## Community Supported Detectors

View File

@ -204,7 +204,7 @@ class HailoDetector(DetectionApi):
def __init__(self, detector_config: 'HailoDetectorConfig'):
global ARCH
ARCH = detect_hailo_arch()
self.cache_dir = "/config/model_cache/hailo"
self.cache_dir = MODEL_CACHE_DIR
self.device_type = detector_config.device
# Model attributes should be provided in detector_config.model
self.model_path = detector_config.model.path if hasattr(detector_config.model, "path") else None
@ -223,7 +223,7 @@ class HailoDetector(DetectionApi):
self.input_queue = queue.Queue()
self.output_queue = queue.Queue()
try:
logging.info(f"[INIT] Loading HEF model from {self.working_model_path}")
logging.debug(f"[INIT] Loading HEF model from {self.working_model_path}")
self.inference_engine = HailoAsyncInference(
self.working_model_path,
self.input_queue,
@ -231,7 +231,7 @@ class HailoDetector(DetectionApi):
self.batch_size
)
self.input_shape = self.inference_engine.get_input_shape()
logging.info(f"[INIT] Model input shape: {self.input_shape}")
logging.debug(f"[INIT] Model input shape: {self.input_shape}")
except Exception as e:
logging.error(f"[INIT] Failed to initialize HailoAsyncInference: {e}")
raise
@ -296,11 +296,11 @@ class HailoDetector(DetectionApi):
return model_path
def detect_raw(self, tensor_input):
logging.info("[DETECT_RAW] Starting detection")
logging.debug("[DETECT_RAW] Starting detection")
# Ensure tensor_input has a batch dimension
if isinstance(tensor_input, np.ndarray) and len(tensor_input.shape) == 3:
tensor_input = np.expand_dims(tensor_input, axis=0)
logging.info(f"[DETECT_RAW] Expanded input shape to {tensor_input.shape}")
logging.debug(f"[DETECT_RAW] Expanded input shape to {tensor_input.shape}")
# Enqueue input and a sentinel value
self.input_queue.put(tensor_input)
@ -314,7 +314,7 @@ class HailoDetector(DetectionApi):
return np.zeros((20, 6), dtype=np.float32)
original_input, infer_results = result
logging.info("[DETECT_RAW] Inference completed.")
logging.debug("[DETECT_RAW] Inference completed.")
# If infer_results is a single-element list, unwrap it.
if isinstance(infer_results, list) and len(infer_results) == 1:
@ -361,7 +361,7 @@ class HailoDetector(DetectionApi):
elif detections_array.shape[0] > 20:
detections_array = detections_array[:20, :]
logging.info(f"[DETECT_RAW] Processed detections: {detections_array}")
logging.debug(f"[DETECT_RAW] Processed detections: {detections_array}")
return detections_array
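# Note: detect_raw always returns a fixed-shape (20, 6) float32 array,
# padding with zeros or truncating as needed; each row follows Frigate's
# [class_id, score, y_min, x_min, y_max, x_max] detection convention.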
# Preprocess method using inline utility
@ -370,10 +370,10 @@ class HailoDetector(DetectionApi):
# Close the Hailo device
def close(self):
logging.info("[CLOSE] Closing HailoDetector")
logging.debug("[CLOSE] Closing HailoDetector")
try:
self.inference_engine.hef.close()
logging.info("Hailo device closed successfully")
logging.debug("Hailo device closed successfully")
except Exception as e:
logging.error(f"Failed to close Hailo device: {e}")
raise