update docs with MX3 info
parent 99b2b37238
commit 3f2c6c130c
@ -13,6 +13,7 @@ Frigate supports multiple different detectors that work on different types of hardware:

- [Coral EdgeTPU](#edge-tpu-detector): The Google Coral EdgeTPU is available in USB and M.2 format allowing for a wide range of compatibility with devices.
- [Hailo](#hailo-8): The Hailo8 and Hailo8L AI acceleration modules are available in M.2 format with a HAT for RPi devices, offering a wide range of compatibility with devices.
- [MemryX](#memryx-mx3): The MX3 acceleration module is available in M.2 format, offering broad compatibility across various platforms.

**AMD**
@ -49,7 +50,7 @@ This does not affect using hardware for accelerating other tasks such as [semantic search]

# Officially Supported Detectors
Frigate provides the following builtin detector types: `cpu`, `edgetpu`, `hailo8l`, `onnx`, `openvino`, `rknn`, and `tensorrt`. By default, Frigate will use a single CPU detector. Other detectors may require additional configuration as described below. When using multiple detectors they will run in dedicated processes, but pull from a common queue of detection requests from across all cameras.
Frigate provides the following builtin detector types: `cpu`, `edgetpu`, `hailo8l`, `memryx`, `onnx`, `openvino`, `rknn`, and `tensorrt`. By default, Frigate will use a single CPU detector. Other detectors may require additional configuration as described below. When using multiple detectors they will run in dedicated processes, but pull from a common queue of detection requests from across all cameras.
## Edge TPU Detector

@ -237,6 +238,155 @@ Hailo8 supports all models in the Hailo Model Zoo that include HailoRT post-processing
---

## MemryX MX3

This detector is available for use with the MemryX MX3 accelerator M.2 module. Frigate supports the MX3 on compatible hardware platforms, providing efficient and high-performance object detection.

See the [installation docs](../frigate/installation.md#memryx-mx3) for information on configuring the MemryX hardware.

To configure a MemryX detector, simply set the `type` attribute to `memryx` and follow the configuration guide below.

### Configuration

To configure the MemryX detector, use the following example configuration:

#### Single PCIe MemryX MX3
```yaml
detectors:
  memx0:
    type: memryx
    device: PCIe:0
```
#### Multiple PCIe MemryX MX3 Modules

```yaml
detectors:
  memx0:
    type: memryx
    device: PCIe:0

  memx1:
    type: memryx
    device: PCIe:1

  memx2:
    type: memryx
    device: PCIe:2
```
### Supported Models

MemryX `.dfp` models are automatically downloaded at runtime, if enabled, to the container at `/memryx_models/model_folder/`.
#### YOLO-NAS

The [YOLO-NAS](https://github.com/Deci-AI/super-gradients/blob/master/YOLONAS.md) model included in this detector is downloaded from the [Models Section](#downloading-yolo-nas-model) and compiled to DFP with [mx_nc](https://developer.memryx.com/tools/neural_compiler.html#usage).

The input size for **YOLO-NAS** can be set to either **320x320** (default) or **640x640**.

- The default size of **320x320** is optimized for lower CPU usage and faster inference times.

##### Configuration

Below is the recommended configuration for using the **YOLO-NAS** (small) model with the MemryX detector:
```yaml
detectors:
  memx0:
    type: memryx
    device: PCIe:0

model:
  model_type: yolonas
  width: 320 # can be set to 640 for higher resolution
  height: 320 # can be set to 640 for higher resolution
  input_tensor: hwnc
  input_dtype: float
  # path: yolo_nas_s.dfp  # the model is normally fetched through the runtime, so 'path' can be omitted
  labelmap_path: /labelmap/coco-80.txt
```
#### YOLOX

The model is sourced from the [OpenCV Model Zoo](https://github.com/opencv/opencv_zoo) and precompiled to DFP.

##### Configuration

Below is the recommended configuration for using the **YOLOX** (small) model with the MemryX detector:
```yaml
detectors:
  memx0:
    type: memryx
    device: PCIe:0

model:
  model_type: yolox
  width: 640
  height: 640
  input_tensor: hwnc
  input_dtype: float_denorm
  # path: YOLOX_640_640_3_onnx.dfp  # the model is normally fetched through the runtime, so 'path' can be omitted
  labelmap_path: /labelmap/coco-80.txt
```
#### YOLOv9

The YOLOv9s model included in this detector is downloaded from [the original GitHub](https://github.com/WongKinYiu/yolov9) as in the [Models Section](#yolov9-1) and compiled to DFP with [mx_nc](https://developer.memryx.com/tools/neural_compiler.html#usage).

##### Configuration

Below is the recommended configuration for using the **YOLOv9** (small) model with the MemryX detector:
```yaml
detectors:
  memx0:
    type: memryx
    device: PCIe:0

model:
  model_type: yolo-generic
  width: 320 # can be set to 640 for higher resolution
  height: 320 # can be set to 640 for higher resolution
  input_tensor: hwnc
  input_dtype: float
  # path: YOLO_v9_small_320_320_3_onnx.dfp  # the model is normally fetched through the runtime, so 'path' can be omitted
  labelmap_path: /labelmap/coco-80.txt
```
#### SSDLite MobileNet v2

The model is sourced from the [OpenMMLab Model Zoo](https://mmdeploy-oss.openmmlab.com/model/mmdet-det/ssdlite-e8679f.onnx) and has been converted to DFP.

##### Configuration

Below is the recommended configuration for using the **SSDLite MobileNet v2** model with the MemryX detector:
```yaml
detectors:
  memx0:
    type: memryx
    device: PCIe:0

model:
  model_type: ssd
  width: 320
  height: 320
  input_tensor: hwnc
  input_dtype: float
  # path: SSDlite_MobileNet_v2_320_320_3_onnx.dfp  # the model is normally fetched during runtime, so 'path' can be omitted
  labelmap_path: /labelmap/coco-80.txt
```
#### Using a Custom Model

To use your own model, bind-mount the compiled `.dfp` file into the container and specify its path using `model.path`. You will also have to update the `labelmap` accordingly. A sketch of what this can look like is shown below.

For detailed instructions on compiling models, refer to the [MemryX Compiler](https://developer.memryx.com/tools/neural_compiler.html#usage) docs and [Tutorials](https://developer.memryx.com/tutorials/tutorials.html).
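As a minimal sketch (the host paths, container paths, and file names below are placeholders, not values from these docs):

```yaml
# docker-compose.yml (excerpt): bind-mount the compiled model and its labelmap
volumes:
  - /path/to/custom_model.dfp:/custom_model/custom_model.dfp:ro
  - /path/to/custom_labelmap.txt:/custom_model/labelmap.txt:ro
```

Then point `model.path` and `labelmap_path` at the mounted files in the Frigate config:

```yaml
model:
  model_type: yolo-generic # match the architecture your model was compiled from
  width: 320
  height: 320
  input_tensor: hwnc
  input_dtype: float
  path: /custom_model/custom_model.dfp
  labelmap_path: /custom_model/labelmap.txt
```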
---

## OpenVINO Detector

The OpenVINO detector type runs an OpenVINO IR model on AMD and Intel CPUs, Intel GPUs and Intel VPU hardware. To configure an OpenVINO detector, set the `"type"` attribute to `"openvino"`.
@ -44,6 +44,10 @@ Frigate supports multiple different detectors that work on different types of hardware:

- [Google Coral EdgeTPU](#google-coral-tpu): The Google Coral EdgeTPU is available in USB and M.2 format allowing for a wide range of compatibility with devices.
  - [Supports primarily ssdlite and mobilenet model architectures](../../configuration/object_detectors#edge-tpu-detector)

- [MemryX](#memryx-mx3): The MemryX MX3 accelerator module is available in M.2 format allowing for a wide range of compatibility with devices.
  - [Supports many model architectures](../../configuration/object_detectors#memryx-mx3)
  - Runs best with tiny, small, or medium-size models
**AMD**

- [ROCm](#amd-gpus): ROCm can run on AMD Discrete GPUs to provide efficient object detection
@ -155,6 +159,36 @@ Frigate supports all Jetson boards, from the inexpensive Jetson Nano to the powerful

Inference speed will vary depending on the YOLO model, Jetson platform, and Jetson nvpmodel (GPU/DLA/EMC clock speed). It is typically 20-40 ms for most models. The DLA is more efficient than the GPU, but not faster, so using the DLA will reduce power consumption but will slightly increase inference time.
### MemryX MX3

Frigate supports the MemryX MX3 M.2 AI Acceleration Module on compatible hardware platforms, including both x86 (Intel/AMD) and ARM-based SBCs such as the RPi 5.

A single MemryX MX3 module is capable of handling multiple camera streams using the default models, making it sufficient for most users. For larger deployments with more cameras or bigger models, multiple MX3 modules can be used. Frigate supports multi-detector configurations, allowing you to connect multiple MX3 modules to scale inference capacity seamlessly.

Detailed information is available [in the detector docs](/configuration/object_detectors#memryx-mx3).
Frigate supports the following model resolutions with the MemryX MX3 module:

- **YOLO-NAS**: 320 (default), 640
- **YOLOv9**: 320 (default), 640
- **YOLOX**: 640
- **SSDLite MobileNet v2**: 320
Due to the MX3's architecture, the maximum frames per second supported cannot be calculated as `1 / inference time` and is measured separately. When deciding how many camera streams your configuration can support, use the **MX3 Total FPS** column to approximate the detector's limit, not the inference time.
| Model                | Input Size | MX3 Inference Time | MX3 Total FPS |
| -------------------- | ---------- | ------------------ | ------------- |
| YOLO-NAS-Small       | 320        | ~ 9 ms             | ~ 378         |
| YOLO-NAS-Small       | 640        | ~ 21 ms            | ~ 138         |
| YOLOv9s              | 320        | ~ 16 ms            | ~ 382         |
| YOLOv9s              | 640        | ~ 41 ms            | ~ 110         |
| YOLOX-Small          | 640        | ~ 16 ms            | ~ 263         |
| SSDLite MobileNet v2 | 320        | ~ 5 ms             | ~ 1056        |
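To make the distinction concrete, here is the arithmetic for the first table row (numbers taken from the table above; the interpretation that the gap comes from the MX3 keeping multiple frames in flight is our reading, not a claim from these docs):

```text
YOLO-NAS-Small @ 320:
  naive estimate from latency: 1 / 9 ms ≈ 111 FPS
  measured MX3 Total FPS:      ~ 378 FPS
```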
Inference speeds may vary depending on the host platform’s CPU performance. The above data was measured on an **Intel 13700 CPU**. Platforms like Raspberry Pi, x86 hosts, Orange Pi, and other ARM-based SBCs have different levels of processing capability, which will affect post-processing time and may result in lower FPS.
### Rockchip platform

Frigate supports hardware video processing on all Rockchip boards. However, hardware object detection is only supported on these boards:
@ -132,6 +132,71 @@ If you are using `docker run`, add this option to your command `--device /dev/ha

Finally, configure [hardware object detection](/configuration/object_detectors#hailo-8l) to complete the setup.
### MemryX MX3

The MemryX MX3 accelerator is available in the M.2 2280 form factor (like an NVMe SSD) and supports a variety of configurations:

- x86 (Intel/AMD) PCs
- Raspberry Pi 5
- Orange Pi 5 Plus/Max
- Multi-M.2 PCIe carrier cards
#### Installation

To get started with MX3 hardware setup for your system, refer to the [Hardware Setup Guide](https://developer.memryx.com/get_started/hardware_setup.html).

Then follow these steps to install the correct driver/runtime configuration:
1. Copy or download [this script](https://github.com/blakeblackshear/frigate/blob/dev/docker/memryx/user_installation.sh).
2. Ensure it has execution permissions with `sudo chmod +x user_installation.sh`
3. Run the script with `./user_installation.sh`
4. **Restart your computer** to complete driver installation.
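Condensed into a shell session, the steps above look roughly like this (a sketch; it assumes `wget` is available and uses the raw-file URL corresponding to the script linked above):

```bash
# Fetch the installation script (raw URL for the file linked above)
wget https://raw.githubusercontent.com/blakeblackshear/frigate/dev/docker/memryx/user_installation.sh

# Make it executable, then run it
sudo chmod +x user_installation.sh
./user_installation.sh

# Reboot to complete the driver installation
sudo reboot
```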
#### Setup

To set up Frigate, follow the default installation instructions using, for example, the `ghcr.io/blakeblackshear/frigate:stable` image.

Next, grant Docker permission to access your hardware by adding the following lines to your `docker-compose.yml` file:
```yaml
devices:
  - /dev/memx0
```
During the configuration process, you should run Docker in privileged mode.

In `docker-compose.yml`, add: `privileged: true`
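Putting the device mapping and privileged mode together, the relevant part of a `docker-compose.yml` service might look like this (a sketch; the service name and host paths are placeholders):

```yaml
services:
  frigate:
    image: ghcr.io/blakeblackshear/frigate:stable
    privileged: true # needed during the configuration process
    devices:
      - /dev/memx0 # MemryX MX3 device node
    volumes:
      - /path/to/your/config:/config
      - /path/to/your/storage:/media/frigate
```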
If you can't use Docker Compose, you can run the container with something similar to this:
```bash
docker run -d \
  --name frigate-memx \
  --restart=unless-stopped \
  --mount type=tmpfs,target=/tmp/cache,tmpfs-size=1000000000 \
  --shm-size=256m \
  -v /path/to/your/storage:/media/frigate \
  -v /path/to/your/config:/config \
  -v /etc/localtime:/etc/localtime:ro \
  -e FRIGATE_RTSP_PASSWORD='password' \
  --add-host gateway.docker.internal:host-gateway \
  --privileged=true \
  -p 8971:8971 \
  -p 8554:8554 \
  -p 5000:5000 \
  -p 8555:8555/tcp \
  -p 8555:8555/udp \
  --device /dev/memx0 \
  ghcr.io/blakeblackshear/frigate:stable
```
#### Configuration

Finally, configure [hardware object detection](/configuration/object_detectors#memryx-mx3) to complete the setup.
### Rockchip platform

Make sure that you use a Linux distribution that comes with the Rockchip BSP kernel 5.10 or 6.1 and the necessary drivers (especially rkvdec2 and rknpu). To check, enter the following commands: