Mirror of https://github.com/blakeblackshear/frigate.git (synced 2025-03-04 00:17:22 +01:00)
Remove rocm detector (#16913)
* Remove rocm detector plugin
* Update docs to recommend using onnx for rocm
* Formatting
This commit is contained in:
parent 0128ec2ba6
commit 71e6e04d77
@@ -49,7 +49,7 @@ This does not affect using hardware for accelerating other tasks such as [semant

 # Officially Supported Detectors

-Frigate provides the following builtin detector types: `cpu`, `edgetpu`, `hailo8l`, `onnx`, `openvino`, `rknn`, `rocm`, and `tensorrt`. By default, Frigate will use a single CPU detector. Other detectors may require additional configuration as described below. When using multiple detectors they will run in dedicated processes, but pull from a common queue of detection requests from across all cameras.
+Frigate provides the following builtin detector types: `cpu`, `edgetpu`, `hailo8l`, `onnx`, `openvino`, `rknn`, and `tensorrt`. By default, Frigate will use a single CPU detector. Other detectors may require additional configuration as described below. When using multiple detectors they will run in dedicated processes, but pull from a common queue of detection requests from across all cameras.

 ## Edge TPU Detector
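The paragraph above describes the multi-detector architecture: each detector runs in its own process but pulls from one shared queue of detection requests. A minimal sketch of what that looks like in config, assuming two Edge TPU devices (the detector names and device identifiers below are illustrative, not part of this commit):

```yaml
# Hypothetical example: two detectors, each in a dedicated process,
# pulling from the shared detection request queue.
detectors:
  coral1:
    type: edgetpu
    device: pci:0
  coral2:
    type: edgetpu
    device: pci:1
```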
@@ -367,7 +367,7 @@ model:

 ### Setup

-The `rocm` detector supports running YOLO-NAS models on AMD GPUs. Use a frigate docker image with `-rocm` suffix, for example `ghcr.io/blakeblackshear/frigate:stable-rocm`.
+Support for AMD GPUs is provided using the [ONNX detector](#ONNX). In order to utilize the AMD GPU for object detection use a frigate docker image with `-rocm` suffix, for example `ghcr.io/blakeblackshear/frigate:stable-rocm`.

 ### Docker settings for GPU access
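The replacement text above means AMD users now declare the generic ONNX detector while running the ROCm-enabled image. A minimal compose sketch under that assumption (the service layout is illustrative; only the image tag comes from the docs text above):

```yaml
# Sketch: pin the Frigate service to the ROCm-enabled image so the
# onnx detector can make use of the AMD GPU.
services:
  frigate:
    image: ghcr.io/blakeblackshear/frigate:stable-rocm
```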
@@ -446,29 +446,9 @@ $ docker exec -it frigate /bin/bash -c '(unset HSA_OVERRIDE_GFX_VERSION && /opt/

 ### Supported Models

-There is no default model provided, the following formats are supported:
-
-#### YOLO-NAS
-
-[YOLO-NAS](https://github.com/Deci-AI/super-gradients/blob/master/YOLONAS.md) models are supported, but not included by default. See [the models section](#downloading-yolo-nas-model) for more information on downloading the YOLO-NAS model for use in Frigate.
-
-After placing the downloaded onnx model in your config folder, you can use the following configuration:
-
-```yaml
-detectors:
-  rocm:
-    type: rocm
-
-model:
-  model_type: yolonas
-  width: 320 # <--- should match whatever was set in notebook
-  height: 320 # <--- should match whatever was set in notebook
-  input_pixel_format: bgr
-  path: /config/yolo_nas_s.onnx
-  labelmap_path: /labelmap/coco-80.txt
-```
-
-Note that the labelmap uses a subset of the complete COCO label set that has only 80 objects.
-
+See [ONNX supported models](#supported-models) for supported models, there are some caveats:
+
+- D-FINE models are not supported
+- YOLO-NAS models are known to not run well on integrated GPUs
+
 ## ONNX
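Since the removed example above configured a `rocm` detector for a YOLO-NAS export, a rough equivalent under the new ONNX-based guidance might look like the sketch below. This is an assumption pieced together from the removed config, not text from this commit:

```yaml
# Sketch: the same YOLO-NAS ONNX export served through the onnx detector.
# Mind the caveats noted above (no D-FINE; YOLO-NAS runs poorly on iGPUs).
detectors:
  onnx:
    type: onnx

model:
  model_type: yolonas
  width: 320
  height: 320
  input_pixel_format: bgr
  path: /config/yolo_nas_s.onnx
  labelmap_path: /labelmap/coco-80.txt
```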
@@ -28,11 +28,11 @@ Not all model types are supported by all detectors, so it's important to choose

 ## Supported detector types

-Currently, Frigate+ models support CPU (`cpu`), Google Coral (`edgetpu`), OpenVino (`openvino`), ONNX (`onnx`), and ROCm (`rocm`) detectors.
+Currently, Frigate+ models support CPU (`cpu`), Google Coral (`edgetpu`), OpenVino (`openvino`), and ONNX (`onnx`) detectors.

 :::warning

-Using Frigate+ models with `onnx` and `rocm` is only available with Frigate 0.15 and later.
+Using Frigate+ models with `onnx` is only available with Frigate 0.15 and later.

 :::
@@ -42,7 +42,7 @@ Using Frigate+ models with `onnx` and `rocm` is only available with Frigate 0.15
 | [Coral (all form factors)](/configuration/object_detectors.md#edge-tpu-detector) | `edgetpu` | `mobiledet` |
 | [Intel](/configuration/object_detectors.md#openvino-detector) | `openvino` | `yolonas` |
 | [NVidia GPU](https://deploy-preview-13787--frigate-docs.netlify.app/configuration/object_detectors#onnx)\* | `onnx` | `yolonas` |
-| [AMD ROCm GPU](https://deploy-preview-13787--frigate-docs.netlify.app/configuration/object_detectors#amdrocm-gpu-detector)\* | `rocm` | `yolonas` |
+| [AMD ROCm GPU](https://deploy-preview-13787--frigate-docs.netlify.app/configuration/object_detectors#amdrocm-gpu-detector)\* | `onnx` | `yolonas` |

 _\* Requires Frigate 0.15_
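For Frigate+ users on AMD hardware, the table change above means the purchased model is now loaded through the `onnx` detector. A hedged sketch of that pairing (the `plus://` model ID is a placeholder, and the detector block assumes the `-rocm` image path described earlier):

```yaml
# Sketch: Frigate+ model served by the onnx detector on the -rocm image.
detectors:
  onnx:
    type: onnx

model:
  path: plus://your_model_id  # placeholder, substitute your Frigate+ model ID
```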
@@ -1,170 +0,0 @@
-import ctypes
-import logging
-import os
-import subprocess
-import sys
-
-import cv2
-import numpy as np
-from pydantic import Field
-from typing_extensions import Literal
-
-from frigate.const import MODEL_CACHE_DIR
-from frigate.detectors.detection_api import DetectionApi
-from frigate.detectors.detector_config import (
-    BaseDetectorConfig,
-    ModelTypeEnum,
-    PixelFormatEnum,
-)
-
-logger = logging.getLogger(__name__)
-
-DETECTOR_KEY = "rocm"
-
-
-def detect_gfx_version():
-    return subprocess.getoutput(
-        "unset HSA_OVERRIDE_GFX_VERSION && /opt/rocm/bin/rocminfo | grep gfx |head -1|awk '{print $2}'"
-    )
-
-
-def auto_override_gfx_version():
-    # If environment variable already in place, do not override
-    gfx_version = detect_gfx_version()
-    old_override = os.getenv("HSA_OVERRIDE_GFX_VERSION")
-    if old_override not in (None, ""):
-        logger.warning(
-            f"AMD/ROCm: detected {gfx_version} but HSA_OVERRIDE_GFX_VERSION already present ({old_override}), not overriding!"
-        )
-        return old_override
-    mapping = {
-        "gfx90c": "9.0.0",
-        "gfx1031": "10.3.0",
-        "gfx1103": "11.0.0",
-    }
-    override = mapping.get(gfx_version)
-    if override is not None:
-        logger.warning(
-            f"AMD/ROCm: detected {gfx_version}, overriding HSA_OVERRIDE_GFX_VERSION={override}"
-        )
-        os.putenv("HSA_OVERRIDE_GFX_VERSION", override)
-        return override
-    return ""
-
-
-class ROCmDetectorConfig(BaseDetectorConfig):
-    type: Literal[DETECTOR_KEY]
-    conserve_cpu: bool = Field(
-        default=True,
-        title="Conserve CPU at the expense of latency (and reduced max throughput)",
-    )
-    auto_override_gfx: bool = Field(
-        default=True, title="Automatically detect and override gfx version"
-    )
-
-
-class ROCmDetector(DetectionApi):
-    type_key = DETECTOR_KEY
-
-    def __init__(self, detector_config: ROCmDetectorConfig):
-        if detector_config.auto_override_gfx:
-            auto_override_gfx_version()
-
-        try:
-            sys.path.append("/opt/rocm/lib")
-            import migraphx
-
-            logger.info("AMD/ROCm: loaded migraphx module")
-        except ModuleNotFoundError:
-            logger.error("AMD/ROCm: module loading failed, missing ROCm environment?")
-            raise
-
-        if detector_config.conserve_cpu:
-            logger.info("AMD/ROCm: switching HIP to blocking mode to conserve CPU")
-            ctypes.CDLL("/opt/rocm/lib/libamdhip64.so").hipSetDeviceFlags(4)
-
-        self.h = detector_config.model.height
-        self.w = detector_config.model.width
-        self.rocm_model_type = detector_config.model.model_type
-        self.rocm_model_px = detector_config.model.input_pixel_format
-        path = detector_config.model.path
-
-        mxr_path = os.path.splitext(path)[0] + ".mxr"
-        if path.endswith(".mxr"):
-            logger.info(f"AMD/ROCm: loading parsed model from {mxr_path}")
-            self.model = migraphx.load(mxr_path)
-        elif os.path.exists(mxr_path):
-            logger.info(f"AMD/ROCm: loading parsed model from {mxr_path}")
-            self.model = migraphx.load(mxr_path)
-        else:
-            logger.info(f"AMD/ROCm: loading model from {path}")
-
-            if (
-                path.endswith(".tf")
-                or path.endswith(".tf2")
-                or path.endswith(".tflite")
-            ):
-                # untested
-                self.model = migraphx.parse_tf(path)
-            else:
-                self.model = migraphx.parse_onnx(path)
-
-            logger.info("AMD/ROCm: compiling the model")
-
-            self.model.compile(
-                migraphx.get_target("gpu"), offload_copy=True, fast_math=True
-            )
-
-            logger.info(f"AMD/ROCm: saving parsed model into {mxr_path}")
-
-            os.makedirs(os.path.join(MODEL_CACHE_DIR, "rocm"), exist_ok=True)
-            migraphx.save(self.model, mxr_path)
-
-        logger.info("AMD/ROCm: model loaded")
-
-    def detect_raw(self, tensor_input):
-        model_input_name = self.model.get_parameter_names()[0]
-        model_input_shape = tuple(
-            self.model.get_parameter_shapes()[model_input_name].lens()
-        )
-
-        tensor_input = cv2.dnn.blobFromImage(
-            tensor_input[0],
-            1.0,
-            (model_input_shape[3], model_input_shape[2]),
-            None,
-            swapRB=self.rocm_model_px == PixelFormatEnum.bgr,
-        ).astype(np.uint8)
-
-        detector_result = self.model.run({model_input_name: tensor_input})[0]
-        addr = ctypes.cast(detector_result.data_ptr(), ctypes.POINTER(ctypes.c_float))
-
-        tensor_output = np.ctypeslib.as_array(
-            addr, shape=detector_result.get_shape().lens()
-        )
-
-        if self.rocm_model_type == ModelTypeEnum.yolonas:
-            predictions = tensor_output
-
-            detections = np.zeros((20, 6), np.float32)
-
-            for i, prediction in enumerate(predictions):
-                if i == 20:
-                    break
-                (_, x_min, y_min, x_max, y_max, confidence, class_id) = prediction
-                # when running in GPU mode, empty predictions in the output have class_id of -1
-                if class_id < 0:
-                    break
-                detections[i] = [
-                    class_id,
-                    confidence,
-                    y_min / self.h,
-                    x_min / self.w,
-                    y_max / self.h,
-                    x_max / self.w,
-                ]
-            return detections
-        else:
-            raise Exception(
-                f"{self.rocm_model_type} is currently not supported for rocm. See the docs for more info on supported models."
-            )
@@ -17,7 +17,6 @@ from frigate.detectors.detector_config import (
     InputDTypeEnum,
     InputTensorEnum,
 )
-from frigate.detectors.plugins.rocm import DETECTOR_KEY as ROCM_DETECTOR_KEY
 from frigate.util.builtin import EventsPerSecond, load_labels
 from frigate.util.image import SharedMemoryFrameManager, UntrackedSharedMemory
 from frigate.util.services import listen
@@ -52,13 +51,7 @@ class LocalObjectDetector(ObjectDetector):
         self.labels = load_labels(labels)

         if detector_config:
-            if detector_config.type == ROCM_DETECTOR_KEY:
-                # ROCm requires NHWC as input
-                self.input_transform = None
-            else:
-                self.input_transform = tensor_transform(
-                    detector_config.model.input_tensor
-                )
+            self.input_transform = tensor_transform(detector_config.model.input_tensor)

             self.dtype = detector_config.model.input_dtype
         else: