Initial support for rockchip boards (#8382)

* initial support for rockchip boards

* Apply suggestions from code review

apply requested changes

Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>

* requested changes

* rewrite dockerfile

* adjust targets

* Update .github/workflows/ci.yml

Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>

* Update docs/docs/configuration/object_detectors.md

Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>

* Update docs/docs/configuration/object_detectors.md

Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>

* add information to docs

* Update docs/docs/configuration/object_detectors.md

Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>

* format rknn.py

* apply changes from isort and ruff

---------

Co-authored-by: MarcA711 <>
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
This commit is contained in:
Marc Altmann 2023-11-02 13:55:24 +01:00 committed by GitHub
parent a6279a0337
commit 090294e89b
11 changed files with 242 additions and 1 deletions


@ -79,6 +79,16 @@ jobs:
          rpi.tags=${{ steps.setup.outputs.image-name }}-rpi
          *.cache-from=type=registry,ref=${{ steps.setup.outputs.cache-name }}-arm64
          *.cache-to=type=registry,ref=${{ steps.setup.outputs.cache-name }}-arm64,mode=max
      - name: Build and push RockChip build
        uses: docker/bake-action@v3
        with:
          push: true
          platforms: linux/arm64
          targets: rk
          files: docker/rockchip/rk.hcl
          set: |
            rk.tags=ghcr.io/${{ steps.lowercaseRepo.outputs.lowercase }}:${{ github.ref_name }}-${{ env.SHORT_SHA }}-rk
            *.cache-from=type=gha
  jetson_jp4_build:
    runs-on: ubuntu-latest
    name: Jetson Jetpack 4


@ -2,3 +2,5 @@
/docker/tensorrt/ @madsciencetist @NateMeyer
/docker/tensorrt/*arm64* @madsciencetist
/docker/tensorrt/*jetson* @madsciencetist
/docker/rockchip/ @MarcA711


@ -0,0 +1,24 @@
# syntax=docker/dockerfile:1.6

# https://askubuntu.com/questions/972516/debian-frontend-environment-variable
ARG DEBIAN_FRONTEND=noninteractive

FROM wheels as rk-wheels

COPY docker/main/requirements-wheels.txt /requirements-wheels.txt
COPY docker/rockchip/requirements-wheels-rk.txt /requirements-wheels-rk.txt
RUN sed -i "/https/d" /requirements-wheels.txt
RUN pip3 wheel --wheel-dir=/rk-wheels -c /requirements-wheels.txt -r /requirements-wheels-rk.txt

FROM wget as rk-libs

RUN wget -qO librknnrt.so https://github.com/MarcA711/rknpu2/raw/master/runtime/RK3588/Linux/librknn_api/aarch64/librknnrt.so

FROM deps AS rk-deps

ARG TARGETARCH

RUN --mount=type=bind,from=rk-wheels,source=/rk-wheels,target=/deps/rk-wheels \
    pip3 install -U /deps/rk-wheels/*.whl

WORKDIR /opt/frigate/
COPY --from=rootfs / /
COPY --from=rk-libs /rootfs/librknnrt.so /usr/lib/
COPY docker/rockchip/yolov8n-320x320.rknn /models/


@ -0,0 +1,2 @@
hide-warnings == 0.17
rknn-toolkit-lite2 @ https://github.com/MarcA711/rknn-toolkit2/raw/master/rknn_toolkit_lite2/packages/rknn_toolkit_lite2-1.5.2-cp39-cp39-linux_aarch64.whl

docker/rockchip/rk.hcl Normal file

@ -0,0 +1,34 @@
target "wget" {
  dockerfile = "docker/main/Dockerfile"
  platforms = ["linux/arm64"]
  target = "wget"
}

target "wheels" {
  dockerfile = "docker/main/Dockerfile"
  platforms = ["linux/arm64"]
  target = "wheels"
}

target "deps" {
  dockerfile = "docker/main/Dockerfile"
  platforms = ["linux/arm64"]
  target = "deps"
}

target "rootfs" {
  dockerfile = "docker/main/Dockerfile"
  platforms = ["linux/arm64"]
  target = "rootfs"
}

target "rk" {
  dockerfile = "docker/rockchip/Dockerfile"
  contexts = {
    wget = "target:wget",
    wheels = "target:wheels",
    deps = "target:deps",
    rootfs = "target:rootfs"
  }
  platforms = ["linux/arm64"]
}

docker/rockchip/rk.mk Normal file

@ -0,0 +1,10 @@
BOARDS += rk

local-rk: version
	docker buildx bake --load --file=docker/rockchip/rk.hcl --set rk.tags=frigate:latest-rk rk

build-rk: version
	docker buildx bake --file=docker/rockchip/rk.hcl --set rk.tags=$(IMAGE_REPO):${GITHUB_REF_NAME}-$(COMMIT_HASH)-rk rk

push-rk: build-rk
	docker buildx bake --push --file=docker/rockchip/rk.hcl --set rk.tags=$(IMAGE_REPO):${GITHUB_REF_NAME}-$(COMMIT_HASH)-rk rk

Binary file not shown.


@ -5,7 +5,7 @@ title: Object Detectors
# Officially Supported Detectors
Frigate provides the following builtin detector types: `cpu`, `edgetpu`, `openvino`, and `tensorrt`. By default, Frigate will use a single CPU detector. Other detectors may require additional configuration as described below. When using multiple detectors they will run in dedicated processes, but pull from a common queue of detection requests from across all cameras.
Frigate provides the following builtin detector types: `cpu`, `edgetpu`, `openvino`, `tensorrt`, and `rknn`. By default, Frigate will use a single CPU detector. Other detectors may require additional configuration as described below. When using multiple detectors they will run in dedicated processes, but pull from a common queue of detection requests from across all cameras.
## CPU Detector (not recommended)
@ -291,3 +291,38 @@ To verify that the integration is working correctly, start Frigate and observe t
# Community Supported Detectors
## Rockchip RKNN-Toolkit-Lite2
This detector is only available if one of the following Rockchip SoCs is used:
- RK3566/RK3568
- RK3588/RK3588S
- RV1103/RV1106
- RK3562
These SoCs include an NPU that significantly speeds up detection.
### Setup
RKNN support is provided using the `-rk` suffix for the docker image. Moreover, privileged mode must be enabled by adding the `--privileged` flag to your docker run command or `privileged: true` to your `docker-compose.yml` file.
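For example, a minimal `docker-compose.yml` for the Rockchip build might look like the following (the `stable-rk` tag is the Rockchip image; the volume and port mappings are the usual Frigate defaults and should be adjusted to your setup):

```yaml
services:
  frigate:
    container_name: frigate
    # Rockchip-specific image: note the -rk suffix
    image: ghcr.io/blakeblackshear/frigate:stable-rk
    # required so the container can access the NPU
    privileged: true
    restart: unless-stopped
    volumes:
      - ./config:/config
      - ./storage:/media/frigate
    ports:
      - "5000:5000"
```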
### Configuration
This `config.yml` shows all relevant options for configuring the detector, with explanations. All values shown are the defaults (except one). Lines that are required to use the detector are labeled as required; all other lines are optional.
```yaml
detectors: # required
rknn: # required
type: rknn # required
model: # required
# path to .rknn model file
path: /models/yolov8n-320x320.rknn
# width and height of detection frames
width: 320
height: 320
# pixel format of detection frame
# default value is rgb but YOLO models usually use bgr format
input_pixel_format: bgr # required
# shape of detection frame
input_tensor: nhwc
```


@ -95,6 +95,16 @@ Frigate supports all Jetson boards, from the inexpensive Jetson Nano to the powe
Inference speed will vary depending on the YOLO model, Jetson platform, and Jetson nvpmodel (GPU/DLA/EMC clock speed). It is typically 20-40 ms for most models. The DLA is more efficient than the GPU, but not faster, so using the DLA will reduce power consumption but slightly increase inference time.
#### Rockchip SoC
Frigate supports SBCs with the following Rockchip SoCs:
- RK3566/RK3568
- RK3588/RK3588S
- RV1103/RV1106
- RK3562
Using the yolov8n model on an Orange Pi 5 Plus (RK3588 SoC), inference speeds vary between 25 and 40 ms.
## What does Frigate use the CPU for and what does it use a detector for? (ELI5 Version)
This is taken from a [user question on reddit](https://www.reddit.com/r/homeassistant/comments/q8mgau/comment/hgqbxh5/?utm_source=share&utm_medium=web2x&context=3). Modified slightly for clarity.


@ -95,6 +95,7 @@ The following community supported builds are available:
`ghcr.io/blakeblackshear/frigate:stable-tensorrt-jp5` - Frigate build optimized for nvidia Jetson devices running Jetpack 5
`ghcr.io/blakeblackshear/frigate:stable-tensorrt-jp4` - Frigate build optimized for nvidia Jetson devices running Jetpack 4.6
`ghcr.io/blakeblackshear/frigate:stable-rk` - Frigate build for SBCs with Rockchip SoC
:::


@ -0,0 +1,113 @@
import logging
from typing import Literal

import cv2
import cv2.dnn
import numpy as np
from hide_warnings import hide_warnings
from pydantic import Field
from rknnlite.api import RKNNLite

from frigate.detectors.detection_api import DetectionApi
from frigate.detectors.detector_config import BaseDetectorConfig

logger = logging.getLogger(__name__)

DETECTOR_KEY = "rknn"


class RknnDetectorConfig(BaseDetectorConfig):
    type: Literal[DETECTOR_KEY]
    score_thresh: float = Field(
        default=0.5, ge=0, le=1, title="Minimal confidence for detection."
    )
    nms_thresh: float = Field(
        default=0.45, ge=0, le=1, title="IoU threshold for non-maximum suppression."
    )


class Rknn(DetectionApi):
    type_key = DETECTOR_KEY

    def __init__(self, config: RknnDetectorConfig):
        self.height = config.model.height
        self.width = config.model.width
        self.score_thresh = config.score_thresh
        self.nms_thresh = config.nms_thresh
        self.model_path = config.model.path or "/models/yolov8n-320x320.rknn"

        self.rknn = RKNNLite(verbose=False)
        if self.rknn.load_rknn(self.model_path) != 0:
            logger.error("Error initializing rknn model.")
        if self.rknn.init_runtime() != 0:
            logger.error("Error initializing rknn runtime.")

    def __del__(self):
        self.rknn.release()

    def postprocess(self, results):
        """
        Processes yolov8 output.

        Args:
            results: array with shape: (1, 84, n, 1) where n depends on yolov8 model size (for 320x320 model n=2100)

        Returns:
            detections: array with shape (20, 6) with 20 rows of (class, confidence, y_min, x_min, y_max, x_max)
        """
        results = np.transpose(results[0, :, :, 0])  # array shape (2100, 84)
        classes = np.argmax(
            results[:, 4:], axis=1
        )  # array shape (2100,); index of class with max confidence of each row
        scores = np.max(
            results[:, 4:], axis=1
        )  # array shape (2100,); max confidence of each row

        # array shape (2100, 4); bounding box (x_min, y_min, width, height) of each row
        boxes = np.transpose(
            np.vstack(
                (
                    results[:, 0] - 0.5 * results[:, 2],
                    results[:, 1] - 0.5 * results[:, 3],
                    results[:, 2],
                    results[:, 3],
                )
            )
        )

        # indices of rows with confidence > SCORE_THRESH after Non-maximum Suppression (NMS)
        result_boxes = cv2.dnn.NMSBoxes(
            boxes, scores, self.score_thresh, self.nms_thresh, 0.5
        )

        detections = np.zeros((20, 6), np.float32)

        for i in range(len(result_boxes)):
            if i >= 20:
                break

            index = result_boxes[i]
            detections[i] = [
                classes[index],
                scores[index],
                (boxes[index][1]) / self.height,
                (boxes[index][0]) / self.width,
                (boxes[index][1] + boxes[index][3]) / self.height,
                (boxes[index][0] + boxes[index][2]) / self.width,
            ]

        return detections

    @hide_warnings
    def inference(self, tensor_input):
        return self.rknn.inference(inputs=tensor_input)

    def detect_raw(self, tensor_input):
        output = self.inference(
            [
                tensor_input,
            ]
        )
        return self.postprocess(output[0])
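The decoding done by `postprocess` can be exercised standalone with synthetic data. The helper below (a hypothetical `decode_yolov8`, not part of this commit) applies the same cx/cy/w/h-to-corner decoding and normalization, but replaces `cv2.dnn.NMSBoxes` with a plain confidence filter so it runs with NumPy alone:

```python
import numpy as np


def decode_yolov8(raw, width, height, score_thresh=0.5):
    """Decode a (1, 84, n, 1) yolov8 output into up to 20 rows of
    (class, score, y_min, x_min, y_max, x_max), normalized to [0, 1].

    Simplified sketch: keeps the top-scoring rows above score_thresh and
    skips NMS (the real detector uses cv2.dnn.NMSBoxes for that step).
    """
    rows = np.transpose(raw[0, :, :, 0])  # (n, 84): cx, cy, w, h, 80 class scores
    classes = np.argmax(rows[:, 4:], axis=1)
    scores = np.max(rows[:, 4:], axis=1)
    x_min = rows[:, 0] - 0.5 * rows[:, 2]
    y_min = rows[:, 1] - 0.5 * rows[:, 3]

    keep = np.where(scores > score_thresh)[0]
    keep = keep[np.argsort(-scores[keep])][:20]  # best 20, highest score first

    dets = np.zeros((20, 6), np.float32)
    for i, idx in enumerate(keep):
        dets[i] = [
            classes[idx],
            scores[idx],
            y_min[idx] / height,
            x_min[idx] / width,
            (y_min[idx] + rows[idx, 3]) / height,
            (x_min[idx] + rows[idx, 2]) / width,
        ]
    return dets


# One synthetic detection: a 64x64 box centered at (160, 160), class 1, score 0.9
raw = np.zeros((1, 84, 2, 1), np.float32)
raw[0, 0, 0, 0] = 160  # cx
raw[0, 1, 0, 0] = 160  # cy
raw[0, 2, 0, 0] = 64   # w
raw[0, 3, 0, 0] = 64   # h
raw[0, 5, 0, 0] = 0.9  # score of class index 1

dets = decode_yolov8(raw, 320, 320)
print(dets[0])  # class 1, score 0.9, box (0.4, 0.4, 0.6, 0.6)
```

This makes it easy to sanity-check the coordinate math: the 64x64 box centered at (160, 160) on a 320x320 frame normalizes to corners at 0.4 and 0.6.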