diff --git a/docs/docs/frigate/hardware.md b/docs/docs/frigate/hardware.md
index 1791da925..f34842f89 100644
--- a/docs/docs/frigate/hardware.md
+++ b/docs/docs/frigate/hardware.md
@@ -52,24 +52,25 @@ The OpenVINO detector type is able to run on:
 
 More information is available [in the detector docs](/configuration/object_detectors#openvino-detector)
 
-Inference speeds vary greatly depending on the CPU, GPU, or VPU used, some known examples are below:
+Inference speeds vary greatly depending on the CPU or GPU used; some known examples of GPU inference times are below:
 
-| Name                 | MobileNetV2 Inference Speed | YOLO-NAS Inference Speed  | Notes                                  |
-| -------------------- | --------------------------- | ------------------------- | -------------------------------------- |
-| Intel Celeron J4105  | ~ 25 ms                     |                           | Can only run one detector instance     |
-| Intel Celeron N3060  | 130 - 150 ms                |                           | Can only run one detector instance     |
-| Intel Celeron N3205U | ~ 120 ms                    |                           | Can only run one detector instance     |
-| Intel Celeron N4020  | 50 - 200 ms                 |                           | Inference speed depends on other loads |
-| Intel i3 6100T       | 15 - 35 ms                  |                           | Can only run one detector instance     |
-| Intel i3 8100        | ~ 15 ms                     |                           |                                        |
-| Intel i5 4590        | ~ 20 ms                     |                           |                                        |
-| Intel i5 6500        | ~ 15 ms                     |                           |                                        |
-| Intel i5 7200u       | 15 - 25 ms                  |                           |                                        |
-| Intel i5 7500        | ~ 15 ms                     |                           |                                        |
-| Intel i5 1135G7      | 10 - 15 ms                  |                           |                                        |
-| Intel i5 12600K      | ~ 15 ms                     | 320: ~ 20 ms 640: ~ 46 ms |                                        |
-| Intel Arc A380       | ~ 6 ms                      | 320: ~ 10 ms              |                                        |
-| Intel Arc A750       | ~ 4 ms                      | 320: ~ 8 ms               |                                        |
+| Name                 | MobileNetV2 Inference Time | YOLO-NAS Inference Time   | Notes                                  |
+| -------------------- | -------------------------- | ------------------------- | -------------------------------------- |
+| Intel Celeron J4105  | ~ 25 ms                    |                           | Can only run one detector instance     |
+| Intel Celeron N3060  | 130 - 150 ms               |                           | Can only run one detector instance     |
+| Intel Celeron N3205U | ~ 120 ms                   |                           | Can only run one detector instance     |
+| Intel Celeron N4020  | 50 - 200 ms                |                           | Inference speed depends on other loads |
+| Intel i3 6100T       | 15 - 35 ms                 |                           | Can only run one detector instance     |
+| Intel i3 8100        | ~ 15 ms                    |                           |                                        |
+| Intel i5 4590        | ~ 20 ms                    |                           |                                        |
+| Intel i5 6500        | ~ 15 ms                    |                           |                                        |
+| Intel i5 7200u       | 15 - 25 ms                 |                           |                                        |
+| Intel i5 7500        | ~ 15 ms                    |                           |                                        |
+| Intel i5 1135G7      | 10 - 15 ms                 |                           |                                        |
+| Intel i3 12000       |                            | 320: ~ 19 ms 640: ~ 54 ms |                                        |
+| Intel i5 12600K      | ~ 15 ms                    | 320: ~ 20 ms 640: ~ 46 ms |                                        |
+| Intel Arc A380       | ~ 6 ms                     | 320: ~ 10 ms              |                                        |
+| Intel Arc A750       | ~ 4 ms                     | 320: ~ 8 ms               |                                        |
 
 ### TensorRT - Nvidia GPU
 
@@ -78,15 +79,15 @@ The TensortRT detector is able to run on x86 hosts that have an Nvidia GPU which
 
 Inference speeds will vary greatly depending on the GPU and the model used. `tiny` variants are faster than the equivalent non-tiny model, some known examples are below:
 
-| Name            | YoloV7 Inference Speed | YOLO-NAS Inference Speed  |
-| --------------- | ---------------------- | ------------------------- |
-| GTX 1060 6GB    | ~ 7 ms                 |                           |
-| GTX 1070        | ~ 6 ms                 |                           |
-| GTX 1660 SUPER  | ~ 4 ms                 |                           |
-| RTX 3050        | 5 - 7 ms               | 320: ~ 10 ms 640: ~ 16 ms |
-| RTX 3070 Mobile | ~ 5 ms                 |                           |
-| Quadro P400 2GB | 20 - 25 ms             |                           |
-| Quadro P2000    | ~ 12 ms                |                           |
+| Name            | YoloV7 Inference Time | YOLO-NAS Inference Time   |
+| --------------- | --------------------- | ------------------------- |
+| GTX 1060 6GB    | ~ 7 ms                |                           |
+| GTX 1070        | ~ 6 ms                |                           |
+| GTX 1660 SUPER  | ~ 4 ms                |                           |
+| RTX 3050        | 5 - 7 ms              | 320: ~ 10 ms 640: ~ 16 ms |
+| RTX 3070 Mobile | ~ 5 ms                |                           |
+| Quadro P400 2GB | 20 - 25 ms            |                           |
+| Quadro P2000    | ~ 12 ms               |                           |
 
 ### AMD GPUs
 