Update openvino hardware inference times (#16368)

This commit is contained in:
Nicolas Mowen 2025-02-07 09:52:21 -07:00 committed by GitHub
parent 4b429e440b
commit d6a2965cb2


@@ -52,10 +52,10 @@ The OpenVINO detector type is able to run on:
More information is available [in the detector docs](/configuration/object_detectors#openvino-detector)

Inference speeds vary greatly depending on the CPU or GPU used, some known examples of GPU inference times are below:

| Name                 | MobileNetV2 Inference Time | YOLO-NAS Inference Time   | Notes                                  |
| -------------------- | -------------------------- | ------------------------- | -------------------------------------- |
| Intel Celeron J4105  | ~ 25 ms                    |                           | Can only run one detector instance     |
| Intel Celeron N3060  | 130 - 150 ms               |                           | Can only run one detector instance     |
| Intel Celeron N3205U | ~ 120 ms                   |                           | Can only run one detector instance     |
@@ -67,6 +67,7 @@ Inference speeds vary greatly depending on the CPU, GPU, or VPU used, some known
| Intel i5 7200u       | 15 - 25 ms                 |                           |                                        |
| Intel i5 7500        | ~ 15 ms                    |                           |                                        |
| Intel i5 1135G7      | 10 - 15 ms                 |                           |                                        |
| Intel i3 12000       |                            | 320: ~ 19 ms 640: ~ 54 ms |                                        |
| Intel i5 12600K      | ~ 15 ms                    | 320: ~ 20 ms 640: ~ 46 ms |                                        |
| Intel Arc A380       | ~ 6 ms                     | 320: ~ 10 ms              |                                        |
| Intel Arc A750       | ~ 4 ms                     | 320: ~ 8 ms               |                                        |
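
To try these numbers on your own hardware, a minimal sketch of an OpenVINO detector configuration for Frigate is below; the detector name `ov` is arbitrary, and the `device` value depends on your hardware (see the detector docs linked above for the full set of options):

```yaml
detectors:
  ov:
    type: openvino
    device: GPU  # run inference on the integrated/discrete GPU; CPU is also possible
```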
@@ -78,8 +79,8 @@ The TensorRT detector is able to run on x86 hosts that have an Nvidia GPU which
Inference speeds will vary greatly depending on the GPU and the model used.
`tiny` variants are faster than the equivalent non-tiny model, some known examples are below:

| Name            | YoloV7 Inference Time | YOLO-NAS Inference Time   |
| --------------- | --------------------- | ------------------------- |
| GTX 1060 6GB    | ~ 7 ms                |                           |
| GTX 1070        | ~ 6 ms                |                           |
| GTX 1660 SUPER  | ~ 4 ms                |                           |
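
A comparable sketch for a TensorRT detector follows; the detector name `tensorrt` and the single-GPU `device: 0` index are illustrative, and the model itself must be configured separately as described in the detector docs:

```yaml
detectors:
  tensorrt:
    type: tensorrt
    device: 0  # index of the Nvidia GPU to use for inference
```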