From c74c9ff16161a8539e1cc41b76a5bdea953ce71b Mon Sep 17 00:00:00 2001
From: Nicolas Mowen
Date: Sat, 11 Feb 2023 18:59:36 -0700
Subject: [PATCH] Add nvidia detector inference times from survey (#5456)

* Add nvidia detector inference times from survey

* Fix typo

* Update hardware.md
---
 docs/docs/frigate/hardware.md | 13 +++++++++----
 1 file changed, 9 insertions(+), 4 deletions(-)

diff --git a/docs/docs/frigate/hardware.md b/docs/docs/frigate/hardware.md
index 95101c70d..a00ae0472 100644
--- a/docs/docs/frigate/hardware.md
+++ b/docs/docs/frigate/hardware.md
@@ -80,10 +80,15 @@ The TensortRT detector is able to run on x86 hosts that have an Nvidia GPU which

 Inference speeds will vary greatly depending on the GPU and the model used. `tiny` variants are faster than the equivalent non-tiny model, some known examples are below:

-| Name     | Model           | Inference Speed |
-| -------- | --------------- | --------------- |
-| RTX 3050 | yolov4-tiny-416 | ~ 5 ms          |
-| RTX 3050 | yolov7-tiny-416 | ~ 6 ms          |
+| Name            | Inference Speed |
+| --------------- | --------------- |
+| GTX 1060 6GB    | ~ 7 ms          |
+| GTX 1070        | ~ 6 ms          |
+| GTX 1660 SUPER  | ~ 4 ms          |
+| RTX 3050        | 5 - 7 ms        |
+| RTX 3070 Mobile | ~ 5 ms          |
+| Quadro P400 2GB | 20 - 25 ms      |
+| Quadro P2000    | ~ 12 ms         |

 ## What does Frigate use the CPU for and what does it use a detector for? (ELI5 Version)