* Initial WIP Dockerfile and scripts to add TensorRT support
* Add TensorRT detector
* WIP attempt to install TensorRT 8.5
* Updates to detector for cuda python library
* TensorRT CUDA library rework (WIP)
Does not run yet.
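As a rough sketch of what buffer handling through the cuda-python bindings can look like (the device ordinal, tensor shape, and dtype below are placeholders, not the detector's actual values):

```python
import numpy as np
from cuda import cuda

# Illustrative host/device buffer handling with the cuda-python bindings.
# Error codes are unpacked but not checked, to keep the sketch short.
(err,) = cuda.cuInit(0)
err, device = cuda.cuDeviceGet(0)          # placeholder device ordinal
err, ctx = cuda.cuCtxCreate(0, device)

host_input = np.zeros((1, 3, 320, 320), dtype=np.float32)  # placeholder shape
err, device_ptr = cuda.cuMemAlloc(host_input.nbytes)
(err,) = cuda.cuMemcpyHtoD(device_ptr, host_input, host_input.nbytes)

# ... run TensorRT inference against device_ptr here ...

(err,) = cuda.cuMemFree(device_ptr)
(err,) = cuda.cuCtxDestroy(ctx)
```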
* Fixes from rebase to detector factory
* Fix parsing output memory pointer
* Handle TensorRT logs with the python logger
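Bridging TensorRT's runtime messages into Python's `logging` module can be done by subclassing `trt.ILogger`; a minimal sketch (the logger name and severity mapping are illustrative):

```python
import logging
import tensorrt as trt

logger = logging.getLogger(__name__)


class TrtLogger(trt.ILogger):
    """Forward TensorRT runtime messages to the Python logging module."""

    def __init__(self):
        trt.ILogger.__init__(self)

    def log(self, severity, msg):
        logger.log(self._map_severity(severity), msg)

    @staticmethod
    def _map_severity(severity):
        mapping = {
            trt.ILogger.Severity.VERBOSE: logging.DEBUG,
            trt.ILogger.Severity.INFO: logging.INFO,
            trt.ILogger.Severity.WARNING: logging.WARNING,
            trt.ILogger.Severity.ERROR: logging.ERROR,
            trt.ILogger.Severity.INTERNAL_ERROR: logging.CRITICAL,
        }
        return mapping.get(severity, logging.DEBUG)


# An instance can then be passed to trt.Runtime or trt.Builder.
runtime = trt.Runtime(TrtLogger())
```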
* Use non-async interface and convert input data to float32. Detection runs without error.
* Make TensorRT a separate build from the base Frigate image.
* Add script and documentation for generating TRT Models
* Add support for TensorRT devcontainer
* Add labelmap to TRT model script and docs. Clean up old scripts.
* Update detect to normalize input tensor using model input type
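A small sketch of normalizing the input according to the model's declared input type (the helper name and the 0-1 scaling are illustrative, not the detector's exact code):

```python
import numpy as np


def prepare_input(tensor_input, input_dtype):
    # Hypothetical helper: cast the uint8 frame to the model's declared
    # input dtype, scaling into the 0-1 range only for floating-point models.
    converted = tensor_input.astype(input_dtype)
    if np.issubdtype(input_dtype, np.floating):
        converted /= 255.0
    return converted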
* Add config for selecting GPU. Fix async inference. Update documentation.
* Update some CUDA libraries to clean up version warning
* Add CI stage to build TensorRT tag
* Add note in docs for image tag and model support
* Add sub filter for monaco editor
* Don't include files for unused languages
* Move necessary files and clean up build
* Update sub filter for new location
* Still need to include default editor worker
* Fix error when model already exists
* Log all services to RAM
* Gracefully handle shutdown
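One generic way to express graceful shutdown handling in Python, offered only as a sketch of the idea (trap the termination signal, clean up, then exit) rather than a description of this specific change:

```python
import signal
import sys


def on_exit(signum, frame):
    # Flush logs / stop worker processes here before exiting.
    sys.exit(0)


signal.signal(signal.SIGTERM, on_exit)
signal.signal(signal.SIGINT, on_exit)
```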
* Add logs route
* Remove tail
* Return logs for services
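A minimal sketch of such a route using Flask; the blueprint name and the log file paths are assumptions based on the "Log all services to RAM" change, not the exact implementation:

```python
from flask import Blueprint, jsonify, make_response

bp = Blueprint("logs", __name__)

# Assumed locations for RAM-backed service logs; illustrative only.
LOG_FILES = {
    "frigate": "/dev/shm/logs/frigate/current",
    "nginx": "/dev/shm/logs/nginx/current",
}


@bp.route("/logs/<service>")
def logs(service):
    path = LOG_FILES.get(service)
    if path is None:
        return make_response(
            jsonify({"message": f"{service} is not a valid service"}), 404
        )
    try:
        with open(path, "r") as f:
            return f.read(), 200
    except FileNotFoundError:
        return make_response(jsonify({"message": "Could not find log file"}), 500)
```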
* Display log chooser with copy button
* show logs for specific services
* Clean up settings logs
* Add copy functionality to logs
* Fix merge
* Set archive count to 0
Co-authored-by: Felipe Santos <felipecassiors@gmail.com>
* Log all services to RAM
* Fix tests workdir
* Rotate logs when they reach 10MB and keep only 1 archive
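The same policy expressed with Python's standard logging handlers, purely as an illustration (the container may implement rotation differently, and the log path is a placeholder):

```python
import logging
from logging.handlers import RotatingFileHandler

# Rotate at 10 MB and keep a single archived file.
handler = RotatingFileHandler(
    "/dev/shm/logs/frigate/current",  # placeholder path
    maxBytes=10 * 1024 * 1024,
    backupCount=1,
)
logging.getLogger().addHandler(handler)
```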
* Gracefully handle shutdown
* Add note about gracetime not working
* Fix logs permission, create fake logs for devcontainer
* Remove empty line
* Update docker/rootfs/etc/services.d/frigate/run
* Fix fake Frigate shebang
* Initial work for adding OpenVino detector. Not functional
* Load model and submit for inference.
Successfully load model and initialize OpenVino engine with either CPU or GPU as device.
Does not parse results for objects.
* Detection working with ssdlite_mobilenetv2 FP16 model
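A minimal sketch of loading a model and running one inference through the OpenVINO runtime; the model path, device, and input shape are placeholders, and the real detector derives them from its configuration:

```python
import numpy as np
from openvino.runtime import Core

core = Core()
model = core.read_model("/openvino-model/ssdlite_mobilenet_v2.xml")  # placeholder path
compiled = core.compile_model(model, device_name="CPU")  # or "GPU"

# Listing the compiled model's outputs helps when the output layout is unclear.
for output in compiled.outputs:
    print(output.get_any_name(), output.get_shape())

infer_request = compiled.create_infer_request()
frame = np.zeros((1, 300, 300, 3), dtype=np.float32)  # placeholder shape/dtype
infer_request.infer([frame])
detections = infer_request.get_output_tensor(0).data
```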
* Add OpenVino support and model to Docker image
* Add documentation for OpenVino detector configuration
* Adds support for ARM32/ARM64 and the Myriad X hardware
- Use custom-built openvino wheel for all platforms
- Add libusb build without udev for NCS2 support
* Add documentation around Intel CPU requirements and NCS2 setup
* Print all available output tensors
* Update documentation for config parameters
* fix makefile variable
* add branch for testing
* fix arm32 build
* use amd64 for web build
* install wheels in a separate layer for better parallel builds
* try build-push-action
* try using gh context
* use short sha
* cleanup
* Make it easier to run the devcontainer
* Some more improvements
* Tidy up few other things
* Name stages better
* Fix CI
* Set up everything with one click
* Allow to set IMAGE_OWNER
* Change IMAGE_OWNER to IMAGE_REPO
* Fix CI with IMAGE_REPO
* Fix nodejs installation
* Test devcontainer build as part of CI
* Build devcontainer in its own job
* Fix devcontainer cli installation
* Fix devcontainer build
* Fix devcontainer build in CI again
* Enable buildkit only
* Increase coverage of devcontainer test
* Fix devcontainer start in CI
* Ensure latest version of docker compose is used
* Fix install compose action
* Disable CI steps that do not work until we fix them