* reload the window on 401
* backend apis for auth
* add login page
* re-enable web linter
* fix login page routing
* bypass csrf for internal auth endpoint
* disable healthcheck in devcontainer target
* include login page in vite build
* redirect to login page on 401
* implement config for users and settings
* implement actual JWT secret
* add brute force protection on login
* add support for redirecting from auth failures on api calls
* return location for redirect
* default cookie name should pass regex test
* set hash iterations to current OWASP recommendation
* move users to database instead of config
* config option to reset admin password on startup
* user management UI
* check for deleted user on refresh
* validate username and related fixes
* remove password constraint
* cleanup
* fix user check on refresh
* web fixes
* implement auth via new external port
* use x-forwarded-for to rate limit login attempts by ip
* implement logout and profile
* fixes
* lint fixes
* add support for user passthru from upstream proxies
* add support for specifying a logout url
* add documentation
* Update docs/docs/configuration/authentication.md
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
* Update docs/docs/configuration/authentication.md
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
---------
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
Setting cache-to=compression=zstd causes the resulting user-pulled image
to have zstd-compressed layers, which are not compatible with docker
prior to 23.0. Ubuntu 20.04 still ships with docker 20.10, which yields
`Error processing tar file` when pulling these images.
Renaming the jetpack cache images is my way of clearing the cache of the
prior zstd layers, and it clarifies the convention I used for the other
cache images in which there is one cache per base image/job, not per
target/step. We don't need to delete the non-jetson cache images because
they haven't been rebuilt since zstd was enabled.
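For illustration, a minimal sketch (with hypothetical image and cache refs, not the repo's actual workflow) of what the cache flags look like once `compression=zstd` is dropped from `--cache-to`, leaving the default gzip-compressed layers that older docker engines can still pull:

```python
# Illustrative only: invoke buildx with a registry cache but WITHOUT
# ",compression=zstd", so cached layers keep the default (gzip) compression
# and remain readable by docker < 23.0 (e.g. 20.10 on Ubuntu 20.04).
import subprocess

CACHE_REF = "ghcr.io/example/frigate/cache:jetpack"  # hypothetical cache image name

subprocess.run(
    [
        "docker", "buildx", "build",
        "--cache-from", f"type=registry,ref={CACHE_REF}",
        # note: no compression=zstd here
        "--cache-to", f"type=registry,ref={CACHE_REF},mode=max",
        "--tag", "ghcr.io/example/frigate:dev",
        ".",
    ],
    check=True,
)
```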
* fixup! Split independent builds into parallel jobs
* Combine caches within steps of same job
* Remove Maintain Cache workflow
Now that we're caching to ghcr instead of gha, we don't have to worry
about gha's cache eviction after 7 days/10 GB.
* Factor out common setup steps
* Re-order
* Split independent builds into parallel jobs
* Cache jetson builds
* Use zstd compression
* Switch from gha cache to registry cache
A CI run (four images cached with mode=max) populates the cache with 295
cache entries totalling 23.44 GB. This exceeds gha's 10 GB limit, causing
thrashing. Try with a registry instead.
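Roughly, the switch amounts to changing the buildx cache backend; the refs below are hypothetical placeholders, not the actual cache image names:

```python
# Rough sketch of the cache backend switch; refs are hypothetical placeholders.

# Old: GitHub Actions cache backend, capped at 10 GB with 7-day eviction.
old_cache_args = ["--cache-from", "type=gha", "--cache-to", "type=gha,mode=max"]

# New: registry cache pushed to ghcr, which is not subject to that limit.
cache_ref = "ghcr.io/example/frigate/cache:base"  # hypothetical cache image
new_cache_args = [
    "--cache-from", f"type=registry,ref={cache_ref}",
    "--cache-to", f"type=registry,ref={cache_ref},mode=max",
]

print(" ".join(old_cache_args))
print(" ".join(new_cache_args))
```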
* Enable manual CI runs
* Non-Jetson changes
Required for later commits:
- Allow base image to be overridden (and don't assume its WORKDIR)
- Ensure python3.9
- Map hwaccel decode presets as strings instead of lists (see the sketch after this list)
Not required:
- Fix existing documentation
- Simplify hwaccel scale logic
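A hypothetical before/after of the preset mapping change referenced above; the preset name and ffmpeg arguments are illustrative, not Frigate's actual table:

```python
# Hypothetical sketch, not the real preset table.

# Before: each preset maps to a list of individual ffmpeg arguments.
PRESETS_HW_ACCEL_DECODE_OLD = {
    "preset-vaapi": ["-hwaccel", "vaapi", "-hwaccel_output_format", "vaapi"],
}

# After: each preset maps to a single argument string, which downstream code
# can format or split as needed.
PRESETS_HW_ACCEL_DECODE = {
    "preset-vaapi": "-hwaccel vaapi -hwaccel_output_format vaapi",
}

def decode_args(preset: str) -> list[str]:
    # Split the preset string back into argv-style tokens for ffmpeg.
    return PRESETS_HW_ACCEL_DECODE.get(preset, "").split()
```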
* Prepare for multi-arch tensorrt build
* Add tensorrt images for Jetson boards
* Add Jetson ffmpeg hwaccel
* Update docs
* Add CODEOWNERS
* CI
* Change default model from yolov7-tiny-416 to yolov7-320
In my experience, the tiny models perform markedly worse without being
much faster.
* fixup! Update docs
* Make main frigate build non-rpi-specific and build rpi using base image
* Add boards to sidebar
* Fix docker build
* Fix docs build
* Update pr branch for testing
* remove target from rpi build
* Remove manual build
* Add push build for rpi
* fix typos, improve wording
* Add arm build for rpi
* Cleanup and add default github ref name
* Cleanup docker build file system
* Setup to use docker bake
* Add ci/cd for bake
* Fix path
* Fix devcontainer
* Set targets
* Fix build
* Fix syntax
* Add wheels target
* Move dev container to trt
* Update key and fix rpi local
* Move requirements files and set intermediate targets
* Add back --load
* Update docs for community board development
* Update installation docs to reflect different builds available
* Update docs with official and community supported headers
* Update codeowners docs
* Update docs
* Assemble main and standard builds
* Change order of pushes
* Remove community board after successful build
* Fix rpi bake file names
* Add isort and ruff linters
Both linters are common in modern Python codebases.
The isort tool provides stable import sorting and grouping, as well as pruning
of unused imports.
Ruff is a modern linter that is very fast because it is written in Rust.
It can detect many common issues in a Python codebase.
Removes the pylint dev requirement, since ruff replaces it.
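As a rough illustration (arbitrary module names, not taken from the codebase) of the kind of cleanup these tools enforce:

```python
# Illustrative only.
# Before isort/ruff:
#   import sys, re
#   import json          # unused -> ruff flags F401 and it gets pruned
#   import os
#
# After: one import per line, sorted, unused imports removed.
import os
import re
import sys

# trivial usage so the imports above are not themselves unused
print(os.getcwd(), re.escape(sys.argv[0]))
```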
* treewide: fix issues detected by ruff
* treewide: fix bare except clauses
* .devcontainer: Set up isort
* treewide: optimize imports
* treewide: apply black
* treewide: make regex patterns raw strings
Raw strings keep backslash escape sequences like \d intact for the regex
engine instead of letting Python interpret them as string escapes.
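A small standalone example of why the raw-string form is preferred for regex patterns:

```python
import re

# Without a raw string, Python interprets the backslashes first; "\d" is not a
# recognized string escape, and newer Python versions warn on it before the
# pattern ever reaches the re module.
pattern_plain = "\\d+"  # works, but every backslash must be doubled
pattern_raw = r"\d+"    # raw string: backslashes are passed through as-is

assert re.findall(pattern_plain, "abc 123") == re.findall(pattern_raw, "abc 123") == ["123"]
```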
* Initial WIP dockerfile and scripts to add tensorrt support
* Add tensorRT detector
* WIP attempt to install TensorRT 8.5
* Updates to detector for cuda python library
* TensorRT Cuda library rework WIP
Does not run
* Fixes from rebase to detector factory
* Fix parsing output memory pointer
* Handle TensorRT logs with the python logger
* Use non-async interface and convert input data to float32. Detection runs without error.
* Make TensorRT a separate build from the base Frigate image.
* Add script and documentation for generating TRT Models
* Add support for TensorRT devcontainer
* Add labelmap to trt model script and docs. Cleanup of old scripts.
* Update detect to normalize input tensor using model input type
* Add config for selecting GPU. Fix Async inference. Update documentation.
* Update some CUDA libraries to clean up version warning
* Add CI stage to build TensorRT tag
* Add note in docs for image tag and model support