* Revert numpy upgrade
* Upgrade arm64 onnx version to match amd64
* Increase CODEOWNERS granularity
Not sure if it has an effect since I don't have repository write access
Setting cache-to=compression=zstd causes the resulting user-pulled image
to have zstd-compressed layers, which are not compatible with docker
prior to 23.0. Ubuntu 20.04 still ships with docker 20.10, which yields
`Error processing tar file` when pulling these images.
Renaming the jetpack cache images is my way of clearing the cache of the
prior zstd layers, and it clarifies the convention I used for the other
cache images in which there is one cache per base image/job, not per
target/step. We don't need to delete the non-jetson cache images because
they haven't been rebuilt since zstd was enabled.
* fixup! Split independent builds into parallel jobs
* Combine caches within steps of same job
* Remove Maintain Cache workflow
Now that we're caching to ghcr instead of gha, we don't have to worry
about gha's cache eviction after 7 days/10 GB.
* Factor out common setup steps
* Re-order
* Split independent builds into parallel jobs
* Cache jetson builds
* Use zstd compression
* Switch from gha cache to registry cache
A CI run (four images cached with mode=max) populates the cache with 295
cache entries totalling 23.44 GB. This exceeds gha's 10 GB limit, causing
thrashing. Try with a registry instead.
* Enable manual CI runs
* Run ffmpeg subprocess & video_properties as async
* Run recording cleanup in the main process
* More cleanup
* Use inter-process communication to write recordings into the DB
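A minimal sketch of the shape of that change, assuming a multiprocessing queue as the channel (the names below are illustrative, not Frigate's actual internals): per-camera processes report finished segments, and only the main process writes them to the database.

```python
# Illustrative sketch only; the real Frigate internals differ, but the shape is the same:
# child processes report finished segments over a queue, and only the main process
# touches the database.
import multiprocessing as mp


def recording_worker(recordings_queue, camera):
    # Per-camera process: report each finished segment instead of writing to the DB itself.
    segment = {"camera": camera, "path": "/media/clip.mp4", "duration": 10.0}
    recordings_queue.put(segment)


def write_segment_to_db(segment):
    # Hypothetical DB helper; in the main process this would be a real insert.
    print(f"INSERT recording {segment['path']} ({segment['duration']}s)")


if __name__ == "__main__":
    q = mp.Queue()
    worker = mp.Process(target=recording_worker, args=(q, "front_door"))
    worker.start()
    write_segment_to_db(q.get())  # main process is the single DB writer
    worker.join()
```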
* Formatting
* Non-Jetson changes
Required for later commits:
- Allow base image to be overridden (and don't assume its WORKDIR)
- Ensure python3.9
- Map hwaccel decode presets as strings instead of lists (see the sketch after this list)
Not required:
- Fix existing documentation
- Simplify hwaccel scale logic
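As a rough illustration of the preset change referenced above (the preset name and ffmpeg args are made up for this sketch, not Frigate's actual tables): a string preset drops straight into an ffmpeg command template, while a list has to be joined or spliced first.

```python
# Hypothetical presets, for illustration only.
# Before: each preset was a list of ffmpeg args.
HWACCEL_DECODE_LISTS = {
    "preset-nvidia-h264": ["-hwaccel", "cuda", "-hwaccel_output_format", "cuda"],
}

# After: each preset is a single string, easy to substitute into a command template.
HWACCEL_DECODE_STRINGS = {
    "preset-nvidia-h264": "-hwaccel cuda -hwaccel_output_format cuda",
}

ffmpeg_cmd = f"ffmpeg {HWACCEL_DECODE_STRINGS['preset-nvidia-h264']} -i rtsp://camera ..."
print(ffmpeg_cmd)
```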
* Prepare for multi-arch tensorrt build
* Add tensorrt images for Jetson boards
* Add Jetson ffmpeg hwaccel
* Update docs
* Add CODEOWNERS
* CI
* Change default model from yolov7-tiny-416 to yolov7-320
In my experience the tiny models perform markedly worse without being
much faster
* fixup! Update docs
* Make main frigate build non-rpi-specific and build rpi using base image
* Add boards to sidebar
* Fix docker build
* Fix docs build
* Update pr branch for testing
* remove target from rpi build
* Remove manual build
* Add push build for rpi
* fix typos, improve wording
* Add arm build for rpi
* Cleanup and add default github ref name
* Cleanup docker build file system
* Setup to use docker bake
* Add ci/cd for bake
* Fix path
* Fix devcontainer
* Set targets
* Fix build
* Fix syntax
* Add wheels target
* Move dev container to trt
* Update key and fix rpi local
* Move requirements files and set intermediate targets
* Add back --load
* Update docs for community board development
* Update installation docs to reflect different builds available
* Update docs with official and community supported headers
* Update codeowners docs
* Update docs
* Assemble main and standard builds
* Change order of pushes
* Remove community board after successful build
* Fix rpi bake file names
* Store camera labels in dict and other optimizations
* Add max on timeout so it is at least 60
* Ensure db timeout is at least 60
* Update list once a day to ensure new labels are cleaned up
* Formatting
* Insert recordings in bulk instead of individually.
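A minimal sketch of what the bulk insert looks like with peewee (the ORM Frigate uses); the model and field names here are illustrative:

```python
# Sketch only; model and field names are illustrative, not the actual Frigate schema.
from peewee import CharField, FloatField, Model, SqliteDatabase

db = SqliteDatabase(":memory:")


class Recordings(Model):
    camera = CharField()
    path = CharField()
    start_time = FloatField()
    duration = FloatField()

    class Meta:
        database = db


db.create_tables([Recordings])

segments = [
    {"camera": "front_door", "path": "/media/a.mp4", "start_time": 1.0, "duration": 10.0},
    {"camera": "front_door", "path": "/media/b.mp4", "start_time": 11.0, "duration": 10.0},
]

# One INSERT carrying many rows instead of one INSERT per segment.
with db.atomic():
    Recordings.insert_many(segments).execute()
```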
* Fix
* Refactor event and timeline cleanup
* Remove unused
* Send mqtt message when audio is detected
* Fix value
* Add audio topics to mqtt docs and add mqtt headers
* Use existing standard for values
* Update mqtt.md
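For illustration, the publishing convention might be sketched like this; the topic path is an assumption (mqtt.md has the authoritative names), while the ON/OFF payloads reuse the existing standard from the other binary topics.

```python
# Sketch of the payload convention only; the topic layout is assumed, not taken from mqtt.md.
def publish_audio_state(publish, camera, audio_type, detected):
    # Reuse the existing binary convention: payloads are "ON"/"OFF",
    # matching the motion and detect topics.
    topic = f"{camera}/audio/{audio_type}"  # e.g. "front_door/audio/speech" (hypothetical)
    publish(topic, "ON" if detected else "OFF", retain=False)


if __name__ == "__main__":
    # Stand-in publisher that just prints; a real one would hand off to the MQTT client.
    publish_audio_state(lambda topic, payload, retain: print(topic, payload),
                        "front_door", "speech", True)
```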
* Add auto-configuration for height, width and fps in detect role
* Add auto-configuration for detect width, height, and fps for input roles with detect in the CameraConfig class in config.py
* Refactor code to retrieve video properties from input stream in CameraConfig class and add optional parameter to retrieve video duration in get_video_properties function
* format
* Set default detect dimensions to 1280x720 and update DetectConfig to use the defaults
* Revert "Set default detect dimensions to 1280x720 and update DetectConfig to use the defaults"
This reverts commit a1aed0414d.
* Add default detect dimensions if autoconfiguration failed and log a warning message
* fix warning message spelling in frigate/config.py
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
* Ensure detect height and width are not None before using them in camera configuration
* docs: initial commit
* rename streamInfo to stream_info
Co-authored-by: Blake Blackshear <blakeb@blakeshome.com>
* Apply suggestions from code review
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
* Update docs
* handle case when get_video_properties returns 0x0 dimensions
* Set detect resolution based on stream properties if available, else apply default values
* Update FrigateConfig to set default values for stream_info if resolution detection fails
* Update camera detection dimensions based on stream information if available
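Roughly, the auto-configuration boils down to probing the stream and falling back to safe defaults with a warning when the probe fails or reports 0x0. A simplified sketch of that flow (function names and default values are illustrative, not the exact config.py code):

```python
# Simplified sketch of the idea, not the actual config.py code; the defaults and
# function names are illustrative.
import json
import subprocess
from typing import Optional

DEFAULT_DETECT = {"width": 1280, "height": 720, "fps": 5}


def get_video_properties(url):
    # Probe the first video stream with ffprobe; report 0x0 if anything goes wrong.
    cmd = [
        "ffprobe", "-v", "quiet", "-print_format", "json",
        "-show_streams", "-select_streams", "v:0", url,
    ]
    try:
        out = subprocess.run(cmd, capture_output=True, timeout=10, check=True)
        stream = json.loads(out.stdout)["streams"][0]
        return {"width": int(stream.get("width", 0)), "height": int(stream.get("height", 0))}
    except Exception:
        return {"width": 0, "height": 0}


def resolve_detect_dimensions(url, configured: Optional[dict] = None):
    # Explicit config wins; otherwise use the probed values, and fall back to
    # defaults with a warning when the probe fails or reports 0x0.
    if configured:
        return configured
    props = get_video_properties(url)
    if props["width"] and props["height"]:
        return {**DEFAULT_DETECT, **props}
    print("WARNING: could not determine detect resolution, using defaults")
    return DEFAULT_DETECT
```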
---------
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
Co-authored-by: Blake Blackshear <blakeb@blakeshome.com>
* Add limit to event query for fetching latest event with specified label and camera name
* Refactor the label_thumbnail function in frigate/http.py to simplify the event_query logic and improve code readability
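For context, a hedged sketch of what the limited query looks like with peewee; field names are illustrative rather than the exact frigate/http.py code:

```python
# Sketch of the "latest event for a given camera and label" query; fields are illustrative.
from peewee import CharField, FloatField, Model, SqliteDatabase

db = SqliteDatabase(":memory:")


class Event(Model):
    camera = CharField()
    label = CharField()
    start_time = FloatField()

    class Meta:
        database = db


db.create_tables([Event])


def latest_event(camera, label):
    query = (
        Event.select()
        .where((Event.camera == camera) & (Event.label == label))
        .order_by(Event.start_time.desc())
        .limit(1)  # stop at the newest match instead of scanning every row
    )
    return query.first()
```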
* Support setting sub label scores via API
* Update docs
* Update docs
* Formatting
* Throw error when score is outside expected bounds
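A hedged sketch of that validation (the route and payload keys are assumptions, not necessarily Frigate's exact API): scores are confidences, so anything outside [0, 1] is rejected before it is stored.

```python
# Sketch only; the exact route and payload keys may differ from Frigate's real API.
from flask import Flask, jsonify, request

app = Flask(__name__)


@app.route("/api/events/<event_id>/sub_label", methods=["POST"])
def set_sub_label(event_id):
    body = request.get_json(silent=True) or {}
    score = body.get("subLabelScore")

    # Reject scores outside the expected [0, 1] bounds before persisting anything.
    try:
        valid = score is None or 0 <= float(score) <= 1
    except (TypeError, ValueError):
        valid = False
    if not valid:
        return jsonify({"success": False, "message": "score must be between 0 and 1"}), 400

    # ... look up the event and store body.get("subLabel") and the score here ...
    return jsonify({"success": True, "message": f"updated {event_id}"}), 200


if __name__ == "__main__":
    app.run()
```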
* Fix / cleanup
* Refactor event label and sub-label score display in Events.jsx to include the event label and sub-label percentage
* Revert "Refactor event label and sub-label score display in Events.jsx to include the event label and sub-label percentage"
This reverts commit 87c23adf1e.
* Refactor event label and sub label score display in Events component
* much improved motion estimation and tracking
* docs updates
* move ptz-specific mp values to ptz_metrics dict
* only check if moving at frame time
* pass full dict instead of individual values
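To give a sense of the ptz_metrics change (the keys below are hypothetical, not the actual ones): the multiprocessing values that used to be passed around individually are grouped into a single dict and shared as one argument.

```python
# Hypothetical sketch; the real ptz_metrics keys in Frigate may differ.
import multiprocessing as mp


def make_ptz_metrics():
    # Group the shared values so callers pass one dict instead of many arguments.
    return {
        "ptz_moving": mp.Value("i", 0),       # is the camera currently moving?
        "ptz_start_time": mp.Value("d", 0.0),
        "ptz_stop_time": mp.Value("d", 0.0),
    }


def process_frame(frame_time, ptz_metrics):
    # Check whether the camera was moving at this frame's timestamp before
    # running motion estimation / tracking.
    if ptz_metrics["ptz_moving"].value:
        return  # skip estimation while the camera itself is moving
    # ... motion estimation and tracking would run here ...


if __name__ == "__main__":
    metrics = make_ptz_metrics()
    process_frame(0.0, metrics)
```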