* Run ffmpeg subprocess & video_properties as async
* Run recording cleanup in the main process
* More cleanup
* Use inter-process communication to write recordings into the DB
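The commits above move recording inserts out of the ffmpeg-handling process and into the main process. A minimal sketch of that inter-process pattern, assuming a multiprocessing queue and a hypothetical peewee `Recordings` model (names are illustrative, not Frigate's actual classes):

```python
import multiprocessing as mp
import queue

def record_segment(recordings_queue, segment: dict) -> None:
    # Recording maintainer process: describe the finished segment and hand it
    # off instead of writing to the database directly.
    recordings_queue.put(segment)

def process_recordings(recordings_queue, stop_event) -> None:
    # Main process: drain the queue and do the actual inserts, so only one
    # process ever owns the SQLite connection.
    while not stop_event.is_set():
        try:
            segment = recordings_queue.get(timeout=1)
        except queue.Empty:
            continue
        # Recordings.create(**segment)  # hypothetical peewee call
        print("insert", segment["path"])

# usage: pass mp.Queue() to the recording maintainer and drain it from the main process
```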
* Formatting
* Non-Jetson changes
Required for later commits:
- Allow base image to be overridden (and don't assume its WORKDIR)
- Ensure python3.9
- Map hwaccel decode presets as strings instead of lists
Not required:
- Fix existing documentation
- Simplify hwaccel scale logic
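As a rough illustration of the preset change listed above, hwaccel decode presets can be kept as single strings keyed by preset name and split into argv form only when the ffmpeg command is assembled (the preset names and flags below are examples, not the full preset table):

```python
# Presets stored as plain strings instead of pre-split lists, then split into
# argv form only when the ffmpeg command is assembled.
HWACCEL_DECODE_PRESETS = {
    "preset-nvidia-h264": "-hwaccel cuda -hwaccel_output_format cuda",
    "preset-vaapi": "-hwaccel vaapi -hwaccel_output_format vaapi",
}

def hwaccel_args(preset: str) -> list[str]:
    # Unknown keys fall back to treating the value as raw user-provided args.
    return HWACCEL_DECODE_PRESETS.get(preset, preset).split()

print(hwaccel_args("preset-nvidia-h264"))
```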
* Prepare for multi-arch tensorrt build
* Add tensorrt images for Jetson boards
* Add Jetson ffmpeg hwaccel
* Update docs
* Add CODEOWNERS
* CI
* Change default model from yolov7-tiny-416 to yolov7-320
In my experience, the tiny models perform markedly worse without being
much faster.
* fixup! Update docs
* Store camera labels in dict and other optimizations
* Add max on timeout so it is at least 60
* Ensure db timeout is at least 60
* Update list once a day to ensure new labels are cleaned up
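A sketch of the label-caching idea from the commits above: keep the per-camera label set in a dict and refresh it at most once a day, so cleanup doesn't repeatedly query the events table (the structure is a guess, not the actual Frigate code):

```python
from datetime import datetime, timedelta

class CameraLabelCache:
    """Caches the labels seen per camera so cleanup doesn't repeatedly query
    the events table; the dict is rebuilt at most once per day."""

    def __init__(self, refresh: timedelta = timedelta(days=1)) -> None:
        self.labels: dict[str, set[str]] = {}
        self.refresh = refresh
        self.last_refresh = datetime.min

    def get(self, camera: str, load_labels) -> set[str]:
        if datetime.now() - self.last_refresh > self.refresh:
            # load_labels() would run something like
            # SELECT DISTINCT camera, label FROM event, grouped into a dict.
            self.labels = load_labels()
            self.last_refresh = datetime.now()
        return self.labels.get(camera, set())

cache = CameraLabelCache()
print(cache.get("front_door", lambda: {"front_door": {"person", "car"}}))
```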
* Formatting
* Insert recordings in bulk instead of individually.
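For the bulk insert above, peewee's `insert_many` with chunking is the usual replacement for one `create()` call per row; a self-contained sketch with a stand-in model:

```python
from peewee import CharField, FloatField, Model, SqliteDatabase, chunked

db = SqliteDatabase(":memory:")

class Recordings(Model):
    # minimal stand-in for Frigate's Recordings model
    camera = CharField()
    path = CharField()
    duration = FloatField()

    class Meta:
        database = db

db.connect()
db.create_tables([Recordings])

segments = [
    {"camera": "front", "path": f"/media/frigate/recordings/seg_{i}.mp4", "duration": 10.0}
    for i in range(100)
]

# One transaction with chunked multi-row INSERTs instead of 100 separate inserts.
with db.atomic():
    for batch in chunked(segments, 50):
        Recordings.insert_many(batch).execute()

print(Recordings.select().count())  # 100
```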
* Fix
* Refactor event and timeline cleanup
* Remove unused
* Send mqtt message when audio is detected
* Fix value
* Add audio topics to mqtt docs and add mqtt headers
* Use existing standard for values
* Update mqtt.md
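A hedged sketch of the audio commits above: publish the existing `ON`/`OFF` payload standard when audio is detected. The topic layout and the `publish` callable are assumptions for illustration:

```python
from typing import Callable

def publish_audio_state(
    publish: Callable[[str, str], None], camera: str, audio_type: str, detected: bool
) -> None:
    # Reuse the ON/OFF payload standard used by the other binary MQTT topics.
    payload = "ON" if detected else "OFF"
    # Topic layout is an assumption for illustration.
    publish(f"{camera}/audio/{audio_type}", payload)

# Example with a stand-in publisher (a real dispatcher would prepend the topic_prefix):
publish_audio_state(lambda topic, payload: print(topic, payload), "front_door", "speech", True)
```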
* Add auto configuration for height, width and fps in detect role
* Add auto-configuration for detect width, height, and fps for input roles with detect in the CameraConfig class in config.py
* Refactor code to retrieve video properties from input stream in CameraConfig class and add optional parameter to retrieve video duration in get_video_properties function
* format
* Set default detect dimensions to 1280x720 and update DetectConfig to use the defaults
* Revert "Set default detect dimensions to 1280x720 and update DetectConfig to use the defaults"
This reverts commit a1aed0414d.
* Add default detect dimensions if autoconfiguration failed and log a warning message
* fix warning message spelling in frigate/config.py
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
* Ensure detect height and width are not None before using them in camera configuration
* docs: initial commit
* rename streamInfo to stream_info
Co-authored-by: Blake Blackshear <blakeb@blakeshome.com>
* Apply suggestions from code review
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
* Update docs
* handle case when get_video_properties returns 0x0 dimensions
* Set detect resolution based on stream properties if available, else apply default values
* Update FrigateConfig to set default values for stream_info if resolution detection fails
* Update camera detection dimensions based on stream information if available
---------
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
Co-authored-by: Blake Blackshear <blakeb@blakeshome.com>
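The chain of commits above amounts to: probe the detect stream, use the reported resolution and frame rate when they look sane, and otherwise fall back to defaults with a warning. A rough ffprobe-based sketch of that flow (the field handling and the 1280x720 / 5 fps defaults follow the commit messages, not necessarily the exact Frigate implementation):

```python
import json
import logging
import subprocess

logger = logging.getLogger(__name__)

DEFAULT_DETECT = {"width": 1280, "height": 720, "fps": 5}

def get_video_properties(url: str) -> dict:
    """Read width/height/fps of the first video stream with ffprobe."""
    cmd = [
        "ffprobe", "-v", "quiet", "-print_format", "json",
        "-show_streams", "-select_streams", "v:0", url,
    ]
    try:
        output = subprocess.run(cmd, capture_output=True, timeout=10).stdout
        stream = json.loads(output)["streams"][0]
        num, _, den = stream.get("avg_frame_rate", "0/1").partition("/")
        fps = int(num) / int(den) if int(den or 1) else 0
        return {
            "width": int(stream.get("width", 0)),
            "height": int(stream.get("height", 0)),
            "fps": fps,
        }
    except Exception:
        # ffprobe missing, stream unreachable, or unparsable output
        return {"width": 0, "height": 0, "fps": 0}

def resolve_detect_config(url: str) -> dict:
    info = get_video_properties(url)
    # 0x0 means probing failed; fall back to defaults and warn.
    if info["width"] and info["height"]:
        return {"width": info["width"], "height": info["height"],
                "fps": info["fps"] or DEFAULT_DETECT["fps"]}
    logger.warning("Could not auto-detect stream resolution, using defaults %s", DEFAULT_DETECT)
    return dict(DEFAULT_DETECT)
```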
* Add limit to event query for fetching latest event with specified label and camera name
* Refactor the label_thumbnail function in frigate/http.py to simplify the event_query logic and improve code readability
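For the two commits above, a peewee-flavoured sketch of fetching only the newest matching event instead of every row for the camera/label pair (the `Event` model below is a minimal stand-in):

```python
from peewee import CharField, FloatField, Model, SqliteDatabase, TextField

db = SqliteDatabase(":memory:")

class Event(Model):
    # minimal stand-in for Frigate's Event model
    camera = CharField()
    label = CharField()
    start_time = FloatField()
    thumbnail = TextField(null=True)

    class Meta:
        database = db

db.connect()
db.create_tables([Event])

def latest_event(camera_name: str, label: str):
    # Cap the query at a single row instead of fetching every matching event.
    return (
        Event.select()
        .where((Event.camera == camera_name) & (Event.label == label))
        .order_by(Event.start_time.desc())
        .limit(1)
        .first()
    )

print(latest_event("front_door", "person"))  # None until events exist
```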
* Support setting sub label scores via API
* Update docs
* Update docs
* Formatting
* Throw error when score is outside expected bounds
* Fix / cleanup
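A sketch of the sub label API change above as a Flask handler: accept an optional score and reject values outside 0..1 with a 400. The route and JSON field names (`subLabel`, `subLabelScore`) are assumptions for illustration:

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/events/<event_id>/sub_label", methods=["POST"])
def set_sub_label(event_id: str):
    body = request.get_json(silent=True) or {}
    sub_label = body.get("subLabel")
    score = body.get("subLabelScore")  # optional score supplied by the caller

    # Reject scores outside the expected 0..1 range instead of storing garbage.
    if score is not None and (
        not isinstance(score, (int, float)) or not 0 <= score <= 1
    ):
        return jsonify({"success": False, "message": "score must be between 0 and 1"}), 400

    # Event.update(sub_label=sub_label, ...).where(Event.id == event_id).execute()
    return jsonify({"success": True, "message": f"set sub label of {event_id} to {sub_label}"}), 200
```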
* much improved motion estimation and tracking
* docs updates
* move ptz specific mp values to ptz_metrics dict
* only check if moving at frame time
* pass full dict instead of individual values
* only check camera status if autotracking enabled
* a more sensible sleep time
* Only check camera status if preparing for a move
* only update tracked obj position when ptz stopped
* both pantilt *and* zoom should be idle
* check more often after moving
* No need to move pan and tilt separately
* Basic functionality
* Threaded motion estimator
* Revert "Threaded motion estimator"
This reverts commit 3171801607.
* Don't detect motion when ptz is moving
* fix motion logic
* fix mypy error
* Add threaded queue for movement for slower ptzs
* Move queues per camera
* Move autotracker start to app.py
* iou value for tracked object
* mqtt callback
* tracked object should be initially motionless
* only draw thicker box if autotracking is enabled
* Init if enabled when initially disabled in config
* Fix init
* Thread names
* Always use motion estimator
* docs
* clarify fov support
* remove size ratio
* use mp event instead of value for ptz status
* update autotrack at half fps
* fix merge conflict
* fix event type for mypy
* clean up
* Clean up
* remove unused code
* merge conflict fix
* docs: update link to object_detectors page
* Update docs/docs/configuration/autotracking.md
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
* clarify wording
* pass actual instances directly
* default return preset
* fix type
* Error message when onvif init fails
* disable autotracking if onvif init fails
* disable autotracking if onvif init fails
* ptz module
* verify required_zones in config
* update util after dev merge
---------
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
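Two of the ideas from the autotracking commits above, sketched together: a multiprocessing `Event` flag so other code can cheaply check whether pan/tilt and zoom are both idle, and a per-camera queue drained by a worker thread so a slow PTZ can't stall tracking. The ONVIF call is stubbed and all names are illustrative:

```python
import multiprocessing as mp
import queue
import threading
import time

class PtzMover(threading.Thread):
    """Drains queued pan/tilt/zoom moves so a slow PTZ can't stall object tracking."""

    def __init__(self, camera, stopped):
        super().__init__(name=f"ptz_mover:{camera}", daemon=True)
        self.moves = queue.Queue()
        self.stopped = stopped  # mp.Event: set => both pan/tilt AND zoom are idle
        self.stopped.set()

    def run(self):
        while True:
            pan, tilt, zoom = self.moves.get()
            self.stopped.clear()              # moving: skip motion/position updates
            self._onvif_move(pan, tilt, zoom)
            self.stopped.set()                # idle again: safe to update tracked objects

    def _onvif_move(self, pan, tilt, zoom):
        time.sleep(0.5)  # stand-in for the real ONVIF ContinuousMove + Stop sequence

ptz_stopped = mp.Event()  # shared flag other processes can check cheaply
mover = PtzMover("front_door", ptz_stopped)
mover.start()
mover.moves.put((0.2, -0.1, 0.0))
time.sleep(1)
print("ptz stopped:", ptz_stopped.is_set())
```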
* Update to latest tensorrt (8.6.1) release
* Build trt libyolo_layer.so in container
* Update tensorrt_models script to convert models from the frigate container
* Fix typo in model script
* Fix paths to yolo lib and models folder
* Add S6 scripts to test and convert specified TensorRT models at startup.
Rearrange tensorrt files into a docker support folder.
* Update TensorRT documentation to reflect the new model conversion process and minimum HW support.
* Fix model_cache path to live in config directory
* Move tensorrt s6 files to the correct directory
* Fix issues in model generation script
* Disable global timeout for s6 services
* Add version folder to tensorrt model_cache path
* Include TensorRT version 8.5.3
* Add numpy requirement prior to removal of np.bool
* This TRT version uses a mixture of CUDA dependencies
* Redirect stdout from noisy model conversion
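A hedged sketch of the conversion-at-startup behaviour described above: only convert models missing from a TensorRT-version-specific folder under the config directory's model cache, and redirect the noisy converter output to a log file. The paths and the converter script name are assumptions:

```python
import os
import subprocess

MODEL_CACHE = "/config/model_cache/tensorrt"  # lives in the config dir, survives restarts
TRT_VERSION = "8.5.3"                         # engines are only valid for the TensorRT they were built with

def ensure_model(model: str) -> str:
    cache_dir = os.path.join(MODEL_CACHE, TRT_VERSION)
    os.makedirs(cache_dir, exist_ok=True)
    engine_path = os.path.join(cache_dir, f"{model}.trt")

    if not os.path.exists(engine_path):
        # Redirect the very chatty conversion output to a log file instead of the console.
        with open(os.path.join(cache_dir, f"{model}.log"), "w") as log:
            subprocess.run(
                ["/usr/local/src/convert_model.sh", model, engine_path],  # hypothetical converter
                stdout=log,
                stderr=subprocess.STDOUT,
                check=True,
            )
    return engine_path

# e.g. ensure_model("yolov7-320")
```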