* Update to latest tensorrt (8.6.1) release
* Build trt libyolo_layer.so in container
* Update tensorrt_models script to convert models from the frigate container
* Fix typo in model script
* Fix paths to yolo lib and models folder
* Add S6 scripts to test and convert specified TensorRT models at startup.
Rearrange tensorrt files into a docker support folder.
* Update TensorRT documentation to reflect the new model conversion process and minimum HW support.
* Fix model_cache path to live in config directory
* Move tensorrt s6 files to the correct directory
* Fix issues in model generation script
* Disable global timeout for s6 services
* Add version folder to tensorrt model_cache path
* Include TensorRT version 8.5.3
* Add numpy requirement prior to removal of np.bool
* This TRT version uses a mixture of CUDA dependencies
* Redirect stdout from noisy model conversion
* Add functionality to update YAML config file with PUT request in HTTP endpoint
* Refactor copying of text to clipboard with Clipboard API and fallback to document.execCommand('copy') in CameraMap.jsx file
* Update YAML file from URL query parameters in frigate/http.py and add functionality to save motion masks, zones, and object masks in CameraMap.jsx
* formatting
* fix merge mistakes
* Refactor camera zone coordinate saving to use single query parameter per zone in CameraMap.jsx
* remove unnecessary print statements in util.py
* Refactor update_yaml_file function to separate the logic for updating YAML data into a new function update_yaml().
* fix merge errors
* Refactor code to improve error handling and add dependencies to useEffect hooks
* reduce contention on frame_queue
don't check if the queue is full, just attempt to add the frame
in a non-blocking manner, and then if it fails, skip it
* don't check if the frame queue is empty, just try and get from it
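A minimal sketch of the non-blocking put/get pattern these commits describe; the helper names and `frame_queue` are illustrative, not the actual frigate/video.py code:

```python
import queue

def enqueue_frame(frame_queue, frame):
    """Try to add a frame without blocking; if the queue is full, drop the frame."""
    try:
        frame_queue.put_nowait(frame)
        return True
    except queue.Full:
        # Queue is full: skip this frame rather than waiting for space.
        return False

def dequeue_frame(frame_queue):
    """Try to get a frame without first checking whether the queue is empty."""
    try:
        return frame_queue.get_nowait()
    except queue.Empty:
        return None
```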
* Update frigate/video.py
Co-authored-by: Blake Blackshear <blakeb@blakeshome.com>
---------
Co-authored-by: Blake Blackshear <blakeb@blakeshome.com>
* Refactored queues to use faster_fifo instead of mp.Queue
* Refactored LimitedQueue to include a counter for the number of items in the queue and updated put and get methods to use the counter
* Refactor app.py and util.py to use a custom Queue implementation called LQueue instead of the existing Queue
* Refactor put and get methods in LimitedQueue to handle queue size and blocking behavior more efficiently
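A rough sketch of a size-tracking wrapper along the lines of the `LimitedQueue` described above. The counter, limit semantics, and method names are illustrative; a standard `multiprocessing.Queue` stands in for `faster_fifo`'s queue, which does not track its own length:

```python
import multiprocessing as mp
import queue

class LimitedQueue:
    """Queue wrapper that tracks its item count in a shared counter."""

    def __init__(self, maxsize):
        self.maxsize = maxsize
        self.size = mp.Value("i", 0)   # shared item counter across processes
        self.queue = mp.Queue()        # stand-in for faster_fifo.Queue

    def put(self, item, block=True, timeout=None):
        # Enforce the size limit via the counter; this sketch rejects
        # immediately when full rather than blocking until space frees up.
        with self.size.get_lock():
            if self.size.value >= self.maxsize:
                raise queue.Full
            self.size.value += 1
        self.queue.put(item, block=block, timeout=timeout)

    def get(self, block=True, timeout=None):
        item = self.queue.get(block=block, timeout=timeout)
        with self.size.get_lock():
            self.size.value -= 1
        return item

    def qsize(self):
        return self.size.value
```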
* code format
* remove code from other branch (bad merge)
* Check ffmpeg version instead of checking for presence of BTBN_PATH
* Query ffmpeg version in s6 run script instead of subprocessing in every import
* Define LIBAVFORMAT_VERSION_MAJOR in devcontainer too
* Formatting
* Default ffmpeg version to current btbn version so unit tests pass
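A sketch of the general idea behind these commits: read the libavformat major version from an environment variable exported once by the s6 run script, falling back to querying `ffmpeg -version` directly. The default value and exact parsing here are illustrative:

```python
import os
import re
import subprocess

# Default to a version matching the current BtbN ffmpeg build so unit tests
# (which may have no ffmpeg binary) still pass; 59 is an illustrative value.
DEFAULT_LIBAVFORMAT_VERSION_MAJOR = 59

def get_libavformat_version_major():
    # Prefer the value set once by the s6 run script so every import does not
    # need to spawn a subprocess.
    env_value = os.environ.get("LIBAVFORMAT_VERSION_MAJOR")
    if env_value:
        return int(env_value)

    try:
        output = subprocess.run(
            ["ffmpeg", "-version"], capture_output=True, text=True, check=True
        ).stdout
        match = re.search(r"libavformat\s+(\d+)\.", output)
        if match:
            return int(match.group(1))
    except (OSError, subprocess.CalledProcessError):
        pass

    return DEFAULT_LIBAVFORMAT_VERSION_MAJOR
```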
* Reduce framerate before downscaling
It is cheaper to drop frames and downscale those that remain than it is
to downscale all frames and then drop some of them. This is achieved
with the filter chain `-vf fps=FPS,scale=W:H`, which was perhaps the
original intention. The plain `-r` and `-s` flags do not execute in
order, though: they each put themselves at the *end* of the filter chain,
so `-r FPS -s WxH` actually applies the scale filter first and then the
rate filter.
This fix can halve the CPU used by the detect ffmpeg process.
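Since Frigate builds its ffmpeg commands as argument lists, the difference can be illustrated like this (the fps and resolution values are examples only):

```python
# Cheaper: drop frames first, then downscale only the frames that remain.
detect_filter_args = ["-vf", "fps=5,scale=1280:720"]

# More expensive: `-r`/`-s` each append themselves to the end of the filter
# chain, so every frame gets scaled before any are dropped.
legacy_args = ["-r", "5", "-s", "1280x720"]
```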
* Bring back hard rate limits
* Force birdseye cameras into standard aspect ratios
* Clarify comment
* Formatting
* Actually use the calculated aspect ratio when building the layout
* Fix Y aspect
* Force canvas into known aspect ratio as well
* Save canvas size and don't recalculate
* Cache coefficients that are used for different size layouts
* Further optimize calculations to not be done multiple times
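A simplified sketch of forcing a camera into the nearest standard aspect ratio, roughly along the lines of the birdseye commits above; the candidate ratio set and function name are illustrative:

```python
# Candidate "standard" aspect ratios (width, height); illustrative set.
STANDARD_ASPECT_RATIOS = [(16, 9), (9, 16), (4, 3), (3, 4), (1, 1), (32, 9)]

def get_standard_aspect_ratio(width, height):
    """Return the known aspect ratio closest to the camera's native one."""
    return min(
        STANDARD_ASPECT_RATIOS,
        key=lambda r: abs((width / height) - (r[0] / r[1])),
    )
```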
* Add audio process PID to the list of processes and log the start of the audio process
* Update audio process PID key in processes dictionary to "audioDetector" instead of "audio".
* Scale layout up to max size after it has been calculated
* Limit portrait cameras to taking up 2 rows
* Fix bug
* Fix birdseye not removing cameras once objects are no longer visible
* Fix lint
* Initial audio classification model implementation
* fix mypy
* Keep audio labelmap local
* Cleanup
* Start adding config for audio
* Add the detector
* Add audio detection process keypoints
* Build out base config
* Load labelmap correctly
* Fix config bugs
* Start audio process
* Fix startup issues
* Try to cleanup restarting
* Add ffmpeg input args
* Get audio detection working
* Save event to db
* End events if not heard for 30 seconds
* Use not heard config
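A simplified sketch of the "not heard" expiry described above: each detected audio label keeps a last-heard timestamp, and its event is ended once the configured window (30 seconds per the commit) passes without the label being detected again. Class and method names are illustrative:

```python
import datetime

MAX_NOT_HEARD_SECONDS = 30  # default from the commit message; configurable

class AudioEventMaintainer:
    def __init__(self, max_not_heard=MAX_NOT_HEARD_SECONDS):
        self.max_not_heard = max_not_heard
        # label -> timestamp of the most recent detection
        self.detections = {}

    def heard(self, label):
        """Record that a label was just detected."""
        self.detections[label] = datetime.datetime.now().timestamp()

    def expire_events(self):
        """Return labels whose events should be ended, and forget them."""
        now = datetime.datetime.now().timestamp()
        expired = [
            label
            for label, last_heard in self.detections.items()
            if now - last_heard > self.max_not_heard
        ]
        for label in expired:
            # In the real code this is where the event is ended via the API.
            del self.detections[label]
        return expired
```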
* Stop ffmpeg when shutting down
* Fixes
* End events correctly
* Use api instead of event queue to save audio events
* Get events working
* Close threads when stop event is sent
* remove unused
* Only start audio process if at least one camera is enabled
* Add const for float
* Cleanup labelmap
* Add audio icon in frontend
* Add ability to toggle audio with mqtt
* Set initial audio value
* Fix audio enabling
* Close logpipe
* Isort
* Formatting
* Fix web tests
* Fix web tests
* Handle cases where args are a string
* Remove log
* Cleanup process close
* Use correct field
* Simplify if statement
* Use var for localhost
* Add audio detectors docs
* Add restream docs to mention audio detection
* Add full config docs
* Fix links to other docs
---------
Co-authored-by: Jason Hunter <hunterjm@gmail.com>
* use a different method for blur and contrast to reduce CPU
* blur with radius instead
* use faster interpolation for motion
* improve contrast based on averages
* increase default threshold to 30
* ensure mask is applied after contrast improvement
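A rough sketch of the cheaper per-frame pipeline these motion commits describe: downscale with a fast interpolation, blur with a small radius, stretch contrast based on the running average brightness, and only then apply the mask. The resize factor and blur radius are illustrative:

```python
import cv2
import numpy as np

def preprocess_motion_frame(gray_frame, mask, avg_brightness, blur_radius=1):
    """Downscale, blur, contrast-stretch, then mask a grayscale frame."""
    # Faster interpolation: nearest-neighbor is good enough for motion detection.
    small = cv2.resize(
        gray_frame, None, fx=0.25, fy=0.25, interpolation=cv2.INTER_NEAREST
    )

    # Blur with a small radius instead of a large fixed kernel.
    ksize = 2 * blur_radius + 1
    blurred = cv2.GaussianBlur(small, (ksize, ksize), 0)

    # Improve contrast based on the running average brightness of the scene.
    gain = avg_brightness / max(blurred.mean(), 1.0)
    contrasted = np.clip(blurred.astype(np.float32) * gain, 0, 255).astype(np.uint8)

    # Apply the motion mask only after the contrast improvement.
    contrasted[mask] = 0
    return contrasted
```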
* update opencv
* update benchmark script
* configurable ffmpeg timeout
* configurable ffmpeg healthcheck interval
rename timeout to healthcheck_interval
only grab config value once
* configurable ffmpeg retry interval
rename healthcheck_interval to retry_interval
* add retry_interval to docs
- update retry_interval text in config.py
* optimize frame-per-second calculations
FPS is never calculated over differing periods for any given counter.
By only storing timestamps that fall within the period being measured, we
avoid iterating over 1000 entries every time the rate is recalculated
(multiple times per second). 1000 entries would normally only be needed
if something is running at 100 fps. A more common speed, anywhere from
5 to 30 fps, only needs 50 to 300 entries.
This isn't a huge improvement in the grand scheme, but when motion
detection is disabled, it takes nearly as much time in a flamegraph
as actually transferring the video frame.
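A minimal sketch of the counter described above: keep only the timestamps inside the measurement window, so computing the rate never touches more entries than the actual frame rate requires. The class and method names are illustrative:

```python
import time
from collections import deque

class EventsPerSecond:
    def __init__(self, window_seconds=10):
        self.window = window_seconds
        self.timestamps = deque()

    def update(self):
        now = time.monotonic()
        self.timestamps.append(now)
        # Drop anything older than the window so the deque only ever holds
        # window_seconds * fps entries (e.g. 50-300 at 5-30 fps).
        cutoff = now - self.window
        while self.timestamps and self.timestamps[0] < cutoff:
            self.timestamps.popleft()

    def eps(self):
        cutoff = time.monotonic() - self.window
        while self.timestamps and self.timestamps[0] < cutoff:
            self.timestamps.popleft()
        return len(self.timestamps) / self.window
```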
* fix python checks
* Load labels dynamically to include custom events and audio, do not include attribute labels
* Formatting
* Fix sorting
* Also filter tracked object list on camera page
* isort
* Don't fail before load
* pass attribute labels as attributes
* add label attrs to events and snapshots
* incorporate area of license_plate and face into snapshot selection
* populate sublabels for cars with logos
* Make camera recordings mover asynchronous
* Formatting
* Move to using cv2 instead of external ffmpeg process
* Use ffprobe if cv2 failed
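A sketch of the duration probe these commits describe: try OpenCV first (no subprocess needed) and only fall back to spawning ffprobe if OpenCV cannot read the file. The function name is illustrative; the ffprobe invocation is the standard one for printing a file's duration:

```python
import subprocess
import cv2

def get_video_duration(path):
    """Return the clip duration in seconds, preferring cv2 over ffprobe."""
    capture = cv2.VideoCapture(path)
    if capture.isOpened():
        fps = capture.get(cv2.CAP_PROP_FPS)
        frames = capture.get(cv2.CAP_PROP_FRAME_COUNT)
        capture.release()
        if fps and frames:
            return frames / fps

    # Fall back to ffprobe if cv2 could not determine the duration.
    result = subprocess.run(
        [
            "ffprobe",
            "-v", "error",
            "-show_entries", "format=duration",
            "-of", "default=noprint_wrappers=1:nokey=1",
            path,
        ],
        capture_output=True,
        text=True,
    )
    return float(result.stdout.strip() or 0)
```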
* Formatting
* Fix bad access
* Formatting
* Update frigate/record/maintainer.py
Co-authored-by: Blake Blackshear <blakeb@blakeshome.com>
* Update name of caller
---------
Co-authored-by: Blake Blackshear <blakeb@blakeshome.com>
* Prefer horizontal layout to vertical
* Rewrite birdseye to use aspect ratios instead of resolutions as layout configurator
* Improve layout with slightly larger than 16:9 cameras
* Remove manual 2 camera layout
* Lint
* Remove log
* "Refactor storage stats calculation to use powers of 2 for more accurate values"
* replace 1000000 to 2^20
* Refactor storage unit size display to use binary prefixes
This commit updates the display of storage unit sizes in both the camera storage stats and the Storage component in the web UI to use binary prefixes (MiB and GiB) instead of decimal prefixes (MB and GB). This provides a more accurate and consistent representation of storage sizes.
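For reference, a tiny sketch of the units change: sizes are divided by powers of 2 (2^20 bytes per MiB, 2^30 per GiB) rather than by 1,000,000:

```python
def bytes_to_mib(size_bytes):
    # 2**20 = 1,048,576 bytes per MiB (previously divided by 1,000,000).
    return size_bytes / 2**20

def bytes_to_gib(size_bytes):
    return size_bytes / 2**30
```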
* refactor existing motion detector
* implement and use cnt bgsub
* pass fps to motion detector
* create a simplified motion detector
* lightning detection
* update default motion config
* lint imports
* use estimated boxes for regions
* use improved motion detector
* update test
* use a different strategy for clustering motion and object boxes
* increase alpha during calibration
* simplify object consolidation
* add some reasonable constraints to the estimated box
* adjust cluster boundary to 10%
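The commit messages do not spell out the clustering strategy; purely as an illustration, here is one way a "10% cluster boundary" could work: two motion/object boxes are grouped when they still intersect after each is expanded by 10% of its own size. All names here are hypothetical:

```python
def expand_box(box, pct=0.10):
    """Grow (x1, y1, x2, y2) outward by pct of its width/height."""
    x1, y1, x2, y2 = box
    dx = (x2 - x1) * pct
    dy = (y2 - y1) * pct
    return (x1 - dx, y1 - dy, x2 + dx, y2 + dy)

def boxes_overlap(a, b):
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

def should_cluster(box_a, box_b, boundary_pct=0.10):
    """Group two boxes if they overlap once each is expanded by the boundary pct."""
    return boxes_overlap(expand_box(box_a, boundary_pct), expand_box(box_b, boundary_pct))
```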
* refactor
* add disabled debug code
* fix variable scope
* Add option for network bandwidth and only calculate if enabled
* Don't show network bandwidth in system stats page if not enabled
* Formatting
* Hide other rows as well
* Add docs
* Add config options for AMD and Intel GPU stats
* Fix stats access
* Update docs
* Use correct bool syntax
Co-authored-by: Blake Blackshear <blakeb@blakeshome.com>
---------
Co-authored-by: Blake Blackshear <blakeb@blakeshome.com>
* Update pull_request.yml
* Add a temporary table and use pagination to delete recordings with missing files in chunks
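A simplified sketch of this cleanup pattern with peewee (Frigate's ORM): stage the IDs of recordings whose files are missing in a temporary table, then delete them from `Recordings` in paginated chunks rather than one huge query. Field names and the page size are illustrative; the real `RecordingsToDelete` class lives in models.py:

```python
import os
from peewee import Model, CharField, chunked

class RecordingsToDelete(Model):
    """Temporary staging table of recording IDs to delete."""
    id = CharField(primary_key=True)

    class Meta:
        temporary = True

def delete_recordings_with_missing_files(db, Recordings, page_size=1000):
    RecordingsToDelete.bind(db)
    db.create_tables([RecordingsToDelete])

    # Stage IDs of recordings whose file no longer exists on disk.
    missing = [
        {"id": rec.id}
        for rec in Recordings.select(Recordings.id, Recordings.path).iterator()
        if not os.path.exists(rec.path)
    ]
    with db.atomic():
        for batch in chunked(missing, page_size):
            RecordingsToDelete.insert_many(batch).execute()

    # Delete from Recordings in paginated chunks.
    page = 1
    while True:
        ids = [row.id for row in RecordingsToDelete.select().paginate(page, page_size)]
        if not ids:
            break
        Recordings.delete().where(Recordings.id.in_(ids)).execute()
        page += 1
```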
* move RecordingsToDelete class to models.py
* recording cleanup: bugfixes
* Update cleanup.py
* improve log message in cleanup.py
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
---------
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>