* Docs: Fix and clarify which /dev/video devices to use with Raspberry Pi
* Update docs/docs/configuration/hardware_acceleration.md
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
* Update docs/docs/configuration/hardware_acceleration.md
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
---------
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
* Revise VSCode hostname info in docs
* Fix misplaced backtick
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
---------
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
* ROCm AMD/GPU based build and detector, WIP
* detectors/rocm: separate yolov8 postprocessing into its own function; fix box scaling; use cv2.dnn.blobFromImage for preprocessing; assert on required model parameters
* AMD/ROCm: add a couple more ultralytics models; comments
* docker/rocm: make imported model files readable by all
* docker/rocm: readme about running on AMD GPUs
* docker/rocm: updated README
* docker/rocm: updated README
* docker/rocm: updated README
* detectors/rocm: separated preprocessing functions into yolo_utils.py
* detector/plugins: added onnx cpu plugin
* docker/rocm: updated container with limited label sets
* example detectors view
* docker/rocm: updated README.md
* docker/rocm: update README.md
* docker/rocm: do not set HSA_OVERRIDE_GFX_VERSION at all for the general version as the empty value broke rocm
* detectors: simplified/optimized yolov8_postprocess
* detector/yolo_utils: indentation, remove unused variable
* detectors/rocm: default option to conserve cpu usage at the expense of latency
* detectors/yolo_utils: use nms to prefilter overlapping boxes if too many detected
* detectors/edgetpu_tfl: add support for yolov8
* util/download_models: script to download yolov8 model files
* docker/main: add download-models overlay into s6 startup
* detectors/rocm: assume models are in /config/model_cache/yolov8/
* docker/rocm: compile onnx files into mxr files at startup
* switch model download into bash script
* detectors/rocm: automatically override HSA_OVERRIDE_GFX_VERSION for a couple of known chipsets
* docs: rocm detector first notes
* typos
* describe builds (harakas temporary)
* docker/rocm: also build a version for gfx1100
* docker/rocm: use cp instead of tar
* docker/rocm: remove README as it is now in detector config
* frigate/detectors: renamed yolov8_preprocess->preprocess, pass input tensor element type
* docker/main: use newer openvino (2023.3.0)
* detectors: implement class aggregation
* update yolov8 model
* add openvino/yolov8 support for label aggregation
* docker: remove pointless s6/timeout-up files
* Revert "detectors: implement class aggregation"
This reverts commit dcfe6bbf6f.
* detectors/openvino: remove class aggregation
* detectors: increase yolov8 postprocessing score threshold to 0.5
* docker/rocm: separate rocm distributed files into its own build stage
* Update object_detectors.md
* updated CODEOWNERS file for rocm
* updated build names for documentation
* Revert "docker/main: use newer openvino (2023.3.0)"
This reverts commit dee95de908.
* reverted openvino detector
* reverted edgetpu detector
* removed any mention of edgetpu or openvino from the rocm docs
* Update docs/docs/configuration/object_detectors.md
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
* renamed frigate.detectors.yolo_utils.py -> frigate.detectors.util.py
* clarified rocm example performance
* Improved wording and clarified text
* Mentioned rocm detector for AMD GPUs
* applied ruff formatting
* applied ruff suggested fixes
* docker/rocm: fix missing argument resulting in larger docker image sizes
* docs/configuration/object_detectors: fix links to yolov8 release files
---------
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
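For context on the yolo_utils commits above (blobFromImage preprocessing and NMS prefiltering of overlapping boxes), here is a minimal illustrative sketch. It is not the actual frigate.detectors code; the input size, thresholds, and the [x, y, w, h] box format are assumptions.

```python
import cv2
import numpy as np

def preprocess(frame, input_size=(320, 320)):
    # Resize and scale to [0, 1] as an NCHW float32 blob -- assumed layout,
    # mirroring what cv2.dnn.blobFromImage produces for a YOLO-style model.
    return cv2.dnn.blobFromImage(frame, scalefactor=1 / 255.0, size=input_size, swapRB=True)

def prefilter_boxes(boxes, scores, max_detections=20, score_thresh=0.5, nms_thresh=0.4):
    # If the raw output contains too many candidates, run OpenCV's NMS first so
    # the postprocessor only has to rank a handful of boxes ([x, y, w, h] assumed).
    if len(boxes) <= max_detections:
        return boxes, scores
    keep = cv2.dnn.NMSBoxes(boxes.tolist(), scores.tolist(), score_thresh, nms_thresh)
    keep = np.array(keep).flatten()[:max_detections]
    return boxes[keep], scores[keep]
```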
* Generate low res low fps previews for recordings viewer
* Make sure previews end on the hour
* Fix durations and decrease keyframe interval to ensure smooth scrubbing
* Ensure minimized resolution is compatible with yuv
* Add ability to configure preview quality
* Fix
* Clean up previews more efficiently
* Use iterator
* revamp plus docs
* consolidate label guidance
* add some common complete config examples
* clarify zone presence
* bottom center example of mask
* update recommended hardware
* update nav
* update getting started
* add openvino example
* explain why we track stationary objects
* move false positive guide to config folder
* fix link
* update record and parked car guide
* tweaks
* Add glossary with commonly used terms for frigate
* Link back to full docs pages
* Add glossary to sidebar
* Clarifications and grammar fixes
Co-authored-by: Blake Blackshear <blakeb@blakeshome.com>
---------
Co-authored-by: Blake Blackshear <blakeb@blakeshome.com>
* update docs to clarify variable substitution in go2rtc
* update to complete
* cleanup spacing
---------
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
* Included caution about Snapcraft docker issues.
* Accepted format suggestion from NickM-27
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
---------
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
* support for other yolov models and config checks
* apply code formatting
* Information about core mask and inference speed
* update rknn postprocess and remove params
* update model selection
* Apply suggestions from code review
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
* support rknn on all socs
* apply changes from review and fix post process bug
* apply code formatting
* update tip in object_detectors docs
---------
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
* Add endpoint to restart Frigate
The only means of restarting Frigate remotely is to publish to
the restart topic on the server's websocket. It's
convenient to also expose this capability via an HTTP endpoint.
* Add new section to API docs
* Remove extra line
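As a rough illustration of the endpoint described above, a remote restart could look like the sketch below; the /api/restart path, host, and port are assumptions, so check the API docs section added in this change for the exact route.

```python
import requests

# Hypothetical host/port; the restart route is assumed to be POST /api/restart.
FRIGATE_URL = "http://frigate.local:5000"

resp = requests.post(f"{FRIGATE_URL}/api/restart", timeout=10)
resp.raise_for_status()
print("Frigate restart requested:", resp.status_code)
```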
* prevent estimate clipping when autotracking
* use unclipped estimate in distance function only
* remove autotracking velocity changes
* publish on init
* optimize motion and velocity estimation
* change recommended fps and fix config validate
* remove unneeded var
* process at most 3 objects per second
* fix test
* add `--validate-config` option for CI config validation
Signed-off-by: Russell Troxel <russell.troxel@segment.com>
* Fix Lint
Signed-off-by: Russell Troxel <russell.troxel@segment.com>
* Add docs & test live
Signed-off-by: Russell Troxel <russell.troxel@segment.com>
* Update docs/docs/configuration/advanced.md
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
* Fix Lint
Signed-off-by: Russell Troxel <russell@troxel.io>
---------
Signed-off-by: Russell Troxel <russell.troxel@segment.com>
Signed-off-by: Russell Troxel <russell@troxel.io>
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
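A rough idea of how the new `--validate-config` option could be wired into CI; the container image tag, mount path, and module invocation below are assumptions for illustration, so see the advanced configuration docs added above for the supported command.

```python
import subprocess
import sys

# Hypothetical CI step: validate the Frigate config without starting the service.
cmd = [
    "docker", "run", "--rm",
    "-v", "./config/config.yml:/config/config.yml:ro",
    "ghcr.io/blakeblackshear/frigate:stable",
    "python3", "-u", "-m", "frigate", "--validate-config",
]
result = subprocess.run(cmd)
sys.exit(result.returncode)
```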
* zoom in/out in search for lost objects
* predicted box should not be empty
* clean up and update zoom logic
* only zoom if enabled
* more cleanup
* check for valid velocity when zooming
* only try absolute zoom in if obj area has changed
* zoom logic
* don't enqueue lost object zoom if already at limit
* don't disable motion boxes during ptz moves
* velocity threshold based on move coefficients
* fix area zoom logic
* disable debug zoom
* don't process objects if ptz moving
* recalc with exponent
* change exponent
* remove lost object zooming
* increase distance threshold for stationary object
* increase distance threshold constant
* only zoom out if nonzero
* camera name in all debug logging
* add camera name to debug logging
* camera variable name consistency
* update calibration behavior and docs
* docs and better zooming
* more sensible target values
* docs wording
* fix velocity threshold variable
* zooming tweaks and remove iou for current objects
* debug and docs
* get valid velocity
* include zero
* additional debug statements
* add zoom hysteresis
* zoom on initial move if relative
* only update target box if we actually zoom
* merge dev
* use getattr instead of get
* increase distance threshold
* reverse logic
* get_camera_status after preset move to store zoom
* final tweaks and docs
* use constants and catch possible debug exception
* adjust zoom factor exponent
* don't run motion estimation when calling preset
* adjust dimension threshold
* use numpy for velocity estimate calcs
* more numpy conversion
* fix numpy shapes
* numpy zeros dimension
* more zoom out conditions
* fix velocity bug
* ensure init has been called in debug view
* ensure onvif init if enabling by mqtt
* change default hysteresis values
* recalc relative zoom value
* zoom out value
* try to zoom when object isn't moving
* try zoom when tracked object is not moving
* don't try to zoom every time
* negate zoom out condition when needed
* hysteresis constants for absolute zooming
* update zoom conditions
* don't recalc target box on zoom only
* zoom out if above area threshold
* don't print zooming debug for stationary obj
* revamp zooming to use area moving average
* zooming tweaks and expose property
* limit zoom with max target box
* use calibration to determine zoom levels
* zoom logic fix
* docs
* add tapo c200 camera
* fix initial absolute zoom
* small zoom logic fix
* better invalid velocity checks
* fix test
* really fix test this time
* Refactor media source handling in MsePlayer.js and Birdseye.jsx to support ManagedMediaSource
* lint
* Update docs to reflect iOS supporting mse
---------
Co-authored-by: Sergey Krashevich <svk@svk.su>
* Update api.md
* Update api.md
* Added filter option for min/max score for event to API function /events
* Added filter for submitted events
* Update http.py
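A sketch of how the new event score filters might be queried; the min_score/max_score parameter names are taken from the change description and are assumptions here, with the authoritative names defined in http.py.

```python
import requests

FRIGATE_URL = "http://frigate.local:5000"  # hypothetical host/port

# Assumed query parameter names; see the /events handler for supported filters.
params = {"min_score": 0.7, "max_score": 0.95, "limit": 25}
events = requests.get(f"{FRIGATE_URL}/api/events", params=params, timeout=10).json()
for event in events:
    print(event["id"], event["label"])
```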
* don't zoom if camera doesn't support it
* basic zooming
* make zooming configurable
* zooming docs
* optional zooming in camera status
* Use absolute instead of relative zooming
* increase edge threshold
* zoom considering object area
* bugfixes
* catch onvif zooming errors
* relative zooming option for dahua/amcrest cams
* docs
* docs
* don't make small movements
* remove old logger statement
* fix small movements
* use enum in config for zooming
* fix formatting
* empty move queue first
* clear tracked object before waiting for stop
* use velocity estimation for movements
* docs updates
* add tests
* typos
* recalc every 50 moves
* adjust zoom based on estimate box if calibrated
* tweaks for fast objects and large movements
* use real time for calibration and add info logging
* docs updates
* remove area scale
* Add example video to docs
* zooming font header size the same as the others
* log an error if a ptz doesn't report a MoveStatus
* debug logging for onvif service capabilities
* ensure camera supports ONVIF MoveStatus
* Add args to ignore audio and only process keyframes
* Add timelapse args to config
* Update docs
* Formatting
* Fix spacing
* Fix formatting
* add example of math for pts
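The pts math referenced above boils down to: setpts scales every presentation timestamp, so a factor of f plays the footage back at 1/f speed. A quick sanity check in Python, where the 0.04 factor is just an example value:

```python
# setpts=<factor>*PTS compresses timestamps, so playback speed = 1 / factor.
# Example: a factor of 0.04 turns 1 hour of recording into a 25x timelapse.
factor = 0.04
speedup = 1 / factor                 # 25.0
recording_seconds = 60 * 60          # one hour of source footage
timelapse_seconds = recording_seconds / speedup
print(f"{speedup:.0f}x speedup -> {timelapse_seconds:.0f}s of timelapse")  # 144s
```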
* add note about network bandwidth permissions
* Update default net int
* Set default network interfaces to empty
* Don't read interfaces if none are set
* Formatting
* Add stderr output
* Non-Jetson changes
Required for later commits:
- Allow base image to be overridden (and don't assume its WORKDIR)
- Ensure python3.9
- Map hwaccel decode presets as strings instead of lists
Not required:
- Fix existing documentation
- Simplify hwaccel scale logic
* Prepare for multi-arch tensorrt build
* Add tensorrt images for Jetson boards
* Add Jetson ffmpeg hwaccel
* Update docs
* Add CODEOWNERS
* CI
* Change default model from yolov7-tiny-416 to yolov7-320
In my experience the tiny models perform markedly worse without being
much faster
* fixup! Update docs
* Make main frigate build non rpi specific and build rpi using base image
* Add boards to sidebar
* Fix docker build
* Fix docs build
* Update pr branch for testing
* remove target from rpi build
* Remove manual build
* Add push build for rpi
* fix typos, improve wording
* Add arm build for rpi
* Cleanup and add default github ref name
* Cleanup docker build file system
* Setup to use docker bake
* Add ci/cd for bake
* Fix path
* Fix devcontainer
* Set targets
* Fix build
* Fix syntax
* Add wheels target
* Move dev container to trt
* Update key and fix rpi local
* Move requirements files and set intermediate targets
* Add back --load
* Update docs for community board development
* Update installation docs to reflect different builds available
* Update docs with official and community supported headers
* Update codeowners docs
* Update docs
* Assemble main and standard builds
* Change order of pushes
* Remove community board after successful build
* Fix rpi bake file names
* Send mqtt message when audio is detected
* Fix value
* Add audio topics to mqtt docs and add mqtt headers
* Use existing standard for values
* Update mqtt.md
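A minimal listener sketch for the audio messages above, assuming the topic layout is frigate/<camera>/audio/<label> with ON/OFF payloads and the paho-mqtt 1.x callback API; check the updated mqtt docs for the exact topics.

```python
import paho.mqtt.client as mqtt

# Assumed topic shape: frigate/<camera_name>/audio/<label> publishing ON/OFF.
def on_message(client, userdata, msg):
    print(msg.topic, msg.payload.decode())

client = mqtt.Client()                 # paho-mqtt 1.x style client
client.on_message = on_message
client.connect("mqtt.local", 1883)     # hypothetical broker address
client.subscribe("frigate/+/audio/#")
client.loop_forever()
```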
* Add auto configuration for height, width and fps in detect role
* Add auto-configuration for detect width, height, and fps for input roles with detect in the CameraConfig class in config.py
* Refactor code to retrieve video properties from input stream in CameraConfig class and add optional parameter to retrieve video duration in get_video_properties function
* format
* Set default detect dimensions to 1280x720 and update DetectConfig to use the defaults
* Revert "Set default detect dimensions to 1280x720 and update DetectConfig to use the defaults"
This reverts commit a1aed0414d.
* Add default detect dimensions if autoconfiguration failed and log a warning message
* fix warn message spelling on frigate/config.py
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
* Ensure detect height and width are not None before using them in camera configuration
* docs: initial commit
* rename streamInfo to stream_info
Co-authored-by: Blake Blackshear <blakeb@blakeshome.com>
* Apply suggestions from code review
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
* Update docs
* handle case when get_video_properties returns 0x0 dimensions
* Set detect resolution based on stream properties if available, else apply default values
* Update FrigateConfig to set default values for stream_info if resolution detection fails
* Update camera detection dimensions based on stream information if available
---------
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
Co-authored-by: Blake Blackshear <blakeb@blakeshome.com>
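The auto-configuration above relies on probing the input stream; a rough sketch of what reading the properties can look like is below. It is illustrative only, not the actual get_video_properties implementation.

```python
import json
import subprocess

def get_video_properties(url: str) -> dict:
    # Probe the first video stream for width/height/fps; returns {} on failure
    # so callers can fall back to default detect dimensions.
    cmd = [
        "ffprobe", "-v", "quiet", "-print_format", "json",
        "-show_streams", "-select_streams", "v:0", url,
    ]
    try:
        streams = json.loads(subprocess.check_output(cmd)).get("streams", [])
    except (subprocess.CalledProcessError, json.JSONDecodeError):
        return {}
    if not streams:
        return {}
    stream = streams[0]
    num, _, den = stream.get("avg_frame_rate", "0/1").partition("/")
    fps = float(num) / float(den) if float(den or 1) else 0.0
    return {"width": stream.get("width", 0), "height": stream.get("height", 0), "fps": fps}
```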
* Support setting sub label scores via API
* Update docs
* Update docs
* Formatting
* Throw error when score is outside expected bounds
* Fix / cleanup
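A sketch of setting a sub label together with a score via the API; the route and the subLabel/subLabelScore field names below are assumptions based on the change description, and a score outside the expected 0-1 bounds is rejected per the error handling added above.

```python
import requests

FRIGATE_URL = "http://frigate.local:5000"   # hypothetical host/port
event_id = "1700000000.123456-abc123"       # placeholder event id

# Field names are assumed; a score outside the 0-1 range returns an error.
payload = {"subLabel": "delivery_driver", "subLabelScore": 0.87}
resp = requests.post(f"{FRIGATE_URL}/api/events/{event_id}/sub_label", json=payload, timeout=10)
resp.raise_for_status()
```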
* much improved motion estimation and tracking
* docs updates
* move ptz specific mp values to ptz_metrics dict
* only check if moving at frame time
* pass full dict instead of individual values
* Basic functionality
* Threaded motion estimator
* Revert "Threaded motion estimator"
This reverts commit 3171801607.
* Don't detect motion when ptz is moving
* fix motion logic
* fix mypy error
* Add threaded queue for movement for slower ptzs
* Move queues per camera
* Move autotracker start to app.py
* iou value for tracked object
* mqtt callback
* tracked object should be initially motionless
* only draw thicker box if autotracking is enabled
* Init if enabled when initially disabled in config
* Fix init
* Thread names
* Always use motion estimator
* docs
* clarify fov support
* remove size ratio
* use mp event instead of value for ptz status
* update autotrack at half fps
* fix merge conflict
* fix event type for mypy
* clean up
* Clean up
* remove unused code
* merge conflict fix
* docs: update link to object_detectors page
* Update docs/docs/configuration/autotracking.md
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
* clarify wording
* pass actual instances directly
* default return preset
* fix type
* Error message when onvif init fails
* disable autotracking if onvif init fails
* disable autotracking if onvif init fails
* ptz module
* verify required_zones in config
* update util after dev merge
---------
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
* Update ha_network_storage.md to make it clear which config file to use
* Update docs/docs/guides/ha_network_storage.md with relative url
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
---------
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
* Update to latest tensorrt (8.6.1) release
* Build trt libyolo_layer.so in container
* Update tensorrt_models script to convert models from the frigate container
* Fix typo in model script
* Fix paths to yolo lib and models folder
* Add S6 scripts to test and convert specified TensorRT models at startup.
Rearrange tensorrt files into a docker support folder.
* Update TensorRT documentation to reflect the new model conversion process and minimum HW support.
* Fix model_cache path to live in config directory
* Move tensorrt s6 files to the correct directory
* Fix issues in model generation script
* Disable global timeout for s6 services
* Add version folder to tensorrt model_cache path
* Include TensorRT version 8.5.3
* Add numpy requirement prior to removal of np.bool
* This TRT version uses a mixture of cuda dependencies
* Redirect stdout from noisy model conversion
* Initial audio classification model implementation
* fix mypy
* Keep audio labelmap local
* Cleanup
* Start adding config for audio
* Add the detector
* Add audio detection process keypoints
* Build out base config
* Load labelmap correctly
* Fix config bugs
* Start audio process
* Fix startup issues
* Try to cleanup restarting
* Add ffmpeg input args
* Get audio detection working
* Save event to db
* End events if not heard for 30 seconds
* Use not heard config
* Stop ffmpeg when shutting down
* Fixes
* End events correctly
* Use api instead of event queue to save audio events
* Get events working
* Close threads when stop event is sent
* remove unused
* Only start audio process if at least one camera is enabled
* Add const for float
* Cleanup labelmap
* Add audio icon in frontend
* Add ability to toggle audio with mqtt
* Set initial audio value
* Fix audio enabling
* Close logpipe
* Isort
* Formatting
* Fix web tests
* Fix web tests
* Handle cases where args are a string
* Remove log
* Cleanup process close
* Use correct field
* Simplify if statement
* Use var for localhost
* Add audio detectors docs
* Add restream docs to mention audio detection
* Add full config docs
* Fix links to other docs
---------
Co-authored-by: Jason Hunter <hunterjm@gmail.com>
* Clean up docs given HA addon storage support
* Add guide for using HA network storage
* Add to sidebar
* Specify that media type needs to be used
* Link to storage guide from install docs
* Instruct users to store DB in /config
* Update ha_network_storage.md
* Recommend that data is moved or deleted
* use a different method for blur and contrast to reduce CPU
* blur with radius instead
* use faster interpolation for motion
* improve contrast based on averages
* increase default threshold to 30
* ensure mask is applied after contrast improvement
* update opencv
* update benchmark script
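For context on the blur/contrast changes above, here is a minimal OpenCV sketch of the general idea: a small radius-based blur plus an average-based contrast stretch. The kernel size and scaling factor are illustrative, not Frigate's actual constants.

```python
import cv2
import numpy as np

def preprocess_for_motion(gray: np.ndarray, blur_radius: int = 1) -> np.ndarray:
    # Blur with a small radius instead of a large kernel to keep CPU usage down.
    ksize = 2 * blur_radius + 1
    blurred = cv2.GaussianBlur(gray, (ksize, ksize), 0)

    # Cheap contrast improvement: stretch values around the frame average
    # rather than running a full histogram equalization every frame.
    avg = blurred.mean()
    stretched = np.clip((blurred.astype(np.float32) - avg) * 1.5 + avg, 0, 255)
    return stretched.astype(np.uint8)
```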
* configurable ffmpeg timeout
* configurable ffmpeg healthcheck interval
rename timeout to healthcheck_interval
only grab config value once
* configurable ffmpeg retry interval
rename healthcheck_interval to retry_interval
* add retry_interval to docs
- update retry_interval text in config.py
* pass attribute labels as attributes
* add label attrs to events and snapshots
* incorporate area of license_plate and face into snapshot selection
* populate sublabels for cars with logos
* Update faqs.md
I spent hours trying to figure this out; including it here in some way could potentially help someone out there.
* Update docs/docs/troubleshooting/faqs.md
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
* Update docs/docs/troubleshooting/faqs.md
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
* Update faqs.md
---------
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
Co-authored-by: Blake Blackshear <blakeb@blakeshome.com>