* Send mqtt message when audio is detected
* Fix value
* Add audio topics to mqtt docs and add mqtt headers
* Use existing standard for values
* Update mqtt.md
* Add auto configuration for height, width and fps in detect role
* Add auto-configuration for detect width, height, and fps for input roles with detect in the CameraConfig class in config.py
* Refactor code to retrieve video properties from input stream in CameraConfig class and add optional parameter to retrieve video duration in get_video_properties function
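A minimal sketch of how such a probe can work with OpenCV; the `get_video_properties` name and the optional duration flag come from the commits above, but the body here is illustrative rather than Frigate's actual implementation:

```python
import cv2


def get_video_properties(url: str, get_duration: bool = False) -> dict:
    """Probe a stream or file for width, height, fps (and optionally duration)."""
    cap = cv2.VideoCapture(url)
    if not cap.isOpened():
        return {"width": 0, "height": 0, "fps": 0}

    props = {
        "width": int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)),
        "height": int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT)),
        "fps": cap.get(cv2.CAP_PROP_FPS),
    }

    if get_duration:
        # Duration is only meaningful for files; live streams report 0/-1 frames.
        frames = cap.get(cv2.CAP_PROP_FRAME_COUNT)
        props["duration"] = frames / props["fps"] if props["fps"] else 0

    cap.release()
    return props
```

Callers that get a 0x0 result fall back to default detect dimensions and log a warning, per the surrounding commits.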
* format
* Set default detect dimensions to 1280x720 and update DetectConfig to use the defaults
* Revert "Set default detect dimensions to 1280x720 and update DetectConfig to use the defaults"
This reverts commit a1aed0414d.
* Add default detect dimensions if autoconfiguration failed and log a warning message
fix warning message spelling in frigate/config.py
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
* Ensure detect height and width are not None before using them in camera configuration
* docs: initial commit
* rename streamInfo to stream_info
Co-authored-by: Blake Blackshear <blakeb@blakeshome.com>
* Apply suggestions from code review
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
* Update docs
* handle case when get_video_properties returns a 0x0 dimension
* Set detect resolution based on stream properties if available, else apply default values
* Update FrigateConfig to set default values for stream_info if resolution detection fails
* Update camera detection dimensions based on stream information if available
---------
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
Co-authored-by: Blake Blackshear <blakeb@blakeshome.com>
* Add limit to event query for fetching latest event with specified label and camera name
* Refactor the label_thumbnail function in frigate/http.py to simplify the event_query logic and improve code readability
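The query change amounts to something like the following peewee sketch, assuming the `Event` model from `frigate.models` with the usual `camera`, `label`, and `start_time` fields:

```python
from frigate.models import Event


def latest_event(camera: str, label: str):
    # Fetch only the newest matching event instead of loading the full result set.
    return (
        Event.select()
        .where((Event.camera == camera) & (Event.label == label))
        .order_by(Event.start_time.desc())
        .limit(1)
        .first()
    )
```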
* Support setting sub label scores via API
* Update docs
* Update docs
* Formatting
* Throw error when score is outside expected bounds
* Fix / cleanup
* much improved motion estimation and tracking
* docs updates
* move ptz specific mp values to ptz_metrics dict
* only check if moving at frame time
* pass full dict instead of individual values
* only check camera status if autotracking enabled
* a more sensible sleep time
* Only check camera status if preparing for a move
* only update tracked obj position when ptz stopped
* both pantilt *and* zoom should be idle
* check more often after moving
* No need to move pan and tilt separately
* Basic functionality
* Threaded motion estimator
* Revert "Threaded motion estimator"
This reverts commit 3171801607.
* Don't detect motion when ptz is moving
* fix motion logic
* fix mypy error
* Add threaded queue for movement for slower ptzs
* Move queues per camera
* Move autotracker start to app.py
* iou value for tracked object
* mqtt callback
* tracked object should be initially motionless
* only draw thicker box if autotracking is enabled
* Init if enabled when initially disabled in config
* Fix init
* Thread names
* Always use motion estimator
* docs
* clarify fov support
* remove size ratio
* use mp event instead of value for ptz status
* update autotrack at half fps
* fix merge conflict
* fix event type for mypy
* clean up
* Clean up
* remove unused code
* merge conflict fix
* docs: update link to object_detectors page
* Update docs/docs/configuration/autotracking.md
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
* clarify wording
* pass actual instances directly
* default return preset
* fix type
* Error message when onvif init fails
* disable autotracking if onvif init fails
* disable autotracking if onvif init fails
* ptz module
* verify required_zones in config
* update util after dev merge
---------
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
* Update to latest tensorrt (8.6.1) release
* Build trt libyolo_layer.so in container
* Update tensorrt_models script to convert models from the frigate container
* Fix typo in model script
* Fix paths to yolo lib and models folder
* Add S6 scripts to test and convert specified TensorRT models at startup.
Rearrange tensorrt files into a docker support folder.
* Update TensorRT documentation to reflect the new model conversion process and minimum HW support.
* Fix model_cache path to live in config directory
* Move tensorrt s6 files to the correct directory
* Fix issues in model generation script
* Disable global timeout for s6 services
* Add version folder to tensorrt model_cache path
* Include TensorRT version 8.5.3
* Add numpy requirement prior to removal of np.bool
* This TRT version uses a mixture of cuda dependencies
* Redirect stdout from noisy model conversion
* Add functionality to update YAML config file with PUT request in HTTP endpoint
* Refactor copying of text to clipboard with Clipboard API and fallback to document.execCommand('copy') in CameraMap.jsx file
* Update YAML file from URL query parameters in frigate/http.py and add functionality to save motion masks, zones, and object masks in CameraMap.jsx
* formatting
* fix merge mistakes
* Refactor camera zone coordinate saving to use single query parameter per zone in CameraMap.jsx
* remove unnecessary print statements in util.py
* Refactor update_yaml_file function to separate the logic for updating YAML data into a new function update_yaml().
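A simplified, PyYAML-based sketch of what separating `update_yaml()` out of `update_yaml_file()` could look like; the real implementation may preserve comments and formatting, which plain `safe_dump` does not:

```python
import yaml


def update_yaml(data: dict, key_path: list, new_value):
    """Walk a list of keys into nested dicts and set the final key to new_value."""
    node = data
    for key in key_path[:-1]:
        node = node.setdefault(key, {})
    node[key_path[-1]] = new_value
    return data


def update_yaml_file(path: str, key_path: list, new_value) -> None:
    with open(path) as f:
        data = yaml.safe_load(f) or {}
    data = update_yaml(data, key_path, new_value)
    with open(path, "w") as f:
        yaml.safe_dump(data, f)
```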
* fix merge errors
* Refactor code to improve error handling and add dependencies to useEffect hooks
* reduce contention on frame_queue
don't check if the queue is full, just attempt to add the frame
in a non-blocking manner, and then if it fails, skip it
* don't check if the frame queue is empty, just try and get from it
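The contention fix boils down to replacing full()/empty() checks with optimistic, exception-driven calls, roughly like this sketch (helper names are hypothetical):

```python
import queue


def try_enqueue_frame(frame_queue, frame_time) -> bool:
    # Don't ask whether the queue is full (an extra lock round-trip);
    # just attempt a non-blocking put and drop the frame if it fails.
    try:
        frame_queue.put(frame_time, block=False)
        return True
    except queue.Full:
        return False


def try_dequeue_frame(frame_queue, timeout: float = 0.1):
    # Likewise, skip the empty() check and just try to get.
    try:
        return frame_queue.get(block=True, timeout=timeout)
    except queue.Empty:
        return None
```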
* Update frigate/video.py
Co-authored-by: Blake Blackshear <blakeb@blakeshome.com>
---------
Co-authored-by: Blake Blackshear <blakeb@blakeshome.com>
* Refactored queues to use faster_fifo instead of mp.Queue
* Refactored LimitedQueue to include a counter for the number of items in the queue and updated put and get methods to use the counter
* Refactor app.py and util.py to use a custom Queue implementation called LQueue instead of the existing Queue
* Refactor put and get methods in LimitedQueue to handle queue size and blocking behavior more efficiently
* code format
* remove code from other branch (merge mistake)
* Check ffmpeg version instead of checking for presence of BTBN_PATH
* Query ffmpeg version in s6 run script instead of subprocessing in every import
* Define LIBAVFORMAT_VERSION_MAJOR in devcontainer too
* Formatting
* Default ffmpeg version to current btbn version so unit tests pass
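A hedged sketch of how the version check could be derived in Python; per the commits above, the actual change exports `LIBAVFORMAT_VERSION_MAJOR` once from the s6 run script rather than subprocessing on every import:

```python
import os
import re
import subprocess


def get_libavformat_major() -> int:
    # Prefer the value exported once at container startup...
    env = os.environ.get("LIBAVFORMAT_VERSION_MAJOR")
    if env:
        return int(env)

    # ...and only fall back to probing ffmpeg directly.
    output = subprocess.run(
        ["ffmpeg", "-version"], capture_output=True, text=True
    ).stdout
    match = re.search(r"libavformat\s+(\d+)\.", output)
    return int(match.group(1)) if match else 0
```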
* Reduce framerate before downscaling
It is cheaper to drop frames and downscale those that remain than it is
to downscale all frames and then drop some of them. This is achieved
with the filter chain `-vf fps=FPS,scale=W:H`, and perhaps was the
original intention. The plain `-r` and `-s` flags do not execute in
order though - they each put themselves at the *end* of the filterchain,
so `-r FPS -s WxH` actually applies the scale filter first, and then the
rate filter.
This fix can halve the CPU used by the detect ffmpeg process.
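Illustratively, the detect-role output args end up shaped like this, with the `fps` filter ahead of `scale` inside a single `-vf` chain (values are placeholders):

```python
# Drop frames first, then downscale only the survivors.
detect_fps = 5
detect_width, detect_height = 1280, 720

output_args = [
    "-vf",
    f"fps={detect_fps},scale={detect_width}:{detect_height}",
    "-f",
    "rawvideo",
    "-pix_fmt",
    "yuv420p",
    "pipe:",
]
```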
* Bring back hard rate limits
* Force birdseye cameras into standard aspect ratios
* Clarify comment
* Formatting
* Actually use the calculated aspect ratio when building the layout
* Fix Y aspect
* Force canvas into known aspect ratio as well
* Save canvas size and don't recalculate
* Cache coefficients that are used for different size layouts
* Further optimize calculations to not be done multiple times
* Add audio process PID to the list of processes and log the start of the audio process
* Update audio process PID key in processes dictionary to "audioDetector" instead of "audio".
* Scale layout up to max size after it has been calculated
* Limit portrait cameras to taking up 2 rows
* Fix bug
* Fix birdseye not removing cameras once objects are no longer visible
* Fix lint
* Initial audio classification model implementation
* fix mypy
* Keep audio labelmap local
* Cleanup
* Start adding config for audio
* Add the detector
* Add audio detection process keypoints
* Build out base config
* Load labelmap correctly
* Fix config bugs
* Start audio process
* Fix startup issues
* Try to cleanup restarting
* Add ffmpeg input args
* Get audio detection working
* Save event to db
* End events if not heard for 30 seconds
* Use not heard config
* Stop ffmpeg when shutting down
* Fixes
* End events correctly
* Use api instead of event queue to save audio events
* Get events working
* Close threads when stop event is sent
* remove unused
* Only start audio process if at least one camera is enabled
* Add const for float
* Cleanup labelmap
* Add audio icon in frontend
* Add ability to toggle audio with mqtt
* Set initial audio value
* Fix audio enabling
* Close logpipe
* Isort
* Formatting
* Fix web tests
* Fix web tests
* Handle cases where args are a string
* Remove log
* Cleanup process close
* Use correct field
* Simplify if statement
* Use var for localhost
* Add audio detectors docs
* Add restream docs to mention audio detection
* Add full config docs
* Fix links to other docs
---------
Co-authored-by: Jason Hunter <hunterjm@gmail.com>
* use a different method for blur and contrast to reduce CPU
* blur with radius instead
* use faster interpolation for motion
* improve contrast based on averages
* increase default threshold to 30
* ensure mask is applied after contrast improvement
* update opencv
* update benchmark script
* configurable ffmpeg timeout
* configurable ffmpeg healthcheck interval
rename timeout to healthcheck_interval
only grab config value once
* configurable ffmpeg retry interval
rename healthcheck_interval to retry_interval
* add retry_interval to docs
- update retry_interval text in config.py
* optimize frame-per-second calculations
FPS is never calculated over differing periods for any given counter.
By only storing timestamps within the period that is actually used, we
avoid iterating 1000 entries every time the value is re-calculated
(multiple times per second). 1000 entries would normally only be needed
if something were running at 100fps. A more common speed - anywhere
from 5 to 30fps - only needs 50 to 300 entries.
This isn't a huge improvement in the grand scheme, but when motion
detection is disabled, it takes nearly as much time in a flamegraph
as actually transferring the video frame.
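A sketch of the counter idea, keeping only timestamps inside the averaging window so the rate query never walks stale entries (class and method names are illustrative, not necessarily Frigate's exact helper):

```python
import datetime


class EventsPerSecond:
    """Track a rate by keeping only timestamps that fall inside the window."""

    def __init__(self, last_n_seconds: int = 10):
        self._timestamps: list[float] = []
        self._last_n_seconds = last_n_seconds

    def update(self) -> None:
        now = datetime.datetime.now().timestamp()
        self._timestamps.append(now)
        # Expire stale entries on every update so eps() stays cheap.
        self._expire(now)

    def eps(self) -> float:
        now = datetime.datetime.now().timestamp()
        self._expire(now)
        # Average over the window; at 5-30fps this is 50-300 entries at most.
        return len(self._timestamps) / self._last_n_seconds

    def _expire(self, now: float) -> None:
        threshold = now - self._last_n_seconds
        while self._timestamps and self._timestamps[0] < threshold:
            self._timestamps.pop(0)
```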
* fix python checks
* Load labels dynamically to include custom events and audio, do not include attribute labels
* Formatting
* Fix sorting
* Also filter tracked object list on camera page
* isort
* Don't fail before load
* pass attribute labels as attributes
* add label attrs to events and snapshots
* incorporate area of license_plate and face into snapshot selection
* populate sublabels for cars with logos
* Make camera recordings mover asynchronous
* Formatting
* Move to using cv2 instead of external ffmpeg process
* Use ffprobe if cv2 failed
* Formatting
* Fix bad access
* Formatting
* Update frigate/record/maintainer.py
Co-authored-by: Blake Blackshear <blakeb@blakeshome.com>
* Update name of caller
---------
Co-authored-by: Blake Blackshear <blakeb@blakeshome.com>
* Prefer horizontal layout to vertical
* Rewrite birdseye to use aspect ratios instead of resolutions as layout configurator
* Improve layout with slightly larger than 16:9 cameras
* Remove manual 2 camera layout
* Lint
* Remove log
* "Refactor storage stats calculation to use powers of 2 for more accurate values"
* replace 1,000,000 with 2^20
* Refactor storage unit size display to use binary prefixes
This commit updates the display of storage unit sizes in both the camera storage stats and the Storage component in the web UI to use binary prefixes (MiB and GiB) instead of decimal prefixes (MB and GB). This provides a more accurate and consistent representation of storage sizes.
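For reference, a small helper along these lines does the unit math with powers of two (hypothetical, shown only to illustrate the prefix change):

```python
def bytes_to_human(size_bytes: float) -> str:
    """Format a byte count with binary prefixes (KiB/MiB/GiB) rather than decimal."""
    for unit in ("B", "KiB", "MiB", "GiB", "TiB"):
        if size_bytes < 1024 or unit == "TiB":
            return f"{size_bytes:.1f} {unit}"
        size_bytes /= 1024


# 1 MiB is 2**20 bytes, not 1,000,000.
print(bytes_to_human(2**20))  # "1.0 MiB"
```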
* refactor existing motion detector
* implement and use cnt bgsub
* pass fps to motion detector
* create a simplified motion detector
* lightning detection
* update default motion config
* lint imports
* use estimated boxes for regions
* use improved motion detector
* update test
* use a different strategy for clustering motion and object boxes
* increase alpha during calibration
* simplify object consolidation
* add some reasonable constraints to the estimated box
* adjust cluster boundary to 10%
* refactor
* add disabled debug code
* fix variable scope
* Add option for network bandwidth and only calculate if enabled
* Don't show network bandwidth in system stats page if not enabled
* Formatting
* Hide other rows as well
* Add docs
* Add config options for AMD and Intel GPU stats
* Fix stats access
* Update docs
* Use correct bool syntax
Co-authored-by: Blake Blackshear <blakeb@blakeshome.com>
---------
Co-authored-by: Blake Blackshear <blakeb@blakeshome.com>
* Update pull_request.yml
* Add temporary table for deletion and use pagination to process recordings in chunks for deletion of recordings with missing files
* move RecordingsToDelete class to models.py
* recording cleanup: bugfixes
* Update cleanup.py
* improve log message in cleanup.py
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
---------
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
* Objects need to be in zones multiple times to be considered present in the zone
* Add a field to configure inertia per zone
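A hypothetical sketch of the inertia idea: an object only counts as inside a zone after its box has been there for N consecutive frames, which filters out boxes that momentarily jump across a zone boundary. Names here are illustrative, not Frigate's internals.

```python
from collections import defaultdict

zone_inertia = 3  # configurable per zone
zone_hits: dict[str, int] = defaultdict(int)  # object_id -> consecutive frames in zone


def update_zone_presence(object_id: str, box_in_zone: bool, current_zone_objects: set) -> None:
    if box_in_zone:
        zone_hits[object_id] += 1
        if zone_hits[object_id] >= zone_inertia:
            current_zone_objects.add(object_id)
    else:
        # Reset the streak and drop the object from the zone.
        zone_hits[object_id] = 0
        current_zone_objects.discard(object_id)
```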
* Formatting
* Use correct default method
* Clarify zone presence behavior
Co-authored-by: Blake Blackshear <blakeb@blakeshome.com>
---------
Co-authored-by: Blake Blackshear <blakeb@blakeshome.com>
* Calculate possible layout
* Working in ideal conditions
* Fix issues with different heights
* Remove logs
* Optimally handle cameras that don't match the canvas aspect ratio
* Make sure to copy so list is not overwritten
* Remove unused import
* Remove try catch
* Optimize layout for low amount of cameras
* Try to scale frames up if not enough space is used
* Add function to get physical interfaces for bandwidth calculation in get_bandwidth_stats() function
* Add telemetry configuration option for enabled network interfaces, with default values for monitoring bandwidth stats for camera ffmpeg processes, go2rtc, and object detectors. Also add support for FrigateConfig in set_bandwidth_stats function to get bandwidth stats for specified network interfaces
* Enable auto vacuums
* Enable auto vacuum
* Fix separator
* Fix separator and remove incorrect log
* Limit to 1 row since that is all that is used
* Add index on camera + segment_size
* Formatting
* Increase timeout and cache_size
* Set DB mode to NORMAL synchronous level
* Formatting
* Vacuum every 2 weeks
* Remove fstring
* Use string
* Use consts
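The tuning in this group corresponds to SQLite settings along these lines, shown here with the stdlib `sqlite3` purely for illustration (Frigate configures its database through peewee, and the values are placeholders):

```python
import sqlite3

conn = sqlite3.connect("/config/frigate.db", timeout=60)

# WAL plus NORMAL synchronous trades a tiny durability window for far fewer fsyncs.
conn.execute("PRAGMA journal_mode=WAL;")
conn.execute("PRAGMA synchronous=NORMAL;")

# Larger page cache (negative value means KiB) reduces repeated disk reads.
conn.execute("PRAGMA cache_size=-512000;")

# A periodic VACUUM (e.g. every two weeks) reclaims space freed by deleted recordings.
conn.execute("VACUUM;")
```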
* Add ability to export frigate clips
* Add http endpoint
* Add dir to nginx
* Add webUI
* Formatting
* Cleanup unused
* Optimize timelapse
* Fix pts
* Use JSON body for params
* Use hwaccel to encode when available
* Print ffmpeg command on failure
* Print ffmpeg command on failure
* Add separate ffmpeg preset for timelapse
* Add docs outlining the export directory
* Add export docs
* Use ''
* Fix playlist max time
* Lower max playlist time
* Add api docs for export
* isort fixes
* Add isort and ruff linter
Both linters are pretty common among modern python code bases.
The isort tool provides stable sorting and grouping, as well as pruning
of unused imports.
Ruff is a modern linter, that is very fast due to being written in rust.
It can detect many common issues in a python codebase.
Removes the pylint dev requirement, since ruff replaces it.
* treewide: fix issues detected by ruff
* treewide: fix bare except clauses
* .devcontainer: Set up isort
* treewide: optimize imports
* treewide: apply black
* treewide: make regex patterns raw strings
This is necessary for escape sequences to be properly recognized.
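For context, the raw-string change is the difference between these two forms (pattern is illustrative):

```python
import re

# Fragile: "\d" and "\s" only work because Python currently passes unknown
# escape sequences through unchanged; that behavior is deprecated.
#   pattern = re.compile("(\d+)\s+bytes")

# Raw string: backslashes reach the regex engine untouched.
pattern = re.compile(r"(\d+)\s+bytes")
print(pattern.search("frame is 921600 bytes").group(1))  # 921600
```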
* Move to events package
* Improve handling of external events
* Handle external events in the event queue
* Pass in event processor
* Check event json
* Fix json parsing and change defaults
* Fix snapshot saving
* Hide % score when not available
* Correct docs and add json example
* Save event png db
* Adjust image
* Formatting
* Add catch for failure ending event
* Add init to modules
* Fix naming
* Formatting
* Fix http creation
* fix test
* Change to PUT and include response in docs
* Add ability to set bounding box locations in snapshot
* Support multiple box annotations
* Cleanup docs example response
Co-authored-by: Blake Blackshear <blake@frigate.video>
* Cleanup docs wording
Co-authored-by: Blake Blackshear <blake@frigate.video>
* Store full frame for thumbnail
* Formatting
* Set thumbnail height to 175
* Formatting
---------
Co-authored-by: Blake Blackshear <blake@frigate.video>
* Add network bandwidth usage to System table display in System.jsx and update get_bandwidth_stats function in util.py to include go2rtc processes
* black...
* Add network bandwidth usage to system table in web UI and improve regex in get_bandwidth_stats function to include frigate detector processes
* black...
* Update bandwidth calculation to include both incoming and outgoing traffic
* black:(
* Add Deepstack detector plugin with configurable API URL, timeout, and API key
* Update DeepStack plugin to recognize 'truck' as 'car' for label indexing
* Add debug logging to DeepStack plugin for better monitoring and troubleshooting
* Refactor DeepStack label loading from file to use merged labelmap
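A hedged sketch of the request/response shape the plugin works with, following the public DeepStack detection API convention; the URL, timeout, and api_key are the configurable values mentioned above, and the image file name is hypothetical:

```python
import requests

api_url = "http://deepstack:5000/v1/vision/detection"

with open("snapshot.jpg", "rb") as image:
    response = requests.post(
        api_url,
        files={"image": image},
        data={"api_key": ""},
        timeout=10,
    )

for prediction in response.json().get("predictions", []):
    label = prediction["label"]
    # Mirror the commit above: index trucks as cars.
    if label == "truck":
        label = "car"
    print(label, prediction["confidence"])
```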
* Black format
* add documentation draft
* fix link to codeproject website
* Apply suggestions from code review
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
---------
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
* nvml
* black...black...black...
* small fix to avoid errors on strange GPUs and old drivers
* fix type errors
* fix type errors
* fix unittest process crash
where are the tests for the tests?..
* it's impossible to mock the low-level library
* fix double % for other GPU types
* remove space before gpu statistic values
* test refactor process stats
* Update util.py
* bugfix
* black formatting
* add missing processes field to StatsTrackingTypes class
* fix python checks and tests...
* use psutil to calculate cpu utilization in get_cpu_stats
* black...black...black...
* add cpu average
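A sketch of per-process stats with psutil as described in these commits (function name is hypothetical):

```python
import psutil


def get_process_stats(pid: int) -> dict:
    """CPU and memory utilization for one PID, as shown on the stats page."""
    try:
        proc = psutil.Process(pid)
        with proc.oneshot():
            return {
                # cpu_percent() is meaningful from its second call onward
                # when no interval is passed.
                "cpu": round(proc.cpu_percent(), 1),
                "mem": round(proc.memory_percent(), 1),
            }
    except psutil.NoSuchProcess:
        return {"cpu": 0.0, "mem": 0.0}


# System-wide average across all cores, per the "add cpu average" commit.
cpu_average = psutil.cpu_percent(interval=1)
```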
* calculate statistics for logger process
* add PID for other processes in System.jsx
* Apply suggestions from code review
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
* make page beautiful again :)
---------
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
* avoid executing external tools by using Python's built-in os module to interact with the filesystem directly
* Refactor recording cleanup script to use os module instead of subprocess
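Illustratively, the cleanup work moves from shelling out (`du`, `find`) to direct filesystem calls, along these lines (function names are hypothetical):

```python
import os


def directory_size_bytes(path: str) -> int:
    """Sum file sizes with os.walk/os.stat instead of shelling out to `du`."""
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            try:
                total += os.stat(os.path.join(root, name)).st_size
            except FileNotFoundError:
                # A recording may be deleted while the tree is being walked.
                continue
    return total


def remove_empty_dirs(path: str) -> None:
    """Prune empty recording folders without calling `find ... -delete`."""
    for root, dirs, _files in os.walk(path, topdown=False):
        for d in dirs:
            full = os.path.join(root, d)
            if not os.listdir(full):
                os.rmdir(full)
```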
* black format util.py
* Ooooops
* Refactor get_cpu_stats() to properly identify recording process
* Organize event table to be more generalized
* Add appropriate fields to data
* Move tracked object logic to own function
* Add source type to event queue
* rename enum
* Fix types that are used in webUI
* remove redundant
* Formatting
* fix typing
* Rename enum
* Add option to sort cameras inside Birdseye
* Make default order to be sorted alphabetically
* Add docs for sorting cameras
* Update index.md for cameras
* Remove irrelevant comments
* Move recordings management to own process and ensure db multiprocess access
* remove reference to old threads
* Cleanup directory remover
* Mypy fixes
* Fix mypy
* Add support back for setting record via MQTT and WS
* Formatting
* Fix rebase issue
* Add support for ptz commands via websocket
* Fix startup issues
* Fix bugs
* Set config manually
* Add more commands
* Add presets
* Add zooming
* Fixes
* Set name
* Cleanup
* Add ability to set presets from UI
* Add ability to set preset from UI
* Cleanup for errors
* Ui tweaks
* Add visual design for pan / tilt
* Add pan/tilt support
* Support zooming
* Try to set wsdl
* Fix duplicate logs
* Catch auth errors
* Don't init onvif for disabled cameras
* Fix layout sizing
* Don't comment out
* Fix formatting
* Add ability to control camera with keyboard shortcuts
* Disallow user selection
* Fix mobile pressing
* Remove logs
* Substitute onvif password
* Add ptz controls to birdseye
* Put wsdl back
* Add padding
* Formatting
* Catch onvif error
* Optimize layout for mobile and web
* Place ptz controls next to birdseye view in large layout
* Fix pt support
* Center text titles
* Update tests
* Update docs
* Write camera docs for PTZ
* Add MQTT docs for PTZ
* Add ptz info docs for http
* Fix test
* Make half width when full screen
* Fix preset panel logic
* Fix parsing
* Update mqtt.md
* Catch preset error
* Add onvif example to docs
* Remove template example from main camera docs
* Migrate db path to /config
* Ensure oneshot runs
* Put logic inside of Frigate's run
* Use new db default path in code
* Fix missing config dir
* Upgrade yq to 4.33.3
* Create timeline table
* Fix indexes
* Add other fields
* Adjust schema to be less descriptive
* Handle timeline queue from tracked object data
* Setup timeline queue in events
* Add source id for index
* Add other fields
* Fixes
* Formatting
* Store better data
* Add api with filtering
* Setup basic UI for timeline in events
* Cleanups
* Add recordings snapshot url
* Start working on timeline ui
* Add tooltip with info
* Improve icons
* Fix start time with clip
* Move player logic back to clips
* Make box in timeline relative coordinates
* Make region relative
* Get box overlay working
* Remove overlay when playing again
* Add disclaimer when selecting overlay points
* Add docs for new apis
* Fix mobile
* Fix docs
* Change color of bottom center box
* Fix vscode formatting
* Replace deprecated np.float with np.float32
It is an alias for the python float type, and got deprecated in 1.20 and
was removed in 1.24. The rest of the project already uses float32
(single), so I believe this is also correct here, as opposed to float64
(double).
* Allow using the full tensorflow package in place of tflite_runtime
It supports the same entrypoints, given that tflite is a small cut-out
of the big tensorflow picture.
This patch was created for downstream usage in nixpkgs, where we don't
have the tflite python package, but do have the full tensorflow package.
* Add docs for time / date styling
* Convert 12hour time format option to enum
* Change option in web
* Add docs with examples
* Fix errors in docs
* Fix mismatched names
* Initial commit that adds YOLOv5 and YOLOv8 support for OpenVINO detector
* Fixed double inference bug with YOLOv5 and YOLOv8
* Modified documentation to mention YOLOv5 and YOLOv8
* Changes to pass lint checks
* Change minimum threshold to improve model performance
* Fix link
* Clean up YOLO post-processing
---------
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
* Add ability for GPU device to be automatically detected when multiple exist
* Add logging info
* Fix access
* Fix
* Formatting
* Fix path of device
* Use log error instead of raise
* Remove log which could apply to other cases
* Set default value
* rework logic and support auto gpu selection for encoding gpu as well
* dont wait so long for queues
* implement stop methods for comms
* set the detection events on exit and return early from processing
* handle the stop event in the broadcast threads
* short circuit the detection process exit code if it already exited
* some logging for stats thread
* just keep the log process alive 1 second after the last log message
* ensure the multiprocessing queues are emptied and closed
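Emptying and closing a multiprocessing queue on shutdown can look roughly like this sketch, so the queue's feeder thread does not keep the process alive:

```python
import multiprocessing as mp
import queue


def empty_and_close_queue(q: mp.Queue) -> None:
    """Drain leftover items, then close the queue so its feeder thread can exit."""
    while True:
        try:
            q.get_nowait()
        except queue.Empty:
            break
    q.close()
    q.join_thread()
```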
* Update frigate/log.py
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
* Update frigate/log.py
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
* mypy fixes
---------
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
* Initial commit to enable Yolox models with OpenVINO in Frigate
* Fix ModelTypeEnum import error in openvino.py
* Initial edit of the docs to include verbiage about yolox
* Initial edit of the docs to include verbiage about yolox
* Elaborate configuration and limitations in docs.
* Add capability to dynamically determine number of classes in yolox model
* Further refinements
* Removed unnecessary comments, improved documentation, addressed PR items
* Fixed lint formatting issues
* Restart ffmpeg if process exceeds detect fps by 10
* Update frigate/video.py
Co-authored-by: Felipe Santos <felipecassiors@gmail.com>
* spelling
---------
Co-authored-by: Felipe Santos <felipecassiors@gmail.com>
* Refactor s6 scripts to the new format
* Remove unneeded workaround
* Migrate logging to new s6 format
* Remove more unnecessary s6 variables
* Fix prepare-log and when go2rtc is not present in config
* Restart the whole container if either Frigate or go2rtc fails
* D
* Fix service name in finish
* Fix nginx finish comment
* Restart improvements
* Fix devcontainer
* Fix format
* Update Dockerfile
Co-authored-by: Felipe Santos <felipecassiors@gmail.com>
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
* Add config fields
* Clean up camera default values
* Set recordings timezone with config if available
* Adjust for timezone config
* Cleanup setting of the timezone
* Don't fail on MSE check iPad
* Fix MSE check for birdseye
* Add docs
* Fix test
* Add in_progress parameter to /api/events to filter the results.
* Change in_progress to default to no filtering, 0 means no in progress, 1 means only in progress.
* Fix code format with black.
* Clear blank line.
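Example usage of the new parameter (host is hypothetical; omitting `in_progress` leaves results unfiltered):

```python
import requests

base = "http://frigate.local:5000/api/events"

# Only events that are still in progress.
in_progress = requests.get(base, params={"in_progress": 1}).json()

# Only completed events.
finished = requests.get(base, params={"in_progress": 0}).json()
```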
* Add missing labels to default labelmap. Fill any holes with "unknown". Remove unique labelmap for tensorrt.
* Replace "truck" with "car" on Openvino labelmap
* check stream specific hwaccel_args for gpu stats
* fix indentation
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
* check special chars for linter
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
* Add video codec to restream config
* Add handling of encode engine and video codec
* Add test for video encoding
* Set in main configuration docs as well
* Add example to restream docs
* Put back patch
* [API] filter for favorite events
* Added /api/events filter for favorite (retain_indefinitely) events
* New Star button to filter for favorite events on the Events page
* fix python formatting
* keep Events favorite button to right side
* Try using RTSP for restream
* Add ability to get snapshot of birdseye when birdseye restream is enabled
* Write to pipe instead of encoding mpeg1
* Write to cache instead
* Use const for location
* Formatting
* Add hardware encoding for birdseye based on ffmpeg preset
* Provide framerate
* Adjust args
* Fix order
* Delete pipe file if it exists
* Cleanup spacing
* Fix spacing
* Initial WIP dockerfile and scripts to add tensorrt support
* Add tensorRT detector
* WIP attempt to install TensorRT 8.5
* Updates to detector for cuda python library
* TensorRT Cuda library rework WIP
Does not run
* Fixes from rebase to detector factory
* Fix parsing output memory pointer
* Handle TensorRT logs with the python logger
* Use non-async interface and convert input data to float32. Detection runs without error.
* Make TensorRT a separate build from the base Frigate image.
* Add script and documentation for generating TRT Models
* Add support for TensorRT devcontainer
* Add labelmap to trt model script and docs. Cleanup of old scripts.
* Update detect to normalize input tensor using model input type
* Add config for selecting GPU. Fix Async inference. Update documentation.
* Update some CUDA libraries to clean up version warning
* Add CI stage to build TensorRT tag
* Add note in docs for image tag and model support