Commit Graph

7 Commits

Nicolas Mowen
21d3476bd9
Require setting process priority for FrigateProcess (#19207) 2025-07-18 12:23:06 -05:00
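The commit title only says that a priority is now required; as a hypothetical sketch (the constructor signature and the use of os.nice are assumptions, not taken from the PR), a keyword-only parameter with no default forces every call site to state the niceness it wants:

```python
import multiprocessing as mp
import os


class FrigateProcess(mp.Process):
    """Hypothetical sketch: a Process subclass that requires a niceness value."""

    def __init__(self, *args, priority: int, **kwargs) -> None:
        # priority is keyword-only with no default, so forgetting it
        # is an immediate TypeError at construction time.
        super().__init__(*args, **kwargs)
        self.priority = priority

    def run(self) -> None:
        # Apply the niceness inside the child (Unix-only), then run as usual.
        os.nice(self.priority)
        super().run()


if __name__ == "__main__":
    p = FrigateProcess(target=print, args=("hello",), priority=10)
    p.start()
    p.join()
```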
Nicolas Mowen
47a0097e95
Handle SIGINT with forkserver (#18860)
* Pass stop event from main start

* Share stop event across processes (see the sketch after this entry)

* Preload modules

* remove explicit os._exit call

---------

Co-authored-by: Josh Hawkins <32435876+hawkeye217@users.noreply.github.com>
2025-06-24 11:41:11 -06:00
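A minimal sketch of the pattern these bullets describe, with illustrative names (worker, and json standing in for a heavy module): the stop event is created once in the main process, modules are preloaded into the forkserver, and each child shuts down cooperatively when a SIGINT in the parent sets the shared event.

```python
import multiprocessing as mp
import signal
import time
from multiprocessing.synchronize import Event


def worker(stop_event: Event) -> None:
    # Children ignore SIGINT; shutdown arrives via the shared event.
    signal.signal(signal.SIGINT, signal.SIG_IGN)
    while not stop_event.is_set():
        time.sleep(0.1)


if __name__ == "__main__":
    mp.set_start_method("forkserver")
    # Import heavy modules into the forkserver parent once, so every
    # child starts with them already loaded.
    mp.set_forkserver_preload(["json"])

    stop_event = mp.Event()
    procs = [mp.Process(target=worker, args=(stop_event,)) for _ in range(2)]
    for p in procs:
        p.start()

    # A SIGINT in the main process becomes one shared stop signal.
    signal.signal(signal.SIGINT, lambda *_: stop_event.set())
    for p in procs:
        p.join()
```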
Nicolas Mowen
c3ba834a1c
Fix go2rtc init (#18708)
* Cleanup process handling

* Adjust process name
2025-06-13 12:09:51 -05:00
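The commit only notes that the process name is adjusted. One common way to do that, assumed here rather than confirmed by the diff, is the third-party setproctitle package alongside multiprocessing's own name attribute:

```python
import multiprocessing as mp

# Third-party package; an assumption, not confirmed by the commit.
from setproctitle import setproctitle


def run_go2rtc() -> None:
    # Rename the OS-level process so it is identifiable in `ps`/`top`.
    setproctitle("frigate.go2rtc")


if __name__ == "__main__":
    # name= sets multiprocessing's internal name, used in log records.
    p = mp.Process(target=run_go2rtc, name="frigate.go2rtc")
    p.start()
    p.join()
```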
Josh Hawkins
d7a446e0f6
Ensure logging config is propagated to forked processes (#18704)
* Move log level initialization to log

* Use logger config

* Formatting

* Fix config order

* Set process names

---------

Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
2025-06-13 09:43:38 -05:00
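Why propagation is needed: with fork, children inherit the parent's logger state for free; with forkserver they start fresh. A minimal sketch of the idea (names and structure assumed, not taken from this PR) passes the level explicitly and applies it at the top of the child's entry point:

```python
import logging
import multiprocessing as mp


def child_entry(log_level: str) -> None:
    # First thing in the child: rebuild the logging config that a
    # forked child would have inherited for free.
    logging.basicConfig(level=getattr(logging, log_level))
    logging.getLogger(__name__).info("child logging configured")


if __name__ == "__main__":
    mp.set_start_method("forkserver")
    p = mp.Process(target=child_entry, args=("DEBUG",))
    p.start()
    p.join()
```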
Nicolas Mowen
8485023442
Use Fork-Server As Spawn Method (#18682)
* Set runtime

* Use count correctly

* Don't assume camera sizes

* Use separate zmq proxy for object detection

* Correct order

* Use forkserver

* Only store PID instead of entire process reference

* Cleanup

* Catch correct errors

* Fix typing

* Remove before_run from process util

The before_run hook never actually ran, and the run_wrapper installed via __getattribute__ could also deadlock. Wrapping BaseProcess.run through __getattribute__ is a common multiprocessing pitfall because of how process objects are serialized (a sketch of the fix follows this commit entry):

* run is retrieved during serialization: when multiprocessing pickles the Process object to re-create it in the new interpreter, accessing the run attribute triggers the __getattribute__ wrapper, which binds run_wrapper to self.

* run_wrapper captures the parent's self: the closure is created in the parent process and holds a reference to the Process instance in the parent's memory space.

* Deserialization creates a new object: the child re-creates the Process from the pickled data, but the pickled run_wrapper still references the parent's instance.

* The child's run is not the wrapped run: when the child starts, it calls its own run method, which resolves to the original multiprocessing.Process.run (or a direct override). The __getattribute__ wrapping never takes effect in the child's context, so before_run is skipped.

* Cleanup

* Logging bugfix (#18465)

* Use mp Manager to handle logging queues

A Python bug (https://github.com/python/cpython/issues/91555) was preventing logs from the embeddings maintainer process from printing. The bug is fixed in Python 3.14, but a viable workaround is the multiprocessing Manager, whose proxied queues sidestep the bug and restore correct logging (sketched after this commit entry).

* consolidate

* fix typing

* Fix typing

* Use global log queue

* Move to using process for logging

* Convert camera tracking to process

* Add more processes

* Finalize process

* Cleanup

* Cleanup typing

* Formatting

* Remove daemon

---------

Co-authored-by: Josh Hawkins <32435876+hawkeye217@users.noreply.github.com>
2025-06-12 12:12:34 -06:00
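Given the serialization pitfall described above, the robust pattern is to avoid attribute magic entirely: override run() on the class, which the child resolves normally no matter how the instance was pickled. A sketch, keeping before_run as a name from the commit but assuming everything else:

```python
import multiprocessing as mp


class Process(mp.Process):
    """Sketch: setup runs in the child because run() itself is overridden."""

    def before_run(self) -> None:
        # Subclasses override this; it executes in the child process.
        pass

    def run(self) -> None:
        # run() is looked up on the class in the child, so no closure
        # over the parent's instance is ever pickled.
        self.before_run()
        super().run()


class Worker(Process):
    def before_run(self) -> None:
        print("setup in child", mp.current_process().name)


if __name__ == "__main__":
    mp.set_start_method("forkserver")
    w = Worker(target=print, args=("business logic",))
    w.start()
    w.join()
```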
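And the Manager workaround from the nested logging bugfix, sketched minimally: a Manager-backed queue feeds a standard QueueListener, sidestepping the queue feeder-thread bug tracked in cpython issue 91555.

```python
import logging
import logging.handlers
import multiprocessing as mp


def worker(log_queue) -> None:
    # Children only enqueue records; the listener in the main
    # process does the actual formatting and output.
    logging.getLogger().addHandler(logging.handlers.QueueHandler(log_queue))
    logging.getLogger().setLevel(logging.INFO)
    logging.info("hello from %s", mp.current_process().name)


if __name__ == "__main__":
    mp.set_start_method("forkserver")
    manager = mp.Manager()
    # Manager().Queue() is proxied through the manager's server process,
    # which avoids the mp.Queue feeder-thread bug referenced above.
    log_queue = manager.Queue()

    listener = logging.handlers.QueueListener(log_queue, logging.StreamHandler())
    listener.start()

    p = mp.Process(target=worker, args=(log_queue,))
    p.start()
    p.join()
    listener.stop()
```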
gtsiam
a468ed316d
Added stop_event to util.Process (#14142)
* Added stop_event to util.Process

util.Process takes care of receiving signals when the stop_event is
accessed in the subclass. If it never is, SystemExit is raised instead.

This keeps the behavior of multiprocessing.Process when stop_event is
not accessed, while sparing subclasses the hassle of setting it up.

* Give each util.Process their own logger

This will help to reduce boilerplate in subclasses.

* Give explicit types to util.Process.__init__

This gives better type hinting in the editor.

* Use util.Process facilities in AudioProcessor

Boilerplate begone!

* Removed pointless check in util.Process

The log_listener.queue should never be None, unless something has gone
extremely wrong in the log setup code. If we're that far gone, crashing
is better.

* Make sure faulthandler is enabled in all processes

This has no effect currently since we're using the fork start_method.
However, when we inevitably switch to forkserver (either by choice, or
by upgrading to Python 3.14+), not having this makes for some really fun
failure modes :D
2024-10-03 11:03:43 -06:00
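A sketch of the described behavior; the real implementation lives in Frigate's util.Process, and the lazy property and handler below are assumptions modeled on the commit message: SIGTERM raises SystemExit until the subclass touches stop_event, after which signals set the event instead.

```python
import multiprocessing as mp
import signal
import threading
import time


class Process(mp.Process):
    """Sketch: SIGTERM raises SystemExit unless the subclass uses stop_event."""

    @property
    def stop_event(self) -> threading.Event:
        # Created lazily on first access (in the child process).
        if not hasattr(self, "_stop_event"):
            self._stop_event = threading.Event()
        return self._stop_event

    def run(self) -> None:
        # Subclasses call super().run() first to install the handler.
        def handler(signum, frame):
            if hasattr(self, "_stop_event"):
                self._stop_event.set()  # cooperative shutdown
            else:
                raise SystemExit  # plain multiprocessing.Process behavior

        signal.signal(signal.SIGTERM, handler)


class Worker(Process):
    def run(self) -> None:
        super().run()  # installs the signal handler
        while not self.stop_event.is_set():
            time.sleep(0.1)


if __name__ == "__main__":
    mp.set_start_method("forkserver")
    w = Worker()
    w.start()
    time.sleep(0.5)
    w.terminate()  # sends SIGTERM; the handler sets the stop event
    w.join()
```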
gtsiam
c0bd3b362c
Custom classes for Process and Metrics (#13950)
* Subclass Process for audio_process

* Introduce custom mp.Process subclass

In preparation for switching the multiprocessing start method away from
"fork", we cannot rely on os.fork cloning the log state at fork time.
Instead, we have to set up logging before we run the business logic of
each process.

* Make camera_metrics into a class

* Make ptz_metrics into a class

* Fixed PtzMotionEstimator.ptz_metrics type annotation

* Removed pointless variables

* Do not start audio processor when no audio cameras are configured
2024-09-27 07:53:23 -05:00
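The rationale, sketched: without fork there is no cloned log state, so the subclass configures logging at the top of run(), in the child, before any business logic. Class and method names here are illustrative, not taken from the PR.

```python
import logging
import multiprocessing as mp


class AudioProcess(mp.Process):
    """Sketch: a Process subclass that configures logging in the child."""

    def __init__(self, log_level: int = logging.INFO) -> None:
        super().__init__(name="frigate.audio")
        self.log_level = log_level

    def run(self) -> None:
        # With forkserver there is no inherited log state, so configure
        # logging first, then run the business logic.
        logging.basicConfig(level=self.log_level)
        self.logger = logging.getLogger(self.name)
        self.audio_loop()

    def audio_loop(self) -> None:
        self.logger.info("audio processing started")


if __name__ == "__main__":
    mp.set_start_method("forkserver")
    p = AudioProcess()
    p.start()
    p.join()
```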