Mirror of https://github.com/juanfont/headscale.git
Before this patch, we would send a message to each "node stream" saying that there is an update that needs to be turned into a map response and sent to the node. Producing the map response is a costly affair, which means that while a node was producing one, it could block and back up the queues from the poller all the way up to where updates were sent. This could cause updates to time out and be dropped: a bad node going away, or one spending too much time processing, would prevent all the other nodes from getting any updates.

In addition, it contributed to uncontrolled parallel processing by potentially doing too many expensive operations at the same time: each node stream is essentially a channel, so if you have 30 nodes, we will try to process 30 map requests at the same time. If you have 8 CPU cores, that saturates all of them immediately and causes a lot of wasted context switching.

Now all maps are produced by workers in the mapper, and the number of workers is controllable. The recommendation is to set it a bit below the number of CPU cores, letting us build the responses as fast as we can and then hand them to the poll. When the poll receives a map, it is only responsible for taking it and sending it to the node.

This might not directly improve Headscale's performance, but it should make performance a lot more consistent, and I would argue the design is much easier to reason about.

Signed-off-by: Kristoffer Dalby <kristoffer@tailscale.com>
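The shape of the new design can be sketched as a bounded worker pool: a fixed number of goroutines turn queued updates into map responses, and each node's poll handler only forwards the finished response. The sketch below is a minimal illustration, not Headscale's actual code; `mapRequest`, `mapResponse`, `buildMapResponse`, and the per-node channels are hypothetical stand-ins for the mapper's real types and the poll handlers.

```go
package main

import (
	"fmt"
	"runtime"
	"sync"
	"time"
)

// mapRequest is a hypothetical work item: an update that must be turned
// into a map response for a single node.
type mapRequest struct {
	nodeID int
}

// mapResponse is a hypothetical placeholder for the (expensive to build)
// map response.
type mapResponse struct {
	nodeID int
	data   string
}

// buildMapResponse stands in for the costly map-generation step.
func buildMapResponse(req mapRequest) mapResponse {
	time.Sleep(10 * time.Millisecond) // simulate expensive work
	return mapResponse{nodeID: req.nodeID, data: fmt.Sprintf("map for node %d", req.nodeID)}
}

func main() {
	// Keep the worker count a bit below the number of CPU cores so map
	// generation cannot saturate the whole machine.
	workers := runtime.NumCPU() - 1
	if workers < 1 {
		workers = 1
	}

	work := make(chan mapRequest)

	// Per-node "poll" channels: a poller only receives a finished map
	// response and writes it to its node; it never builds one itself.
	const numNodes = 30
	streams := make(map[int]chan mapResponse, numNodes)
	for id := 0; id < numNodes; id++ {
		streams[id] = make(chan mapResponse, 1)
	}

	// Worker pool: all map responses are produced here, so at most
	// `workers` expensive builds run concurrently.
	var wg sync.WaitGroup
	for i := 0; i < workers; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for req := range work {
				streams[req.nodeID] <- buildMapResponse(req)
			}
		}()
	}

	// Each node's poll handler just forwards whatever the mapper produced.
	var pollers sync.WaitGroup
	for id, ch := range streams {
		pollers.Add(1)
		go func(id int, ch chan mapResponse) {
			defer pollers.Done()
			for resp := range ch {
				fmt.Printf("node %d: sending %q\n", id, resp.data)
			}
		}(id, ch)
	}

	// Fan an update out to every node; the pool bounds the parallelism.
	for id := 0; id < numNodes; id++ {
		work <- mapRequest{nodeID: id}
	}
	close(work)
	wg.Wait()

	for _, ch := range streams {
		close(ch)
	}
	pollers.Wait()
}
```

The key property is that the `work` channel fans updates out to a fixed number of goroutines, so no matter how many nodes are connected, at most `workers` expensive map builds run at once; the per-node channels only ever carry finished responses.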
- assets
- capver
- db
- derp
- dns
- mapper
- policy
- routes
- state
- templates
- types
- util
- app.go
- auth.go
- debug.go
- grpcv1_test.go
- grpcv1.go
- handlers.go
- metrics.go
- noise.go
- oidc.go
- platform_config.go
- poll.go
- suite_test.go
- tailsql.go