GUACAMOLE-1846: Synchronize new users to connections in batches to fix join race condition. #455
Conversation
Oh cool, this issue/fix sounds like it might be related to what I was seeing in #316 (comment).
Force-pushed from d999f53 to 1f0dc9e.
Marking as draft while I investigate an issue.
Force-pushed from b7736e2 to eaca340.
Ok, I think this should be ready to go again. I'm still seeing the client occasionally get locked up when shutting down and fail to shut down cleanly, but at least it's guaranteed to always shut down with the new shutdown handling introduced in eaca340.
Force-pushed from 57837c0 to 83b789d.
This was an issue with lock acquisition/release order and has been fixed.
Force-pushed from f782fcc to ad180be.
@jmuehlner, as a bug fix, should this be against …?
Uh, yeah, I suppose so. Let me retarget it. It's done.
Force-pushed from ad180be to eedac89.
Force-pushed from eedac89 to 826cb78.
This change turned out to be pretty involved, but at its core it addresses the race condition described in GUACAMOLE-1846.
It does so by batching up all newly connected users and synchronizing connection state to them via a newly defined `join_pending_handler`, at which point they are promoted to full users and the broadcast socket begins writing to them. The synchronization itself is performed through a new broadcast socket that writes to all pending users at once, rather than one at a time.
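To illustrate the idea, here is a minimal C sketch of the pending-user batching, under stated assumptions: the `pending_user_list`, `add_pending_user`, and `handle_pending_joins` names and types are hypothetical stand-ins invented for this example (the real `join_pending_handler` in this change operates on libguac client/user structures, not these simplified types).

```c
/* Hypothetical sketch of batching newly joined users; not the actual
 * guacd implementation. Build with: gcc -pthread sketch.c */
#include <pthread.h>
#include <stdio.h>

#define MAX_PENDING 64

/* A user that has connected but has not yet received current state. */
typedef struct pending_user {
    int id;
} pending_user;

/* All users that have joined since the last synchronization pass. */
typedef struct pending_user_list {
    pthread_mutex_t lock;
    pending_user* users[MAX_PENDING];
    int count;
} pending_user_list;

/* Called when a new user joins: record them and return immediately.
 * No connection state is written here, which avoids the join race. */
static void add_pending_user(pending_user_list* list, pending_user* user) {
    pthread_mutex_lock(&list->lock);
    if (list->count < MAX_PENDING)
        list->users[list->count++] = user;
    pthread_mutex_unlock(&list->lock);
}

/* Stand-in for the join_pending_handler: synchronize current state to
 * every pending user in one pass (represented here by a printf), then
 * clear the batch so the normal broadcast path takes over for them. */
static void handle_pending_joins(pending_user_list* list) {
    pthread_mutex_lock(&list->lock);
    for (int i = 0; i < list->count; i++)
        printf("Synchronizing and promoting user %d\n", list->users[i]->id);
    list->count = 0;
    pthread_mutex_unlock(&list->lock);
}

int main(void) {
    pending_user_list list = { .lock = PTHREAD_MUTEX_INITIALIZER };
    pending_user a = { .id = 1 };
    pending_user b = { .id = 2 };

    /* Two users join within the same batch window... */
    add_pending_user(&list, &a);
    add_pending_user(&list, &b);

    /* ...and are synchronized and promoted together in one pass. */
    handle_pending_joins(&list);
    return 0;
}
```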
Pending users are synchronized once every quarter second, which should result in little noticeable delay upon joining a connection.
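And a similarly hedged sketch of the quarter-second cadence; the thread and function names here (`pending_join_thread`, `synchronize_pending_users`) are again hypothetical, standing in for guacd's internal periodic flush of the pending-user batch.

```c
/* Hypothetical sketch of a 250 ms synchronization loop; not the actual
 * guacd implementation. Build with: gcc -pthread cadence.c */
#include <pthread.h>
#include <stdio.h>
#include <time.h>

static volatile int running = 1;

/* Placeholder for the batch-synchronization step sketched above. */
static void synchronize_pending_users(void) {
    printf("Promoting any pending users...\n");
}

/* Worker that wakes every quarter second and flushes the pending-user
 * batch, so a joining user waits at most ~250 ms for current state. */
static void* pending_join_thread(void* arg) {
    (void) arg;
    struct timespec interval = { .tv_sec = 0, .tv_nsec = 250000000L };
    while (running) {
        synchronize_pending_users();
        nanosleep(&interval, NULL);
    }
    return NULL;
}

int main(void) {
    pthread_t thread;
    pthread_create(&thread, NULL, pending_join_thread, NULL);

    /* Let the periodic synchronization run for about a second. */
    struct timespec run_for = { .tv_sec = 1, .tv_nsec = 0 };
    nanosleep(&run_for, NULL);

    running = 0;
    pthread_join(thread, NULL);
    return 0;
}
```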
Thanks to @mike-jumper for helping to work through the logic here and helping figure out various bugs along the way.