
OF-2505: BOSH: add back-pressure to network IO when processing can't keep up #2109

Draft
guusdk wants to merge 1 commit into main
Conversation

@guusdk (Member) commented Sep 13, 2022

For BOSH and websocket connections, this commit introduces a new feature (disabled by default) that allows Openfire to apply 'back pressure', which can be useful when the network IO thread pool is outpacing the processing thread pool.

When the feature is enabled via `xmpp.httpbind.worker.backpressure-enabled` (and the queue is given a limited capacity via `xmpp.httpbind.worker.queue-capacity`), the network IO thread pool blocks when queuing new tasks while the queue is at capacity.
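For context, here is a minimal sketch of how a bounded queue combined with a blocking rejection handler produces this kind of back-pressure on the submitting (network IO) threads. This is not the actual Openfire implementation; the class name, pool sizes, and queue capacity below are illustrative assumptions only.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class BackPressureExample {

    public static void main(String[] args) throws InterruptedException {
        // A bounded task queue: conceptually what a property like
        // xmpp.httpbind.worker.queue-capacity would control.
        final int queueCapacity = 4;

        ThreadPoolExecutor processingPool = new ThreadPoolExecutor(
            2, 2, 60, TimeUnit.SECONDS,
            new ArrayBlockingQueue<>(queueCapacity),
            // When the queue is full, block the *submitting* thread by putting
            // the task on the queue, instead of rejecting or discarding it.
            // This is the 'back pressure' applied to the network IO thread.
            (task, executor) -> {
                try {
                    executor.getQueue().put(task);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    throw new RejectedExecutionException("Interrupted while waiting for queue space", e);
                }
            }
        );

        // Simulate a network IO thread submitting work faster than it can be processed.
        for (int i = 0; i < 20; i++) {
            final int taskId = i;
            processingPool.execute(() -> {
                try {
                    Thread.sleep(100); // slow 'processing'
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
                System.out.println("Processed task " + taskId);
            });
            // Once the queue is at capacity, this submission loop slows down
            // to the pace of the processing pool.
            System.out.println("Submitted task " + taskId);
        }

        processingPool.shutdown();
        processingPool.awaitTermination(30, TimeUnit.SECONDS);
    }
}
```

The key point of the sketch is that the rejection handler calls `put()` on the executor's own queue, so a full queue makes the submitting thread wait rather than dropping work.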

@guusdk added this to the 4.7.4 milestone Sep 13, 2022
@guusdk (Member, Author) commented Sep 13, 2022

I have already cherry-picked this to the 4.7 branch, as I needed a Bamboo build. If this PR is rejected or modified, those changes should also be applied to the 4.7 branch!

@akrherz (Member) commented Sep 23, 2022

@guusdk Did you get any feedback on your 4.7 branch build with this change included?

@guusdk (Member, Author) commented Sep 27, 2022

I did. Although the feature seemed to work as designed, the net effect of the change wasn't what we were hoping for in the very specific load test that was being used. We seem to be running into a race condition / locking scenario.

There still might be merit in this change, even if we're not going to use it in the specific use-case that I had in mind.

@guusdk (Member, Author) commented Sep 27, 2022

I think this at the very least needs a modification: the enum `ThreadPoolExecutorRejectionPolicy` that is added in this PR is not used at all. I think that enum is an artifact of an earlier attempt to implement this functionality, and it can be removed.

@Fishbowler marked this pull request as draft October 21, 2022 19:11
@Fishbowler (Member) commented
Converted to draft whilst it awaits changes.

@guusdk removed this from the 4.7.4 milestone Oct 25, 2022