sync spaces in jsoncs3 concurrently #10647
Comments
Hm, 1 is definitely not the right concurrency. Maybe 10? The code is already concurrent, but in the past we started as many goroutines as we needed to fetch spaces. Then we initially limited it to 5 in #10572, then to 1 with https://github.com/owncloud/ocis/pull/10580/files#diff-0b152f7605595cf8986ba47f9ec12136e240c27124250ff619f7786b19cb6d76, and I had a draft to increase it to 5 again in #10595 🤪
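For readers outside the thread, this is the pattern being tuned. A minimal sketch, assuming hypothetical names (`syncSpaces`, `syncSpace`, `spaceIDs`) and not the actual jsoncs3 code: `golang.org/x/sync/errgroup` with `SetLimit` caps how many space syncs run at once, which is what SHARING_USER_JSONCS3_MAX_CONCURRENCY effectively controls.

```go
package main

import (
	"context"

	"golang.org/x/sync/errgroup"
)

// syncSpaces fans out one task per space but lets at most
// maxConcurrency of them run at the same time.
func syncSpaces(ctx context.Context, spaceIDs []string, maxConcurrency int) error {
	g, ctx := errgroup.WithContext(ctx)
	g.SetLimit(maxConcurrency) // e.g. 1, 5 or 10, as debated above

	for _, id := range spaceIDs {
		id := id // capture the loop variable (needed before Go 1.22)
		g.Go(func() error {
			return syncSpace(ctx, id) // sync one space's share data
		})
	}
	return g.Wait() // first error cancels ctx for the remaining syncs
}

// syncSpace is a placeholder for the real per-space sync work.
func syncSpace(ctx context.Context, id string) error {
	_, _ = ctx, id
	return nil
}

func main() {
	_ = syncSpaces(context.Background(), []string{"a", "b", "c"}, 2)
}
```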
In a k8s cluster, raising SHARING_USER_JSONCS3_MAX_CONCURRENCY from 1 to 10 decreases request duration from 18.47s to 5.82s. In a single-binary deployment, raising SHARING_USER_JSONCS3_MAX_CONCURRENCY from 1 to 5 increases request duration from ~340ms to 1.28s. We need to verify this; something might be fishy in the CI.
@kobergj @wkloucek we should settle on a default for SHARING_USER_JSONCS3_MAX_CONCURRENCY. In ocis 5 we just start a goroutine for each space; that behavior can be restored by adjusting the setting accordingly.
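For contrast, the ocis 5 behavior mentioned above looks roughly like this — again a sketch with hypothetical names, reusing `syncSpace` from the snippet above (needs the `context` and `sync` imports): one goroutine per space, so concurrency grows with the number of spaces.

```go
// Unbounded variant: one goroutine per space, as in ocis 5.
func syncSpacesUnbounded(ctx context.Context, spaceIDs []string) {
	var wg sync.WaitGroup
	for _, id := range spaceIDs {
		id := id // capture the loop variable (needed before Go 1.22)
		wg.Add(1)
		go func() {
			defer wg.Done()
			_ = syncSpace(ctx, id) // real code would also collect errors
		}()
	}
	wg.Wait()
}
```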
We are blocked until a decision is made.
Could we also add some more context to the documentation after the findings in here? For people who weren't involved, it's hard to digest what the settings are for from the current documentation alone. In detail: what exactly becomes more concurrent in the sharing service when we raise the concurrency?
Every test run creates a new space, and those have accumulated: ListReceivedShares now has to sync, AFAICT, 74 spaces.
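Back-of-the-envelope, assuming the 74 spaces and the k8s timings above come from the same setup: 18.47s over 74 sequential syncs is roughly 250ms per space. With a limit of 10, the ideal wall time would be about 74/10 × 250ms ≈ 1.9s, so the observed 5.82s still suggests contention somewhere, but the order of magnitude fits the 'number of spaces × per-space latency' explanation.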