Invalid statistics #665

Describe the bug
We have carried out load tests on the OPAL server and found that, during scaling, ghost clients persist in the server's statistics. Could you incorporate a mechanism that automatically purges invalid connections from the statistics once they exceed a TTL?
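This is not OPAL's actual implementation - just a minimal sketch of the kind of TTL-based pruning being requested, assuming each client connection refreshes a last-seen timestamp and a background task evicts entries older than the TTL. All names here (StatisticsRegistry, STATS_TTL_SECONDS, and so on) are hypothetical:

```python
import asyncio
import time
from dataclasses import dataclass, field

# Hypothetical settings; not actual OPAL configuration options.
STATS_TTL_SECONDS = 60
PRUNE_INTERVAL_SECONDS = 10


@dataclass
class ClientStats:
    client_id: str
    last_seen: float = field(default_factory=time.monotonic)


class StatisticsRegistry:
    """Tracks connected clients and evicts entries not refreshed within the TTL."""

    def __init__(self) -> None:
        self._clients: dict[str, ClientStats] = {}

    def touch(self, client_id: str) -> None:
        # Called on every message/heartbeat received from a client;
        # resets that client's last-seen timestamp.
        self._clients[client_id] = ClientStats(client_id)

    def prune(self) -> None:
        # Drop every entry whose last report is older than the TTL.
        cutoff = time.monotonic() - STATS_TTL_SECONDS
        stale = [cid for cid, s in self._clients.items() if s.last_seen < cutoff]
        for cid in stale:
            del self._clients[cid]

    async def prune_forever(self) -> None:
        # Background task run alongside the server's event loop.
        while True:
            await asyncio.sleep(PRUNE_INTERVAL_SECONDS)
            self.prune()
```

Under this scheme the server would call touch(client_id) on every client heartbeat; after a scale-down, the stale entries stop refreshing and prune() removes them within roughly STATS_TTL_SECONDS, instead of them lingering as ghost clients.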
To Reproduce
Deploy the OPAL server in Kubernetes with several replicas (we use Kafka for synchronization) and enable statistics on both the OPAL server and the OPAL clients. Scale the ReplicaSet with OPAL clients to some value (100, 200, it doesn't matter). Then, as soon as some pods have transitioned to the ready state, reduce the replica count and increase it again. While doing so, repeatedly query the statistics endpoint on the OPAL server's service: ghost clients remain in the reported statistics after the scale-down.
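To observe the ghost entries while scaling, a small script like the one below can poll the server. The URL, the /statistics path, port 7002, and the shape of the response (a "clients" mapping) are assumptions based on a typical OPAL server deployment with statistics enabled - adjust them to your setup:

```python
import time

import requests  # third-party; pip install requests

# Hypothetical values; adjust to your deployment.
STATS_URL = "http://opal-server.default.svc.cluster.local:7002/statistics"
POLL_INTERVAL_SECONDS = 2

while True:
    try:
        resp = requests.get(STATS_URL, timeout=5)
        resp.raise_for_status()
        stats = resp.json()
        # Assumed response shape: ghost clients show up as entries in the
        # "clients" mapping that never disappear after a scale-down.
        clients = stats.get("clients", {})
        print(f"{time.strftime('%H:%M:%S')} reported clients: {len(clients)}")
    except requests.RequestException as exc:
        print(f"request failed: {exc}")
    time.sleep(POLL_INTERVAL_SECONDS)
```

If the client count reported here stays at its peak after the ReplicaSet has been scaled back down, the statistics are retaining invalid connections.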
OPAL version
Latest at the time of reporting (see the comments below).

Comments

Hi @alex60217101990, thanks for reaching out and submitting this issue. We'll open a ticket for this. Re: TTL-based removal - since the server-client relationship is realtime and counts on constant connectivity, there is no time tracking at the moment, and ideally a fix here would maintain a better realtime status. That being said, time tracking might be useful regardless.

Hi @orweis, I'll add more details.

Additional info: 2 replicas of the OPAL server, and Kafka as the broker. The version is the latest at the moment.