Octopus Worker Scaler #3808
Do you have a URL to that product? Never heard of it, but thanks for the suggestion! Are you interested in contributing it?
Sure: https://octopus.com/ It's used a lot in regulated industries as it has very good audit logs for deployments. I've been playing with it at my current contract; they don't really get scaling, and I thought it'd be a good fit. I'm happy to write the initial code and tests, as long as someone reviews my dirty hack skills :D
This issue has been automatically marked as stale because it has not had recent activity. It will be closed in 7 days if no further activity occurs. Thank you for your contributions.
@Eldarrin Are you still interested in contributing?
Picking this up again :) |
@JorTurFer @tomkerkhove I have an interesting issue here which I suspect you have hit before. The worker will come up as decided by the queue length, but that worker may or may not be the one actually executing a task; the task may be running on a different worker. Is there a simple way already in KEDA to distinguish the running vs. executing state (readiness/liveness probe, etc.)? Any thoughts? I'm actually thinking about the same for the Azure Pipelines scalers, as they exhibit the same behaviour. Basically, I don't want to SIGTERM an executing process.
It's an excellent point, but I think we should follow the same approach here as in the Azure Pipelines or GitHub Actions scalers: the scaler should be used with ScaledJob to ensure that an active worker isn't killed. KEDA shouldn't know anything about the scaled workload, because KEDA doesn't manage it. Apart from that design perspective, KEDA can't decide which instance is removed: KEDA only exposes the metric, and it's the Kubernetes control plane that decides which instances get killed.
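To illustrate the ScaledJob approach described above, here is a minimal sketch. Note that the `octopus` trigger type, its metadata fields, and the image name are all hypothetical, since this scaler did not exist at the time of this discussion:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledJob
metadata:
  name: octopus-worker
spec:
  jobTargetRef:
    template:
      spec:
        containers:
          - name: tentacle
            image: octopusdeploy/tentacle   # hypothetical image
        restartPolicy: Never
  maxReplicaCount: 10
  triggers:
    - type: octopus                          # hypothetical trigger name
      metadata:
        serverURL: https://my-octopus.example.com   # assumed field
        workerPool: default                          # assumed field
      authenticationRef:
        name: octopus-api-key-auth
```

With a ScaledJob, each queued task gets its own Job, and a Job's pod runs to completion rather than being scaled down mid-task, which sidesteps the SIGTERM problem.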
Ok, linking the octopus issue. OctopusDeploy/OctopusTentacle#458 Not sure what to do till I get that resolved. |
Maybe using
I am using Octopus Deploy and would like to auto-scale the Tentacle with KEDA, and I am interested in testing this. I am using an OCI OKE cluster with the Tentacle deployed, but the workers are inactive. I'd like to get this up and running and use KEDA to scale the workers inside the worker pool.
@Eldarrin, are you still interested in contributing to this?
@JorTurFer @Exnadella The problem still exists that the scaler can kill itself without warning (possibly mid-job). There is a bit of a dirty hack that's possible if you have a kill blocker that detects
Thanks for the update ❤️
Proposal
A Docker container running an Octopus Tentacle that can be scaled according to queue requirements, focusing on using Kubernetes to scale Octopus Workers.
Scaler Source
Octopus Task Queue length
Scaling Mechanics
When queue length > metric threshold, initiate a new worker in the pool
Authentication Source
Octopus API Key
Anything else?
Happy to write it if it gets enough appreciation. :)
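The scaling rule in the proposal ("queue length > metric threshold, start a worker") can be sketched in Go. This is only an illustration, not the eventual scaler implementation: the `TotalResults` field name is an assumption about the shape of the Octopus tasks API response, and the one-worker-per-queued-task mapping is one possible policy.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// taskQueue models the relevant part of an Octopus tasks API response.
// The field name is an assumption about the API's JSON shape.
type taskQueue struct {
	TotalResults int `json:"TotalResults"`
}

// queueLength extracts the pending-task count from a raw API response body.
func queueLength(body []byte) (int, error) {
	var q taskQueue
	if err := json.Unmarshal(body, &q); err != nil {
		return 0, err
	}
	return q.TotalResults, nil
}

// desiredWorkers applies the proposed rule: one worker per queued task,
// capped at maxWorkers so the pool cannot grow unbounded.
func desiredWorkers(queueLen, maxWorkers int) int {
	if queueLen > maxWorkers {
		return maxWorkers
	}
	return queueLen
}

func main() {
	body := []byte(`{"TotalResults": 5}`)
	n, err := queueLength(body)
	if err != nil {
		panic(err)
	}
	fmt.Println(desiredWorkers(n, 10)) // prints 5
}
```

In a real KEDA scaler this count would be returned as the external metric, and the KEDA/HPA machinery (or ScaledJob, per the discussion above) would translate it into replicas.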