Feature/splunk observability scaler #6192
base: main
Conversation
… in hindsight Signed-off-by: Sebastian Schimper <[email protected]>
Signed-off-by: Sebastian Schimper <[email protected]>
```go
}

func getPodCount(kc *kubernetes.Clientset, namespace string) int {
	pods, err := kc.CoreV1().Pods(namespace).List(context.TODO(), metav1.ListOptions{})
```
@sschimper-splunk The context should be created and passed in by the calling function
Signed-off-by: Sebastian Schimper <[email protected]>
The only files we should be changing under …
```yaml
name: splunk-secrets
namespace: {{.TestNamespace}}
data:
  accessToken: YW1JeUpqVHRJd185cDhOWG01X21KQQ== # one-time throw-away access token used just for testing
```
It seems this test relies on an actual upstream being available to communicate with. Would it be possible to create a pod here that simply mocks responses? We could override the endpoint in the scaler config and point it to our mocked API. This would ensure the tests could still run in a more closed-loop fashion without any upstream dependencies. Thoughts?
```go
	kedautil "github.com/kedacore/keda/v2/pkg/util"
)

type splunkObservabilityMetadata struct {
```
Should we support a parameter to override an endpoint in case that changes in the future?
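One way to sketch that override (the `endpoint` key, the helper, and the default URL are assumptions for illustration, not the PR's actual metadata fields): fall back to a package default when the trigger metadata doesn't supply an endpoint:

```go
package main

import "fmt"

// Assumed default; the real API base URL may differ per realm.
const defaultAPIEndpoint = "https://api.signalfx.com"

// parseEndpoint is a hypothetical helper: prefer a user-supplied
// "endpoint" trigger-metadata value, otherwise use the default.
func parseEndpoint(metadata map[string]string) string {
	if ep, ok := metadata["endpoint"]; ok && ep != "" {
		return ep
	}
	return defaultAPIEndpoint
}

func main() {
	fmt.Println(parseEndpoint(map[string]string{}))
	fmt.Println(parseEndpoint(map[string]string{"endpoint": "https://mock.local"}))
}
```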
```go
}

func newSplunkO11yConnection(meta *splunkObservabilityMetadata, logger logr.Logger) (*signalflow.Client, error) {
	logger.Info(fmt.Sprintf("meta: %+v\n", meta))
```
I don't think we actually need to log this?
```go
	for _, pl := range msg.Payloads {
		value, ok := pl.Value().(float64)
		if !ok {
			return -1, fmt.Errorf("error: could not convert Splunk Observability metric value to float64")
```
Since we're returning an error type, I don't think we need to include `error:` in the error message
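A small sketch of the point (the `toFloat` helper is illustrative): the error value itself already signals failure, and callers typically wrap it with context via `%w`, which would make an `error:` prefix redundant:

```go
package main

import (
	"errors"
	"fmt"
)

// toFloat mirrors the type assertion from the scaler; note the message
// carries no "error:" prefix, since the error type conveys that.
func toFloat(v interface{}) (float64, error) {
	f, ok := v.(float64)
	if !ok {
		return -1, errors.New("could not convert Splunk Observability metric value to float64")
	}
	return f, nil
}

func main() {
	_, err := toFloat("not a number")
	// The caller adds its own context when wrapping:
	fmt.Println(fmt.Errorf("getting metric value: %w", err))
}
```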
```go
	switch s.metadata.QueryAggregator {
	case "max":
		s.logger.Info(fmt.Sprintf("Returning max value: %.4f\n", max))
```
I don't think we need to log every value being returned. The function should just execute the logic. Otherwise, the logs will get pretty noisy
It was just for me as a sort of poor man's debugging. Let me get back to you on this.
```go
func (s *splunkObservabilityScaler) GetMetricSpecForScaling(context.Context) []v2.MetricSpec {
	metricName := kedautil.NormalizeString("signalfx")
	re := regexp.MustCompile(`data\('([^']*)'`)
```
Recompiling this regex on every call to `GetMetricSpecForScaling()` is expensive in terms of CPU operations. To optimize this, we should compile it once with the package rather than every time the function is called. To do this, we can simply put it in a `var` at the top of the package like so:

```go
var (
	dataRegex = regexp.MustCompile(`data\('([^']*)'`)
)
```
Is regex necessary though? Is there not a more predictable way to get the data returned?
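If the query string is the only source, one regex-free alternative (a sketch only; the real SignalFlow grammar is richer than this handles, e.g. filters or multiple `data()` blocks) is plain substring scanning:

```go
package main

import (
	"fmt"
	"strings"
)

// extractMetricName pulls the metric name out of a SignalFlow program
// like data('my.metric').publish() without a regex.
func extractMetricName(query string) string {
	const prefix = "data('"
	start := strings.Index(query, prefix)
	if start < 0 {
		return ""
	}
	rest := query[start+len(prefix):]
	end := strings.Index(rest, "'")
	if end < 0 {
		return ""
	}
	return rest[:end]
}

func main() {
	fmt.Println(extractMetricName("data('my.test.metric').publish()"))
}
```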
```yaml
- type: splunk-observability
  metricType: Value
  metadata:
    query: "data('fdse-1989-tenable-test-metric').publish()"
```
This should probably be a more generic metric name that doesn't reference any Jira tickets
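For example, the trigger could use a neutral, project-scoped name (the name below is only a suggestion):

```yaml
- type: splunk-observability
  metricType: Value
  metadata:
    query: "data('keda-e2e-test-metric').publish()"
```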
With this pull request, I would like to add a new custom KEDA scaler that interacts with the Splunk Observability Cloud platform. It can query metrics from Splunk Observability Cloud and scale a deployment according to a predefined target value.
As of now, I have not created a pull request to update the Helm chart, because I did not think it necessary. However, my knowledge of Helm charts is admittedly limited, and I am happy to follow up on this if necessary.
Thank you.
Checklist
Relates to: