
Feature/splunk observability scaler #6192

Open · wants to merge 3 commits into base: main
Conversation

sschimper-splunk

With this pull request, I would like to add a new custom KEDA scaler that interacts with Splunk Observability Cloud. It queries metrics from Splunk Observability Cloud and scales a deployment according to a predefined target value.

For now, I have not created a pull request to update the Helm chart, because I did not think it necessary. However, my knowledge of Helm charts is admittedly limited, and I am happy to fix this afterwards if necessary.
Thank you.

Checklist

Relates to:

  • Initial proposal: #6190
  • Pull request containing the documentation on this scaler: #1477

@sschimper-splunk sschimper-splunk requested a review from a team as a code owner September 26, 2024 08:20
}

func getPodCount(kc *kubernetes.Clientset, namespace string) int {
	pods, err := kc.CoreV1().Pods(namespace).List(context.TODO(), metav1.ListOptions{})

Consider using a well-defined context.

Ignore this finding from context-todo.

Contributor

@sschimper-splunk The context should be created and passed in by the calling function.

@circa10a
Contributor

The only files we should be changing under pkg/ in this PR is scalers/ and scaling/. We should remove the other changes introduced.

name: splunk-secrets
namespace: {{.TestNamespace}}
data:
  accessToken: YW1JeUpqVHRJd185cDhOWG01X21KQQ== # one-time throw-away access token used just for testing
Contributor

It seems this test relies on an actual upstream being available to communicate with. Would it be possible to create a pod here that simply mocks responses? We could override the endpoint in the scaler config and point it to our mocked API. This would ensure the tests could still run in a more closed-loop fashion without any upstream dependencies. Thoughts?

kedautil "github.com/kedacore/keda/v2/pkg/util"
)

type splunkObservabilityMetadata struct {
Contributor

Should we support a parameter to override an endpoint in case that changes in the future?
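A sketch of how such an override might be parsed, assuming a hypothetical `endpoint` metadata key and a placeholder default URL (verify the real default against the Splunk Observability API docs):

```go
package main

import "fmt"

// assumed default; not taken from the PR, verify before use
const defaultAPIEndpoint = "https://api.signalfx.com"

// parseEndpoint is a hypothetical helper: an explicit "endpoint" key in the
// trigger metadata wins, otherwise the default is used.
func parseEndpoint(meta map[string]string) string {
	if ep, ok := meta["endpoint"]; ok && ep != "" {
		return ep
	}
	return defaultAPIEndpoint
}

func main() {
	fmt.Println(parseEndpoint(map[string]string{}))
	fmt.Println(parseEndpoint(map[string]string{"endpoint": "https://custom.example.com"}))
}
```

An override like this would also be what lets e2e tests point the scaler at a mocked upstream.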

pkg/scalers/splunk_observability_scaler.go (resolved)
}

func newSplunkO11yConnection(meta *splunkObservabilityMetadata, logger logr.Logger) (*signalflow.Client, error) {
	logger.Info(fmt.Sprintf("meta: %+v\n", meta))
Contributor

I don't think we actually need to log this?

pkg/scalers/splunk_observability_scaler.go (resolved)
pkg/scalers/splunk_observability_scaler.go (resolved)
for _, pl := range msg.Payloads {
	value, ok := pl.Value().(float64)
	if !ok {
		return -1, fmt.Errorf("error: could not convert Splunk Observability metric value to float64")
Contributor

Since we're returning an error type, I don't think we need to include `error:` in the error message.


switch s.metadata.QueryAggregator {
case "max":
	s.logger.Info(fmt.Sprintf("Returning max value: %.4f\n", max))
Contributor

I don't think we need to log every value being returned. The function should just execute the logic. Otherwise, the logs will get pretty noisy

Author

It was just for me as a sort of poor man's debugging. Let me get back to you on this.


func (s *splunkObservabilityScaler) GetMetricSpecForScaling(context.Context) []v2.MetricSpec {
	metricName := kedautil.NormalizeString("signalfx")
	re := regexp.MustCompile(`data\('([^']*)'`)
Contributor

Recompiling this regex every time `GetMetricSpecForScaling()` is called is expensive in terms of CPU operations. To optimize this, we should compile the regex once with the package rather than on every call. To do this, we can simply put it in a `var` at the top of the package like so:

```go
var (
	dataRegex = regexp.MustCompile(`data\('([^']*)'`)
)
```

Is regex necessary, though? Is there not a more predictable way to get the data returned?
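A runnable sketch of the package-level-compile suggestion, with a hypothetical `metricNameFromQuery` helper (not the PR's actual code) showing how the precompiled regex would be used:

```go
package main

import (
	"fmt"
	"regexp"
)

// compiled once at package init, not on every GetMetricSpecForScaling call
var dataRegex = regexp.MustCompile(`data\('([^']*)'`)

// metricNameFromQuery extracts the metric name from a SignalFlow-style
// query string, returning "" when the pattern does not match.
func metricNameFromQuery(query string) string {
	if m := dataRegex.FindStringSubmatch(query); len(m) == 2 {
		return m[1]
	}
	return ""
}

func main() {
	fmt.Println(metricNameFromQuery(`data('cpu.utilization').publish()`))
}
```

If the SignalFlow client exposes the metric name directly in its responses, reading it from there would indeed be more predictable than parsing the query text.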

- type: splunk-observability
  metricType: Value
  metadata:
    query: "data('fdse-1989-tenable-test-metric').publish()"
Contributor

This should probably be a more generic metric name, not one pertaining to any Jiras.
