[Bug]: Empty results custom resources detail field in K8sGPT operator deployment with LocalAI backend #433
Comments
Local lab set up shown here:
Downloaded models directory current content shown here:
LocalAI image information from Helm values shown here:
Notes:
Does this work in other versions?
I'm facing the same issue with Amazon Bedrock and the latest versions of k8sgpt and the k8sgpt operator. The details field in the Results object is empty; however, I am able to get results from Amazon Bedrock using the k8sgpt CLI.
@atul86244 do you have any logs from the operator?
@aparandian could you share the values you used to install the LocalAI deployment?
@JuHyung-Son yes, here are the logs:
Will take a look at this.
Can you actually call the LLM endpoint directly? I'm not familiar with LocalAI; I got this message:
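One way to test the LocalAI endpoint directly, bypassing the operator entirely, is to port-forward the service and hit its OpenAI-compatible chat endpoint. The service name, namespace, port, and model name below are assumptions — substitute the values from your own deployment:

```shell
# Forward the local-ai service to localhost (names/namespace assumed)
kubectl -n local-ai port-forward svc/local-ai 8080:8080 &

# Call the OpenAI-compatible chat completions endpoint directly;
# "ggml-gpt4all-j" is a hypothetical model name -- use one from your
# downloaded models directory.
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "ggml-gpt4all-j",
        "messages": [{"role": "user", "content": "Why might a pod be in CrashLoopBackOff?"}],
        "temperature": 0.7
      }'
```

If this returns a valid completion, LocalAI itself is healthy and the empty-details problem is more likely between the operator and the backend.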
I could not reproduce this with:
|
It seems LocalAI-backend related.
I will test this now.
As JuHyung-Son mentioned, it could be LocalAI related. However, the "k8sgpt-sample-localai" service for the K8sGPT CRD uses port 8080 as well. Since I have used 8080 for local-ai, I'm not sure where I can specify the port for the K8sGPT CRD so that it won't conflict. Is it hardcoded here?
@cf250024 and if the local-ai server is deployed in the same namespace as k8sgpt, it should use a different port; otherwise, the port does not matter.
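A sketch of how the backend URL might be pointed at a non-conflicting port in the K8sGPT custom resource. The field names follow the operator's sample CRD (`ai.backend`, `ai.baseUrl`, `ai.model`), but the service name, namespace, port, and model name here are assumptions — adjust them to your deployment:

```yaml
apiVersion: core.k8sgpt.ai/v1alpha1
kind: K8sGPT
metadata:
  name: k8sgpt-sample-localai
  namespace: k8sgpt
spec:
  ai:
    enabled: true
    backend: localai
    # Point at the local-ai service; serving local-ai on a port other than
    # 8080 avoids clashing with the operator-managed service in the same
    # namespace, per the comment above.
    baseUrl: http://local-ai.local-ai.svc.cluster.local:8081/v1
    model: ggml-gpt4all-j   # hypothetical model name
  noCache: false
```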
Nice first issue, and worth creating a GitHub issue :)
Hi @AlexsJones, as discussed, here is the AI spec I am using for Bedrock:
@AlexsJones, I think the issue here is that my org provides temporary credentials for AWS, so passing AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY to the secret is not working. I tried adding AWS_SESSION_TOKEN to the secret, but that didn't help either. Errors from the logs below:
I tried IRSA too but couldn't find the right config to use with K8sGPT; I tried the setting below with IRSA but it doesn't work:
Do we have any specific k8sgpt setting which should be used with IRSA?
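For context on the temporary-credentials attempt above: STS credentials are only valid when all three values travel together, so a secret carrying them would need the session token alongside the key pair. A sketch, with an assumed secret name and placeholder values — the env var names are the standard AWS SDK ones:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: k8sgpt-aws-credentials   # assumed name; must match your K8sGPT spec's secret reference
  namespace: k8sgpt
type: Opaque
stringData:
  AWS_ACCESS_KEY_ID: "<temporary-access-key-id>"
  AWS_SECRET_ACCESS_KEY: "<temporary-secret-access-key>"
  # Temporary (STS) credentials are rejected without the matching session
  # token -- and they expire, so this secret needs regular refreshing.
  AWS_SESSION_TOKEN: "<session-token>"
```

Even when this works, the expiry problem noted later in the thread remains, which is why IRSA or EKS Pod Identity is the more durable approach.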
The AWS Go SDK does support a few different modes of identity management.
Yes, this is what I tried.
The downside of this approach is that, on average, the session key would need rotation every twelve hours.
I will open a branch to add IRSA support; you can test it out for me and let me know if it helps!
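For reference, IRSA is wired up by annotating the service account the pod runs under with an IAM role ARN; the AWS SDK then picks up credentials via the injected web identity token, with no static keys to rotate. The annotation key is the standard EKS one, but the service account name, namespace, and role ARN below are illustrative assumptions:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: k8sgpt-sample          # assumed: the SA the k8sgpt workload runs under
  namespace: k8sgpt
  annotations:
    # Standard IRSA annotation; the role must trust the cluster's OIDC
    # provider and grant the Bedrock permissions k8sgpt needs.
    eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/k8sgpt-bedrock
```

Note that IRSA also requires an IAM OIDC provider to be configured for the cluster.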
I tested the operator with EKS Pod Identity implemented for Bedrock as the backend; however, the results are still empty.
Latest operator 0.1.7, where EKS Pod Identity is supported, shown here:
k8sgpt Bedrock config shown here:
Service account shown here:
Log from the k8sgpt Bedrock backend shown here:
No details shown in the results:
A detailed JSON output of the result shown here:
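For anyone reproducing the EKS Pod Identity setup above: unlike IRSA, Pod Identity needs the agent add-on and an association between the role and the service account. A sketch using the AWS CLI — the cluster name, namespace, service account, and role ARN are assumptions:

```shell
# Install the Pod Identity agent add-on (once per cluster)
aws eks create-addon \
  --cluster-name my-cluster \
  --addon-name eks-pod-identity-agent

# Associate an IAM role (with Bedrock permissions) with the service
# account the k8sgpt workload runs under; names are illustrative.
aws eks create-pod-identity-association \
  --cluster-name my-cluster \
  --namespace k8sgpt \
  --service-account k8sgpt-sample \
  --role-arn arn:aws:iam::123456789012:role/k8sgpt-bedrock
```

Pods must be restarted after the association is created so they pick up the injected credentials.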
@angelaaaaaaaw thanks for the detailed set up. Can you add the k8sgpt operator logs too?
I've also seen this very problem using local-ai as the backend. I noticed that @AlexsJones reported everything was working as expected using k8sgpt-operator version 0.1.6. I noticed that the logs from version 0.1.6 are different from 0.1.7. Below are my logs using k8sgpt-operator version 0.1.6:
My K8sGPT Custom Resource and LocalAI setup both remained constant between my two tests. The only thing that changed, and that fixed my issue, was downgrading the operator from 0.1.7 to 0.1.6.
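For anyone wanting to try the same workaround, the operator version can be pinned with Helm. Chart repo and release names here are assumptions based on the k8sgpt-operator README — verify them against your own installation:

```shell
# Pin the operator to 0.1.6 until the 0.1.7 regression is understood
helm repo add k8sgpt https://charts.k8sgpt.ai/
helm repo update
helm upgrade --install release k8sgpt/k8sgpt-operator \
  --namespace k8sgpt-operator-system --create-namespace \
  --version 0.1.6
```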
Checklist
Affected Components
K8sGPT Version
v0.1.3
Kubernetes Version
v1.25.0
Host OS and its Version
macOS 14.4.1
Steps to reproduce
Expected behaviour
The Results custom resource's detail field should provide information or suggestions on how to fix the broken workload.
Actual behaviour
The Results custom resource's detail field is empty.
Additional Information
Results custom resource shown here:
LocalAI pod logs shown here:
K8sGPT operator controller manager pod logs shown here:
K8sGPT localAI pod logs here:
K8sGPT Custom Resource configuration shown here (tested both AI enabled and disabled):