
Add more ingress flexibility in Helm chart #1216

Draft
wants to merge 1 commit into main
Conversation

Nathanael-Mtd
To make CISO Assistant more flexible for Kubernetes users, I made some (breaking) changes to the Helm chart:

  • Add Ingress config in values (annotations and ingressClassName)
  • Enable TLS on the Ingress
  • Remove Caddy (unnecessary when ingress.tls is used)
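As a sketch of what such a configuration could look like in values.yaml (the key names follow common chart conventions and are illustrative, not necessarily this chart's final schema):

```yaml
# Hypothetical values.yaml excerpt -- key names are illustrative,
# not necessarily the chart's final schema.
ingress:
  enabled: true
  ingressClassName: nginx
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
  hosts:
    - host: ciso-assistant.example.com
      paths:
        - path: /
          pathType: Prefix
  tls:
    - secretName: ciso-assistant-tls
      hosts:
        - ciso-assistant.example.com
```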

I don't think migrating the chart with these changes will be a big pain.

It will require using cert-manager for automatic Let's Encrypt/ACME/PKI certificate generation, or creating the secret manually alongside the chart deployment.

I'd be happy to add custom TLS secret creation from the values file if it helps some users avoid manual steps.
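For the manual route, the TLS secret would be a standard `kubernetes.io/tls` Secret created alongside the chart, something like (name and namespace are placeholders):

```yaml
# Hypothetical manually-created TLS secret -- name and namespace
# are placeholders, and the data values must be base64-encoded.
apiVersion: v1
kind: Secret
metadata:
  name: ciso-assistant-tls
  namespace: ciso-assistant
type: kubernetes.io/tls
data:
  tls.crt: <base64-encoded certificate>
  tls.key: <base64-encoded private key>
```

The same secret can also be created with `kubectl create secret tls ciso-assistant-tls --cert=tls.crt --key=tls.key`.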

This PR can probably fix these issues: #1123 #1135

Add ingress configuration and add tls options in ingress to avoid using Caddy

github-actions bot commented Dec 20, 2024

CLA Assistant Lite bot All contributors have signed the CLA ✍️ ✅

@Nathanael-Mtd Nathanael-Mtd changed the title Add more flexibility in Helm chart Add more ingress flexibility in Helm chart Dec 20, 2024
@Nathanael-Mtd
Author

I have read the CLA Document and I hereby sign the CLA

github-actions bot added a commit that referenced this pull request Dec 20, 2024
@ab-smith
Contributor

This is awesome @Nathanael-Mtd thank you for the contribution!
I was thinking more about having Caddy as optional rather than just removing it.

@Nathanael-Mtd
Author

Nathanael-Mtd commented Dec 20, 2024

Maybe, but I don't think chaining multiple reverse proxies, or adding a reverse proxy inside pods, are good ideas overall, especially when there is no added value.

Also, I think the frontend and backend should be separated into two Pods, and we should avoid using a StatefulSet, because with replicas > 1 we would end up with multiple distinct databases running.
Deployments would be better: if we want to scale the backend, we can use a volume in ReadWriteMany mode to share the SQLite DB across all backend pods.
With that separation, we can also add liveness/readiness probes on each Pod.
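As a rough sketch of what the backend half of that split could look like (the resource name, image, ports, and probe endpoints are assumptions for illustration, not taken from the chart):

```yaml
# Hypothetical backend Deployment with probes -- name, image,
# ports, and probe paths are assumptions, not the chart's values.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ciso-assistant-backend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ciso-assistant-backend
  template:
    metadata:
      labels:
        app: ciso-assistant-backend
    spec:
      containers:
        - name: backend
          image: backend:latest   # placeholder image reference
          ports:
            - containerPort: 8000
          livenessProbe:
            httpGet:
              path: /api/
              port: 8000
          readinessProbe:
            httpGet:
              path: /api/
              port: 8000
          volumeMounts:
            - name: db
              mountPath: /data
      volumes:
        - name: db
          persistentVolumeClaim:
            claimName: ciso-assistant-db   # PVC with ReadWriteMany access mode
```

A matching frontend Deployment would carry its own probes, which is the main operational benefit of the split.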

If you're ok with that, I can make these changes, and also add an option to keep Caddy as a separate Deployment if you think it's still useful.

@Nathanael-Mtd Nathanael-Mtd marked this pull request as draft December 20, 2024 21:50
@ab-smith
Contributor

I tell you what: let's start a different chart for these options: deployment-based, Postgres instead of SQLite (to support multiple clients, the backend in this case), and we drop Caddy in this flavor. What do you think?

@Nathanael-Mtd
Author

Seems fine to me; we can keep an option to use SQLite for simple deployments.
How do you want to proceed on that "flavor"? A different name/folder/repo? And for the current chart, do you want to keep and maintain it, or deprecate it later?

@ab-smith
Contributor

Yup, we can go for a different folder in this repo, ciso-assistant-alt or something like that, and I'll handle the distribution afterwards.
We will keep both, see how the community responds, and adjust accordingly.
We can try your SQLite idea with ReadWriteMany; I've just had bad experiences and mixed feedback when people twist it for concurrency, but it could be a good learning opportunity.
P.S.: thank you again, and feel free to jump into the Discord server.

@Nathanael-Mtd
Author

Fine, I will create another PR for the new chart if that's ok with you.

ReadWriteMany was just an idea to work around the scalability limits of a StatefulSet, but to be fair, managing backend scalability with a database is generally a pain 😅
