Mutating webhook is called but does not install cloud-sql-proxy container #276
Comments
I'm also facing the same issue. I can see the
Logs from the
Were the workload pods created by another operator? It may be a duplicate of #244. We plan to release the fix within the next week.
Hello @iazunna, We have released preview version v0.4.0. Please give this another try and let me know how it goes. Note this version has some breaking changes. Be sure to check the Release Notes.
Yes, I'm using ArgoCD to manage the resources.
Hi @hessjcg, thanks for releasing a new version of the operator, which I am yet to test. In the meantime, as a workaround, I am using a standalone Pod to run the proxy:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: cloud-sql-proxy
  name: cloud-sql-proxy
  namespace: cloud-sql-proxy
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: cloud-sql-proxy
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: cloud-sql-proxy
    spec:
      containers:
        - args:
            - my-project-id:somewhere:mysql-db1?address=0.0.0.0&port=30016
            - my-project-id:somewhere:mysql-db2?address=0.0.0.0&port=30017
            - --credentials-file=/secrets/cloudsql/credentials.json
          command:
            - /cloud-sql-proxy
          image: eu.gcr.io/cloud-sql-connectors/cloud-sql-proxy:2.1.2
          imagePullPolicy: IfNotPresent
          name: cloud-sql-proxy
          ports:
            - containerPort: 30016
              name: db1-port
              protocol: TCP
            - containerPort: 30017
              name: db2-port
              protocol: TCP
          resources:
            limits:
              cpu: 150m
              memory: 150Mi
            requests:
              cpu: 100m
              memory: 100Mi
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          volumeMounts:
            - mountPath: /secrets/cloudsql
              name: cloudsql-instance-credentials
              readOnly: true
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
      volumes:
        - name: cloudsql-instance-credentials
          secret:
            defaultMode: 420
            secretName: cloudsql-instance-credentials
---
apiVersion: v1
kind: Service
metadata:
  name: mysql-db1
  namespace: cloud-sql-proxy
spec:
  clusterIP: <svc-ip>
  clusterIPs:
    - <svc-ip>
  ipFamilies:
    - IPv4
  ipFamilyPolicy: SingleStack
  ports:
    - port: 30016
      protocol: TCP
      targetPort: 30016
  selector:
    app: cloud-sql-proxy
  sessionAffinity: None
  type: ClusterIP
---
apiVersion: v1
kind: Service
metadata:
  name: mysql-db2
  namespace: cloud-sql-proxy
spec:
  clusterIP: <svc-ip>
  clusterIPs:
    - <svc-ip>
  ipFamilies:
    - IPv4
  ipFamilyPolicy: SingleStack
  ports:
    - port: 30017
      protocol: TCP
      targetPort: 30017
  selector:
    app: cloud-sql-proxy
  sessionAffinity: None
  type: ClusterIP
```
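For reference, a workload in another namespace can then reach a database through these Services by its cluster DNS name. A minimal sketch of such a client Deployment follows; the application name, namespace, image, and environment variable names are all hypothetical, not part of the original workaround:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app          # hypothetical application
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: my-app:latest   # hypothetical image
          env:
            # Cross-namespace DNS name of the proxy Service above
            - name: DB_HOST
              value: mysql-db1.cloud-sql-proxy.svc.cluster.local
            - name: DB_PORT
              value: "30016"
```

Because the Services select `app: cloud-sql-proxy` and the proxy listens on `0.0.0.0`, any pod in the cluster can connect through them rather than needing its own sidecar.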
I tried the new version but am still getting the same results. I'm testing on a GKE cluster.
Logs to help
I believe this
I am also seeing this on GKE v1.23.16-gke.200 with cloud-sql-proxy-operator v0.4.0. I am using ArgoCD as well, and I decided for now to just manually create the cloud-sql-proxy sidecar container.
Hi, I'm going to look into this further over the next week. Are all of you experiencing this issue using ArgoCD? |
I'm downgrading this to a P2 because it seems to be a problem with just the ArgoCD operator. |
@hessjcg I am closing the issue because the workaround with a standalone pod for the
Glad to hear you have a workaround. Nonetheless, the Proxy Operator should work with other operators, so I think it's still valuable to figure out what's not working here.
I was unable to reproduce this using operator 0.5.0 with the simple example configuration. I tried these three ways of deploying the example:
In all three cases, the operator was able to add the sidecar proxy container to the deployment and connect to the database. Thus, we haven't yet found the root cause of this issue. I will work on improving status reporting (#50) in coming versions of the operator so that hopefully we can narrow down the problem if this happens again. I'm going to close this for now. If you have more information about your workloads, please comment on this issue.
I was able to reproduce the part of this issue where pods are created without proxy containers. I am tracking it in #337. |
Expected Behavior
After an `AuthProxyWorkload` is created, the workload is recognized and the mutating admission webhook adds the missing `cloud-sql-proxy` container to the pod so it can connect to the Cloud SQL instance.

Actual Behavior
The sidecar container is never added.
Steps to Reproduce the Problem
1. Create an `AuthProxyWorkload` resource.
2. Check the `AuthProxyWorkload` status.
3. Observe that the `cloud-sql-proxy` container is never added to the workload's pods.
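For context, a minimal `AuthProxyWorkload` resource looks roughly like the sketch below. All names and the connection string are placeholders, and the exact schema may differ between operator versions, so check the operator's CRD reference for your release:

```yaml
apiVersion: cloudsql.cloud.google.com/v1
kind: AuthProxyWorkload
metadata:
  name: my-app-proxy      # hypothetical name
  namespace: default
spec:
  workloadSelector:
    kind: Deployment
    name: my-app          # hypothetical workload
  instances:
    - connectionString: my-project:my-region:my-instance
```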
Specifications
- Operator version: 0.3.0
- Kubernetes version: 1.21.14-gke.15800 (GKE)