
Getting opposite result #21

Open
shreya-bhatnagar opened this issue Jul 21, 2021 · 4 comments

Comments


shreya-bhatnagar commented Jul 21, 2021

Just like issue #8, I am also getting the opposite result. I have applied the ExternalService YAML, but instead of allowing google.com it is blocking google.com and allowing all other calls. What could I be doing wrong?

My ExternalService.yaml

    apiVersion: egress.monzo.com/v1
    kind: ExternalService
    metadata:
      name: google
    spec:
      dnsName: google.com
      # optional, defaults to false, instructs dns server to rewrite queries for dnsName
      hijackDns: true
      ports:
      - port: 80
      - port: 443
        protocol: TCP
      minReplicas: 1
      maxReplicas: 3

My testpod.yaml

apiVersion: v1
kind: Pod
metadata:
  name: nginx
  namespace: testNs-system
  labels:
    egress.monzo.com/allowed-gateway: google
spec:
  containers:
  - image: nginx:1.14.2
    command:
      - "sleep"
      - "604800"
    imagePullPolicy: IfNotPresent
    name: nginx
  restartPolicy: Always

From the test pod, curl -v https://google.com is blocked while other URLs are allowed. As per the operator's README, I also need a default-deny egress NetworkPolicy, so I applied that too. But after applying the default-deny egress policy, all egress calls from the test pod are blocked, including google.com (the whitelisted one).

Default-Deny-All-Egress.yaml

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all-egress
  namespace: testNs-system
spec:
  podSelector:
    matchLabels:
      app: nginx
      egress.monzo.com/allowed-gateway: google
  policyTypes:
  - Egress
  egress: []

Note:

1. I am not seeing any logs in the egress-operator-controller-manager pod when curl commands are fired from the test pod. I assumed that after deploying this operator all egress calls would go via the egress-operator-controller-manager, so they should show up in its logs.

root@Ubuntu18-VM:~# kubectl -n egress-operator-system logs egress-operator-controller-manager-68d9cc55fb-vwg6t -c manager
2021-07-21T12:15:08.134Z        INFO    controller-runtime.metrics      metrics server is starting to listen    {"addr": "127.0.0.1:8080"}
2021-07-21T12:15:08.135Z        INFO    setup   starting manager
2021-07-21T12:15:08.135Z        INFO    controller-runtime.manager      starting metrics server {"path": "/metrics"}
2021-07-21T12:15:08.229Z        DEBUG   controller-runtime.manager.events       Normal  {"object": {"kind":"ConfigMap","namespace":"egress-operator-system","name":"controller-leader-election-helper","uid":"05c26e1e-4e8f-46e3-8723-2095b4206211","apiVersion":"v1","resourceVersion":"1142"}, "reason": "LeaderElection", "message": "egress-operator-controller-manager-68d9cc55fb-vwg6t_c4c5b955-6cf7-481b-a995-8e7a6f7e2d07 became leader"}
2021-07-21T12:15:08.325Z        INFO    controller-runtime.controller   Starting EventSource    {"controller": "externalservice", "source": "kind source: /, Kind="}
2021-07-21T12:15:09.326Z        DEBUG   controller-runtime.controller   Successfully Reconciled {"controller": "externalservice", "request": "/google"}
2021-07-21T12:15:21.691Z        INFO    controllers.ExternalService     Patching object {"patch": "{\"metadata\":{\"labels\":{\"egress.monzo.com/hijack-dns\":\"true\"}}}", "kind": "/v1, Kind=Service"}
2021-07-21T12:15:21.699Z        DEBUG   controller-runtime.controller   Successfully Reconciled {"controller": "externalservice", "request": "/google"}

2. Logs of the google-76566579bc-jzggg pod created after applying ExternalService.yaml:

root@Ubuntu18-VM:~# kubectl -n egress-operator-system logs google-76566579bc-jzggg  | more
[2021-07-21 12:15:21.126][1][info][main] [source/server/server.cc:249] initializing epoch 0 (hot restart version=11.104)
[2021-07-21 12:15:21.126][1][info][main] [source/server/server.cc:251] statically linked extensions:
[2021-07-21 12:15:21.126][1][info][main] [source/server/server.cc:253]   access_loggers: envoy.file_access_log,envoy.http_grpc_access_log,envoy.tcp_grpc_access_log
[2021-07-21 12:15:21.126][1][info][main] [source/server/server.cc:256]   filters.http: envoy.buffer,envoy.cors,envoy.csrf,envoy.ext_authz,envoy.fault,envoy.filters.http.adaptive_concurrency,envoy.filters.http.dynamic_forward_proxy,envoy.filters.http.grpc_http1_reverse_bridge,envoy.filters.http.grpc_stats,envoy.filters.http.header_to_metadata,envoy.filters.http.jwt_authn,envoy.filters.http.original_src,envoy.filters.http.rbac,envoy.filters.http.tap,envoy.grpc_http1_bridge,envoy.grpc_json_transcoder,envoy.grpc_web,envoy.gzip,envoy.health_check,envoy.http_dynamo_filter,envoy.ip_tagging,envoy.lua,envoy.rate_limit,envoy.router,envoy.squash
[2021-07-21 12:15:21.126][1][info][main] [source/server/server.cc:259]   filters.listener: envoy.listener.http_inspector,envoy.listener.original_dst,envoy.listener.original_src,envoy.listener.proxy_protocol,envoy.listener.tls_inspector
[2021-07-21 12:15:21.126][1][info][main] [source/server/server.cc:262]   filters.network: envoy.client_ssl_auth,envoy.echo,envoy.ext_authz,envoy.filters.network.dubbo_proxy,envoy.filters.network.mysql_proxy,envoy.filters.network.rbac,envoy.filters.network.sni_cluster,envoy.filters.network.thrift_proxy,envoy.filters.network.zookeeper_proxy,envoy.http_connection_manager,envoy.mongo_proxy,envoy.ratelimit,envoy.redis_proxy,envoy.tcp_proxy
[2021-07-21 12:15:21.126][1][info][main] [source/server/server.cc:264]   stat_sinks: envoy.dog_statsd,envoy.metrics_service,envoy.stat_sinks.hystrix,envoy.statsd
[2021-07-21 12:15:21.126][1][info][main] [source/server/server.cc:266]   tracers: envoy.dynamic.ot,envoy.lightstep,envoy.tracers.datadog,envoy.tracers.opencensus,envoy.tracers.xray,envoy.zipkin
[2021-07-21 12:15:21.126][1][info][main] [source/server/server.cc:269]   transport_sockets.downstream: envoy.transport_sockets.alts,envoy.transport_sockets.raw_buffer,envoy.transport_sockets.tap,envoy.transport_sockets.tls,raw_buffer,tls
[2021-07-21 12:15:21.126][1][info][main] [source/server/server.cc:272]   transport_sockets.upstream: envoy.transport_sockets.alts,envoy.transport_sockets.raw_buffer,envoy.transport_sockets.tap,envoy.transport_sockets.tls,raw_buffer,tls
[2021-07-21 12:15:21.126][1][info][main] [source/server/server.cc:278] buffer implementation: new
[2021-07-21 12:15:21.129][1][info][main] [source/server/server.cc:344] admin address: 0.0.0.0:11000
[2021-07-21 12:15:21.130][1][info][main] [source/server/server.cc:458] runtime: layers:
  - name: base
    static_layer:
      {}
  - name: admin
    admin_layer:
      {}
[2021-07-21 12:15:21.130][1][info][config] [source/server/configuration_impl.cc:62] loading 0 static secret(s)
[2021-07-21 12:15:21.130][1][info][config] [source/server/configuration_impl.cc:68] loading 2 cluster(s)
[2021-07-21 12:15:21.130][1][info][config] [source/server/configuration_impl.cc:72] loading 2 listener(s)
[2021-07-21 12:15:21.131][1][info][config] [source/server/configuration_impl.cc:97] loading tracing configuration
[2021-07-21 12:15:21.131][1][info][config] [source/server/configuration_impl.cc:117] loading stats sink configuration
[2021-07-21 12:15:21.131][1][info][main] [source/server/server.cc:549] starting main dispatch loop
[2021-07-21 12:15:21.170][1][info][upstream] [source/common/upstream/cluster_manager_impl.cc:161] cm init: all clusters initialized
[2021-07-21 12:15:21.170][1][info][main] [source/server/server.cc:528] all clusters initialized. initializing init manager
[2021-07-21 12:15:21.170][1][info][config] [source/server/listener_manager_impl.cc:578] all dependencies initialized. starting workers
[2021-07-21T12:15:21.653Z] "GET /ready HTTP/1.1" 200 - 0 5 0 - "192.168.1.9" "kube-probe/1.21" "-" "172.16.216.7:11000" "-"
[2021-07-21T12:15:29.055Z] "GET /ready HTTP/1.1" 200 - 0 5 0 - "192.168.1.9" "kube-probe/1.21" "-" "172.16.216.7:11000" "-"

3. Network policies description:

root@Ubuntu18-VM:~# kubectl get networkpolicy -A
NAMESPACE                NAME                            POD-SELECTOR                                         AGE
egress-operator-system   egress-operator-public-egress   app=egress-gateway                                   4h10m
egress-operator-system   google                          egress.monzo.com/gateway=google                      4h10m
default                  default-deny-all-egress         app=ubuntu,egress.monzo.com/allowed-gateway=google   19s

Please let me know what I am missing here and how I can make this operator work.

@chongyangshi (Contributor)

Hi Shreya,

> I assumed that after deploying this operator all egress calls would go via the egress-operator-controller-manager

This is not the case. The CoreDNS plugin tells pods in the cluster to send their traffic to the egress gateway deployments the operator has launched in its own namespace; each hostname is fronted by its own egress gateway deployment and an individual Kubernetes Service.
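
One quick way to check this redirection (a sketch, not part of the original reply; the Service name google and the namespace egress-operator-system are taken from this thread) is to compare what the hostname resolves to inside the pod with the gateway Service's ClusterIP:

    # Inside the test pod: with hijackDns enabled, google.com should resolve to
    # the ClusterIP of the per-hostname Service in the operator's namespace,
    # not to a public Google address.
    nslookup google.com

    # On the host: the ClusterIP shown here should match the address above.
    kubectl -n egress-operator-system get svc google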

> after applying the default-deny egress policy, all egress calls from the test pod are blocked, including google.com (the whitelisted one)

This is expected: this policy essentially requires all pods in its namespace to send their egress traffic via the egress gateways instead, and for an ingress policy in the egress operator's namespace to accept that traffic.
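
For reference, here is a minimal sketch of a default-deny egress policy that still leaves a path to the gateways. This is not from the original reply, and the kubernetes.io/metadata.name namespace labels are an assumption that may need adjusting for your cluster. With egress: [] as in the policy above, the selected pods cannot reach cluster DNS or the gateway Services at all, which matches the behaviour described.

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: default-deny-all-egress
      namespace: testNs-system
    spec:
      podSelector: {}            # applies to every pod in the namespace
      policyTypes:
      - Egress
      egress:
      - to:                      # allow DNS lookups against kube-dns
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: kube-system
        ports:
        - port: 53
          protocol: UDP
        - port: 53
          protocol: TCP
      - to:                      # allow traffic to the egress gateway pods
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: egress-operator-system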

> My testpod.yaml

apiVersion: v1
kind: Pod
metadata:
  name: nginx
  namespace: testNs-system
  labels:
    egress.monzo.com/allowed-gateway: google

The reason your pod cannot talk to Google via the gateway is that the format of the label you've given does not match the NetworkPolicy created for that egress gateway (https://github.com/monzo/egress-operator/blob/master/controllers/networkpolicy.go#L74). To see what label is expected, run kubectl get networkpolicy google -n egress-operator-system and check what's in .spec.ingress[].from.podSelector[].
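
The relevant part of that generated policy should look roughly like the sketch below. It is reconstructed from the pod selector shown in the kubectl get networkpolicy listing above and the label format described in the next sentence, not a verbatim dump; the real object may contain additional fields such as a namespaceSelector.

    # Sketch of the NetworkPolicy the operator generates for a gateway named "google".
    spec:
      podSelector:
        matchLabels:
          egress.monzo.com/gateway: google            # selects the gateway pods
      ingress:
      - from:
        - podSelector:
            matchLabels:
              egress.monzo.com/allowed-google: "true" # label client pods must carry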

For example, in your case, since the egress gateway's name is google, the expected source pod label would be egress.monzo.com/allowed-google: "true".
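
Applied to the pod manifest from this issue, the metadata would become (a sketch, keeping your pod name and namespace):

    metadata:
      name: nginx
      namespace: testNs-system
      labels:
        # instead of egress.monzo.com/allowed-gateway: google
        egress.monzo.com/allowed-google: "true"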

@shreya-bhatnagar (Author)

I already tried the label egress.monzo.com/allowed-google: "true", but it is still not working and behaves the same way as described in the question.


srinicrick65 commented Aug 28, 2021

Hi,
When I install Calico as the CNI plugin for the k3s Kubernetes distribution and curl a domain that should go through the egress gateway, I get an SSL error. I am not sure what the issue is, but I can see that the call is redirected to the egress Service. When I am not using the Calico CNI, it works fine. Can someone help with this?

Environment

curl 7.52.1
OpenSSL 1.1.0l 10 Sep 2019 (Library: OpenSSL 1.1.0j 20 Nov 2018)
K3S_VERSION=v1.21.1+k3s1
Calico as CNI

root@nginx:/# curl https://github.com -v

* Rebuilt URL to: https://github.com/
*   Trying 10.43.243.74...
* TCP_NODELAY set
* Connected to github.com (10.43.243.74) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* Cipher selection: ALL:!EXPORT:!EXPORT40:!EXPORT56:!aNULL:!LOW:!RC4:@strength
* successfully set certificate verify locations:
*   CAfile: /etc/ssl/certs/ca-certificates.crt
    CApath: /etc/ssl/certs
* TLSv1.2 (OUT), TLS header, Certificate Status (22):
* TLSv1.2 (OUT), TLS handshake, Client hello (1):
* Unknown SSL protocol error in connection to github.com:443
* Curl_http_done: called premature == 0
* Closing connection 0
curl: (35) Unknown SSL protocol error in connection to github.com:443
kubectl get pods -A -o wide
test                     nginx                                                 1/1     Running   0          5h27m   172.18.106.8

kubectl get svc -A
egress-operator-system   egress-operator-controller-manager-metrics-service   ClusterIP   10.43.241.184   <none>        8443/TCP                 5h51m
egress-operator-system   github                                               ClusterIP   **10.43.243.74**    <none>        443/TCP                  5h51m

externalServicegit.yaml

apiVersion: egress.monzo.com/v1
kind: ExternalService
metadata:
  name: github
spec:
  dnsName: github.com
  hijackDns: true
  ports:
  - port: 443
    protocol: TCP
  minReplicas: 1
  maxReplicas: 1

testPod.yaml

apiVersion: v1
kind: Pod
metadata:
  name: nginx
  namespace: test
  labels:
    egress.monzo.com/allowed-github: "true"
spec:
  containers:
  - image: nginx:1.14.2
    command:
      - "sleep"
      - "604800"
    imagePullPolicy: IfNotPresent
    name: nginx
  restartPolicy: Always

@arnavpisces

@shreya-bhatnagar were you able to resolve the issue? If yes, could you explain the steps? Thanks.
