Description
The goal is to know in advance where network connections from the cluster are going and to authorize them before they happen. All connections that leave the cluster network need to be whitelisted. For that, I propose an Egress resource to specify the whitelist.
Requirements
- No network traffic going out of the cluster by default.
- Explicit whitelisting possible for the user.
- Explicit whitelisting possible for system bootstrapping (like Docker registry and OAuth).
Not a requirement:
- Fine-grained rules per pod; rules apply only at the cluster level.
Example specification
apiVersion: "zalando.org/v1"
kind: EgressRuleSet
metadata:
  name: my-app-targets
spec:
  targets:
    - mydependency1.example.com:443
    - mydependency2.example.com:443
    - "*.example.org:80"
In some cases, pinning the targets down to predetermined domains doesn't work. Examples would be crawlers or applications that need to react to user input, such as webhooks. In this case, one needs a switch to allow everything:
apiVersion: "zalando.org/v1"
kind: EgressRuleSet
metadata:
  name: world-access
spec:
  targets:
    - "*:80"
Multiple rule sets can exist at the same time; the union of all their targets determines the effective whitelist for the whole cluster.
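As an illustration of the union semantics (the rule set names team-a-targets and team-b-targets are hypothetical), the two resources below would yield an effective cluster-wide whitelist of mydependency1.example.com:443 and *.example.org:80:

# Hypothetical rule set owned by one team
apiVersion: "zalando.org/v1"
kind: EgressRuleSet
metadata:
  name: team-a-targets
spec:
  targets:
    - mydependency1.example.com:443
---
# Hypothetical rule set owned by another team
apiVersion: "zalando.org/v1"
kind: EgressRuleSet
metadata:
  name: team-b-targets
spec:
  targets:
    - "*.example.org:80"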
Wildcarding everything is okay since it's still an explicit choice that can be checked during deployment. Wildcarding ports should be discouraged (and maybe not even implemented) unless we find a valid use case where it is not purely detrimental to security.
Example integration
Since this would probably be implemented as an HTTP proxy, the integration pattern should be that the standard http_proxy environment variable is set by default in every container that starts, without the user having to specify it.
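A sketch of what that injection could look like from the container's point of view; the proxy address egress-proxy.kube-system.svc:3128 and the no_proxy value are assumptions for illustration, not part of the proposal:

# Hypothetical environment injected into every container by default
env:
  - name: http_proxy
    value: "http://egress-proxy.kube-system.svc:3128"
  - name: https_proxy
    value: "http://egress-proxy.kube-system.svc:3128"
  - name: no_proxy
    value: ".cluster.local"   # keep in-cluster traffic off the proxy (assumption)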
Example implementation
One should set up an HA/scalable HTTP proxy like Squid. In addition, an egress-controller should observe the EgressRuleSet resources and reconfigure Squid accordingly.
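A minimal sketch of what the controller's output could look like, assuming it renders the whitelist from the EgressRuleSet example above into a ConfigMap that Squid includes (the ConfigMap name, namespace, and ACL names are assumptions; a real controller would likely need per-target ACL pairs to enforce exact host:port combinations rather than the simple cross product shown here):

# Hypothetical ConfigMap rendered by the egress-controller for Squid
apiVersion: v1
kind: ConfigMap
metadata:
  name: egress-proxy-whitelist
  namespace: kube-system
data:
  whitelist.conf: |
    # Destination domains collected from all EgressRuleSet resources
    acl egress_domains dstdomain mydependency1.example.com mydependency2.example.com .example.org
    # Ports collected from the same targets
    acl egress_ports port 80 443
    http_access allow egress_domains egress_ports
    http_access deny all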
The AWS Security Group of all Kubernetes nodes would not have the default "allow outbound" rule, so all traffic trying to leave directly would be dropped. The HTTP proxy would need to run outside of that security group in some kind of "DMZ" setup (like the ALBs and ELBs) that Kubernetes nodes can reach. The HTTP proxy server itself then has full outbound rules in its own Security Group.
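A sketch of the node security group in CloudFormation terms, assuming the proxy listens on Squid's default port 3128 and that ProxySecurityGroup and VpcId are defined elsewhere in the template (intra-cluster rules omitted for brevity):

# Hypothetical node security group: no default allow-all egress,
# outbound traffic only towards the proxy's security group
NodeSecurityGroup:
  Type: AWS::EC2::SecurityGroup
  Properties:
    GroupDescription: Kubernetes worker nodes (egress locked down)
    VpcId: !Ref VpcId
    SecurityGroupEgress:
      - IpProtocol: tcp
        FromPort: 3128
        ToPort: 3128
        DestinationSecurityGroupId: !Ref ProxySecurityGroup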