
Commit 1a2df53

authored Apr 7, 2023
Merge pull request #134 from Phantom-Intruder/javaclient
Javaclient101
2 parents 95650e1 + 736f14e commit 1a2df53

9 files changed: +71 -62 lines changed
 

ClusterNetworking101/README.md

+1 -28
@@ -8,11 +8,6 @@ Networking in Kubernetes is a central part of Kubernetes, but it can be challeng
 • External-to-Service communications: this is covered by services.
 • Pod-to-Pod communications: this is the primary focus of this lab.
 
-## The problem with Kubernetes networking
-
-Kubernetes deals with highly distributed systems, where each pod is an isolated unit with its own IP address that needs to communicate with other pods, meaning that it's almost as if there were two separate machines trying to talk to each other. However, unlike a conventional context, the number of pods in a cluster can scale at will, leading to an unmanageable number of pods. Additionally, you can have multiple worker nodes on servers located across the world and require pods in each of these nodes to communicate with each other. Trying to use dynamic port allocation to fix the problem would only increase the complexity of the system, meaning that a better solution has to be used. Finally, Kubernetes is supposed to handle everything related to its network by itself.
-
-For example, you could start two nginx pods within the same cluster and have them successfully ping each other without having to do any network configuration yourself. You can even reach the containers within those pods without doing any port mapping at all. If you have multiple containers running on the same pod, they can talk to each other via localhost (since they share IP and MAC addresses). If you want ports in your pods to be accessible from the outside world, you can easily set up a service such as NodePort or LoadBalancer. All of this is part of the explicitly designed Kubernetes networking model. Let's start by taking a look at the fundamental rules that are used to define this model.
 
 ## Kubernetes Networking Rules

@@ -49,29 +44,7 @@ Kubernetes supports both networking models, so you can base your model of choice
 
 A CNI is simply a link between the container runtime (like Docker or rkt) and the network plugin. The network plugin is nothing but the executable that handles the actual connection of the container to or from the network, according to a set of rules defined by the CNI. So, to put it simply, a CNI is a set of rules and Go libraries that aid in container/network-plugin integration.
 
-All of the CNIs can be deployed by simply running a pod or a daemonset that launches and manages their daemons. What's interesting is that CNIs aren't exclusive to Kubernetes, or even bound to Kubernetes in any way. There are multiple CNIs available that are part of the CNI project, and these are standalone applications that work across various runtimes. CNI configuration formats are written in JSON, and look like this:
-
-```json
-{
-  "name": "cniname",
-  "type": "bridge",
-  "bridge": "containernet",
-  "isDefaultGateway": true,
-  "forceAddress": false,
-  "ipam": {
-    "type": "host-local",
-    "subnet": "10.10.0.0/16"
-  }
-}
-```
-
-While the above configuration may seem unfamiliar to you, it is simply a CNI configuration that creates a bridged network for your pods. So while you may not have seen this configuration before, you most certainly have used it.
-
-The network types, which are also referred to as interface plugins, include bridge, loopback, vlan, macvlan, ipvlan, and so on. If you have looked at Docker networking before, you will notice that these match the network types available with Docker. You can find more information about them in the [official Docker documentation](https://docs.docker.com/network/#network-drivers).
-
-The `ipam` section defines how IP addresses are assigned to pods. `host-local` IPAM allocates IPv4 and IPv6 addresses out of a specified address range defined by the subnet, while setting the type to `dhcp` will assign the IP address dynamically based on what the DHCP server assigns. You could also set the type to `static`, which would allow you to specify a single IP address that won't change.
-
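To make the `ipam` options concrete, here is a sketch (hypothetical values, reusing the bridge config above) of the same plugin configured for DHCP-based assignment instead of `host-local`; note that the `dhcp` IPAM type also requires the CNI DHCP daemon to be running on the host:

```json
{
  "name": "cniname",
  "type": "bridge",
  "bridge": "containernet",
  "ipam": {
    "type": "dhcp"
  }
}
```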
-Let’s have a look now at the most well-known Kubernetes networking solutions.
+All of the CNIs can be deployed by simply running a pod or a daemonset that launches and manages their daemons. Let’s have a look now at the most well-known Kubernetes networking solutions.
 
 
 ### AWS VPC CNI for Kubernetes

JavaClient101/intro.md

+56
@@ -0,0 +1,56 @@
# The Kubernetes Java client

If you consider the Kubernetes architecture, you will see that there is a Kubernetes API which is called to perform operations on a Kubernetes cluster. Generally, you would use `kubectl` to call this API. However, if you were working on something like a Java application and wanted to perform an operation on a Kubernetes cluster from the Java code, it would be a bit of a hassle to create `ProcessBuilder` objects to execute a kubectl command. Additionally, this would create long and unreadable code that is generally bad for code maintainability in the long run. This is where the various Kubernetes clients that are available for different programming languages come in. These clients are able to do almost everything that the kubectl command can. You can take a look at the list of examples that the Java client has provided [here](https://github.com/kubernetes-client/java/tree/master/examples/examples-release-15/src/main/java/io/kubernetes/client/examples). Note that while it is entirely possible to deploy Java applications on Kubernetes clusters, that is not the aim of the client library. The idea here is to control and configure the cluster from within a Java application.

In this case, we will be considering the Java client, although clients for other languages would run in roughly the same manner. To set up the Kubernetes library, we will be using Maven.
## Lab

### Requirements

For starters, you need to have Maven installed. You can download it from [here](https://maven.apache.org/download.cgi). You will also need Java installed. Specifically, you will need Java 17, since the sample Spring application we will be running requires it. You also need a cluster to work with. If you already have a cluster available, you can go ahead and deploy to that cluster. If not, using [Minikube](https://minikube.sigs.k8s.io/docs/start/) is the fastest and most convenient way to get a simple, one-node cluster up and running on your local machine. You can install it on any platform, and you can use several drivers ranging from Docker to Hyper-V to set up Minikube.

You also need to clean and build the client package after cloning it from GitHub, so let's start with those steps:
```
git clone --recursive https://github.com/kubernetes-client/java
cd java
mvn install
```
Make sure that your `JAVA_HOME` is set properly, or the final command will fail.

### Making the Java project

Now that the prerequisites are complete, you are ready to set up the Kubernetes client in your Java project. For this instance, we will use a Java project that deploys a Spring [pet clinic web app](https://github.com/spring-projects/spring-petclinic). The initial application can be cloned from [this repo](https://github.com/Phantom-Intruder/java-kubeclient). This is the application we will be using as a base for the rest of the lab. Included in the repo is a folder called `configs`, which includes the deployment and service that need to be deployed onto your cluster for the application to run. We will not be doing this using the regular `kubectl apply -f deployment.yaml`; instead, we will manage the application using the Java client.

To start off, you need to set up the Kubernetes library you just built. To do this, you only need to add the following lines to the `pom.xml`:
```xml
<dependencies>
  <dependency>
    <groupId>io.kubernetes</groupId>
    <artifactId>client-java</artifactId>
    <version>15.0.1</version>
  </dependency>
</dependencies>
```
Run:

```
mvn clean install
```
This will set up the new Kubernetes dependency, which means that you are now all set to go. What we want to do first is to create a deployment and a service. There are two ways to achieve this. The Java client is powerful enough that you can create both resources without having to touch an external yaml file, by declaring everything you would declare in the yaml from within the Java code. An example of this can be found [here](https://github.com/kubernetes-client/java/blob/master/examples/examples-release-15/src/main/java/io/kubernetes/client/examples/DeployRolloutRestartExample.java), where a deployment is created, restarted, and rolled out from within the code itself. However, understanding and maintaining such syntax can become tedious, which is why the client also allows you to use a pre-existing yaml file to deploy the code from within the Java client. This is done by creating a File object from the yaml file and passing it into the relevant method:
```java
File file = new File("configs/service.yaml");
V1Service yamlSvc = (V1Service) Yaml.load(file);

CoreV1Api api = new CoreV1Api();
V1Service createResult =
    api.createNamespacedService("default", yamlSvc, null, null, null, null);
```
It's also possible to run other types of commands against the cluster. Examples of what can be done, along with code examples, can be found in the [examples page](https://github.com/kubernetes-client/java/wiki/3.-Code-Examples).
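The deployment in the `configs` folder can be created the same way. The following is a hedged sketch, not taken from the repo: the filename `configs/deployment.yaml` and the `default` namespace are assumptions, and the client must be able to reach your cluster via your kubeconfig. It mirrors the service example above, swapping `CoreV1Api` for `AppsV1Api`:

```java
import java.io.File;

import io.kubernetes.client.openapi.Configuration;
import io.kubernetes.client.openapi.apis.AppsV1Api;
import io.kubernetes.client.openapi.models.V1Deployment;
import io.kubernetes.client.util.ClientBuilder;
import io.kubernetes.client.util.Yaml;

public class CreateDeployment {
    public static void main(String[] args) throws Exception {
        // Build a client from the local kubeconfig and make it the default
        Configuration.setDefaultApiClient(ClientBuilder.defaultClient());

        // Filename is an assumption; adjust to match the repo's configs folder
        V1Deployment yamlDep =
            (V1Deployment) Yaml.load(new File("configs/deployment.yaml"));

        // Deployments live under the apps/v1 API group, hence AppsV1Api
        AppsV1Api appsApi = new AppsV1Api();
        V1Deployment result =
            appsApi.createNamespacedDeployment("default", yamlDep, null, null, null, null);
        System.out.println("Created deployment: " + result.getMetadata().getName());
    }
}
```

Running this requires a reachable cluster; it performs the same call that `kubectl apply -f configs/deployment.yaml` would, but from within the Java application.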

Network_Policies101/Deny_egress_traffic_that_has_no_rules.md

+2 -5
@@ -1,6 +1,7 @@
 # Deny Egress Traffic That Has No Rules
 
-We’re doing the same thing here but on egress traffic. You can find a NetworkPolicy definition that will deny all outgoing traffic unless allowed by another rule [here](./default-deny-egress.yaml). As you can see, it is basically the same thing as the rule that allowed no ingresses.
+We’re doing the same thing here but on egress traffic. The following NetworkPolicy definition will deny all outgoing traffic unless allowed by another rule:
 
 ## Steps
 ```
@@ -25,7 +26,3 @@ We can see that this is the case by switching over to our “access” pod in th
 / #
 
 ```
-
-Now, let's take a look at the other side of what we have been doing. What do we do if we want to allow only ingress traffic (and not egress)? For example, there might be a debugging situation where you need to test an application without having to worry about the network policies, meaning that you want to override any policies that are currently applied. So let's look into that.
-
-[Next: allowing ingress traffic](./allow_all_ingress_traffic_exclusively.md)

Network_Policies101/Deny_ingress_traffic_that_has_no_rules.md

+2 -6
@@ -1,8 +1,8 @@
 # Deny Ingress Traffic That Has No Rules
 
-An effective network security rule starts with denying all traffic by default unless explicitly allowed. This is how firewalls work. By default, Kubernetes regards any pod that is not selected by a NetworkPolicy as “non-isolated”. This means all ingress and egress traffic is allowed. So, a good foundation is to deny all traffic by default unless a NetworkPolicy rule defines which connections should pass. A NetworkPolicy definition for denying all ingress traffic may look like [this](./default-deny-ingress.yaml). You'll notice that it looks like a regular network policy, except without most parts of the policy definition.
+An effective network security rule starts with denying all traffic by default unless explicitly allowed. This is how firewalls work. By default, Kubernetes regards any pod that is not selected by a NetworkPolicy as “non-isolated”. This means all ingress and egress traffic is allowed. So, a good foundation is to deny all traffic by default unless a NetworkPolicy rule defines which connections should pass. A NetworkPolicy definition for denying all ingress traffic may look like this:
 
-## Lab
+## Steps
 ```
 git clone https://github.com/collabnix/kubelabs.git
 cd kubelabs/Network_Policies101/
@@ -67,7 +67,3 @@ You can clean up after this tutorial by deleting the network-policy-demo namespa
 ```
 kubectl delete ns network-policy-demo
 ```
-
-Now that we have looked at denying ingresses, let's look at denying egresses.
-
-[Next: Denying egress traffic](./Deny_egress_traffic_that_has_no_rules.md)

Network_Policies101/First_Network_Policy.md

+3 -8
@@ -1,6 +1,6 @@
 # Creating Your First NetworkPolicy Definition
 
-The NetworkPolicy resource uses labels to determine which pods it will manage. The security rules defined in the resource are applied to groups of pods. This works in the same sense as security groups that cloud providers use to enforce policies on groups of resources. Below is a sample network policy.
+The NetworkPolicy resource uses labels to determine which pods it will manage. The security rules defined in the resource are applied to groups of pods. This works in the same sense as security groups that cloud providers use to enforce policies on groups of resources.
 
 ```
 apiVersion: networking.k8s.io/v1
@@ -37,12 +37,7 @@ spec:
     ports:
     - protocol: TCP
       port: 5978
-```
-
-Let us look into it in detail. First, you will notice that it is of kind NetworkPolicy, and it is meant to apply to pods that have the label `db`. The next section says that this policy allows both ingresses and egresses in and out of the pods. The next two blocks define where the ingresses and egresses are allowed to come from. Ingress has a "from" section while egress has a "to" section, and each section has a largely similar body. An `ipBlock` section has been defined with a CIDR range to define which IP addresses are allowed. In the above case, the cidr is `172.17.0.0/16`, which means that this ingress rule covers everything from 172.17.0.0 to 172.17.255.255. The `/16` is what dictates this range. However, if you were to create a pod with an IP address of 172.17.1.0 and use this network policy, the pod will not be included in the ingress range. This is because of the `except` section that singles out `172.17.1.0/24`, which is the whole range from 172.17.1.0 to 172.17.1.255. Any pod with an address from that range will not fall into the ingress category.
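The `cidr`/`except` arithmetic above can be checked with a few lines of code. Below is a small, self-contained sketch (a hypothetical helper written for this lab, not part of any Kubernetes library) that reproduces the allow/deny decision for the `172.17.0.0/16` range with the `172.17.1.0/24` exception:

```java
public class CidrCheck {
    // Pack a dotted-quad IPv4 address into a 32-bit int
    static int toInt(String ip) {
        String[] p = ip.split("\\.");
        return (Integer.parseInt(p[0]) << 24) | (Integer.parseInt(p[1]) << 16)
             | (Integer.parseInt(p[2]) << 8) | Integer.parseInt(p[3]);
    }

    // True if ip falls inside the given CIDR block
    static boolean inCidr(String ip, String cidr) {
        String[] parts = cidr.split("/");
        int bits = Integer.parseInt(parts[1]);
        int mask = bits == 0 ? 0 : -1 << (32 - bits);
        return (toInt(ip) & mask) == (toInt(parts[0]) & mask);
    }

    // Mirrors the ipBlock rule: inside the cidr, but not in the except range
    static boolean allowed(String ip) {
        return inCidr(ip, "172.17.0.0/16") && !inCidr(ip, "172.17.1.0/24");
    }

    public static void main(String[] args) {
        System.out.println(allowed("172.17.2.5"));  // true: in the /16, outside the except
        System.out.println(allowed("172.17.1.9"));  // false: carved out by the /24 except
        System.out.println(allowed("10.0.0.1"));    // false: outside the cidr entirely
    }
}
```

Running it prints `true`, `false`, `false`, matching the ranges described above.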
-
-The next two parts of the ingress block are the `namespaceSelector` and `podSelector`. These allow you to match all the pods in a specific namespace, as well as all the pods that have a specific pod label. The final part is the `ports` section, which determines which ports and protocols can be used to communicate with the pods. In effect, everything inside this block filters which pods are allowed to communicate with the pods to which the network policy is applied.
+
+```
 
-Since the filtering part of the network policy is the biggest factor of this resource, let's take a closer look at that in the next section.
 
-[Next: Filtering with selectors](./how_can_we_fine-tune_network_policy_using_selectors.md)

Network_Policies101/README.md

+2 -7
@@ -1,13 +1,8 @@
 # What is a Kubernetes Network Policy?
-
-If you were to create two namespaces in your cluster and start two nginx pods in the two different namespaces, you would notice that each pod is able to communicate with all the other pods. This is great if you are running a single-node cluster on your local computer where you don't have to worry about pod security. However, when it comes to production clusters or large clusters within an organization where several teams deploy their workloads in different namespaces, you likely don't want these workloads interfering with each other. One option is to use a [virtual cluster](../Loft101/what-is-loft.md). The other, more obvious way is by introducing Kubernetes network policies.
-
-A network policy is a specification of how groups of pods are allowed to communicate with each other and other network endpoints. NetworkPolicy resources use labels to select pods and define rules which specify what traffic is allowed to the selected pods.
+
+A network policy is a specification of how groups of pods are allowed to communicate with each other and other network endpoints. NetworkPolicy resources use labels to select pods and define rules which specify what traffic is allowed to the selected pods.
 To apply a NetworkPolicy definition in a Kubernetes cluster, the network plugin must support NetworkPolicy. Otherwise, any rules that you apply are useless. Examples of network plugins that support NetworkPolicy include Calico, Cilium, Kube-router, Romana, and Weave Net.
 
 ![](img/1.gif)
 
 
-Do you need a NetworkPolicy resource defined in your cluster? The default Kubernetes policy allows pods to receive traffic from anywhere (these are referred to as non-isolated pods). So unless you are in a development environment, you’ll certainly need a NetworkPolicy in place. So let's take a look at creating your first network policy.
-
-[Next: Creating a network policy](./First_Network_Policy.md)
+Do you need a NetworkPolicy resource defined in your cluster? The default Kubernetes policy allows pods to receive traffic from anywhere (these are referred to as non-isolated pods). So unless you are in a development environment, you’ll certainly need a NetworkPolicy in place.

Network_Policies101/allow_all_ingress_traffic_exclusively.md

+1 -3
@@ -1,6 +1,6 @@
 # Allow All Ingress Traffic Exclusively
 
-We may want to override any other NetworkPolicy that restricts traffic to your pods, perhaps for troubleshooting a connection issue. This can be done by applying [this NetworkPolicy definition](./allow-ingress.yaml).
+We may want to override any other NetworkPolicy that restricts traffic to your pods, perhaps for troubleshooting a connection issue. This can be done by applying the following NetworkPolicy definition:
 
 ## Steps
 ```
@@ -55,5 +55,3 @@ Commercial support is available at
 The only difference we have here is that we add an ingress object with no rules at all.
 
 Be aware, though, that this policy will override any other isolating policy in the same namespace.
-
-[Next: Allow all egress traffic](./allow_all_egress_traffic_exclusively.md)

Network_Policies101/how_can_we_fine-tune_network_policy_using_selectors.md

+1 -5
@@ -46,8 +46,4 @@ ipBlock can also be used to block specific IPs from an allowed range. This can b
     cidr: 182.213.0.0/16
     except:
     - 182.213.50.43/24
-```
-
-Now that we took a look at allowing traffic into pods, let's move on to looking at denying traffic.
-
-[Next: Denying ingress traffic](./Deny_ingress_traffic_that_has_no_rules.md)
+```

README.md

+3
@@ -262,6 +262,9 @@
 - [What is Kafka](./Strimzi101/kafka.md)
 - [Running Kafka on Kubernetes](./Strimzi101/kafka-on-kubernetes.md)
 
+## Java client for Kubernetes
+- [Introduction](./JavaClient101/intro.md)
+
 ## Kubernetes Cheat Sheet
 - [Kubernetes Cheat Sheet](./Kubernetes%20Cheat%20Sheet/Kubernetes%20Cheat%20Sheet.md)

0 commit comments
