Commit fcf1673

docs: enhance cluster-proxy blog post with improved text, structure, and section hierarchy (#529)
- Fix typos and grammar issues throughout the document
- Standardize terminology (Hub/Spoke → hub/managed clusters)
- Improve section structure with better logical flow (Setup → Install → Use)
- Add detailed explanation for GATEWAY_IP configuration
- Fix cluster name inconsistencies (cluster1 → managed)
- Move 'Verifying the Deployment' under 'Installing Cluster Proxy' section
- Rename main usage section for better clarity

These improvements enhance readability and make the tutorial easier to follow.

Signed-off-by: xuezhaojun <[email protected]>
1 parent 106ba5f commit fcf1673

1 file changed: content/en/blog/cluster-proxy-support-service-proxy (+313, -0)

---
title: Cluster Proxy Now Supports "Service Proxy" — An Easy Way to Access Services in Managed Clusters
date: 2025-11-11
author: Zhao Xue [@xuezhaojun](https://github.com/xuezhaojun)
toc_hide: true
---

## Introduction

Cluster Proxy is an [OCM addon](https://github.com/open-cluster-management-io/cluster-proxy) that provides L4 network connectivity between hub and managed clusters through a reverse proxy tunnel. In previous versions, accessing services on managed clusters through cluster-proxy required using a specialized Go package, the [konnectivity client](https://github.com/open-cluster-management-io/cluster-proxy/blob/main/examples/test-client.md).

With the new v0.9.0 release, we've introduced a more convenient approach — "Service Proxy". This feature provides an HTTPS service that allows users to access the kube-apiserver and other services in managed clusters through a specific URL structure. Additionally, it introduces a more user-friendly authentication and authorization mechanism using **Impersonation**, enabling users to authenticate and authorize against the managed cluster's kube-apiserver using their hub user token.

Let's set up a simple test environment to demonstrate these new capabilities.

## Setting Up the Environment

First, create a basic OCM environment with one hub cluster and one managed cluster.

Create a hub cluster with port mapping for the proxy-entrypoint service. The `extraPortMappings` configuration exposes port 30091 from the container to the host machine, allowing external access to the proxy service:

```bash
# Create hub cluster with port mapping for proxy-entrypoint service
cat <<EOF | kind create cluster --name "hub" --config=-
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraPortMappings:
  - containerPort: 30091
    hostPort: 30091
    protocol: TCP
EOF

# Create managed cluster
kind create cluster --name "managed"

# Initialize the OCM hub cluster
echo "Initializing the OCM hub cluster..."
clusteradm init --wait --context kind-hub

# Get join command from hub
joincmd=$(clusteradm get token --context kind-hub | grep clusteradm)

# Join managed cluster to hub
echo "Joining managed cluster to hub..."
$(echo ${joincmd} --force-internal-endpoint-lookup --wait --context kind-managed | sed "s/<cluster_name>/managed/g")

# Accept the managed cluster
echo "Accepting managed cluster..."
clusteradm accept --context kind-hub --clusters managed --wait

# Verify the setup
echo "Verifying the setup..."
kubectl get managedclusters --all-namespaces --context kind-hub
```

## Installing Cluster Proxy

Next, install the Cluster Proxy addon following the [official installation guide](https://open-cluster-management.io/docs/getting-started/integration/cluster-proxy/):

```shell
helm repo add ocm https://open-cluster-management.io/helm-charts/
helm repo update
helm search repo ocm/cluster-proxy
```

Verify that the CHART VERSION is v0.9.0 or later:

```shell
$ helm search repo ocm/cluster-proxy
NAME               CHART VERSION  APP VERSION  DESCRIPTION
ocm/cluster-proxy  0.9.0          1.1.0        A Helm chart for Cluster-Proxy OCM Addon
```

### Setting Up TLS Certificates

The new deployment `cluster-proxy-addon-user` requires server certificates for its HTTPS service; otherwise, the deployment will hang in the `ContainerCreating` state.

To create the certificates, first install cert-manager:

```shell
kubectl --context kind-hub apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.17.0/cert-manager.yaml
kubectl --context kind-hub wait --for=condition=ready pod -l app.kubernetes.io/instance=cert-manager -n cert-manager --timeout=300s
```

Next, create the certificate resources:

```shell
kubectl --context kind-hub apply -f - <<EOF
apiVersion: v1
kind: Namespace
metadata:
  name: open-cluster-management-addon
---
# Self-signed Issuer for bootstrapping the CA certificate
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: selfsigned-issuer
  namespace: open-cluster-management-addon
spec:
  selfSigned: {}
---
# CA Certificate for cluster-proxy
# This creates a self-signed CA that will be used to issue certificates for services
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: cluster-proxy-ca
  namespace: open-cluster-management-addon
spec:
  isCA: true
  commonName: cluster-proxy-ca
  secretName: cluster-proxy-ca-secret
  duration: 87600h # 10 years
  privateKey:
    algorithm: RSA
    size: 4096
  issuerRef:
    name: selfsigned-issuer
    kind: Issuer
---
# Issuer that uses the CA certificate to issue certificates
# Changed from ClusterIssuer to Issuer to allow accessing secret in the same namespace
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: cluster-proxy-ca-issuer
  namespace: open-cluster-management-addon
spec:
  ca:
    secretName: cluster-proxy-ca-secret
---
# Certificate for cluster-proxy-user-server
# This creates a TLS certificate for the user server
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: cluster-proxy-user-serving-cert
  namespace: open-cluster-management-addon
spec:
  secretName: cluster-proxy-user-serving-cert
  duration: 8760h # 1 year
  renewBefore: 720h # 30 days
  commonName: cluster-proxy-addon-user.open-cluster-management-addon.svc
  dnsNames:
  - cluster-proxy-addon-user
  - cluster-proxy-addon-user.open-cluster-management-addon
  - cluster-proxy-addon-user.open-cluster-management-addon.svc
  - cluster-proxy-addon-user.open-cluster-management-addon.svc.cluster.local
  privateKey:
    algorithm: RSA
    size: 2048
  issuerRef:
    name: cluster-proxy-ca-issuer
    kind: Issuer
EOF
```

Verify the secret is created:

```shell
kubectl --context kind-hub get secret -n open-cluster-management-addon cluster-proxy-user-serving-cert
```

### Installing the Cluster Proxy Helm Chart

Now install the cluster-proxy addon with the necessary configuration:

```shell
# Set the gateway IP address for the proxy server
# This is the Docker gateway IP that allows the Kind cluster to communicate with services
# running on the host machine. The managed cluster will use this address to connect
# to the proxy server running in the hub cluster.
GATEWAY_IP="172.17.0.1"

kubectl config use-context kind-hub
helm install -n open-cluster-management-addon --create-namespace \
    cluster-proxy ocm/cluster-proxy \
    --set "proxyServer.entrypointAddress=${GATEWAY_IP}" \
    --set "proxyServer.port=30091" \
    --set "enableServiceProxy=true"
```
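
The hard-coded `172.17.0.1` is Docker's default bridge gateway and may not match your setup. As a sketch, you could look up the gateway of the Docker network that kind uses (named `kind` by default; the network name and fallback address here are assumptions, so verify them in your environment):

```shell
# Inspect the "kind" Docker network for its gateway IP; fall back to Docker's
# default bridge gateway if the network cannot be inspected.
GATEWAY_IP=$(docker network inspect -f '{{(index .IPAM.Config 0).Gateway}}' kind 2>/dev/null || true)
GATEWAY_IP=${GATEWAY_IP:-172.17.0.1}
echo "GATEWAY_IP=${GATEWAY_IP}"
```

Using the discovered value keeps the managed cluster's agents pointed at an address that is actually routable from inside the kind containers.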

To expose the proxy server to the managed clusters, we need to create a service that makes the proxy server accessible from the external network.

```shell
cat <<'EOF' | kubectl --context kind-hub apply -f -
apiVersion: v1
kind: Service
metadata:
  name: proxy-entrypoint-external
  namespace: open-cluster-management-addon
  labels:
    app: cluster-proxy
    component: proxy-entrypoint-external
spec:
  type: NodePort
  selector:
    proxy.open-cluster-management.io/component-name: proxy-server
  ports:
  - name: agent-server
    port: 8091
    targetPort: 8091
    nodePort: 30091
    protocol: TCP
EOF
```

### Verifying the Deployment

After completing the installation, verify that the `cluster-proxy-addon-user` deployment and service have been created and are running in the `open-cluster-management-addon` namespace:

```shell
$ kubectl get deploy -n open-cluster-management-addon
NAME                       READY   UP-TO-DATE   AVAILABLE   AGE
cluster-proxy-addon-user   1/1     1            1           10s
```

```shell
$ kubectl get svc -n open-cluster-management-addon
NAME                       TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
cluster-proxy-addon-user   ClusterIP   10.96.100.100   <none>        443/TCP   10s
```

## Using Service Proxy to Access Managed Clusters

Now that the installation is complete, let's demonstrate how to use the Service Proxy feature to access resources in managed clusters. We'll access pods in the `open-cluster-management-agent` namespace in the `managed` cluster, which will also showcase the impersonation authentication mechanism.

### Creating a Hub User

First, create a hub user (a service account in the hub cluster) named `test-sa`:

```shell
kubectl --context kind-hub create serviceaccount -n open-cluster-management-hub test-sa
```

### Configuring RBAC Permissions

Next, create a Role and RoleBinding in the `managed` cluster to grant the `test-sa` user permission to list and get pods in the `open-cluster-management-agent` namespace:

```shell
kubectl --context kind-managed apply -f - <<EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: test-sa-rolebinding
  namespace: open-cluster-management-agent
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: test-sa-role
subjects:
- kind: User
  name: cluster:hub:system:serviceaccount:open-cluster-management-hub:test-sa
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: test-sa-role
  namespace: open-cluster-management-agent
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list"]
EOF
```


**Important Note:**

- The `User` name follows the format `cluster:hub:system:serviceaccount:<namespace>:<serviceaccount>`, where `<namespace>` and `<serviceaccount>` are the namespace and name of the service account in the hub cluster.
- Alternatively, you can use [cluster-permission](https://github.com/open-cluster-management-io/cluster-permission) to create roles and role bindings from the hub cluster side.
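
The mapping from a hub service account to the impersonated user name can be sketched as a small helper (`hub_sa_user` is an illustrative name, not part of cluster-proxy):

```shell
# Compose the impersonated user name that the managed cluster's RBAC sees
# for a given hub service account (namespace, then service account name).
hub_sa_user() {
  echo "cluster:hub:system:serviceaccount:${1}:${2}"
}

hub_sa_user open-cluster-management-hub test-sa
# prints: cluster:hub:system:serviceaccount:open-cluster-management-hub:test-sa
```

Whatever this helper prints is the exact string that must appear in the `subjects` of the RoleBinding on the managed cluster.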

### Generating an Access Token

Generate a token for the `test-sa` service account:

```shell
TOKEN=$(kubectl --context kind-hub -n open-cluster-management-hub create token test-sa)
```

### Testing the Service Proxy

Now let's test accessing pods in the `managed` cluster through the `cluster-proxy-addon-user` service. We'll start a debug container in the hub cluster and use curl to make the request:

```bash
POD=$(kubectl get pods -n open-cluster-management-addon -l component=cluster-proxy-addon-user --field-selector=status.phase=Running -o jsonpath='{.items[0].metadata.name}')

kubectl debug -it $POD -n open-cluster-management-addon --image=praqma/network-multitool -- sh -c "curl -k -H 'Authorization: Bearer $TOKEN' https://cluster-proxy-addon-user.open-cluster-management-addon.svc.cluster.local:9092/managed/api/v1/namespaces/open-cluster-management-agent/pods"
```

The URL structure for accessing resources is:

```
https://cluster-proxy-addon-user.<namespace>.svc.cluster.local:9092/<cluster-name>/<kubernetes-api-path>
```
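
This URL pattern can be sketched as a small helper (`service_proxy_url` is an illustrative name, not part of cluster-proxy; it assumes the addon is installed in the `open-cluster-management-addon` namespace as in this tutorial):

```shell
# Build a Service Proxy URL from a managed cluster name and a Kubernetes
# API path, following the pattern shown above.
service_proxy_url() {
  local cluster="$1" api_path="$2"
  echo "https://cluster-proxy-addon-user.open-cluster-management-addon.svc.cluster.local:9092/${cluster}${api_path}"
}

service_proxy_url managed /api/v1/namespaces/open-cluster-management-agent/pods
```

The output of this helper is exactly the URL passed to curl in the debug container above.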

You should see a JSON response listing the pods in the `open-cluster-management-agent` namespace of the `managed` cluster, demonstrating successful authentication and authorization through the impersonation mechanism.

## Summary

In this blog post, we've demonstrated the new Service Proxy feature introduced in cluster-proxy v0.9.0. The key highlights include:

- **Service Proxy**: A new HTTPS-based method to access services in managed clusters without requiring the konnectivity client package
- **Impersonation**: A user-friendly authentication mechanism that allows hub users to access managed cluster resources using their hub tokens
- **Simple URL Structure**: Access managed cluster resources through a straightforward URL pattern

These features significantly simplify the process of accessing managed cluster services, making it easier to build tools and integrations on top of OCM's multi-cluster management capabilities.

We hope you find these new features useful! For more information, please visit the [cluster-proxy GitHub repository](https://github.com/open-cluster-management-io/cluster-proxy).
