
Commit 58caa4f

k8s resource alloc
Signed-off-by: Larry Peterson <[email protected]>
1 parent e34f75a commit 58caa4f

File tree

1 file changed: +52 -15 lines changed


onramp/scale.rst

+52 -15
@@ -12,16 +12,25 @@ to remove the Quick Start configuration by typing:
 
    $ make aether-uninstall
 
-There are two aspects of our deployment that scale independently. One
-is Aether proper: a Kubernetes cluster running the set of
-microservices that implement SD-Core and AMP (and optionally, other
-edge apps). The second is gNBsim: the emulated RAN that generates
-traffic directed at the Aether cluster. The assumption in this section
-is that there are at least two servers—one for the Aether cluster and
-one for gNBsim—with each able to scale independently. For example,
-having four servers would support a 3-node Aether cluster and a 1-node
-workload generator. This example configuration corresponds to the
-following ``hosts.ini`` file:
+Host Inventory File
+~~~~~~~~~~~~~~~~~~~~~~
+
+Adding servers to a deployment is primarily a matter of editing the
+``hosts.ini`` file, with `host groups` defined according to the role
+each server is to play. We'll introduce additional host groups in later
+sections, but for starters, there are two aspects of our deployment
+that scale independently. One is Aether proper: a Kubernetes cluster
+running the set of microservices that implement SD-Core and AMP (and
+optionally, other edge apps); this corresponds to a combination of the
+``master_nodes`` and ``worker_nodes`` groups. The second is gNBsim:
+the emulated RAN that generates traffic directed at the Aether
+cluster, corresponding to the ``gnbsim_nodes`` host group.
+
+This section assumes there are at least two servers—one for the Aether
+cluster and one for gNBsim—with each able to scale independently. For
+example, having four servers would support a 3-node Aether cluster and
+a 1-node workload generator. This example configuration corresponds to
+the following ``hosts.ini`` file:
 
 .. code-block::
 
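For concreteness, here is a minimal sketch of the inventory this
example describes, written in Ansible's INI inventory format. The
group names ``master_nodes``, ``worker_nodes``, and ``gnbsim_nodes``
come from the text above; the host names, addresses, and connection
variables are hypothetical placeholders, not OnRamp defaults.

    # Hypothetical hosts.ini: a 3-node Aether cluster (one master, two
    # workers) plus a 1-node gNBsim workload generator.
    [all]
    node1 ansible_host=192.168.1.101 ansible_user=aether
    node2 ansible_host=192.168.1.102 ansible_user=aether
    node3 ansible_host=192.168.1.103 ansible_user=aether
    node4 ansible_host=192.168.1.104 ansible_user=aether

    [master_nodes]
    node1

    [worker_nodes]
    node2
    node3

    [gnbsim_nodes]
    node4
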
@@ -79,11 +88,39 @@ gNBs in place of gNBsim. Note that if you are primarily interested in
 the latter, you can still run Aether on a single server, and then
 connect that node to one or more physical gNBs.
 
-Finally, apart from being able able to run SD-Core and gNBsim on
-separate nodes—thereby cleanly decoupling the Core from the RAN—one
-question we have not yet answered is why you might want to scale the
-Aether cluster to multiple nodes. One answer is that you are concerned
-about availability, so want to introduce redundancy.
+Allocating CPU Cores
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Kubernetes provides a mechanism for allocating CPU cores to specific
+pods. OnRamp manages this capability in two steps.
+
+First, directory ``deps/k8s/roles/rke2/templates`` contains two files
+used to configure a Kubernetes deployment. These files are referenced
+in ``vars/main.yml`` as variables
+``k8s.rke2.config.params_file.master`` and
+``k8s.rke2.config.params_file.worker``; edit these variables should
+you elect to substitute different files. Uncomment the block
+labeled *"Param's for Exclusive CPU"* in both files to enable the
+allocation feature. You need to reinstall Kubernetes for these changes
+to take effect.
+
+Second, edit the values override file for whatever service is to be
+granted an exclusive CPU core. A typical example is to allocate a core
+to the UPF, which can be done by editing the ``omec-user-plane``
+section of ``deps/5gc/roles/core/templates/sdcore-5g-values.yaml``,
+changing variable ``resources.enabled`` from ``false`` to
+``true``. Similar variables exist for other SD-Core pods. You need to
+reinstall the 5G Core for this change to take effect.
+
+
+Other Options
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Apart from being able to run SD-Core and gNBsim on separate
+nodes—thereby cleanly decoupling the Core from the RAN—one question
+we have not yet answered is why you might want to scale the Aether
+cluster to multiple nodes. One answer is that you are concerned about
+availability, so want to introduce redundancy.
 
 A second answer is that you want to run some other edge application,
 such as an IoT or AI/ML platform, on the Aether cluster. Such