Cluster Customization

The OpenShift Installer allows for several different levels of customization. It is important to understand how and why each of these levels is exposed and the ramifications of making changes at each of these levels. This guide will walk through each of them and provide examples of why an administrator may want to make customizations.

Cluster customization can be broken into four major levels: OpenShift, Kubernetes, Platform, and OS. These four levels are rough abstraction layers (OpenShift being the highest layer and OS being the lowest) and fall into either the validated or unvalidated buckets. The levels within the validated bucket (OpenShift and Platform) encompass customization that is safe to perform - installation and automatic updates will succeed regardless of the changes made (to a reasonable degree). The levels within the unvalidated bucket (Kubernetes and OS) encompass customization that is not necessarily safe - after introducing changes, installation and automatic updates may not succeed.

OpenShift Customization

The simplest customization is exposed by the installer as an interactive series of prompts. These prompts are required and represent a high level of customization. They are needed in order to get a running OpenShift cluster, but they aren't enough to get anything other than a vanilla deployment out of the box. Further customization is possible once the cluster has been provisioned, but that isn't covered in this document as it is a "Day 2" operation.
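
For illustration, a session might look something like the following (prompt wording and ordering vary by installer version and platform; the values shown are placeholders):

$ openshift-install create cluster
? SSH Public Key /home/user/.ssh/id_ed25519.pub
? Platform aws
? Region us-east-1
? Base Domain example.com
? Cluster Name test-cluster
? Pull Secret [? for help] ****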

Platform Customization

While the default cluster size may be sufficient for some, many will need to make alterations. This can include increasing the number of machines in the control plane, changing the type of the virtual machines that will be used (e.g. AWS instances), or adjusting the CIDR range used for the Kubernetes service network. This level of customization is exposed via the installer's install-config.yaml. The install-config can be accessed by running openshift-install create install-config. This file can then be modified as needed before running a later target.
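
For example, a typical edit-then-install flow might look like the following (the asset directory is illustrative):

$ openshift-install --dir $INSTALL_DIR create install-config
$ vi $INSTALL_DIR/install-config.yaml    # adjust machine counts, instance types, CIDRs, etc.
$ openshift-install --dir $INSTALL_DIR create cluster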

The install-config.yaml generated by the installer will not have all of the available fields populated, so they may need to be manually added if they are needed.

The following install-config.yaml properties are available:

  • apiVersion (required string): The API version for the install-config.yaml content. The current version (as described in this documentation) is v1. The installer may also support older API versions.
  • additionalTrustBundle (optional string): A PEM-encoded X.509 certificate bundle that will be added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured.
  • baseDomain (required string): The base domain to which the cluster should belong.
  • publish (optional string): This controls how user-facing endpoints of the cluster, such as the Kubernetes API and OpenShift routes, are exposed. Valid values are External (the default) and Internal.
  • controlPlane (optional machine-pool): The configuration for the machines that comprise the control plane.
  • compute (optional array of machine-pools): The configuration for the machines that comprise the compute nodes.
  • fips (optional boolean): Enables FIPS mode (default false).
  • imageContentSources (optional array of objects): Sources and repositories for the release-image content. Each entry in the array is an object with the following properties:
    • source (required string): The repository that users refer to, e.g. in image pull specifications.
    • mirrors (optional array of strings): One or more repositories that may also contain the same images.
  • metadata (required object): Kubernetes resource ObjectMeta, from which only the name parameter is consumed.
    • name (required string): The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}}.
  • networking (optional object): The configuration for the pod network provider in the cluster.
    • clusterNetwork (optional array of objects): The IP address pools for pods. The default is 10.128.0.0/14 with a host prefix of /23.
      • cidr (required IP network): The IP block address pool.
      • hostPrefix (required integer): The prefix size to allocate to each node from the CIDR. For example, 24 would allocate 2^8=256 addresses to each node. If this field is not used by the plugin, it can be left unset.
    • machineNetwork (optional array of objects): The IP address pools for machines.
      • cidr (required IP network): The IP block address pool. The default is 10.0.0.0/16 for all platforms other than libvirt. For libvirt, the default is 192.168.126.0/24.
    • networkType (optional string): The type of network to install. The default is OpenShiftSDN.
    • serviceNetwork (optional array of IP networks): The IP address pools for services. The default is 172.30.0.0/16.
  • platform (required object): The configuration for the specific platform upon which to perform the installation.
  • proxy (optional object): The proxy settings for the cluster. If unset, the cluster will not be configured to use a proxy.
    • httpProxy (optional string): The URL of the proxy for HTTP requests.
    • httpsProxy (optional string): The URL of the proxy for HTTPS requests.
    • noProxy (optional string): A comma-separated list of domains and CIDRs for which the proxy should not be used.
  • pullSecret (required string): The secret to use when pulling images.
  • sshKey (optional string): The public Secure Shell (SSH) key to provide access to instances.

IP networks

IP networks are represented as strings using Classless Inter-Domain Routing (CIDR) notation with a traditional IP address or network number, followed by the "/" (slash) character, followed by a decimal value between 0 and 32 that describes the number of significant bits. For example, 10.0.0.0/16 represents IP addresses 10.0.0.0 through 10.0.255.255.

Machine pools

The following machine-pool properties are available:

  • architecture (optional string): Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default).
  • hyperthreading (optional string): Determines the mode of hyperthreading that machines in the pool will utilize. Valid values are Enabled (the default) and Disabled.
  • name (required string): The name of the machine pool.
  • platform (optional object): Platform-specific machine-pool configuration.
  • replicas (optional integer): The machine count for the machine pool.
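
For reference, a controlPlane fragment exercising each of these properties might look like the following (the values are illustrative, and the empty platform object stands in for platform-specific settings):

controlPlane:
  architecture: amd64
  hyperthreading: Enabled
  name: master
  platform: {}
  replicas: 3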

Examples

While all complete install-config.yaml files will contain platform-specific sections, the following example fragments demonstrate platform-agnostic options:

Additional trust bundle

apiVersion: v1
additionalTrustBundle: |
  -----BEGIN CERTIFICATE-----
  ...base-64-encoded, DER Certificate Authority cert...
  -----END CERTIFICATE-----

  -----BEGIN CERTIFICATE-----
  ...base-64-encoded, DER Certificate Authority cert...
  -----END CERTIFICATE-----
baseDomain: example.com
metadata:
  name: test-cluster
platform: ...
pullSecret: '{"auths": ...}'
sshKey: ssh-ed25519 AAAA...

Custom machine pools

An example install config with custom machine pools to grow the size of the worker pool and disable hyperthreading:

apiVersion: v1
baseDomain: example.com
controlPlane:
  name: master
  hyperthreading: Disabled
compute:
- name: worker
  hyperthreading: Disabled
  replicas: 5
metadata:
  name: test-cluster
platform: ...
pullSecret: '{"auths": ...}'
sshKey: ssh-ed25519 AAAA...

Custom networking

An example install config with custom networking:

apiVersion: v1
baseDomain: example.com
metadata:
  name: test-cluster
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.0.0.0/16
  networkType: OpenShiftSDN
  serviceNetwork:
  - 172.30.0.0/16
platform: ...
pullSecret: '{"auths": ...}'
sshKey: ssh-ed25519 AAAA...

Image content sources

An example install config with custom image content sources:

apiVersion: v1
baseDomain: example.com
metadata:
  name: test-cluster
imageContentSources:
- mirrors:
  - registry.example.com/ocp4/openshift4
  source: quay.io/openshift-release-dev/ocp-release-nightly
- mirrors:
  - registry.example.com/ocp4/openshift4
  source: quay.io/openshift-release-dev/ocp-v4.0-art-dev
platform: ...
pullSecret: '{"auths": ...}'
sshKey: ssh-ed25519 AAAA...

That configuration is compatible with mirrored releases created with mirror commands like:

$ oc adm release mirror \
>   --from=quay.io/openshift-release-dev/ocp-release-nightly:4.2.0-0.nightly-XXXXXX \
>   --to=registry.example.com/ocp4/openshift4 \
>   --to-release-image=registry.example.com/ocp4/openshift4:4.2.0-0.nightly-XXXXXX
...
Success
Update image:  registry.example.com/ocp4/openshift4:4.2.0-0.nightly-2019-09-11-114314
Mirror prefix: registry.example.com/ocp4/openshift4

To use the new mirrored repository to install, add the following section to the install-config.yaml:

imageContentSources:
- mirrors:
  - registry.example.com/ocp4/openshift4
  source: quay.io/openshift-release-dev/ocp-release-nightly
- mirrors:
  - registry.example.com/ocp4/openshift4
  source: quay.io/openshift-release-dev/ocp-v4.0-art-dev
...

If your mirror(s) are signed by a certificate authority which RHCOS does not trust by default, you may also wish to configure an additional trust bundle.

Proxy

An example install config routing outgoing traffic through a proxy:

apiVersion: v1
baseDomain: example.com
metadata:
  name: test-cluster
proxy:
  httpsProxy: https://username:[email protected]:123/
  httpProxy: https://username:[email protected]:123/
  noProxy: 123.example.com,10.88.0.0/16
platform: ...
pullSecret: '{"auths": ...}'
sshKey: ssh-ed25519 AAAA...

If your proxy certificate is signed by a certificate authority which RHCOS does not trust by default, you may also wish to configure an additional trust bundle. If additionalTrustBundle and at least one proxy setting are configured, the cluster Proxy object will be configured with trustedCA referencing the additional trust bundle.
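
For example, a fragment setting both (reusing the placeholder certificate and proxy endpoints from the examples above) might look like:

apiVersion: v1
additionalTrustBundle: |
  -----BEGIN CERTIFICATE-----
  ...base-64-encoded, DER Certificate Authority cert...
  -----END CERTIFICATE-----
baseDomain: example.com
metadata:
  name: test-cluster
proxy:
  httpsProxy: https://username:[email protected]:123/
  httpProxy: https://username:[email protected]:123/
  noProxy: 123.example.com,10.88.0.0/16
platform: ...
pullSecret: '{"auths": ...}'
sshKey: ssh-ed25519 AAAA...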

Kubernetes Customization (unvalidated)

In addition to customizing OpenShift and aspects of the underlying platform, the installer allows arbitrary modification to the Kubernetes objects that are injected into the cluster. Note that there is currently no validation on the modifications that are made, so it is possible that the changes will result in a non-functioning cluster. The Kubernetes manifests can be viewed and modified using the manifests and manifest-templates targets.

The manifests target will render the manifest templates and output the result into the asset directory. Perhaps the most common use for this target is to include additional manifests in the initial installation. These manifests could be added after the installation as a "Day 2" operation, but there may be cases where they are necessary beforehand.

The manifest-templates target will output the unrendered manifest templates into the asset directory. This allows modification to the templates before they have been rendered, which may be useful to users who wish to reuse the templates between cluster deployments.
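
As a sketch, both targets are invoked like the other create targets used throughout this document (assuming the same asset-directory workflow):

$ openshift-install --dir $INSTALL_DIR create manifest-templates   # unrendered templates
$ openshift-install --dir $INSTALL_DIR create manifests            # rendered manifests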

Install Time Customization for Machine Configuration

IMPORTANT:

  • These customizations require using the manifests target, which does not provide compatibility guarantees.

In most cases, user applications should be run on the cluster via Kubernetes workload objects (e.g. DaemonSet, Deployment, etc). For example, DaemonSets are the most stable way to run a logging agent on all hosts. However, there may be some cases where these workloads need to be executed prior to the node joining the Kubernetes cluster. For example, a compliance mandate like "the user must run auditing tools as soon as the operating system comes up" might require a custom systemd unit for an auditing container in the Ignition config for some or all nodes.

Further, some aspects of RHEL CoreOS machines (usually kernel arguments such as nosmt for disabling hyperthreading) may need to be configured before user workloads land on a system.

The configuration of machines in OpenShift is controlled using MachineConfig objects, and the configuration applied to a given machine in the cluster is determined by the MachineConfigPool objects it matches. For these "day 1" cases, MachineConfig objects can be provided as additional manifests.

  1. openshift-install --dir $INSTALL_DIR create manifests

  2. Copy files with MachineConfig objects to $INSTALL_DIR/openshift/ directory.

    These custom MachineConfig objects are black boxes to the installer; the installer merely performs the equivalent of oc create -f <custom-machine-config-object> early enough in cluster bootstrap to ensure the configuration is used by the Machine Config Operator.

  3. openshift-install --dir $INSTALL_DIR create cluster

Control plane with no Taints

All control plane nodes by default register with the taint node-role.kubernetes.io/master=:NoSchedule, making them unschedulable by most normal workloads. An installation that requires the control plane to boot without that taint can push a custom MachineConfig object with a kubelet.service that doesn't include the taint.

For example:

  1. Run manifests target to create all the manifests.

    $ mkdir no-taint-cluster
    
    $ cp aws-install-config.yaml no-taint-cluster/install-config.yaml
    
    $ openshift-install --dir no-taint-cluster create manifests
    INFO Consuming "Install Config" from target directory
    
    $ ls -l no-taint-cluster/**
    no-taint-cluster/manifests:
    total 68
    -rw-r--r--. 1 xxxxx xxxxx  169 Feb 28 10:54 04-openshift-machine-config-operator.yaml
    -rw-r--r--. 1 xxxxx xxxxx 1589 Feb 28 10:54 cluster-config.yaml
    -rw-r--r--. 1 xxxxx xxxxx  149 Feb 28 10:54 cluster-dns-02-config.yml
    -rw-r--r--. 1 xxxxx xxxxx  243 Feb 28 10:54 cluster-infrastructure-02-config.yml
    -rw-r--r--. 1 xxxxx xxxxx  154 Feb 28 10:54 cluster-ingress-02-config.yml
    -rw-r--r--. 1 xxxxx xxxxx  557 Feb 28 10:54 cluster-network-01-crd.yml
    -rw-r--r--. 1 xxxxx xxxxx  327 Feb 28 10:54 cluster-network-02-config.yml
    -rw-r--r--. 1 xxxxx xxxxx  264 Feb 28 10:54 cvo-overrides.yaml
    -rw-r--r--. 1 xxxxx xxxxx  118 Feb 28 10:54 kube-cloud-config.yaml
    -rw-r--r--. 1 xxxxx xxxxx 1304 Feb 28 10:54 kube-system-configmap-root-ca.yaml
    -rw-r--r--. 1 xxxxx xxxxx 4030 Feb 28 10:54 machine-config-server-tls-secret.yaml
    -rw-r--r--. 1 xxxxx xxxxx  856 Feb 28 10:54 pull.json
    
    no-taint-cluster/openshift:
    total 28
    -rw-r--r--. 1 xxxxx xxxxx  293 Feb 28 10:54 99_binding-discovery.yaml
    -rw-r--r--. 1 xxxxx xxxxx  181 Feb 28 10:54 99_kubeadmin-password-secret.yaml
    -rw-r--r--. 1 xxxxx xxxxx  330 Feb 28 10:54 99_openshift-cluster-api_cluster.yaml
    -rw-r--r--. 1 xxxxx xxxxx 1015 Feb 28 10:54 99_openshift-cluster-api_master-machines-0.yaml
    -rw-r--r--. 1 xxxxx xxxxx 2655 Feb 28 10:54 99_openshift-cluster-api_master-user-data-secret.yaml
    -rw-r--r--. 1 xxxxx xxxxx 1750 Feb 28 10:54 99_openshift-cluster-api_worker-machineset.yaml
    -rw-r--r--. 1 xxxxx xxxxx 2655 Feb 28 10:54 99_openshift-cluster-api_worker-user-data-secret.yaml
  2. Create a MachineConfig that includes a kubelet.service that has no taints.

    cat > no-taint-cluster/openshift/99-master-kubelet-no-taint.yaml <<EOF
    apiVersion: machineconfiguration.openshift.io/v1
    kind: MachineConfig
    metadata:
      labels:
        machineconfiguration.openshift.io/role: master
      name: 02-master-kubelet
    spec:
      config:
        ignition:
          version: 3.1.0
        systemd:
          units:
          - contents: |
              [Unit]
              Description=Kubernetes Kubelet
              Wants=rpc-statd.service
    
              [Service]
              Type=notify
              ExecStartPre=/bin/mkdir --parents /etc/kubernetes/manifests
              ExecStartPre=/bin/rm -f /var/lib/kubelet/cpu_manager_state
              EnvironmentFile=-/etc/kubernetes/kubelet-workaround
              EnvironmentFile=-/etc/kubernetes/kubelet-env
    
              ExecStart=/usr/bin/hyperkube \
                  kubelet \
                    --config=/etc/kubernetes/kubelet.conf \
                    --bootstrap-kubeconfig=/etc/kubernetes/kubeconfig \
                    --rotate-certificates \
                    --kubeconfig=/var/lib/kubelet/kubeconfig \
                    --container-runtime=remote \
                    --container-runtime-endpoint=/var/run/crio/crio.sock \
                    --allow-privileged \
                    --node-labels=node-role.kubernetes.io/master \
                    --minimum-container-ttl-duration=6m0s \
                    --client-ca-file=/etc/kubernetes/ca.crt \
                    --cloud-provider=aws \
                    --volume-plugin-dir=/etc/kubernetes/kubelet-plugins/volume/exec \
                    \
                    --anonymous-auth=false \
    
              Restart=always
              RestartSec=10
    
              [Install]
              WantedBy=multi-user.target
            enabled: true
            name: kubelet.service
    EOF

    The machineconfiguration.openshift.io/role: master label attaches this MachineConfig to the master MachineConfigPool. The default configuration for the kubelet.service on libvirt includes the taint.

  3. Run cluster target to create the cluster using the custom manifests.

    $ openshift-install --dir no-taint-cluster create cluster
    INFO Consuming "Openshift Manifests" from target directory
    INFO Consuming "Master Machines" from target directory
    INFO Consuming "Common Manifests" from target directory
    INFO Creating infrastructure resources...
    INFO Waiting up to 30m0s for the Kubernetes API at https://api.test-cluster.example.com:6443...
    ...

    Check that no control plane nodes registered with taints:

    $ oc --kubeconfig no-taint-cluster/auth/kubeconfig get nodes -ojson | jq '.items[] | select(.metadata.labels."node-role.kubernetes.io/master" == "") | .spec.taints'
    null

    Check that the 02-master-kubelet MachineConfig exists in the cluster:

    $ oc --kubeconfig no-taint-cluster/auth/kubeconfig get machineconfigs
    NAME                                                        GENERATEDBYCONTROLLER        IGNITIONVERSION   CREATED
    00-master                                                   3.11.0-744-g5b05d9d3-dirty   3.1.0             137m
    00-master-ssh                                               3.11.0-744-g5b05d9d3-dirty                     137m
    00-worker                                                   3.11.0-744-g5b05d9d3-dirty   3.1.0             137m
    00-worker-ssh                                               3.11.0-744-g5b05d9d3-dirty                     137m
    01-master-container-runtime                                 3.11.0-744-g5b05d9d3-dirty   3.1.0             137m
    01-master-kubelet                                           3.11.0-744-g5b05d9d3-dirty   3.1.0             137m
    02-master-kubelet                                                                        3.1.0             137m
    01-worker-container-runtime                                 3.11.0-744-g5b05d9d3-dirty   3.1.0             137m
    01-worker-kubelet                                           3.11.0-744-g5b05d9d3-dirty   3.1.0             137m
    99-master-3c81ffa3-3b8d-11e9-ac1e-52fdfc072182-registries   3.11.0-744-g5b05d9d3-dirty                     133m
    99-worker-3c83a226-3b8d-11e9-ac1e-52fdfc072182-registries   3.11.0-744-g5b05d9d3-dirty                     133m
    master-55491738d7cd1ad6c72891e77c35e024                     3.11.0-744-g5b05d9d3-dirty   3.1.0             137m
    worker-edab0895c59dba7a566f4b955d87d964                     3.11.0-744-g5b05d9d3-dirty   3.1.0             137m

Nodes with Custom Kernel Arguments

Custom kernel arguments can be applied through manifests as an install-time operation, and can also be applied via a MachineConfig as a day 2 operation. The kernel arguments are applied upon boot and will be honored by the Machine Config Operator from then on.

Example application of loglevel=7 (changing the Linux kernel log level to KERN_DEBUG) for master nodes:

  1. Run manifests target to create all the manifests.

    $ mkdir log_debug_cluster
    
    $ openshift-install --dir log_debug_cluster create manifests
    ...
    
    $ ls -l log_debug_cluster/openshift
    ...
    99_openshift-machineconfig_99-master-ssh.yaml
    99_openshift-machineconfig_99-worker-ssh.yaml
    ...
  2. Create a MachineConfig that adds a kernel argument to change the log level:

    cat > log_debug_cluster/openshift/99-master-kargs-loglevel.yaml <<EOF
    apiVersion: machineconfiguration.openshift.io/v1
    kind: MachineConfig
    metadata:
      labels:
        machineconfiguration.openshift.io/role: "master"
      name: 99-master-kargs-loglevel
    spec:
      config:
        ignition:
          version: 3.1.0
      kernelArguments:
        - 'loglevel=7'
    EOF
  3. Run cluster target to create the cluster using the custom manifests.

    $ openshift-install --dir log_debug_cluster create cluster
    ...

    Check that the MachineConfig has the kernel arguments applied:

    $ oc --kubeconfig log_debug_cluster/auth/kubeconfig get machineconfigs
    NAME                                                        GENERATEDBYCONTROLLER                      IGNITIONVERSION   CREATED
    99-master-kargs-loglevel                                    bd846958bc95d049547164046a962054fca093df   3.1.0             26h
    99-master-ssh                                               bd846958bc95d049547164046a962054fca093df   3.1.0             26h
    ...
    rendered-master-5f4a5bd806567871be1b608474eca373            bd846958bc95d049547164046a962054fca093df   3.1.0             26h
    
    $ oc describe machineconfig/rendered-master-5f4a5bd806567871be1b608474eca373 | grep -A 1 "Kernel Arguments"
      Kernel Arguments:
        loglevel=7

    If you wish to confirm the kernel argument is indeed being applied on the system, you can oc debug into a node and check with rpm-ostree kargs.
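
    For example (a sketch; the node name and the trimmed output are illustrative):

    $ oc --kubeconfig log_debug_cluster/auth/kubeconfig debug node/<master_node>
    sh-4.2# chroot /host rpm-ostree kargs
    ... loglevel=7 ...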

Switching RHCOS host kernel using KernelType

Beginning with OCP 4.4, it is possible to switch from the traditional kernel to the Real Time (RT) kernel on RHCOS nodes. At install time, switching to the RT kernel can be done through manifests as an installer operation. See customizing MachineConfig to configure kernelType during install time. To set kernelType as a day 2 operation, see the MachineConfiguration doc.

Example of switching to the RT kernel on worker nodes during the initial cluster install:

  1. Run manifests target to create all the manifests.

    $ mkdir realtime_kernel
    $ openshift-install --dir realtime_kernel create manifests
  2. Create a MachineConfig that sets kernelType to realtime:

    cat > realtime_kernel/openshift/99-worker-kerneltype.yaml <<EOF
    apiVersion: machineconfiguration.openshift.io/v1
    kind: MachineConfig
    metadata:
      labels:
        machineconfiguration.openshift.io/role: "worker"
      name: 99-worker-kerneltype
    spec:
      config:
        ignition:
          version: 3.1.0
      kernelType: realtime
    EOF
  3. Run cluster target to create the cluster using the custom manifests.

    $ openshift-install --dir realtime_kernel create cluster

    Check that the MachineConfig has the kernelType applied:

    $ oc --kubeconfig realtime_kernel/auth/kubeconfig get machineconfigs
    NAME                                                        GENERATEDBYCONTROLLER                      IGNITIONVERSION   AGE
    ...
    99-worker-kerneltype                                                                                   3.1.0             80m
    99-worker-ssh                                                                                          3.1.0             80m
    rendered-worker-853ba9bf0337db528a857a9c7380b95a            6306be9274cd3052f5075c81fa447c7895b7b9f4   3.1.0             78m
    ...
    
  4. To confirm that the worker nodes have switched to the RT kernel, access one of the worker nodes and run uname -a:

    $ oc --kubeconfig realtime_kernel/auth/kubeconfig debug node/<worker_node>
    ...
    sh-4.2# uname -a
    Linux <worker_node> 4.18.0-147.3.1.rt24.96.el8_1.x86_64 #1 SMP PREEMPT RT Wed Nov 27 18:29:55 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux

Note: The RT kernel lowers throughput (performance) in return for improved worst-case latency bounds. This feature is intended only for use cases that require consistent low latency. For more information, see the Linux Foundation wiki and the RHEL RT portal.

Enabling RHCOS Extensions

RHCOS is a minimal, OCP-focused OS that provides capabilities common across all platforms. Beginning with OCP 4.6, extensions support allows users to enable a limited set of additional functionality on RHCOS nodes. In OCP 4.6, the supported extension is usbguard.

Extensions can be installed by creating a MachineConfig object. They can be enabled during cluster installation as well as later on. See customizing MachineConfig to enable an extension during install time. For day 2 operations, see the MachineConfiguration doc.

Example MachineConfig to install usbguard on worker nodes:

apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: worker-extensions
spec:
  config:
    ignition:
      version: 3.1.0
  extensions:
    - usbguard
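
One way to verify the extension after installation (a sketch; the node name and output are illustrative) is to oc debug into a worker node and query the installed package:

$ oc debug node/<worker_node>
sh-4.2# chroot /host rpm -q usbguard
usbguard-<version>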

OS Customization (unvalidated)

In rare circumstances, certain modifications to the bootstrap and other machines may be necessary. The installer provides the "ignition-configs" target, which allows arbitrary modification to the Ignition Configs used to boot these machines. Note that there is currently no validation on the modifications that are made, so it is possible that the changes will result in a non-functioning cluster.

An example worker.ign is shown below. It has been modified to increase the HTTP timeouts used when fetching the generated worker config from the cluster. This isn't likely to be useful, but it does demonstrate what is possible.

{
  "ignition": {
    "version": "3.1.0",
    "config": {
      "merge": [{
        "source": "https://test-cluster-api.example.com:22623/config/worker"
      }]
    },
    "security": {
      "tls": {
        "certificateAuthorities": [{
          "source": "data:text/plain;charset=utf-8;base64,LS0tLS1CRU..."
        }]
      }
    },
    "timeouts": {
      "httpResponseHeaders": 120
    }
  },
  "passwd": {
    "users": [{
      "name": "core",
      "sshAuthorizedKeys": [
        "ssh-ed25519 AAAA..."
      ]
    }]
  }
}
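
After editing, the result can be sanity-checked with the standalone ignition-validate tool from the Ignition project, assuming it is installed separately (it is not shipped with the installer):

$ ignition-validate worker.ign && echo "worker.ign is valid"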