diff --git a/xml/ha_autoyast_deploy.xml b/xml/ha_autoyast_deploy.xml
index 4cb8732e..e2e84357 100644
--- a/xml/ha_autoyast_deploy.xml
+++ b/xml/ha_autoyast_deploy.xml
@@ -125,11 +125,17 @@ Bringing the cloned node online
-      Transfer the key configuration files from the already configured nodes
-      to the cloned node with &csync; as described in
-      .
+      Add the new node to the &csync; Sync Host list
+      as described in .
+
+
+      Transfer the key configuration files from the already configured nodes
+      to the cloned node with &csync; as described in
+      .
+
+
       To bring the node online, start the cluster services on the cloned
diff --git a/xml/ha_config_cli.xml b/xml/ha_config_cli.xml
index 966c5484..678a5803 100644
--- a/xml/ha_config_cli.xml
+++ b/xml/ha_config_cli.xml
@@ -169,16 +169,30 @@ crm cluster join --use-ssh-agent -c USER@NODE1
       ssh stage of the bootstrap scripts on its own. Run these commands
       after configuring the cluster in &yast;, but before bringing the cluster online.
-
-      Run the following command on the first node:
-
+
+      Configuring passwordless SSH (and SSH agent forwarding) with &crmsh;
+
+
+      Run the following command on the first node:
+
 user@node1> sudo --preserve-env=SSH_AUTH_SOCK \
 crm cluster init ssh --use-ssh-agent
-
-      Run the following command on all other nodes:
-
+
+
+
+      Start the cluster services on the first node so that the other nodes can
+      use the crm cluster join command:
+
+user@node1> sudo crm cluster start
+
+
+
+      Run the following command on all other nodes:
+
 user@node2> sudo --preserve-env=SSH_AUTH_SOCK \
 crm cluster join ssh --use-ssh-agent -c USER@NODE1
+
+
diff --git a/xml/ha_yast_cluster.xml b/xml/ha_yast_cluster.xml
index f52511e7..9da33db6 100644
--- a/xml/ha_yast_cluster.xml
+++ b/xml/ha_yast_cluster.xml
@@ -256,8 +256,11 @@
       &csync; helps you to keep track of configuration changes and to
-      keep files synchronized across the cluster nodes. For details, see
-      .
+      keep files synchronized across the cluster nodes. If you are using
+      &yast; to set up the cluster for the first time, we strongly recommend
+      configuring &csync;. If you do not use &csync;, you must manually copy
+      all configuration files from the first node to the rest of the nodes in
+      the cluster. For details, see .
@@ -1089,6 +1092,12 @@ following preparations:
       Preparing for initial synchronization with &csync;
+
+
+      Make sure passwordless SSH is configured between the nodes. This is required for
+      cluster communication.
+
+
       Copy the file /etc/csync2/csync2.cfg manually
@@ -1152,26 +1161,6 @@ Finished with 1 errors.
       Bringing the cluster online
-
-      Before starting the cluster, make sure passwordless SSH is configured between the nodes.
-      If you did not already configure passwordless SSH before setting up the cluster, you can
-      do so now by using the ssh stage of the bootstrap script:
-
-      Configuring passwordless SSH with &crmsh;
-
-      On the first node, run the following command:
-
-&prompt.root;crm cluster init ssh
-
-
-      On the rest of the nodes, run the following command:
-
-&prompt.root;crm cluster join ssh -c NODE1
-
-
       After the initial cluster configuration is done, start the cluster
       services on all cluster nodes to bring the stack online:
@@ -1188,6 +1177,10 @@ Finished with 1 errors.
       Start the cluster services on all cluster nodes:
 &prompt.root;crm cluster start --all
+
+      This command requires passwordless SSH access between the nodes. You can also
+      start individual nodes with crm cluster start.
+
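
For context, the ha_config_cli.xml hunk above assumes an SSH agent that is running locally and forwarded to the first node. A minimal end-to-end sketch of that flow, assuming a key at ~/.ssh/id_ed25519, a user named user, and the node names node1/node2 (only the crm commands are taken verbatim from the patch):

# On your local machine: start an agent, load a key that is authorized on
# all cluster nodes, and connect to the first node with agent forwarding (-A)
user@local> eval "$(ssh-agent)"
user@local> ssh-add ~/.ssh/id_ed25519
user@local> ssh -A user@node1

# On the first node: set up passwordless SSH through the forwarded agent,
# then start the cluster services so that the other nodes can join
user@node1> sudo --preserve-env=SSH_AUTH_SOCK \
crm cluster init ssh --use-ssh-agent
user@node1> sudo crm cluster start

# On each remaining node (also reached via ssh -A):
user@node2> sudo --preserve-env=SSH_AUTH_SOCK \
crm cluster join ssh --use-ssh-agent -c user@node1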
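
Similarly, the manual preparation step in ha_yast_cluster.xml (copying /etc/csync2/csync2.cfg to the other nodes before the initial synchronization) can be illustrated with a short sketch. The peer node name node2 and the key file name key_hacluster are assumptions for illustration:

# On the first node: copy the &csync; configuration (and the pre-shared key,
# if one was generated) to each of the other nodes
&prompt.root;scp /etc/csync2/csync2.cfg node2:/etc/csync2/
&prompt.root;scp /etc/csync2/key_hacluster node2:/etc/csync2/

# Push all files listed in csync2.cfg to the other nodes
# (-x: check and synchronize, -v: verbose output)
&prompt.root;csync2 -xv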
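
Finally, a brief sketch of bringing the stack online and checking the result. crm cluster start --all is quoted from the patch; crm status is a standard &crmsh; subcommand for inspecting node and resource state and is not part of the patch:

# On any node (requires passwordless SSH between the nodes):
&prompt.root;crm cluster start --all

# Verify that all nodes came online
&prompt.root;crm status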