
Commit 3a4a190

Admin Guide: integrate proofing corrections
1 parent dd82381 commit 3a4a190

7 files changed: +44 -42 lines changed

xml/ha_cluster_lvm.xml  (+4 -4)

@@ -830,7 +830,7 @@ vdc 253:32 0 20G 0 disk
 logical volume for a cmirrord setup on &productname; 11 or 12 as
 described in <link
 xlink:href="https://www.suse.com/documentation/sle-ha-12/singlehtml/book_sleha/book_sleha.html#sec.ha.clvm.config.cmirrord"
-/>). </para>
+/>.)</para>
 </formalpara>
 <para>
 By default, <command>mdadm</command> reserves a certain amount of space
@@ -843,13 +843,13 @@ vdc 253:32 0 20G 0 disk
 The <option>data-offset</option> must leave enough space on the device
 for cluster MD to write its metadata to it. On the other hand, the offset
 must be small enough for the remaining capacity of the device to accommodate
-all physical volume extents of the migrated volume. Because the volume can
+all physical volume extents of the migrated volume. Because the volume may
 have spanned the complete device minus the mirror log, the offset must be
 smaller than the size of the mirror log.
 </para>
 <para>
-We recommend to set the <option>data-offset</option> to 128&nbsp;KB.
-If no value is specified for the offset, its default value is 1&nbsp;KB
+We recommend to set the <option>data-offset</option> to 128&nbsp;kB.
+If no value is specified for the offset, its default value is 1&nbsp;kB
 (1024&nbsp;bytes).
 </para>
 </listitem>

xml/ha_concepts.xml  (+4 -4)

@@ -622,7 +622,7 @@
 <title>Cluster Resource Manager (Pacemaker)</title>
 <para>
 Pacemaker as cluster resource manager is the <quote>brain</quote>
-which reacts to events occurring in the cluster. Its is implemented as
+which reacts to events occurring in the cluster. It is implemented as
 <systemitem class="daemon">pacemaker-controld</systemitem>, the cluster
 controller, which coordinates all actions. Events can be nodes that join
 or leave the cluster, failure of resources, or scheduled activities such
@@ -637,7 +637,7 @@
 The local resource manager is located between the Pacemaker layer and the
 resources layer on each node. It is implemented as <systemitem
 class="daemon">pacemaker-execd</systemitem> daemon. Through this daemon,
-Pacemaker can start, stop and monitor resources.
+Pacemaker can start, stop, and monitor resources.
 </para>
 </listitem>
 </varlistentry>
@@ -688,8 +688,8 @@
 <sect3 xml:id="sec.ha.architecture.layers.rsc">
 <title>Resources and Resource Agents</title>
 <para>
-In an &ha; cluster, the services that need to be highly available are
-called resources. Resource agents (RAs) are scripts that start, stop and
+In a &ha; cluster, the services that need to be highly available are
+called resources. Resource agents (RAs) are scripts that start, stop, and
 monitor cluster resources.
 </para>
 </sect3>
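
For reference, a resource agent of the kind described above is typically an
OCF-style shell script that pacemaker-execd calls with the requested action as
its first argument. A minimal sketch, assuming the standard OCF exit codes; it
is not an agent shipped with the product:

    #!/bin/sh
    # Toy OCF-style resource agent: dispatch on the action passed as $1.
    PIDFILE="/run/dummy-ra.pid"
    case "$1" in
        start)   touch "$PIDFILE"; exit 0 ;;               # OCF_SUCCESS
        stop)    rm -f "$PIDFILE"; exit 0 ;;
        monitor) [ -f "$PIDFILE" ] && exit 0 || exit 7 ;;   # 7 = OCF_NOT_RUNNING
        *)       exit 3 ;;                                  # OCF_ERR_UNIMPLEMENTED
    esac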

xml/ha_config_basics.xml  (+3 -3)

@@ -224,7 +224,7 @@
 Whenever communication fails between one or more nodes and the rest of the
 cluster, a cluster partition occurs. The nodes can only communicate with
 other nodes in the same partition and are unaware of the separated nodes.
-A cluster partition is defined as having quorum (can <quote>quorate</quote>)
+A cluster partition is defined as having quorum (being <quote>quorate</quote>)
 if it has the majority of nodes (or votes).
 How this is achieved is done by <emphasis>quorum calculation</emphasis>.
 Quorum is a requirement for fencing.
@@ -256,8 +256,8 @@ C = number of cluster nodes</screen>
 We strongly recommend to use either a two-node cluster or an odd number
 of cluster nodes.
 Two-node clusters make sense for stretched setups across two sites.
-Clusters with an odd number of nodes can be built on either one single
-site or might being spread across three sites.
+Clusters with an odd number of nodes can either be built on one single
+site or might be spread across three sites.
 </para>
 </listitem>
 </varlistentry>
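
For reference, the quorum calculation mentioned in the first hunk follows the
usual majority rule. A worked sketch of the arithmetic, not taken from the
guide's <screen> formula:

    C = 5 cluster nodes
    required votes = floor(C/2) + 1 = 3
    -> partitions with 3, 4, or 5 nodes are quorate; a 2-node partition is not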

xml/ha_fencing.xml  (+10 -8)

@@ -184,15 +184,16 @@
 <term>pacemaker-fenced</term>
 <listitem>
 <para>
-pacemaker-fenced is a daemon which can be accessed by local processes or over
+<systemitem class="daemon">pacemaker-fenced</systemitem> is a daemon which can be accessed by local processes or over
 the network. It accepts the commands which correspond to fencing
 operations: reset, power-off, and power-on. It can also check the
 status of the fencing device.
 </para>
 <para>
-The pacemaker-fenced daemon runs on every node in the &ha; cluster. The
-pacemaker-fenced instance running on the DC node receives a fencing request
-from the pacemaker-controld. It is up to this and other pacemaker-fenced programs to carry
+The <systemitem class="daemon">pacemaker-fenced</systemitem> daemon runs on every node in the &ha; cluster. The
+<systemitem class="resource">pacemaker-fenced</systemitem> instance running on the DC node receives a fencing request
+from the <systemitem class="daemon">pacemaker-controld</systemitem>. It
+is up to this and other <systemitem class="daemon">pacemaker-fenced</systemitem> programs to carry
 out the desired fencing operation.
 </para>
 </listitem>
@@ -210,8 +211,9 @@
 <package>fence-agents</package> package, too,
 the plug-ins contained there are installed in
 <filename>/usr/sbin/fence_*</filename>.) All &stonith; plug-ins look
-the same to pacemaker-fenced, but are quite different on the other side
-reflecting the nature of the fencing device.
+the same to <systemitem class="daemon">pacemaker-fenced</systemitem>,
+but are quite different on the other side, reflecting the nature of the
+fencing device.
 </para>
 <para>
 Some plug-ins support more than one device. A typical example is
@@ -229,7 +231,7 @@

 <para>
 To set up fencing, you need to configure one or more &stonith;
-resources&mdash;the pacemaker-fenced daemon requires no configuration. All
+resources&mdash;the <systemitem class="daemon">pacemaker-fenced</systemitem> daemon requires no configuration. All
 configuration is stored in the CIB. A &stonith; resource is a resource of
 class <literal>stonith</literal> (see
 <xref linkend="sec.ha.config.basics.raclasses"/>). &stonith; resources
@@ -328,7 +330,7 @@ commit</screen>
 outcome. The only way to do that is to assume that the operation is
 going to succeed and send the notification beforehand. But if the
 operation fails, problems could arise. Therefore, by convention,
-pacemaker-fenced refuses to terminate its host.
+<systemitem class="daemon">pacemaker-fenced</systemitem> refuses to terminate its host.
 </para>
 </example>
 <example>
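
For reference, the third hunk notes that fencing is set up purely by adding
stonith-class resources to the CIB. A minimal crm shell sketch, assuming
SBD-based fencing; the resource name and parameter are illustrative and not
taken from the commit:

    # Illustrative stonith-class resource; pick the plug-in that matches
    # your fencing device.
    crm configure primitive stonith-sbd stonith:external/sbd \
        params pcmk_delay_max=30s
    crm configure property stonith-enabled=true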

xml/ha_glossary.xml  (+5 -5)

@@ -171,7 +171,7 @@
 <glossdef>
 <para>
 The management entity responsible for coordinating all non-local
-interactions in an &ha; cluster. The &hasi; uses Pacemaker as CRM.
+interactions in a &ha; cluster. The &hasi; uses Pacemaker as CRM.
 The CRM is implemented as <systemitem
 class="daemon">pacemaker-controld</systemitem>. It interacts with several
 components: local resource managers, both on its own node and on the other nodes,
@@ -282,8 +282,8 @@
 isolated or failing cluster members. There are two classes of fencing:
 resource level fencing and node level fencing. Resource level fencing ensures
 exclusive access to a given resource. Node level fencing prevents a failed
-node from accessing shared resources entirely and prevents that resources run
-a node whose status is uncertain. This is usually done in a simple and
+node from accessing shared resources entirely and prevents resources from running
+on a node whose status is uncertain. This is usually done in a simple and
 abrupt way: reset or power off the node.
 </para>
 </glossdef>
@@ -329,7 +329,7 @@ performance will be met during a contractual measurement period.</para>
 The local resource manager is located between the Pacemaker layer and the
 resources layer on each node. It is implemented as <systemitem
 class="daemon">pacemaker-execd</systemitem> daemon. Through this daemon,
-Pacemaker can start, stop and monitor resources.
+Pacemaker can start, stop, and monitor resources.
 </para>
 </glossdef>
 </glossentry>
@@ -419,7 +419,7 @@ performance will be met during a contractual measurement period.</para>
 <glossentry xml:id="gloss.quorum"><glossterm>quorum</glossterm>
 <glossdef>
 <para>
-In a cluster, a cluster partition is defined to have quorum (can
+In a cluster, a cluster partition is defined to have quorum (be
 <quote>quorate</quote>) if it has the majority of nodes (or votes).
 Quorum distinguishes exactly one partition. It is part of the algorithm
 to prevent several disconnected partitions or nodes from proceeding and

xml/ha_hawk2_history_i.xml  (+2 -2)

@@ -317,7 +317,7 @@
 <title>Viewing Transition Details in the History Explorer</title>
 <para>
 For each transition, the cluster saves a copy of the state which it provides
-as input to <systemitem class="daemon">pacemaker-schedulerd</systemitem>.
+as input to <systemitem class="daemon">pacemaker-schedulerd</systemitem>.
 The path to this archive is logged. All
 <filename>pe-*</filename> files are generated on the Designated
 Coordinator (DC). As the DC can change in a cluster, there may be
@@ -376,7 +376,7 @@
 <screen>crm history transition log <replaceable>peinput</replaceable></screen>
 <para>
 This includes details from the following daemons:
-<systemitem class="daemon">pacemaker-schedulerd </systemitem>,
+<systemitem class="daemon">pacemaker-schedulerd</systemitem>,
 <systemitem class="daemon">pacemaker-controld</systemitem>, and
 <systemitem class="daemon">pacemaker-execd</systemitem>.
 </para>
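
For reference, a usage sketch of the crm history command shown in the second
hunk; the pe-input path is an assumption based on Pacemaker's usual layout,
not text from the commit:

    # Show scheduler/controller/execd details for one transition on the DC.
    crm history transition log /var/lib/pacemaker/pengine/pe-input-42.bz2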

xml/ha_maintenance.xml  (+16 -16)

@@ -147,7 +147,7 @@ Node <replaceable>&node2;</replaceable>: standby

 <variablelist>
 <varlistentry xml:id="vle.ha.maint.mode.cluster">
-<!--<term>Putting the Cluster into Maintenance Mode</term>-->
+<!--<term>Putting the Cluster in Maintenance Mode</term>-->
 <term><xref linkend="sec.ha.maint.mode.cluster" xrefstyle="select:title"/></term>
 <listitem>
 <para>
@@ -158,7 +158,7 @@ Node <replaceable>&node2;</replaceable>: standby
 </listitem>
 </varlistentry>
 <varlistentry xml:id="vle.ha.maint.mode.node">
-<!--<term>Putting a Node into Maintenance Mode</term>-->
+<!--<term>Putting a Node in Maintenance Mode</term>-->
 <term><xref linkend="sec.ha.maint.mode.node" xrefstyle="select:title"/></term>
 <listitem>
 <para>
@@ -169,7 +169,7 @@ Node <replaceable>&node2;</replaceable>: standby
 </listitem>
 </varlistentry>
 <varlistentry xml:id="vle.ha.maint.node.standby">
-<!--<term>Putting a Node into Standby Mode</term>-->
+<!--<term>Putting a Node in Standby Mode</term>-->
 <term><xref linkend="sec.ha.maint.node.standby" xrefstyle="select:title"/></term>
 <listitem>
 <para>
@@ -186,7 +186,7 @@ Node <replaceable>&node2;</replaceable>: standby
 </listitem>
 </varlistentry>
 <varlistentry xml:id="vle.ha.maint.mode.rsc">
-<!--<term>Putting a Resource into Maintenance Mode</term>-->
+<!--<term>Putting a Resource in Maintenance Mode</term>-->
 <term><xref linkend="sec.ha.maint.mode.rsc" xrefstyle="select:title"/></term>
 <listitem>
 <para>
@@ -266,16 +266,16 @@ Node <replaceable>&node2;</replaceable>: standby
 </sect1>

 <sect1 xml:id="sec.ha.maint.mode.cluster">
-<title>Putting the Cluster into Maintenance Mode</title>
+<title>Putting the Cluster in Maintenance Mode</title>
 <para>
-To put the cluster into maintenance mode on the &crmshell;, use the following command:</para>
+To put the cluster in maintenance mode on the &crmshell;, use the following command:</para>
 <screen>&prompt.root;<command>crm</command> configure property maintenance-mode=true</screen>
 <para>
-To put the cluster back into normal mode after your maintenance work is done, use the following command:</para>
+To put the cluster back to normal mode after your maintenance work is done, use the following command:</para>
 <screen>&prompt.root;<command>crm</command> configure property maintenance-mode=false</screen>

 <procedure xml:id="pro.ha.maint.mode.cluster.hawk2">
-<title>Putting the Cluster into Maintenance Mode with &hawk2;</title>
+<title>Putting the Cluster in Maintenance Mode with &hawk2;</title>
 <step>
 <para>
 Start a Web browser and log in to the cluster as described in
@@ -315,16 +315,16 @@ Node <replaceable>&node2;</replaceable>: standby
 </sect1>

 <sect1 xml:id="sec.ha.maint.mode.node">
-<title>Putting a Node into Maintenance Mode</title>
+<title>Putting a Node in Maintenance Mode</title>
 <para>
-To put a node into maintenance mode on the &crmshell;, use the following command:</para>
+To put a node in maintenance mode on the &crmshell;, use the following command:</para>
 <screen>&prompt.root;<command>crm</command> node maintenance <replaceable>NODENAME</replaceable></screen>
 <para>
-To put the node back into normal mode after your maintenance work is done, use the following command:</para>
+To put the node back to normal mode after your maintenance work is done, use the following command:</para>
 <screen>&prompt.root;<command>crm</command> node ready <replaceable>NODENAME</replaceable></screen>

 <procedure xml:id="pro.ha.maint.mode.nodes.hawk2">
-<title>Putting a Node into Maintenance Mode with &hawk2;</title>
+<title>Putting a Node in Maintenance Mode with &hawk2;</title>
 <step>
 <para>
 Start a Web browser and log in to the cluster as described in
@@ -352,16 +352,16 @@ Node <replaceable>&node2;</replaceable>: standby
 </sect1>

 <sect1 xml:id="sec.ha.maint.node.standby">
-<title>Putting a Node into Standby Mode</title>
+<title>Putting a Node in Standby Mode</title>
 <para>
-To put a node into standby mode on the &crmshell;, use the following command:</para>
+To put a node in standby mode on the &crmshell;, use the following command:</para>
 <screen>&prompt.root;crm node standby <replaceable>NODENAME</replaceable></screen>
 <para>
 To bring the node back online after your maintenance work is done, use the following command:</para>
 <screen>&prompt.root;crm node online <replaceable>NODENAME</replaceable></screen>

 <procedure xml:id="pro.ha.maint.node.standby.hawk2">
-<title>Putting a Node into Standby Mode with &hawk2;</title>
+<title>Putting a Node in Standby Mode with &hawk2;</title>
 <step>
 <para>
 Start a Web browser and log in to the cluster as described in
@@ -518,7 +518,7 @@ Node <replaceable>&node2;</replaceable>: standby
 </sect1>

 <sect1 xml:id="sec.ha.maint.shutdown.node.maint.mode">
-<title>Rebooting a Cluster Node While In Maintenance Mode</title>
+<title>Rebooting a Cluster Node While in Maintenance Mode</title>
 <note>
 <title>Implications</title>
 <para>
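
For reference, the crm node commands touched above combine into a simple
maintenance workflow; the node name below is a placeholder, not taken from
the commit:

    crm node maintenance alice   # cluster stops managing resources on alice
    # ... perform the maintenance work (update packages, reboot, etc.) ...
    crm node ready alice         # return the node to normal operation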
