
Commit 3716271

Merge branch 'feature/proofing_corrections' into maintenance/SLEHA12SP4
- integrated all proofing corrections (only affected Admin Guide, the other 5 guides/articles did not have any proofing changes)
2 parents: 3aa0d1b + 3270406

11 files changed: +66 −68 lines

xml/ha_clvm.xml (+2 −2)

@@ -173,7 +173,7 @@ cLVM) for more information and details to integrate here - really helpful-->
 <listitem>
 <para>
 Check if the <systemitem class="daemon">lvmetad</systemitem> daemon is
-disabled because it cannot work with cLVM. In <filename>/etc/lvm/lvm.conf</filename>,
+disabled, because it cannot work with cLVM. In <filename>/etc/lvm/lvm.conf</filename>,
 the keyword <literal>use_lvmetad</literal> must be set to <literal>0</literal>
 (the default is <literal>1</literal>).
 Copy the configuration to all nodes, if necessary.
@@ -516,7 +516,7 @@ cLVM) for more information and details to integrate here - really helpful-->


 <sect2 xml:id="sec.ha.clvm.scenario.iscsi">
-<title>Scenario: cLVM With iSCSI on SANs</title>
+<title>Scenario: cLVM with iSCSI on SANs</title>
 <para>
 The following scenario uses two SAN boxes which export their iSCSI
 targets to several clients. The general idea is displayed in
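The corrected sentence above describes disabling lvmetad for cLVM. This is not part of the commit, just a minimal sketch of that edit, run against a throwaway copy rather than the real /etc/lvm/lvm.conf:

```shell
# Work on a temporary copy instead of the live config file.
conf=$(mktemp)
printf 'use_lvmetad = 1\n' > "$conf"   # sample line with the default value 1
# cLVM requires lvmetad to be disabled, so force the keyword to 0:
sed -i 's/^use_lvmetad = 1$/use_lvmetad = 0/' "$conf"
grep '^use_lvmetad' "$conf"
rm -f "$conf"
```

As the guide notes, the changed file would then need to be copied to all cluster nodes.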

xml/ha_concepts.xml (+3 −4)

@@ -12,7 +12,7 @@
 <para>
 &productnamereg; is an integrated suite of open source clustering
 technologies that enables you to implement highly available physical and
-virtual Linux clusters, and to eliminate single point of failure. It
+virtual Linux clusters, and to eliminate single points of failure. It
 ensures the high availability and manageability of critical
 resources including data, applications, and services. Thus, it helps you
 maintain business continuity, protect data integrity, and reduce
@@ -150,7 +150,6 @@
 <para>
 &productname; supports the clustering of both physical and
 virtual Linux servers. Mixing both types of servers is supported as well.
-&sls; &productnumber; ships with &xen;,
 &sls; &productnumber; ships with Xen and KVM (Kernel-based Virtual Machine).
 Both are open source virtualization hypervisors. Virtualization guest
 systems (also known as VMs) can be managed as services by the cluster.
@@ -185,7 +184,7 @@
 centers. The cluster usually uses unicast for communication between
 the nodes and manages failover internally. Network latency is usually
 low (&lt;5&nbsp;ms for distances of approximately 20 miles). Storage
-preferably is connected by fibre channel. Data replication is done by
+is preferably connected by fibre channel. Data replication is done by
 storage internally, or by host based mirror under control of the cluster.
 </para>
 </listitem>
@@ -749,7 +748,7 @@
 data or complete resource recovery. For this Pacemaker comes with a
 fencing subsystem, stonithd. &stonith; is an acronym for <quote>Shoot
 The Other Node In The Head</quote>.
-It usually is implemented with a &stonith; shared block device, remote
+It is usually implemented with a &stonith; shared block device, remote
 management boards, or remote power switches. In &pace;, &stonith;
 devices are modeled as resources (and configured in the CIB) to
 enable them to be easily used.

xml/ha_config_basics.xml (+11 −11)

@@ -45,7 +45,7 @@
 <para>Two-node clusters</para>
 </listitem>
 <listitem>
-<para>clusters with more than two nodes. This means usually an odd number of nodes.</para>
+<para>Clusters with more than two nodes. This usually means an odd number of nodes.</para>
 </listitem>
 </itemizedlist>
 <para>
@@ -82,9 +82,9 @@
 </formalpara>
 <formalpara>
 <title>Usage scenario:</title>
-<para>Classical stretched clusters, focus on service high availability
+<para>Classic stretched clusters, focus on high availability of services
 and local data redundancy. For databases and enterprise
-resource planning. One of the most popular setup during the last
+resource planning. One of the most popular setups during the last few
 years.
 </para>
 </formalpara>
@@ -102,7 +102,7 @@
 </formalpara>
 <formalpara>
 <title>Usage scenario:</title>
-<para>Classical stretched cluster, focus on service high availability
+<para>Classic stretched cluster, focus on high availability of services
 and data redundancy. For example, databases, enterprise resource planning.
 </para>
 </formalpara>
@@ -224,7 +224,7 @@
 Whenever communication fails between one or more nodes and the rest of the
 cluster, a cluster partition occurs. The nodes can only communicate with
 other nodes in the same partition and are unaware of the separated nodes.
-A cluster partition is defined to have quorum (is <quote>quorate</quote>)
+A cluster partition is defined as having quorum (can <quote>quorate</quote>)
 if it has the majority of nodes (or votes).
 How this is achieved is done by <emphasis>quorum calculation</emphasis>.
 Quorum is a requirement for fencing.
@@ -256,7 +256,7 @@ C = number of cluster nodes</screen>
 of cluster nodes.
 Two-node clusters make sense for stretched setups across two sites.
 Clusters with an odd number of nodes can be built on either one single
-site or might be spread across three sites.
+site or might being spread across three sites.
 </para>
 </listitem>
 </varlistentry>
@@ -322,9 +322,9 @@ C = number of cluster nodes</screen>
 or a single node <quote>quorum</quote>&mdash;or not.
 </para>
 <para>
-For two node clusters the only meaningful behaviour is to always
-react in case of quorum loss. The first step always should be
-trying to fence the lost node.
+For two-node clusters the only meaningful behavior is to always
+react in case of quorum loss. The first step should always be
+to try to fence the lost node.
 </para>
 </listitem>
 </varlistentry>
@@ -450,7 +450,7 @@ C = number of cluster nodes</screen>
 use the following settings:
 </para>
 <example>
-<title>Excerpt of &corosync; Configuration for a N-Node Cluster</title>
+<title>Excerpt of &corosync; Configuration for an N-Node Cluster</title>
 <screen>quorum {
 provider: corosync_votequorum <co xml:id="co.corosync.quorum.n-node.corosync_votequorum"/>
 expected_votes: <replaceable>N</replaceable> <co xml:id="co.corosync.quorum.n-node.expected_votes"/>
@@ -470,7 +470,7 @@ C = number of cluster nodes</screen>
 <para>
 Enables the wait for all (WFA) feature.
 When WFA is enabled, the cluster will be quorate for the first time
-only after all nodes have been visible.
+only after all nodes have become visible.
 To avoid some start-up race conditions, setting <option>wait_for_all</option>
 to <literal>1</literal> may help.
 For example, in a five-node cluster every node has one vote and thus,
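The quorum text corrected in these hunks boils down to one majority rule. As a sketch (not part of the commit), assuming one vote per node as in the guide's examples, a partition is quorate when it holds more than half of the cluster's votes:

```shell
# has_quorum PARTITION_NODES CLUSTER_NODES
# Prints "quorate" if the partition holds a strict majority of votes.
has_quorum() {
  partition=$1; cluster=$2
  if [ "$partition" -gt $(( cluster / 2 )) ]; then
    echo quorate
  else
    echo no-quorum
  fi
}
has_quorum 3 5   # three of five nodes: majority, quorate
has_quorum 2 4   # even split of four nodes: no majority, no quorum
```

This is also why even splits are dangerous for two-node and four-node clusters, and why the guide recommends an odd number of nodes.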

xml/ha_docupdates.xml (+3 −3)

@@ -146,7 +146,7 @@ toms 2014-08-12:
 (<link xlink:href="&bsc;1098429"/>,
 <link xlink:href="&bsc;1108586"/>,
 <link xlink:href="&bsc;1108604"/>,
-<link xlink:href="&bsc;1108624"/>.
+<link xlink:href="&bsc;1108624"/>).
 </para>
 </listitem>
 </itemizedlist>
@@ -203,7 +203,7 @@ toms 2014-08-12:
 </listitem>
 <listitem>
 <para>
-Added chapter <xref linkend="cha.ha.maintenance"/>. Moved respective
+Added <xref linkend="cha.ha.maintenance"/>. Moved respective
 sections from <xref linkend="cha.ha.config.basics"/>,
 <xref linkend="cha.conf.hawk2"/>, and <xref linkend="cha.ha.manual_config"/>
 there. The new chapter gives an overview of different options the cluster stack
@@ -464,7 +464,7 @@ toms 2014-08-12:
 <listitem>
 <para>
 In <xref linkend="sec.ha.cluster-md.overview"/>, mentioned that each
-disk need to be accessible by Cluster MD on each node (<link
+disk needs to be accessible by Cluster MD on each node (<link
 xlink:href="https://bugzilla.suse.com/show_bug.cgi?id=938502"/>).
 </para>
 </listitem>

xml/ha_fencing.xml (+5 −5)

@@ -149,10 +149,10 @@
 increasingly popular and may even become standard in off-the-shelf
 computers. However, if they share a power supply with their host (a
 cluster node), they might not work when needed. If a node stays without
-power, the device supposed to control it would be useless. Therefor, it
-is highly recommended using battery backed Lights-out devices.
-Another aspect is this devices are accessed by network. This might
-imply single point of failure, or security concerns.
+power, the device supposed to control it would be useless. Therefore, it
+is highly recommended to use battery backed lights-out devices.
+Another aspect is that these devices are accessed by network. This might
+imply a single point of failure, or security concerns.
 </para>
 </listitem>
 </varlistentry>
@@ -434,7 +434,7 @@ hostlist</screen>
 <para>The Kdump plug-in must be used in concert with another, real &stonith;
 device, for example, <literal>external/ipmi</literal>.
 The order of the fencing devices must be specified by <command>crm configure
-fencing_topology</command>. To achieve that Kdump is checked before
+fencing_topology</command>. For Kdump to be checked before
 triggering a real fencing mechanism (like <literal>external/ipmi</literal>),
 use a configuration similar to the following:</para>
 <screen>fencing_topology \
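The hunk above ends where the diff view truncates the &lt;screen&gt; example. A fencing_topology stanza of the kind the corrected sentence describes typically lists, per node, the Kdump check before the real fencing device. The node and resource names below (alice, stonith-kdump, stonith-ipmi) are hypothetical illustrations, not taken from the commit:

```
fencing_topology \
  alice: stonith-kdump stonith-ipmi
```

With such a topology, stonithd tries the devices in the given order, so the real fencing mechanism only fires if the Kdump check does not succeed first.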

xml/ha_glossary.xml (+1 −1)

@@ -453,7 +453,7 @@ performance will be met during a contractual measurement period.</para>
 <glossentry xml:id="gloss.quorum"><glossterm>quorum</glossterm>
 <glossdef>
 <para>
-In a cluster, a cluster partition is defined to have quorum (is
+In a cluster, a cluster partition is defined to have quorum (can
 <quote>quorate</quote>) if it has the majority of nodes (or votes).
 Quorum distinguishes exactly one partition. It is part of the algorithm
 to prevent several disconnected partitions or nodes from proceeding and

xml/ha_maintenance.xml (+19 −19)

@@ -20,7 +20,7 @@
 </para>
 <para>
 This chapter explains how to manually take down a cluster node without
-negative side-effects. It also gives an overview of different options the
+negative side effects. It also gives an overview of different options the
 cluster stack provides for executing maintenance tasks.
 </para>
 </abstract>
@@ -147,7 +147,7 @@ Node <replaceable>&node2;</replaceable>: standby

 <variablelist>
 <varlistentry xml:id="vle.ha.maint.mode.cluster">
-<!--<term>Putting the Cluster Into Maintenance Mode</term>-->
+<!--<term>Putting the Cluster into Maintenance Mode</term>-->
 <term><xref linkend="sec.ha.maint.mode.cluster" xrefstyle="select:title"/></term>
 <listitem>
 <para>
@@ -158,7 +158,7 @@ Node <replaceable>&node2;</replaceable>: standby
 </listitem>
 </varlistentry>
 <varlistentry xml:id="vle.ha.maint.mode.node">
-<!--<term>Putting a Node Into Maintenance Mode</term>-->
+<!--<term>Putting a Node into Maintenance Mode</term>-->
 <term><xref linkend="sec.ha.maint.mode.node" xrefstyle="select:title"/></term>
 <listitem>
 <para>
@@ -175,7 +175,7 @@ Node <replaceable>&node2;</replaceable>: standby
 <para>
 A node that is in standby mode can no longer run resources. Any resources
 running on the node will be moved away or stopped (in case no other node
-is eligible to run the resource). Also, all monitor operations will be
+is eligible to run the resource). Also, all monitoring operations will be
 stopped on the node (except for those with
 <literal>role="Stopped"</literal>).
 </para>
@@ -186,11 +186,11 @@ Node <replaceable>&node2;</replaceable>: standby
 </listitem>
 </varlistentry>
 <varlistentry xml:id="vle.ha.maint.mode.rsc">
-<!--<term>Putting a Resource Into Maintenance Mode</term>-->
+<!--<term>Putting a Resource into Maintenance Mode</term>-->
 <term><xref linkend="sec.ha.maint.mode.rsc" xrefstyle="select:title"/></term>
 <listitem>
 <para>
-When this mode is enabled for a resource, no monitor operations will be
+When this mode is enabled for a resource, no monitoring operations will be
 triggered for the resource.
 </para>
 <para>
@@ -201,7 +201,7 @@ Node <replaceable>&node2;</replaceable>: standby
 </listitem>
 </varlistentry>
 <varlistentry xml:id="vle.ha.maint.rsc.unmanaged">
-<!--<term>Putting a Resource Into Unmanaged Mode</term>-->
+<!--<term>Putting a Resource into Unmanaged Mode</term>-->
 <term><xref linkend="sec.ha.maint.rsc.unmanaged" xrefstyle="select:title"/></term>
 <listitem>
 <para>
@@ -266,7 +266,7 @@ Node <replaceable>&node2;</replaceable>: standby
 </sect1>

 <sect1 xml:id="sec.ha.maint.mode.cluster">
-<title>Putting the Cluster Into Maintenance Mode</title>
+<title>Putting the Cluster into Maintenance Mode</title>
 <para>
 To put the cluster into maintenance mode on the &crmshell;, use the following command:</para>
 <screen>&prompt.root;<command>crm</command> configure property maintenance-mode=true</screen>
@@ -275,7 +275,7 @@ Node <replaceable>&node2;</replaceable>: standby
 <screen>&prompt.root;<command>crm</command> configure property maintenance-mode=false</screen>

 <procedure xml:id="pro.ha.maint.mode.cluster.hawk2">
-<title>Putting the Cluster Into Maintenance Mode with &hawk2;</title>
+<title>Putting the Cluster into Maintenance Mode with &hawk2;</title>
 <step>
 <para>
 Start a Web browser and log in to the cluster as described in
@@ -315,7 +315,7 @@ Node <replaceable>&node2;</replaceable>: standby
 </sect1>

 <sect1 xml:id="sec.ha.maint.mode.node">
-<title>Putting a Node Into Maintenance Mode</title>
+<title>Putting a Node into Maintenance Mode</title>
 <para>
 To put a node into maintenance mode on the &crmshell;, use the following command:</para>
 <screen>&prompt.root;<command>crm</command> node maintenance <replaceable>NODENAME</replaceable></screen>
@@ -324,7 +324,7 @@ Node <replaceable>&node2;</replaceable>: standby
 <screen>&prompt.root;<command>crm</command> node ready <replaceable>NODENAME</replaceable></screen>

 <procedure xml:id="pro.ha.maint.mode.nodes.hawk2">
-<title>Putting a Node Into Maintenance Mode with &hawk2;</title>
+<title>Putting a Node into Maintenance Mode with &hawk2;</title>
 <step>
 <para>
 Start a Web browser and log in to the cluster as described in
@@ -352,7 +352,7 @@ Node <replaceable>&node2;</replaceable>: standby
 </sect1>

 <sect1 xml:id="sec.ha.maint.node.standby">
-<title>Putting a Node Into Standby Mode</title>
+<title>Putting a Node into Standby Mode</title>
 <para>
 To put a node into standby mode on the &crmshell;, use the following command:</para>
 <screen>&prompt.root;crm node standby <replaceable>NODENAME</replaceable></screen>
@@ -361,7 +361,7 @@ Node <replaceable>&node2;</replaceable>: standby
 <screen>&prompt.root;crm node online <replaceable>NODENAME</replaceable></screen>

 <procedure xml:id="pro.ha.maint.node.standby.hawk2">
-<title>Putting a Node Into Standby Mode with &hawk2;</title>
+<title>Putting a Node into Standby Mode with &hawk2;</title>
 <step>
 <para>
 Start a Web browser and log in to the cluster as described in
@@ -394,7 +394,7 @@ Node <replaceable>&node2;</replaceable>: standby
 </sect1>

 <sect1 xml:id="sec.ha.maint.mode.rsc">
-<title>Putting a Resource Into Maintenance Mode</title>
+<title>Putting a Resource into Maintenance Mode</title>
 <para>
 To put a resource into maintenance mode on the &crmshell;, use the following command:</para>
 <screen>&prompt.root;<command>crm</command> resource maintenance <replaceable>RESOURCE_ID</replaceable> true</screen>
@@ -403,7 +403,7 @@ Node <replaceable>&node2;</replaceable>: standby
 <screen>&prompt.root;<command>crm</command> resource maintenance <replaceable>RESOURCE_ID</replaceable> false</screen>

 <procedure xml:id="pro.ha.maint.mode.rsc.hawk2">
-<title>Putting a Resource Into Maintenance Mode with &hawk2;</title>
+<title>Putting a Resource into Maintenance Mode with &hawk2;</title>
 <step>
 <para>
 Start a Web browser and log in to the cluster as described in
@@ -459,7 +459,7 @@ Node <replaceable>&node2;</replaceable>: standby
 </sect1>

 <sect1 xml:id="sec.ha.maint.rsc.unmanaged">
-<title>Putting a Resource Into Unmanaged Mode</title>
+<title>Putting a Resource into Unmanaged Mode</title>
 <para>
 To put a resource into unmanaged mode on the &crmshell;, use the following command:</para>
 <screen>&prompt.root;<command>crm</command> resource unmanage <replaceable>RESOURCE_ID</replaceable></screen>
@@ -468,7 +468,7 @@ Node <replaceable>&node2;</replaceable>: standby
 <screen>&prompt.root;<command>crm</command> resource manage <replaceable>RESOURCE_ID</replaceable></screen>

 <procedure xml:id="pro.ha.maint.rsc.unmanaged.hawk2">
-<title>Putting a Resource Into Unmanaged Mode with &hawk2;</title>
+<title>Putting a Resource into Unmanaged Mode with &hawk2;</title>
 <step>
 <para>
 Start a Web browser and log in to the cluster as described in
@@ -551,7 +551,7 @@ Node <replaceable>&node2;</replaceable>: standby
 <step>
 <para>
 Check if you have resources of the type <literal>ocf:pacemaker:controld</literal>
-or any dependencies on this type of resources. Resources of the type
+or any dependencies on this type of resource. Resources of the type
 <literal>ocf:pacemaker:controld</literal> are DLM resources.
 </para>
 <substeps>
@@ -564,7 +564,7 @@ Node <replaceable>&node2;</replaceable>: standby
 <para>
 The reason is that stopping &pace; also stops the &corosync; service, on
 whose membership and messaging services DLM depends. If &corosync; stops,
-the DLM resource will assume a split-brain scenario and trigger a fencing
+the DLM resource will assume a split brain scenario and trigger a fencing
 operation.
 </para>
 </step>
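Collected from the &lt;screen&gt; examples in the hunks above, the retitled sections correspond to these crm shell commands (RESOURCE_ID and NODENAME are placeholders, as in the chapter):

```
crm configure property maintenance-mode=true   # whole cluster into maintenance mode
crm node maintenance NODENAME                  # one node into maintenance mode
crm node standby NODENAME                      # one node into standby mode
crm resource maintenance RESOURCE_ID true      # one resource into maintenance mode
crm resource unmanage RESOURCE_ID              # one resource into unmanaged mode
```

Each has a counterpart shown in the chapter for leaving the mode again (maintenance-mode=false, node ready, node online, resource maintenance … false, resource manage).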

xml/ha_rear.xml (+1 −1)

@@ -66,7 +66,7 @@
 <para>
 Understanding &rear;'s complex functionality is essential for making the
 tool work as intended. Therefore, read this chapter carefully and
-familiarize with &rear; before a disaster strikes. You should also be
+familiarize yourself with &rear; before a disaster strikes. You should also be
 aware of &rear;'s known limitations and test your system in advance.
 </para>
 </note>

xml/ha_requirements.xml (+2 −3)

@@ -137,9 +137,8 @@
 <para>
 When using DRBD* to implement a mirroring RAID system that distributes
 data across two machines, make sure to only access the device provided
-by DRBD&mdash;never the backing device. Use bonded NICs. Same NICs as
-the rest of the cluster uses are possible to leverage the redundancy
-provided there.
+by DRBD&mdash;never the backing device. Use bonded NICs. To leverage the
+redundancy it is possible to use the same NICs as the rest of the cluster.
 </para>
 </listitem>
 </itemizedlist>
