fix links in article
krastin committed Jul 22, 2024
1 parent c294942 commit 389d5ec
Showing 1 changed file with 9 additions and 6 deletions.
@@ -48,10 +48,9 @@ Different options for your resilient datacenter present trade-offs between opera

The following sections explore several options for increasing Consul's fault tolerance. For enhanced reliability, we recommend taking a holistic approach by layering these multiple functionalities together.

- [Spread servers across infrastructure availability zones](#availability-zones).

- [Use a minimum quorum size to avoid performance impacts](#quorum-size).
- [<EnterpriseAlert inline /> Use redundancy zones to improve fault tolerance](#redundancy-zones).
- Spread servers across infrastructure [availability zones](#availability-zones).
- Use a [minimum quorum size](#quorum-size) to avoid performance impacts.
- <EnterpriseAlert inline /> Use [redundancy zones](#redundancy-zones) to improve fault tolerance.
- Use [Autopilot](#autopilot) to automatically prune failed servers and maintain quorum size.
- Use [cluster peering](#cluster-peering) to provide service redundancy.

@@ -167,8 +166,12 @@ Cluster peering lets you connect two or more independent Consul clusters using m

Cluster peering is the preferred way to interconnect clusters because it is operationally easier to configure and manage than WAN federation. Cluster peering communication between two datacenters runs over a single port on the related Consul mesh gateway, which makes it easy to expose for routing purposes.
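For illustration, the following is a minimal sketch of establishing a peering connection with the Consul CLI. The cluster names `dc1` and `dc2` and the placeholder token are assumptions, not values from this article; refer to the cluster peering documentation for the full procedure.

```shell-session
# On the cluster that accepts the connection (assumed name: dc1),
# generate a peering token for an assumed peer called "dc2".
$ consul peering generate-token -name dc2

# On the other cluster, establish the peering using that token.
# <token> is a placeholder for the value returned by the previous command.
$ consul peering establish -name dc1 -peering-token <token>
```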


When you use cluster peering to connect admin partitions between datacenters, use Consul's dynamic traffic management features `service-splitter`, `service-router`, and `service-resolver` to configure your service mesh to automatically forward or fail over service traffic between peer clusters. Consul can then manage the traffic intended for the service and perform [failover](/consul/docs/connect/config-entries/service-resolver#spec-failover), [load balancing](/consul/docs/connect/config-entries/service-resolver#spec-loadbalancer), or [redirection](/consul/docs/connect/config-entries/service-resolver#spec-redirect).
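As a rough example, the following sketch configures failover to a peer with a `service-resolver` config entry. The service name `api` and the peer name `dc2` are assumptions for illustration only.

```shell-session
# Write a service-resolver config entry that fails over the assumed
# service "api" to the same service exported by the assumed peer "dc2".
$ cat > api-resolver.hcl <<EOF
Kind = "service-resolver"
Name = "api"
Failover = {
  "*" = {
    Targets = [
      { Peer = "dc2" }
    ]
  }
}
EOF

# Apply the config entry to the local cluster.
$ consul config write api-resolver.hcl
```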


Cluster peering also extends service discovery across different datacenters independent of service mesh functions. After you peer datacenters, you can refer to services between datacenters with `<service>.virtual.peer.consul` in Consul DNS. For Consul Enterprise, your query string may need to include the namespace, partition, or both. Refer to the [Consul DNS documentation](/consul/docs/services/discovery/dns-static-lookups#service-virtual-ip-lookups) for details on building virtual service lookups.
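For example, a virtual service lookup against a local Consul agent might look like the following sketch. The service name `api`, the peer name `dc2`, and the default Consul DNS port 8600 are assumptions.

```shell-session
# Query the Consul DNS interface for the virtual IP of the assumed
# service "api" exported by the assumed peer "dc2".
$ dig @127.0.0.1 -p 8600 api.virtual.dc2.consul
```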

For more information on cluster peering, refer to:
- [Cluster peering documentation](/consul/docs/connect/cluster-peering)
for a more detailed explanation
- [Cluster peering tutorial](/consul/tutorials/implement-multi-tenancy/cluster-peering)
to learn how to implement cluster peering
