diff --git a/.github/workflows/labeller.yml b/.github/workflows/labeller.yml index 818cde1a7..b9d2f784d 100644 --- a/.github/workflows/labeller.yml +++ b/.github/workflows/labeller.yml @@ -6,9 +6,10 @@ on: jobs: triage: runs-on: ubuntu-latest - runs: - using: 'node16' steps: + - uses: actions/setup-node@v4 + with: + node-version: '16.x' - uses: actions/labeler@v2 with: repo-token: '${{ secrets.GITHUB_TOKEN }}' diff --git a/content/momentum/4/4-cluster-config-failover.md b/content/momentum/4/4-cluster-config-failover.md index 157103a97..8f8e3e9c3 100644 --- a/content/momentum/4/4-cluster-config-failover.md +++ b/content/momentum/4/4-cluster-config-failover.md @@ -1,15 +1,13 @@ --- -lastUpdated: "03/26/2020" +lastUpdated: "05/21/2024" title: "Configuring Momentum for High Availability and Failover" -description: "Momentum's architecture supports fault tolerant configurations This means that you can operate in an environment that is readily configured to support failing over automatically Components that support high availability and fault tolerance include the following ecconfigd Dura VIP™ bindings Centralized logging and Aggregration Per node data Per node logs can..." +description: "Momentum's architecture supports fault tolerant configurations This means that you can operate in an environment that is readily configured to support failing over automatically" --- Momentum's architecture supports fault-tolerant configurations. This means that you can operate in an environment that is readily configured to support failing over automatically. Components that support high availability and fault tolerance include the following: -* [`ecconfigd`](/momentum/4/conf-overview#conf.ecconfigd) - * [DuraVIP™ bindings](/momentum/4/4-cluster-config-duravip) * [Centralized logging and Aggregration](/momentum/4/log-aggregation) @@ -22,4 +20,4 @@ Components that support high availability and fault tolerance include the follow * [cidr_server](/momentum/4/4-cluster-cidr-server) and [as_logger](/momentum/4/modules/as-logger) - The **cidr_server** queries the data created by an as_logger module and displays the result in the cluster console. The **cidr_server** and as_logger can be configured to log data to a SAN. Locking semantics must be checked. \ No newline at end of file + The **cidr_server** queries the data created by an as_logger module and displays the result in the cluster console. The **cidr_server** and as_logger can be configured to log data to a SAN. Locking semantics must be checked. diff --git a/content/momentum/4/4-cluster.md b/content/momentum/4/4-cluster.md index 16a22809f..aec0f61cd 100644 --- a/content/momentum/4/4-cluster.md +++ b/content/momentum/4/4-cluster.md @@ -1,7 +1,7 @@ --- -lastUpdated: "03/26/2020" +lastUpdated: "05/21/2024" title: "Cluster-specific Configuration" -description: "Clustering is based on the concept of having a cluster of machines that communicate using a group communication messaging bus A cluster is comprised of at least one Manager node and one or more MTA nodes The Manager in the cluster will be your central point of management for the..." 
+description: "Clustering is based on the concept of having a cluster of machines that communicate using a group communication messaging bus A cluster is comprised of at least one Manager node and one or more MTA nodes" --- @@ -9,10 +9,6 @@ Clustering is based on the concept of having a cluster of machines that communic The clustering capabilities of Momentum enable the following features: -* Centralized management of configuration for multiple MTA nodes - -* Replicated, redundant, configuration repository with revision control - * Log aggregation pulling log files from MTA nodes to centralized location(s) on the network * Replication of a variety of real-time metrics to allow cluster-wide coordination for inbound and outbound traffic shaping @@ -47,49 +43,6 @@ For general information about Momentum's configuration files, see [“Configurat For additional details about editing your configuration files, see [“Changing Configuration Files”](/momentum/4/conf-overview#conf.manual.changes). -### Cluster-specific Configuration Management - -Momentum configuration files are maintained in a version control repository and exported to your cluster network via the [`ecconfigd`](/momentum/4/conf-overview#conf.ecconfigd) service running on the cluster manager. This daemon is auto-configuring and will replicate your configuration repositories to all participating cluster nodes. On the cluster manager, the repository resides in the `/var/ecconfigd/repo` directory. Nodes pull their configuration from this repository and store their working copy in the `/opt/msys/ecelerity/etc/conf` directory. - -The default installation has a cron job deployed on the nodes that uses [**eccfg pull**](/momentum/4/executable/eccfg) to update the local configuration from the `ecconfigd` service. **eccfg** is built in such a way that these updates are applied atomically to the configuration checkout directory. - -The tools that operate on the configuration checkout directory try very hard to avoid leaving it in a broken state. Every minute, each node will attempt to update its directory to match the repository. If you have made local changes to the directory, the update will attempt to merge updates from the repository with your changes. The update process will only modify the directory if the complete revision was able to be pulled. In other words, it will not modify the configuration checkout directory if doing so causes a conflict and will never leave a directory with a half-applied update. - -In some situations, it is possible to put the configuration replication into a conflicted state. For instance, in a two node cluster, if one of the nodes is unplugged from the network while configuration changes are made and committed on both nodes, when the network cable is re-connected, the configuration will attempt to sync but will notice that conflicting changes have been made. If conflicting changes were found, `ecconfigd` will warn you and provide you with instructions on how to resolve the conflict. You may need to manually resolve the conflicting configuration files. For instructions on changing configuration files, see [“Changing Configuration Files”](/momentum/4/conf-overview#conf.manual.changes). - -** 16.1.1.1. Repository Working Copy for Cluster** - -On the client side of the configuration management, each node has a working copy checkout of the repository located at `/opt/msys/ecelerity/etc/conf`. 
The following are descriptions of the subdirectories in a cluster configuration: - -* `global` – location for sharing cluster-wide configuration information between nodes - - Every node has access to this subdirectory. - -* `default` – contains your default configuration files, which are shared across multiple nodes - - `default` is the name of the default subcluster and represents the default configuration for nodes in that subcluster. - -* *`nodename`* – contains node-specific configuration files - - When you create a node-specific configuration file, a directory bearing the node name and a node-specific `ecelerity.conf` file are created on *all* nodes in the cluster. - - When nodes use common values for a number of options, if you wish you can put these options in a configuration file stored in the `global` directory rather than repeating them in each /opt/msys/ecelerity/etc/conf/*`nodename`*/ecelerity.conf file. However, you must add include statements to the /opt/msys/ecelerity/etc/conf/*`nodename`*/ecelerity.conf file on each node. - -* *`peer`* – any files shared by multiple nodes in a single subcluster - -By default the order is: - -``` -/opt/msys/ecelerity/etc -/opt/msys/ecelerity/etc/conf/global -/opt/msys/ecelerity/etc/conf/{NODENAME} -/opt/msys/ecelerity/etc/conf/default -``` - -Directories are separated by the standard path separator. - -If you wish to change the search order, set the environment variable `EC_CONF_SEARCH_PATH`. For more information about `EC_CONF_SEARCH_PATH`, see [*Configuring the Environment File*](/momentum/4/environment-file) . - ### Using Node-local `include` Files If you have any configurations specific to a particular node, fallback values for configuration options in that node-local configuration file *cannot* be included via the `/opt/msys/ecelerity/etc/conf/ecelerity.conf` file. For an included file, the parent file's path is added to the search path, so if a file is included from `/opt/msys/ecelerity/etc/conf/default/ecelerity.conf`, the search path becomes: @@ -109,4 +62,4 @@ Set `OPTION` in a `node-local.conf` file in all the /opt/msys/ecelerity/etc/conf Add an "include node-local.conf" statement to `/opt/msys/ecelerity/etc/default/ecelerity.conf`. -If there are major differences between node configurations, it is probably simpler to create a separate configuration file for each node as described in [“Repository Working Copy for Cluster”](/momentum/4/4-cluster#cluster.config_files.mgmt.cluster). \ No newline at end of file +If there are major differences between node configurations, it is probably simpler to create a separate configuration file for each node as described in [“Repository Working Copy for Cluster”](/momentum/4/4-cluster#cluster.config_files.mgmt.cluster). diff --git a/content/momentum/4/4-implementing-policy-scriptlets.md b/content/momentum/4/4-implementing-policy-scriptlets.md index 65b91ab1a..1ae394734 100644 --- a/content/momentum/4/4-implementing-policy-scriptlets.md +++ b/content/momentum/4/4-implementing-policy-scriptlets.md @@ -1,7 +1,7 @@ --- -lastUpdated: "03/26/2020" +lastUpdated: "05/21/2024" title: "Policy Scriptlets" -description: "Lua scripts provide you with the capability to express the logic behind your policy Aside from being very convenient policy scripts can be reloaded on the fly allowing real time adjustment of policy without interrupting service the Momentum implementation has extremely low overhead and tightly integrates with the event based..." 
+description: "Lua scripts provide you with the capability to express the logic behind your policy Momentum implementation has extremely low overhead and tightly integrates with the event based architecture" --- Lua scripts provide you with the capability to express the logic behind your policy. Aside from being very convenient (policy scripts can be reloaded on the fly, allowing real-time adjustment of policy without interrupting service), the Momentum implementation has extremely low overhead and tightly integrates with the event-based architecture, being able to suspend processing until asynchronous operations (such as DNS resolution, or database queries) complete. Note that variables used in a policy script are scoped locally and only persist in the particular policy script in which it is defined. Use the [validation context](/momentum/4/4-policy#policy.validation) to persist data over different policy phases and policy scripts. @@ -106,23 +106,11 @@ In the `default_policy.conf` file, you should also enable the datasource(s) suit ### Creating Policy Scripts -Following best practices when creating policy scripts is important, especially in a cluster environment when scripts are used on more than one node. Scripts should take advantage of Momentum's built-in revision control and be added to the repository using the [eccfg](/momentum/4/executable/eccfg) command. +Following best practices when creating policy scripts is important, especially in a cluster environment when scripts are used on more than one node. To create a policy script, perform the following: -1. Take steps to avoid conflicts. - - When working with files that are under revision control, it is important to take steps to avoid conflicts with changes made elsewhere in the system and to be able to track changes. For this reason, perform the following actions before creating any policy scripts: - - * Provision a user account for each admin user, so that the history in the repository is meaningful. - - * Ensure that you have the latest updates on the node where you are creating the scripts by running **`/opt/msys/ecelerity/bin/eccfg pull`** . - - ### Note - - Pay special attention to the instructions for using the **pull** command—if the configuration is updated your current directory may be invalidated. For more information, see [eccfg](/momentum/4/executable/eccfg). - -2. Create a directory for your script. +1. Create a directory for your script. Scripts should be created in a directory that is under revision control. Create a directory for your scripts in the working copy of the repository on a node where you intend to run the script: @@ -130,7 +118,7 @@ To create a policy script, perform the following: * If your scripts apply to only one node, create a node-specific directory. -3. Write your script. +2. Write your script. All scripts must @@ -175,7 +163,7 @@ To create a policy script, perform the following: These messages indicate a scriptlet error and give both the name of the script and the callout that failed. -4. Update your configuration to properly reference your script. +3. Update your configuration to properly reference your script. After writing a script and saving it to the repository, you must include it in the [`scriptlet`](/momentum/4/modules/scriptlet) module using a `script` stanza in your `ecelerity.conf` file. 
@@ -219,7 +207,7 @@ To create a policy script, perform the following: For additional details about editing your configuration files, see [“Changing Configuration Files”](/momentum/4/conf-overview#conf.manual.changes). -5. Check the validity of your script. +4. Check the validity of your script. Since a malformed configuration file will not reload, using **config reload** is one way of validating your scriptlet syntax. After your configuration has been changed, issue the command: @@ -244,7 +232,7 @@ To create a policy script, perform the following: However, please note that Message Systems does not provide support for the use of any third party tools included or referenced by name within our products or product documentation; support is the sole responsibility of the third party provider. -6. Debug your script. +5. Debug your script. Successfully reloading the configuration file does not guarantee that your script will run. Your script may be syntactically correct but have semantic errors. As always, you should test the functionality of scripts before implementing them in a production environment. @@ -288,24 +276,6 @@ To create a policy script, perform the following: note="No email received at this address", code="550"} ``` -7. Commit your changes. - - Once you are satisfied that your scripts function correctly, commit your changes. From the directory above your newly created directory, use **eccfg** to add both the directory and the script to the repository: - - * If you are adding a new script, issue the command - - **eccfg commit ––username *`admin_user`* ––password *`passwd`* ––add-all --message *`message here`*** . - - * If you are editing a script, you need not use the `––add-all` option. - -8. Repply your changes, if required. - - In all cases, edits made to the local configuration will need to be manually applied to the node via **config reload** . The **eccfg commit** command will not do it for you. If you have not reloaded your configuration, issue the console command: - - **`/opt/msys/ecelerity/bin/ec_console /tmp/2025 config reload`** - - If your changes affect more than one node, each node will check for an updated configuration each minute and automatically check out your changes and issue a **config reload** . - ### Examples This section includes examples of using policy scripts. @@ -336,4 +306,4 @@ Use `msg.priority` to read the priority of a message. ### Note -It is important not to overuse the priority setting. High priority messages should be reserved for messages that need to go out immediately, before other messages. Keeping high priority messages to a low percentage of the total message volume is important so the high priority messages do not cause delays for normal priority messages. A common use case for high priority messages is sending out password resets in the midst of a major mail campaign. \ No newline at end of file +It is important not to overuse the priority setting. High priority messages should be reserved for messages that need to go out immediately, before other messages. Keeping high priority messages to a low percentage of the total message volume is important so the high priority messages do not cause delays for normal priority messages. A common use case for high priority messages is sending out password resets in the midst of a major mail campaign. 
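To make the read accessor above concrete, here is a minimal, hypothetical sketch of a policy scriptlet that inspects `msg.priority` during the DATA phase. The module name, the choice of the `validate_data` callout, and the branching comment are illustrative assumptions (not part of the original text); the skeleton assumes the usual `require("msys.core")` / `msys.registerModule` scriptlet layout:

```
-- Minimal sketch (assumed module name "priority_example"):
-- read the documented msg.priority accessor and let processing continue.
require("msys.core");

local mod = {};

function mod:validate_data(msg, accept, vctx)
  local prio = msg.priority;  -- read the message priority
  if prio ~= nil then
    -- branch here if needed, e.g. skip optional, expensive checks for
    -- high-priority mail such as password resets
  end
  return msys.core.VALIDATE_CONT;
end

msys.registerModule("priority_example", mod);
```

As with any scriptlet, it still needs to be referenced from a `script` stanza in the `scriptlet` module and checked with **config reload**, as described in the steps above.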
diff --git a/content/momentum/4/4-preface.md b/content/momentum/4/4-preface.md index 8a114e1ad..50e7313f6 100644 --- a/content/momentum/4/4-preface.md +++ b/content/momentum/4/4-preface.md @@ -1,7 +1,7 @@ --- -lastUpdated: "03/27/2020" +lastUpdated: "05/21/2024" title: "Preface" -description: "Certain typographical conventions are used in this document Take a moment to familiarize yourself with the following examples Text in this style indicates executable programs such as ecelerity Text in this style is used when referring to file names For example The ecelerity conf file is used to configure Momentum..." +description: "Certain typographical conventions are used in this document Take a moment to familiarize yourself with the following examples" --- ## Typographical Conventions Used in This Document @@ -37,5 +37,5 @@ The preceding line would appear unbroken in a log file but, if left as is, it wo Where possible, Unix command-line commands are broken using the ‘`\`’ character, making it possible to copy and paste commands. For example: -/opt/msys/ecelerity/bin/eccfg bootstrap --clustername *`name`* --username=admin \ - --password=*`admin cluster_host`* +sudo -u ecuser \ + /opt/msys/ecelerity/bin/ec_show -m *`msg-id`* diff --git a/content/momentum/4/add-remove-platform-nodes.md b/content/momentum/4/add-remove-platform-nodes.md index 73ffe2bc8..6991bc31c 100644 --- a/content/momentum/4/add-remove-platform-nodes.md +++ b/content/momentum/4/add-remove-platform-nodes.md @@ -1,7 +1,7 @@ --- -lastUpdated: "03/26/2020" +lastUpdated: "05/21/2024" title: "Adding and Removing Platform Nodes" -description: "This chapter describes how to add and remove a Platform node MTA Cassandra to and from an existing Momentum 4 2 cluster This section describes how to add a Platform node which involves installing the new node then making some manual configuration changes on the new node and on the..." +description: "This chapter describes how to add and remove a Platform node MTA Cassandra to and from an existing Momentum 4 cluster" --- @@ -73,13 +73,10 @@ These instructions apply to Momentum 4.2.1.*`x`*, where `x` > or = `11` } ``` -2. Use eccfg to commit the modified configuration, substituting your own admin password if environmental variable $ADMINPASS is not defined. - - `/opt/msys/ecelerity/bin/eccfg commit -u admin -p $ADMINPASS -m 'Add new Platform node to ecelerity cluster'` -3. Restart affected services. +2. Restart affected services. `service ecelerity restart` -4. Update the nginx configuration files. +3. Update the nginx configuration files. 1. Update the `click_proxy_upstream.conf` nginx configuration file by adding a "server" line for the new Platform host. @@ -132,12 +129,7 @@ These instructions apply to Momentum 4.2.1.*`x`*, where `x` > or = `11` 2. Install the meta package `msys-role-platform`. `yum install -y --config momentum.repo --enablerepo momentum msys-role-platform` -3. Bootstrap the Ecelerity configuration from the first server, substituting your own admin password if environmental variable $ADMINPASS is not defined. . - - chown -R ecuser:ecuser /opt/msys/ecelerity/etc/ - cd /opt/msys/ecelerity/etc/ - ../bin/eccfg bootstrap --clustername default -u admin -p $ADMINPASS *`FIRST.NODE.FQDN`* -4. Copy the existing configuration files from the first Platform node to the new node, substituting or setting the new node's hostname for environmental variable $NEWNODE. +3. 
Copy the existing configuration files from the first Platform node to the new node, substituting or setting the new node's hostname for environmental variable $NEWNODE. ``` # execute this on the first Platform node @@ -154,7 +146,7 @@ These instructions apply to Momentum 4.2.1.*`x`*, where `x` > or = `11` done ``` -5. Update the `cassandra.yaml` file on the new Platform node to replace `listen_address` with the correct local IP address for the new node. +4. Update the `cassandra.yaml` file on the new Platform node to replace `listen_address` with the correct local IP address for the new node. ``` # example @@ -163,14 +155,14 @@ These instructions apply to Momentum 4.2.1.*`x`*, where `x` > or = `11` listen_address: 10.77.0.245 ``` -6. Start Cassandra on the new node. +5. Start Cassandra on the new node. `# service msys-cassandra start` ### Note Depending on the amount of existing data in your Cassandra database, this may falsely report as failed (because the init script only waits a fixed amount of time for the service to start). Perform the next step below to determine the real status. If you do not get the indicated result, submit the start service command again, and if the desired result still does not result, check logs at `/var/log/msys-cassandra/` for error messages. -7. After Cassandra starts, check that the database has been replicated (UN means Up Normal) using `service msys-cassandra status` or `/opt/msys/3rdParty/cassandra/bin/nodetool status`. You should expect to see the new node participating in the Cassandra cluster. +6. After Cassandra starts, check that the database has been replicated (UN means Up Normal) using `service msys-cassandra status` or `/opt/msys/3rdParty/cassandra/bin/nodetool status`. You should expect to see the new node participating in the Cassandra cluster. ``` service msys-cassandra status @@ -187,7 +179,7 @@ These instructions apply to Momentum 4.2.1.*`x`*, where `x` > or = `11` UN 10.77.0.227 203.12 KB 256 27.3% 5525b410-3f3e-49ec-a176-0efa2383f3f4 rack1 ``` -8. Configure RabbitMQ on the new platform node. +7. Configure RabbitMQ on the new platform node. ``` # kill off qpidd service, which (if running) can interfere with RabbitMQ @@ -208,7 +200,7 @@ These instructions apply to Momentum 4.2.1.*`x`*, where `x` > or = `11` $RABBITMQCTL delete_user guest ``` -9. Start all remaining services on the new node. +8. Start all remaining services on the new node. ``` /etc/init.d/msys-riak start @@ -292,4 +284,4 @@ Perform the following steps on each Analytics node in your cluster. 3. On all original Platform nodes, the Cassandra database will have duplicate keys that have now been distributed to the added node. Run the following command on each Platform/Cassandra node: - `/opt/msys/3rdParty/cassandra/bin/nodetool cleanup` \ No newline at end of file + `/opt/msys/3rdParty/cassandra/bin/nodetool cleanup` diff --git a/content/momentum/4/byb-os.md b/content/momentum/4/byb-os.md deleted file mode 100644 index 59e39e27a..000000000 --- a/content/momentum/4/byb-os.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -lastUpdated: "03/26/2020" -title: "Operating System" -description: "Momentum 4 supports the latest version of the following operating systems Red Hat Enterprise Linux 6 x 86 64 Cent OS 6 Ensure that you have the correct operating system packages installed Prepare as many machines as you plan to use for Momentum 4 An optimum installation uses an odd..." 
---- - -Momentum 4 supports the latest version of the following operating systems: - -* Red Hat Enterprise Linux 6 (x86_64) - -* CentOS 6 - -Ensure that you have the correct operating system packages installed. Prepare as many machines as you plan to use for Momentum 4\. An optimum installation uses an odd number of three or more Analytics *and* three or more Platform nodes. **Note:** Do not mix operating systems. \ No newline at end of file diff --git a/content/momentum/4/conf-overview.md b/content/momentum/4/conf-overview.md index 2df947b8a..fa25722a9 100644 --- a/content/momentum/4/conf-overview.md +++ b/content/momentum/4/conf-overview.md @@ -1,7 +1,7 @@ --- -lastUpdated: "03/26/2020" +lastUpdated: "05/21/2024" title: "Configuration Overview" -description: "Momentum is an exceptionally powerful all in one email infrastructure solution As such it can be configured to provide the full range of digital messaging channels and more This chapter gives an overview of Momentum's configuration and provides the background needed to configure your system to meet your specific application..." +description: "Momentum is an exceptionally powerful all in one email infrastructure solution As such it can be configured to provide the full range of digital messaging channels and more This chapter gives an overview of Momentum's configuration and provides the background needed to configure your system to meet your specific application" --- @@ -25,8 +25,6 @@ The `ecelerity.conf` file is the master configuration file for Momentum; while o * [`msgc_server.conf`](/momentum/4/config/ref-msgc-server-conf) - Momentum cluster messaging bus configuration file -If you make changes to a configuration file, be sure to use the [Momentum Configuration Server](/momentum/4/conf-overview#conf.ecconfigd) to commit your changes. - ### Comments and Whitespace In common with many other Unix configuration files, Momentum's configuration files use the `#` (commonly referred to as "hash" or "pound" sign) symbol to introduce a single line comment. Whitespace is unimportant in the various configuration stanza; feel free to pad the whitespace as you see fit for maximum readability. @@ -105,43 +103,11 @@ Finally, if the scope instance containing the change was only encountered in rea Any configuration files included with the `readonly_include` directive are read-only. Any configuration files included multiple times (overall, not necessarily from the same file) are read-only. Any configuration files loaded from a URI with a scheme other than 'file://', 'persist://' are read-only. All other configuration files are considered writable. -### Configuration Management (ecconfigd) - -Both single-node and clustered installations take advantage of Momentum's revision control system for configuration files. Any configuration changes should be committed to the Momentum Configuration Server **ecconfigd**, henceforth referred to as the configuration server. On start up, the script in the `/etc/init.d` directory runs the **ecconfigd** as a service on the node designated as Manager. For details about the configuration server, see [ecconfigd](/momentum/4/executable/ecconfigd). For details about the **ecconfigd** service in a cluster configuration, see [“Cluster-specific Configuration Management”](/momentum/4/4-cluster#cluster.config_files.mgmt). - -Use **ecconfigd_ctl** to start, stop, or restart the configuration server. For details about this command, see [ecconfigd_ctl](/momentum/4/executable/ecconfigd-ctl). 
- -Momentum's version control management tool is **eccfg**. It is used to track and update configuration file changes. For details about using this tool, see [eccfg](/momentum/4/executable/eccfg). - -** 15.1.3.1. Repository Working Copy for Single Node** - -The repository working copy directories are located at `/opt/msys/ecelerity/etc/conf/`. There are a number of directories below this. What they are depends upon whether you have installed Momentum in a single-node or cluster configuration and whether you have defined any subclusters. The following are descriptions of the subdirectories in a single-node configuration: - -* `global` – This directory exists but is not used in a single-node configuration. - -* `default` – files used by a single-node configuration - -By default the order is: - -``` -/opt/msys/ecelerity/etc -/opt/msys/ecelerity/etc/conf/global -/opt/msys/ecelerity/etc/conf/default -``` - -Directories are separated by the standard path separator. - -If you wish to change the search order, set the environment variable `EC_CONF_SEARCH_PATH`. For more information about `EC_CONF_SEARCH_PATH`, see [*Configuring the Environment File*](/momentum/4/environment-file) . - -For details about the working copy of the repository in a cluster configuration, see [“Repository Working Copy for Cluster”](/momentum/4/4-cluster#cluster.config_files.mgmt.cluster). - ### Changing Configuration Files -Since the configuration files are under revision control, it is important to take steps to avoid conflicts with changes made elsewhere in the system and to be able to track changes. For this reason, perform the following actions when editing any configuration files or script files: +It is important to take steps to avoid conflicts with changes made elsewhere in the system and to be able to track changes. For this reason, perform the following actions when editing any configuration files or script files: -1. Familiarize yourself with the Momentum repository management tool [eccfg](/momentum/4/executable/eccfg). - -2. Navigate to the appropriate directory: +1. Navigate to the appropriate directory: * For a single-node configuration, navigate to `/opt/msys/ecelerity/etc/conf/default` . @@ -149,28 +115,18 @@ Since the configuration files are under revision control, it is important to tak * For node-specific configuration, navigate to the sub-directory on the cluster manager that is below `/opt/msys/ecelerity/etc/conf` and bears the name of the node: /opt/msys/ecelerity/etc/conf/*`nodename`*. -3. Make sure that the working copy of the repository is up-to-date by issuing the command: - - eccfg pull --username *`name`* --password *`passwd`* -4. Make the necessary changes to the configuration file using the text editor of your choice. +2. Make the necessary changes to the configuration file using the text editor of your choice. -5. Test the validity of your changes using the [validate_config](/momentum/4/executable/validate-config) script: +3. Test the validity of your changes using the [validate_config](/momentum/4/executable/validate-config) script: `/opt/msys/ecelerity/bin/validate_config` -6. Check that your changes are valid by reloading the configuration before committing it. Issue the following command: +4. Check that your changes are valid by reloading the configuration before committing it. 
Issue the following command: `/opt/msys/ecelerity/bin/ec_console /tmp/2025 config reload` If there are any errors, the new configuration will not load and the error message, `"Reconfigure failed"`, will be displayed. -7. Once you are satisfied with your changes, commit them using the following command: - - /opt/msys/ecelerity/bin/eccfg commit --username *`admin_user`* \ - --password *`password`* - - If you are configuring a cluster, you should allow about a minute or so for the changes to propagate to all nodes. - -8. Implement your changes. +5. Implement your changes. * For a single-node configuration, open the console and issue the command: @@ -198,9 +154,7 @@ Avoid leaving uncommitted changes pending, especially in the working copy on a n As discussed in [“Using the `include` and `readonly_include` Directives”](/momentum/4/conf-overview#conf.files.includes), you can split your Momentum configuration into any number of configuration files. However, if you add new configuration files you must also add them to the repository. Follow these steps: -1. Familiarize yourself with the Momentum repository management tool [eccfg](/momentum/4/executable/eccfg). - -2. Navigate to the appropriate directory for the changes you intend to make. You will save your files to a different directory on a different node depending upon how narrowly or widely your configuration applies. +1. Navigate to the appropriate directory for the changes you intend to make. You will save your files to a different directory on a different node depending upon how narrowly or widely your configuration applies. * For a single-node configuration, navigate to `/opt/msys/ecelerity/etc/conf/default`. @@ -208,30 +162,20 @@ As discussed in [“Using the `include` and `readonly_include` Directives”](/m * For node-specific configuration, create a sub-directory on the cluster manager that is below `/opt/msys/ecelerity/etc/conf` and bears the name of the node: /opt/msys/ecelerity/etc/conf/*`nodename`*. Copy the appropriate configuration files from the `default` directory. -3. Make sure that the working copy of the repository is up-to-date by issuing the command: - - eccfg pull --username *`name`* --password *`passwd`* -4. Create and save the new configuration file. +2. Create and save the new configuration file. -5. Open the appropriate configuration file and include the new file using the `include` directive. +3. Open the appropriate configuration file and include the new file using the `include` directive. -6. Test the validity of your changes using the [validate_config](/momentum/4/executable/validate-config) script: +4. Test the validity of your changes using the [validate_config](/momentum/4/executable/validate-config) script: `/opt/msys/ecelerity/bin/validate_config` -7. Check that your changes are valid by reloading the configuration before committing it. Issue the following command: +5. Check that your changes are valid by reloading the configuration before committing it. Issue the following command: `/opt/msys/ecelerity/bin/ec_console /tmp/2025 config reload` If there are any errors, the new configuration will not load and the error message, `"Reconfigure failed"`, will be displayed. -8. Once you are satisfied with your changes, commit them using the following command: - - /opt/msys/ecelerity/bin/eccfg commit --username *`admin_user`* \ - --password *`password`* - - If you are configuring a cluster, you should allow about a minute or so for the changes to propagate to all nodes. - -9. Implement your changes. +6. 
Implement your changes. * For a single-node configuration, open the console and issue the command: @@ -249,4 +193,4 @@ As discussed in [“Using the `include` and `readonly_include` Directives”](/m Some configuration changes require restarting the ecelerity process, as documented throughout this guide. Running the **`config reload`** command will not suffice. - * For a node-specific configuration, use the [ec_ctl](/momentum/4/executable/ec-ctl) command to restart the ecelerity process. The **`config reload`** command will not load configuration changes. \ No newline at end of file + * For a node-specific configuration, use the [ec_ctl](/momentum/4/executable/ec-ctl) command to restart the ecelerity process. The **`config reload`** command will not load configuration changes.
diff --git a/content/momentum/4/conf-performance-tips.md b/content/momentum/4/conf-performance-tips.md new file mode 100644 index 000000000..0a96854e2 --- /dev/null +++ b/content/momentum/4/conf-performance-tips.md @@ -0,0 +1,136 @@ +--- +lastUpdated: "05/21/2024" +title: "Performance Tips" +description: "This chapter provides you with some tips to optimize the Momentum performance ratings" +--- +
+Momentum is an exceptionally powerful all-in-one email infrastructure solution. For several reasons, however, the default configuration shipped with the installation bundle does not run at full speed for all use cases. This chapter provides you with some tips to optimize Momentum performance. +
+## CPU Optimization +
+With the [Supercharger](/momentum/4/licensed-features-supercharger) licensed feature, Momentum runs on top of several [event loop](/momentum/4/multi-event-loops) schedulers and uses multicore CPUs with improved efficiency. In this model, it is also possible to assign dedicated event loops to listeners (e.g. the HTTP one) with the desired concurrency. By contrast, the default configuration relies solely on thread pools to offload specific tasks, so Momentum keeps running on top of the original master event loop only and can occasionally be bottlenecked. +
+The Supercharger's *"75% of CPU cores"* formula works fine on systems that are largely SMTP-driven. For systems with larger [message generation](/momentum/4/message-gen) flows (i.e., REST injections), the number of event loops can be limited to 4 or 5, with higher concurrency values assigned to the `msg_gen` thread pools (see the `gen_transactional_threads` configuration [here](/momentum/4/modules/msg-gen)). For instance: +
+``` +msg_gen { + (...) + gen_transactional_threads = 4 +} +``` +
+Also, the CPU thread pool is expected to handle many functions in the REST flows, so it is recommended to increase its concurrency from the default value of 4: +
+``` +ThreadPool "CPU" { + concurrency = 8 +} +``` +
+Last, it is recommended to assign separate event loops to listeners to reduce latency and improve overall performance. For instance, the following configuration assigns dedicated event loops to the ESMTP and HTTP listeners: +
+``` +ESMTP_Listener { + event_loop = "smtp_pool" + (...) +} +(...) +HTTP_Listener { + event_loop = "http_pool" + (...) +} +``` +
+## Better Caching +
+Momentum has some built-in caches that can be tuned to improve performance. The following are the most important ones: +
+### Generic Getter +
+This cache is used for parameters that are not in a binding/domain scope, so anything that is global, or module configuration, exists in the generic getter cache.
This cache gets a lot of traffic, so setting it in `ecelerity.conf` to a few million entries is reasonable: +
+``` +generic_getter_cache_size = 4000000 +``` +
+### Regex Match +
+The match cache saves results of queries against regular expression domain stanzas. This cache is enabled by default, but its [size](/momentum/4/config/ref-match-cache-size) is very small (16384 entries). Making it larger is a great idea, especially if you are using any regular expression domain stanzas: +
+``` +match_cache_size = 2000000 +``` +
+## Boosting `jemalloc` Performance +
+`jemalloc` has demonstrated excellent performance and stability. Because of that, it became Momentum's default memory allocator. However, it is possible to get even more from it by tuning the `MALLOC_CONF` environment variable. +
+Add these lines to the `/opt/msys/ecelerity/etc/environment` file (or create it): +
+``` +MALLOC_CONF="background_thread:true" +export MALLOC_CONF +``` +
+then (re)start the `ecelerity` service. +
+## Tuning Lua +
+Lua has a garbage collector that can be tuned to improve performance. The following are some recommended settings: +
+In the `ecelerity.conf` file: +
+``` +ThreadPool "gc" { + concurrency = 10 +} +(...) +scriptlet "scriptlet" { + (...) + gc_every = 20 + gc_step_on_recycle = true + gc_stepmul = 300 + gc_threadpool = "gc" + gc_trace_thresh = 1000 + gc_trace_xref_thresh = 1000 + global_trace_interval = 13 + max_uses_per_thread = 5000 + reap_interval = 13 + use_reusable_thread = true +} +``` +
+Enforce these settings in the `/opt/msys/ecelerity/etc/environment` file: +
+``` +USE_TRACE_THREADS=true +export USE_TRACE_THREADS +LUA_USE_TRACE_THREADS=true +export LUA_USE_TRACE_THREADS +LUA_NUM_TRACE_THREADS=8 +export LUA_NUM_TRACE_THREADS +LUA_NON_SIGNAL_COLLECTOR=true +export LUA_NON_SIGNAL_COLLECTOR +``` +
+## Miscellaneous Configuration +
+These are `ecelerity.conf` settings that are known to improve performance for different Momentum tasks. Before applying them, however, review their documentation and make sure they fit your environment and use cases: +
+``` +fully_resolve_before_smtp = false +growbuf_size = 32768 +inline_transfail_processing = 0 +initial_hash_buckets = 64 +keep_message_dicts_in_memory = true +large_message_threshold = 262144 +max_resident_active_queue = 1000 +max_resident_messages = 100000 +``` +
+## Miscellaneous Tips +
+- Don't forget to [adjust `sysctl` settings](/momentum/4/byb-sysctl-conf) for the best TCP connection performance; +- Prefer [chunk_logger](/momentum/4/modules/chunk-logger) over logging to `paniclog`. The reasons are taken from the `chunk_logger` page: +
+> _Logging to the_ `paniclog` _in the scheduler thread (the main thread) can limit throughput and cause watchdog kills. (...)
[It] involves disk I/O, and writing to the_ `paniclog` _in the scheduler thread may block its execution for a long time, thereby holding up other tasks in the scheduler thread and decreasing throughput._ diff --git a/content/momentum/4/config/ref-msgc-server-conf.md b/content/momentum/4/config/ref-msgc-server-conf.md index c964a7d2a..bd7697f7e 100644 --- a/content/momentum/4/config/ref-msgc-server-conf.md +++ b/content/momentum/4/config/ref-msgc-server-conf.md @@ -1,15 +1,13 @@ --- -lastUpdated: "03/26/2020" +lastUpdated: "05/21/2024" title: "msgc_server.conf File" -description: "The msgc server conf file contains the configuration relevant to the cluster messaging bus This file is referenced from the eccluster conf file on the cluster manager and from the ecelerity cluster conf file on nodes It MUST live in the global portion of the repository as it must be..." +description: "The msgc server conf file contains the configuration relevant to the cluster messaging bus This file is referenced from the eccluster conf file on the cluster manager and from the ecelerity cluster conf file on nodes" --- The `msgc_server.conf` file contains the configuration relevant to the cluster messaging bus. This file is referenced from the `eccluster.conf` file on the cluster manager and from the `ecelerity-cluster.conf` file on nodes. It MUST live in the global portion of the repository, as it must be the same for all nodes in the cluster. ### Note -Restart [Section 15.1.3, “Configuration Management (**ecconfigd**)”](conf.overview#conf.ecconfigd "15.1.3. Configuration Management (ecconfigd)") after making extensive changes to `msgc_server.conf`, such as adding multiple nodes. Use the command **`/etc/init.d/ecconfigd restart`** . - For a discussion of scopes and fallbacks, see [“Configuration Scopes and Fallback”](/momentum/4/4-ecelerity-conf-fallback). For a summary of all the non-module specific configuration options, refer to [*Configuration Options Summary*](/momentum/4/config-options-summary) . @@ -44,4 +42,4 @@ The msgcserver_listener mediates between msgc_servers and between msgc_servers a - \ No newline at end of file + diff --git a/content/momentum/4/environment-file.md b/content/momentum/4/environment-file.md index 2fbd2b466..55b843543 100644 --- a/content/momentum/4/environment-file.md +++ b/content/momentum/4/environment-file.md @@ -1,7 +1,7 @@ --- -lastUpdated: "03/26/2020" +lastUpdated: "05/21/2024" title: "Configuring the Environment File" -description: "Environment variables should be set or adjusted on startup If Momentum is started up using the ec ctl script any environment variables included in the environment file will be set Environment variables can be set in the opt msys ecelerity etc environment file The variables that can be set are..." +description: "Environment variables should be set or adjusted on startup If Momentum is started up using the ec ctl script any environment variables included in the environment file will be set" --- Environment variables should be set or adjusted on startup. If Momentum is started up using the [ec_ctl](/momentum/4/executable/ec-ctl) script, any environment variables included in the `environment` file will be set. @@ -16,10 +16,6 @@ Environment variables can be set in the `/opt/msys/ecelerity/etc/environment` fi This parameter should match what you have configured for your Control_Listener in `ecelerity.conf`. 
-* `EC_CONF_SEARCH_PATH` – this value defines the search path used by [**ecconfigd**](/momentum/4/conf-overview#conf.ecconfigd) to determine the applicable configuration file - - Add this variable to the environment file if you wish to change the search order. - * `EC_DIGEST_REALM` – MD5 digest realm (see [ec_md5passwd](/momentum/4/executable/ec-md-5-passwd).) * `ECELERITY_DNS_BACKEND` – the variable for setting the DNS resolver. @@ -60,8 +56,8 @@ Environment variables can be set in the `/opt/msys/ecelerity/etc/environment` fi * `TRY` – number of times to loop waiting for Momentum to start up - For examples of usage, see [ec_ctl](/momentum/4/executable/ec-ctl) and [ecconfigd_ctl](/momentum/4/executable/ecconfigd-ctl). + For an example of usage, see [ec_ctl](/momentum/4/executable/ec-ctl). ### Note -The `GIMLI_WATCHDOG_INTERVAL`, `GIMLI_WATCHDOG_START_INTERVAL`, and `GIMLI_WATCHDOG_STOP_INTERVAL` variables set the interval for restarting Momentum when it has been unresponsive. For more details execute **`man -M /opt/msys/gimli/man monitor`** . \ No newline at end of file +The `GIMLI_WATCHDOG_INTERVAL`, `GIMLI_WATCHDOG_START_INTERVAL`, and `GIMLI_WATCHDOG_STOP_INTERVAL` variables set the interval for restarting Momentum when it has been unresponsive. For more details execute **`man -M /opt/msys/gimli/man monitor`** . diff --git a/content/momentum/4/executable/create-ssl-cert.md b/content/momentum/4/executable/create-ssl-cert.md index 3731f9fde..bd0a1882f 100644 --- a/content/momentum/4/executable/create-ssl-cert.md +++ b/content/momentum/4/executable/create-ssl-cert.md @@ -1,7 +1,7 @@ --- -lastUpdated: "03/26/2020" +lastUpdated: "05/21/2024" title: "create_ssl_cert" -description: "create ssl cert create a self signed SSL certificate opt msys ecelerity bin create ssl cert service hostname prefix user During installation self signed SSL certificates valid for one year are created for some services Use this command to create a new certificate when the original expires When a certificate..." +description: "During installation self signed SSL certificates valid for one year are created for some services Use this command to create a new certificate when the original expires" --- @@ -16,19 +16,12 @@ create_ssl_cert — create a self-signed SSL certificate ## Description -During installation, self-signed SSL certificates valid for one year are created for some services. Use this command to create a new certificate when the original expires. When a certificate expires, you will see an error such as the following: +During installation, self-signed SSL certificates valid for one year are created. Use this command to create a new certificate when the original expires. When a certificate expires, you will get an error message. -``` -ERROR: premature EOF in "svn update '--config-dir' '/opt/msys/ecelerity/etc/.eccfg' » -'--username' 'ecuser' '--no-auth-cache' '--non-interactive' '--trust-server-cert' '.'" -svn: OPTIONS of 'https://mail2:2027/config/default/boot': Server certificate » -verification failed: certificate has expired, issuer is not trusted -``` +To create a new certificate, first stop the appropriate service and remove the old certificate. Then issue the **create_ssl_cert** command: -To create a new certificate, first stop the appropriate service and remove the old certificate. Then issue the **create_ssl_cert** command. 
For example, the following command creates a certificate for the **ecconfigd** service: - -shell> /opt/msys/ecelerity/bin/create_ssl_cert ecconfigd *`hostname`* \ -/var/ecconfigd/apache ecuser +shell> /opt/msys/ecelerity/bin/create_ssl_cert *`service`* *`hostname`* \ +*`prefix`* *`user`* The parameters passed to this command are as follows: @@ -38,17 +31,7 @@ The parameters passed to this command are as follows:
-The following services can be specified with this command: - -* `ecconfigd` - Momentum Configuration Server - - The **ecconfigd** service requires SSL and a certificate is created when Momentum is installed. For this reason, you will see the following message during installation: - - ``` - Generating a 2048 bit RSA private key - ... - writing new private key to '/var/ecconfigd/apache/server.key' - ``` +Currently, only this service can be specified with this command: * `msyspg` - Postgresql Server @@ -68,7 +51,7 @@ Specify the hostname of the machine that hosts the service for which you are cre
-For the **ecconfigd** service, use `/var/ecconfigd/apache`. For the **msyspg** service, use `/opt/msys/3rdParty/share/postgresql`. +For the **msyspg** service, use `/opt/msys/3rdParty/share/postgresql`.
@@ -76,7 +59,7 @@ For the **ecconfigd** service, use `/var/ecconfigd/apache`. For the **msyspg** s
-For the **ecconfigd** service, use `ecuser`. For the **msyspg** service, use `msyspg`. If you do not specify a user, the user defaults to `ecuser`. +For the **msyspg** service, use `msyspg`. If you do not specify a user, the user defaults to `ecuser`.
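For example, renewing the PostgreSQL server certificate could look like the following sketch, built from the parameters documented above (the hostname is a placeholder; stop the service and remove the old certificate first, as described earlier):

shell> /opt/msys/ecelerity/bin/create_ssl_cert msyspg db1.example.com \
/opt/msys/3rdParty/share/postgresql msyspg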
@@ -85,4 +68,4 @@ For the **ecconfigd** service, use `ecuser`. For the **msyspg** service, use `ms ## See Also -[ecconfigd](/momentum/4/executable/ecconfigd), [“Running the PostgreSQL Server”](/momentum/4/postgresql-server) \ No newline at end of file +[“Running the PostgreSQL Server”](/momentum/4/postgresql-server) diff --git a/content/momentum/4/executable/credmgr.md b/content/momentum/4/executable/credmgr.md index 805951bbf..0311c17b2 100644 --- a/content/momentum/4/executable/credmgr.md +++ b/content/momentum/4/executable/credmgr.md @@ -1,7 +1,7 @@ --- -lastUpdated: "03/26/2020" +lastUpdated: "05/21/2024" title: "credmgr" -description: "credmgr manipulate credentials used with the securecreds module opt msys ecelerity bin credmgr create db opt msys ecelerity bin credmgr create key opt msys ecelerity bin credmgr del cred opt msys ecelerity bin credmgr get cred opt msys ecelerity bin credmgr set cred credmgr is used in conjunction with the..." +description: "Use it to create the credentials database and the credentials key and to set, get, and delete credentials" --- @@ -116,10 +116,6 @@ Facility names are as follows: * `proxy` – any proxy server service -* `eccfg` – version control management tool. Use the hostname `ecconfigd` and the username `ecuser` with this facility. - -* `ecconfigd` – configuration management server. Use the hostname `_ecconfigd_` and the username `ecuser` with this facility -
-p *`password`* , --password=*`password`*
@@ -195,4 +191,4 @@ Examples of usage follow: ## See Also -[“securecreds – Password Encryption/Credential Retrieval”](/momentum/4/modules/securecreds) \ No newline at end of file +[“securecreds – Password Encryption/Credential Retrieval”](/momentum/4/modules/securecreds) diff --git a/content/momentum/4/executable/ec-rotate.md b/content/momentum/4/executable/ec-rotate.md index dd5156e21..742bbeb1f 100644 --- a/content/momentum/4/executable/ec-rotate.md +++ b/content/momentum/4/executable/ec-rotate.md @@ -1,7 +1,7 @@ --- -lastUpdated: "03/26/2020" +lastUpdated: "05/21/2024" title: "ec_rotate" -description: "ec rotate rotate Momentum logfiles opt msys ecelerity bin ec rotate c compress opt msys ecelerity bin ec rotate conf path to config file opt msys ecelerity bin ec rotate d default opt msys ecelerity bin ec rotate l logfile path to logfile opt msys ecelerity bin ec rotate logdir..." +description: "Momentum opens its logfiles at startup and maintains an open filehandle to them throughout its life cycle When you invoke ec_rotate the mainlog ec file is moved to mainlog ec 1 Momentum is instructed to re open its logfiles and a new mainlog ec file is created" --- @@ -50,14 +50,8 @@ The `ec_rotate.conf` file specifies the configuration for this command. By defau * `/var/log/ecelerity/smpplog.ec` -* `/var/log/ecelerity/ecconfigd.log` - * `/var/log/ecelerity/httplog.ec` -* `/var/ecconfigd/apache/access.log` - -* `/var/ecconfigd/apache/error.log` - * `/var/log/ecelerity/adaptive` ### Note @@ -142,4 +136,4 @@ The following is an example in which the logfiles are in `/var/log/email/` and 3 ``` /opt/msys/ecelerity/bin/ec_rotate -l /var/log/email/mainlog.ec \ -l /var/log/email/paniclog.ec -r 3 -``` \ No newline at end of file +``` diff --git a/content/momentum/4/executable/eccfg.md b/content/momentum/4/executable/eccfg.md index 3e0cbe12a..ef7aa0a95 100644 --- a/content/momentum/4/executable/eccfg.md +++ b/content/momentum/4/executable/eccfg.md @@ -1,9 +1,13 @@ --- -lastUpdated: "03/26/2020" +lastUpdated: "05/21/2024" title: "eccfg" -description: "eccfg Subversion repository management opt msys ecelerity bin eccfg h opt msys ecelerity bin eccfg bootstrap quiet debug username name password pass wc path clustername name singlenode host port opt msys ecelerity bin eccfg clone quiet debug username name password pass wc path source destination opt msys ecelerity bin eccfg..." +description: "eccfg is the Momentum version control management tool used to track and update configuration file changes" --- +| **WARNING** | +| -- | +| **This feature was deprecated in version 4.3.1.** | + ## Name @@ -389,4 +393,4 @@ Request that the cluster's nominal MASTER attempt to resolve the CONFLICT state. ## See Also -[ecconfigd](/momentum/4/executable/ecconfigd), [Section 15.1.3, “Configuration Management (**ecconfigd**)”](conf.overview#conf.ecconfigd "15.1.3. 
Configuration Management (ecconfigd)") \ No newline at end of file +[ecconfigd](/momentum/4/executable/ecconfigd) diff --git a/content/momentum/4/executable/ecconfigd-ctl.md b/content/momentum/4/executable/ecconfigd-ctl.md index ec45a6b84..147613ea3 100644 --- a/content/momentum/4/executable/ecconfigd-ctl.md +++ b/content/momentum/4/executable/ecconfigd-ctl.md @@ -1,9 +1,13 @@ --- -lastUpdated: "03/26/2020" +lastUpdated: "05/21/2024" title: "ecconfigd_ctl" -description: "ecconfigd ctl start stop or restart the Momentum Configuration Server opt msys ecelerity bin ecconfigd ctl start stop restart ecconfigd ctl is a shell script that you can use to start stop or restart the Momentum configuration server ecconfigd The TRY environment variable is one of the variables that may..." +description: "ecconfigd_ctl is a shell script that you can use to start, stop, or restart the Momentum configuration server ecconfigd" --- +| **WARNING** | +| -- | +| **This feature was deprecated in version 4.3.1.** | + ## Name @@ -25,4 +29,4 @@ If the `EXTRA_ARGS` environment variable is set, its contents will be passed as ## See Also -[ecconfigd](/momentum/4/executable/ecconfigd), [*Configuring the Environment File*](/momentum/4/environment-file) \ No newline at end of file +[ecconfigd](/momentum/4/executable/ecconfigd), [*Configuring the Environment File*](/momentum/4/environment-file) diff --git a/content/momentum/4/executable/ecconfigd.md b/content/momentum/4/executable/ecconfigd.md index 40ff80c13..127f19ded 100644 --- a/content/momentum/4/executable/ecconfigd.md +++ b/content/momentum/4/executable/ecconfigd.md @@ -1,9 +1,13 @@ --- -lastUpdated: "03/26/2020" +lastUpdated: "05/21/2024" title: "ecconfigd" -description: "ecconfigd Momentum Configuration Server opt msys ecelerity sbin ecconfigd log level level debug ecconfigd is the Momentum Configuration Server Configuration files are maintained in a version control repository and exported via this service The user for this service is ecuser The associated password is created during installation The service consists..." +description: "ecconfigd is the Momentum Configuration Server Configuration files are maintained in a version control repository and exported via this service The user for this service is ecuser The associated password is created during installation" --- +| **WARNING** | +| -- | +| **This feature was deprecated in version 4.3.1.** | + ## Name @@ -55,4 +59,4 @@ Set the log file verbosity. The log level is a number from 0 to 7, where higher ## See Also -[ecconfigd_ctl](/momentum/4/executable/ecconfigd-ctl), [eccfg](/momentum/4/executable/eccfg), [Section 15.1.3, “Configuration Management (**ecconfigd**)”](conf.overview#conf.ecconfigd "15.1.3. 
Configuration Management (ecconfigd)"), [create_ssl_cert](/momentum/4/executable/create-ssl-cert) \ No newline at end of file +[ecconfigd_ctl](/momentum/4/executable/ecconfigd-ctl), [eccfg](/momentum/4/executable/eccfg)
diff --git a/content/momentum/4/hardware-config.md b/content/momentum/4/hardware-config.md index 5d602a41c..7221a026b 100644 --- a/content/momentum/4/hardware-config.md +++ b/content/momentum/4/hardware-config.md @@ -1,43 +1,43 @@ --- -lastUpdated: "03/26/2020" +lastUpdated: "05/01/2024" title: "Hardware Deployment Configuration" -description: "The Single Node Lab system is designed to support multiple functions in your environment including development testing staging and other non production roles The system deploys to a single server supporting all Momentum functionality The Single Node Lab System should not be used for performance testing nor is it recommended..." +description: "The Single Node Lab system is designed to support multiple functions in your environment including development testing staging and other non production roles" --- -The Single Node Lab system is designed to support multiple functions in your environment, including development, testing staging, and other non-production roles. The system deploys to a single server supporting all Momentum functionality. The Single Node Lab System should not be used for performance testing, nor is it recommended for production use due to its lack of redundancy. +The Single Node Lab system is designed to support multiple functions in your environment, including development, testing, staging, and other non-production roles. The system deploys to a single server supporting all Momentum functionality. The Single Node Lab System should not be used for performance testing, nor is it recommended for production use due to its lack of redundancy. - +### Hardware Specifications - -| Resource | Minimum Specification | +| Resource | Specification | | --- | --- | -| CPU | 8 x 2.5 GHz Cores (Min Speed) | -| Memory | 32 GB RAM | -| Network Interface | 1 GB NIC | +| CPU Cores | 8 | +| CPU Speed | 3.2 GHz (min. 2.5 GHz) | +| Memory | 32 GiB (min. 16 GiB) RAM | +| Network Interface | 1 Gbps NIC | - +--- +> __TIP:__ If running in cloud environments, CPU-optimized instances are recommended over general-purpose and memory-optimized instances. +### Storage Configuration -| Array | Configuration | Mount Points and Notes | +| Array | Mount Points | Configuration | | --- | --- | --- | -| All Storage | 4 x 150 GB 15k RPM HDD |   | -| Message Spools | 2 x 150 GB in RAID1 | - -/var/spool/ecelerity - -Note: This array should be dedicated to the spools. - - | -| OS, App Binaries, Logs, Platform DB, Analytics DB | 2 x 150 GB in RAID1 | +| All Storage |   | 4 x 150 GiB 15k RPM HDD | +| Message Spools* | `/var/spool/ecelerity` | 2 x 150 GiB in RAID1 | +| OS
App Binaries
Logs
Platform DB
Analytics DB | `/` (root)
`/opt/msys`
`/var/log/ecelerity`
`/var/db/cassandra`
`/var/db/vertica` | 2 x 150 GiB in RAID1 | -* OS - / (root) +(*) _This array should be dedicated to the spools._ -* Logs - /var/log/ecelerity +### Reference Measurements -* App Binaries - /opt./msys +With the hardware specifications above, a reference system is able to sustain an ESMTP injection rate of: -* Platform DB - /var/db/cassandra +- 1.8 M messages/hour +- 100 kiB each message -* Analytics DB - /var/db/vertica +with: - | \ No newline at end of file +- CPU Usage: __65%__ (5-6 cores out of 8) +- Memory usage: + - Virtual: __2.2 GiB__ + - Resident: __500 MiB__ diff --git a/content/momentum/4/install-security-considerations.md b/content/momentum/4/install-security-considerations.md index 3d259b9b5..0ff213f7e 100644 --- a/content/momentum/4/install-security-considerations.md +++ b/content/momentum/4/install-security-considerations.md @@ -1,7 +1,7 @@ --- -lastUpdated: "03/26/2020" +lastUpdated: "05/21/2024" title: "Security Considerations" -description: "This section will document security issues and fixes for those issues A umask setting of 0027 in the shell startup file typically bashrc when using the bash shell will cause installation to fail because directories created by root will be inaccessible to the user ecuser With a umask setting of..." +description: "This section will document security issues and fixes for those issues" --- This section will document security issues and fixes for those issues. @@ -10,23 +10,12 @@ This section will document security issues and fixes for those issues. A umask setting of `0027` in the shell startup file, typically `~/.bashrc` when using the bash shell, will cause installation to fail because directories created by root will be inaccessible to the user `ecuser`. -With a umask setting of `0027`, when the initial configuration is being created, ecconfigd is started, but the Apache instance will not start. You will see output such as the following: - -``` -shell> CFG-07961 failed to stat -'/opt/msys/etc/installer/ecelerity.d/': Permission denied -Reconfigure failed. -Global configuration error. -``` - This is also true of any files that are created as the root user under the `/opt/msys/ecelerity/etc/conf` directory. To resolve this use a more permissive mask, for example `umask 0022`. Another option is to `chown ecuser:ecuser` all new configuration files, or make them world readable. Likewise for directories. -If you get the permissions wrong, then you will also not be able to log in to the web UI or use the **ecconfigd** command. - ### POODLE Vulnerability Fix The POODLE (Padding Oracle On Downgraded Legacy Encryption) vulnerability attacks the TLS protocol and forces clients to downgrade to the SSLv3, which has no known secure cipher suites available. This allows an attacker to read information encrypted with this version of the protocol in plain text. Another part of the POODLE attack is exploiting weaknesses in the CBC mode of operation. 
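For the umask issue described above, a minimal sketch of the two workarounds, assuming a bash login shell, the default `ecuser` account, and the standard `/opt/msys` install prefix:

```
# Option 1: relax the umask before installing, so directories created
# by root remain readable by ecuser (typically set in root's ~/.bashrc).
umask 0022

# Option 2: after the fact, hand ownership of files and directories
# created as root under the configuration tree back to ecuser.
chown -R ecuser:ecuser /opt/msys/ecelerity/etc/conf
```

Either approach avoids the failed installation; relaxing the umask is the less invasive option because it prevents the inaccessible directories from being created in the first place.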
@@ -43,4 +32,4 @@ For more information, see the [GNUTLS website](http://www.gnutls.org/security.ht To fix this vulnerability in OpenSSL, make sure you are running Momentum 4.1.0.2 or later and set the [tls_protocols](/momentum/4/config/tls-protocols) configuration option to disable SSLv3 in your `ecelerity.conf` file: -`TLS_Protocols = "+ALL:-SSLv3"` \ No newline at end of file +`TLS_Protocols = "+ALL:-SSLv3"` diff --git a/content/momentum/4/log-monitoring.md b/content/momentum/4/log-monitoring.md index ef53c61fb..5a508b6fb 100644 --- a/content/momentum/4/log-monitoring.md +++ b/content/momentum/4/log-monitoring.md @@ -1,7 +1,7 @@ --- -lastUpdated: "03/26/2020" +lastUpdated: "05/21/2024" title: "Log Monitoring" -description: "This chapter lists and describes the logs associated with Momentum 4 Table 33 1 Log Monitoring Log File Name and Location Node Type Description var log ecelerity paniclog ec Platform Error level information for ecelerity is written here var log eccluster paniclog ec Manager Error level information for eccmgr is..." +description: "This chapter lists and describes the logs associated with Momentum 4" --- This chapter lists and describes the logs associated with Momentum 4. @@ -17,7 +17,6 @@ This chapter lists and describes the logs associated with Momentum 4. | /var/log/msys-cassandra | Platform and Manager | Cassandra log | | /var/log/msys-rabbitmq/rabbit@*node*.log | Platform | Internal logging **** Note** that the content of this log must not be altered or truncated - listed here for informational purposes only. | | /var/db/vertica/msys/dblog | Analytics | Vertica log | -| /var/ecconfigd/apache/error.log | Platform | Apache log | | /var/log/msys-nodejs/*.log | Analytics | NodeJS log | | /var/log/ecelerity/gencp_* | Platform | Internal logging ** **** Note** that the content of this log must not be altered or truncated - listed here for informational purposes only. These files show if you are getting a backup of transmission requests. | -| /var/log/ecelerity/traces/* | All | These files show if Ecelerity is having a problem. | \ No newline at end of file +| /var/log/ecelerity/traces/* | All | These files show if Ecelerity is having a problem. | diff --git a/content/momentum/4/log-rotating.md b/content/momentum/4/log-rotating.md index 7dcabbb26..cd53698b1 100644 --- a/content/momentum/4/log-rotating.md +++ b/content/momentum/4/log-rotating.md @@ -1,7 +1,7 @@ --- -lastUpdated: "03/26/2020" +lastUpdated: "05/21/2024" title: "Rotating Logs ec_rotate" -description: "Momentum provides a utility script ec rotate that you can use to rotate and compress logs that Momentum writes It is recommended that you run this script daily from your system's crontab etc cron d msys ecelerity core To invoke ec rotate execute ec rotate as the root user By..." 
+description: "Momentum provides a utility script ec rotate that you can use to rotate and compress logs that Momentum writes" --- @@ -63,10 +63,7 @@ logfiles = /var/log/ecelerity/mainlog.ec \ /var/log/ecelerity/bouncelog.ec \ /var/log/ecelerity/acctlog.ec \ /var/log/ecelerity/smpplog.ec \ - /var/log/ecelerity/ecconfigd.log \ - /var/log/ecelerity/httplog.ec \ - /var/ecconfigd/apache/access.log \ - /var/ecconfigd/apache/error.log + /var/log/ecelerity/httplog.ec logdirs = /var/log/ecelerity/adaptive retention = 7 @@ -112,4 +109,4 @@ Retention period in days - \ No newline at end of file + diff --git a/content/momentum/4/modules/securecreds.md b/content/momentum/4/modules/securecreds.md index 881c7ab74..c677fc37c 100644 --- a/content/momentum/4/modules/securecreds.md +++ b/content/momentum/4/modules/securecreds.md @@ -1,9 +1,13 @@ --- -lastUpdated: "03/26/2020" +lastUpdated: "05/21/2024" title: "securecreds – Password Encryption/Credential Retrieval" -description: "The securecreds module enforces password encryption and implements credential retrieval operations when certain facilities are accessed If this module is enabled the following facilities make use of it All datasource modules ecconfigd Momentum configuration management server eccfg Momentum version control management tool Open SSL s EVP API performs the encryption..." +description: "The securecreds module enforces password encryption and implements credential retrieval operations when certain facilities are accessed" --- +| **WARNING** | +| -- | +| **`ecconfigd` and `eccfg` features were deprecated in version 4.3.1.** | + The securecreds module enforces password encryption and implements credential retrieval operations when certain facilities are accessed. If this module is enabled, the following facilities make use of it: @@ -72,4 +76,4 @@ The location of the credentials key file. The default value is `/opt/msys/etc/cr ### Warning -We strongly recommend you not change the default values. If you absolutely must change the location of these files, please create symlinks to the default locations. Also note that the credentials database and key are local to each node in a cluster. \ No newline at end of file +We strongly recommend you not change the default values. If you absolutely must change the location of these files, please create symlinks to the default locations. Also note that the credentials database and key are local to each node in a cluster. diff --git a/content/momentum/4/node-remove.md b/content/momentum/4/node-remove.md index 96bf88881..633fefc41 100644 --- a/content/momentum/4/node-remove.md +++ b/content/momentum/4/node-remove.md @@ -1,7 +1,7 @@ --- -lastUpdated: "03/26/2020" +lastUpdated: "05/21/2024" title: "Removing a Platform Node" -description: "This section describes how to remove a functional Platform node which involves removing the node from the Cassandra and Momentum clusters and making some manual configuration changes on the remaining Platform nodes and on the existing Analytics nodes These instructions apply to Momentum 4 2 1 x where x or..." 
+description: "This section describes how to remove a functional Platform node which involves removing the node from the Cassandra and Momentum clusters and making some manual configuration changes on the remaining Platform nodes and on the existing Analytics nodes" --- This section describes how to remove a functional Platform node, which involves removing the node from the Cassandra and Momentum clusters and making some manual configuration changes on the remaining Platform nodes and on the existing Analytics nodes. @@ -144,10 +144,7 @@ You can perform the following steps on any remaining Platform node. } ``` -3. Use eccfg to commit the modified configuration, substituting your own admin password if environmental variable $ADMINPASS is not defined. - - `$ /opt/msys/ecelerity/bin/eccfg commit -u admin -p $ADMINPASS -m 'Removing a platform node from the cluster'` -4. Restart ecelerity on ALL of the remaining Platform nodes. +3. Restart ecelerity on ALL of the remaining Platform nodes. `$ service ecelerity restart` @@ -280,4 +277,4 @@ Move the spool files to a functional Platform node. For more information, see [s `echo "cluster membership" | /opt/msys/ecelerity/bin/ec_console` 2. Double-check the Cassandra cluster status. - `service msys-cassandra status` \ No newline at end of file + `service msys-cassandra status` diff --git a/content/momentum/4/production-config.md b/content/momentum/4/production-config.md index 91284e7e2..9b75662c7 100644 --- a/content/momentum/4/production-config.md +++ b/content/momentum/4/production-config.md @@ -1,285 +1,163 @@ --- -lastUpdated: "03/26/2020" +lastUpdated: "05/21/2024" title: "Production Environment Configurations" -description: "This section provides hardware specifications for different target volume levels All systems are rated for use with CPU utilization at 50 in order to accommodate traffic spikes All volumes are specified with the assumption of an average message size of 100 k B The Enterprise Basic Configuration consists of three..." +description: "This section provides hardware specifications for different target volume levels" --- -This section provides hardware specifications for different target volume levels. All systems are rated for use with CPU utilization at 50% in order to accommodate traffic spikes. All volumes are specified with the assumption of an average message size of 100 kB. +This section provides hardware specifications for different target volume levels. The system deploys to a dedicated server supporting the cluster management and other servers supporting other typical Momentum functionalities of an MTA node. -### Enterprise Basic Cluster +--- +> __TIP:__ If running in cloud environments, CPU-optimized instances are recommended over general-purpose and memory-optimized instances. -The Enterprise Basic Configuration consists of three nodes running all roles with the resources specified below. The system supports the following performance ratings. +## Cluster Manager Node - +The Cluster Manager is a dedicated node that aggregates the logs of all MTAs of the cluster and optionally centralizes some data storage in a PostgreSQL server. The Cluster Manager node is not intended to process any email traffic. The hardware specifications for this specific node are: +| Resource | Specification | +| --- | --- | +| CPU Cores | 8 | +| CPU Speed | 3.2 GHz (min. 
2.5 GHz) | +| Memory | 16 GiB RAM | +| Network Interface | 1 Gbps NIC | +| Storage | 2 x 600 GiB 15k RPM HDD in RAID1 | -| Node Capacity | +## MTA Nodes -Cluster Capacity +The MTA nodes are the workhorses of the Momentum cluster. The following topologies are rated for use with CPU utilization at 50% in order to accommodate traffic spikes. All volumes are specified with the assumption of an average message size of 100 kiB. -(2 Nodes Operational) +--- +> __NOTE:__ The Cluster Manager node is not counted in the following configurations. +--- +> __TIP:__ With more CPU cores than listed in the configurations below, higher performance ratings can be achieved with the [Supercharger](/momentum/4/licensed-features-supercharger) feature, i.e., by configuring [multiple event loops](/momentum/4/multi-event-loops). - | +### Enterprise Basic -Peak Cluster Capacity +The Enterprise Basic configuration consists of three nodes running all MTA roles with the resources specified below. -(3 Nodes Operational) +#### Performance Ratings - | +| MTA Node Capacity | Cluster Capacity
(2 Nodes Operational) | Peak Cluster Capacity
(3 Nodes Operational) | | --- | --- | --- | -| 500,000 Msg/hr | 1 M Msg/hr | 1.5 M Msg/hr | - - +| 1.5 M msgs/hr | 3 M msgs/hr | 4.5 M msgs/hr | +#### Hardware Specifications -| Resource | Minimum Specification | +| Resource | Specification | | --- | --- | -| CPU | 8 x 2.5 GHz Cores (Min Speed) | -| Memory | 32 GB RAM | -| Network Interface | 1 GB NIC | +| CPU Cores | 8 | +| CPU Speed | 3.2 GHz (min. 2.5 GHz) | +| Memory | 32 GiB (min. 16 GiB) RAM | +| Network Interface | 1 Gbps NIC | - +#### Storage Configuration - -| Array | Configuration | Mount Points and Notes | +| Array | Mount Points | Configuration | | --- | --- | --- | -| All Storage | 6 x 300 GB 15k RPM HDD |   | -| Message Spools | 2 x 300 GB in RAID1 | - -/var/spool/ecelerity - -Note: This array should be dedicated to the spools. - - | -| OS, App Binaries, Logs, Platform DB, Analytics DB | 2 x 300 GB in RAID1 | - -* OS - / (root) - -* Logs - /var/log/ecelerity - -* App Binaries - /opt./msys - -* Platform DB - /var/db/cassandra - -* Analytics DB - /var/db/vertica - - | - -### Enterprise Standard Cluster - -The Enterprise Standard Configuration consists of three nodes running all roles with the resources specified below. The system supports the following performance ratings. - - - - -| Node Capacity | +| All Storage |   | 6 x 300 GiB 15k RPM HDD | +| Message Spools* | `/var/spool/ecelerity` | 2 x 300 GiB in RAID1 | +| OS
App Binaries
Logs
Platform DB
Analytics DB | `/` (root)
`/opt/msys`
`/var/log/ecelerity`
`/var/db/cassandra`
`/var/db/vertica` | 2 x 300 GiB in RAID1 | -Cluster Capacity +(*) _This array should be dedicated to the spools._ -(2 Nodes Operational) +### Enterprise Standard - | +The Enterprise Standard configuration consists of three nodes running all MTA roles with the resources specified below. -Peak Cluster Capacity +#### Performance Ratings -(3 Nodes Operational) - - | +| MTA Node Capacity | Cluster Capacity
(2 Nodes Operational) | Peak Cluster Capacity
(3 Nodes Operational) | | --- | --- | --- | -| 1 M Msg/hr | 2 M Msg/hr | 3 M Msg/hr | - - +| 3 M msgs/hr | 6 M msgs/hr | 9 M msgs/hr | +#### Hardware Specifications -| Resource | Minimum Specification | +| Resource | Specification | | --- | --- | -| CPU | 16 x 2.5 GHz Cores (Min Speed) | -| Memory | 64 GB RAM | -| Network Interface | 1 GB NIC | - - +| CPU Cores | 16 | +| CPU Speed | 3.2 GHz (min. 2.5 GHz) | +| Memory | 64 GiB (min. 32 GiB) RAM | +| Network Interface | 1 Gbps NIC | +#### Storage Configuration -| Array | Configuration | Mount Points and Notes | +| Array | Mount Points | Configuration | | --- | --- | --- | -| All Storage | 8 x 300 GB 15k RPM HDD |   | -| Message Spools | 4 x 300 GB in RAID10 | - -/var/spool/ecelerity - -Note: This array should be dedicated to the spools. - - | -| OS, App Binaries, Logs, Platform DB | 2 x 300 GB in RAID1 | - -* OS - / (root) - -* Logs - /var/log/ecelerity - -* App Binaries - /opt./msys - -* Platform DB - /var/db/cassandra - - | -| Analytics DB | 2 x 300 GB in RAID1 | - -Analytics DB - /var/db/vertica - -Note: This array should be dedicated to the Analytics DB. - - | - -### Enterprise Plus Cluster - -The Enterprise Plus Configuration consists of three nodes running all roles with the resources specified below. The system supports the following performance ratings. - - - - -| Node Capacity | +| All Storage |   | 8 x 300 GiB 15k RPM HDD | +| Message Spools* | `/var/spool/ecelerity` | 4 x 300 GiB in RAID10 | +| OS
App Binaries
Logs
Platform DB | `/` (root)
`/opt/msys`
`/var/log/ecelerity`
`/var/db/cassandra` | 2 x 300 GiB in RAID1 | +| Analytics DB* | `/var/db/vertica` | 2 x 300 GiB in RAID1 | -Cluster Capacity +(*) _These arrays should be dedicated._ -(2 Nodes Operational) +### Enterprise Plus - | +The Enterprise Plus configuration consists of three nodes running all MTA roles with the resources specified below. -Peak Cluster Capacity +#### Performance Ratings -(3 Nodes Operational) - - | +| MTA Node Capacity | Cluster Capacity
(2 Nodes Operational) | Peak Cluster Capacity
(3 Nodes Operational) | | --- | --- | --- | -| 1.5 M Msg/hr | 3 M Msg/hr | 4.5 M Msg/hr | - - +| 6 M msgs/hr | 12 M msgs/hr | 18 M msgs/hr | +#### Hardware Specifications -| Resource | Minimum Specification | +| Resource | Specification | | --- | --- | -| CPU | 20 x 2.5 GHz Cores (Min Speed) | -| Memory | 64 GB RAM | -| Network Interface | 1 GB NIC | - - +| CPU Cores | 32 | +| CPU Speed | 3.2 GHz (min. 2.5 GHz) | +| Memory | 64 GiB RAM | +| Network Interface | 1 Gbps NIC | +#### Storage Configuration -| Array | Configuration | Mount Points and Notes | +| Array | Mount Points | Configuration | | --- | --- | --- | -| All Storage | 8 x 600 GB 15k RPM HDD |   | -| Message Spools | 4 x 600 GB in RAID10 | - -/var/spool/ecelerity - -Note: This array should be dedicated to the spools. - - | -| OS, App Binaries, Logs, Platform DB | 2 x 600 GB in RAID1 | - -* OS - / (root) - -* Logs - /var/log/ecelerity - -* App Binaries - /opt./msys - -* Platform DB - /var/db/cassandra - - | -| Analytics DB | 2 x 600 GB in RAID1 | - -Analytics DB - /var/db/vertica - -Note: This array should be dedicated to the Analytics DB. - - | - -### Enterprise Scaling Cluster - -The Enterprise Scaling Configuration consists of both an Analytics Cluster and a Platform Cluster. Because large volume deployments require more resources for sending than for analytics, Message Systems recommends separating the Platform and Analytics roles to separate clusters. This configuration allows you to scale the Platform cluster independent of the analytics cluster. The baseline configuration consists of a three-node Analytics Cluster and a three-node Platform Cluster. You may scale sending capacity by incrementally adding Platform nodes to the cluster as needed. - -The baseline system supports the following performance ratings. - - - - -| +| All Storage |   | 8 x 600 GiB 15k RPM HDD | +| Message Spools* | `/var/spool/ecelerity` | 4 x 600 GiB in RAID10 | +| OS
App Binaries
Logs
Platform DB | `/` (root)
`/opt/msys`
`/var/log/ecelerity`
`/var/db/cassandra` | 2 x 600 GiB in RAID1 | +| Analytics DB* | `/var/db/vertica` | 2 x 600 GiB in RAID1 | -Baseline Cluster Capacity +(*) _These arrays should be dedicated._ -(2 Nodes Operational) +## Enterprise Scaling Cluster - | +The Enterprise Scaling configuration consists of both an Analytics Cluster and a Platform Cluster. Because large volume deployments require more resources for sending than for analytics, Message Systems recommends separating the Platform and Analytics roles to separate clusters. This configuration allows you to scale the Platform cluster independent of the analytics cluster. -Baseline Peak Cluster Capacity +The baseline configuration consists of a three-node Analytics Cluster and a three-node Platform Cluster. You may scale sending capacity by incrementally adding Platform nodes to the cluster as needed. -(3 Nodes Operational) +### Baseline Performance Ratings - | Incremental Platform Node Capacity | +| Baseline Cluster Capacity
(2 Nodes Operational) | Baseline Peak Cluster Capacity
(3 Nodes Operational) | Incremental Platform Node Capacity | | --- | --- | --- | -| 3 M Msg/hr | 4.5 M Msg/hr | 1.5 M Msg/hr | +| 12 M msgs/hr | 18 M msgs/hr | 6 M msgs/hr | - +### Hardware Specifications - -| Resource | Minimum Specification | +| Resource | Specification | | --- | --- | -| CPU | 20 x 2.5 GHz Cores (Min Speed) | -| Memory | 64 GB RAM | -| Network Interface | 1 GB NIC | +| CPU Cores | 32 | +| CPU Speed | 3.2 GHz (min. 2.5 GHz) | +| Memory | 64 GiB RAM | +| Network Interface | 1 Gbps NIC | - +### Storage Configuration +#### Platform Node -| Array | Configuration | Mount Points and Notes | +| Array | Mount Points | Configuration | | --- | --- | --- | -| All Storage | 8 x 600 GB 15k RPM HDD |   | -| Message Spools | 4 x 600 GB in RAID10 | - -/var/spool/ecelerity - -Note: This array should be dedicated to the spools. - - | -| OS, App Binaries, Logs, Platform DB | 2 x 600 GB in RAID1 | - -* OS - / (root) +| All Storage |   | 6 x 600 GiB 15k RPM HDD | +| Message Spools* | `/var/spool/ecelerity` | 4 x 600 GiB in RAID10 | +| OS
App Binaries
Logs
Platform DB | `/` (root)
`/opt/msys`
`/var/log/ecelerity`
`/var/db/cassandra` | 2 x 600 GiB in RAID1 | -* Logs - /var/log/ecelerity +(*) _This array should be dedicated to the spools._ -* App Binaries - /opt./msys +#### Analytics Node -* Platform DB - /var/db/cassandra - - | - - - - -| Resource | Minimum Specification | -| --- | --- | -| CPU | 20 x 2.5 GHz Cores (Min Speed) | -| Memory | 64 GB RAM | -| Network Interface | 1 GB NIC | - - - - -| Array | Configuration | Mount Points and Notes | +| Array | Mount Points | Configuration | | --- | --- | --- | -| All Storage | 4 x 600 GB 15k RPM HDD |   | -| OS, App Binaries, Logs | 2 x 600 GB in RAID1 | - -* OS - / (root) - -* Logs - /var/log/ecelerity - -* App Binaries - /opt./msys - - | -| Analytics DB | 2 x 600 GB in RAID1 | - -Analytics DB - /var/db/vertica - -Note: This array should be dedicated to the Analytics DB. +| All Storage |   | 4 x 600 GiB 15k RPM HDD | +| OS
App Binaries
Logs | `/` (root)
`/opt/msys`
`/var/log/ecelerity` | 2 x 600 GiB in RAID1 | +| Analytics DB* | `/var/db/vertica` | 2 x 600 GiB in RAID1 | - | \ No newline at end of file +(*) _This array should be dedicated to the Analytics DB._ diff --git a/content/momentum/4/upgrade-two-tier-configuration-rolling.md b/content/momentum/4/upgrade-two-tier-configuration-rolling.md index b94939ddb..7ce2d8efd 100644 --- a/content/momentum/4/upgrade-two-tier-configuration-rolling.md +++ b/content/momentum/4/upgrade-two-tier-configuration-rolling.md @@ -17,7 +17,7 @@ Instructions for upgrading a combined node configuration are included as additio ### Tip -In a combined node configuration, Analytics and Platform nodes are the same nodes, which means instructions will be done on all nodes unless specified otherwise. In addition, primary nodes (i.e., the first Platform node and the first Analytics node) are the same node, and this node runs the `ecconfigd` configuration manager process. +In a combined node configuration, Analytics and Platform nodes are the same nodes, which means instructions will be done on all nodes unless specified otherwise. ### Note @@ -55,4 +55,4 @@ An overview of the rolling upgrade process is shown below. ### Note -Be sure to read the [Release Notes](https://support.messagesystems.com/start.php) for the version of Momentum that you are installing. \ No newline at end of file +Be sure to read the [Release Notes](https://support.messagesystems.com/start.php) for the version of Momentum that you are installing. diff --git a/content/momentum/4/upgrade-two-tier-preparation-ecelerity-rolling.md b/content/momentum/4/upgrade-two-tier-preparation-ecelerity-rolling.md index bfd047554..aa533ddc0 100644 --- a/content/momentum/4/upgrade-two-tier-preparation-ecelerity-rolling.md +++ b/content/momentum/4/upgrade-two-tier-preparation-ecelerity-rolling.md @@ -1,7 +1,7 @@ --- -lastUpdated: "03/26/2020" +lastUpdated: "05/21/2024" title: "Upgrade Ecelerity and the Cassandra Schema on the First Platform Node" -description: "The following steps connect Ecelerity and Cassandra These steps should be done only on the first Platform node Be sure you perform these steps in the order shown below Upgrade the RPM packages with yum this includes all Momentum and third party packages Set up Cassandra Momentum compatibility i e..." +description: "The following steps connect Ecelerity and Cassandra These steps should be done only on the first Platform node" --- 1. The following steps connect Ecelerity and Cassandra. These steps should be done only on the **first Platform node** . Be sure you perform these steps in the order shown below. @@ -12,10 +12,7 @@ description: "The following steps connect Ecelerity and Cassandra These steps sh 2. Set up Cassandra-Momentum compatibility (i.e., the Cassandra schema to be used) (**first Platform node only** ). `/opt/msys/ecelerity/bin/cassandra_momo_setup.sh --multinode /opt/msys/ecelerity/etc` - 3. Start ecconfigd (**first Platform node only** ). - - `service ecconfigd start` - 4. Start Momentum. + 3. Start Momentum. 
`service ecelerity start` ### Note @@ -47,4 +44,4 @@ description: "The following steps connect Ecelerity and Cassandra These steps sh $CQLSH -k authentication -f $UPG/V2015.06.16_00.00.00__add_saml_column.cql 2>&1 >> cassandra_schema.log $CQLSH -k authentication -f $UPG/V2015.06.17_00.00.00__add_valid_ip_column.cql 2>&1 >> cassandra_schema.log $CQLSH -k authentication -f $UPG/V2015.06.22_00.00.00__add_last_login_column.cql 2>&1 >> cassandra_schema.log - ``` \ No newline at end of file + ``` diff --git a/content/momentum/navigation.yml b/content/momentum/navigation.yml index ad8832b87..efc5a5ba8 100644 --- a/content/momentum/navigation.yml +++ b/content/momentum/navigation.yml @@ -49,8 +49,6 @@ title: Before You Begin - link: '/momentum/4/before-you-begin#byb.msg.gen.license' title: Momentum License - - link: /momentum/4/byb-os - title: Operating System - link: /momentum/4/byb-redundancy title: Redundancy - link: /momentum/4/byb-load-balancing @@ -210,6 +208,8 @@ title: Configuration Files - link: /momentum/4/conf-options title: Configuration Options + - link: /momentum/4/conf-performance-tips + title: Performance Tips - link: /momentum/4/4-ecelerity-conf-fallback title: Configuration Scopes and Fallback - link: /momentum/4/listeners @@ -2007,4 +2007,4 @@ - link: /momentum/changelog/legacy/message-central title: Message Central Legacy Changelog - link: /momentum/changelog/legacy/message-scope - title: Message Scope Legacy Changelog \ No newline at end of file + title: Message Scope Legacy Changelog diff --git a/cypress/content/momentum/1st-level/index.md b/cypress/content/momentum/1st-level/index.md index 501b2bb6a..340044c66 100644 --- a/cypress/content/momentum/1st-level/index.md +++ b/cypress/content/momentum/1st-level/index.md @@ -43,5 +43,5 @@ The preceding line would appear unbroken in a log file but, if left as is, it wo Where possible, Unix command-line commands are broken using the ‘`\`’ character, making it possible to copy and paste commands. For example: -/opt/msys/ecelerity/bin/eccfg bootstrap --clustername *`name`* --username=admin \ - --password=*`admin cluster_host`* \ No newline at end of file +sudo -u ecuser \ + /opt/msys/ecelerity/bin/ec_show -m *`msg-id`*
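Tying the upgrade changes above together, a minimal sketch of the revised sequence on the first Platform node now that the `ecconfigd` step is gone; the bare `yum upgrade` is an assumption here, and in practice you would limit it to the Momentum and third-party packages named in the release notes:

```
# Run these steps on the first Platform node only, in this order.

# 1. Upgrade the RPM packages (restrict to the Momentum and third-party
#    packages listed in the release notes for your target version).
yum upgrade

# 2. Set up Cassandra-Momentum compatibility (the Cassandra schema to be used).
/opt/msys/ecelerity/bin/cassandra_momo_setup.sh --multinode /opt/msys/ecelerity/etc

# 3. Start Momentum directly; starting ecconfigd beforehand is no longer required.
service ecelerity start
```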