diff --git a/.github/workflows/labeller.yml b/.github/workflows/labeller.yml index 818cde1a7..b9d2f784d 100644 --- a/.github/workflows/labeller.yml +++ b/.github/workflows/labeller.yml @@ -6,9 +6,10 @@ on: jobs: triage: runs-on: ubuntu-latest - runs: - using: 'node16' steps: + - uses: actions/setup-node@v4 + with: + node-version: '16.x' - uses: actions/labeler@v2 with: repo-token: '${{ secrets.GITHUB_TOKEN }}' diff --git a/content/momentum/4/4-cluster-config-failover.md b/content/momentum/4/4-cluster-config-failover.md index 157103a97..8f8e3e9c3 100644 --- a/content/momentum/4/4-cluster-config-failover.md +++ b/content/momentum/4/4-cluster-config-failover.md @@ -1,15 +1,13 @@ --- -lastUpdated: "03/26/2020" +lastUpdated: "05/21/2024" title: "Configuring Momentum for High Availability and Failover" -description: "Momentum's architecture supports fault tolerant configurations This means that you can operate in an environment that is readily configured to support failing over automatically Components that support high availability and fault tolerance include the following ecconfigd Dura VIP™ bindings Centralized logging and Aggregration Per node data Per node logs can..." +description: "Momentum's architecture supports fault tolerant configurations This means that you can operate in an environment that is readily configured to support failing over automatically" --- Momentum's architecture supports fault-tolerant configurations. This means that you can operate in an environment that is readily configured to support failing over automatically. 
Components that support high availability and fault tolerance include the following: -* [`ecconfigd`](/momentum/4/conf-overview#conf.ecconfigd) - * [DuraVIP™ bindings](/momentum/4/4-cluster-config-duravip) * [Centralized logging and Aggregration](/momentum/4/log-aggregation) @@ -22,4 +20,4 @@ Components that support high availability and fault tolerance include the follow * [cidr_server](/momentum/4/4-cluster-cidr-server) and [as_logger](/momentum/4/modules/as-logger) - The **cidr_server** queries the data created by an as_logger module and displays the result in the cluster console. The **cidr_server** and as_logger can be configured to log data to a SAN. Locking semantics must be checked. \ No newline at end of file + The **cidr_server** queries the data created by an as_logger module and displays the result in the cluster console. The **cidr_server** and as_logger can be configured to log data to a SAN. Locking semantics must be checked. diff --git a/content/momentum/4/4-cluster.md b/content/momentum/4/4-cluster.md index 16a22809f..aec0f61cd 100644 --- a/content/momentum/4/4-cluster.md +++ b/content/momentum/4/4-cluster.md @@ -1,7 +1,7 @@ --- -lastUpdated: "03/26/2020" +lastUpdated: "05/21/2024" title: "Cluster-specific Configuration" -description: "Clustering is based on the concept of having a cluster of machines that communicate using a group communication messaging bus A cluster is comprised of at least one Manager node and one or more MTA nodes The Manager in the cluster will be your central point of management for the..." 
+description: "Clustering is based on the concept of having a cluster of machines that communicate using a group communication messaging bus A cluster is comprised of at least one Manager node and one or more MTA nodes" --- @@ -9,10 +9,6 @@ Clustering is based on the concept of having a cluster of machines that communic The clustering capabilities of Momentum enable the following features: -* Centralized management of configuration for multiple MTA nodes - -* Replicated, redundant, configuration repository with revision control - * Log aggregation pulling log files from MTA nodes to centralized location(s) on the network * Replication of a variety of real-time metrics to allow cluster-wide coordination for inbound and outbound traffic shaping @@ -47,49 +43,6 @@ For general information about Momentum's configuration files, see [“Configurat For additional details about editing your configuration files, see [“Changing Configuration Files”](/momentum/4/conf-overview#conf.manual.changes). -### Cluster-specific Configuration Management - -Momentum configuration files are maintained in a version control repository and exported to your cluster network via the [`ecconfigd`](/momentum/4/conf-overview#conf.ecconfigd) service running on the cluster manager. This daemon is auto-configuring and will replicate your configuration repositories to all participating cluster nodes. On the cluster manager, the repository resides in the `/var/ecconfigd/repo` directory. Nodes pull their configuration from this repository and store their working copy in the `/opt/msys/ecelerity/etc/conf` directory. - -The default installation has a cron job deployed on the nodes that uses [**eccfg pull**](/momentum/4/executable/eccfg) to update the local configuration from the `ecconfigd` service. **eccfg** is built in such a way that these updates are applied atomically to the configuration checkout directory. 
- -The tools that operate on the configuration checkout directory try very hard to avoid leaving it in a broken state. Every minute, each node will attempt to update its directory to match the repository. If you have made local changes to the directory, the update will attempt to merge updates from the repository with your changes. The update process will only modify the directory if the complete revision was able to be pulled. In other words, it will not modify the configuration checkout directory if doing so causes a conflict and will never leave a directory with a half-applied update. - -In some situations, it is possible to put the configuration replication into a conflicted state. For instance, in a two node cluster, if one of the nodes is unplugged from the network while configuration changes are made and committed on both nodes, when the network cable is re-connected, the configuration will attempt to sync but will notice that conflicting changes have been made. If conflicting changes were found, `ecconfigd` will warn you and provide you with instructions on how to resolve the conflict. You may need to manually resolve the conflicting configuration files. For instructions on changing configuration files, see [“Changing Configuration Files”](/momentum/4/conf-overview#conf.manual.changes). - -** 16.1.1.1. Repository Working Copy for Cluster** - -On the client side of the configuration management, each node has a working copy checkout of the repository located at `/opt/msys/ecelerity/etc/conf`. The following are descriptions of the subdirectories in a cluster configuration: - -* `global` – location for sharing cluster-wide configuration information between nodes - - Every node has access to this subdirectory. - -* `default` – contains your default configuration files, which are shared across multiple nodes - - `default` is the name of the default subcluster and represents the default configuration for nodes in that subcluster. 
- -* *`nodename`* – contains node-specific configuration files - - When you create a node-specific configuration file, a directory bearing the node name and a node-specific `ecelerity.conf` file are created on *all* nodes in the cluster. - - When nodes use common values for a number of options, if you wish you can put these options in a configuration file stored in the `global` directory rather than repeating them in each /opt/msys/ecelerity/etc/conf/*`nodename`*/ecelerity.conf file. However, you must add include statements to the /opt/msys/ecelerity/etc/conf/*`nodename`*/ecelerity.conf file on each node. - -* *`peer`* – any files shared by multiple nodes in a single subcluster - -By default the order is: - -``` -/opt/msys/ecelerity/etc -/opt/msys/ecelerity/etc/conf/global -/opt/msys/ecelerity/etc/conf/{NODENAME} -/opt/msys/ecelerity/etc/conf/default -``` - -Directories are separated by the standard path separator. - -If you wish to change the search order, set the environment variable `EC_CONF_SEARCH_PATH`. For more information about `EC_CONF_SEARCH_PATH`, see [*Configuring the Environment File*](/momentum/4/environment-file) . - ### Using Node-local `include` Files If you have any configurations specific to a particular node, fallback values for configuration options in that node-local configuration file *cannot* be included via the `/opt/msys/ecelerity/etc/conf/ecelerity.conf` file. For an included file, the parent file's path is added to the search path, so if a file is included from `/opt/msys/ecelerity/etc/conf/default/ecelerity.conf`, the search path becomes: @@ -109,4 +62,4 @@ Set `OPTION` in a `node-local.conf` file in all the /opt/msys/ecelerity/etc/conf Add an "include node-local.conf" statement to `/opt/msys/ecelerity/etc/default/ecelerity.conf`. 
-If there are major differences between node configurations, it is probably simpler to create a separate configuration file for each node as described in [“Repository Working Copy for Cluster”](/momentum/4/4-cluster#cluster.config_files.mgmt.cluster). \ No newline at end of file +If there are major differences between node configurations, it is probably simpler to create a separate configuration file for each node. diff --git a/content/momentum/4/4-implementing-policy-scriptlets.md b/content/momentum/4/4-implementing-policy-scriptlets.md index 65b91ab1a..1ae394734 100644 --- a/content/momentum/4/4-implementing-policy-scriptlets.md +++ b/content/momentum/4/4-implementing-policy-scriptlets.md @@ -1,7 +1,7 @@ --- -lastUpdated: "03/26/2020" +lastUpdated: "05/21/2024" title: "Policy Scriptlets" -description: "Lua scripts provide you with the capability to express the logic behind your policy Aside from being very convenient policy scripts can be reloaded on the fly allowing real time adjustment of policy without interrupting service the Momentum implementation has extremely low overhead and tightly integrates with the event based..." +description: "Lua scripts provide you with the capability to express the logic behind your policy The Momentum implementation has extremely low overhead and tightly integrates with the event based architecture" --- Lua scripts provide you with the capability to express the logic behind your policy. Aside from being very convenient (policy scripts can be reloaded on the fly, allowing real-time adjustment of policy without interrupting service), the Momentum implementation has extremely low overhead and tightly integrates with the event-based architecture, being able to suspend processing until asynchronous operations (such as DNS resolution, or database queries) complete.
Note that variables used in a policy script are scoped locally and only persist in the particular policy script in which it is defined. Use the [validation context](/momentum/4/4-policy#policy.validation) to persist data over different policy phases and policy scripts. @@ -106,23 +106,11 @@ In the `default_policy.conf` file, you should also enable the datasource(s) suit ### Creating Policy Scripts -Following best practices when creating policy scripts is important, especially in a cluster environment when scripts are used on more than one node. Scripts should take advantage of Momentum's built-in revision control and be added to the repository using the [eccfg](/momentum/4/executable/eccfg) command. +Following best practices when creating policy scripts is important, especially in a cluster environment when scripts are used on more than one node. To create a policy script, perform the following: -1. Take steps to avoid conflicts. - - When working with files that are under revision control, it is important to take steps to avoid conflicts with changes made elsewhere in the system and to be able to track changes. For this reason, perform the following actions before creating any policy scripts: - - * Provision a user account for each admin user, so that the history in the repository is meaningful. - - * Ensure that you have the latest updates on the node where you are creating the scripts by running **`/opt/msys/ecelerity/bin/eccfg pull`** . - - ### Note - - Pay special attention to the instructions for using the **pull** command—if the configuration is updated your current directory may be invalidated. For more information, see [eccfg](/momentum/4/executable/eccfg). - -2. Create a directory for your script. +1. Create a directory for your script. Scripts should be created in a directory that is under revision control. 
Create a directory for your scripts in the working copy of the repository on a node where you intend to run the script: @@ -130,7 +118,7 @@ To create a policy script, perform the following: * If your scripts apply to only one node, create a node-specific directory. -3. Write your script. +2. Write your script. All scripts must @@ -175,7 +163,7 @@ To create a policy script, perform the following: These messages indicate a scriptlet error and give both the name of the script and the callout that failed. -4. Update your configuration to properly reference your script. +3. Update your configuration to properly reference your script. After writing a script and saving it to the repository, you must include it in the [`scriptlet`](/momentum/4/modules/scriptlet) module using a `script` stanza in your `ecelerity.conf` file. @@ -219,7 +207,7 @@ To create a policy script, perform the following: For additional details about editing your configuration files, see [“Changing Configuration Files”](/momentum/4/conf-overview#conf.manual.changes). -5. Check the validity of your script. +4. Check the validity of your script. Since a malformed configuration file will not reload, using **config reload** is one way of validating your scriptlet syntax. After your configuration has been changed, issue the command: @@ -244,7 +232,7 @@ To create a policy script, perform the following: However, please note that Message Systems does not provide support for the use of any third party tools included or referenced by name within our products or product documentation; support is the sole responsibility of the third party provider. -6. Debug your script. +5. Debug your script. Successfully reloading the configuration file does not guarantee that your script will run. Your script may be syntactically correct but have semantic errors. As always, you should test the functionality of scripts before implementing them in a production environment. 
@@ -288,24 +276,6 @@ To create a policy script, perform the following: note="No email received at this address", code="550"} ``` -7. Commit your changes. - - Once you are satisfied that your scripts function correctly, commit your changes. From the directory above your newly created directory, use **eccfg** to add both the directory and the script to the repository: - - * If you are adding a new script, issue the command - - **eccfg commit ––username *`admin_user`* ––password *`passwd`* ––add-all --message *`message here`*** . - - * If you are editing a script, you need not use the `––add-all` option. - -8. Repply your changes, if required. - - In all cases, edits made to the local configuration will need to be manually applied to the node via **config reload** . The **eccfg commit** command will not do it for you. If you have not reloaded your configuration, issue the console command: - - **`/opt/msys/ecelerity/bin/ec_console /tmp/2025 config reload`** - - If your changes affect more than one node, each node will check for an updated configuration each minute and automatically check out your changes and issue a **config reload** . - ### Examples This section includes examples of using policy scripts. @@ -336,4 +306,4 @@ Use `msg.priority` to read the priority of a message. ### Note -It is important not to overuse the priority setting. High priority messages should be reserved for messages that need to go out immediately, before other messages. Keeping high priority messages to a low percentage of the total message volume is important so the high priority messages do not cause delays for normal priority messages. A common use case for high priority messages is sending out password resets in the midst of a major mail campaign. \ No newline at end of file +It is important not to overuse the priority setting. High priority messages should be reserved for messages that need to go out immediately, before other messages. 
Keeping high priority messages to a low percentage of the total message volume is important so the high priority messages do not cause delays for normal priority messages. A common use case for high priority messages is sending out password resets in the midst of a major mail campaign. diff --git a/content/momentum/4/4-preface.md b/content/momentum/4/4-preface.md index 8a114e1ad..50e7313f6 100644 --- a/content/momentum/4/4-preface.md +++ b/content/momentum/4/4-preface.md @@ -1,7 +1,7 @@ --- -lastUpdated: "03/27/2020" +lastUpdated: "05/21/2024" title: "Preface" -description: "Certain typographical conventions are used in this document Take a moment to familiarize yourself with the following examples Text in this style indicates executable programs such as ecelerity Text in this style is used when referring to file names For example The ecelerity conf file is used to configure Momentum..." +description: "Certain typographical conventions are used in this document Take a moment to familiarize yourself with the following examples" --- ## Typographical Conventions Used in This Document @@ -37,5 +37,5 @@ The preceding line would appear unbroken in a log file but, if left as is, it wo Where possible, Unix command-line commands are broken using the ‘`\`’ character, making it possible to copy and paste commands. 
For example: -/opt/msys/ecelerity/bin/eccfg bootstrap --clustername *`name`* --username=admin \ - --password=*`admin cluster_host`* +sudo -u ecuser \ + /opt/msys/ecelerity/bin/ec_show -m *`msg-id`* diff --git a/content/momentum/4/add-remove-platform-nodes.md b/content/momentum/4/add-remove-platform-nodes.md index 73ffe2bc8..6991bc31c 100644 --- a/content/momentum/4/add-remove-platform-nodes.md +++ b/content/momentum/4/add-remove-platform-nodes.md @@ -1,7 +1,7 @@ --- -lastUpdated: "03/26/2020" +lastUpdated: "05/21/2024" title: "Adding and Removing Platform Nodes" -description: "This chapter describes how to add and remove a Platform node MTA Cassandra to and from an existing Momentum 4 2 cluster This section describes how to add a Platform node which involves installing the new node then making some manual configuration changes on the new node and on the..." +description: "This chapter describes how to add and remove a Platform node MTA Cassandra to and from an existing Momentum 4 cluster" --- @@ -73,13 +73,10 @@ These instructions apply to Momentum 4.2.1.*`x`*, where `x` > or = `11` } ``` -2. Use eccfg to commit the modified configuration, substituting your own admin password if environmental variable $ADMINPASS is not defined. - - `/opt/msys/ecelerity/bin/eccfg commit -u admin -p $ADMINPASS -m 'Add new Platform node to ecelerity cluster'` -3. Restart affected services. +2. Restart affected services. `service ecelerity restart` -4. Update the nginx configuration files. +3. Update the nginx configuration files. 1. Update the `click_proxy_upstream.conf` nginx configuration file by adding a "server" line for the new Platform host. @@ -132,12 +129,7 @@ These instructions apply to Momentum 4.2.1.*`x`*, where `x` > or = `11` 2. Install the meta package `msys-role-platform`. `yum install -y --config momentum.repo --enablerepo momentum msys-role-platform` -3. 
Bootstrap the Ecelerity configuration from the first server, substituting your own admin password if environmental variable $ADMINPASS is not defined. . - - chown -R ecuser:ecuser /opt/msys/ecelerity/etc/ - cd /opt/msys/ecelerity/etc/ - ../bin/eccfg bootstrap --clustername default -u admin -p $ADMINPASS *`FIRST.NODE.FQDN`* -4. Copy the existing configuration files from the first Platform node to the new node, substituting or setting the new node's hostname for environmental variable $NEWNODE. +3. Copy the existing configuration files from the first Platform node to the new node, substituting or setting the new node's hostname for environment variable $NEWNODE. ``` # execute this on the first Platform node @@ -154,7 +146,7 @@ These instructions apply to Momentum 4.2.1.*`x`*, where `x` > or = `11` done ``` -5. Update the `cassandra.yaml` file on the new Platform node to replace `listen_address` with the correct local IP address for the new node. +4. Update the `cassandra.yaml` file on the new Platform node to replace `listen_address` with the correct local IP address for the new node. ``` # example @@ -163,14 +155,14 @@ These instructions apply to Momentum 4.2.1.*`x`*, where `x` > or = `11` listen_address: 10.77.0.245 ``` -6. Start Cassandra on the new node. +5. Start Cassandra on the new node. `# service msys-cassandra start` ### Note Depending on the amount of existing data in your Cassandra database, this may falsely report as failed (because the init script only waits a fixed amount of time for the service to start). Perform the next step below to determine the real status. If you do not get the indicated result, submit the start service command again, and if the desired result still does not result, check logs at `/var/log/msys-cassandra/` for error messages. -7. After Cassandra starts, check that the database has been replicated (UN means Up Normal) using `service msys-cassandra status` or `/opt/msys/3rdParty/cassandra/bin/nodetool status`.
You should expect to see the new node participating in the Cassandra cluster. +6. After Cassandra starts, check that the database has been replicated (UN means Up Normal) using `service msys-cassandra status` or `/opt/msys/3rdParty/cassandra/bin/nodetool status`. You should expect to see the new node participating in the Cassandra cluster. ``` service msys-cassandra status @@ -187,7 +179,7 @@ These instructions apply to Momentum 4.2.1.*`x`*, where `x` > or = `11` UN 10.77.0.227 203.12 KB 256 27.3% 5525b410-3f3e-49ec-a176-0efa2383f3f4 rack1 ``` -8. Configure RabbitMQ on the new platform node. +7. Configure RabbitMQ on the new platform node. ``` # kill off qpidd service, which (if running) can interfere with RabbitMQ @@ -208,7 +200,7 @@ These instructions apply to Momentum 4.2.1.*`x`*, where `x` > or = `11` $RABBITMQCTL delete_user guest ``` -9. Start all remaining services on the new node. +8. Start all remaining services on the new node. ``` /etc/init.d/msys-riak start @@ -292,4 +284,4 @@ Perform the following steps on each Analytics node in your cluster. 3. On all original Platform nodes, the Cassandra database will have duplicate keys that have now been distributed to the added node. Run the following command on each Platform/Cassandra node: - `/opt/msys/3rdParty/cassandra/bin/nodetool cleanup` \ No newline at end of file + `/opt/msys/3rdParty/cassandra/bin/nodetool cleanup` diff --git a/content/momentum/4/byb-os.md b/content/momentum/4/byb-os.md deleted file mode 100644 index 59e39e27a..000000000 --- a/content/momentum/4/byb-os.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -lastUpdated: "03/26/2020" -title: "Operating System" -description: "Momentum 4 supports the latest version of the following operating systems Red Hat Enterprise Linux 6 x 86 64 Cent OS 6 Ensure that you have the correct operating system packages installed Prepare as many machines as you plan to use for Momentum 4 An optimum installation uses an odd..." 
---- - -Momentum 4 supports the latest version of the following operating systems: - -* Red Hat Enterprise Linux 6 (x86_64) - -* CentOS 6 - -Ensure that you have the correct operating system packages installed. Prepare as many machines as you plan to use for Momentum 4\. An optimum installation uses an odd number of three or more Analytics *and* three or more Platform nodes. **Note:** Do not mix operating systems. \ No newline at end of file diff --git a/content/momentum/4/conf-overview.md b/content/momentum/4/conf-overview.md index 2df947b8a..fa25722a9 100644 --- a/content/momentum/4/conf-overview.md +++ b/content/momentum/4/conf-overview.md @@ -1,7 +1,7 @@ --- -lastUpdated: "03/26/2020" +lastUpdated: "05/21/2024" title: "Configuration Overview" -description: "Momentum is an exceptionally powerful all in one email infrastructure solution As such it can be configured to provide the full range of digital messaging channels and more This chapter gives an overview of Momentum's configuration and provides the background needed to configure your system to meet your specific application..." +description: "Momentum is an exceptionally powerful all in one email infrastructure solution As such it can be configured to provide the full range of digital messaging channels and more This chapter gives an overview of Momentum's configuration and provides the background needed to configure your system to meet your specific application" --- @@ -25,8 +25,6 @@ The `ecelerity.conf` file is the master configuration file for Momentum; while o * [`msgc_server.conf`](/momentum/4/config/ref-msgc-server-conf) - Momentum cluster messaging bus configuration file -If you make changes to a configuration file, be sure to use the [Momentum Configuration Server](/momentum/4/conf-overview#conf.ecconfigd) to commit your changes. 
- ### Comments and Whitespace In common with many other Unix configuration files, Momentum's configuration files use the `#` (commonly referred to as "hash" or "pound" sign) symbol to introduce a single line comment. Whitespace is unimportant in the various configuration stanza; feel free to pad the whitespace as you see fit for maximum readability. @@ -105,43 +103,11 @@ Finally, if the scope instance containing the change was only encountered in rea Any configuration files included with the `readonly_include` directive are read-only. Any configuration files included multiple times (overall, not necessarily from the same file) are read-only. Any configuration files loaded from a URI with a scheme other than 'file://', 'persist://' are read-only. All other configuration files are considered writable. -### Configuration Management (ecconfigd) - -Both single-node and clustered installations take advantage of Momentum's revision control system for configuration files. Any configuration changes should be committed to the Momentum Configuration Server **ecconfigd**, henceforth referred to as the configuration server. On start up, the script in the `/etc/init.d` directory runs the **ecconfigd** as a service on the node designated as Manager. For details about the configuration server, see [ecconfigd](/momentum/4/executable/ecconfigd). For details about the **ecconfigd** service in a cluster configuration, see [“Cluster-specific Configuration Management”](/momentum/4/4-cluster#cluster.config_files.mgmt). - -Use **ecconfigd_ctl** to start, stop, or restart the configuration server. For details about this command, see [ecconfigd_ctl](/momentum/4/executable/ecconfigd-ctl). - -Momentum's version control management tool is **eccfg**. It is used to track and update configuration file changes. For details about using this tool, see [eccfg](/momentum/4/executable/eccfg). - -** 15.1.3.1. 
Repository Working Copy for Single Node** - -The repository working copy directories are located at `/opt/msys/ecelerity/etc/conf/`. There are a number of directories below this. What they are depends upon whether you have installed Momentum in a single-node or cluster configuration and whether you have defined any subclusters. The following are descriptions of the subdirectories in a single-node configuration: - -* `global` – This directory exists but is not used in a single-node configuration. - -* `default` – files used by a single-node configuration - -By default the order is: - -``` -/opt/msys/ecelerity/etc -/opt/msys/ecelerity/etc/conf/global -/opt/msys/ecelerity/etc/conf/default -``` - -Directories are separated by the standard path separator. - -If you wish to change the search order, set the environment variable `EC_CONF_SEARCH_PATH`. For more information about `EC_CONF_SEARCH_PATH`, see [*Configuring the Environment File*](/momentum/4/environment-file) . - -For details about the working copy of the repository in a cluster configuration, see [“Repository Working Copy for Cluster”](/momentum/4/4-cluster#cluster.config_files.mgmt.cluster). - ### Changing Configuration Files -Since the configuration files are under revision control, it is important to take steps to avoid conflicts with changes made elsewhere in the system and to be able to track changes. For this reason, perform the following actions when editing any configuration files or script files: +It is important to take steps to avoid conflicts with changes made elsewhere in the system and to be able to track changes. For this reason, perform the following actions when editing any configuration files or script files: -1. Familiarize yourself with the Momentum repository management tool [eccfg](/momentum/4/executable/eccfg). - -2. Navigate to the appropriate directory: +1. Navigate to the appropriate directory: * For a single-node configuration, navigate to `/opt/msys/ecelerity/etc/conf/default` . 
@@ -149,28 +115,18 @@ Since the configuration files are under revision control, it is important to tak * For node-specific configuration, navigate to the sub-directory on the cluster manager that is below `/opt/msys/ecelerity/etc/conf` and bears the name of the node: /opt/msys/ecelerity/etc/conf/*`nodename`*. -3. Make sure that the working copy of the repository is up-to-date by issuing the command: - - eccfg pull --username *`name`* --password *`passwd`* -4. Make the necessary changes to the configuration file using the text editor of your choice. +2. Make the necessary changes to the configuration file using the text editor of your choice. -5. Test the validity of your changes using the [validate_config](/momentum/4/executable/validate-config) script: +3. Test the validity of your changes using the [validate_config](/momentum/4/executable/validate-config) script: `/opt/msys/ecelerity/bin/validate_config` -6. Check that your changes are valid by reloading the configuration before committing it. Issue the following command: +4. Check that your changes are valid by reloading the configuration before committing it. Issue the following command: `/opt/msys/ecelerity/bin/ec_console /tmp/2025 config reload` If there are any errors, the new configuration will not load and the error message, `"Reconfigure failed"`, will be displayed. -7. Once you are satisfied with your changes, commit them using the following command: - - /opt/msys/ecelerity/bin/eccfg commit --username *`admin_user`* \ - --password *`password`* - - If you are configuring a cluster, you should allow about a minute or so for the changes to propagate to all nodes. - -8. Implement your changes. +5. Implement your changes. 
   * For a single-node configuration, open the console and issue the command:
@@ -198,9 +154,7 @@ Avoid leaving uncommitted changes pending, especially in the working copy on a n
 As discussed in [“Using the `include` and `readonly_include` Directives”](/momentum/4/conf-overview#conf.files.includes), you can split your Momentum configuration into any number of configuration files. However, if you add new configuration files you must also add them to the repository. Follow these steps:
 
-1. Familiarize yourself with the Momentum repository management tool [eccfg](/momentum/4/executable/eccfg).
-
-2. Navigate to the appropriate directory for the changes you intend to make. You will save your files to a different directory on a different node depending upon how narrowly or widely your configuration applies.
+1. Navigate to the appropriate directory for the changes you intend to make. You will save your files to a different directory on a different node depending upon how narrowly or widely your configuration applies.
 
    * For a single-node configuration, navigate to `/opt/msys/ecelerity/etc/conf/default`.
 
@@ -208,30 +162,20 @@ As discussed in [“Using the `include` and `readonly_include` Directives”](/m
 
    * For node-specific configuration, create a sub-directory on the cluster manager that is below `/opt/msys/ecelerity/etc/conf` and bears the name of the node: /opt/msys/ecelerity/etc/conf/*`nodename`*. Copy the appropriate configuration files from the `default` directory.
 
-3. Make sure that the working copy of the repository is up-to-date by issuing the command:
-
-   eccfg pull --username *`name`* --password *`passwd`*
-4. Create and save the new configuration file.
+2. Create and save the new configuration file.
 
-5. Open the appropriate configuration file and include the new file using the `include` directive.
+3. Open the appropriate configuration file and include the new file using the `include` directive.
 
-6. Test the validity of your changes using the [validate_config](/momentum/4/executable/validate-config) script:
+4. Test the validity of your changes using the [validate_config](/momentum/4/executable/validate-config) script:
 
    `/opt/msys/ecelerity/bin/validate_config`
 
-7. Check that your changes are valid by reloading the configuration before committing it. Issue the following command:
+5. Check that your changes are valid by reloading the configuration. Issue the following command:
 
    `/opt/msys/ecelerity/bin/ec_console /tmp/2025 config reload`
 
    If there are any errors, the new configuration will not load and the error message, `"Reconfigure failed"`, will be displayed.
 
-8. Once you are satisfied with your changes, commit them using the following command:
-
-   /opt/msys/ecelerity/bin/eccfg commit --username *`admin_user`* \
-   --password *`password`*
-
-   If you are configuring a cluster, you should allow about a minute or so for the changes to propagate to all nodes.
-
-9. Implement your changes.
+6. Implement your changes.
 
    * For a single-node configuration, open the console and issue the command:
 
@@ -249,4 +193,4 @@ As discussed in [“Using the `include` and `readonly_include` Directives”](/m
 
 Some configuration changes require restarting the ecelerity process, as documented throughout this guide. Running the **`config reload`** command will not suffice.
 
-   * For a node-specific configuration, use the [ec_ctl](/momentum/4/executable/ec-ctl) command to restart the ecelerity process. The **`config reload`** command will not load configuration changes.
\ No newline at end of file
+   * For a node-specific configuration, use the [ec_ctl](/momentum/4/executable/ec-ctl) command to restart the ecelerity process. The **`config reload`** command will not load configuration changes.
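The simplified edit/validate/reload procedure above could be wrapped in a small script. This is a hypothetical sketch, not a Momentum-supplied tool; the binary paths, the `/tmp/2025` console socket, and the `"Reconfigure failed"` error string all follow the document's own examples:

```python
import subprocess
import sys

ECELERITY_BIN = "/opt/msys/ecelerity/bin"  # install prefix used throughout these docs

def reload_succeeded(console_output):
    """Per the docs, a failed reload prints the error message 'Reconfigure failed'."""
    return "Reconfigure failed" not in console_output

def validate_and_reload(console_socket="/tmp/2025"):
    """Run validate_config first, then reload the configuration via ec_console."""
    # Step 1: validate; a non-zero exit status means the configuration is invalid.
    if subprocess.run([ECELERITY_BIN + "/validate_config"]).returncode != 0:
        sys.exit("validate_config reported errors; not reloading")
    # Step 2: reload through the console and inspect the output for the error string.
    out = subprocess.run(
        [ECELERITY_BIN + "/ec_console", console_socket, "config", "reload"],
        capture_output=True,
        text=True,
    ).stdout
    if not reload_succeeded(out):
        sys.exit("configuration reload failed")
```

Because the reload is only attempted after `validate_config` succeeds, a configuration with syntax errors never reaches the running ecelerity process.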
diff --git a/content/momentum/4/conf-performance-tips.md b/content/momentum/4/conf-performance-tips.md
new file mode 100644
index 000000000..0a96854e2
--- /dev/null
+++ b/content/momentum/4/conf-performance-tips.md
@@ -0,0 +1,136 @@
+---
+lastUpdated: "05/21/2024"
+title: "Performance Tips"
+description: "This chapter provides some tips to optimize Momentum performance"
+---
+
+Momentum is an exceptionally powerful all-in-one email infrastructure solution. For several reasons, however, the default configuration shipped with the installation bundle does not run at full speed for all use cases. This chapter provides some tips to optimize Momentum performance.
+
+## CPU Optimization
+
+With the [Supercharger](/momentum/4/licensed-features-supercharger) licensed feature, Momentum runs on top of several [event loop](/momentum/4/multi-event-loops) schedulers and uses multicore CPUs more efficiently. In this model, it is also possible to assign dedicated event loops, with the desired concurrency, to listeners (e.g., the HTTP listener). The default configuration, by contrast, relies solely on thread pools to offload specific tasks, so Momentum keeps running on top of the original master event loop only and can occasionally be bottlenecked.
+
+The Supercharger's *"75% of CPU cores"* formula works well on largely SMTP-driven systems. For systems with larger [message generation](/momentum/4/message-gen) flows (i.e., REST injections), the number of event loops can be limited to 4 or 5, with higher concurrency values assigned to the `msg_gen` thread pools (see the `gen_transactional_threads` configuration [here](/momentum/4/modules/msg-gen)). For instance:
+
+```
+msg_gen {
+  (...)
+  gen_transactional_threads = 4
+}
+```
+
+Also, the CPU thread pool is used by many functions in the REST flows, so it is recommended to increase its concurrency from the default value of 4:
+
+```
+ThreadPool "CPU" {
+  concurrency = 8
+}
+```
+
+Finally, it is recommended to assign separate event loops to listeners to reduce latency and improve overall performance. For instance, the following configuration assigns dedicated event loops to the ESMTP and HTTP listeners:
+
+```
+ESMTP_Listener {
+  event_loop = "smtp_pool"
+  (...)
+}
+(...)
+HTTP_Listener {
+  event_loop = "http_pool"
+  (...)
+}
+```
+
+## Better Caching
+
+Momentum has some built-in caches that can be tuned to improve performance. The following are the most important ones:
+
+### Generic Getter
+
+This cache is used for parameters that are not in a binding/domain scope, so anything global, including module configuration, exists in the generic getter cache. This cache gets a lot of traffic, so setting it in `ecelerity.conf` to a few million entries is reasonable:
+
+```
+generic_getter_cache_size = 4000000
+```
+
+### Regex Match
+
+The match cache saves the results of queries against regular expression domain stanzas. This cache is enabled by default, but its default [size](/momentum/4/config/ref-match-cache-size) is very small (16384 entries). Making it larger is a great idea, especially if you use any regular expression domain stanzas:
+
+```
+match_cache_size = 2000000
+```
+
+## Boosting `jemalloc` Performance
+
+`jemalloc` has demonstrated excellent performance and stability. Because of that, it became Momentum's default memory allocator. However, it is possible to get even more from it by tuning the `MALLOC_CONF` environment variable.
+
+Add these lines to the `/opt/msys/ecelerity/etc/environment` file (creating it if necessary):
+
+```
+MALLOC_CONF="background_thread:true"
+export MALLOC_CONF
+```
+
+Then (re)start the `ecelerity` service.
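As a language-agnostic illustration of what the match cache described above buys (a generic sketch, not Momentum's implementation — the domain stanzas here are made up), memoizing regex-stanza lookups means only the first query for a given domain pays the cost of scanning every pattern; repeated queries become cache hits:

```python
import re
from functools import lru_cache

# Hypothetical regular expression domain stanzas, compiled once up front.
STANZA_PATTERNS = [
    re.compile(r".*\.example\.com$"),
    re.compile(r"mail\d+\.test\.org$"),
]

@lru_cache(maxsize=2_000_000)  # plays the role of match_cache_size
def match_stanza(domain):
    """Return the index of the first stanza matching the domain, or None."""
    for i, pattern in enumerate(STANZA_PATTERNS):
        if pattern.match(domain):
            return i
    return None

# Repeated lookups for the same domain hit the cache instead of re-scanning.
assert match_stanza("smtp.example.com") == 0
assert match_stanza("mail42.test.org") == 1
assert match_stanza("unrelated.net") is None
```

The larger the cache relative to the working set of distinct recipient domains, the fewer full pattern scans are performed — the same trade-off that makes raising `match_cache_size` worthwhile.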
+
+## Tuning Lua
+
+Lua has a garbage collector that can be tuned to improve performance. The following are some recommended settings.
+
+In the `ecelerity.conf` file:
+
+```
+ThreadPool "gc" {
+  concurrency = 10
+}
+(...)
+scriptlet "scriptlet" {
+  (...)
+  gc_every = 20
+  gc_step_on_recycle = true
+  gc_stepmul = 300
+  gc_threadpool = "gc"
+  gc_trace_thresh = 1000
+  gc_trace_xref_thresh = 1000
+  global_trace_interval = 13
+  max_uses_per_thread = 5000
+  reap_interval = 13
+  use_reusable_thread = true
+}
+```
+
+Set these environment variables in the `/opt/msys/ecelerity/etc/environment` file:
+
+```
+USE_TRACE_THREADS=true
+export USE_TRACE_THREADS
+LUA_USE_TRACE_THREADS=true
+export LUA_USE_TRACE_THREADS
+LUA_NUM_TRACE_THREADS=8
+export LUA_NUM_TRACE_THREADS
+LUA_NON_SIGNAL_COLLECTOR=true
+export LUA_NON_SIGNAL_COLLECTOR
+```
+
+## Miscellaneous Configuration
+
+These are `ecelerity.conf` settings that are known to improve performance for different Momentum tasks. Before applying them, however, review their documentation and make sure they fit your environment and use cases:
+
+```
+fully_resolve_before_smtp = false
+growbuf_size = 32768
+inline_transfail_processing = 0
+initial_hash_buckets = 64
+keep_message_dicts_in_memory = true
+large_message_threshold = 262144
+max_resident_active_queue = 1000
+max_resident_messages = 100000
+```
+
+## Miscellaneous Tips
+
+- Don't forget to [adjust `sysctl` settings](/momentum/4/byb-sysctl-conf) for best TCP connection performance;
+- Prefer [chunk_logger](/momentum/4/modules/chunk-logger) over logging to `paniclog`. The reasons are taken from the `chunk_logger` page:
+
+> _Logging to the_ `paniclog` _in the scheduler thread (the main thread) can limit throughput and cause watchdog kills. (...) [It] involves disk I/O, and writing to the_ `paniclog` _in the scheduler thread may block its execution for a long time, thereby holding up other tasks in the scheduler thread and decreasing throughput._
diff --git a/content/momentum/4/config/ref-msgc-server-conf.md b/content/momentum/4/config/ref-msgc-server-conf.md
index c964a7d2a..bd7697f7e 100644
--- a/content/momentum/4/config/ref-msgc-server-conf.md
+++ b/content/momentum/4/config/ref-msgc-server-conf.md
@@ -1,15 +1,13 @@
 ---
-lastUpdated: "03/26/2020"
+lastUpdated: "05/21/2024"
 title: "msgc_server.conf File"
-description: "The msgc server conf file contains the configuration relevant to the cluster messaging bus This file is referenced from the eccluster conf file on the cluster manager and from the ecelerity cluster conf file on nodes It MUST live in the global portion of the repository as it must be..."
+description: "The msgc server conf file contains the configuration relevant to the cluster messaging bus This file is referenced from the eccluster conf file on the cluster manager and from the ecelerity cluster conf file on nodes"
 ---
 
 The `msgc_server.conf` file contains the configuration relevant to the cluster messaging bus. This file is referenced from the `eccluster.conf` file on the cluster manager and from the `ecelerity-cluster.conf` file on nodes. It MUST live in the global portion of the repository, as it must be the same for all nodes in the cluster.
 
 ### Note
 
-Restart [Section 15.1.3, “Configuration Management (**ecconfigd**)”](conf.overview#conf.ecconfigd "15.1.3. Configuration Management (ecconfigd)") after making extensive changes to `msgc_server.conf`, such as adding multiple nodes. Use the command **`/etc/init.d/ecconfigd restart`** .
-
 For a discussion of scopes and fallbacks, see [“Configuration Scopes and Fallback”](/momentum/4/4-ecelerity-conf-fallback).
 For a summary of all the non-module specific configuration options, refer to [*Configuration Options Summary*](/momentum/4/config-options-summary) .
@@ -44,4 +42,4 @@ The msgcserver_listener mediates between msgc_servers and between msgc_servers a
 
-
\ No newline at end of file
+
diff --git a/content/momentum/4/environment-file.md b/content/momentum/4/environment-file.md
index 2fbd2b466..55b843543 100644
--- a/content/momentum/4/environment-file.md
+++ b/content/momentum/4/environment-file.md
@@ -1,7 +1,7 @@
 ---
-lastUpdated: "03/26/2020"
+lastUpdated: "05/21/2024"
 title: "Configuring the Environment File"
-description: "Environment variables should be set or adjusted on startup If Momentum is started up using the ec ctl script any environment variables included in the environment file will be set Environment variables can be set in the opt msys ecelerity etc environment file The variables that can be set are..."
+description: "Environment variables should be set or adjusted on startup If Momentum is started up using the ec ctl script any environment variables included in the environment file will be set"
 ---
 
 Environment variables should be set or adjusted on startup. If Momentum is started up using the [ec_ctl](/momentum/4/executable/ec-ctl) script, any environment variables included in the `environment` file will be set.
@@ -16,10 +16,6 @@ Environment variables can be set in the `/opt/msys/ecelerity/etc/environment` fi
 
    This parameter should match what you have configured for your Control_Listener in `ecelerity.conf`.
 
-* `EC_CONF_SEARCH_PATH` – this value defines the search path used by [**ecconfigd**](/momentum/4/conf-overview#conf.ecconfigd) to determine the applicable configuration file
-
-   Add this variable to the environment file if you wish to change the search order.
-
 * `EC_DIGEST_REALM` – MD5 digest realm (see [ec_md5passwd](/momentum/4/executable/ec-md-5-passwd).)
 
 * `ECELERITY_DNS_BACKEND` – the variable for setting the DNS resolver.
@@ -60,8 +56,8 @@ Environment variables can be set in the `/opt/msys/ecelerity/etc/environment` fi
 
 * `TRY` – number of times to loop waiting for Momentum to start up
 
-   For examples of usage, see [ec_ctl](/momentum/4/executable/ec-ctl) and [ecconfigd_ctl](/momentum/4/executable/ecconfigd-ctl).
+   For an example of usage, see [ec_ctl](/momentum/4/executable/ec-ctl).
 
 ### Note
 
-The `GIMLI_WATCHDOG_INTERVAL`, `GIMLI_WATCHDOG_START_INTERVAL`, and `GIMLI_WATCHDOG_STOP_INTERVAL` variables set the interval for restarting Momentum when it has been unresponsive. For more details execute **`man -M /opt/msys/gimli/man monitor`** .
\ No newline at end of file
+The `GIMLI_WATCHDOG_INTERVAL`, `GIMLI_WATCHDOG_START_INTERVAL`, and `GIMLI_WATCHDOG_STOP_INTERVAL` variables set the interval for restarting Momentum when it has been unresponsive. For more details execute **`man -M /opt/msys/gimli/man monitor`** .
diff --git a/content/momentum/4/executable/create-ssl-cert.md b/content/momentum/4/executable/create-ssl-cert.md
index 3731f9fde..bd0a1882f 100644
--- a/content/momentum/4/executable/create-ssl-cert.md
+++ b/content/momentum/4/executable/create-ssl-cert.md
@@ -1,7 +1,7 @@
 ---
-lastUpdated: "03/26/2020"
+lastUpdated: "05/21/2024"
 title: "create_ssl_cert"
-description: "create ssl cert create a self signed SSL certificate opt msys ecelerity bin create ssl cert service hostname prefix user During installation self signed SSL certificates valid for one year are created for some services Use this command to create a new certificate when the original expires When a certificate..."
+description: "During installation self signed SSL certificates valid for one year are created for some services Use this command to create a new certificate when the original expires"
 ---
 
@@ -16,19 +16,12 @@ create_ssl_cert — create a self-signed SSL certificate
 
 ## Description
 
-During installation, self-signed SSL certificates valid for one year are created for some services. Use this command to create a new certificate when the original expires. When a certificate expires, you will see an error such as the following:
+During installation, self-signed SSL certificates valid for one year are created. Use this command to create a new certificate when the original expires. When a certificate expires, you will get an error message.
 
-```
-ERROR: premature EOF in "svn update '--config-dir' '/opt/msys/ecelerity/etc/.eccfg' »
-'--username' 'ecuser' '--no-auth-cache' '--non-interactive' '--trust-server-cert' '.'"
-svn: OPTIONS of 'https://mail2:2027/config/default/boot': Server certificate »
-verification failed: certificate has expired, issuer is not trusted
-```
+To create a new certificate, first stop the appropriate service and remove the old certificate. Then issue the **create_ssl_cert** command:
 
-To create a new certificate, first stop the appropriate service and remove the old certificate. Then issue the **create_ssl_cert** command. For example, the following command creates a certificate for the **ecconfigd** service:
-
-shell> /opt/msys/ecelerity/bin/create_ssl_cert ecconfigd *`hostname`* \
-/var/ecconfigd/apache ecuser
+shell> /opt/msys/ecelerity/bin/create_ssl_cert *`service`* *`hostname`* \
+*`prefix`* *`user`*
 
 The parameters passed to this command are as follows:
@@ -38,17 +31,7 @@ The parameters passed to this command are as follows: