From 1d171db1fc1cbc53e09bd1b18e305ede62ab09d7 Mon Sep 17 00:00:00 2001 From: Betsy Gitelman Date: Thu, 11 Jul 2024 15:16:16 -0400 Subject: [PATCH 01/59] PGD re-edit batch 8 - reference --- .../pgd/5/reference/catalogs-internal.mdx | 83 ++++++------ .../docs/pgd/5/reference/catalogs-visible.mdx | 124 +++++++++--------- .../pgd/5/reference/conflict_functions.mdx | 6 +- .../docs/pgd/5/reference/conflicts.mdx | 27 ++-- .../reference/streamtriggers/rowfunctions.mdx | 19 ++- .../reference/streamtriggers/rowvariables.mdx | 3 +- 6 files changed, 132 insertions(+), 130 deletions(-) diff --git a/product_docs/docs/pgd/5/reference/catalogs-internal.mdx b/product_docs/docs/pgd/5/reference/catalogs-internal.mdx index ece72baa72b..5d7b51205bc 100644 --- a/product_docs/docs/pgd/5/reference/catalogs-internal.mdx +++ b/product_docs/docs/pgd/5/reference/catalogs-internal.mdx @@ -2,7 +2,7 @@ title: Internal catalogs and views indexdepth: 3 --- -Catalogs and views are presented here in alphabetical order. +Catalogs and views are listed here in alphabetical order. ### `bdr.ddl_epoch` @@ -26,7 +26,7 @@ node. 
Specifically, it tracks: * Node joins (to the cluster) * Raft state changes (that is, whenever the node changes its role in the consensus -protocol - leader, follower, or candidate to leader) - see [Monitoring Raft consensus](../monitoring/sql#monitoring-raft-consensus) +protocol - leader, follower, or candidate to leader); see [Monitoring Raft consensus](../monitoring/sql#monitoring-raft-consensus) * Whenever a worker has errored out (see [bdr.workers](/pgd/latest/reference/catalogs-visible/#bdrworkers) and [Monitoring PGD replication workers](../monitoring/sql#monitoring-pgd-replication-workers)) @@ -34,44 +34,44 @@ and [Monitoring PGD replication workers](../monitoring/sql#monitoring-pgd-replic | Name | Type | Description | | -------------- | ----------- | ----------------------------------------------------------------------------------- | -| event_node_id | oid | The ID of the node to which the event refers to | -| event_type | int | The type of the event (a node, raft or worker related event) | -| event_sub_type | int | The sub-type of the event, i.e. if it's a join, a state change or an error | -| event_source | text | The name of the worker process where the event was sourced | -| event_time | timestamptz | The timestamp at which the event occurred | -| event_text | text | A textual representation of the event (e.g. 
the error of the worker) |
+| event_node_id  | oid         | ID of the node to which the event refers                                            |
+| event_type     | int         | Type of the event (a node, raft, or worker-related event)                           |
+| event_sub_type | int         | Subtype of the event, that is, whether it's a join, a state change, or an error     |
+| event_source   | text        | Name of the worker process where the event was sourced                              |
+| event_time     | timestamptz | Timestamp at which the event occurred                                               |
+| event_text     | text        | Textual representation of the event (for example, the error of the worker)          |
 | event_detail   | text        | A more detailed description of the event (for now, only relevant for worker errors) |
 
 ### `bdr.event_summary`
 
 A view of the `bdr.event_history` catalog that displays the information in a more human-friendly format. Specifically, it displays the event types and subtypes
-as textual representations, rather than integers.
+as textual representations rather than integers.
 
 ### `bdr.node_config`
 
-An internal catalog table with per node configuration options.
+An internal catalog table with per-node configuration options.
 
 #### `bdr.node_config` columns
 
 | Name                    | Type     | Description                              |
 | ----------------------- | -------- | ---------------------------------------- |
-| node_id                 | oid      | The node ID                              |
+| node_id                 | oid      | Node ID                                  |
 | node_route_priority     | int      | Priority assigned to this node           |
 | node_route_fence        | boolean  | Switch to fence this node                |
 | node_route_writes       | boolean  | Switch to allow writes                   |
 | node_route_reads        | boolean  | Switch to allow reads                    |
-| node_route_dsn          | text     | The interface of this node               |
+| node_route_dsn          | text     | Interface of this node                   |
 
 ### `bdr.node_group_config`
 
-An internal catalog table with per node group configuration options.
+An internal catalog table with per-node group configuration options.
#### `bdr.node_group_config` columns | Name | Type | Description | | ----------------------- | -------- | ---------------------------------------- | -| node_group_id | oid | The node group ID | +| node_group_id | oid | Node group ID | | route_writer_max_lag | bigint | Maximum write lag accepted | | route_reader_max_lag | bigint | Maximum read lag accepted | | route_writer_wait_flush | boolean | Switch if we need to wait for the flush | @@ -92,6 +92,7 @@ Per-node-group routing configuration options. | route_reader_max_lag | bigint | Maximum read lag accepted | | route_writer_wait_flush | boolean | Wait for flush | + -At list price, estimated overall monthly management costs are $600–$800 for a single region. Check with your Google Cloud account manager for specifics that apply to your account. +At list price, estimated overall monthly management costs are $600–$800 for a single region. Check with your Google Cloud account manager for specifics that apply to your account. ## Apache Superset costs -Enabling [Apache Superset](/biganimal/latest/using_cluster/06_analyze_with_superset/) to analyze your data has an added cost. In most cases the costs are approximately $150 per month, based on your cloud provider, instance and storage type selections, and other factors. +Enabling [Apache Superset](/edb-postgres-ai/cloud-service/using_cluster/your_cloud_account/analyze_with_superset/) to analyze your data has an added cost. In most cases the costs are approximately $150 per month, based on your cloud provider, instance and storage type selections, and other factors. ## PgBouncer costs -Enabling [PgBouncer](../getting_started/creating_a_cluster/#pgbouncer) to pool your connections incurs additional costs that depend on your cloud provider. In addition to the cloud provider costs, PgBouncer connects to your primary server and requires an IP address. 
BigAnimal provisions up to three instances per PgBouncer-enabled cluster to ensure that performance is unaffected, so each availability zone receives its own instance of PgBouncer. The extra VM costs are the 2vcpu SKU times the number of PgBouncer instances. For AWS, the instance type is c5.large. For Azure, the instance type is F2s_v2. +Enabling [PgBouncer](/edb-postgres-ai/cloud-service/getting_started/creating_cluster/creating_a_cluster/#pgbouncer) to pool your connections incurs additional costs that depend on your cloud provider. In addition to the cloud provider costs, PgBouncer connects to your primary server and requires an IP address. BigAnimal provisions up to three instances per PgBouncer-enabled cluster to ensure that performance is unaffected, so each availability zone receives its own instance of PgBouncer. The extra VM costs are the 2vcpu SKU times the number of PgBouncer instances. For AWS, the instance type is c5.large. For Azure, the instance type is F2s_v2. ## Payments and billing Your payment and billing options include: -- Digital self-service using a credit card -- Direct purchase using the Sales Order form -- Azure Marketplace +- Digital self-service using a credit card +- Direct purchase using the Sales Order form +- Azure Marketplace ### Digital self-service BigAnimal charges the credit card for your [EDB account](https://www.enterprisedb.com/accounts/register) month to month and sends invoices to your email. This invoice includes database costs. If you're using BigAnimal's cloud account, it also includes infrastructure costs. -!!! Note +!!! Note + If you want to take advantage of discounts, contact [Sales](https://info.enterprisedb.com/EDB-Contact-Us.html). ### Direct purchase + If you're using BigAnimal's cloud account, you're invoiced monthly, or you can arrange for a longer term. If you're using your Microsoft Azure or AWS account, usage details are included in your invoice. 
Account owners can download a usage report in CSV format from the BigAnimal Usage page. You can set the time frame, database type, and cloud provider prior to downloading the report. The information on the page refreshes hourly. diff --git a/product_docs/docs/eprs/7/01_introduction/03_certified_supported_versions.mdx b/product_docs/docs/eprs/7/01_introduction/03_certified_supported_versions.mdx index 6c6e59cfff0..77f50931b25 100644 --- a/product_docs/docs/eprs/7/01_introduction/03_certified_supported_versions.mdx +++ b/product_docs/docs/eprs/7/01_introduction/03_certified_supported_versions.mdx @@ -13,17 +13,20 @@ You can use the following database product versions with Replication Server: - SQL Server 2014 version 12.0.5000.0 is explicitly certified. Newer minor versions in the 12.0 line are supported as well. !!!Note -All PostgreSQL and EDB Postgres Advanced Server versions available as BigAnimal single-node and primary/standby high-availability cluster types are also supported for SMR configurations. See the BigAnimal (EDB’s managed database cloud service) [documentation](/biganimal/latest) for more information about BigAnimal’s [supported cluster types](/biganimal/latest/overview/02_high_availability/). See the [database version policy documentation](/biganimal/latest/overview/05_database_version_policy/) for the versions of PostgreSQL and EDB Postgres Advanced Server available in BigAnimal. - + +All PostgreSQL and EDB Postgres Advanced Server versions available as BigAnimal single-node and primary/standby high-availability cluster types are also supported for SMR configurations. See the BigAnimal (EDB’s managed database cloud service) [documentation](/biganimal/latest) for more information about BigAnimal’s [supported cluster types](/edb-postgres-ai/cloud-service/references/supported_cluster_types/). 
See the [database version policy documentation](/edb-postgres-ai/cloud-service/references/supported_database_versions/) for the versions of PostgreSQL and EDB Postgres Advanced Server available in BigAnimal. + EDB Postgres Distributed (PGD) v5.3.0 is explicitly certified as a Publishing database for trigger mode and as Subscription database for both trigger and wal modes. !!! As of Replication Server 7.1.0: -- SQL Server 2016 version 13.00.5026 is explicitly certified. Newer minor versions in the 13.0 line are supported as well. -- SQL Server 2017 version 14.0.1000.169 is explicitly certified. Newer minor versions in the 14.0 line are supported as well. -- SQL Server 2019 version 15.0.2000.5 is explicitly certified. Newer minor versions in the 15.0 line are supported as well. + +- SQL Server 2016 version 13.00.5026 is explicitly certified. Newer minor versions in the 13.0 line are supported as well. +- SQL Server 2017 version 14.0.1000.169 is explicitly certified. Newer minor versions in the 14.0 line are supported as well. +- SQL Server 2019 version 15.0.2000.5 is explicitly certified. Newer minor versions in the 15.0 line are supported as well. Contact your EnterpriseDB Account Manager or [sales@enterprisedb.com](mailto:sales@enterprisedb.com) if you require support for other platforms. !!! Note + Replication server isn't tested and isn't officially supported for use with Oracle RAC and Exadata, but it might work when connected to a single persistent node. To determine its ability to work with RAC or Exadata, contact your EDB representative. 
diff --git a/product_docs/docs/migration_toolkit/55/02_supported_operating_systems_and_database_versions.mdx b/product_docs/docs/migration_toolkit/55/02_supported_operating_systems_and_database_versions.mdx index 1c81d400b4d..2fc05a65523 100644 --- a/product_docs/docs/migration_toolkit/55/02_supported_operating_systems_and_database_versions.mdx +++ b/product_docs/docs/migration_toolkit/55/02_supported_operating_systems_and_database_versions.mdx @@ -25,17 +25,20 @@ You can use the following database product versions with Migration Toolkit: - (Sybase) SAP Adaptive Server Enterprise 15.7 - (Sybase) SAP Adaptive Server Enterprise 16.0 -!!! Note -All the PostgreSQL and EDB Postgres Advanced Server versions available as BigAnimal single-node, primary/standby high-availability, and distributed high-availability cluster types are also supported for use with Migration Toolkit. See the BigAnimal (EDB’s managed database cloud service) [documentation](/biganimal/latest) for more information about BigAnimal’s [supported cluster types](/biganimal/latest/overview/02_high_availability/). See the [database version policy documentation](/biganimal/latest/overview/05_database_version_policy/) for the versions of PostgreSQL and EDB Postgres Advanced Server available in BigAnimal. +!!!Note + +All the PostgreSQL and EDB Postgres Advanced Server versions available as BigAnimal single-node, primary/standby high-availability, and distributed high-availability cluster types are also supported for use with Migration Toolkit. See the BigAnimal (EDB’s managed database cloud service) [documentation](/biganimal/latest) for more information about BigAnimal’s [supported cluster types](/edb-postgres-ai/cloud-service/references/supported_cluster_types/). See the [database version policy documentation](/edb-postgres-ai/cloud-service/references/supported_database_versions/) for the versions of PostgreSQL and EDB Postgres Advanced Server available in BigAnimal. 
The superuser role isn't available in BigAnimal EDB Postgres Advanced Server distributed high-availability cluster databases. As a result, the following Migration Toolkit capabilities that require superuser access aren't supported when targeting BigAnimal distributed high-availability clusters: -- The `-copyViaDBLinkOra` option that enables data migration using the db_link_ora extension -- The Oracle-compatible SQL command `CREATE DATABASE LINK` + +- The `-copyViaDBLinkOra` option that enables data migration using the db_link_ora extension +- The Oracle-compatible SQL command `CREATE DATABASE LINK` !!! Contact your EnterpriseDB Account Manager or [sales@enterprisedb.com](mailto:sales@enterprisedb.com) if you require support for other database products. -!!! Note +!!!Note + The Migration Toolkit isn't tested with and doesn't officially support use with Oracle Real Application Clusters (RAC) and Exadata. However, Migration Toolkit might work when it's connected to a single persistent node. For more information, contact your EDB representative. !!! @@ -43,6 +46,6 @@ The Migration Toolkit isn't tested with and doesn't officially support use with Migration Toolkit supports installations on Linux, Windows, and MacOS platforms. See [Product Compatibility](https://www.enterprisedb.com/platform-compatibility#mtk) for details. - !!! Note + The ojdbc7.jar can cause large data migrations to be incomplete due to a limit on records over Integer.MAX_VALUE (2147483647) while fetching from the ResultSet. To avoid this issue, we recommend upgrading to ojdbc8.jar. 
diff --git a/product_docs/docs/migration_toolkit/55/06_building_toolkit.properties_file.mdx b/product_docs/docs/migration_toolkit/55/06_building_toolkit.properties_file.mdx index bf8dd5f1132..9b43e9f2357 100644 --- a/product_docs/docs/migration_toolkit/55/06_building_toolkit.properties_file.mdx +++ b/product_docs/docs/migration_toolkit/55/06_building_toolkit.properties_file.mdx @@ -27,6 +27,7 @@ Before executing Migration Toolkit commands, modify the `toolkit.properties` fil - `TARGET_DB_PASSWORD` specifies the password of the target database user. !!! Note + Unless specified in the command line, Migration Toolkit expects the source database to be Oracle and the target database to be EDB Postgres Advanced Server. For any other source or target database, specify the `-sourcedbtype` or `-targetdbtype` options as described in [Migrating from a non-Oracle source database](/migration_toolkit/latest/07_invoking_mtk/#migrating-from-a-non-oracle-source-database). For specifying a target database on BigAnimal, see [Defining a BigAnimal URL](#defining-a-biganimal-url). @@ -59,19 +60,19 @@ The URL conforms to JDBC standards and takes the form: An Advanced Server URL contains the following information: -- `jdbc` — The protocol is always `jdbc`. +- `jdbc` — The protocol is always `jdbc`. -- `edb` — If you're using Advanced Server, specify `edb` for the subprotocol value. +- `edb` — If you're using Advanced Server, specify `edb` for the subprotocol value. -- `` — The name or IP address of the host where the Postgres instance is running. +- `` — The name or IP address of the host where the Postgres instance is running. -- `` — The port number that the Advanced Server database listener is monitoring. The default port number is 5444. +- `` — The port number that the Advanced Server database listener is monitoring. The default port number is 5444. -- `` — The name of the source or target database. +- `` — The name of the source or target database. 
-- `{TARGET_DB_USER|SRC_DB_USER}` — Specifies a user with privileges to create each type of object migrated. If migrating data into a table, the specified user might also require insert, truncate, and references privileges for each target table. +- `{TARGET_DB_USER|SRC_DB_USER}` — Specifies a user with privileges to create each type of object migrated. If migrating data into a table, the specified user might also require insert, truncate, and references privileges for each target table. -- `{TARGET_DB_PASSWORD|SRC_DB_PASSWORD}` — Set to the password of the privileged Advanced Server user. +- `{TARGET_DB_PASSWORD|SRC_DB_PASSWORD}` — Set to the password of the privileged Advanced Server user. @@ -100,19 +101,19 @@ A PostgreSQL URL conforms to JDBC standards and takes the form: The URL contains the following information: -- `jdbc` — The protocol is always `jdbc`. +- `jdbc` — The protocol is always `jdbc`. -- `postgresql` — If you're using PostgreSQL, specify `postgresql` for the subprotocol value. +- `postgresql` — If you're using PostgreSQL, specify `postgresql` for the subprotocol value. -- `` — The name or IP address of the host where the Postgres instance is running. +- `` — The name or IP address of the host where the Postgres instance is running. -- `` — The port number that the Postgres database listener is monitoring. The default port number is 5432. +- `` — The port number that the Postgres database listener is monitoring. The default port number is 5432. -- `` — The name of the source or target database. +- `` — The name of the source or target database. -- `{SRC_DB_USER|TARGET_DB_USER}` — Specify a user with privileges to create each type of object migrated. If migrating data into a table, the specified user might also need insert, truncate, and references privileges for each target table. +- `{SRC_DB_USER|TARGET_DB_USER}` — Specify a user with privileges to create each type of object migrated. 
If migrating data into a table, the specified user might also need insert, truncate, and references privileges for each target table. -- `{SRC_DB_PASSWORD|TARGET_DB_PASSWORD}` — Set to the password of the privileged PostgreSQL user. +- `{SRC_DB_PASSWORD|TARGET_DB_PASSWORD}` — Set to the password of the privileged PostgreSQL user. @@ -137,27 +138,29 @@ When migrating to BigAnimal, `TARGET_DB_URL` takes the form of a JDBC URL. For e ```text jdbc:://[:]/?sslmode= ``` -!!! Note + +!!!Note + Many of the values you need for the target database URL are available from the BigAnimal portal. In BigAnimal, select your cluster and go to the **Connect** tab to find the values. !!! The URL contains the following information: -- `jdbc` — The protocol is always `jdbc`. +- `jdbc` — The protocol is always `jdbc`. -- `postgres_type` — The subprotocol is the Postgres type. Specify `edb` if you're using Advanced Server or `postgresql` if you're using PostgreSQL. +- `postgres_type` — The subprotocol is the Postgres type. Specify `edb` if you're using Advanced Server or `postgresql` if you're using PostgreSQL. -- `` — The host name of your cluster. You can copy it from the **Host** field on the **Connect** tab in the BigAnimal portal. +- `` — The host name of your cluster. You can copy it from the **Host** field on the **Connect** tab in the BigAnimal portal. -- `` — The port number that the database listener is monitoring. You can copy it from the **Port** field on the **Connect** tab in the BigAnimal portal. +- `` — The port number that the database listener is monitoring. You can copy it from the **Port** field on the **Connect** tab in the BigAnimal portal. -- `` — The name of the target database. Set this to the name of the database in your cluster that you want to use as your migration target database. The name of the default database for your cluster is shown in the **Dbname** field on the **Connect** tab in the BigAnimal portal. 
Often a separate database is created for use as the migration target. +- `` — The name of the target database. Set this to the name of the database in your cluster that you want to use as your migration target database. The name of the default database for your cluster is shown in the **Dbname** field on the **Connect** tab in the BigAnimal portal. Often a separate database is created for use as the migration target. -- `TARGET_DB_USER` — Specifies the name of a privileged database user. You can copy it from the **User** field on the **Connect** tab in the BigAnimal portal. +- `TARGET_DB_USER` — Specifies the name of a privileged database user. You can copy it from the **User** field on the **Connect** tab in the BigAnimal portal. -- `TARGET_DB_PASSWORD` — Contains the password of the specified user. +- `TARGET_DB_PASSWORD` — Contains the password of the specified user. -- `sslmode` — Either "require" or "verify-full". See [Recommended settings for SSL mode](../../../biganimal/latest/using_cluster/02_connecting_your_cluster/connecting_from_a_client/#recommended-settings-for-ssl-mode). Listed at the end of the **Service URI** value on the **Connect** tab in the BigAnimal portal. +- `sslmode` — Either "require" or "verify-full". See [Recommended settings for SSL mode](/edb-postgres-ai/cloud-service/using_cluster/connect_from_a_client/#recommended-settings-for-ssl-mode). Listed at the end of the **Service URI** value on the **Connect** tab in the BigAnimal portal. ## Defining an Oracle URL @@ -181,32 +184,32 @@ jdbc:oracle:thin:@//:{} An Oracle URL contains the following information: -- `jdbc` — The protocol is always `jdbc`. +- `jdbc` — The protocol is always `jdbc`. -- `oracle` — The subprotocol is always `oracle`. +- `oracle` — The subprotocol is always `oracle`. -- `thin` — The driver type. Specify a driver type of `thin`. +- `thin` — The driver type. Specify a driver type of `thin`. -- `` — The name or IP address of the host where the Oracle server is running. 
+- `` — The name or IP address of the host where the Oracle server is running. -- `` — The port number that the Oracle database listener is monitoring. +- `` — The port number that the Oracle database listener is monitoring. -- `` — The database SID of the Oracle database. +- `` — The database SID of the Oracle database. -- `` — The name of the Oracle service. +- `` — The name of the Oracle service. -- `SRC_DB_USER` — Specifies the name of a privileged Oracle user. The Oracle user needs read access to the source database objects you want to migrate. If you want to migrate users/roles and related profiles/privileges, grant the Oracle user SELECT privileges on the following Oracle catalog objects: - - - `DBA_ROLES` - - `DBA_USERS` - - `DBA_TAB_PRIVS` - - `DBA_PROFILES` - - `DBA_ROLE_PRIVS` - - `ROLE_ROLE_PRIVS` - - `DBA_SYS_PRIVS` +- `SRC_DB_USER` — Specifies the name of a privileged Oracle user. The Oracle user needs read access to the source database objects you want to migrate. If you want to migrate users/roles and related profiles/privileges, grant the Oracle user SELECT privileges on the following Oracle catalog objects: + - `DBA_ROLES` + - `DBA_USERS` + - `DBA_TAB_PRIVS` + - `DBA_PROFILES` + - `DBA_ROLE_PRIVS` + - `ROLE_ROLE_PRIVS` + - `DBA_SYS_PRIVS` -- `SRC_DB_PASSWORD` — Contains the password of the specified user. + +- `SRC_DB_PASSWORD` — Contains the password of the specified user. @@ -232,52 +235,53 @@ If you're connecting to a SQL Server database, `SRC_DB_URL` takes the form of a By default, Microsoft JDBC Driver for SQL Server uses TLS encryption for all communication between the client and the SQL Server. Set `encrypt` to `false`, if you want to disable TLS encryption. For more information about connecting to a Microsoft SQL Server using JDBC type 4 driver, see [Building the connection URL](https://docs.microsoft.com/en-us/sql/connect/jdbc/building-the-connection-url?view=sql-server-ver16). 
- ```text - jdbc:sqlserver://:[;databaseName=][;encrypt=false] - ``` +```text +jdbc:sqlserver://:[;databaseName=][;encrypt=false] +``` A SQL server URL contains the following information: - - `jdbc` — The protocol is always `jdbc`. +- `jdbc` — The protocol is always `jdbc`. - - `sqlserver` — The server type is always `sqlserver`. +- `sqlserver` — The server type is always `sqlserver`. - - `` — The name or IP address of the host where the source server is running. +- `` — The name or IP address of the host where the source server is running. - - `` — The port number that the source database listener is monitoring. +- `` — The port number that the source database listener is monitoring. - - `` — The name of the source database. +- `` — The name of the source database. - - `SRC_DB_USER` — Specifies the name of a privileged SQL Server user. +- `SRC_DB_USER` — Specifies the name of a privileged SQL Server user. - - `SRC_DB_PASSWORD` — Contains the password of the specified user. +- `SRC_DB_PASSWORD` — Contains the password of the specified user. ### JTDS URL - ```text - jdbc:jtds:sqlserver://:/ - ``` - +```text +jdbc:jtds:sqlserver://:/ +``` + !!! Tip - The JTDS driver is an open source JDBC 3.0 type 4 driver that supports older versions of Microsoft SQL Server. See [http://jtds.sourceforge.net/](http://jtds.sourceforge.net/). When connecting newer versions of Microsoft SQL Server with Migration Toolkit, the Microsoft JDBC Driver for SQL Server is recommended. + + The JTDS driver is an open source JDBC 3.0 type 4 driver that supports older versions of Microsoft SQL Server. See . When connecting newer versions of Microsoft SQL Server with Migration Toolkit, the Microsoft JDBC Driver for SQL Server is recommended. A SQL server URL contains the following information: - - `jdbc` — The protocol is always `jdbc`. +- `jdbc` — The protocol is always `jdbc`. - - `jtds` — The driver name is always `jtds`. +- `jtds` — The driver name is always `jtds`. 
- - `sqlserver` — The server type is always `sqlserver`. +- `sqlserver` — The server type is always `sqlserver`. - - `` — The name or IP address of the host where the source server is running. +- `` — The name or IP address of the host where the source server is running. - - `` — The port number that the source database listener is monitoring. +- `` — The port number that the source database listener is monitoring. - - `` — The name of the source database. +- `` — The name of the source database. - - `SRC_DB_USER` — Specifies the name of a privileged SQL Server user. +- `SRC_DB_USER` — Specifies the name of a privileged SQL Server user. - - `SRC_DB_PASSWORD` — Contains the password of the specified user. +- `SRC_DB_PASSWORD` — Contains the password of the specified user. ## Defining a MySQL URL @@ -295,27 +299,27 @@ jdbc:mysql://[:]/ The URL contains the following information: -- `jdbc` — The protocol is always `jdbc`. +- `jdbc` — The protocol is always `jdbc`. -- `mysql` — The subprotocol is always `mysql`. +- `mysql` — The subprotocol is always `mysql`. -- `` — The name or IP address of the host where the source server is running. +- `` — The name or IP address of the host where the source server is running. -- `[]` — The port number that the MySQL database listener is monitoring. +- `[]` — The port number that the MySQL database listener is monitoring. -- `` — The name of the source database. +- `` — The name of the source database. -- `SRC_DB_USER` — Specifies the name of a privileged MySQL user. +- `SRC_DB_USER` — Specifies the name of a privileged MySQL user. -- `SRC_DB_PASSWORD` — Contains the password of the specified user. +- `SRC_DB_PASSWORD` — Contains the password of the specified user. !!! Note - - If datatype `tinyInt(1)` is used to store byte values other than 0 and 1 in the MySQL source database, make sure to append the optional parameter `?tinyInt1isBit=false` in the MySQL Connector/J JDBC Driver URL. 
- - Due to a bug in the MySQL Connector/J JDBC Driver 8.0.26, the migration of foreign key constraints fails in certain cases. EDB recommends that you don't use this driver for migrating data using Migration Toolkit. Instead, use MySQL Connector/J JDBC Driver 8.0.30 or later. + - If datatype `tinyInt(1)` is used to store byte values other than 0 and 1 in the MySQL source database, make sure to append the optional parameter `?tinyInt1isBit=false` in the MySQL Connector/J JDBC Driver URL. - For detailed information about this bug, see the [MySQL bug report](https://bugs.mysql.com/bug.php?id=95280). + - Due to a bug in the MySQL Connector/J JDBC Driver 8.0.26, the migration of foreign key constraints fails in certain cases. EDB recommends that you don't use this driver for migrating data using Migration Toolkit. Instead, use MySQL Connector/J JDBC Driver 8.0.30 or later. + For detailed information about this bug, see the [MySQL bug report](https://bugs.mysql.com/bug.php?id=95280). `TINYINT(1)` is mapped to `BIT(1)` in PostgreSQL/EDB Postgres Advanced Server (which might not be expected in some cases). But as the MySQL JDBC driver reports it as `BIT(1)`, Migration Toolkit maps it to `BIT(1)` in PostgreSQL/EDB Postgres Advanced Server. @@ -329,7 +333,6 @@ In this case, the JDBC driver reports `TINYINT(1)` as `TINYINT` and is mapped to - ## Defining a Sybase URL Migration Toolkit helps with migration from a Sybase database to an Advanced Server database. When migrating from Sybase, you must specify connection specifications for the Sybase source database in the `toolkit.properties` file. The connection information must include: @@ -343,25 +346,25 @@ When migrating from Sybase, `SRC_DB_URL` takes the form of a JTDS URL. For examp ```text jdbc:jtds:sybase://[:]/ ``` -!!! Tip - For an open source JDBC 3.0 type 4 driver for Sybase ASE and older versions of Microsoft SQL Server, see [http://jtds.sourceforge.net/](http://jtds.sourceforge.net/). 
-A Sybase URL contains the following information: +!!! Tip -- `jdbc` — The protocol is always `jdbc`. + For an open source JDBC 3.0 type 4 driver for Sybase ASE and older versions of Microsoft SQL Server, see . -- `jtds` — The driver name is always `jtds`. +A Sybase URL contains the following information: -- `sybase` — The server type is always `sybase`. +- `jdbc` — The protocol is always `jdbc`. -- `` — The name or IP address of the host where the source server is running. +- `jtds` — The driver name is always `jtds`. -- `` — The port number that the Sybase database listener is monitoring. +- `sybase` — The server type is always `sybase`. -- `` — The name of the source database. +- `` — The name or IP address of the host where the source server is running. -- `SRC_DB_USER` — Specifies the name of a privileged Sybase user. +- `` — The port number that the Sybase database listener is monitoring. -- `SRC_DB_PASSWORD` — Contains the password of the specified user. +- `` — The name of the source database. +- `SRC_DB_USER` — Specifies the name of a privileged Sybase user. +- `SRC_DB_PASSWORD` — Contains the password of the specified user. 
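Taken together, the URL conventions documented above can be sketched as a minimal `toolkit.properties` file for an Oracle-to-PostgreSQL migration. The host names, ports, service name, and credentials below are placeholders for illustration only, not values taken from this patch:

```text
# Hypothetical example — substitute your own hosts, service name, and credentials.
SRC_DB_URL=jdbc:oracle:thin:@//ora-host:1521/orclservice
SRC_DB_USER=system
SRC_DB_PASSWORD=oracle_password

TARGET_DB_URL=jdbc:postgresql://pg-host:5432/targetdb
TARGET_DB_USER=postgres
TARGET_DB_PASSWORD=postgres_password
```

The same file shape applies to the other source types; only the `SRC_DB_URL` subprotocol and its host/port/database portion change, as described in each section above.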
diff --git a/product_docs/docs/pem/8/monitoring_performance/pem_remote_monitoring.mdx b/product_docs/docs/pem/8/monitoring_performance/pem_remote_monitoring.mdx index b54131483a9..b079e0173c1 100644 --- a/product_docs/docs/pem/8/monitoring_performance/pem_remote_monitoring.mdx +++ b/product_docs/docs/pem/8/monitoring_performance/pem_remote_monitoring.mdx @@ -11,23 +11,23 @@ To remotely monitor a Postgres cluster with PEM, you must register the cluster w The following scenarios require remote monitoring using PEM: -- Postgres cluster running on AWS RDS -- [Postgres cluster running on BigAnimal](../../../biganimal/latest/using_cluster/05_monitoring_and_logging/) +- Postgres cluster running on AWS RDS +- [Postgres cluster running on BigAnimal](/edb-postgres-ai/cloud-service/using_cluster/monitoring_and_logging/) PEM remote monitoring supports: -| Feature Name | Remote monitoring supported? | Comments | -| --------------------------------------------------------------------------------------------------------------------------------- | ---------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| [Manage charts](charts/) | Yes | | -| [System reports](../reports/#system-configuration-report) | Yes | | -| [Capacity Manager](capacity_manager/) | Limited | There's no correlation between the Postgres cluster and operating system metrics. | -| [Manage alerts](alerts/) | Limited | When you run an alert script on the Postges cluster, it runs on the machine where the bound PEM agent is running and not on the actual Postgres cluster machine. | -| [Manage dashboards](dashboards/) | Limited | Some dashboards might not be able to show complete data. For example, the operating system information where the Postgres cluster is running isn't displayed as it isn't available. 
|
-| [Manage probes](probes/) | Limited | Some of the PEM probes don't return information, and some of the functionality might be affected. For details about probe functionality, see [PEM agent privileges](../managing_pem_agent/#agent-privileges). |
-| [Postgres Expert](../tuning_performance/postgres_expert/) | Limited | The Postgres Expert provides partial information as operating system information isn't available. |
-| [Scheduled tasks](../pem_online_help/04_toc_pem_features/15_pem_scheduled_task_tab/) | Limited | Scheduled tasks work only for Postgres clusters, and scripts run on a remote agent. |
-| [Core usage reports](../reports/#core-usage-report) | Limited | The Core Usage reports don't show complete information. For example, the platform, number of cores, and total RAM aren't displayed. |
-| [Audit manager](audit_manager/) | No | |
-| [Log manager](log_manager/) | No | |
-| [Postgres Log Analysis Expert](log_manager/#postgres-log-analysis-expert) | No | |
-| [Tuning wizard](../tuning_performance/tuning_wizard/) | No | |
\ No newline at end of file
+| Feature Name | Remote monitoring supported? | Comments |
+| ------------ | ---------------------------- | -------- |
+| [Manage charts](charts/) | Yes | |
+| [System reports](../reports/#system-configuration-report) | Yes | |
+| [Capacity Manager](capacity_manager/) | Limited | There's no correlation between the Postgres cluster and operating system metrics. |
+| [Manage alerts](alerts/) | Limited | When you run an alert script on the Postgres cluster, it runs on the machine where the bound PEM agent is running and not on the actual Postgres cluster machine.
| +| [Manage dashboards](dashboards/) | Limited | Some dashboards might not be able to show complete data. For example, the operating system information where the Postgres cluster is running isn't displayed as it isn't available. | +| [Manage probes](probes/) | Limited | Some of the PEM probes don't return information, and some of the functionality might be affected. For details about probe functionality, see [PEM agent privileges](../managing_pem_agent/#agent-privileges). | +| [Postgres Expert](../tuning_performance/postgres_expert/) | Limited | The Postgres Expert provides partial information as operating system information isn't available. | +| [Scheduled tasks](../pem_online_help/04_toc_pem_features/15_pem_scheduled_task_tab/) | Limited | Scheduled tasks work only for Postgres clusters, and scripts run on a remote agent. | +| [Core usage reports](../reports/#core-usage-report) | Limited | The Core Usage reports don't show complete information. For example, the platform, number of cores, and total RAM aren't displayed. 
| +| [Audit manager](audit_manager/) | No | | +| [Log manager](log_manager/) | No | | +| [Postgres Log Analysis Expert](log_manager/#postgres-log-analysis-expert) | No | | +| [Tuning wizard](../tuning_performance/tuning_wizard/) | No | | diff --git a/product_docs/docs/pem/9/monitoring_performance/pem_remote_monitoring.mdx b/product_docs/docs/pem/9/monitoring_performance/pem_remote_monitoring.mdx index b03679e5703..b53900a4d2a 100644 --- a/product_docs/docs/pem/9/monitoring_performance/pem_remote_monitoring.mdx +++ b/product_docs/docs/pem/9/monitoring_performance/pem_remote_monitoring.mdx @@ -11,23 +11,23 @@ To remotely monitor a Postgres cluster with PEM, you must register the cluster w The following scenarios require remote monitoring using PEM: -- Postgres cluster running on AWS RDS -- [Postgres cluster running on BigAnimal](/biganimal/latest/using_cluster/05_monitoring_and_logging/) +- Postgres cluster running on AWS RDS +- [Postgres cluster running on BigAnimal](/edb-postgres-ai/cloud-service/using_cluster/monitoring_and_logging/) PEM remote monitoring supports: -| Feature Name | Remote monitoring supported? | Comments | -| --------------------------------------------------------------------------------------------------------------------------------- | ---------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| [Manage charts](charts/) | Yes | | -| [System reports](../reports/#system-configuration-report) | Yes | | -| [Capacity Manager](capacity_manager/) | Limited | There's no correlation between the Postgres cluster and operating system metrics. | -| [Manage alerts](alerts/) | Limited | When you run an alert script on the Postges cluster, it runs on the machine where the bound PEM agent is running and not on the actual Postgres cluster machine. 
| -| [Manage dashboards](dashboards/) | Limited | Some dashboards might not be able to show complete data. For example, the operating system information where the Postgres cluster is running isn't displayed as it isn't available. | -| [Manage probes](probes/) | Limited | Some of the PEM probes don't return information, and some of the functionality might be affected. For details about probe functionality, see [PEM agent privileges](../managing_pem_agent/setting_agent_privileges). | -| [Postgres Expert](../tuning_performance/postgres_expert/) | Limited | The Postgres Expert provides partial information as operating system information isn't available. | -| [Scheduled tasks](../pem_web_interface/#management-menu) | Limited | Scheduled tasks work only for Postgres clusters, and scripts run on a remote agent. | -| [Core usage reports](../reports/#core-usage-report) | Limited | The Core Usage reports don't show complete information. For example, the platform, number of cores, and total RAM aren't displayed. | -| [Audit manager](audit_manager/) | No | | -| [Log manager](log_manager/) | No | | -| [Postgres Log Analysis Expert](log_manager/#postgres-log-analysis-expert) | No | | -| [Tuning wizard](../tuning_performance/tuning_wizard/) | No | | \ No newline at end of file +| Feature Name | Remote monitoring supported? | Comments | +| ------------------------------------------------------------------------- | ---------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| [Manage charts](charts/) | Yes | | +| [System reports](../reports/#system-configuration-report) | Yes | | +| [Capacity Manager](capacity_manager/) | Limited | There's no correlation between the Postgres cluster and operating system metrics. 
|
+| [Manage alerts](alerts/) | Limited | When you run an alert script on the Postgres cluster, it runs on the machine where the bound PEM agent is running and not on the actual Postgres cluster machine. |
+| [Manage dashboards](dashboards/) | Limited | Some dashboards might not be able to show complete data. For example, the operating system information where the Postgres cluster is running isn't displayed as it isn't available. |
+| [Manage probes](probes/) | Limited | Some of the PEM probes don't return information, and some of the functionality might be affected. For details about probe functionality, see [PEM agent privileges](../managing_pem_agent/setting_agent_privileges). |
+| [Postgres Expert](../tuning_performance/postgres_expert/) | Limited | The Postgres Expert provides partial information as operating system information isn't available. |
+| [Scheduled tasks](../pem_web_interface/#management-menu) | Limited | Scheduled tasks work only for Postgres clusters, and scripts run on a remote agent. |
+| [Core usage reports](../reports/#core-usage-report) | Limited | The Core Usage reports don't show complete information. For example, the platform, number of cores, and total RAM aren't displayed.
| +| [Audit manager](audit_manager/) | No | | +| [Log manager](log_manager/) | No | | +| [Postgres Log Analysis Expert](log_manager/#postgres-log-analysis-expert) | No | | +| [Tuning wizard](../tuning_performance/tuning_wizard/) | No | | diff --git a/product_docs/docs/pgd/4/deployments/index.mdx b/product_docs/docs/pgd/4/deployments/index.mdx index acd775c9cf0..0cf2469c499 100644 --- a/product_docs/docs/pgd/4/deployments/index.mdx +++ b/product_docs/docs/pgd/4/deployments/index.mdx @@ -8,11 +8,10 @@ navigation: You can deploy and install EDB Postgres Distributed products using the following methods: -- TPAexec is an orchestration tool that uses Ansible to build Postgres clusters as specified by TPA (Trusted Postgres Architecture), a set of reference architectures that document how to set up and operate Postgres in various scenarios. TPA represents the best practices followed by EDB, and its recommendations are as applicable to quick testbed setups as to production environments. To deploy PGD using TPA, see the [TPA documentation](tpaexec/installing_tpaexec/). +- TPAexec is an orchestration tool that uses Ansible to build Postgres clusters as specified by TPA (Trusted Postgres Architecture), a set of reference architectures that document how to set up and operate Postgres in various scenarios. TPA represents the best practices followed by EDB, and its recommendations are as applicable to quick testbed setups as to production environments. To deploy PGD using TPA, see the [TPA documentation](tpaexec/installing_tpaexec/). -- Manual installation is also available where TPA is not an option. Details of how to deploy PGD manually are in the [manual installation](/pgd/4/deployments/manually/) section of the documentation. +- Manual installation is also available where TPA is not an option. Details of how to deploy PGD manually are in the [manual installation](/pgd/4/deployments/manually/) section of the documentation. 
-- BigAnimal is a fully managed database-as-a-service with built-in Oracle compatibility, running in your cloud account and operated by the Postgres experts. BigAnimal makes it easy to set up, manage, and scale your databases. The addition of distributed high availability support through EDB Postres Distributed allows single-region or multi-region clusters with one or two data groups. See the [Distributed high availability](/biganimal/latest/overview/02_high_availability/distributed_highavailability/) topic in the [BigAnimal documentation](/biganimal/latest) for more information.
-
-- EDB Postgres Distributed for Kubernetes is a Kubernetes operator is designed, developed, and supported by EDB that covers the full lifecycle of a highly available Postgres database clusters with a multi-master architecture, using BDR replication. It is based on the open source CloudNativePG operator, and provides additional value such as compatibility with Oracle using EDB Postgres Advanced Server and additional supported platforms such as IBM Power and OpenShift.
+- BigAnimal is a fully managed database-as-a-service with built-in Oracle compatibility, running in your cloud account and operated by the Postgres experts. BigAnimal makes it easy to set up, manage, and scale your databases. The addition of distributed high availability support through EDB Postgres Distributed allows single-region or multi-region clusters with one or two data groups. See the [Distributed high availability](/edb-postgres-ai/cloud-service/references/supported_cluster_types/distributed_highavailability/) topic in the [BigAnimal documentation](/biganimal/latest) for more information.
+- EDB Postgres Distributed for Kubernetes is a Kubernetes operator designed, developed, and supported by EDB that covers the full lifecycle of highly available Postgres database clusters with a multi-master architecture, using BDR replication.
It is based on the open source CloudNativePG operator, and provides additional value such as compatibility with Oracle using EDB Postgres Advanced Server and additional supported platforms such as IBM Power and OpenShift. diff --git a/product_docs/docs/pgd/5/deploy-config/deploy-biganimal/index.mdx b/product_docs/docs/pgd/5/deploy-config/deploy-biganimal/index.mdx index 97e4f2cc02a..768f76ad09a 100644 --- a/product_docs/docs/pgd/5/deploy-config/deploy-biganimal/index.mdx +++ b/product_docs/docs/pgd/5/deploy-config/deploy-biganimal/index.mdx @@ -10,9 +10,9 @@ EDB BigAnimal is a fully managed database-as-a-service with built-in Oracle comp This section covers how to work with EDB Postgres Distributed when deployed on BigAnimal. -* [Creating a distributed high-availability cluster](/biganimal/latest/getting_started/creating_a_cluster/creating_a_dha_cluster/) in the BigAnimal documentation works through the steps needed to: - * Prepare your cloud environment for a distributed high-availability cluster. - * Sign in to BigAnimal. - * Create a distributed high-availability cluster, including: - * Creating and configuring a data group. - * Optionally creating and configuring a second data group in a different region. +- [Creating a distributed high-availability cluster](/edb-postgres-ai/cloud-service/getting_started/creating_cluster/creating_a_dha_cluster/) in the BigAnimal documentation works through the steps needed to: + - Prepare your cloud environment for a distributed high-availability cluster. + - Sign in to BigAnimal. + - Create a distributed high-availability cluster, including: + - Creating and configuring a data group. + - Optionally creating and configuring a second data group in a different region. 
diff --git a/product_docs/docs/pgd/5/planning/deployments.mdx b/product_docs/docs/pgd/5/planning/deployments.mdx index 96d00fead81..bc06a32c892 100644 --- a/product_docs/docs/pgd/5/planning/deployments.mdx +++ b/product_docs/docs/pgd/5/planning/deployments.mdx @@ -7,11 +7,11 @@ redirects: You can deploy and install EDB Postgres Distributed products using the following methods: -- [Trusted Postgres Architect](/tpa/latest) (TPA) is an orchestration tool that uses Ansible to build Postgres clusters using a set of reference architectures that document how to set up and operate Postgres in various scenarios. TPA represents the best practices followed by EDB, and its recommendations apply to quick testbed setups just as they do to production environments. TPA's flexibility allows deployments to virtual machines, AWS cloud instances, or Linux host hardware. See [Deploying with TPA](../deploy-config/deploy-tpa/deploying/) for more information. +- [Trusted Postgres Architect](/tpa/latest) (TPA) is an orchestration tool that uses Ansible to build Postgres clusters using a set of reference architectures that document how to set up and operate Postgres in various scenarios. TPA represents the best practices followed by EDB, and its recommendations apply to quick testbed setups just as they do to production environments. TPA's flexibility allows deployments to virtual machines, AWS cloud instances, or Linux host hardware. See [Deploying with TPA](../deploy-config/deploy-tpa/deploying/) for more information. - EDB Postgres AI Cloud Service is a fully managed database-as-a-service with built-in Oracle compatibility that runs in your cloud account or Cloud Service's cloud account where it's operated by EDB's Postgres experts. EDB Postgres AI Cloud Service makes it easy to set up, manage, and scale your databases. The addition of distributed high-availability support powered by EDB Postgres Distributed (PGD) enables single- and multi-region Always-on clusters. 
See [Distributed high availability](/biganimal/latest/overview/02_high_availability/distributed_highavailability/) in the [Cloud Service documentation](/biganimal/latest) for more information. -- [EDB Postgres Distributed for Kubernetes](/postgres_distributed_for_kubernetes/latest/) is a Kubernetes operator designed, developed, and supported by EDB. It covers the full lifecycle of highly available Postgres database clusters with a multi-master architecture, using PGD replication. It's based on the open source CloudNativePG operator and provides additional value, such as compatibility with Oracle using EDB Postgres Advanced Server, Transparent Data Encryption (TDE) using EDB Postgres Extended or Advanced Server, and additional supported platforms including IBM Power and OpenShift. +- [EDB Postgres Distributed for Kubernetes](/postgres_distributed_for_kubernetes/latest/) is a Kubernetes operator designed, developed, and supported by EDB. It covers the full lifecycle of highly available Postgres database clusters with a multi-master architecture, using PGD replication. It's based on the open source CloudNativePG operator and provides additional value, such as compatibility with Oracle using EDB Postgres Advanced Server, Transparent Data Encryption (TDE) using EDB Postgres Extended or Advanced Server, and additional supported platforms including IBM Power and OpenShift. @@ -21,10 +21,10 @@ You can deploy and install EDB Postgres Distributed products using the following | Active-Active support | 2+ regions | 2 regions | 2 regions | | Write/Read routing | Local or global | Local | Local | | Automated failover | AZ or Region | AZ | AZ | -| Major version upgrades | | - | - | -| Subscriber-only nodes
(read replicas) | | - | - |
-| Logical standby nodes | | - | - |
-| PgBouncer | | - | - |
+| Major version upgrades | | - | - |
+| Subscriber-only nodes<br/>(read replicas) | | - | - |
+| Logical standby nodes | | - | - |
+| PgBouncer | | - | - |
| Selective data replication | | | |
| Maintenance windows per region | | | |
-| Target availability | 99.999% SLO | 99.99 SLA (single)
99.995% SLA (multi) | 99.999% SLO |
+| Target availability | 99.999% SLO | 99.99 SLA (single)<br/>99.995% SLA (multi) | 99.999% SLO |

diff --git a/product_docs/docs/pgd/5/quickstart/index.mdx b/product_docs/docs/pgd/5/quickstart/index.mdx
index 00c4d288975..2c5bc84eb6a 100644
--- a/product_docs/docs/pgd/5/quickstart/index.mdx
+++ b/product_docs/docs/pgd/5/quickstart/index.mdx
@@ -22,25 +22,24 @@ EDB Postgres Distributed (PGD) is a multi-master replicating implementation of P

### Other deployment options

-* If you prefer to have a fully managed EDB Postgres Distributed experience, PGD is now available as an option on BigAnimal, EDB's cloud platform for Postgres. See [BigAnimal distributed high-availability clusters](/biganimal/latest/overview/02_high_availability/distributed_highavailability/).
+- If you prefer to have a fully managed EDB Postgres Distributed experience, PGD is now available as an option on BigAnimal, EDB's cloud platform for Postgres. See [BigAnimal distributed high-availability clusters](/edb-postgres-ai/cloud-service/references/supported_cluster_types/distributed_highavailability/).

-* If you prefer to deploy PGD on Kubernetes, you can use the EDB PGD Operator for Kubernetes. See [EDB PGD Operator for Kubernetes](/postgres_distributed_for_kubernetes/latest/quickstart).
+- If you prefer to deploy PGD on Kubernetes, you can use the EDB PGD Operator for Kubernetes. See [EDB PGD Operator for Kubernetes](/postgres_distributed_for_kubernetes/latest/quickstart).

### What's in this quick start

PGD is very configurable. To quickly evaluate and deploy PGD, use this quick start. It'll get you up and running with a fully configured PGD cluster using the same tools that you'll use to deploy to production.
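As an editorial aside on the availability targets quoted in the deployment comparison table above (99.99%, 99.995%, and 99.999%), each percentage maps to a yearly downtime budget. A rough sketch, not part of the patch:

```python
# Sketch: converting an availability target (e.g. the 99.99%, 99.995%, and
# 99.999% figures in the table above) into a yearly downtime budget.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes in a non-leap year

def downtime_minutes_per_year(availability_pct: float) -> float:
    return MINUTES_PER_YEAR * (1 - availability_pct / 100)

for target in (99.99, 99.995, 99.999):
    print(f"{target}% -> {downtime_minutes_per_year(target):.1f} min/year")
# 99.99% allows roughly 52.6 minutes per year; 99.999% roughly 5.3 minutes.
```

This is why the jump from an SLA of 99.99% to an SLO of 99.999% is significant: the permitted downtime shrinks by about a factor of ten.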
This quick start includes: -* A short introduction to Trusted Postgres Architect (TPA) and how it helps you configure, deploy, and manage EDB Postgres Distributed -* A guide to selecting Docker, Linux hosts, or AWS quick starts - * The Docker quick start - * The Linux host quick start - * The AWS quick start -* Connecting applications to your cluster -* Further explorations with your cluster including - * Replication - * Conflicts - * Failover - +- A short introduction to Trusted Postgres Architect (TPA) and how it helps you configure, deploy, and manage EDB Postgres Distributed +- A guide to selecting Docker, Linux hosts, or AWS quick starts + - The Docker quick start + - The Linux host quick start + - The AWS quick start +- Connecting applications to your cluster +- Further explorations with your cluster including + - Replication + - Conflicts + - Failover ## Introducing PGD and TPA @@ -59,31 +58,32 @@ You will then use TPA to provision and deploy the required configuration and sof ## Selecting Docker, Linux hosts, or AWS quick starts Three quick starts are currently available: -* Docker — Provisions, deploys, and hosts the cluster on Docker containers on a single machine. -* Linux hosts — Deploys and hosts the cluster on Linux servers that you already provisioned with an operating system and SSH connectivity. These can be actual physical servers or virtual machines, deployed on-premises or in the cloud. -* AWS — Provisions, deploys, and hosts the cluster on AWS. + +- Docker — Provisions, deploys, and hosts the cluster on Docker containers on a single machine. +- Linux hosts — Deploys and hosts the cluster on Linux servers that you already provisioned with an operating system and SSH connectivity. These can be actual physical servers or virtual machines, deployed on-premises or in the cloud. +- AWS — Provisions, deploys, and hosts the cluster on AWS. 
### Docker quick start

The Docker quick start is ideal for those looking to initially explore PGD and its capabilities. This configuration of PGD isn't suitable for production use but can be valuable for testing the functionality and behavior of PGD clusters. You might also find it useful when familiarizing yourself with PGD commands and APIs to prepare for deploying on cloud, VM, or Linux hosts.

-* [Begin the Docker quick start](quick_start_docker).
+- [Begin the Docker quick start](quick_start_docker).

### Linux host quick start

The Linux hosts quick start is suited if you intend to install PGD on your own hosts, where you have complete control of the hardware and software, or in a private cloud. The overall configuration is similar to the Docker configuration but is more persistent over system restarts and closer to a single-region production deployment of PGD.

-* [Begin the Linux host quick start](quick_start_linux).
+- [Begin the Linux host quick start](quick_start_linux).

### AWS quick start

The AWS quick start is more extensive and deploys the PGD cluster onto EC2 nodes on Amazon's cloud. The cluster's overall configuration is similar to the Docker quick start. However, instead of using Docker containers, it uses t3.micro instances of Amazon EC2 to provide the compute power. The AWS deployment is more persistent and not subject to the limitations of the Docker quick start deployment. However, it requires more initial setup to configure the AWS CLI.

-* [Begin the AWS quick start](quick_start_aws).
+- [Begin the AWS quick start](quick_start_aws).

## Further explorations with your cluster

-* [Connect applications to your PGD cluster](connecting_applications/).
-* [Find out how a PGD cluster stands up to downtime of data nodes or proxies](further_explore_failover/).
-* [Learn about how EDB Postgres Distributed manages conflicting updates](further_explore_conflicts/).
-* [Move beyond the quick starts](next_steps/).
+- [Connect applications to your PGD cluster](connecting_applications/). +- [Find out how a PGD cluster stands up to downtime of data nodes or proxies](further_explore_failover/). +- [Learn about how EDB Postgres Distributed manages conflicting updates](further_explore_conflicts/). +- [Move beyond the quick starts](next_steps/). From 9566486b8db22bb210b6b36f6144b5ad7042bd37 Mon Sep 17 00:00:00 2001 From: Josh Heyer Date: Fri, 19 Jul 2024 06:05:27 +0000 Subject: [PATCH 48/59] more link fixes (by hand, mostly) --- .../creating_cluster/creating_a_cluster.mdx | 6 ++++-- .../cloud-service/getting_started/index.mdx | 4 ++-- .../edb_hosted_cloud_service.mdx | 2 +- .../your_cloud_account/azure_market_setup.mdx | 2 +- .../index.mdx | 2 ++ ...getting_started_with_your_cloud_account.mdx | 2 +- .../preparing_gcp/index.mdx | 2 +- .../managing_your_cluster/managing_cluster.mdx | 4 ++-- .../modifying_your_cluster/index.mdx | 4 ++-- .../upgrading_your_cluster.mdx | 2 +- .../pgd_cli_ba.mdx | 18 ++++++++++-------- .../references/supported_regions/index.mdx | 2 +- .../cloud-service/security/security.mdx | 4 +++- .../02_connecting_from_aws/index.mdx | 2 +- .../connecting_from_azure/index.mdx | 2 +- .../connecting_from_gcp/index.mdx | 2 +- .../managing_postgres_extensions.mdx | 2 +- .../tagging/create_and_manage_tags.mdx | 2 +- .../managing_superset_access.mdx | 2 +- .../using_the_api/access_key/index.mdx | 4 ++-- .../console/estate/agent/install-agent.mdx | 2 +- .../console/quickstart/index.mdx | 2 +- .../console/using/estate/index.mdx | 4 ++-- .../console/using/introduction.mdx | 2 +- .../organizations/identity_provider/index.mdx | 2 +- .../console/using/organizations/index.mdx | 4 ++-- .../using/organizations/machine_users.mdx | 4 ++-- .../console/using/organizations/users.mdx | 4 ++-- .../edb-postgres-ai/console/using/overview.mdx | 6 +++--- .../using/projects/settings/security.mdx | 2 +- .../console/using/projects/users.mdx | 2 +- .../using/projects/viewing_projects.mdx | 6 +++--- 
.../overview/guide-and-getting-started.mdx | 2 +- .../overview/overview-and-concepts.mdx | 2 +- 34 files changed, 61 insertions(+), 53 deletions(-) diff --git a/advocacy_docs/edb-postgres-ai/cloud-service/getting_started/creating_cluster/creating_a_cluster.mdx b/advocacy_docs/edb-postgres-ai/cloud-service/getting_started/creating_cluster/creating_a_cluster.mdx index e5e401dcc07..087a247367c 100644 --- a/advocacy_docs/edb-postgres-ai/cloud-service/getting_started/creating_cluster/creating_a_cluster.mdx +++ b/advocacy_docs/edb-postgres-ai/cloud-service/getting_started/creating_cluster/creating_a_cluster.mdx @@ -5,6 +5,7 @@ deepToC: true redirects: #adding hierarchy to the structure (Creating a Cluster topic nows has a child topic) so created a folder and moved the contents from 03_create_cluster to index.mdx - ../03_create_cluster/ + - /biganimal/latest/getting_started/creating_a_cluster/ --- !!!Note "When using Your Cloud" @@ -249,7 +250,7 @@ Enable **Identity and Access Management (IAM) Authentication** to turn on the ab #### Superuser Access -Enable **Superuser Access** to grant superuser privileges to the edb_admin role. This option is available for single-node and primary/standby high-availability clusters. See [Notes on the edb_admin role](/edb-postgres-ai/cloud-service/using_cluster/postgres_access/database_authentication/#notes-on-the-edb_admin-role). +Enable **Superuser Access** to grant superuser privileges to the edb_admin role. This option is available for single-node and primary/standby high-availability clusters. See [Notes on the edb_admin role](/edb-postgres-ai/cloud-service/using_cluster/postgres_access/admin_roles/#notes-on-the-edb_admin-role). ### Security @@ -259,7 +260,8 @@ Enable **Transparent Data Encryption (TDE)** to use your own encryption key. Thi - To enable and use TDE for a cluster, you must first enable the encryption key and add it at the project level before creating a cluster. 
To add a key, see [Adding a TDE key at project level](../../administering_cluster/projects.mdx/#adding-a-tde-key).

-- To enable and use TDE for a cluster, you must complete the configuration on the platform of your key management provider after creating a cluster. See [Completing the TDE configuration](#completing-the-TDE-configuration) for more information.
+- To enable and use TDE for a cluster, you must complete the configuration on the platform of your key management provider after creating a cluster. See [Completing the TDE configuration](#completing-the-tde-configuration) for more information.

#### Completing the TDE configuration

diff --git a/advocacy_docs/edb-postgres-ai/cloud-service/getting_started/index.mdx b/advocacy_docs/edb-postgres-ai/cloud-service/getting_started/index.mdx
index 0f06c8f8c31..731687d70ef 100644
--- a/advocacy_docs/edb-postgres-ai/cloud-service/getting_started/index.mdx
+++ b/advocacy_docs/edb-postgres-ai/cloud-service/getting_started/index.mdx
@@ -17,7 +17,7 @@ EDB Postgres AI Cloud Service provides hosted and managed Postgres database clus

To get started with EDB Postgres AI Cloud Service databases you will need an EDB Postgres AI account.

-Create an EDB account. For more information, see [Create an EDB account](../../console/#accessing-the-edb-postgres-ai-console). After setting up the account, you can access all of the features and capabilities of the EDB Postgres AI console.
+Create an EDB account. For more information, see [Create an EDB account](../../console/using/introduction/#accessing-the-edb-postgres-ai-console). After setting up the account, you can access all of the features and capabilities of the EDB Postgres AI console.
EDB Postgres AI Cloud Service can host your database for you on AWS, GCP, or Azure managing the infrastructure and billing in one place - this is using EDB Hosted Cloud Service - or you can use your own cloud account to have databases deployed into your own cloud - this is using Your Cloud Account.

@@ -29,7 +29,7 @@ For the rest of this Getting Started section, we will use EDB Hosted Cloud Servi

From the Console, navigate to a project. From within the Project overview, select **Create New** and then select **Database Cluster**.

-This will take you to the Create Cluster page. For more details on the options available here, See the [Creating a cluster](../creating_a_cluster/) section.
+This will take you to the Create Cluster page. For more details on the options available here, see the [Creating a cluster](../getting_started/creating_cluster/) section.

## Using your cluster

diff --git a/advocacy_docs/edb-postgres-ai/cloud-service/getting_started/planning/choosing_your_deployment/edb_hosted_cloud_service.mdx b/advocacy_docs/edb-postgres-ai/cloud-service/getting_started/planning/choosing_your_deployment/edb_hosted_cloud_service.mdx
index 2345f9b321b..8cc4507a880 100644
--- a/advocacy_docs/edb-postgres-ai/cloud-service/getting_started/planning/choosing_your_deployment/edb_hosted_cloud_service.mdx
+++ b/advocacy_docs/edb-postgres-ai/cloud-service/getting_started/planning/choosing_your_deployment/edb_hosted_cloud_service.mdx
@@ -27,7 +27,7 @@ The Hosted option is available for EDB Postgres Advanced Server (EPAS) and EDB P

-You can deploy EDB's Postgres Advanced Server (EPAS) or Postgres Extended Server (PGE) databases as hosted databases, with a range of [deployment options](deployment) for high availability and fault tolerance.
+You can deploy EDB's Postgres Advanced Server (EPAS) or Postgres Extended Server (PGE) databases as hosted databases, with a range of [deployment options](../choosing_cluster_type/) for high availability and fault tolerance.
## Deploying a cluster with EDB Hosted Cloud Service diff --git a/advocacy_docs/edb-postgres-ai/cloud-service/getting_started/your_cloud_account/azure_market_setup.mdx b/advocacy_docs/edb-postgres-ai/cloud-service/getting_started/your_cloud_account/azure_market_setup.mdx index 930edfde2b1..d2583ba0ed8 100644 --- a/advocacy_docs/edb-postgres-ai/cloud-service/getting_started/your_cloud_account/azure_market_setup.mdx +++ b/advocacy_docs/edb-postgres-ai/cloud-service/getting_started/your_cloud_account/azure_market_setup.mdx @@ -55,7 +55,7 @@ You can log in to your Cloud Service account using your Azure AD identity. ### Invite users -You can invite new users by sharing the link to the EDB Postgres AI Console and having them log in with their Microsoft Azure Active Directory account. New users aren't assigned any roles by default. After they log in the first time, you see them in the User Management list and can assign them a role with permissions to Cloud Service. See [Users](/edb-postgres-ai/console/using/organizations/users/#users) for instructions. +You can invite new users by sharing the link to the EDB Postgres AI Console and having them log in with their Microsoft Azure Active Directory account. New users aren't assigned any roles by default. After they log in the first time, you see them in the User Management list and can assign them a role with permissions to Cloud Service. See [Users](/edb-postgres-ai/console/using/organizations/users/) for instructions. !!! 
Note diff --git a/advocacy_docs/edb-postgres-ai/cloud-service/getting_started/your_cloud_account/deploying_using_your_cloud_account/index.mdx b/advocacy_docs/edb-postgres-ai/cloud-service/getting_started/your_cloud_account/deploying_using_your_cloud_account/index.mdx index 70e05b406db..100451b3260 100644 --- a/advocacy_docs/edb-postgres-ai/cloud-service/getting_started/your_cloud_account/deploying_using_your_cloud_account/index.mdx +++ b/advocacy_docs/edb-postgres-ai/cloud-service/getting_started/your_cloud_account/deploying_using_your_cloud_account/index.mdx @@ -5,6 +5,8 @@ navigation: - deploy_aws - deploy_azure - deploy_gcp +redirects: +- /biganimal/latest/planning/deployment_options/ --- You can choose your own cloud account to manage databases on EDB Postgres AI Cloud Service: diff --git a/advocacy_docs/edb-postgres-ai/cloud-service/getting_started/your_cloud_account/getting_started_with_your_cloud_account.mdx b/advocacy_docs/edb-postgres-ai/cloud-service/getting_started/your_cloud_account/getting_started_with_your_cloud_account.mdx index 2a8e72e0bb8..9d43f610e11 100644 --- a/advocacy_docs/edb-postgres-ai/cloud-service/getting_started/your_cloud_account/getting_started_with_your_cloud_account.mdx +++ b/advocacy_docs/edb-postgres-ai/cloud-service/getting_started/your_cloud_account/getting_started_with_your_cloud_account.mdx @@ -39,7 +39,7 @@ Use the following high-level steps to connect an EDB Postgres AI Project to your 6. Activate and manage regions. See [Managing regions](managing_regions/). -7. Create a cluster. When prompted for **Where to deploy**, select **Your Cloud Account**. See [Creating a cluster](../creating_a_cluster/). +7. Create a cluster. When prompted for **Where to deploy**, select **Your Cloud Account**. See [Creating a cluster](../creating_cluster/creating_a_cluster/). 8. Use your cluster. See [Using your cluster](/edb-postgres-ai/cloud-service/using_cluster/). 
diff --git a/advocacy_docs/edb-postgres-ai/cloud-service/getting_started/your_cloud_account/preparing_cloud_account/preparing_gcp/index.mdx b/advocacy_docs/edb-postgres-ai/cloud-service/getting_started/your_cloud_account/preparing_cloud_account/preparing_gcp/index.mdx index b162cdf8c35..5074efeff0e 100644 --- a/advocacy_docs/edb-postgres-ai/cloud-service/getting_started/your_cloud_account/preparing_cloud_account/preparing_gcp/index.mdx +++ b/advocacy_docs/edb-postgres-ai/cloud-service/getting_started/your_cloud_account/preparing_cloud_account/preparing_gcp/index.mdx @@ -17,7 +17,7 @@ Alternatively, you can have an equivalent single role, such as: - roles/owner -BigAnimal requires you to check the readiness of your Google Cloud (GCP) account before you deploy your clusters. (You don't need to perform this check if you're using BigAnimal's cloud account as your [deployment option](/biganimal/latest/planning/deployment_options/). The checks that you perform ensure that your Google Cloud account is prepared to meet your clusters' requirements and resource limits. +BigAnimal requires you to check the readiness of your Google Cloud (GCP) account before you deploy your clusters. (You don't need to perform this check if you're using BigAnimal's cloud account as your [deployment option](/edb-postgres-ai/cloud-service/getting_started/planning/choosing_your_deployment/). The checks that you perform ensure that your Google Cloud account is prepared to meet your clusters' requirements and resource limits. 
## Required APIs and services diff --git a/advocacy_docs/edb-postgres-ai/cloud-service/managing_your_cluster/managing_cluster.mdx b/advocacy_docs/edb-postgres-ai/cloud-service/managing_your_cluster/managing_cluster.mdx index 2a69a8b9b4e..fa54dc0ae4d 100644 --- a/advocacy_docs/edb-postgres-ai/cloud-service/managing_your_cluster/managing_cluster.mdx +++ b/advocacy_docs/edb-postgres-ai/cloud-service/managing_your_cluster/managing_cluster.mdx @@ -17,9 +17,9 @@ While paused, clusters aren't upgraded or patched, but upgrades are applied when After seven days, single-node and high-availability clusters automatically resume. Resuming a cluster applies any pending maintenance upgrades. Monitoring begins again. -With CLI 3.7.0 and later, you can [pause and resume a cluster using the CLI](../getting_started/managing_cluster/#pausing-and-resuming-clusters). +With CLI 3.7.0 and later, you can [pause and resume a cluster using the CLI](/edb-postgres-ai/cloud-service/using_cluster/cli/managing_clusters/#pause-a-cluster). -You can enable in-app inbox or email notifications to get alerted when the paused cluster is or will be reactivated. For more information, see [managing notifications](/edb-postgres-ai/console/using/notifications/#manage-notifications). +You can enable in-app inbox or email notifications to get alerted when the paused cluster is or will be reactivated. For more information, see [managing notifications](/edb-postgres-ai/console/using/notifications/#managing-notifications). 
### Pausing a cluster diff --git a/advocacy_docs/edb-postgres-ai/cloud-service/managing_your_cluster/modifying_your_cluster/index.mdx b/advocacy_docs/edb-postgres-ai/cloud-service/managing_your_cluster/modifying_your_cluster/index.mdx index bf9313e8efb..3c2d1975cca 100644 --- a/advocacy_docs/edb-postgres-ai/cloud-service/managing_your_cluster/modifying_your_cluster/index.mdx +++ b/advocacy_docs/edb-postgres-ai/cloud-service/managing_your_cluster/modifying_your_cluster/index.mdx @@ -21,7 +21,7 @@ You can also modify your cluster by installing Postgres extensions. See [Postgre !!! Note - Any changes made to the cluster's instance type, volume type, or volume properties aren't automatically applied to replica settings. To avoid lag during replication, ensure that replica instance and storage types are at least as large as the source cluster's instance and storage types. See [Modify a faraway replica](/edb-postgres-ai/cloud-service/using_cluster/faraway_replicas/#modify-a-faraway-replica). + Any changes made to the cluster's instance type, volume type, or volume properties aren't automatically applied to replica settings. To avoid lag during replication, ensure that replica instance and storage types are at least as large as the source cluster's instance and storage types. See [Modify a faraway replica](/edb-postgres-ai/cloud-service/using_cluster/faraway_replicas/#modify-a-replica). | Settings | Tab | Notes | | ---------------------------------------------------- | ----------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | @@ -33,7 +33,7 @@ You can also modify your cluster by installing Postgres extensions. 
See [Postgre | Volume properties | **Cluster Settings** | It can take up to six hours to tune IOPS or resize the disks of your cluster because AWS requires a cooldown period after volume modifications, as explained in [Limitations](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/modify-volume-requirements.html). The volume properties are disabled and can't be modified while this is in progress. | | Networking type (public or private) | **Cluster Settings** | If you're using Azure and previously set up a private link and want to change to a public network, you must remove the private link resources before making the change. | | Nodes (for a distributed high-availability cluster) | **Data Groups** | After you create your cluster, you can't change the number of data nodes. | - | Database configuration parameters | **DB Configuration** | If you're using faraway replicas, only a small subset of parameters are editable. These parameters need to be modified in the replica when increased in the replica's source cluster. See [Modify a faraway replica](/edb-postgres-ai/cloud-service/using_cluster/faraway_replicas/#modify-a-faraway-replica) for details. | + | Database configuration parameters | **DB Configuration** | If you're using faraway replicas, only a small subset of parameters are editable. These parameters need to be modified in the replica when increased in the replica's source cluster. See [Modify a faraway replica](/edb-postgres-ai/cloud-service/using_cluster/faraway_replicas/#modify-a-replica) for details. | | Retention period for backups | **Additional Settings** | — | | Custom maintenance window | **Additional Settings** | Set or modify a maintenance window in which maintenance upgrades occur for the cluster. See [Maintenance](/edb-postgres-ai/cloud-service/getting_started/creating_cluster/creating_a_cluster/#maintenance). | | Read-only workloads | **Additional Settings** | Enabling read-only workloads can incur higher cloud infrastructure charges. 
| diff --git a/advocacy_docs/edb-postgres-ai/cloud-service/managing_your_cluster/upgrading_your_cluster.mdx b/advocacy_docs/edb-postgres-ai/cloud-service/managing_your_cluster/upgrading_your_cluster.mdx index 56fc69ad7c8..5e3b25ceb2c 100644 --- a/advocacy_docs/edb-postgres-ai/cloud-service/managing_your_cluster/upgrading_your_cluster.mdx +++ b/advocacy_docs/edb-postgres-ai/cloud-service/managing_your_cluster/upgrading_your_cluster.mdx @@ -24,7 +24,7 @@ Depending on where your older and newer versioned Cloud Service instances are lo To perform a major version upgrade: -1. [Create a Cloud Service instance.](#create-a-biganimal-instance) +1. [Create a Cloud Service instance.](#create-a-cloud-service-instance) 2. [Gather instance information.](#gather-instance-information) 3. [Confirm the Postgres versions before migration.](#confirm-the-postgres-versions-before-migration) 4. [Migrate the database schema.](#migrate-the-database-schema) diff --git a/advocacy_docs/edb-postgres-ai/cloud-service/references/distributed_high_availability/pgd_cli_ba.mdx b/advocacy_docs/edb-postgres-ai/cloud-service/references/distributed_high_availability/pgd_cli_ba.mdx index 995fd8eec55..93b5f53c81a 100644 --- a/advocacy_docs/edb-postgres-ai/cloud-service/references/distributed_high_availability/pgd_cli_ba.mdx +++ b/advocacy_docs/edb-postgres-ai/cloud-service/references/distributed_high_availability/pgd_cli_ba.mdx @@ -28,15 +28,16 @@ sudo yum install edb-pgd5-cli To connect to your distributed high-availability Cloud Service cluster using the PGD CLI, you need to [discover the database connection string](/pgd/latest/cli/discover_connections/). From your Console: -1. Log in to the [Cloud Service clusters](https://portal.biganimal.com/clusters) view. -1. To show only clusters that work with PGD CLI, in the filter, set **Cluster Type** to **Distributed High Availability**. -1. Select your cluster. -1. In the view of your cluster, select the **Connect** tab. -1. 
Copy the read/write URI from the connection info. This is your connection string. +1. Log in to the [Cloud Service clusters](https://portal.biganimal.com/clusters) view. +2. To show only clusters that work with PGD CLI, in the filter, set **Cluster Type** to **Distributed High Availability**. +3. Select your cluster. +4. In the view of your cluster, select the **Connect** tab. +5. Copy the read/write URI from the connection info. This is your connection string. ### Using the PGD CLI with your database connection string -!!! Important +!!!Important + PGD doesn't prompt for interactive passwords. Accordingly, you need a [`.pgpass` file](https://www.postgresql.org/docs/current/libpq-pgpass.html) properly configured to allow access to the cluster. Your Cloud Service cluster's connection information page has all the information needed for the file. Without a properly configured `.pgpass`, you receive a database connection error when using a PGD CLI command, even when using the correct database connection string with the `--dsn` flag. @@ -50,10 +51,11 @@ pgd show-nodes --dsn "" ## PGD commands in Cloud Service -!!! Note +!!!Note + Three EDB Postgres Distributed CLI commands don't work with distributed high-availability Cloud Service clusters: `create-proxy`, `delete-proxy`, and `alter-proxy-option`. These commands are managed by Cloud Service, as Cloud Service runs on Kubernetes. It's a technical best practice to have the Kubernetes operator handle these functions. !!! - + The examples that follow show the most common PGD CLI commands with a Cloud Service cluster. 
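The `.pgpass` requirement described above can be sketched as follows. All values here are placeholders, not real cluster details — substitute the host, database, user, and password shown on your cluster's **Connect** tab:

```shell
# .pgpass format: hostname:port:database:username:password
# (placeholder values -- use your own cluster's connection details)
echo 'p-example.pg.biganimal.io:5432:*:edb_admin:your-password' >> ~/.pgpass

# libpq ignores .pgpass unless it's readable only by its owner
chmod 0600 ~/.pgpass

# With .pgpass in place, PGD CLI commands authenticate without prompting;
# pass the read/write URI from the Connect tab as the DSN:
pgd show-nodes --dsn "postgres://edb_admin@p-example.pg.biganimal.io:5432/bdrdb"
```

Without the `.pgpass` entry, the same command fails with a database connection error even when the DSN itself is correct.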
### `pgd check-health` diff --git a/advocacy_docs/edb-postgres-ai/cloud-service/references/supported_regions/index.mdx b/advocacy_docs/edb-postgres-ai/cloud-service/references/supported_regions/index.mdx index 59914f25673..2aa1397fd4d 100644 --- a/advocacy_docs/edb-postgres-ai/cloud-service/references/supported_regions/index.mdx +++ b/advocacy_docs/edb-postgres-ai/cloud-service/references/supported_regions/index.mdx @@ -3,7 +3,7 @@ title: "Supported regions" deepToC: true --- -Region support varies by whether you're using [your cloud account](#your-cloud-account) or [EDB Hosted Cloud Service account](#biganimals-cloud-account) as your [deployment option](/edb-postgres-ai/cloud-service/getting_started/planning/choosing_your_deployment/). +Region support varies by whether you're using [your cloud account](#your-cloud-account) or [EDB Hosted Cloud Service account](#edb-hosted-cloud-service-account) as your [deployment option](/edb-postgres-ai/cloud-service/getting_started/planning/choosing_your_deployment/). See [Country and geographical region reference](country_reference) for information on geographical region short names and the countries that are in each geographical region. diff --git a/advocacy_docs/edb-postgres-ai/cloud-service/security/security.mdx b/advocacy_docs/edb-postgres-ai/cloud-service/security/security.mdx index edef2f466bf..9c440e4fb29 100644 --- a/advocacy_docs/edb-postgres-ai/cloud-service/security/security.mdx +++ b/advocacy_docs/edb-postgres-ai/cloud-service/security/security.mdx @@ -1,6 +1,8 @@ --- title: "Security" deepToC: true +redirects: +- /biganimal/latest/overview/03_security/ --- Cloud Service runs on EDB's Cloud Service account or Your Cloud Account. Every Cloud Service cluster is logically isolated from other Cloud Service clusters, but the security properties of the system are different in each [deployment option](/edb-postgres-ai/cloud-service/getting_started/planning/choosing_your_deployment/). 
The key security features are: @@ -51,7 +53,7 @@ This overview shows the supported cluster-to-key combinations. To enable TDE: -- Before you create a TDE-enabled cluster, you must [add a TDE key](../../administering_cluster/projects/#adding-a-tde-key). +- Before you create a TDE-enabled cluster, you must [add a TDE key](/edb-postgres-ai/console/using/projects/settings/security/#adding-a-tde-key). - See [Creating a new cluster - Security](/edb-postgres-ai/cloud-service/getting_started/creating_cluster/creating_a_cluster/#security) to enable a TDE key during the cluster creation. diff --git a/advocacy_docs/edb-postgres-ai/cloud-service/using_cluster/connecting_your_cluster/02_connecting_from_aws/index.mdx b/advocacy_docs/edb-postgres-ai/cloud-service/using_cluster/connecting_your_cluster/02_connecting_from_aws/index.mdx index b1f0eaa8a44..00ebe9d26ed 100644 --- a/advocacy_docs/edb-postgres-ai/cloud-service/using_cluster/connecting_your_cluster/02_connecting_from_aws/index.mdx +++ b/advocacy_docs/edb-postgres-ai/cloud-service/using_cluster/connecting_your_cluster/02_connecting_from_aws/index.mdx @@ -14,7 +14,7 @@ The way you create a private endpoint differs when you're using your AWS account ## Using EDB Hosted Cloud Service -When using EDB Hosted Cloud Service, you provide Cloud Service with your AWS account ID when creating a cluster (see [Networking](/edb-postgres-ai/cloud-service/getting_started/creating_cluster/creating_a_cluster/#network-logs--telemetry-section)). Cloud Service, in turn, provides you with an AWS service name, which you can use to connect to your cluster privately. +When using EDB Hosted Cloud Service, you provide Cloud Service with your AWS account ID when creating a cluster (see [Networking](/edb-postgres-ai/cloud-service/getting_started/creating_cluster/creating_a_cluster/#cluster-settings-tab)). Cloud Service, in turn, provides you with an AWS service name, which you can use to connect to your cluster privately. 1. 
When creating your cluster, on the **Cluster Settings** tab, in the **Network** section: diff --git a/advocacy_docs/edb-postgres-ai/cloud-service/using_cluster/connecting_your_cluster/connecting_from_azure/index.mdx b/advocacy_docs/edb-postgres-ai/cloud-service/using_cluster/connecting_your_cluster/connecting_from_azure/index.mdx index 56b0a2411ca..b59badb85ad 100644 --- a/advocacy_docs/edb-postgres-ai/cloud-service/using_cluster/connecting_your_cluster/connecting_from_azure/index.mdx +++ b/advocacy_docs/edb-postgres-ai/cloud-service/using_cluster/connecting_your_cluster/connecting_from_azure/index.mdx @@ -27,7 +27,7 @@ If you set up a private endpoint and want to change to a public network, you mus ### Using EDB Hosted Cloud Service -When using BigAnimal's cloud account, when creating a cluster, you provide BigAnimal with your Azure subscription ID (see [Networking](/edb-postgres-ai/cloud-service/getting_started/creating_cluster/creating_a_cluster/#network-logs--telemetry-section)). BigAnimal, in turn, provides you with a private link alias, which you can use to connect to your cluster privately. +When using BigAnimal's cloud account, when creating a cluster, you provide BigAnimal with your Azure subscription ID (see [Networking](/edb-postgres-ai/cloud-service/getting_started/creating_cluster/creating_a_cluster/#cluster-settings-tab)). BigAnimal, in turn, provides you with a private link alias, which you can use to connect to your cluster privately. 1. 
When creating your cluster, on the **Cluster Settings** tab, in the **Network** section: diff --git a/advocacy_docs/edb-postgres-ai/cloud-service/using_cluster/connecting_your_cluster/connecting_from_gcp/index.mdx b/advocacy_docs/edb-postgres-ai/cloud-service/using_cluster/connecting_your_cluster/connecting_from_gcp/index.mdx index 1f6eac2509b..f33e40f2e59 100644 --- a/advocacy_docs/edb-postgres-ai/cloud-service/using_cluster/connecting_your_cluster/connecting_from_gcp/index.mdx +++ b/advocacy_docs/edb-postgres-ai/cloud-service/using_cluster/connecting_your_cluster/connecting_from_gcp/index.mdx @@ -8,7 +8,7 @@ The way you create a private Google Cloud endpoint differs when you're using you ## Using EDB Hosted Cloud Service -When using EDB Hosted Cloud Service, when creating a cluster, you provide Cloud Service with your Google Cloud project ID (see [Networking](/edb-postgres-ai/cloud-service/getting_started/creating_cluster/creating_a_cluster/#network-logs--telemetry-section)). Cloud Service, in turn, provides you with a Google Cloud service attachment, which you can use to connect to your cluster privately. +When using EDB Hosted Cloud Service, when creating a cluster, you provide Cloud Service with your Google Cloud project ID (see [Networking](/edb-postgres-ai/cloud-service/getting_started/creating_cluster/creating_a_cluster/#cluster-settings-tab)). Cloud Service, in turn, provides you with a Google Cloud service attachment, which you can use to connect to your cluster privately. 1. 
When creating your cluster, on the **Cluster Settings** tab, in the **Network** section: diff --git a/advocacy_docs/edb-postgres-ai/cloud-service/using_cluster/managing_postgres_extensions.mdx b/advocacy_docs/edb-postgres-ai/cloud-service/using_cluster/managing_postgres_extensions.mdx index b972aca0003..99f414df237 100644 --- a/advocacy_docs/edb-postgres-ai/cloud-service/using_cluster/managing_postgres_extensions.mdx +++ b/advocacy_docs/edb-postgres-ai/cloud-service/using_cluster/managing_postgres_extensions.mdx @@ -9,7 +9,7 @@ Cloud Service supports many Postgres extensions. See [Postgres extensions availa ## Extensions available when using your cloud account -Installing many Postgres extensions requires superuser privileges. The table in [Postgres extensions available by deployment](/pg_extensions/) indicates whether an extension requires superuser privileges. If you're using your cloud account, you can grant superuser privileges to edb_admin so that you can install these extensions on your cluster (see [superuser](postgres_access/database_authentication/#superuser)). +Installing many Postgres extensions requires superuser privileges. The table in [Postgres extensions available by deployment](/pg_extensions/) indicates whether an extension requires superuser privileges. If you're using your cloud account, you can grant superuser privileges to edb_admin so that you can install these extensions on your cluster (see [superuser](postgres_access/admin_roles/#superuser)). 
## Extensions available when using EDB Hosted Cloud Service diff --git a/advocacy_docs/edb-postgres-ai/cloud-service/using_cluster/tagging/create_and_manage_tags.mdx b/advocacy_docs/edb-postgres-ai/cloud-service/using_cluster/tagging/create_and_manage_tags.mdx index b18e8af9053..676783927d9 100644 --- a/advocacy_docs/edb-postgres-ai/cloud-service/using_cluster/tagging/create_and_manage_tags.mdx +++ b/advocacy_docs/edb-postgres-ai/cloud-service/using_cluster/tagging/create_and_manage_tags.mdx @@ -63,7 +63,7 @@ To assign a tag to an existing project: ### Create a tag while creating a resource -Create and assign a tag while [creating a project](/biganimal/release/administering_cluster/projects.mdx#creating-a-project) and [creating a cluster](/edb-postgres-ai/cloud-service/getting_started/creating_cluster/creating_a_cluster/#cluster-settings-tab). +Create and assign a tag while [creating a project](/edb-postgres-ai/console/using/projects/managing_projects/#creating-a-new-project) and [creating a cluster](/edb-postgres-ai/cloud-service/getting_started/creating_cluster/creating_a_cluster/#cluster-settings-tab). ## Edit a tag diff --git a/advocacy_docs/edb-postgres-ai/cloud-service/using_cluster/your_cloud_account/managing_superset_access.mdx b/advocacy_docs/edb-postgres-ai/cloud-service/using_cluster/your_cloud_account/managing_superset_access.mdx index 9256af3ba33..ab4e6a2fee3 100644 --- a/advocacy_docs/edb-postgres-ai/cloud-service/using_cluster/your_cloud_account/managing_superset_access.mdx +++ b/advocacy_docs/edb-postgres-ai/cloud-service/using_cluster/your_cloud_account/managing_superset_access.mdx @@ -30,4 +30,4 @@ The Superset roles map to Cloud Service permissions. - Access to Superset is currently limited to the initial default project set up by Cloud Service. The user needs to have a project role for the initial project to access Superset. 
- While Admin users have access to all databases by default, both Alpha and Gamma users need to be given access by way of the Superset sql_lab role on a per-database basis. The sql_lab role grants access to SQL Lab. -To assign Cloud Service user roles, see [Users](/edb-postgres-ai/console/using/organizations/users/#users). +To assign Cloud Service user roles, see [Users](/edb-postgres-ai/console/using/organizations/users/). diff --git a/advocacy_docs/edb-postgres-ai/cloud-service/using_the_api/access_key/index.mdx b/advocacy_docs/edb-postgres-ai/cloud-service/using_the_api/access_key/index.mdx index cb25eca3561..c54b4ac85d4 100644 --- a/advocacy_docs/edb-postgres-ai/cloud-service/using_the_api/access_key/index.mdx +++ b/advocacy_docs/edb-postgres-ai/cloud-service/using_the_api/access_key/index.mdx @@ -4,11 +4,11 @@ title: "Access key for API" An access key provides an authentication process for Cloud Service users to access the Cloud Service API directly, without the `OAuth2` authorization flow. The access key links to only one user. Each access key link is immutable since its creation. Each access key has an expiration time you specify, ranging from 1 to 365 days. -An access key belongs to only one organization. An access key can be created for a [machine user](/edb-postgres-ai/console/using/organizations/users/#machine-users) or for a normal user. A normal user can't use their access key across the organizations. The key is managed by an organization owner for the machine user, whereas the normal user manages their own access key. Once the access key expires, you must create a new one. Also, if you lose the access key, you have to delete it and create a new one. +An access key belongs to only one organization. An access key can be created for a [machine user](/edb-postgres-ai/console/using/organizations/machine_users/) or for a normal user. A normal user can't use their access key across the organizations. 
The key is managed by an organization owner for the machine user, whereas the normal user manages their own access key. Once the access key expires, you must create a new one. Also, if you lose the access key, you have to delete it and create a new one. | User type | Quota - Maximum keys | Keys created and managed | | ------------ | -------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| Machine user | 2 | Two access keys, [created](/edb-postgres-ai/console/using/organizations/users/#add-machine-user) while the organization owner is creating a machine user. Optionally, can be individually added from **Access keys** tab on machine user's details page using **Create New Key** option. | +| Machine user | 2 | Two access keys, [created](/edb-postgres-ai/console/using/organizations/machine_users/#adding-a-machine-user) while the organization owner is creating a machine user. Optionally, can be individually added from **Access keys** tab on machine user's details page using **Create New Key** option. | | Normal user | 1 | One access key, [created](#create-your-personal-access-key) and managed by user from their home page. | An access key can be revoked from a user. Revoking the key doesn't affect the role or any other authentication process of the linked user. 
diff --git a/advocacy_docs/edb-postgres-ai/console/estate/agent/install-agent.mdx b/advocacy_docs/edb-postgres-ai/console/estate/agent/install-agent.mdx index 0bb9078f16f..d21aefce3e7 100644 --- a/advocacy_docs/edb-postgres-ai/console/estate/agent/install-agent.mdx +++ b/advocacy_docs/edb-postgres-ai/console/estate/agent/install-agent.mdx @@ -75,7 +75,7 @@ Create a Beacon configuration directory in your home directory: mkdir ${HOME}/.beacon ``` -Next, configure Beacon Agent by setting the access key (the one you obtained while [Creating a machine user](create-machine-user)) and project ID: +Next, configure Beacon Agent by setting the access key (the one you obtained while [Creating a machine user](create-machine-user/)) and project ID: ``` export BEACON_AGENT_ACCESS_KEY= diff --git a/advocacy_docs/edb-postgres-ai/console/quickstart/index.mdx b/advocacy_docs/edb-postgres-ai/console/quickstart/index.mdx index fa679daf6a8..eb7718d2447 100644 --- a/advocacy_docs/edb-postgres-ai/console/quickstart/index.mdx +++ b/advocacy_docs/edb-postgres-ai/console/quickstart/index.mdx @@ -2,7 +2,7 @@ title: Quickstart navTitle: Quickstart indexCards: simple -description: A speed run through getting onto EBD Postgres AI, creating a Cloud Service database cluster and connecting to it from your local machine. +description: A speed run through getting onto EDB Postgres AI, creating a Cloud Service database cluster and connecting to it from your local machine. 
navigation: - create_account_and_sign_in - the_project_and_clusters_views diff --git a/advocacy_docs/edb-postgres-ai/console/using/estate/index.mdx b/advocacy_docs/edb-postgres-ai/console/using/estate/index.mdx index b1d290dcd2f..a9abfe91fdb 100644 --- a/advocacy_docs/edb-postgres-ai/console/using/estate/index.mdx +++ b/advocacy_docs/edb-postgres-ai/console/using/estate/index.mdx @@ -10,7 +10,7 @@ navigation: - storage-locations --- -The Estate view is your everything view of every resource - clusters, hosted, and managed, self-managed, analytics lakehouses, and managed storage locations - in every project. It cuts through the [Projects](..;/projects) demarcation to give a single unified view of all your resources. +The Estate view is your everything view of every resource - clusters, hosted, and managed, self-managed, analytics lakehouses, and managed storage locations - in every project. It cuts through the [Projects](../projects/) demarcation to give a single unified view of all your resources. Rather than grouped into projects, the Estate overview grouped into types of resources. @@ -34,7 +34,7 @@ Read more about viewing and managing [EDB Postgres AI Clusters](edb-postgres-ai- Using an agent you can include self-managed Postgres clusters installed both on-premises and in the cloud as part of your EDB Estate view by using an agent. The agent collects metrics from an associated cluster and feed it to the EDB Estate. It’s in this pane that the information appears. -The **Configure Agent** button takes you through the steps needed to configure the Estate to receive data from an agent. See the [Agent](../../estate/agent) documentation for more details and in particular [Install Agent](../../../estate/agent/install-agent) on how to install the agent on your platform. +The **Configure Agent** button takes you through the steps needed to configure the Estate to receive data from an agent. 
See the [Agent](../../estate/agent/) documentation for more details and in particular [Install Agent](../../estate/agent/install-agent/) on how to install the agent on your platform. Selecting the **Self Managed Postgres** title takes you to the __Self Managed Postgres__ pane of the full Estate view. diff --git a/advocacy_docs/edb-postgres-ai/console/using/introduction.mdx b/advocacy_docs/edb-postgres-ai/console/using/introduction.mdx index 75fbac00165..a4a3cc8f13b 100644 --- a/advocacy_docs/edb-postgres-ai/console/using/introduction.mdx +++ b/advocacy_docs/edb-postgres-ai/console/using/introduction.mdx @@ -6,7 +6,7 @@ deepToC: true --- !!! Tip -Eager to get started with a database? Jump straight to the [Quickstart](quickstart) to create your first database cluster. +Eager to get started with a database? Jump straight to the [Quickstart](../quickstart/) to create your first database cluster. !!! The EDB Postgres® AI Console is a web-based user interface that provides a single pane of glass for managing and monitoring EDB Postgres AI Database Cloud Service, EDB Postgres AI databases, non-EDB Postgres such as AWS RDS, and any other Postgres installation. The EDB Postgres AI Console provides a unified view of the EDB Postgres AI Database Cloud Service and EDB Postgres AI databases, allowing users to manage and monitor their databases, users, and resources from a single interface. 
diff --git a/advocacy_docs/edb-postgres-ai/console/using/organizations/identity_provider/index.mdx b/advocacy_docs/edb-postgres-ai/console/using/organizations/identity_provider/index.mdx index 6b374a0f4cb..ac00c3bce38 100644 --- a/advocacy_docs/edb-postgres-ai/console/using/organizations/identity_provider/index.mdx +++ b/advocacy_docs/edb-postgres-ai/console/using/organizations/identity_provider/index.mdx @@ -165,7 +165,7 @@ Once you establish the identity provider, you can create a EDB Postgres AI tile You and other users can log in to EDB Postgres AI using your identity provider credentials. -You can rename the default project EDB Postgres AI creates for you. See [Editing a project](/biganimal/release/administering_cluster/projects/#editing-a-project). +You can rename the default project EDB Postgres AI creates for you. See [Editing a project](/edb-postgres-ai/console/using/projects/managing_projects/#renaming-a-project). You can [set up your cloud service provider](/edb-postgres-ai/cloud-service/getting_started/your_cloud_account/connecting_to_your_cloud/) so that you or other users with the correct permissions can create clusters. diff --git a/advocacy_docs/edb-postgres-ai/console/using/organizations/index.mdx b/advocacy_docs/edb-postgres-ai/console/using/organizations/index.mdx index fb7041ddd66..5f6b6879641 100644 --- a/advocacy_docs/edb-postgres-ai/console/using/organizations/index.mdx +++ b/advocacy_docs/edb-postgres-ai/console/using/organizations/index.mdx @@ -22,9 +22,9 @@ Users and Machine Users for an organization are all managed from the **User Mana * [Managing user access](./users) covers the details of managing users, including inviting users to your organization and assigning roles to them. -* [Machine Users](./machine-users) covers the details of creating machine users and assigning roles to them. +* [Machine Users](./machine_users/) covers the details of creating machine users and assigning roles to them. 
-* [Identity Providers](./identity-providers) covers the details of setting up identity providers for your organization. +* [Identity Providers](./identity_provider/) covers the details of setting up identity providers for your organization. diff --git a/advocacy_docs/edb-postgres-ai/console/using/organizations/machine_users.mdx b/advocacy_docs/edb-postgres-ai/console/using/organizations/machine_users.mdx index 9ad202be3a3..772e0a5737c 100644 --- a/advocacy_docs/edb-postgres-ai/console/using/organizations/machine_users.mdx +++ b/advocacy_docs/edb-postgres-ai/console/using/organizations/machine_users.mdx @@ -39,9 +39,9 @@ To add a machine user: 1. Select a value for **Expires in (1-365 Day/s)** field to set the lifetime of the access key. 1. To save the settings and provide the generated access key for the user, select **Add User**. -Copy and this access key and save it in a secure location. The access key is available only when you create it. If you lose your access key, you must delete it and create a new one. For more information, see [Access key](../reference/access_key/). +Copy this access key and save it in a secure location. The access key is available only when you create it. If you lose your access key, you must delete it and create a new one. For more information, see [Access key](/edb-postgres-ai/cloud-service/using_the_api/access_key/). -Assign some organization role or project role to this newly created machine user. For more information, see [users](#users). +Assign an organization role or project role to this newly created machine user. For more information, see [users](users). !!! Note The user management on EDB Postgres AI's UI at project level can assign the project roles to the machine user, but it cannot manage the machine users or their access keys. 
diff --git a/advocacy_docs/edb-postgres-ai/console/using/organizations/users.mdx b/advocacy_docs/edb-postgres-ai/console/using/organizations/users.mdx index 6f5f2368b9d..d4674b2d3d5 100644 --- a/advocacy_docs/edb-postgres-ai/console/using/organizations/users.mdx +++ b/advocacy_docs/edb-postgres-ai/console/using/organizations/users.mdx @@ -24,7 +24,7 @@ To access an EDB Postgres AI organization, each user needs to either have an EDB All EDB Postgres AI organizations have a default identity provider based around EDB accounts. A user can sign up for an EDB account allowing the owner or administrator of an organization to add them to the organization. -You can configure other identity providers for your organization. For more information on how to do that, see [Setting up your identity provider](/identity_provider/). +You can configure other identity providers for your organization. For more information on how to do that, see [Setting up your identity provider](identity_provider/). You can invite people to your organization by selecting **User Management** from the dropdown menu in the top right of your EDB Postgres AI console page. This takes you to the **Users Management** view. In the top right hand side of the display is a **Add New User** button. Select that and in the **Add New User** dialog that appears, you can enter the users email address and assign organization level roles to the user. If you don't select any roles, the user only added to the organization. To work on a project, the user needs assigning to a project-level role later . Click **Send Invite** to send the invitation to the user. @@ -66,4 +66,4 @@ Organization owners can assign users organization-level roles to enable them to 5. Select **Submit**. -See [Adding a user to a project](/edb-postgres-ai/console/using/projects/#adding_a_user_to_a_project) for information on adding users to projects. 
+See [Adding a user to a project](/edb-postgres-ai/console/using/projects/users/#adding-a-user-to-a-project) for information on adding users to projects. diff --git a/advocacy_docs/edb-postgres-ai/console/using/overview.mdx b/advocacy_docs/edb-postgres-ai/console/using/overview.mdx index f7784fffa6e..54f377a9155 100644 --- a/advocacy_docs/edb-postgres-ai/console/using/overview.mdx +++ b/advocacy_docs/edb-postgres-ai/console/using/overview.mdx @@ -31,13 +31,13 @@ A user is a person who has access to EDB Postgres AI. A new user has their own o The user can then receive invitations to become a member of an organization sent by the organization owner or by other users with the appropriate permissions. Upon accepting the invitation, the user becomes a member of the organization, in addition to their existing memberships. Owners and admins can assign roles to them within the organization and within projects. This includes becoming owner or admin of an organization or project. -Read more about [Users](./organizations/users) in organizations and [Users](./projects/users) in projects. +Read more about [Users](./organizations/users) in organizations and [Users](./projects/users/) in projects. ## Machine users Machine users are special users that can drive programmatic access to EDB Postgres AI. They can be created by an organization owner and can be assigned roles within the organization and within projects. Machine users can't invite users, and the only way to authenticate and authorize a machine user is with an access key. -Read more about [Machine Users](./organizations/machine-users) in organizations. +Read more about [Machine Users](./organizations/machine_users/) in organizations. ## Handling notifications @@ -49,5 +49,5 @@ Read more about [Notifications](./notifications). Account activity is a log of all the actions taken by users in your EDB Postgres AI account. The account activity log is a record of all the actions taken by users in your EDB Postgres AI account. 
The log includes actions taken by users in organizations and projects. The account activity log is available to organization owners and project owners. -Read more about [Account Activity](./account-activity). +Read more about [Account Activity](./account_activity/). diff --git a/advocacy_docs/edb-postgres-ai/console/using/projects/settings/security.mdx b/advocacy_docs/edb-postgres-ai/console/using/projects/settings/security.mdx index d2fb44e444f..c83362fc438 100644 --- a/advocacy_docs/edb-postgres-ai/console/using/projects/settings/security.mdx +++ b/advocacy_docs/edb-postgres-ai/console/using/projects/settings/security.mdx @@ -29,7 +29,7 @@ If the key you added was created in a different Google Cloud Platform account th Now, use this TDE key to [create a cluster](/edb-postgres-ai/cloud-service/getting_started/creating_cluster/creating_a_cluster/#security). -For more information about TDE support, see [Transparent Data Encryption](../overview/03_security#your-own-encryption-key---transparent-data-encryption-tde) +For more information about TDE support, see [Transparent Data Encryption](/edb-postgres-ai/cloud-service/security/security/#your-own-encryption-key---transparent-data-encryption-tde) ## Deleting a TDE key diff --git a/advocacy_docs/edb-postgres-ai/console/using/projects/users.mdx b/advocacy_docs/edb-postgres-ai/console/using/projects/users.mdx index 189f42e016b..1cc11369198 100644 --- a/advocacy_docs/edb-postgres-ai/console/using/projects/users.mdx +++ b/advocacy_docs/edb-postgres-ai/console/using/projects/users.mdx @@ -75,7 +75,7 @@ If the user is already a member of the organization, to add the user to the proj 5. Depending on the level of access you want for the user, select the appropriate role. 6. Select **Submit**. -You can enable in-app inbox or email notifications to get alerted when a user is invited to a project. 
For more information, see [Notifications](notifications) +You can enable in-app inbox or email notifications to get alerted when a user is invited to a project. For more information, see [Notifications](../notifications) ## Viewing outstanding invitations diff --git a/advocacy_docs/edb-postgres-ai/console/using/projects/viewing_projects.mdx b/advocacy_docs/edb-postgres-ai/console/using/projects/viewing_projects.mdx index a7e8824e02e..b1d27ee2a9d 100644 --- a/advocacy_docs/edb-postgres-ai/console/using/projects/viewing_projects.mdx +++ b/advocacy_docs/edb-postgres-ai/console/using/projects/viewing_projects.mdx @@ -11,7 +11,7 @@ There are a range of ways to view a project or projects within the EDB Postgres The single pane view of projects shows the top three active projects in your organization. To see more projects, click **View All Projects** (which opens the [Projects tab](#projects-in-the-projects-tab)). -To see the detailed [Project view](#projects-in-the-project-view), select the project name. +To see the detailed [Project view](#a-project-in-the-project-view), select the project name. ![EDB Postgres AI projects](images/spog_projects_view.png) @@ -36,7 +36,7 @@ On the right-hand side of each entry is an ellipsis menu button. Selecting this Clicking on the **Projects** tab in the top bar opens a view of all the projects in your organization. Each project displays its own pane with a summary of the project, including the number of EDB Postgres AI clusters in the project, number of users with roles in the project, and tags associated with the project. -To see the detailed [Project view](#projects-in-the-project-view), select the project name. +To see the detailed [Project view](#a-project-in-the-project-view), select the project name. 
![EDB Postgres AI projects](images/example_project_in_projects.png) @@ -66,7 +66,7 @@ On the left-hand side, a menu offers the following options: | Option | Description | |----------------------------------------------------|--------------------------------------------------------| | [Project Name](#project-name) | View or switch to other Projects | -| [Overview](overview) | View the project overview | +| [Overview](project_overview) | View the project overview | | [Clusters](clusters) | View or edit the clusters in the project | | [Storage Locations](storage_locations) | View or edit the storage locations in the project | | [Regions](regions) | View or edit the regions in the project | diff --git a/advocacy_docs/edb-postgres-ai/overview/guide-and-getting-started.mdx b/advocacy_docs/edb-postgres-ai/overview/guide-and-getting-started.mdx index 02d0572dcd0..d30d3294209 100644 --- a/advocacy_docs/edb-postgres-ai/overview/guide-and-getting-started.mdx +++ b/advocacy_docs/edb-postgres-ai/overview/guide-and-getting-started.mdx @@ -26,4 +26,4 @@ You'll want to look at the [EDB Postgres® AI Platform Agent](/edb-postgres-ai/c ## Do you want to know more about the EDB Postgres AI Cloud Service? -You'll want to look at the [EDB Postgres® AI Cloud Service](/edb-postgres-ai/cloud-service) documentation, which covers the Cloud Service and its databases. +You'll want to look at the [EDB Postgres® AI Cloud Service](/edb-postgres-ai/cloud-service/) documentation, which covers the Cloud Service and its databases. 
diff --git a/advocacy_docs/edb-postgres-ai/overview/overview-and-concepts.mdx b/advocacy_docs/edb-postgres-ai/overview/overview-and-concepts.mdx index b4d2846fa38..a7929ccd4de 100644 --- a/advocacy_docs/edb-postgres-ai/overview/overview-and-concepts.mdx +++ b/advocacy_docs/edb-postgres-ai/overview/overview-and-concepts.mdx @@ -44,7 +44,7 @@ All of these components are available on the EDB Postgres AI Cloud Service, and Filtering out the data noise and revealing insights and value, Lakehouse analytics brings both structured relational data in Postgres and unstructured data in object storage together for exploration. -- **[Lakehouse nodes](/edb-postgres-ai/analytics/concepts)** +- **[Lakehouse nodes](/edb-postgres-ai/analytics/concepts/)** - At the heart of Analytics is custom-built object storage for your data. Built to bring structured and unstructured data together, Lakehouse nodes support numerous formats to bring cold data in, ready for analysis. ## [EDB Postgres AI AI/ML](/edb-postgres-ai/ai-ml) From 408c459db8f12741267d2db0ced3e41d09cec2ee Mon Sep 17 00:00:00 2001 From: Dj Walker-Morgan Date: Fri, 26 Jul 2024 08:40:04 +0100 Subject: [PATCH 49/59] Fixing the connection screens Signed-off-by: Dj Walker-Morgan --- .../connecting_to_the_database_cluster.mdx | 55 ------------------- ...ting_to_the_database_cluster_with_psql.mdx | 49 +++++++++++++++++ ...onnecting_to_the_database_with_pgadmin.mdx | 47 ++++++++++++++++ .../images/qs_connect_credentials.png | 4 +- .../images/qs_overview_quick_connect.png | 3 + .../using/projects/viewing_projects.mdx | 54 +++++++++--------- 6 files changed, 128 insertions(+), 84 deletions(-) delete mode 100644 advocacy_docs/edb-postgres-ai/console/quickstart/connecting_to_the_database_cluster.mdx create mode 100644 advocacy_docs/edb-postgres-ai/console/quickstart/connecting_to_the_database_cluster_with_psql.mdx create mode 100644 advocacy_docs/edb-postgres-ai/console/quickstart/connecting_to_the_database_with_pgadmin.mdx create mode 
100644 advocacy_docs/edb-postgres-ai/console/quickstart/images/qs_overview_quick_connect.png diff --git a/advocacy_docs/edb-postgres-ai/console/quickstart/connecting_to_the_database_cluster.mdx b/advocacy_docs/edb-postgres-ai/console/quickstart/connecting_to_the_database_cluster.mdx deleted file mode 100644 index cfc93b270bc..00000000000 --- a/advocacy_docs/edb-postgres-ai/console/quickstart/connecting_to_the_database_cluster.mdx +++ /dev/null @@ -1,55 +0,0 @@ ---- -title: Connecting to the database cluster -navTitle: Connecting to the cluster -description: How to get the credentials and connect to the database cluster you just created in the EDB Postgres AI Console. ---- - -When the cluster is ready, you'll see it as created in the **Clusters** view. - -
- -![Quickstart Cluster Provisioned](images/qs_cluster_provisioned.png) - -
- -You'll need the cluster credentials to connect to the database. Select the lock icon on the right-hand side of the cluster entry to view the credentials. The lock icon is a short-cut to the **Connect** tab of the full cluster view. - -
- -![Quickstart Connect Credentials](images/qs_connect_credentials.png) - -
- -Assuming that you are using `psql` as your Postgres client as suggested in the [previous step](creating_a_database_cluster.mdx), we will proceed to connect to the database cluster. - -Copy the Read/write URI to your clipboard. This URI is the connection string you'll use to connect to the database. - -Open a terminal on your local system and type `psql "` and paste the URI you copied then type `"` to close the double quoted URI string. It should look something like this: - -!!! tip Double quotes -Why the double quotes around the URI? The URI contains characters that are special to the shell, like `?` and `&`. The double quotes prevent the shell from interpreting these characters as special and instead pass them as part of the URI. -!!! - -Your command line should look like this: - -```bash -psql "postgres://edb_admin@p-n6wkzdbw1d.pg.biganimal.io:5432/edb_admin?sslmode=require" -``` - -Press ENTER. You'll see a password prompt. This is the password you copied earlier. Enter the password and press ENTER. - -``` -psql "postgres://edb_admin@p-n6wkzdbw1d.pg.biganimal.io:5432/edb_admin?sslmode=require" -__OUTPUT__ -Password for user edb_admin: (enter the password you copied earlier) -psql (16.1, server 16.3 (Debian 16.3.0-1.bookworm)) -SSL connection (protocol: TLSv1.3, cipher: TLS_AES_256_GCM_SHA384, compression: off) -Type "help" for help. - -edb_admin=> -``` - - -You are now connected to the database cluster. To be precise, you are now logged in as the `edb_admin` user of the database. Treat this user as a superuser and be careful with what you do. You can create new users and databases, and do anything else you need to do with the database. 
- - diff --git a/advocacy_docs/edb-postgres-ai/console/quickstart/connecting_to_the_database_cluster_with_psql.mdx b/advocacy_docs/edb-postgres-ai/console/quickstart/connecting_to_the_database_cluster_with_psql.mdx new file mode 100644 index 00000000000..0b3504930ee --- /dev/null +++ b/advocacy_docs/edb-postgres-ai/console/quickstart/connecting_to_the_database_cluster_with_psql.mdx @@ -0,0 +1,49 @@ +--- +title: Connecting to the database cluster with psql +navTitle: Connecting with psql +description: How to get the credentials and connect to the database cluster you just created in the EDB Postgres AI Console with psql. +--- + +When the cluster is ready, you'll see it as created in the **Clusters** view. + +
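The double-quotes tip above can be sanity-checked without a live cluster. In this sketch the hostname is a made-up placeholder, not a real cluster; the point is only that the quoted URI reaches the program as a single untouched argument, because the unquoted `?` is a shell glob character and `&` is the shell's job-control operator:

```shell
# Placeholder URI -- substitute your cluster's Read/write URI.
uri='postgres://edb_admin@p-example.pg.biganimal.io:5432/edb_admin?sslmode=require'

# Quoted, the URI survives the shell intact, exactly as psql "..." receives it.
quoted=$(printf '%s' "$uri")

# The pieces psql parses out of the URI:
hostport=${uri#postgres://edb_admin@}   # strip the scheme and user
hostport=${hostport%%/*}                # keep just host:port
```

Unquoted, the trailing `?sslmode=require` could be expanded as a filename pattern by the shell, which is why the command wraps the URI in double quotes.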
+ +![Quickstart Cluster Provisioned](images/qs_cluster_provisioned.png) + +
+ +You'll need the cluster credentials to connect to the database. Select the cluster's name to view the cluster details. On the first tab, **Overview**, you'll see a **Quick Connect** field. + +
+ +![Quickstart Connect Credentials](images/qs_overview_quick_connect.png) + +
+ +This is a complete command for connecting to the database, assuming, that is, that you are using `psql` as your Postgres client as suggested in the [previous step](creating_a_database_cluster.mdx). + +Copy the **Quick Connect** field to your clipboard and paste it into a terminal on your local system. + +Your command line should look like this: + +```bash +psql "postgres://edb_admin@p-n6wkz0pihw.pg.biganimal.io:5432/edb_admin?sslmode=require" +``` + +Press enter. You'll see a password prompt. Enter the password you copied earlier and press enter again. + +``` +psql "postgres://edb_admin@p-n6wkz0pihw.pg.biganimal.io:5432/edb_admin?sslmode=require" +__OUTPUT__ +Password for user edb_admin: (enter the password you copied earlier) +psql (16.1, server 16.3 (Debian 16.3.0-1.bookworm)) +SSL connection (protocol: TLSv1.3, cipher: TLS_AES_256_GCM_SHA384, compression: off) +Type "help" for help. + +edb_admin=> +``` + + +You are now connected to the database cluster. To be precise, you are now logged in as the `edb_admin` user of the database. Treat this user as a superuser and be careful with what you do. You can create new users and databases, and do anything else you need to do with the database. + + diff --git a/advocacy_docs/edb-postgres-ai/console/quickstart/connecting_to_the_database_with_pgadmin.mdx b/advocacy_docs/edb-postgres-ai/console/quickstart/connecting_to_the_database_with_pgadmin.mdx new file mode 100644 index 00000000000..00fdd0f705f --- /dev/null +++ b/advocacy_docs/edb-postgres-ai/console/quickstart/connecting_to_the_database_with_pgadmin.mdx @@ -0,0 +1,47 @@ +--- +title: Connecting to the database cluster with pgAdmin +navTitle: Connecting with pgAdmin +description: How to get the credentials and connect to the database cluster you just created in the EDB Postgres AI Console. +--- + +The psql client is not the only client for Postgres. For example, pgAdmin 4 provides a rich graphical interface for working with Postgres databases. 
In this step, we'll show you how to connect to the database cluster you created in the [previous step](creating_a_database_cluster.mdx) using pgAdmin 4. + + + +Assuming the cluster is ready, you'll see it as created in the **Clusters** view. + +
+ +![Quickstart Cluster Provisioned](images/qs_cluster_provisioned.png) + +
+ +[The pgAdmin project](https://www.pgadmin.org/download/) allows you to inspect, monitor, manage, and query your cluster's databases from a desktop or web UI. Download and install pgAdmin 4 on your local system. Run the application. It opens with a home screen which includes a **Quick Links** section. Click **Add New Server**. + +You'll need the cluster credentials to connect to the database. Back at the Console's overview, select the lock icon on the right-hand side of the cluster entry to view the credentials. The lock icon is a short-cut to the **Connect** tab of the full cluster view. (You can also click the cluster name to view the full cluster details and then click the **Connect** tab.) + +
+ +![Quickstart Connect Credentials](images/qs_connect_credentials.png) + +
+ +With the cluster credentials to hand, you can now connect pgAdmin to the database cluster you created in the [previous step](creating_a_database_cluster.mdx). + +From the welcome page of pgAdmin, select **Add New Server**. You're prompted to configure the connection. + +Enter a name in the **Name** field (or use the name you previously gave to your cluster!), and then select **Connection**. + +1. In the **Host name/address** field, enter your cluster's hostname. This is the value in the Connect tab's **Read/Write Host** field. +1. In the **Maintenance database** field, enter `edb_admin`. This is the value in the Connect tab's **Dbname** field and is the database that pgAdmin initially connects to. +1. In the **Username** field, enter `edb_admin`. This is the value in the Connect tab's **Username** field and is the user that pgAdmin connects with. +1. In the **Password** field, enter the password you provided when configuring your cluster. You may want to save this for convenience while testing. +1. Select the **SSL** tab, and change SSL mode to **Require**. +1. Select **Save**. + + +At this point pgAdmin tries to establish a connection to your database. When successful, it displays the dashboard along with the list of available databases on the left. + +You are now connected to the database cluster. To be precise, you are now logged in as the `edb_admin` user of the `edb_admin` database. Treat this user as a superuser and be careful with what you do. You can create new users and databases, and do anything else you need to do with the database. 
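For reference, the pgAdmin fields above map one-to-one onto standard libpq connection parameters, so the same session could be opened from the command line. A rough sketch of that mapping, using a placeholder hostname rather than a real cluster:

```shell
host='p-example.pg.biganimal.io'  # pgAdmin: Host name/address (Read/Write Host)
dbname='edb_admin'                # pgAdmin: Maintenance database (Dbname)
user='edb_admin'                  # pgAdmin: Username
sslmode='require'                 # pgAdmin: SSL tab -> SSL mode

# Assemble a libpq keyword/value connection string from the same values.
conninfo="host=${host} port=5432 dbname=${dbname} user=${user} sslmode=${sslmode}"
# psql "$conninfo"   # uncomment to open the equivalent session with psql
```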
+ + diff --git a/advocacy_docs/edb-postgres-ai/console/quickstart/images/qs_connect_credentials.png b/advocacy_docs/edb-postgres-ai/console/quickstart/images/qs_connect_credentials.png index e34bf3c0053..f0bbee9dd1c 100644 --- a/advocacy_docs/edb-postgres-ai/console/quickstart/images/qs_connect_credentials.png +++ b/advocacy_docs/edb-postgres-ai/console/quickstart/images/qs_connect_credentials.png @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:2bbc13616bff18f24be86306be5c3c3d7e83bbac22ef5a4ece368d9fb62f4bf7 -size 156320 +oid sha256:d66b57ca6e4bf5e164ca89eb63fa8129ca3c909b96ff4ec64ac833cb6625daa6 +size 155466 diff --git a/advocacy_docs/edb-postgres-ai/console/quickstart/images/qs_overview_quick_connect.png b/advocacy_docs/edb-postgres-ai/console/quickstart/images/qs_overview_quick_connect.png new file mode 100644 index 00000000000..319b87b54c0 --- /dev/null +++ b/advocacy_docs/edb-postgres-ai/console/quickstart/images/qs_overview_quick_connect.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9eb022e5d1e4857cb48b6972f5561c10a27ce2c9516e554eb0bcd0024dcf382b +size 88501 diff --git a/advocacy_docs/edb-postgres-ai/console/using/projects/viewing_projects.mdx b/advocacy_docs/edb-postgres-ai/console/using/projects/viewing_projects.mdx index b1d27ee2a9d..739428d769d 100644 --- a/advocacy_docs/edb-postgres-ai/console/using/projects/viewing_projects.mdx +++ b/advocacy_docs/edb-postgres-ai/console/using/projects/viewing_projects.mdx @@ -19,17 +19,17 @@ In this view, you can see the project name, the number of resources in the proje There is also the option to add tags to a project. On the right-hand side of each entry is an ellipsis menu button. Selecting this button opens a menu with the options available to the current user for that project. 
-| Option | Description | -|----------------------------------------|----------------------------------------------------------------| -| [Clusters](clusters) | View or edit the clusters in the project | -| [Regions](regions) | View or edit the regions in the project | -| [Users](users) | View or edit the users in the project | -| [Cloud Providers](cloud_providers) | View or edit the cloud providers in the project | -| [Usage Report](usage_report) | View details of cluster usage within the project | -| [Activity log](activity_log) | View the activity log for the project | -| [Integrations](settings/integrations) | View or edit the integrations available to the project | -| [Edit](managing_projects#renaming-a-project) | Edit the project details (name and tags) | -| [Delete](managing_projects#confirming-a-project-deletion) | Delete the project | +| Option | Description | +|-----------------------------------------------------------|--------------------------------------------------------| +| [Clusters](clusters) | View or edit the clusters in the project | +| [Regions](regions) | View or edit the regions in the project | +| [Users](users) | View or edit the users in the project | +| [Cloud Providers](cloud_providers) | View or edit the cloud providers in the project | +| [Usage Report](usage_report) | View details of cluster usage within the project | +| [Activity log](activity_log) | View the activity log for the project | +| [Integrations](settings/integrations) | View or edit the integrations available to the project | +| [Edit](managing_projects#renaming-a-project) | Edit the project details (name and tags) | +| [Delete](managing_projects#confirming-a-project-deletion) | Delete the project | ### Projects in the Projects tab @@ -44,16 +44,16 @@ Also shown in a display of active cloud service providers (CSPs) configured with These show as a display of the CSP logos in the project pane. On the right-hand side of each entry is an ellipsis menu button. 
Selecting this button opens a menu with the options available to the current user for that project. -| Option | Description | -|------------------------------------------|--------------------------------------------------------| -| [Clusters](clusters) | View or edit the clusters in the project | -| [Regions](regions) | View or edit the regions in the project | -| [Users](users) | View or edit the users in the project | -| [Cloud Providers](cloud_providers) | View or edit the cloud providers in the project | -| [Usage Report](usage_report) | View details of cluster usage within the project | -| [Activity log](activity_log) | View the activity log for the project | -| [Integrations](settings/integrations) | View or edit the integrations available to the project | -| [Edit](settings/profile) | Edit the project details (name and tags) | +| Option | Description | +|-----------------------------------------------------------|--------------------------------------------------------| +| [Clusters](clusters) | View or edit the clusters in the project | +| [Regions](regions) | View or edit the regions in the project | +| [Users](users) | View or edit the users in the project | +| [Cloud Providers](cloud_providers) | View or edit the cloud providers in the project | +| [Usage Report](usage_report) | View details of cluster usage within the project | +| [Activity log](activity_log) | View the activity log for the project | +| [Integrations](settings/integrations) | View or edit the integrations available to the project | +| [Edit](settings/profile) | Edit the project details (name and tags) | | [Delete](managing_projects#confirming-a-project-deletion) | Delete the project | ### A Project in the Project view @@ -63,10 +63,10 @@ On the left-hand side, a menu offers the following options: ![EDB Postgres AI project view](images/project_view.png) -| Option | Description | 
-|----------------------------------------------------|--------------------------------------------------------| -| [Project Name](#project-name) | View or switch to other Projects | -| [Overview](project_overview) | View the project overview | +| Option | Description | +|---------------------------------------------------|--------------------------------------------------------| +| [Project Name](#project-name) | View or switch to other Projects | +| [Overview](project_overview) | View the project overview | | [Clusters](clusters) | View or edit the clusters in the project | | [Storage Locations](storage_locations) | View or edit the storage locations in the project | | [Regions](regions) | View or edit the regions in the project | @@ -74,11 +74,11 @@ On the left-hand side, a menu offers the following options: | [Cloud Providers](cloud_providers) | View or edit the cloud providers in the project | | [Usage Report](usage_report) | View details of cluster usage within the project | | [Activity log](activity_log) | View the activity log for the project | -| Settings | Displays settings options in the menu | +| Settings | Displays settings options in the menu | |   [Integrations](settings/integrations) | View or edit the integrations available to the project | |   [Profile](settings/profile) | View or edit the profile of the project | |   [Security](settings/security) | View or edit the security settings of the project | -| Migrate | Displays migration options in the menu | +| Migrate | Displays migration options in the menu | |   [Migrations](migrate/migrations) | Allows you to configure a new migration | From b3fa98764589fe7e9645df028d9788aafbb09b9d Mon Sep 17 00:00:00 2001 From: Dj Walker-Morgan Date: Mon, 29 Jul 2024 17:22:19 +0100 Subject: [PATCH 50/59] More exploring section with simpler path Signed-off-by: Dj Walker-Morgan --- ...onnecting_to_the_database_with_dbeaver.mdx | 53 ++++++++++++ ... 
connecting_to_the_database_with_psql.mdx} | 6 ++ .../creating_a_database_cluster.mdx | 4 +- .../quickstart/exploring_the_database.mdx | 81 +++++++++++++++++++ .../console/quickstart/index.mdx | 4 +- 5 files changed, 145 insertions(+), 3 deletions(-) create mode 100644 advocacy_docs/edb-postgres-ai/console/quickstart/connecting_to_the_database_with_dbeaver.mdx rename advocacy_docs/edb-postgres-ai/console/quickstart/{connecting_to_the_database_cluster_with_psql.mdx => connecting_to_the_database_with_psql.mdx} (90%) create mode 100644 advocacy_docs/edb-postgres-ai/console/quickstart/exploring_the_database.mdx diff --git a/advocacy_docs/edb-postgres-ai/console/quickstart/connecting_to_the_database_with_dbeaver.mdx b/advocacy_docs/edb-postgres-ai/console/quickstart/connecting_to_the_database_with_dbeaver.mdx new file mode 100644 index 00000000000..b1792a6ef16 --- /dev/null +++ b/advocacy_docs/edb-postgres-ai/console/quickstart/connecting_to_the_database_with_dbeaver.mdx @@ -0,0 +1,53 @@ +--- +title: Connecting to the database cluster with DBeaver +navTitle: Connecting with DBeaver +description: How to get the credentials and connect to the database cluster you just created in the EDB Postgres AI Console using DBeaver. +--- + +Another popular client for Postgres is [DBeaver](https://dbeaver.io/) which offers its own powerful graphical interface for working with Postgres databases. In this step, we'll show you how to connect to the database cluster you created in the [earlier step](creating_a_database_cluster.mdx) using DBeaver. + +Assuming the cluster is ready, you'll see it as created in the **Clusters** view. + +
+ +![Quickstart Cluster Provisioned](images/qs_cluster_provisioned.png) + +
+ +[DBeaver](https://dbeaver.io/download/) allows you to inspect, monitor, manage, and query your cluster's databases from your desktop. Download DBeaver for your local system and install it. + +You'll need the cluster credentials to connect to the database. Back at the Console's overview, select the lock icon on the right-hand side of the cluster entry to view the credentials. The lock icon is a short-cut to the **Connect** tab of the full cluster view. (You can also click the cluster name to view the full cluster details and then click the **Connect** tab.) + +
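DBeaver talks to Postgres through the PostgreSQL JDBC driver, so the connection fields you fill in are ultimately assembled into a JDBC URL. A sketch of that URL's shape, with a placeholder hostname standing in for your cluster's read/write host:

```shell
host='p-example.pg.biganimal.io'  # placeholder for your cluster's Read/Write Host

# Shape of the URL the JDBC driver receives; sslmode rides along as a query parameter.
jdbc_url="jdbc:postgresql://${host}:5432/edb_admin?sslmode=require"
printf '%s\n' "$jdbc_url"
```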
+ +![Quickstart Connect Credentials](images/qs_connect_credentials.png) + +
+ +1. Launch [DBeaver](https://dbeaver.io/). +1. Open the **Connect to a database** dialog box, by either selecting **New Database Connection** on the toolbar or selecting **Database** > **New Database Connection** on the menu bar. +1. Select **PostgreSQL** and select **Next**. + + !!! tip + You may be asked to download the PostgreSQL JDBC driver, especially if this is your first time using DBeaver. Allow DBeaver to install the driver. + +1. On the **Main** tab: + + - Enter your cluster's read/write host name in the **Host** field. + - Enter `edb_admin` in the **Database** field. + - Select the **Show all databases** checkbox next to the **Database** field. + - Enter `edb_admin` in the **Username** field. + - Enter your cluster's password in the **Password** field. + +1. On the **SSL** tab: + + - Select the **Use SSL** checkbox. + - For the **SSL mode:** field, select **require**. + +1. To verify that DBeaver can connect to your cluster, select **Test connection**. +1. To save the connection, select **Finish**. + + +You are now connected to the database cluster. To be precise, you are now logged in as the `edb_admin` user of the database. Treat this user as a superuser and be careful with what you do. You can create new users and databases, and do anything else you need to do with the database. 
+ + diff --git a/advocacy_docs/edb-postgres-ai/console/quickstart/connecting_to_the_database_cluster_with_psql.mdx b/advocacy_docs/edb-postgres-ai/console/quickstart/connecting_to_the_database_with_psql.mdx similarity index 90% rename from advocacy_docs/edb-postgres-ai/console/quickstart/connecting_to_the_database_cluster_with_psql.mdx rename to advocacy_docs/edb-postgres-ai/console/quickstart/connecting_to_the_database_with_psql.mdx index 0b3504930ee..920e944dcc2 100644 --- a/advocacy_docs/edb-postgres-ai/console/quickstart/connecting_to_the_database_cluster_with_psql.mdx +++ b/advocacy_docs/edb-postgres-ai/console/quickstart/connecting_to_the_database_with_psql.mdx @@ -46,4 +46,10 @@ edb_admin=> You are now connected to the database cluster. To be precise, you are now logged in as the `edb_admin` user of the database. Treat this user as a superuser and be careful with what you do. You can create new users and databases, and do anything else you need to do with the database. +### Next steps + +- [Exploring the database](exploring_the_database) +- [Connecting with pgAdmin](connecting_to_the_database_with_pgadmin) +- [Connecting with DBeaver](connecting_to_the_database_with_dbeaver) + diff --git a/advocacy_docs/edb-postgres-ai/console/quickstart/creating_a_database_cluster.mdx b/advocacy_docs/edb-postgres-ai/console/quickstart/creating_a_database_cluster.mdx index 3ced9686eca..22b04c1e1e0 100644 --- a/advocacy_docs/edb-postgres-ai/console/quickstart/creating_a_database_cluster.mdx +++ b/advocacy_docs/edb-postgres-ai/console/quickstart/creating_a_database_cluster.mdx @@ -66,7 +66,7 @@ brew install libpq On Windows users can download the Postgresql from the [EDB site](https://www.enterprisedb.com/downloads/postgres-postgresql-downloads) and only install the client. -!!! +### Next steps -When the provisioning has completed, you can move on to [connecting to the database cluster](connecting_to_the_database_cluster.mdx). 
+When the provisioning has completed, you can move on to [connecting to the database cluster](connecting_to_the_database_with_psql). diff --git a/advocacy_docs/edb-postgres-ai/console/quickstart/exploring_the_database.mdx b/advocacy_docs/edb-postgres-ai/console/quickstart/exploring_the_database.mdx new file mode 100644 index 00000000000..0e3789e24e2 --- /dev/null +++ b/advocacy_docs/edb-postgres-ai/console/quickstart/exploring_the_database.mdx @@ -0,0 +1,81 @@ +--- +title: Exploring the database +navTitle: Exploring +description: Exploring the database in EDB Postgres AI with psql +--- + +Now that you have created a database cluster and connected to it, you can start exploring the database. In this step, we'll show you how to use `psql` to connect to the database cluster and run some basic queries. + +Assuming that you are using `psql` as your Postgres client, as suggested in the [previous step](connecting_to_the_database_with_psql.mdx), connect to the database cluster. + +Once you have connected, you can run some basic queries. But first you'll want to create a database of your own to work with. + + +### Create a database + +1. As edb_admin, create a new user: + ```sql + CREATE USER explorer WITH PASSWORD 'yourpasswordhere'; + ``` + +1. Create a database: + ```sql + CREATE DATABASE explore; + ``` +1. Grant the new role to the edb_admin user: + ```sql + GRANT explorer TO edb_admin; + ``` + +1. Connect to the database: + ```sql + \c explore + ``` + +### Create a table and data + +Let's create a table of integers and populate it with some random values. + +1. Create a table: + ```sql + CREATE TABLE quicktest ( id SERIAL PRIMARY KEY, value INT ); + ``` + +1. Populate the table: + ```sql + INSERT INTO quicktest (value) SELECT random()*10000 + FROM generate_series(1,10000); + ``` + +### Run some queries + +1. Get a sum of the value column (for checking): + ```sql + select COUNT(*),SUM(value) from quicktest; + ``` +1.
Get the average value: + ```sql + select AVG(value) from quicktest; + ``` +1. Get the maximum value: + ```sql + select MAX(value) from quicktest; + ``` +1. Get the minimum value: + ```sql + select MIN(value) from quicktest; + ``` +1. Get the standard deviation of the values: + ```sql + select STDDEV(value) from quicktest; + ``` +1. Get the ten lowest values in the table: + ```sql + select * from quicktest order by value limit 10; + ``` +1. Get the ten highest values in the table: + ```sql + select * from quicktest order by value desc limit 10; + ``` + + diff --git a/advocacy_docs/edb-postgres-ai/console/quickstart/index.mdx b/advocacy_docs/edb-postgres-ai/console/quickstart/index.mdx index eb7718d2447..f1819e0118c 100644 --- a/advocacy_docs/edb-postgres-ai/console/quickstart/index.mdx +++ b/advocacy_docs/edb-postgres-ai/console/quickstart/index.mdx @@ -7,7 +7,9 @@ navigation: - create_account_and_sign_in - the_project_and_clusters_views - creating_a_database_cluster -- connecting_to_the_database_cluster +- connecting_to_the_database_with_psql +- connecting_to_the_database_with_pgadmin +- connecting_to_the_database_with_dbeaver --- This quickstart guide takes you through the steps to get started with the EDB Postgres AI Console and Cloud Service. You'll learn how to create an account on EDB Postgres AI, then use that account to create a single node database cluster on the Cloud Service. Finally, you'll learn how to connect to your new database cluster. 
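If you want to reset the exploration sandbox when you're done, the cleanup can be sketched as follows. The database and role names match the exploration examples above; the host and password are placeholder assumptions to replace with your cluster's values.

```shell
# Connect to the edb_admin database first — you can't drop a database you're connected to.
# Host and password below are placeholders, not real values.
PGPASSWORD='<your-cluster-password>' \
psql "host=<your-cluster-rw-host> port=5432 dbname=edb_admin user=edb_admin sslmode=require" \
  -c 'DROP DATABASE explore;' \
  -c 'DROP USER explorer;'
```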
From 4a4529043b8ca1ee0ac8ea306209bc1cb10ef33a Mon Sep 17 00:00:00 2001 From: Dj Walker-Morgan Date: Tue, 30 Jul 2024 10:18:04 +0100 Subject: [PATCH 51/59] Review fixups from meeting Signed-off-by: Dj Walker-Morgan --- .../distributed_high_availability/index.mdx | 7 +++++++ .../references/foreign_data_wrappers.mdx | 3 ++- .../cloud-service/references/index.mdx | 7 +++++-- .../cloud-service/references/poolers.mdx | 1 + .../references/supported_cluster_types/index.mdx | 1 + .../references/supported_database_versions.mdx | 1 + .../references/supported_extension_tools.mdx | 1 + .../references/supported_regions/index.mdx | 1 + .../cloud-service/using_cluster/tagging/index.mdx | 2 +- .../using_cluster/your_cloud_account/index.mdx | 5 +++-- .../console/quickstart/exploring_the_database.mdx | 2 +- src/pages/index.js | 12 ++++++------ 12 files changed, 30 insertions(+), 13 deletions(-) create mode 100644 advocacy_docs/edb-postgres-ai/cloud-service/references/distributed_high_availability/index.mdx diff --git a/advocacy_docs/edb-postgres-ai/cloud-service/references/distributed_high_availability/index.mdx b/advocacy_docs/edb-postgres-ai/cloud-service/references/distributed_high_availability/index.mdx new file mode 100644 index 00000000000..0b03d06aed0 --- /dev/null +++ b/advocacy_docs/edb-postgres-ai/cloud-service/references/distributed_high_availability/index.mdx @@ -0,0 +1,7 @@ +--- +title: Distributed High Availability on Cloud Service +navTitle: Distributed High Availability +description: The PGD defaults and commands for Distributed high availability on EDB Postgres AI Cloud Service. +--- + +When running a distributed high-availability cluster on Cloud Service, you can use the [PGD CLI](/pgd/latest/cli/) to manage cluster operations. Examples of these operations include switching over write leaders, performing cluster health checks, and viewing various details about nodes, groups, or other aspects of the cluster. 
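The operations listed above can be sketched as PGD CLI invocations. The command names come from the PGD 5 CLI; the group and node names shown are hypothetical, so check `pgd help` and your own cluster's names before running anything.

```shell
pgd check-health   # cluster-wide health summary
pgd show-nodes     # status and version details for each node
pgd show-groups    # groups and their current write leaders

# Hand the write-leader role to another node (group and node names are hypothetical):
pgd switchover --group-name dc1_subgroup --node-name node-two
```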
diff --git a/advocacy_docs/edb-postgres-ai/cloud-service/references/foreign_data_wrappers.mdx b/advocacy_docs/edb-postgres-ai/cloud-service/references/foreign_data_wrappers.mdx index 67fe035adcc..2c46d5f97b4 100644 --- a/advocacy_docs/edb-postgres-ai/cloud-service/references/foreign_data_wrappers.mdx +++ b/advocacy_docs/edb-postgres-ai/cloud-service/references/foreign_data_wrappers.mdx @@ -1,9 +1,10 @@ --- title: Foreign data wrappers +description: Using foreign data wrappers in EDB Postgres AI Cloud Service --- Cloud Service supports EDB's MongoDB Foreign Data Wrapper and MySQL Foreign Data Wrapper. They allow you to connect your Postgres database server to external data sources. See: - [MongoDB Foreign Data Wrapper](/mongo_data_adapter/latest/) — Accesses data that resides on a MongoDB database from a Postgres database server. -- [MySQL Foreign Data Wrapper](/mysql_data_adapter/latest/) — Accesses data that resides on a MySQL database from a Postgres database server. \ No newline at end of file +- [MySQL Foreign Data Wrapper](/mysql_data_adapter/latest/) — Accesses data that resides on a MySQL database from a Postgres database server. diff --git a/advocacy_docs/edb-postgres-ai/cloud-service/references/index.mdx b/advocacy_docs/edb-postgres-ai/cloud-service/references/index.mdx index a46c4b8bdf7..120060c28cb 100644 --- a/advocacy_docs/edb-postgres-ai/cloud-service/references/index.mdx +++ b/advocacy_docs/edb-postgres-ai/cloud-service/references/index.mdx @@ -1,6 +1,6 @@ --- title: Supported configurations -description: Available cluster types, supported database versions, and the cloud regions. +description: "A reference for all supported configurations of clusters on EDB Postgres AI Cloud Service." 
navigation: - supported_cluster_types - supported_database_versions @@ -9,4 +9,7 @@ navigation: - poolers - foreign_data_wrappers - distributed_high_availability ---- \ No newline at end of file +--- + +This reference guide provides information on the supported configurations for EDB Postgres AI Cloud Service. + diff --git a/advocacy_docs/edb-postgres-ai/cloud-service/references/poolers.mdx b/advocacy_docs/edb-postgres-ai/cloud-service/references/poolers.mdx index 455c0908224..89cc80b03ed 100644 --- a/advocacy_docs/edb-postgres-ai/cloud-service/references/poolers.mdx +++ b/advocacy_docs/edb-postgres-ai/cloud-service/references/poolers.mdx @@ -1,5 +1,6 @@ --- title: EDB PgBouncer +description: EDB PgBouncer availability on Cloud Service. --- EDB PgBouncer can manage your connections to Postgres databases and help your workloads run more efficiently. It's particularly useful if you plan to use more than a few hundred active connections. You can enable EDB PgBouncer to be entirely managed by Cloud Service, when creating your cluster. See [Creating a cluster](/edb-postgres-ai/cloud-service/getting_started/creating_cluster/creating_a_cluster/#pgbouncer). diff --git a/advocacy_docs/edb-postgres-ai/cloud-service/references/supported_cluster_types/index.mdx b/advocacy_docs/edb-postgres-ai/cloud-service/references/supported_cluster_types/index.mdx index 7d5fb81a38b..7d16990e21b 100644 --- a/advocacy_docs/edb-postgres-ai/cloud-service/references/supported_cluster_types/index.mdx +++ b/advocacy_docs/edb-postgres-ai/cloud-service/references/supported_cluster_types/index.mdx @@ -3,6 +3,7 @@ title: "Supported cluster types" deepToC: true redirects: - 02_high_availibility +description: "Cloud Service supports three cluster types and faraway replicas." 
navigation: - single_node - primary_standby_highavailability diff --git a/advocacy_docs/edb-postgres-ai/cloud-service/references/supported_database_versions.mdx b/advocacy_docs/edb-postgres-ai/cloud-service/references/supported_database_versions.mdx index fd93c360182..dc08f26c4d8 100644 --- a/advocacy_docs/edb-postgres-ai/cloud-service/references/supported_database_versions.mdx +++ b/advocacy_docs/edb-postgres-ai/cloud-service/references/supported_database_versions.mdx @@ -1,5 +1,6 @@ --- title: "Database version policy" +description: "How Cloud Service supports the major Postgres versions." --- We support the major Postgres versions from the date they're made available until the version is retired by EDB (generally five years). See [Platform Compatibility ](https://www.enterprisedb.com/platform-compatibility#epas) for more details on support dates for PostgreSQL, EDB Postgres Advanced Server, and EDB Postgres Extended Server. See [End-of-life policy](#end-of-life-policy) for details on our policy for retiring deprecated versions. diff --git a/advocacy_docs/edb-postgres-ai/cloud-service/references/supported_extension_tools.mdx b/advocacy_docs/edb-postgres-ai/cloud-service/references/supported_extension_tools.mdx index 8b1a5c61725..4504f94bfad 100644 --- a/advocacy_docs/edb-postgres-ai/cloud-service/references/supported_extension_tools.mdx +++ b/advocacy_docs/edb-postgres-ai/cloud-service/references/supported_extension_tools.mdx @@ -1,6 +1,7 @@ --- title: Supported Postgres extensions and tools navTitle: Supported extensions and tools +description: Postgres extensions and tools supported by Cloud Service for use with your cluster. --- Cloud Service supports a number of Postgres extensions and tools, which you can install on or alongside your cluster.
diff --git a/advocacy_docs/edb-postgres-ai/cloud-service/references/supported_regions/index.mdx b/advocacy_docs/edb-postgres-ai/cloud-service/references/supported_regions/index.mdx index 2aa1397fd4d..955153d4f4d 100644 --- a/advocacy_docs/edb-postgres-ai/cloud-service/references/supported_regions/index.mdx +++ b/advocacy_docs/edb-postgres-ai/cloud-service/references/supported_regions/index.mdx @@ -1,5 +1,6 @@ --- title: "Supported regions" +description: "CSP regions supported by EDB Postgres AI Cloud Service" deepToC: true --- diff --git a/advocacy_docs/edb-postgres-ai/cloud-service/using_cluster/tagging/index.mdx b/advocacy_docs/edb-postgres-ai/cloud-service/using_cluster/tagging/index.mdx index ae7ee7edf79..82e9703a62e 100644 --- a/advocacy_docs/edb-postgres-ai/cloud-service/using_cluster/tagging/index.mdx +++ b/advocacy_docs/edb-postgres-ai/cloud-service/using_cluster/tagging/index.mdx @@ -1,5 +1,5 @@ --- -title: Tagging Cloud Service resources +title: Tagging resources description: How to tag a resource in EDB Postgres AI Cloud Service. --- diff --git a/advocacy_docs/edb-postgres-ai/cloud-service/using_cluster/your_cloud_account/index.mdx b/advocacy_docs/edb-postgres-ai/cloud-service/using_cluster/your_cloud_account/index.mdx index 8633168bb3a..d2060030ec0 100644 --- a/advocacy_docs/edb-postgres-ai/cloud-service/using_cluster/your_cloud_account/index.mdx +++ b/advocacy_docs/edb-postgres-ai/cloud-service/using_cluster/your_cloud_account/index.mdx @@ -1,9 +1,10 @@ --- title: Your Cloud Account -description: How to work with Your Cloud Account. +description: Using your own cloud account to enable AWS Secrets Manager, Apache Superset, and your own policies on EDB Postgres AI Cloud Service.
navigation: - managing_superset_access - analyze_with_superset - aws_secrets_manager_integration - customizing_compliance ---- \ No newline at end of file +--- + diff --git a/advocacy_docs/edb-postgres-ai/console/quickstart/exploring_the_database.mdx b/advocacy_docs/edb-postgres-ai/console/quickstart/exploring_the_database.mdx index 0e3789e24e2..fd38a2ad7ba 100644 --- a/advocacy_docs/edb-postgres-ai/console/quickstart/exploring_the_database.mdx +++ b/advocacy_docs/edb-postgres-ai/console/quickstart/exploring_the_database.mdx @@ -49,7 +49,7 @@ Let's create a table of integers and populate it with some random values. ### Run some queries -1. Get a sum of the value column (for checking): +1. Get a sum of the value column (and a count of the rows): ```sql select COUNT(*),SUM(value) from quicktest; ``` diff --git a/src/pages/index.js b/src/pages/index.js index ee2eb8c1028..086568b5498 100644 --- a/src/pages/index.js +++ b/src/pages/index.js @@ -199,14 +199,14 @@ const Page = () => { headingText="Cloud Service" to="/edb-postgres-ai/cloud-service" > - - Hosted databases + + Getting started - - Managed databases + + Using your cluster - - Deployment options + + Supported configurations From 52284d221009c067a2f7ab847fd0a06aff8dceb6 Mon Sep 17 00:00:00 2001 From: Josh Heyer Date: Tue, 30 Jul 2024 18:40:31 +0000 Subject: [PATCH 52/59] Fix these same links yet again Because the content was re-introduced with the errors fixed elsewhere Not that I'm tired of staring at 'em or anything --- .../quickstart/connecting_to_the_database_with_dbeaver.mdx | 1 + .../console/quickstart/creating_a_database_cluster.mdx | 2 ++ .../docs/biganimal/release/administering_cluster/projects.mdx | 2 +- .../release/getting_started/creating_a_cluster/index.mdx | 2 +- .../biganimal/release/getting_started/managing_cluster.mdx | 4 ++-- .../docs/biganimal/release/getting_started/overview.mdx | 4 ++-- .../preparing_cloud_account/preparing_gcp/index.mdx | 2 +- 
.../docs/biganimal/release/overview/03_security/index.mdx | 2 +- .../01_connecting_from_azure/index.mdx | 2 +- .../02_connecting_from_aws/index.mdx | 2 +- .../02_connecting_your_cluster/connecting_from_gcp/index.mdx | 2 +- 11 files changed, 14 insertions(+), 11 deletions(-) diff --git a/advocacy_docs/edb-postgres-ai/console/quickstart/connecting_to_the_database_with_dbeaver.mdx b/advocacy_docs/edb-postgres-ai/console/quickstart/connecting_to_the_database_with_dbeaver.mdx index b1792a6ef16..e5c31b8c428 100644 --- a/advocacy_docs/edb-postgres-ai/console/quickstart/connecting_to_the_database_with_dbeaver.mdx +++ b/advocacy_docs/edb-postgres-ai/console/quickstart/connecting_to_the_database_with_dbeaver.mdx @@ -30,6 +30,7 @@ You'll need the cluster credentials to connect to the database. Back at the Cons !!! tip You may be asked to download the PostgreSQL JDBC driver, especially if this is your first time using DBeaver. Allow DBeaver to install the driver. + !!! 1. On the **Main** tab: diff --git a/advocacy_docs/edb-postgres-ai/console/quickstart/creating_a_database_cluster.mdx b/advocacy_docs/edb-postgres-ai/console/quickstart/creating_a_database_cluster.mdx index 22b04c1e1e0..c63f337322b 100644 --- a/advocacy_docs/edb-postgres-ai/console/quickstart/creating_a_database_cluster.mdx +++ b/advocacy_docs/edb-postgres-ai/console/quickstart/creating_a_database_cluster.mdx @@ -66,6 +66,8 @@ brew install libpq On Windows users can download the Postgresql from the [EDB site](https://www.enterprisedb.com/downloads/postgres-postgresql-downloads) and only install the client. +!!! + ### Next steps When the provisioning has completed, you can move on to [connecting to the database cluster](connecting_to_the_database_with_psql). 
diff --git a/product_docs/docs/biganimal/release/administering_cluster/projects.mdx b/product_docs/docs/biganimal/release/administering_cluster/projects.mdx index fb5d528b9fa..3c9c740a8a0 100644 --- a/product_docs/docs/biganimal/release/administering_cluster/projects.mdx +++ b/product_docs/docs/biganimal/release/administering_cluster/projects.mdx @@ -23,7 +23,7 @@ To add a user: 4. Depending on the level of access you want for the user, select the appropriate role. 5. Select **Submit**. -You can enable in-app inbox or email notifications to get alerted when a user is invited to a project. For more information, see [managing notifications](notifications/#manage-notifications). +You can enable in-app inbox or email notifications to get alerted when a user is invited to a project. For more information, see [managing notifications](notifications/#managing-notifications). ## Creating a project diff --git a/product_docs/docs/biganimal/release/getting_started/creating_a_cluster/index.mdx b/product_docs/docs/biganimal/release/getting_started/creating_a_cluster/index.mdx index 28ebf644a96..6564504065d 100644 --- a/product_docs/docs/biganimal/release/getting_started/creating_a_cluster/index.mdx +++ b/product_docs/docs/biganimal/release/getting_started/creating_a_cluster/index.mdx @@ -235,7 +235,7 @@ Enable **Transparent Data Encryption (TDE)** to use your own encryption key. Thi !!!Note "Important" - To enable and use TDE for a cluster, you must first enable the encryption key and add it at the project level before creating a cluster. To add a key, see [Adding a TDE key at project level](../../administering_cluster/projects.mdx/#adding-a-tde-key). -- To enable and use TDE for a cluster, you must complete the configuration on the platform of your key management provider after creating a cluster. See [Completing the TDE configuration](#completing-the-TDE-configuration) for more information. 
+- To enable and use TDE for a cluster, you must complete the configuration on the platform of your key management provider after creating a cluster. See [Completing the TDE configuration](#completing-the-tde-configuration) for more information. !!! #### Completing the TDE configuration diff --git a/product_docs/docs/biganimal/release/getting_started/managing_cluster.mdx b/product_docs/docs/biganimal/release/getting_started/managing_cluster.mdx index 8afdf613c5b..6658a28d44b 100644 --- a/product_docs/docs/biganimal/release/getting_started/managing_cluster.mdx +++ b/product_docs/docs/biganimal/release/getting_started/managing_cluster.mdx @@ -16,9 +16,9 @@ While paused, clusters aren't upgraded or patched, but upgrades are applied when After seven days, single-node and high-availability clusters automatically resume. Resuming a cluster applies any pending maintenance upgrades. Monitoring begins again. -With CLI 3.7.0 and later, you can [pause and resume a cluster using the CLI](../../reference/cli/managing_clusters/#pausing-a-cluster). +With CLI 3.7.0 and later, you can [pause and resume a cluster using the CLI](../reference/cli/managing_clusters/#pause-a-cluster). -You can enable in-app inbox or email notifications to get alerted when the paused cluster is or will be reactivated. For more information, see [managing notifications](../administering_cluster/notifications/#manage-notifications). +You can enable in-app inbox or email notifications to get alerted when the paused cluster is or will be reactivated. For more information, see [managing notifications](../administering_cluster/notifications/#managing-notifications). 
### Pausing a cluster diff --git a/product_docs/docs/biganimal/release/getting_started/overview.mdx b/product_docs/docs/biganimal/release/getting_started/overview.mdx index 20bd1a228d3..086e9394559 100644 --- a/product_docs/docs/biganimal/release/getting_started/overview.mdx +++ b/product_docs/docs/biganimal/release/getting_started/overview.mdx @@ -16,7 +16,7 @@ Use the following high-level steps to set up a BigAnimal account and begin using 1. Create an EDB account. For more information, see [Create an EDB account](../free_trial/detail/create_an_account/). After setting up the account, you can access all of the features and capabilities of the BigAnimal portal. -1. Create a cluster. When prompted for **Where to deploy**, select **BigAnimal**. See [Creating a cluster](../creating_a_cluster/). +1. Create a cluster. When prompted for **Where to deploy**, select **BigAnimal**. See [Creating a cluster](creating_a_cluster/). 1. Use your cluster. See [Using your cluster](../using_cluster/). @@ -73,7 +73,7 @@ Use the following high-level steps to connect BigAnimal to your own cloud accoun 1. Activate and manage regions. See [Managing regions](activating_regions/). -1. Create a cluster. When prompted for **Where to deploy**, select **Your Cloud Account**. See [Creating a cluster](../creating_a_cluster/). +1. Create a cluster. When prompted for **Where to deploy**, select **Your Cloud Account**. See [Creating a cluster](creating_a_cluster/). 1. Use your cluster. See [Using your cluster](../using_cluster/). 
diff --git a/product_docs/docs/biganimal/release/getting_started/preparing_cloud_account/preparing_gcp/index.mdx b/product_docs/docs/biganimal/release/getting_started/preparing_cloud_account/preparing_gcp/index.mdx index 28411b30faf..8f21bdeefb3 100644 --- a/product_docs/docs/biganimal/release/getting_started/preparing_cloud_account/preparing_gcp/index.mdx +++ b/product_docs/docs/biganimal/release/getting_started/preparing_cloud_account/preparing_gcp/index.mdx @@ -14,7 +14,7 @@ Ensure you have at least the following combined roles: Alternatively, you can have an equivalent single role, such as: - roles/owner -BigAnimal requires you to check the readiness of your Google Cloud (GCP) account before you deploy your clusters. (You don't need to perform this check if you're using BigAnimal's cloud account as your [deployment option](../planning/deployment_options). The checks that you perform ensure that your Google Cloud account is prepared to meet your clusters' requirements and resource limits. +BigAnimal requires you to check the readiness of your Google Cloud (GCP) account before you deploy your clusters. (You don't need to perform this check if you're using BigAnimal's cloud account as your [deployment option](/biganimal/release/planning/deployment_options/). The checks that you perform ensure that your Google Cloud account is prepared to meet your clusters' requirements and resource limits. ## Required APIs and services diff --git a/product_docs/docs/biganimal/release/overview/03_security/index.mdx b/product_docs/docs/biganimal/release/overview/03_security/index.mdx index d90580eb892..5aefd2e212f 100644 --- a/product_docs/docs/biganimal/release/overview/03_security/index.mdx +++ b/product_docs/docs/biganimal/release/overview/03_security/index.mdx @@ -51,7 +51,7 @@ This overview shows the supported cluster-to-key combinations. To enable TDE: -- Before you create a TDE-enabled cluster, you must [add a TDE key](../../administering_cluster/projects##adding-a-tde-key). 
+- Before you create a TDE-enabled cluster, you must [add a TDE key](../../administering_cluster/projects/#adding-a-tde-key). - See [Creating a new cluster - Security](../../getting_started/creating_a_cluster#security) to enable a TDE key during the cluster creation. diff --git a/product_docs/docs/biganimal/release/using_cluster/02_connecting_your_cluster/01_connecting_from_azure/index.mdx b/product_docs/docs/biganimal/release/using_cluster/02_connecting_your_cluster/01_connecting_from_azure/index.mdx index 2ba3ae950e1..6eed8a25327 100644 --- a/product_docs/docs/biganimal/release/using_cluster/02_connecting_your_cluster/01_connecting_from_azure/index.mdx +++ b/product_docs/docs/biganimal/release/using_cluster/02_connecting_your_cluster/01_connecting_from_azure/index.mdx @@ -24,7 +24,7 @@ If you set up a private endpoint and want to change to a public network, you mus ### Using BigAnimal's cloud account -When using BigAnimal's cloud account, when creating a cluster, you provide BigAnimal with your Azure subscription ID (see [Networking](/biganimal/latest/getting_started/creating_a_cluster/#network-logs--telemetry-section)). BigAnimal, in turn, provides you with a private link alias, which you can use to connect to your cluster privately. +When using BigAnimal's cloud account, when creating a cluster, you provide BigAnimal with your Azure subscription ID (see [Networking](/biganimal/latest/getting_started/creating_a_cluster/#cluster-settings-tab)). BigAnimal, in turn, provides you with a private link alias, which you can use to connect to your cluster privately. 1. When creating your cluster, on the **Cluster Settings** tab, in the **Network** section: 1. Select **Private**. 
diff --git a/product_docs/docs/biganimal/release/using_cluster/02_connecting_your_cluster/02_connecting_from_aws/index.mdx b/product_docs/docs/biganimal/release/using_cluster/02_connecting_your_cluster/02_connecting_from_aws/index.mdx index 0d6c7864b44..024f9e7ab7a 100644 --- a/product_docs/docs/biganimal/release/using_cluster/02_connecting_your_cluster/02_connecting_from_aws/index.mdx +++ b/product_docs/docs/biganimal/release/using_cluster/02_connecting_your_cluster/02_connecting_from_aws/index.mdx @@ -15,7 +15,7 @@ The way you create a private endpoint differs when you're using your AWS account ## Using BigAnimal's cloud account -When using BigAnimal's cloud account, you provide BigAnimal with your AWS account ID when creating a cluster (see [Networking](/biganimal/latest/getting_started/creating_a_cluster/#network-logs--telemetry-section)). BigAnimal, in turn, provides you with an AWS service name, which you can use to connect to your cluster privately. +When using BigAnimal's cloud account, you provide BigAnimal with your AWS account ID when creating a cluster (see [Networking](/biganimal/latest/getting_started/creating_a_cluster/#cluster-settings-tab)). BigAnimal, in turn, provides you with an AWS service name, which you can use to connect to your cluster privately. 1. When creating your cluster, on the **Cluster Settings** tab, in the **Network** section: 1. Select **Private**. 
diff --git a/product_docs/docs/biganimal/release/using_cluster/02_connecting_your_cluster/connecting_from_gcp/index.mdx b/product_docs/docs/biganimal/release/using_cluster/02_connecting_your_cluster/connecting_from_gcp/index.mdx index 14db3add6a8..1bd5659759e 100644 --- a/product_docs/docs/biganimal/release/using_cluster/02_connecting_your_cluster/connecting_from_gcp/index.mdx +++ b/product_docs/docs/biganimal/release/using_cluster/02_connecting_your_cluster/connecting_from_gcp/index.mdx @@ -6,7 +6,7 @@ navTitle: From Google Cloud The way you create a private Google Cloud endpoint differs when you're using your Google Cloud account versus using BigAnimal's cloud account. ## Using BigAnimal's cloud account -When using BigAnimal's cloud account, when creating a cluster, you provide BigAnimal with your Google Cloud project ID (see [Networking](/biganimal/latest/getting_started/creating_a_cluster/#network-logs--telemetry-section)). BigAnimal, in turn, provides you with a Google Cloud service attachment, which you can use to connect to your cluster privately. +When using BigAnimal's cloud account, when creating a cluster, you provide BigAnimal with your Google Cloud project ID (see [Networking](/biganimal/latest/getting_started/creating_a_cluster/#cluster-settings-tab)). BigAnimal, in turn, provides you with a Google Cloud service attachment, which you can use to connect to your cluster privately. 1. When creating your cluster, on the **Cluster Settings** tab, in the **Network** section: 1. Select **Private**. 
From 10ba60916aef6350b0145e6b93fa30f4b50ff002 Mon Sep 17 00:00:00 2001 From: Dj Walker-Morgan Date: Fri, 2 Aug 2024 15:01:37 +0100 Subject: [PATCH 53/59] Sync with UPM-31294 Signed-off-by: Dj Walker-Morgan --- .../your_cloud_account/azure_market_setup.mdx | 62 ------------------- .../connecting_azure.mdx | 2 +- .../fault_injection_testing/index.mdx | 4 +- 3 files changed, 3 insertions(+), 65 deletions(-) delete mode 100644 advocacy_docs/edb-postgres-ai/cloud-service/getting_started/your_cloud_account/azure_market_setup.mdx diff --git a/advocacy_docs/edb-postgres-ai/cloud-service/getting_started/your_cloud_account/azure_market_setup.mdx b/advocacy_docs/edb-postgres-ai/cloud-service/getting_started/your_cloud_account/azure_market_setup.mdx deleted file mode 100644 index d2583ba0ed8..00000000000 --- a/advocacy_docs/edb-postgres-ai/cloud-service/getting_started/your_cloud_account/azure_market_setup.mdx +++ /dev/null @@ -1,62 +0,0 @@ ---- -title: "Setting up your Azure Marketplace account" -description: "How to set up, configure, and invite users to Cloud Service after purchasing from Azure Marketplace" -redirects: -- /biganimal/latest/getting_started/02_connecting_to_your_cloud/02_azure_market_setup/ ---- - -Connect your cloud account with your Azure subscription. - -Before starting, in Azure Active Directory, ensure your user type is Member (not Guest). - -## 1. Select the EDB offer in the Azure portal. - -1. Sign in to the [Azure portal](https://portal.azure.com/) and go to Azure **Marketplace**. - -2. Find an offer from **EnterpriseDB Corporation** and select it. - -3. From the **Select Plan** list, select an available plan. - -4. Select **Set up + subscribe**. - -## 2. Fill out the details for your plan. - -1. In the **Project details** section, enter or create a resource group for your subscription. 
See [What is a resource group](https://docs.microsoft.com/en-us/azure/azure-resource-manager/management/manage-resource-groups-portal#what-is-a-resource-group) for more information. - -2. In the **SaaS details** section, enter the SaaS subscription name. - -3. Select **Review + subscribe**. - -## 3. Accept terms of use. - -1. Review the terms of use provided by EDB. - -2. Select **Subscribe**. - -## 4. Configure your account. - -1. To configure Cloud Service to use your Azure subscription and your Azure AD Application, select **Configure account now**. - -2. Fill in the **Your Cloud Service Organization Name** parameter with the SaaS Subscription Name you assigned as your Cloud Service Organization. - -3. Select **Submit**. - -## What's next - -You can now: - -- [Log in to Cloud Service](#log-in) -- [Invite new users](#invite-users) -- [Set up your cloud service provider](connecting_to_your_cloud/) - -### Log in - -You can log in to your Cloud Service account using your Azure AD identity. - -### Invite users - -You can invite new users by sharing the link to the EDB Postgres AI Console and having them log in with their Microsoft Azure Active Directory account. New users aren't assigned any roles by default. After they log in the first time, you see them in the User Management list and can assign them a role with permissions to Cloud Service. See [Users](/edb-postgres-ai/console/using/organizations/users/) for instructions. - -!!! Note - - Azure AD email domain is likely different from the email domain regularly used by your organization. 
diff --git a/advocacy_docs/edb-postgres-ai/cloud-service/getting_started/your_cloud_account/connecting_to_your_cloud/connecting_azure.mdx b/advocacy_docs/edb-postgres-ai/cloud-service/getting_started/your_cloud_account/connecting_to_your_cloud/connecting_azure.mdx index 331fcd363c8..9d8990c686c 100644 --- a/advocacy_docs/edb-postgres-ai/cloud-service/getting_started/your_cloud_account/connecting_to_your_cloud/connecting_azure.mdx +++ b/advocacy_docs/edb-postgres-ai/cloud-service/getting_started/your_cloud_account/connecting_to_your_cloud/connecting_azure.mdx @@ -59,4 +59,4 @@ To connect your cloud: biganimal cloud-provider connect --provider azure --project ``` -After your cloud account is successfully connected to Cloud Service, you and other users with the correct permissions can create clusters. +After your cloud account is successfully connected to Cloud Service, you, and other users with the correct permissions, can create clusters. diff --git a/advocacy_docs/edb-postgres-ai/cloud-service/using_cluster/fault_injection_testing/index.mdx b/advocacy_docs/edb-postgres-ai/cloud-service/using_cluster/fault_injection_testing/index.mdx index 04a5fbab3ff..a62a8b09174 100644 --- a/advocacy_docs/edb-postgres-ai/cloud-service/using_cluster/fault_injection_testing/index.mdx +++ b/advocacy_docs/edb-postgres-ai/cloud-service/using_cluster/fault_injection_testing/index.mdx @@ -12,9 +12,9 @@ the availability and recovery of the cluster. Before using fault injection testing, ensure you meet the following requirements: -- You've connected your Cloud Service account with your Azure subscription. See [Setting up your Azure Marketplace account](/edb-postgres-ai/cloud-service/getting_started/your_cloud_account/azure_market_setup/) for more information. +- You've connected your Cloud Service account with your Azure subscription.
See [Connecting to your Azure cloud](/edb-postgres-ai/cloud-service/getting_started/your_cloud_account/connecting_to_your_cloud/connecting_azure/) for more information. - You have permissions in your Azure subscription to view and delete VMs and also the ability to view Kubernetes pods via Azure Kubernetes Service RBAC Reader. -- You have PGD CLI installed. See [Installing PGD CLI](/pgd/latest/cli/installing/) for more information. +- You have PGD CLI installed. See [Installing PGD CLI](/pgd/latest/cli/installing/) for more information. - You've created a `pgd-cli-config.yml` file in your home directory. See [Configuring PGD CLI](/pgd/latest/cli/configuring_cli/) for more information. ## Fault injection testing steps From 230b4fd147ce5eadff104ecde914754328f2e013 Mon Sep 17 00:00:00 2001 From: nidhibhammar <59045594+nidhibhammar@users.noreply.github.com> Date: Mon, 5 Aug 2024 14:15:25 +0530 Subject: [PATCH 54/59] Update advocacy_docs/edb-postgres-ai/cloud-service/support_services/index.mdx Co-authored-by: dbwagoner <143614338+dbwagoner@users.noreply.github.com> --- .../edb-postgres-ai/cloud-service/support_services/index.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/advocacy_docs/edb-postgres-ai/cloud-service/support_services/index.mdx b/advocacy_docs/edb-postgres-ai/cloud-service/support_services/index.mdx index 6c16d5edea2..f763781db49 100644 --- a/advocacy_docs/edb-postgres-ai/cloud-service/support_services/index.mdx +++ b/advocacy_docs/edb-postgres-ai/cloud-service/support_services/index.mdx @@ -1,6 +1,6 @@ --- title: Support services -description: How to create a support case the support portal of Cloud Service. +description: How to create a support case in the support portal of Cloud Service. 
redirects: - ../06_support --- From 8418b6f5249aea6dae3c5f09719c4ddb46314599 Mon Sep 17 00:00:00 2001 From: Dj Walker-Morgan <126472455+djw-m@users.noreply.github.com> Date: Mon, 5 Aug 2024 09:47:26 +0100 Subject: [PATCH 55/59] Update advocacy_docs/edb-postgres-ai/console/using/index.mdx --- advocacy_docs/edb-postgres-ai/console/using/index.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/advocacy_docs/edb-postgres-ai/console/using/index.mdx b/advocacy_docs/edb-postgres-ai/console/using/index.mdx index 9aa23882b15..854b7ab3a85 100644 --- a/advocacy_docs/edb-postgres-ai/console/using/index.mdx +++ b/advocacy_docs/edb-postgres-ai/console/using/index.mdx @@ -1,7 +1,7 @@ --- title: Using the console indexCards: simple -description: Using the EDB Postgres AI console to from creating clusters and managing users to connecting cloud services and enabling encryption +description: Using the EDB Postgres AI console for everything from creating clusters and managing users to connecting cloud services and enabling encryption.
navigation: - introduction - overview From d7026fb32aa80a4b76da9cf19492b1f12ffb937c Mon Sep 17 00:00:00 2001 From: Dj Walker-Morgan Date: Tue, 6 Aug 2024 11:45:25 +0100 Subject: [PATCH 56/59] Bad Link fixes - removed duplicated step from deploy_azure.mdx Signed-off-by: Dj Walker-Morgan --- .../deploying_using_your_cloud_account/deploy_azure.mdx | 1 - .../using/organizations/identity_provider/index.mdx | 7 ------- 2 files changed, 8 deletions(-) diff --git a/advocacy_docs/edb-postgres-ai/cloud-service/getting_started/your_cloud_account/deploying_using_your_cloud_account/deploy_azure.mdx b/advocacy_docs/edb-postgres-ai/cloud-service/getting_started/your_cloud_account/deploying_using_your_cloud_account/deploy_azure.mdx index 41ebe2dec07..2ebc447ff25 100644 --- a/advocacy_docs/edb-postgres-ai/cloud-service/getting_started/your_cloud_account/deploying_using_your_cloud_account/deploy_azure.mdx +++ b/advocacy_docs/edb-postgres-ai/cloud-service/getting_started/your_cloud_account/deploying_using_your_cloud_account/deploy_azure.mdx @@ -6,7 +6,6 @@ description: Choose Azure to manage databases on EDB Postgres AI Cloud Service. To use Azure as your cloud account: - Sign in for the first time with your EDB account, and then either use the EDB Postgres AI Console as your identity provider or [set up your own provider](/edb-postgres-ai/console/using/organizations/identity_provider/) afterward. -- [Connect your Azure Marketplace account](/edb-postgres-ai/cloud-service/getting_started/your_cloud_account/azure_market_setup/) to Cloud Service. - Check the readiness of [your Azure subscription](/edb-postgres-ai/cloud-service/getting_started/your_cloud_account/preparing_cloud_account/preparing_azure/) before deploying. - [Connect your Azure cloud](/edb-postgres-ai/cloud-service/getting_started/your_cloud_account/connecting_to_your_cloud/connecting_azure/) to Cloud Service. 
- [Connect to Cloud Service](/edb-postgres-ai/cloud-service/using_cluster/connecting_your_cluster/connecting_from_azure/) from your application's virtual network in Azure. diff --git a/advocacy_docs/edb-postgres-ai/console/using/organizations/identity_provider/index.mdx b/advocacy_docs/edb-postgres-ai/console/using/organizations/identity_provider/index.mdx index ac00c3bce38..02d5738e183 100644 --- a/advocacy_docs/edb-postgres-ai/console/using/organizations/identity_provider/index.mdx +++ b/advocacy_docs/edb-postgres-ai/console/using/organizations/identity_provider/index.mdx @@ -6,13 +6,6 @@ description: Describes identity provider setup options After signing in for the first time with your EDB account, you can either use the EDB Postgres AI platform as your identity provider or set up your own. -!!!Note - -If you purchased through Azure Marketplace, EDB Postgres AI authenticates users using Azure Active Directory (AD) and you don't have to complete these steps. Azure AD is linked during subscription. Also, you can still invite users that have an EDB account through the EDB Postgres AI portal. - -See [Setting up your Azure Marketplace account](/edb-postgres-ai/cloud-service/getting_started/your_cloud_account/azure_market_setup/). -!!! - When using your own identity provider, you add users to EDB Postgres AI by adding them to the designated group in your identity provider. Once you've logged into EDB Postgres AI using your own identity provider, you can set up your cloud service provider in the EDB Postgres AI portal to complete onboarding. If you're using the EDB Postgres AI platform as your identity provider, you can also invite users that have an EDB account by selecting **Invite New User** on the Users page. After providing their EDB account email and their role, you can send them an invitation link. Ensure the user accepts the invitation within 48 hours, or you'll have to send a new invitation. 
From f2b65f270d4854806d6b74e9c40baf98d57f9f24 Mon Sep 17 00:00:00 2001 From: Dj Walker-Morgan Date: Tue, 6 Aug 2024 12:23:11 +0100 Subject: [PATCH 57/59] Added csp_tagging to rebrand content, fixed link in old BA content. Signed-off-by: Dj Walker-Morgan --- .../your_cloud_account/csp_tagging.mdx | 126 ++++++++++++++++++ .../your_cloud_account/index.mdx | 3 +- .../managing_cloud_account/csp_tagging.mdx | 4 +- 3 files changed, 129 insertions(+), 4 deletions(-) create mode 100644 advocacy_docs/edb-postgres-ai/cloud-service/using_cluster/your_cloud_account/csp_tagging.mdx diff --git a/advocacy_docs/edb-postgres-ai/cloud-service/using_cluster/your_cloud_account/csp_tagging.mdx b/advocacy_docs/edb-postgres-ai/cloud-service/using_cluster/your_cloud_account/csp_tagging.mdx new file mode 100644 index 00000000000..7d4ed39df70 --- /dev/null +++ b/advocacy_docs/edb-postgres-ai/cloud-service/using_cluster/your_cloud_account/csp_tagging.mdx @@ -0,0 +1,126 @@ +--- +title: "Tagging AWS resources" +deepToC: true +description: How to tag your AWS resources using EDB Postgres AI Cloud Service to enable better audit policy and cost control. +redirects: + - /purl/cloudservice/csp_tagging +--- + +Tags are key-value pairs you can apply to cloud provider resources created on your own AWS cloud account. Resource tagging allows you to have fine-grained audit policy and cost control over your resources. When you add a tag, the following resources are labeled: + +- EC2 instances +- S3 buckets +- Load balancers +- Volumes + +You can create and manage tags on a per-project basis in the EDB Postgres® AI Console. The Console then applies the tag to the resources of all regions configured in your account. + +## Prerequisite + +You have the **Owner** role for the **Project** for which you want to create and manage tags. + +## Create Tags + +To create a tag: + +1. Log in to the Console. + +1. Go to the project's page. + +1. In the project's list, select your **Project**. + +1. 
On your project's home page left navigation, select **Cloud Providers**. + +1. On the cloud provider's page, select **Your Cloud Account**. + +1. On your cloud account tab, under **AWS**, select **Manage Tags**. + +1. Select **Add a Key Value pair**. + +1. Provide **Key** and **Value** and select **Save**. + +1. A **Confirm changes to AWS tags** message pops up. + +1. Select **Confirm** to add the tag. + +1. This **Key:Value** pair is added at your project level. It is populated on the EC2 instances, S3 buckets, load balancers, and volumes managed in that project, for all regions and clusters. + +This new tag appears in the list under **AWS** on Cloud Service Providers. + +This tag is also propagated on your AWS resources. For instance, you can view this tag in the **Tags** tab of your EC2 instance resources on the AWS console. + +### Considerations + +Consider the following when adding a **Key:Value** pair: + +- A number of case-sensitive keywords are reserved by AWS and EDB Cloud Service, and therefore cannot be entered as **Key**s. + +
Reserved keywords + + Reserved tag key prefixes: + + `aws:`, `AWS:`, `user:`, `k8s.io/`, `eks:`, `kubernetes.io/`, `elbv2.k8s.aws/`, `service.k8s.aws/`, `ebs.csi.aws.com/` + + Reserved tag keys: + + `Resource_type`, `Project`, `Environment`, `Topology`, `BAID`, `ManagedBy`, `biganimal-cluster`, `biganimal-project`, `CSIVolumeName`, `Name`. + +
+ +
+ +- Review the [tagging limitations of AWS](https://docs.aws.amazon.com/tag-editor/latest/userguide/tagging.html#tag-conventions). + +- In AWS, a resource can have a maximum of 50 tags. The EDB Cloud Service and AWS Management Console apply some tags by default, so we recommend not adding more than 20 of your own tags. Take into account that the number of tags per resource is defined by the tags created in the EDB Console, in your cloud service provider's dashboard, with Terraform, and other synced systems. + +## Update Tags + +To update a tag: + +1. Log in to the Console. + +1. Go to the project's page. + +1. In the project's list, select your **Project**. + +1. On your project's home page left navigation, select **Cloud Providers**. + +1. On the cloud provider's page, select **Your Cloud Account**. + +1. On your cloud account tab, under **AWS**, select **Manage Tags**. + +1. Update the **Value** field of any of the existing tags. + +1. Select **Save**. + +1. A **Confirm changes to AWS tags** message pops up. + +1. Select **Confirm** to update the tag. + +1. This **Key:Value** pair is updated at your project level. It is populated on the EC2 instances, S3 buckets, load balancers, and volumes managed in that project, for all regions and clusters. + + +## Delete Tags + +1. Log in to the Console. + +1. Go to the project's page. + +1. In the project's list, select your **Project**. + +1. On your project's home page left navigation, select **Cloud Providers**. + +1. On the cloud provider's page, select **Your Cloud Account**. + +1. On your cloud account tab, under **AWS**, select **Manage Tags**. + +1. Select the minus symbol (**—**) next to the Key value pair to delete the tag. + +1. Select **Save**. + +1. A **Confirm changes to AWS tags** message pops up. + +1. Select **Confirm** to delete the tag and update the tags list. + +This tag is also deleted on the AWS Console.
You can confirm the tag's deletion by selecting the **Tags** tab of your EC2 instance resources on the AWS console. + diff --git a/advocacy_docs/edb-postgres-ai/cloud-service/using_cluster/your_cloud_account/index.mdx b/advocacy_docs/edb-postgres-ai/cloud-service/using_cluster/your_cloud_account/index.mdx index d2060030ec0..c0acc31a605 100644 --- a/advocacy_docs/edb-postgres-ai/cloud-service/using_cluster/your_cloud_account/index.mdx +++ b/advocacy_docs/edb-postgres-ai/cloud-service/using_cluster/your_cloud_account/index.mdx @@ -1,10 +1,11 @@ --- title: Your Cloud Account -description: Using your own cloud account to enable AWS Secrets Manager and Apache Superset and your own policies on EDB Postgres AI Cloud Service. +description: Using your own cloud account for tagging and to enable AWS Secrets Manager and Apache Superset and your own policies on EDB Postgres AI Cloud Service. navigation: - managing_superset_access - analyze_with_superset - aws_secrets_manager_integration - customizing_compliance +- csp_tagging --- diff --git a/product_docs/docs/biganimal/release/using_cluster/managing_cloud_account/csp_tagging.mdx b/product_docs/docs/biganimal/release/using_cluster/managing_cloud_account/csp_tagging.mdx index b7042faea72..8ab96fd53fd 100644 --- a/product_docs/docs/biganimal/release/using_cluster/managing_cloud_account/csp_tagging.mdx +++ b/product_docs/docs/biganimal/release/using_cluster/managing_cloud_account/csp_tagging.mdx @@ -1,11 +1,9 @@ --- title: "Tagging AWS resources" deepToC: true -redirects: - - /purl/cloudservice/csp_tagging --- -Tags are key-value pairs you can apply to cloud provider resources created on your own cloud account, also known as [Your Cloud Account](/edb-postgres-ai/cloud-service/managed/). Resource tagging allows you to have fine-grained audit policy and cost control over your resources.
When you add a tag, the following resources are labeled: +Tags are key-value pairs you can apply to cloud provider resources created on your own cloud account, also known as [Your Cloud Account](/edb-postgres-ai/cloud-service/getting-started/your-cloud-account). Resource tagging allows you to have fine-grained audit policy and cost control over your resources. When you add a tag, the following resources are labeled: - EC2 instances - S3 buckets From b5445976ceac8c88252aa11331c30a9992520b14 Mon Sep 17 00:00:00 2001 From: Dj Walker-Morgan Date: Tue, 6 Aug 2024 12:38:14 +0100 Subject: [PATCH 58/59] Refix link in BA docs. Signed-off-by: Dj Walker-Morgan --- .../using_cluster/managing_cloud_account/csp_tagging.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/product_docs/docs/biganimal/release/using_cluster/managing_cloud_account/csp_tagging.mdx b/product_docs/docs/biganimal/release/using_cluster/managing_cloud_account/csp_tagging.mdx index 8ab96fd53fd..43c414c8fb7 100644 --- a/product_docs/docs/biganimal/release/using_cluster/managing_cloud_account/csp_tagging.mdx +++ b/product_docs/docs/biganimal/release/using_cluster/managing_cloud_account/csp_tagging.mdx @@ -3,7 +3,7 @@ title: "Tagging AWS resources" deepToC: true --- -Tags are key-value pairs you can apply to cloud provider resources created on your own cloud account, also known as [Your Cloud Account](/edb-postgres-ai/cloud-service/getting-started/your-cloud-account). Resource tagging allows you to have fine-grained audit policy and cost control over your resources. When you add a tag, the following resources are labeled: +Tags are key-value pairs you can apply to cloud provider resources created on your own cloud account, also known as [Your Cloud Account](/edb-postgres-ai/cloud-service/using_cluster/your_cloud_account). Resource tagging allows you to have fine-grained audit policy and cost control over your resources. 
When you add a tag, the following resources are labeled: - EC2 instances - S3 buckets From 9c1bd40906fc3073688a7166688f8d9e5d1de6c0 Mon Sep 17 00:00:00 2001 From: Dj Walker-Morgan Date: Tue, 6 Aug 2024 18:10:16 +0100 Subject: [PATCH 59/59] Fixes and linkcheck run Signed-off-by: Dj Walker-Morgan --- .../using_cluster/identity_provider/index.mdx | 154 --------------- .../cloud-service/using_cluster/index.mdx | 1 - .../create_a_cluster/create_cluster_cli.mdx | 11 +- .../create_cluster_portal.mdx | 12 +- .../release/free_trial/quickstart.mdx | 179 +++++++++--------- .../release/known_issues/known_issues_pgd.mdx | 18 +- .../release/migration/dha_bulk_migration.mdx | 114 +++++------ .../biganimal/release/overview/poolers.mdx | 7 +- .../biganimal/release/overview/updates.mdx | 8 +- .../01_postgres_access/index.mdx | 56 +++--- .../01_connecting_from_azure/index.mdx | 91 +++++---- .../02_connecting_from_aws/index.mdx | 89 +++++---- .../connecting_from_gcp/index.mdx | 157 ++++++++------- .../03_modifying_your_cluster/index.mdx | 65 +++---- .../fault_injection_testing/index.mdx | 45 +++-- .../using_cluster/managing_replicas.mdx | 174 +++++++++-------- .../release/using_cluster/pgd_cli_ba.mdx | 25 +-- .../pgd/5/reference/conflict_functions.mdx | 2 +- 18 files changed, 556 insertions(+), 652 deletions(-) delete mode 100644 advocacy_docs/edb-postgres-ai/cloud-service/using_cluster/identity_provider/index.mdx diff --git a/advocacy_docs/edb-postgres-ai/cloud-service/using_cluster/identity_provider/index.mdx b/advocacy_docs/edb-postgres-ai/cloud-service/using_cluster/identity_provider/index.mdx deleted file mode 100644 index 4062ceb5a27..00000000000 --- a/advocacy_docs/edb-postgres-ai/cloud-service/using_cluster/identity_provider/index.mdx +++ /dev/null @@ -1,154 +0,0 @@ ---- -title: "Setting up your identity provider" -navTitle: "Setting up your identity provider" -description: Describes identity provider setup options ---- - -After signing in for the first time with your EDB 
account, you can either use the BigAnimal portal as your identity provider or set up your own. - -When using your own identity provider, you add users to BigAnimal by adding them to the designated group in your identity provider. Once you've logged into BigAnimal using your own identity provider, you can set up your cloud service provider in the BigAnimal portal to complete onboarding. - -If you're using the BigAnimal portal as your identity provider, you can also invite users that have an EDB account by selecting **Invite New User** on the Users page. After providing their EDB account email and their role, you can send them an invitation link. Ensure the user accepts the invitation within 48 hours, or you'll have to send a new invitation. - -For more information on roles, see [Managing portal access](/biganimal/release/administering_cluster/01_portal_access/). - -## Setting up your own identity provider - -BigAnimal supports single sign-on through SAML identity providers. The SAML application enables access to BigAnimal for groups selected in your identity provider. You configure your SAML application to send a SAML assertion response to BigAnimal for each user. - -The identity provider application provides: -- Signature certificate -- Single sign-on URL (Sign In Endpoint) - -The BigAnimal service provider provides: -- ACS URL (Assertion Consumer Service) -- SP Entity ID (Audience URI) - -The Set Up Identity Provider page in BigAnimal provides the ACS URL and Audience URI, which you can copy to use when configuring your SAML application. The page also includes information about mandatory attributes for the configuration. 
- -You need the following SAML assertions to map the user information in your identity provider application to BigAnimal: - -``` -http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier -http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name -http://schemas.xmlsoap.org/ws/2005/05/identity/claims/givenname -http://schemas.xmlsoap.org/ws/2005/05/identity/claims/surname -http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress -``` - -### Configure SAML - -On the Set Up Identity Provider page: - -1. In the **Connection Info** section, copy the following URLs to use in your identity provider configuration: - - | URL | Description | - | ------------------------------ | ----------- | - | Assertion Consumer Service URL | BigAnimal-specific URL to which SAML assertions from your identity provider are sent | - | Audience URI | The entity or audience for which the SAML assertion is intended | -2. In the **Configure SAML response** section, configure your identity provider to send the SAML response. The SAML response must include the following attributes: - - `givenname` — BigAnimal uses the value as the given name in the profile of the authenticated user. - - `surname` — BigAnimal uses the value as the surname in the profile of the authenticated user. - - `name` — BigAnimal uses the value as the full name (`givenname` joined with `surname`) in the profile of the authenticated user. - - `emailaddress` — BigAnimal uses the value for the email address in the profile of the authenticated user. - - Provide a `NameID` element in the `Subject` element. Provide your email ID as the value to `NameID`. BigAnimal uses the email ID you provide as your username and primary email, so format `NameID` like an email address. - - For example: - - ![](../images/nameID.png) -3. 
In the **SAML settings** section, enter the configuration information for your preferred SAML identity provider: - - | Field | Description | - | ---------- | ----------- | - | Single Sign-On URL | The identity provider's sign-on URL. | - | Identity Provider Signature Certificate | Identity provider's assertion signing certificate (`.cer` or `.cert`). Coordinate with your identity provider partner to obtain this certificate securely. | - | Request Binding | SAML Authentication Request Protocol binding used to send the authentication request: HTTP-Redirect, HTTP-Post, or Hybrid (SAML request is REDIRECT and response is POST). | - | Response Signature Algorithm (RSA) Algorithm | The signature algorithm used to sign the SAML AuthNRequest (RSA SHA-1 or RSA SHA-256). | -4. Select **Test Connection**. If you connect to your identity provider successfully, your identity provider's login screen appears. If an error message appears, contact [Support](/biganimal/latest/overview/support). - -Once your identity provider is set up, you can view your connection status, ID, login URL, audience URI, and assertion consumer service URL from the BigAnimal portal on the Identity Provider page. Select **Admin > Identity Provider** to access it. - -!!!SeeAlso "Further reading" - [Setting up specific identity providers](knowledge_articles) - -### Add a domain - -You need a verified domain so your users can have a streamlined login experience with their email address. - -1. On the **Domains** tab, enter the domain name and select **Next: Verify Domain**. -2. To add it as a TXT record on that domain in your DNS provider's management console, copy the TXT record and follow the instructions in the on-screen verify box: - - 1. Log in to your domain registrar or web host account. - 1. Navigate to the DNS settings for the domain you want to verify. - 1. Add a TXT record. - 1. In the **Name** field, enter `@`. - 1. 
In the **Value** field, enter the verification string provided, for example, `edb-biganimal-verification=VjpcxtIC57DujkKMtECSwo67FyfCExku`. - 1. Save your changes and wait for the DNS propagation period to complete. This can take up to 48 hours. - -3. Select **Done**. - - Your domain and its status appear on the **Domains** tab, where you can delete or verify it. - Domains can take up to 48 hours for the change of the domain record by the DNS provider to propagate before you can verify it. - -4. If your domain hasn't verified after a day, you can debug whether your domain has the matching verification text field. - To check the exact value of the required TXT field, select **Verify** next to the domain at `/settings/domains`. - Query your domain directly with DNS tools, such as nslookup, to check if you have an exact match for a text = "verification" field. - Domains can have many TXT fields. As long as one matches, it should verify. - -``` -> nslookup -type=TXT mydomain.com - -;; Truncated, retrying in TCP mode. -Server: 192.168.1.1 -Address: 192.168.1.1#53 -Non-authoritative answer: -... -mydomain.com text = "edb-biganimal-verification=VjpcxtIC57DujkKMtECSwo67FyfCExku" -``` - -To add another domain, select **Add Domain**. - -When you have at least one verified domain (with **Status = Verified**, in green), the identity provider status becomes **Active** on the **Identity Providers** tab. -When the domain is no longer verified, the status becomes **Inactive**. - -!!! Note - Your DNS provider can take up to 48 hours to update. Once the domain is verified, the identity provider status can take up to three minutes to update. - -### Domain expiry - -The EDB system has a 10-day expiry set for checking whether domains are verified. - -You buy domains from DNS providers by way of a leasing system. If the lease expires, you no longer own the domain, and it disappears from the Internet. -If this happens, you need to renew your domain with your DNS provider.
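The nslookup check described above can also be scripted. This is a minimal sketch, assuming the TXT records have already been captured from `nslookup -type=TXT` output; `mydomain.com` and the verification value are this page's example placeholders, so substitute your own domain and key:

```shell
# Sketch: scan captured TXT records for the domain verification string.
# The domain and verification value below are the docs' example placeholders.
expected='edb-biganimal-verification=VjpcxtIC57DujkKMtECSwo67FyfCExku'

# In practice, capture this with: records=$(nslookup -type=TXT mydomain.com)
records='mydomain.com text = "edb-biganimal-verification=VjpcxtIC57DujkKMtECSwo67FyfCExku"
mydomain.com text = "v=spf1 include:example.com ~all"'

# Domains can have many TXT fields; one exact match is enough to verify.
if printf '%s\n' "$records" | grep -qF "$expected"; then
    result=verified
else
    result='not verified'
fi
echo "$result"
```

Because a domain can carry many TXT fields, matching the full verification token as a fixed string (`grep -F`) avoids false positives from partial or pattern matches.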
Whether the domain failed to verify within the 10 days or it expired months later, -it appears as **Status = Expired** (in red). -You can't reinstate an expired domain -because expiry means you might no longer own the domain. You need to verify it again. - -- To delete the domain, select the bin icon. -- To re-create the domain, select **Add Domain**. Set a new verification key for the domain and update the TXT record for it in your DNS provider's management console, as described in [Add a domain](#add-a-domain). - -### Manage roles for added users - -You add users through your identity provider. A user who you add in the identity provider is automatically added to BigAnimal. BigAnimal assigns them the default role of organization member. You manage roles and permissions from BigAnimal. See [Managing portal access](/biganimal/latest/administering_cluster/01_portal_access/). - -!!! Note - - A user is created in BigAnimal only after they log in. After they log in, you can change their BigAnimal role. - -### Add a tile - -Once you establish the identity provider, you can create a BigAnimal tile for users to access the organization's BigAnimal application. To do so, copy the quick sign-in URL from the **Settings > Identity Provider** page of the BigAnimal portal. For details on how to add a tile, refer to your identity provider documentation for instructions on setting up SSO access to your application. - - -## Next steps - -You and other users can log in to BigAnimal using your identity provider credentials. - -You can rename the default project BigAnimal creates for you. See [Editing a project](/biganimal/release/administering_cluster/projects/#editing-a-project). - -You can [set up your cloud service provider](/biganimal/latest/getting_started/02_connecting_to_your_cloud/) so that you or other users with the correct permissions can create clusters. - -You can assign roles to your default project.
See [Managing portal access](/biganimal/release/administering_cluster/01_portal_access). diff --git a/advocacy_docs/edb-postgres-ai/cloud-service/using_cluster/index.mdx b/advocacy_docs/edb-postgres-ai/cloud-service/using_cluster/index.mdx index c42265fdbaa..e3522146332 100644 --- a/advocacy_docs/edb-postgres-ai/cloud-service/using_cluster/index.mdx +++ b/advocacy_docs/edb-postgres-ai/cloud-service/using_cluster/index.mdx @@ -6,7 +6,6 @@ navigation: - connect_from_a_client - edb_hosted - your_cloud_account -- identity_provider - faraway_replicas - postgres_access - cli diff --git a/product_docs/docs/biganimal/release/free_trial/detail/create_a_cluster/create_cluster_cli.mdx b/product_docs/docs/biganimal/release/free_trial/detail/create_a_cluster/create_cluster_cli.mdx index 29e12115bbf..dd131e9efcb 100644 --- a/product_docs/docs/biganimal/release/free_trial/detail/create_a_cluster/create_cluster_cli.mdx +++ b/product_docs/docs/biganimal/release/free_trial/detail/create_a_cluster/create_cluster_cli.mdx @@ -4,13 +4,13 @@ navTitle: "Using the command line" indexCards: none --- - We'll be using the [BigAnimal command line interface](/biganimal/latest/reference/cli/), which is a convenient wrapper to the [BigAnimal API](/biganimal/latest/reference/api/). To start, download [the latest binary](/biganimal/latest/reference/cli/) and move it to wherever your system finds executable files (somewhere on your PATH). !!! Note Linux and MacOS note + If you're on a Linux or MacOS system, you'll need to mark the `biganimal` file as executable by running `chmod +x [/path/to/biganimal]` before you can use it. - + Example (for Linux or MacOS): ```shell @@ -33,6 +33,7 @@ press [Enter] to continue in the web browser... ``` !!! Note Linux dependencies + The BigAnimal CLI uses the xdg-open utility to open a browser on Linux systems. On minimal systems, you might need to install this dependency before creating a credential. @@ -51,8 +52,8 @@ __OUTPUT__ ``` !!! 
Note Caution - If you add another credential, the newly created credential will be set as the new default context credential. You’ll need to add `--credential [newuser]` to the following commands to override the default credentials. If you have only one, the option isn't needed. You can change the default credential using `biganimal config set context_credential [name]`. + If you add another credential, the newly created credential will be set as the new default context credential. You’ll need to add `--credential [newuser]` to the following commands to override the default credentials. If you have only one, the option isn't needed. You can change the default credential using `biganimal config set context_credential [name]`. Use the `biganimal region show` command to see the available active regions you can pick from for your cluster. @@ -80,6 +81,7 @@ biganimal cluster create --config-file create_cluster.yaml __OUTPUT__ Are you sure you want to create cluster "test_cluster"? [y|N]: ``` + Select `y`. If successful, `cluster create` will give you the ID of your new cluster (you'll use this to manage it) as well as @@ -130,6 +132,7 @@ psql 'postgres://edb_admin@p-xxxxxxxxxx.pg.biganimal.io:5432/edb_admin?sslmode=r ``` !!! Note "Other options for connecting" + Sure, psql is great, but maybe you want to use another client. See [Connect to your cluster](../connect_to_a_cluster) for other options. ## Next steps @@ -139,4 +142,4 @@ psql 'postgres://edb_admin@p-xxxxxxxxxx.pg.biganimal.io:5432/edb_admin?sslmode=r ## Further reading [BigAnimal CLI reference](/biganimal/latest/reference/cli/) and -[Creating a cluster](/biganimal/latest/getting_started/creating_a_cluster/) in the full version documentation. +[Creating a cluster](/edb-postgres-ai/cloud-service/getting_started/creating_cluster/creating_a_cluster/) in the full version documentation. 
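The connection URI that `psql` takes above can also be assembled by hand, which is handy in scripts. This is a minimal sketch; the host is the docs' `p-xxxxxxxxxx` placeholder, so substitute your cluster's actual hostname from the CLI or portal output:

```shell
# Sketch: build the psql connection URI from its parts.
# The host below is the docs' placeholder; replace it with your cluster's host.
user=edb_admin
host=p-xxxxxxxxxx.pg.biganimal.io
port=5432
dbname=edb_admin
uri="postgres://${user}@${host}:${port}/${dbname}?sslmode=require"
echo "$uri"
# psql "$uri"   # uncomment to connect; prompts for the edb_admin password
```

The connection strings in these docs always include `sslmode=require`, so keep it when building your own URI.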
diff --git a/product_docs/docs/biganimal/release/free_trial/detail/create_a_cluster/create_cluster_portal.mdx b/product_docs/docs/biganimal/release/free_trial/detail/create_a_cluster/create_cluster_portal.mdx index 8b5604f8b5b..e24035b02e5 100644 --- a/product_docs/docs/biganimal/release/free_trial/detail/create_a_cluster/create_cluster_portal.mdx +++ b/product_docs/docs/biganimal/release/free_trial/detail/create_a_cluster/create_cluster_portal.mdx @@ -6,16 +6,15 @@ indexCards: none ## Navigating to the Create Cluster page -1. Navigate to the [BigAnimal portal](https://portal.biganimal.com). Sign in with [your account](../create_an_account). +1. Navigate to the [BigAnimal portal](https://portal.biganimal.com). Sign in with [your account](../create_an_account). -2. Select the **Clusters** link on the left to navigate to the [Clusters](https://portal.biganimal.com/clusters) page. +2. Select the **Clusters** link on the left to navigate to the [Clusters](https://portal.biganimal.com/clusters) page. -3. Select **Create New Cluster**, which opens the [Create Cluster page](https://portal.biganimal.com/create-cluster). +3. Select **Create New Cluster**, which opens the [Create Cluster page](https://portal.biganimal.com/create-cluster). -4. Select the options you want for your cluster. See [Creating a cluster](/biganimal/latest/getting_started/creating_a_cluster/) in the full version documentation for details on the options. +4. Select the options you want for your cluster. See [Creating a cluster](/edb-postgres-ai/cloud-service/getting_started/creating_cluster/creating_a_cluster/) in the full version documentation for details on the options. - -## Provisioning +## Provisioning Select **Create Cluster**, and you're brought back to the Clusters page with your newly configured cluster now populating the list. @@ -26,4 +25,3 @@ Select the provisioned cluster to view the parameters you'll need to connect to ## Next steps [Connect to your cluster](../connect_to_a_cluster). 
- diff --git a/product_docs/docs/biganimal/release/free_trial/quickstart.mdx b/product_docs/docs/biganimal/release/free_trial/quickstart.mdx index d1685d8f3c4..0bbd9725a75 100644 --- a/product_docs/docs/biganimal/release/free_trial/quickstart.mdx +++ b/product_docs/docs/biganimal/release/free_trial/quickstart.mdx @@ -16,14 +16,13 @@ If you haven't done so already, you'll need to [create your EDB account](detail/ Then, use your newly created account to access the [BigAnimal](https://portal.biganimal.com) portal. +## Step 2: Create a cluster -## Step 2: Create a cluster - -1. On the overview page, select **Create New Cluster**. +1. On the overview page, select **Create New Cluster**. You should now find yourself at the [Create Cluster page](https://portal.biganimal.com/create-cluster). -1. Select the options for your cluster. See [Creating a cluster](/biganimal/latest/getting_started/creating_a_cluster/) for more information. +2. Select the options for your cluster. See [Creating a cluster](/edb-postgres-ai/cloud-service/getting_started/creating_cluster/creating_a_cluster/) for more information. ### Managing your cluster @@ -31,23 +30,23 @@ After you select **Create Cluster**, you return to the Clusters page with your n ## Step 3: Connect to your new cluster -1. Select your cluster to get an overview of how it has been configured. Select the **Connect** tab to see more information about how to connect to your cluster. -2. Select the **Overview** tab and copy the **Quick Connect** command. Paste it into a terminal where psql is installed. It will prompt for your password and put you on a SQL command line. For example: +1. Select your cluster to get an overview of how it has been configured. Select the **Connect** tab to see more information about how to connect to your cluster. +2. Select the **Overview** tab and copy the **Quick Connect** command. Paste it into a terminal where psql is installed. It will prompt for your password and put you on a SQL command line. 
For example: - ```shell - psql -W "postgres://edb_admin@p-qzwv2ns7pj.pg.biganimal.io:5432/edb_admin?sslmode=require" - Password: - __OUTPUT__ - psql (15.2) - SSL connection (protocol: TLSv1.3, cipher: TLS_AES_256_GCM_SHA384, compression: off) - Type "help" for help. + ```shell + psql -W "postgres://edb_admin@p-qzwv2ns7pj.pg.biganimal.io:5432/edb_admin?sslmode=require" + Password: + __OUTPUT__ + psql (15.2) + SSL connection (protocol: TLSv1.3, cipher: TLS_AES_256_GCM_SHA384, compression: off) + Type "help" for help. - edb_admin=> - ``` + edb_admin=> + ``` !!! Note - While psql is a good all-around option for working with Postgres databases, you can use the client of your choice. See [Connect to a cluster](detail/connect_to_a_cluster) for more ideas. + While psql is a good all-around option for working with Postgres databases, you can use the client of your choice. See [Connect to a cluster](detail/connect_to_a_cluster) for more ideas. ## Things to try @@ -55,91 +54,91 @@ After you select **Create Cluster**, you return to the Clusters page with your n We're going to create some sample math data, so we're going to create a database called `math`. We could use the default `edb_admin` database, but best practice is to isolate data. -1. Create a new `math` database. +1. Create a new `math` database. + + ```sql + create user math with password 'math_password'; + create database math; + ``` - ```sql - create user math with password 'math_password'; - create database math; - ``` +2. Grant the `math` role to edb_admin. -2. Grant the `math` role to edb_admin. - - ```sql - grant math to edb_admin; - ``` -2. Connect to the `math` database. You're prompted for the edb_admin password you provided in Step 2. + ```sql + grant math to edb_admin; + ``` - ```sql - \connect math - ``` +3. Connect to the `math` database. You're prompted for the edb_admin password you provided in Step 2. 
+ ```sql + \connect math + ``` ### Populate a table and query it We're going to use recursive queries to calculate prime numbers using a [Sieve of Eratosthenes](https://en.wikipedia.org/wiki/Sieve_of_Eratosthenes). -1. Create a table called `primes` for storing prime numbers. - - ```sql - CREATE TABLE primes ( - num INTEGER, - PRIMARY KEY (num) - ); - ``` - -2. Populate the table with all prime numbers up to 1000. (This code is based on [code from David Fetter](https://wiki.postgresql.org/wiki/Sieve_of_Eratosthenes).)
+ + ```sql + -- Based on https://wiki.postgresql.org/wiki/Sieve_of_Eratosthenes + + WITH RECURSIVE + t0(m) AS ( + VALUES(1000) + ), + t1(n) AS ( + VALUES(2) + UNION ALL + SELECT n+1 FROM t1 WHERE n < (SELECT m FROM t0) + ), + t2 (n, i) AS ( + SELECT 2*n, 2 + FROM t1 WHERE 2*n <= (SELECT m FROM t0) + UNION ALL + ( + WITH t3(k) AS ( + SELECT max(i) OVER () + 1 FROM t2 + ), + t4(k) AS ( + SELECT DISTINCT k FROM t3 + ) + SELECT k*n, k + FROM + t1 + CROSS JOIN + t4 + WHERE k*k <= (SELECT m FROM t0) + ) + ) + INSERT INTO primes ( + SELECT n FROM t1 EXCEPT + SELECT n FROM t2 + ORDER BY 1 + ); + ``` + +3. Select the largest prime number less than 1000. + + ```sql + SELECT max(num) + FROM primes + WHERE num < 1000; + ``` ## Further reading Now that you've got the basics, see what else BigAnimal offers: -- [Backup and restore](detail/experiment/backup_and_restore) -- [Import data](detail/experiment/import_data) -- [CLI reference](../reference/cli) -- [API reference](../reference/api) +- [Backup and restore](detail/experiment/backup_and_restore) +- [Import data](detail/experiment/import_data) +- [CLI reference](../reference/cli) +- [API reference](../reference/api) diff --git a/product_docs/docs/biganimal/release/known_issues/known_issues_pgd.mdx b/product_docs/docs/biganimal/release/known_issues/known_issues_pgd.mdx index d472725761b..f99711d6163 100644 --- a/product_docs/docs/biganimal/release/known_issues/known_issues_pgd.mdx +++ b/product_docs/docs/biganimal/release/known_issues/known_issues_pgd.mdx @@ -9,19 +9,22 @@ redirects: These are currently known issues in EDB Postgres Distributed (PGD) on BigAnimal as deployed in distributed high availability clusters. These known issues are tracked in our ticketing system and are expected to be resolved in a future release. -For general PGD known issues, refer to the [Known Issues](/pgd/latest/known_issues/) and [Limitations](/pgd/latest/limitations/) in the PGD documentation. 
+For general PGD known issues, refer to the [Known Issues](/pgd/latest/known_issues/) and [Limitations](/pgd/latest/planning/limitations/) in the PGD documentation. ## Management/administration -### Deleting a PGD data group may not fully reconcile +### Deleting a PGD data group may not fully reconcile + When deleting a PGD data group, the target group's resources are physically deleted, but in some cases we have observed that the PGD nodes may not be completely partitioned from the remaining PGD groups. We recommend avoiding this feature until this issue is fixed and removed from the known issues list. ### Adjusting PGD cluster architecture may not fully reconcile + In rare cases, we have observed that changing the node architecture of an existing PGD cluster may not complete. If a change hasn't taken effect in 1 hour, reach out to Support. ### PGD cluster may fail to create due to Azure SKU issue + In some cases, although a regional quota check may have passed initially when the PGD cluster is created, it may fail if an SKU critical for the witness nodes is unavailable across three availability zones. To check for this issue at the time of a region quota check, run: @@ -36,34 +39,41 @@ We're going to be provisioning a number of instances of in • A multiple (2 or 3) of your largest table
or
• More than one third of the capacity of your dedicated WAL disk (if configured) | - | GUC variable | Setting | - |----------------------|----------------------------------------------------------------------------------------------------------------------------------------------| - | maintenance_work_mem | 1GB | - | wal_sender_timeout | 60min | - | wal_receiver_timeout | 60min | - | max_wal_size | Set to either:
• A multiple (2 or 3) of your largest table
or
• More than one third of the capacity of your dedicated WAL disk (if configured) | - Make note of the target's proxy hostname (target-proxy) and port (target-port). You also need a user (target-user) and password (target-password) for the target cluster. The following instructions give examples for a cluster named `ab-cluster` with an `ab-group` subgroup and three nodes: `ab-node-1`, `ab-node-2`, and `ab-node3`. The cluster is accessed through a host named `ab-proxy` (the target-proxy). @@ -33,30 +32,28 @@ On BigAnimal, a cluster is configured, by default, with an `edb_admin` user (the The target-password for the target-user is available from the BigAnimal dashboard for the cluster. A database named `bdrdb` (the target-dbname) was also created. - ## Identify your data source You need the source hostname (source-host), port (source-port), database name (source-dbname), user, and password for your source database. Also, you currently need a list of tables in the database that you want to migrate to the target database. - ## Prepare a bastion server Create a virtual machine with your preferred operating system in the cloud to orchestrate your bulk loading. -* Use your EDB account. - * Obtain your EDB repository token from the [EDB Repos 2.0](https://www.enterprisedb.com/repos-downloads) page. -* Set environment variables. - * Set the `EDB_SUBSCRIPTION_TOKEN` environment variable to the repository token. -* Configure the repositories. - * Run the automated installer to install the repositories. -* Install the required software. - * Install and configure: - * psql - * PGD CLI - * Migration Toolkit - * LiveCompare +- Use your EDB account. + - Obtain your EDB repository token from the [EDB Repos 2.0](https://www.enterprisedb.com/repos-downloads) page. +- Set environment variables. + - Set the `EDB_SUBSCRIPTION_TOKEN` environment variable to the repository token. +- Configure the repositories. + - Run the automated installer to install the repositories. 
+- Install the required software. + - Install and configure: + - psql + - PGD CLI + - Migration Toolkit + - LiveCompare ### Use your EDB account @@ -74,13 +71,15 @@ export EDB_SUBSCRIPTION_TOKEN=your-repository-token The required software is available from the EDB repositories. You need to install the EDB repositories on your bastion server. -* Red Hat +- Red Hat + ```shell curl -1sLf "https://downloads.enterprisedb.com/$EDB_SUBSCRIPTION_TOKEN/postgres_distributed/setup.rpm.sh" | sudo -E bash curl -1sLf "https://downloads.enterprisedb.com/$EDB_SUBSCRIPTION_TOKEN/enterprise/setup.rpm.sh" | sudo -E bash ``` -* Ubuntu/Debian +- Ubuntu/Debian + ```shell curl -1sLf "https://downloads.enterprisedb.com/$EDB_SUBSCRIPTION_TOKEN/postgres_distributed/setup.deb.sh" | sudo -E bash curl -1sLf "https://downloads.enterprisedb.com/$EDB_SUBSCRIPTION_TOKEN/enterprise/setup.deb.sh" | sudo -E bash @@ -94,12 +93,14 @@ Once the repositories are configured, you can install the required software. The psql command is the interactive terminal for working with PostgreSQL. It's a client application and can be installed on any operating system. Packaged with psql are pg_dump and pg_restore, command-line utilities for dumping and restoring PostgreSQL databases. -* Ubuntu +- Ubuntu + ```shell sudo apt install postgresql-client-16 ``` -* Red Hat +- Red Hat + ```shell sudo dnf install postgresql-client-16 ``` @@ -118,15 +119,18 @@ Ensure that your passwords are appropriately escaped in the `.pgpass` file. If a chmod 0600 $HOME/.pgpass ``` -#### Installing PGD CLI +#### Installing PGD CLI PGD CLI is a command-line interface for managing and monitoring PGD clusters. It's a Go application and can be installed on any operating system. -* Ubuntu +- Ubuntu + ```shell sudo apt-get install edb-pgd5-cli ``` -* Red Hat + +- Red Hat + ```shell sudo dnf install edb-pgd5-cli ``` @@ -151,19 +155,21 @@ cluster: Save it as `pgd-cli-config.yml`. -See also [Installing PGD CLI](/pgd/latest/cli/installing_cli/). 
- +See also [Installing PGD CLI](/pgd/latest/cli/installing/). #### Installing Migration Toolkit EDB's Migration Toolkit (MTK) is a command-line tool you can use to migrate data from a source database to a target database. It's a Java application and requires a Java runtime environment to be installed. -* Ubuntu +- Ubuntu + ```shell sudo apt-get -y install edb-migrationtoolkit sudo wget https://jdbc.postgresql.org/download/postgresql-42.7.2.jar -P /usr/edb/migrationtoolkit/lib ``` -* Red Hat + +- Red Hat + ```shell sudo dnf -y install edb-migrationtoolkit sudo wget https://jdbc.postgresql.org/download/postgresql-42.7.2.jar -P /usr/edb/migrationtoolkit/lib ``` @@ -175,11 +181,14 @@ See also [Installing Migration Toolkit](/migration_toolkit/latest/installing/) EDB LiveCompare is an application you can use to compare two databases and generate a report of the differences. You'll use it later in this process to verify the data migration. -* Ubuntu +- Ubuntu + ``` sudo apt-get -y install edb-livecompare ``` -* Red Hat + +- Red Hat + ``` sudo dnf -y install edb-livecompare ``` @@ -192,7 +201,6 @@ On the target cluster and within the regional group required, select one node to If you have a group `ab-group` with `ab-node-1`, `ab-node-2`, and `ab-node-3`, you can select `ab-node-1` as the destination node. - ### Set up a fence Fence off all other nodes except for the destination node. @@ -208,7 +216,6 @@ select bdr.alter_node_option('ab-node-3','route_fence','t'); The next time you connect with psql, you're directed to the write leader, which should be the destination node. To ensure that it is, you need to send two more commands. - ### Make the destination node both write and raft leader To minimize the possibility of disconnections, move the raft and write leader roles to the destination node.
@@ -221,7 +228,6 @@ bdr.raft_leadership_transfer('ab-node-1',true,'ab-group'); Because you fenced off the other nodes in the group, this command triggers a write leader election in the `ab-group` that elects the `ab-node-1` as write leader. - ### Record then clear default commit scopes You need to make a record of the default commit scopes in the cluster. The next step overwrites the settings. (At the end of this process, you need to restore them.) Run: @@ -237,7 +243,7 @@ This command produces an output similar to: -----------------+---------------------- world | ab-group | ba001_ab-group-a - ``` +``` Record these values. You can now overwrite the settings: @@ -249,7 +255,8 @@ select bdr.alter_node_group_option('ab-group','default_commit_scope', 'local'); Check that the target cluster is healthy. -* To check the overall health of the cluster, run` pgd -f pgd-cli-config.yml check-health` : +- To check the overall health of the cluster, run `pgd -f pgd-cli-config.yml check-health`: + ``` Check Status Message ----- ------ ------- ClockSkew Ok Clock drift is within permissible limit Connection Ok All BDR nodes are accessible @@ -259,9 +266,11 @@ Raft Ok Raft Consensus is working correctly Replslots Ok All BDR replication slots are working correctly Version Ok All nodes are running same BDR versions ``` + (When the cluster is healthy, all checks pass.) -* To verify the configuration of the cluster, run `pgd -f pgd-cli-config.yml verify-cluster`: +- To verify the configuration of the cluster, run `pgd -f pgd-cli-config.yml verify-cluster`: + ``` Check Status Groups ----- ------ ------ There is always at least 1 Global Group and 1 Data Group Ok There are at least 2 data nodes in a Data Group (except for the witness-only group) Ok There is at most 1 witness node in a data group Ok Witness-only group does not have any child groups There is at max 1 witness-only group iff there is even number of local Data Groups Ok There are at least 2 proxies configured per Data Group if routing is enabled Ok ``` + (When the cluster is verified, all checks pass.)
-* To check the status of the nodes, run `pgd -f pgd-cli-config.yml show-nodes`: +- To check the status of the nodes, run `pgd -f pgd-cli-config.yml show-nodes`: + ``` Node Node ID Group Type Current State Target State Status Seq ID ---- ------- ----- ---- ------------- ------------ ------ ------ @@ -283,14 +294,13 @@ ab-node-2 2587806295 ab-group data ACTIVE ACTIVE Up 2 ab-node-3 199017004 ab-group data ACTIVE ACTIVE Up 3 ``` +- To confirm the raft leader, run `pgd -f pgd-cli-config.yml show-raft`. -* To confirm the raft leader, run `pgd -f pgd-cli-config.yml show-raft`. +- To confirm the replication slots, run `pgd -f pgd-cli-config.yml show-replslots`. -* To confirm the replication slots, run `pgd -f pgd-cli-config.yml show-replslots`. +- To confirm the subscriptions, run `pgd -f pgd-cli-config.yml show-subscriptions`. -* To confirm the subscriptions, run `pgd -f pgd-cli-config.yml show-subscriptions`. - -* To confirm the groups, run `pgd -f pgd-cli-config.yml show-groups`. +- To confirm the groups, run `pgd -f pgd-cli-config.yml show-groups`. These commands provide a snapshot of the state of the cluster before the migration begins. @@ -298,11 +308,10 @@ These commands provide a snapshot of the state of the cluster before the migrati Currently, you must migrate the data in four phases: -1. Transferring the “pre-data” using pg_dump and pg_restore, which exports and imports all the data definitions. -1. Transfer the role definitions using pg_dumpall and psql. -1. Using MTK with the `--dataonly` option to transfer only the data from each table, repeating as necessary for each table. -1. Transferring the “post-data” using pg_dump and pg_restore, which completes the data transfer. - +1. Transferring the “pre-data” using pg_dump and pg_restore, which exports and imports all the data definitions. +2. Transfer the role definitions using pg_dumpall and psql. +3. 
Using MTK with the `--dataonly` option to transfer only the data from each table, repeating as necessary for each table. +4. Transferring the “post-data” using pg_dump and pg_restore, which completes the data transfer. ### Transferring the pre-data @@ -472,4 +481,3 @@ LiveCompare compares the source and target databases and generates a report of t Review the report to ensure that the data migration was successful. Refer to the [LiveCompare](/livecompare/latest/) documentation for more information on using LiveCompare. - diff --git a/product_docs/docs/biganimal/release/overview/poolers.mdx b/product_docs/docs/biganimal/release/overview/poolers.mdx index 2c93116214b..6a4e1c7c552 100644 --- a/product_docs/docs/biganimal/release/overview/poolers.mdx +++ b/product_docs/docs/biganimal/release/overview/poolers.mdx @@ -2,15 +2,12 @@ title: EDB PgBouncer --- -EDB PgBouncer can manage your connections to Postgres databases and help your workloads run more efficiently. It's particularly useful if you plan to use more than a few hundred active connections. You can enable EDB PgBouncer to be entirely managed by BigAnimal, when creating your cluster. See [Creating a cluster](/biganimal/latest/getting_started/creating_a_cluster/#pgbouncer). +EDB PgBouncer can manage your connections to Postgres databases and help your workloads run more efficiently. It's particularly useful if you plan to use more than a few hundred active connections. You can enable EDB PgBouncer to be entirely managed by BigAnimal, when creating your cluster. See [Creating a cluster](/edb-postgres-ai/cloud-service/getting_started/creating_cluster/creating_a_cluster/#pgbouncer). BigAnimal provisions up to three instances per EDB PgBouncer-enabled cluster to ensure that performance is unaffected, so each availability zone receives its own instance of EDB PgBouncer. !!!Note + Currently, you can't enable EDB PgBouncer when creating a distributed high-availability cluster. 
If you want to deploy and manage PgBouncer outside of BigAnimal, see the [How to configure EDB PgBouncer with BigAnimal cluster](https://support.biganimal.com/hc/en-us/articles/4848726654745-How-to-configure-PgBouncer-with-BigAnimal-Cluster) knowledge-base article. - - - - diff --git a/product_docs/docs/biganimal/release/overview/updates.mdx b/product_docs/docs/biganimal/release/overview/updates.mdx index e18d887171f..f6494042f85 100644 --- a/product_docs/docs/biganimal/release/overview/updates.mdx +++ b/product_docs/docs/biganimal/release/overview/updates.mdx @@ -14,7 +14,7 @@ In some cases, these updates might terminate existing network connections to you ## Specifying maintenance windows -If you want to control when the updates are pushed, you can specify a weekly maintenance window for each cluster or each data group in the case of a distributed high-availability cluster. BigAnimal displays a *scheduled maintenance* message on your cluster list four hours prior to the scheduled maintenance time to remind you of the upcoming maintenance window. This reminder allows you to make any necessary preparations, such as saving your work and closing any open connections. For more information on specifying maintenance windows, see [Maintenance](/biganimal/latest/getting_started/creating_a_cluster/#maintenance). +If you want to control when the updates are pushed, you can specify a weekly maintenance window for each cluster or each data group in the case of a distributed high-availability cluster. BigAnimal displays a *scheduled maintenance* message on your cluster list four hours prior to the scheduled maintenance time to remind you of the upcoming maintenance window. This reminder allows you to make any necessary preparations, such as saving your work and closing any open connections. For more information on specifying maintenance windows, see [Maintenance](/edb-postgres-ai/cloud-service/getting_started/creating_cluster/creating_a_cluster/#maintenance). 
## Maintenance for high-availability clusters @@ -26,11 +26,11 @@ While there is no downtime during periodic maintenance, there will be a network Most connectivity issues correct themselves when you reopen the connection after waiting for a minimum of five seconds. We recommend that you: -- Wait five seconds before your first attempt. -- For the next attempt, increase the wait by doubling the previous wait time. Keep trying this approach until you reach a maximum wait time of 60 seconds. +- Wait five seconds before your first attempt. +- For the next attempt, increase the wait by doubling the previous wait time. Keep trying this approach until you reach a maximum wait time of 60 seconds. We also recommend that you set a maximum number of attempts to reopen the connection before your application reports that it can't reconnect. When an active connection that's currently executing a command is interrupted, you might need to take extra action when reopening the connection. (For read-only transactions that were in progress, you can reopen the connection without any extra steps.) For a transaction that was writing to the database, you need to know whether the transaction was rolled back or whether it succeeded to determine whether you need to retry the transaction. If it was rolled back, you need to retry it. If it succeeded, you don't need to retry it. It's possible for a transaction to succeed without sending you the commit acknowledgment from the database server, so you'll need to add some logic to be sure. -Test your retry logic by creating an event that causes a brief downtime to see if it's handling these transactions correctly. \ No newline at end of file +Test your retry logic by creating an event that causes a brief downtime to see if it's handling these transactions correctly. 
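The retry guidance above (wait five seconds, then double the wait each attempt up to a 60-second cap) can be sketched as a backoff schedule. This is an illustrative sketch, not BigAnimal-provided code; the reconnect command is shown only as a placeholder comment.

```shell
# Compute the recommended wait times: 5 seconds before the first retry,
# doubling after each failed attempt, capped at 60 seconds.
backoff_schedule() {
  attempts="$1"
  wait=5
  schedule=""
  i=1
  while [ "$i" -le "$attempts" ]; do
    schedule="$schedule $wait"
    # A real client would attempt the reconnect here, for example:
    # psql "$CONNECTION_STRING" -c 'select 1' && return 0
    wait=$(( wait * 2 ))
    [ "$wait" -gt 60 ] && wait=60
    i=$(( i + 1 ))
  done
  echo "$schedule"
}

backoff_schedule 6
```

Pair the schedule with a maximum attempt count so the application can report a failed reconnect, and add the transaction-retry checks described above for interrupted writes.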
diff --git a/product_docs/docs/biganimal/release/using_cluster/01_postgres_access/index.mdx b/product_docs/docs/biganimal/release/using_cluster/01_postgres_access/index.mdx index bd3a6df48b4..8a6884604e6 100644 --- a/product_docs/docs/biganimal/release/using_cluster/01_postgres_access/index.mdx +++ b/product_docs/docs/biganimal/release/using_cluster/01_postgres_access/index.mdx @@ -6,10 +6,11 @@ You control access to your Postgres database using database authentication imple For information on portal authentication, see: -- [Setting up your identity provider](/biganimal/latest/getting_started/identity_provider/) if you purchased BigAnimal directly from EDB -- [Setting up your Azure Marketplace account](/biganimal/latest/getting_started/02_azure_market_setup/) if you purchased BigAnimal through Azure Marketplace +- [Setting up your identity provider](/biganimal/latest/getting_started/identity_provider/) if you purchased BigAnimal directly from EDB +- [Setting up your Azure Marketplace account](/biganimal/latest/getting_started/02_azure_market_setup/) if you purchased BigAnimal through Azure Marketplace ## Setting up your database authentication + Don't use the edb_admin database role and edb_admin database created when creating your cluster in your application. Instead, create a new database role and a new database, which provides a high level of isolation in Postgres. If multiple applications are using the same cluster, each database can also contain multiple schemas, essentially a namespace in the database. If you need strict isolation, use a dedicated cluster or dedicated database. If you don't need that strict isolation level, you can deploy a single database with multiple schemas. See [Privileges](https://www.postgresql.org/docs/current/ddl-priv.html) in the PostgreSQL documentation to further customize ownership and roles to your requirements. 
To create a new role and database, first connect using `psql`: @@ -17,34 +18,39 @@ To create a new role and database, first connect using `psql`: ```shell psql -W "postgres://edb_admin@xxxxxxxxx.xxxxx.biganimal.io:5432/edb_admin?sslmode=require" ``` + !!! Note + Avoid storing data in the postgres system database. ## Admin roles ### pg_ba_admin + So that we can effectively manage the cloud resources and ensure users are protected against security threats, BigAnimal provides a special administrative role, pg_ba_admin. The edb_admin user is a member of the pg_ba_admin role. The pg_ba_admin role has privileges similar to a Postgres superuser. Like the edb_admin user, the pg_ba_admin role shouldn't be used for day-to-day application operations and access to the role must be controlled carefully. See [pg_ba_admin role](pg_ba_admin) for details. ### superuser -Superuser access in BigAnimal is available only where the users are in control of their infrastructure. When using your own cloud account, you can grant the edb_admin role superuser privileges for a cluster. See [Superuser access](/biganimal/latest/getting_started/creating_a_cluster/#superuser-access). +Superuser access in BigAnimal is available only where the users are in control of their infrastructure. When using your own cloud account, you can grant the edb_admin role superuser privileges for a cluster. See [Superuser access](/edb-postgres-ai/cloud-service/getting_started/creating_cluster/creating_a_cluster/#superuser-access). When granting superuser privileges, to avoid degrading service or compromising availability, ensure you limit the number of connections used by superusers. 
BigAnimal reserves and requires a few superuser connections for the readiness probe for these reasons: -- To check if the database is up and able to accept connections -- For creating specific roles required in PostgreSQL instances and some extensions + +- To check if the database is up and able to accept connections +- For creating specific roles required in PostgreSQL instances and some extensions !!! note + Superuser privileges allow you to make Postgres configuration changes using `ALTER SYSTEM` queries. We recommend that you don't do this because it might lead to an unpredictable or unrecoverable state of the cluster. In addition, `ALTER SYSTEM` changes aren't replicated across the cluster. For BigAnimal hosted and distributed high-availability clusters, there's no superuser access option. Use the edb_admin role for most superuser level activities. Unsafe activities aren't available to the edb_admin role. Distributed high-availability clusters also have a bdr_superuser role. This isn't a general superuser but a specific user/role that has privileges and access to all the bdr schemas and functions. For more information, see [bdr_superuser](/pgd/latest/security/roles/). - + See the [PostgreSQL documentation on superusers](https://www.postgresql.org/docs/current/role-attributes.html) for best practices. ### Notes on the edb_admin role - -- Changes to system configuration (GUCs) made by edb_admin or other Postgres users don't persist through a reboot or maintenance. Use the BigAnimal portal to modify system configuration. + +- Changes to system configuration (GUCs) made by edb_admin or other Postgres users don't persist through a reboot or maintenance. Use the BigAnimal portal to modify system configuration. - You have to remember your edb_admin password, as EDB doesn't have access to it. If you forget it, you can set a new one in the BigAnimal portal on the Edit Cluster page. 
@@ -52,8 +58,6 @@ See the [PostgreSQL documentation on superusers](https://www.postgresql.org/docs - BigAnimal stores all database-level authentication securely and directly in PostgreSQL. The `edb_admin` user password is `SCRAM-SHA-256` hashed prior to storage. This hash, even if compromised, can't be replayed by an attacker to gain access to the system. - - ## One database with one application For one database hosting a single application, replace `app1` with your preferred user name: @@ -105,6 +109,7 @@ If you use a single database to host multiple schemas, create a database owner a prod1=# create schema app1 authorization app1; prod1=# create schema app2 authorization app2; ``` + ## IAM authentication for Postgres Any user with a supported cloud account connected to a BigAnimal subscription who has the Postgres IAM role iam_aws, iam_azure, or iam_gcp can authenticate to the database using their IAM credentials. @@ -113,20 +118,22 @@ Any user with a supported cloud account connected to a BigAnimal subscription wh Provision your cluster before configuring IAM for Postgres. -1. In BigAnimal, turn on the IAM authentication feature when creating or modifying the cluster: - 1. On the **Additional Settings** tab, under **Authentication**, select **Identity and Access Management (IAM) Authentication**. - 1. Select **Create Cluster** or **Save**. - !!!note - To turn on IAM authentication using the CLI, see [Using IAM authentication on AWS](/biganimal/latest/reference/cli/using_features/#iam-authentication-cli-commands). -1. From your cloud provider, get the user name of each IAM user requiring database access. In the cloud account connected to BigAnimal, use Identity and Access Management (IAM) to perform user management. +1. In BigAnimal, turn on the IAM authentication feature when creating or modifying the cluster: + 1. On the **Additional Settings** tab, under **Authentication**, select **Identity and Access Management (IAM) Authentication**. + 2. 
Select **Create Cluster** or **Save**. + !!!note + + To turn on IAM authentication using the CLI, see [Using IAM authentication on AWS](/biganimal/latest/reference/cli/using_features/#iam-authentication-cli-commands). -1. In Postgres, if the IAM role doesn’t exist yet, use the `CREATE ROLE` command. For example, for AWS, use: +2. From your cloud provider, get the user name of each IAM user requiring database access. In the cloud account connected to BigAnimal, use Identity and Access Management (IAM) to perform user management. + +3. In Postgres, if the IAM role doesn’t exist yet, use the `CREATE ROLE` command. For example, for AWS, use: ``` CREATE ROLE "iam_aws"; ``` -1. For each IAM user, run the `CREATE USER` Postgres command. For example, for AWS, use: +4. For each IAM user, run the `CREATE USER` Postgres command. For example, for AWS, use: ``` CREATE USER "" IN ROLE iam_aws; @@ -140,14 +147,15 @@ If IAM integration is configured for your cluster, you can log in to Postgres us For either method, you must first authenticate to your cloud service provider IAM to get your password or token. -!!! Note +!!! Note + You can continue to log in using your Postgres username and password. However, doing so doesn’t provide IAM authentication even if this feature is configured. -1. Get your credentials for your IAM-managed cloud account. - - For AWS, your password is your access key (in the form <access key id>:<secret access key>). To get your access key, see [get-access-key-info](https://docs.aws.amazon.com/cli/latest/reference/sts/get-access-key-info.html) To get your authorization token, see [get-authorization-token](https://docs.aws.amazon.com/cli/latest/reference/ecr-public/get-authorization-token.html). - - For GCP, to get your access token, see [Create a short-lived access token](https://cloud.google.com/iam/docs/create-short-lived-credentials-direct). 
-    - For Azure, to get your access token, see [the get-access-token command](https://learn.microsoft.com/en-us/cli/azure/account?view=azure-cli-latest#az-account-get-access-token()).
-1. Connect to Postgres using your IAM credentials.

+1. Get your credentials for your IAM-managed cloud account.
+    - For AWS, your password is your access key (in the form <access key id>:<secret access key>). To get your access key, see [get-access-key-info](https://docs.aws.amazon.com/cli/latest/reference/sts/get-access-key-info.html). To get your authorization token, see [get-authorization-token](https://docs.aws.amazon.com/cli/latest/reference/ecr-public/get-authorization-token.html).
+    - For GCP, to get your access token, see [Create a short-lived access token](https://cloud.google.com/iam/docs/create-short-lived-credentials-direct).
+    - For Azure, to get your access token, see [the get-access-token command](https://learn.microsoft.com/en-us/cli/azure/account?view=azure-cli-latest#az-account-get-access-token()).
+2. Connect to Postgres using your IAM credentials.

### Using IAM authentication CLI commands

diff --git a/product_docs/docs/biganimal/release/using_cluster/02_connecting_your_cluster/01_connecting_from_azure/index.mdx b/product_docs/docs/biganimal/release/using_cluster/02_connecting_your_cluster/01_connecting_from_azure/index.mdx
index 6eed8a25327..a2a0ed8f2ea 100644
--- a/product_docs/docs/biganimal/release/using_cluster/02_connecting_your_cluster/01_connecting_from_azure/index.mdx
+++ b/product_docs/docs/biganimal/release/using_cluster/02_connecting_your_cluster/01_connecting_from_azure/index.mdx
@@ -13,29 +13,32 @@ Three different methods enable you to connect to your cluster from your applicat

Azure private endpoint is a network interface that securely connects a private IP address from your Azure virtual network (VNet) to an external service.
You grant access only to a single cluster instead of the entire BigAnimal resource virtual network, thus ensuring maximum network isolation. Other advantages include:
+
- You need to configure the Private Link only once. Then you can use multiple private endpoints to connect applications from many different VNets.
- There's no risk of IP address conflicts.

Private endpoints are the same mechanism used by first-party Azure services such as CosmosDB for private VNet connectivity. For more information, see [What is a private endpoint?](https://docs.microsoft.com/en-us/azure/private-link/private-endpoint-overview).

Private Links (required by private endpoints) aren't free, however. See [Azure Private Link pricing](https://azure.microsoft.com/en-us/pricing/details/private-link/#pricing) for information on the associated costs.

!!!note
+
If you set up a private endpoint and want to change to a public network, you must remove the private endpoint resources before making the change.
!!!

### Using BigAnimal's cloud account

-When using BigAnimal's cloud account, when creating a cluster, you provide BigAnimal with your Azure subscription ID (see [Networking](/biganimal/latest/getting_started/creating_a_cluster/#cluster-settings-tab)). BigAnimal, in turn, provides you with a private link alias, which you can use to connect to your cluster privately.
+When using BigAnimal's cloud account, you provide BigAnimal with your Azure subscription ID when creating a cluster (see [Networking](/edb-postgres-ai/cloud-service/getting_started/creating_cluster/creating_a_cluster/#cluster-settings-tab)). BigAnimal, in turn, provides you with a private link alias, which you can use to connect to your cluster privately.

-1. When creating your cluster, on the **Cluster Settings** tab, in the **Network** section:
-   1. Select **Private**.
+1. When creating your cluster, on the **Cluster Settings** tab, in the **Network** section:

-   1. 
Enter your application's Azure subscription ID. + 1. Select **Private**. -1. After the cluster is created, go to the cluster details to see the corresponding endpoint service name. You need the service name while creating a private endpoint. + 2. Enter your application's Azure subscription ID. -1. Create a private endpoint in the client's VNet. The steps for creating a private endpoint in the client's VNet are the same whether you're using BigAnimal's cloud or your own. See [Step 1: Create an Azure private endpoint](#step-1-create-an-azure-private-endpoint) and [Step 2: Create an Azure Private DNS Zone for the private endpoint](#step-2-create-an-azure-private-dns-zone-for-the-private-endpoint). +2. After the cluster is created, go to the cluster details to see the corresponding endpoint service name. You need the service name while creating a private endpoint. -1. In your application's Azure account, select **Private Link Center**, and then select **Private endpoints**. Select the endpoint you created previously, and use the service name provided in the details section in BigAnimal to access your cluster. +3. Create a private endpoint in the client's VNet. The steps for creating a private endpoint in the client's VNet are the same whether you're using BigAnimal's cloud or your own. See [Step 1: Create an Azure private endpoint](#step-1-create-an-azure-private-endpoint) and [Step 2: Create an Azure Private DNS Zone for the private endpoint](#step-2-create-an-azure-private-dns-zone-for-the-private-endpoint). + +4. In your application's Azure account, select **Private Link Center**, and then select **Private endpoints**. Select the endpoint you created previously, and use the service name provided in the details section in BigAnimal to access your cluster. 
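The private hostname you connect to through the endpoint combines the cluster ID and the organization ID, as in the worked examples later on these pages. A small sketch composing it from those example IDs (the `private` label and the `biganimal.io` domain follow the example host names shown in this documentation):

```shell
# Example IDs taken from the walkthroughs in this documentation.
CLUSTER_ID="p-mckwlbakq5"
ORG_ID="brcxzr08qr7rbei1"

# Private cluster host names place the cluster ID in front of the
# organization's private subdomain.
PGHOST="${CLUSTER_ID}.private.${ORG_ID}.biganimal.io"
echo "${PGHOST}"
```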
### Using your Azure account

@@ -60,7 +63,6 @@ Assume that your cluster is on a subscription called `development` and is being

- Virtual network: `vnet-client`
- Virtual network subnet: `snet-client`

-
#### Prerequisites

To walk through an example in your own environment, you need:
@@ -76,67 +78,73 @@ To walk through an example in your own environment, you need:

- A Postgres client, such as [psql](https://www.postgresql.org/download/), installed on your client VM.

!!!note
+
BigAnimal automatically provisions an Azure Private Link service for every private Postgres cluster. You can easily find this managed Private Link service by looking for the one that has the Cluster ID in its name, like `p-mckwlbakq5-rw-internal-lb`.
!!!

In this example, you create an Azure private endpoint in your client VM's virtual network. After you create the private endpoint, you can use its private IP address to access the Postgres cluster. You must perform this procedure for every virtual network you want to connect from.

-
-#### Step 1: Create an Azure private endpoint
+#### Step 1: Create an Azure private endpoint

Create an Azure private endpoint in each client virtual network that needs to connect to your BigAnimal cluster. You can create the private endpoint using either the [Azure portal](#using-the-azure-portal) or the [Azure CLI](#using-the-azure-cli).

##### Using the Azure portal

-1. If you prefer to create the private endpoint using the Azure portal, on the upper-left side of the screen, select **Create a resource > Networking > Private Link**. Alternatively. in the search box enter `Private Link`.
+1. If you prefer to create the private endpoint using the Azure portal, on the upper-left side of the screen, select **Create a resource > Networking > Private Link**. Alternatively, in the search box, enter `Private Link`.
+
+2. Select **Create**.
+
+3. In Private Link Center, select **Private endpoints** in the menu on the left.

-2. Select **Create**.

+4. 
In Private endpoints, select **Add**.

-3. In Private Link Center, select **Private endpoints** in the menu on the left.
+5. Enter the details for the private endpoint in the **Basics** tab:

-4. In Private endpoints, select **Add**.
+   ![](../images/create_private_endpoint.png)

-5. Enter the details for the private endpoint in the **Basics** tab:
+   - Subscription — Select the subscription where your vm-client resides. In this case, it's `test`.

-   ![](../images/create_private_endpoint.png)
+   - Resource group — Select a resource group in the same region where your vm-client resides. This example uses `rg-client`.

-   - Subscription — Select the subscription where your vm-client resides. In this case, it's `test`.
+   - Name — Use a unique name for the private endpoint. For example, enter `vnet-client-private-endpoint`, where `vnet-client` is the client VNet ID.

-   - Resource group — Select a resource group in the same region where your vm-client resides. This exanple uses `rg-client`.
+   - Network Interface Name — This takes the name of the private endpoint and appends it with `-nic`.

-   - Name — Use a unique name for the private endpoint. For example, enter `vnet-client-private-endpoint`, where `vnet-client` is the client VNet ID.
+   - Region — The private endpoint must be in the same region as your VNet. In this case, it's `(Asia Pacific) Japan East`.

-   - Network Interface Name — This takes the name of the private endpoint and appends it with `-nic`.
+   !!!Note

-   - Region — The private endpoint must be in the same region as your VNet. In this case, it's `(Asia Pacific) Japan East`.
+
-   !!!Note
+   In a later step, you need the private endpoint's name to get its private IP address.

-   In a later step, you need the private endpoint's name to get its private IP address.
+6. On the **Resource** tab, connect the private endpoint to the Private Link service that
On the **Resource** tab, connect the private endpoint to the Private Link service that -you created by entering the following details: + ![](../images/create_private_endpoint_resource.png) - ![](../images/create_private_endpoint_resource.png) + - Connection Method — Select **Connect to an Azure resource in my directory**. - - Connection Method — Select **Connect to an Azure resource in my directory**. + - Subscription — Select the subscription in which the target BigAnimal Postgres cluster resides. In this example, it's `development`. - - Subscription — Select the subscription in which the target BigAnimal Postgres cluster resides. In this example, it's `development`. - - Resource type — Select **Microsoft.Network/privateLinkServices**. This is the type of resource you want to connect to using this private endpoint. - - Resource — Select the Private Link service resource whose name starts with the cluster ID. In this case, it's **p-mckwlbakq5-rw-internal-lb**. + - Resource type — Select **Microsoft.Network/privateLinkServices**. This is the type of resource you want to connect to using this private endpoint. - !!!Note - BigAnimal creates the Private Link service in a resource group managed by Azure Kubernetes Service in the corresponding project/region. Its name follows this pattern: `MC_dp-PROJECT_ID-REGION-counter_REGION`. In this example, it's `MC_dp-brcxzr08qr7rbei1-japaneast-1_japaneast`. + - Resource — Select the Private Link service resource whose name starts with the cluster ID. In this case, it's **p-mckwlbakq5-rw-internal-lb**. + !!!Note -7. On the **Virtual Network** tab, enter the client VM’s Virtual Network details: + BigAnimal creates the Private Link service in a resource group managed by Azure Kubernetes Service in the corresponding project/region. Its name follows this pattern: `MC_dp-PROJECT_ID-REGION-counter_REGION`. In this example, it's `MC_dp-brcxzr08qr7rbei1-japaneast-1_japaneast`. 
- ![](../images/create_private_endpoint_virtual_network.png) - - Virtual Network — Enter the VM client’s virtual network. In this case, it's `vnet-client`. +7. On the **Virtual Network** tab, enter the client VM’s Virtual Network details: - - Subnet — To deploy the private endpoint, you must select a virtual network subnet to receive the private IP address assignment. In this example, the snet client subnet was already defined and will be assigned the private IP address. However, if a subnet isn't yet defined, you can select the default subnet, and a private IP address will be assigned. - - Private IP Configuration — This option defaults to **Dynamically allocate IP address**. This example uses the default. - - Application security group — You can leave this blank, or you can create or assign an Application Security Group. In this example, it's blank. + ![](../images/create_private_endpoint_virtual_network.png) + + - Virtual Network — Enter the VM client’s virtual network. In this case, it's `vnet-client`. + + - Subnet — To deploy the private endpoint, you must select a virtual network subnet to receive the private IP address assignment. In this example, the snet client subnet was already defined and will be assigned the private IP address. However, if a subnet isn't yet defined, you can select the default subnet, and a private IP address will be assigned. + + - Private IP Configuration — This option defaults to **Dynamically allocate IP address**. This example uses the default. + + - Application security group — You can leave this blank, or you can create or assign an Application Security Group. In this example, it's blank. 8. You can either skip or configure both **DNS** and **Tags** as you need and then go to **Review + Create**. @@ -144,7 +152,7 @@ you created by entering the following details: 10. Proceed to [Accessing the cluster](#accessing-the-cluster). 
-##### Using the Azure CLI +##### Using the Azure CLI If you prefer to create the private endpoint using the Azure CLI, either use your local terminal with an Azure CLI profile already configured or open a new Azure Cloud Shell using the Azure portal. @@ -176,6 +184,7 @@ az network private-endpoint create \ - `subscription` is the Azure subscription in which to create the private endpoint. #### Accessing the cluster + You have successfully built a tunnel between your client VM's virtual network and the cluster. You can now access the cluster from the private endpoint in your client VM. The private endpoint's private IP address is associated with an independent virtual network NIC. Get the private endpoint's private IP address using the following commands: ```shell @@ -183,7 +192,7 @@ NICID=$(az network private-endpoint show -n vnet-client-private-endpoint -g rg-c az network nic show -n ${NICID##*/} -g rg-client --query "ipConfigurations[0].privateIPAddress" -o tsv __OUTPUT__ 100.64.111.5 - ``` +``` From the client VM `vm-client`, access the cluster by using the private IP address: @@ -207,6 +216,7 @@ EDB strongly recommends using a [private Azure DNS zone](https://docs.microsoft. With a private DNS zone, you configure a DNS entry for your cluster's public hostname. Azure DNS ensures that all requests to that domain name from your VNet resolve to the private endpoint's IP address instead of the cluster's IP address. !!! Note + You need to create a single private Azure DNS zone for each VNet, even if you're connecting to multiple clusters. If you already created a DNS zone for this VNet, you can skip to step 6. 1. In the Azure portal, search for `Private DNS Zones`. @@ -243,6 +253,7 @@ With a private DNS zone, you configure a DNS entry for your cluster's public hos ``` !!! Tip + You might need to flush your local DNS cache to resolve your domain name to the new private IP address after adding the private endpoint. 
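Once the private endpoint's IP address is known, accessing the cluster from `vm-client` is an ordinary `psql` invocation pointed at that address. A sketch composing the command from the example IP in this walkthrough (`edb_admin` as the role and database name is a placeholder; use your own):

```shell
# Example private endpoint IP from this walkthrough.
ENDPOINT_IP="100.64.111.5"

# Placeholder role and database names -- substitute your own.
PSQL_CMD="psql -h ${ENDPOINT_IP} -p 5432 -U edb_admin edb_admin"
echo "${PSQL_CMD}"

# Run the printed command from vm-client to reach the cluster privately.
```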
## Other methods diff --git a/product_docs/docs/biganimal/release/using_cluster/02_connecting_your_cluster/02_connecting_from_aws/index.mdx b/product_docs/docs/biganimal/release/using_cluster/02_connecting_your_cluster/02_connecting_from_aws/index.mdx index 024f9e7ab7a..bb9a68e9424 100644 --- a/product_docs/docs/biganimal/release/using_cluster/02_connecting_your_cluster/02_connecting_from_aws/index.mdx +++ b/product_docs/docs/biganimal/release/using_cluster/02_connecting_your_cluster/02_connecting_from_aws/index.mdx @@ -5,8 +5,6 @@ redirects: - /biganimal/release/using_cluster/02_connecting_your_cluster/02_connecting_from_aws/01_vpc_endpoint/ --- - - AWS VPC endpoint (AWS Private Link) service is a network interface that securely connects a private IP address from your AWS VPC to an external service. You grant access only to a single cluster instead of the entire BigAnimal resource VPC, thus ensuring maximum network isolation. For more information, see [VPC endpoint services (AWS PrivateLink)](https://docs.aws.amazon.com/vpc/latest/privatelink/endpoint-service.html). @@ -15,18 +13,19 @@ The way you create a private endpoint differs when you're using your AWS account ## Using BigAnimal's cloud account -When using BigAnimal's cloud account, you provide BigAnimal with your AWS account ID when creating a cluster (see [Networking](/biganimal/latest/getting_started/creating_a_cluster/#cluster-settings-tab)). BigAnimal, in turn, provides you with an AWS service name, which you can use to connect to your cluster privately. +When using BigAnimal's cloud account, you provide BigAnimal with your AWS account ID when creating a cluster (see [Networking](/edb-postgres-ai/cloud-service/getting_started/creating_cluster/creating_a_cluster/#cluster-settings-tab)). BigAnimal, in turn, provides you with an AWS service name, which you can use to connect to your cluster privately. + +1. When creating your cluster, on the **Cluster Settings** tab, in the **Network** section: -1. 
When creating your cluster, on the **Cluster Settings** tab, in the **Network** section: - 1. Select **Private**. + 1. Select **Private**. - 1. Enter your application's AWS account ID using just numbers (no hyphens), for example, 123456789012. + 2. Enter your application's AWS account ID using just numbers (no hyphens), for example, 123456789012. -1. After the cluster is created, go to the cluster details to see the corresponding endpoint service name. You need the service name while creating a VPC endpoint. +2. After the cluster is created, go to the cluster details to see the corresponding endpoint service name. You need the service name while creating a VPC endpoint. -1. Create a VPC endpoint in the client's VPC. The steps for creating a VPC endpoint in the client's VPC are the same whether you're using BigAnimal's cloud or your own. The steps are available [here](#step-2-create-a-vpc-endpoint-in-the-clients-vpc). +3. Create a VPC endpoint in the client's VPC. The steps for creating a VPC endpoint in the client's VPC are the same whether you're using BigAnimal's cloud or your own. The steps are available [here](#step-2-create-a-vpc-endpoint-in-the-clients-vpc). -1. In your application's AWS account, select **VPC**, and then select **Endpoints**. Select the endpoint you created previously, and use the service name provided in the details section in BigAnimal to access your cluster. +4. In your application's AWS account, select **VPC**, and then select **Endpoints**. Select the endpoint you created previously, and use the service name provided in the details section in BigAnimal to access your cluster. ## Using your AWS account @@ -44,6 +43,7 @@ There's an associated cost of resources, however. For more information, see [VPC endpoint services (AWS PrivateLink)](https://docs.aws.amazon.com/vpc/latest/privatelink/endpoint-service.html). #### Example + This example shows how to connect your cluster using VPC endpoints. 
Assume that your cluster is on an account called `development` and is being accessed from a client on another account called `test`. It has the following properties: @@ -65,81 +65,80 @@ Assume that your cluster is on an account called `development` and is being acce To walk through an example in your own environment, you need: -- Your cluster URL. You can find the URL in the **Connect** tab of your cluster instance in the BigAnimal portal. - +- Your cluster URL. You can find the URL in the **Connect** tab of your cluster instance in the BigAnimal portal. #### Step 1: Create an endpoint service for your cluster In the AWS account connected to BigAnimal, create an endpoint service to provide access to your clusters from other VPCs in other AWS accounts. Perform this procedure for each cluster to which you want to provide access. -1. Open the [Amazon EC2 console](https://console.aws.amazon.com/ec2/). Ensure that the region where your cluster is deployed is selected in the upper-right corner of the console. +1. Open the [Amazon EC2 console](https://console.aws.amazon.com/ec2/). Ensure that the region where your cluster is deployed is selected in the upper-right corner of the console. -1. In the navigation pane, under **Load Balancing**, select **Load Balancers**. +2. In the navigation pane, under **Load Balancing**, select **Load Balancers**. -1. Identify the load balancer that's tagged with the ID of the cluster to which you want to connect (`-rw-internal-lb`), for example, `p-96fh28m3cb-rw-internal-lb`. Note the name of that network load balancer. +3. Identify the load balancer that's tagged with the ID of the cluster to which you want to connect (`-rw-internal-lb`), for example, `p-96fh28m3cb-rw-internal-lb`. Note the name of that network load balancer. -1. Open the [Amazon VPC console](https://console.aws.amazon.com/vpc/). +4. Open the [Amazon VPC console](https://console.aws.amazon.com/vpc/). -1. 
From the navigation pane on the left, under **Virtual Private Cloud**, select **Endpoint Services**, and then select **Create endpoint service**. +5. From the navigation pane on the left, under **Virtual Private Cloud**, select **Endpoint Services**, and then select **Create endpoint service**. -1. Enter a suitable name for the endpoint service. +6. Enter a suitable name for the endpoint service. -1. Select **Network** for the load balancer type. +7. Select **Network** for the load balancer type. -1. Under **Available load balancers**, select the network load balancer of the cluster to which you want to connect. - -1. Leave all the other fields with their default values, and select **Create**. +8. Under **Available load balancers**, select the network load balancer of the cluster to which you want to connect. -1. Under **Details**, note the **Service name** of the created endpoint service (for example, `com.amazonaws.vpce.us-east-1.vpce-svc-0e123abc123198abc`). You need the service name while creating a VPC endpoint. +9. Leave all the other fields with their default values, and select **Create**. -1. In the navigation pane, select **Endpoint Services**. +10. Under **Details**, note the **Service name** of the created endpoint service (for example, `com.amazonaws.vpce.us-east-1.vpce-svc-0e123abc123198abc`). You need the service name while creating a VPC endpoint. -1. Select your endpoint service from the **Actions** list, and select **Allow principals**. - -1. Add the AWS account with which you want to connect to the endpoint service by specifying the ARN for the principal. The ARN must be in this format: +11. In the navigation pane, select **Endpoint Services**. - arn:aws:iam::<AWS ACCOUNT ID>:root +12. Select your endpoint service from the **Actions** list, and select **Allow principals**. +13. Add the AWS account with which you want to connect to the endpoint service by specifying the ARN for the principal. 
The ARN must be in this format: + + arn:aws:iam::<AWS ACCOUNT ID>:root #### Step 2: Create a VPC endpoint in the client's VPC Now that your endpoint service is created, you can connect it to the cluster VPC using a VPC endpoint. Perform this procedure in your application's AWS account. !!! Note + In your application's AWS account, ensure that you allow your application's security group to connect to your cluster. -1. Open the [Amazon VPC console](https://console.aws.amazon.com/vpc/). +1. Open the [Amazon VPC console](https://console.aws.amazon.com/vpc/). + +2. Ensure that the region where your cluster is deployed is selected in the upper-right corner of the console. + +3. From the navigation pane on the left, under **Virtual Private Cloud**, select **Endpoints**, and then select **Create endpoint**. -1. Ensure that the region where your cluster is deployed is selected in the upper-right corner of the console. +4. Enter a suitable name for the endpoint service. -1. From the navigation pane on the left, under **Virtual Private Cloud**, select **Endpoints**, and then select **Create endpoint**. +5. Under **Service category**, select **Other endpoint services**. -1. Enter a suitable name for the endpoint service. +6. Under **Service Name**, enter the name of the endpoint service that you created earlier: -1. Under **Service category**, select **Other endpoint services**. + - If following the example using your AWS account: `com.amazonaws.vpce.us-east-1.vpce-svc-0e123abc123198abc` + - If using BigAnimal's cloud: the service name provided in your BigAnimal cluster's details -1. Under **Service Name**, enter the name of the endpoint service that you created earlier: - - - If following the example using your AWS account: `com.amazonaws.vpce.us-east-1.vpce-svc-0e123abc123198abc` - - If using BigAnimal's cloud: the service name provided in your BigAnimal cluster's details - - To verify whether you successfully allowed access to the endpoint, select **Verify service**. 
+ To verify whether you successfully allowed access to the endpoint, select **Verify service**. -1. Under VPC, select the client's VPC in which to create the endpoint. +7. Under VPC, select the client's VPC in which to create the endpoint. -1. Under **Subnets**, select the subnets (availability zones) in which to create the endpoint network interfaces. Enable the endpoint in all availability zones used by your application. +8. Under **Subnets**, select the subnets (availability zones) in which to create the endpoint network interfaces. Enable the endpoint in all availability zones used by your application. -1. Select **Create endpoint**. +9. Select **Create endpoint**. #### Step 3: Accept and test the connection -1. In your AWS account connected to BigAnimal, select **VPCs**, and then select **Endpoint services**. +1. In your AWS account connected to BigAnimal, select **VPCs**, and then select **Endpoint services**. -1. Select the endpoint service instance you created previously, and accept the endpoint connection request under **Endpoint connections**. +2. Select the endpoint service instance you created previously, and accept the endpoint connection request under **Endpoint connections**. -1. You can now successfully connect to your cluster. +3. You can now successfully connect to your cluster. - In your application's AWS account, select **VPC** and then select **Endpoints**. Select the endpoint you created previously and use the DNS name provided in the details section to access your cluster. + In your application's AWS account, select **VPC** and then select **Endpoints**. Select the endpoint you created previously and use the DNS name provided in the details section to access your cluster. 
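The **Allow principals** step in Step 1 expects the principal ARN in the exact `arn:aws:iam::<AWS ACCOUNT ID>:root` shape. A quick sketch composing it from the example 12-digit account ID used earlier on this page:

```shell
# Example AWS account ID (digits only, no hyphens), as noted earlier.
AWS_ACCOUNT_ID="123456789012"

# The root principal ARN for that account.
PRINCIPAL_ARN="arn:aws:iam::${AWS_ACCOUNT_ID}:root"
echo "${PRINCIPAL_ARN}"
```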
### Other method when using your account

diff --git a/product_docs/docs/biganimal/release/using_cluster/02_connecting_your_cluster/connecting_from_gcp/index.mdx b/product_docs/docs/biganimal/release/using_cluster/02_connecting_your_cluster/connecting_from_gcp/index.mdx
index 1bd5659759e..f324605bd5f 100644
--- a/product_docs/docs/biganimal/release/using_cluster/02_connecting_your_cluster/connecting_from_gcp/index.mdx
+++ b/product_docs/docs/biganimal/release/using_cluster/02_connecting_your_cluster/connecting_from_gcp/index.mdx
@@ -6,124 +6,139 @@ navTitle: From Google Cloud
The way you create a private Google Cloud endpoint differs when you're using your Google Cloud account versus using BigAnimal's cloud account.

## Using BigAnimal's cloud account
-When using BigAnimal's cloud account, when creating a cluster, you provide BigAnimal with your Google Cloud project ID (see [Networking](/biganimal/latest/getting_started/creating_a_cluster/#cluster-settings-tab)). BigAnimal, in turn, provides you with a Google Cloud service attachment, which you can use to connect to your cluster privately.
-1. When creating your cluster, on the **Cluster Settings** tab, in the **Network** section:
-   1. Select **Private**.
+When using BigAnimal's cloud account, you provide BigAnimal with your Google Cloud project ID when creating a cluster (see [Networking](/edb-postgres-ai/cloud-service/getting_started/creating_cluster/creating_a_cluster/#cluster-settings-tab)). BigAnimal, in turn, provides you with a Google Cloud service attachment, which you can use to connect to your cluster privately.

-   1. Enter your application's Google Cloud project ID.
+1. When creating your cluster, on the **Cluster Settings** tab, in the **Network** section:

-1. After the cluster is created, go to the cluster details to see the corresponding service attachment. You need the service attachment while creating a PSC-connected endpoint.
+   1. Select **Private**.

-1. 
Create a connected endpoint in the client's VPC. The steps for creating a connected endpoint in the client's VPC are the same whether you're using BigAnimal's cloud or your cloud. See [Step 2: Create a connected endpoint for the VM client/application](#step-2-create-a-connected-endpoint-for-the-vm-clientapplication). + 2. Enter your application's Google Cloud project ID. -1. In your application's Google Cloud, select **Private Service Connect**, and then select **Connected Endpoints**. Select the endpoint you created previously, and use the service attachment provided in the details section in BigAnimal to access your cluster. +2. After the cluster is created, go to the cluster details to see the corresponding service attachment. You need the service attachment while creating a PSC-connected endpoint. + +3. Create a connected endpoint in the client's VPC. The steps for creating a connected endpoint in the client's VPC are the same whether you're using BigAnimal's cloud or your cloud. See [Step 2: Create a connected endpoint for the VM client/application](#step-2-create-a-connected-endpoint-for-the-vm-clientapplication). + +4. In your application's Google Cloud, select **Private Service Connect**, and then select **Connected Endpoints**. Select the endpoint you created previously, and use the service attachment provided in the details section in BigAnimal to access your cluster. ## Using your Google Cloud account Two different methods enable you to connect to your private cluster from your application's VPC in Google Cloud. Each method offers different levels of accessibility and security. -- You can use Google Cloud [Private Service Connect (PSC)](https://cloud.google.com/vpc/docs/configure-private-service-connect-producer) to publish services using internal IP addresses in your VPC network. PSC is a network interface that securely connects a private IP address from your Google Cloud VPC to an external service. 
You grant access only to a single cluster instead of the entire BigAnimal resource VPC, thus ensuring maximum network isolation. We refer to this process of connecting as using PSC-connected endpoints. +- You can use Google Cloud [Private Service Connect (PSC)](https://cloud.google.com/vpc/docs/configure-private-service-connect-producer) to publish services using internal IP addresses in your VPC network. PSC is a network interface that securely connects a private IP address from your Google Cloud VPC to an external service. You grant access only to a single cluster instead of the entire BigAnimal resource VPC, thus ensuring maximum network isolation. We refer to this process of connecting as using PSC-connected endpoints. -- We recommend the PSC-connected endpoint method, which is most commonly used and is used in the example. However, if required by your organization, you can also use the [VPC peering](vpc_peering) connection method. +- We recommend the PSC-connected endpoint method, which is the most commonly used and is the one shown in the example. However, if required by your organization, you can also use the [VPC peering](vpc_peering) connection method. ### PSC-connected endpoint example + This example shows how to connect your cluster using PSC-connected endpoints. Assume that your cluster is in a project called `development` and is being accessed from a client in another project called `test`. 
It has the following properties: -- BigAnimal cluster: - - Google Cloud Project Project: `development` - - Google Cloud Project ID: `development-001` - - BigAnimal Cluster ID: `p-mckwlbakq5` - - Region where BigAnimal cluster is deployed: `us-central1` - - BigAnimal Organization ID: `brcxzr08qr7rbei1` - - Organization's domain name: `biganimal.io` - - Host Name: `p-mckwlbakq5.private.brcxzr08qr7rbei1.biganimal.io` -- VM Client: - - Google Cloud Project Name: `test` - - Google Cloud Project ID: `test-001` - - VM Client/App: `test-app-1` - - VM Client’s VPC: `client-app-vpc` - - VM Client’s Subnet: `client-app-subnet` - +- BigAnimal cluster: + - Google Cloud Project Name: `development` + - Google Cloud Project ID: `development-001` + - BigAnimal Cluster ID: `p-mckwlbakq5` + - Region where BigAnimal cluster is deployed: `us-central1` + - BigAnimal Organization ID: `brcxzr08qr7rbei1` + - Organization's domain name: `biganimal.io` + - Host Name: `p-mckwlbakq5.private.brcxzr08qr7rbei1.biganimal.io` +- VM Client: + - Google Cloud Project Name: `test` + - Google Cloud Project ID: `test-001` + - VM Client/App: `test-app-1` + - VM Client’s VPC: `client-app-vpc` + - VM Client’s Subnet: `client-app-subnet` #### Prerequisites To walk through an example in your own environment, you need a: -- BigAnimal Postgres cluster deployed with private connectivity. -- VM with a client/application installed in your Google Cloud project. -- Subnet in the VM’s VPC in the same region as the BigAnimal cluster. - +- BigAnimal Postgres cluster deployed with private connectivity. +- VM with a client/application installed in your Google Cloud project. +- Subnet in the VM’s VPC in the same region as the BigAnimal cluster. #### Step 1: Publish a service from BigAnimal !!! Note + Publish a service from BigAnimal in the Google Cloud project connected to your BigAnimal subscription. 
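When you reserve the PSC subnet later in this procedure, the documented recommendation is at least 8 addresses, which means a subnet mask of /29 or shorter. A minimal shell sanity check for a candidate range, using the example CIDR from this walkthrough (adjust the value for your own range):

```shell
# Check that a candidate PSC subnet CIDR meets the recommendation:
# at least 8 addresses, so the prefix must be /29 or shorter.
CIDR="10.247.214.0/29"          # example range from this walkthrough
PREFIX="${CIDR#*/}"             # prefix length, e.g. 29
ADDRS=$(( 1 << (32 - PREFIX) )) # 2^(32-prefix) addresses in the block
if [ "$PREFIX" -le 29 ] && [ "$ADDRS" -ge 8 ]; then
  echo "OK: /$PREFIX provides $ADDRS addresses"
else
  echo "Too small: /$PREFIX provides only $ADDRS addresses"
fi
```

This checks only the size of the block. Overlap with already-reserved ranges still has to be verified in the Google Cloud console, as described in the IP-range note in the subnet-reservation step.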
In the Google Cloud project connected to BigAnimal, to provide access to your cluster from other VPCs in other Google Cloud projects, create a PSC published service. Publish a service from BigAnimal for each Postgres cluster to which you want to provide access. -1. Get the hostname of your Postgres cluster from the **Connect** tab of the Cluster page on the BigAnimal portal (`P-mckwlbakq5.private.brcxzr08qr7rbei1.biganimal.io`). +1. Get the hostname of your Postgres cluster from the **Connect** tab of the Cluster page on the BigAnimal portal (`P-mckwlbakq5.private.brcxzr08qr7rbei1.biganimal.io`). -1. Using Cloudshell, the command prompt, or some other terminal, get the internal IP address of the host by performing a ping, nslookup, or dig +short <host> against the hostname (`10.247.200.9`). +2. Using Cloud Shell, the command prompt, or some other terminal, get the internal IP address of the host by running `ping`, `nslookup`, or `dig +short <host>` against the hostname (`10.247.200.9`). -1. In the Google Cloud portal, go to **Network Services > Load balancing**. +3. In the Google Cloud portal, go to **Network Services > Load balancing**. -1. In the Filter area, under **Load Balancers**, select **Addresses** and filter for the host IP (`10.247.200.9`). Note the load balancer name (`a58262cd80b234a3aa917b719e69843f`). +4. In the Filter area, under **Load Balancers**, select **Addresses** and filter for the host IP (`10.247.200.9`). Note the load balancer name (`a58262cd80b234a3aa917b719e69843f`). -1. Go to **Private Service Connect > Published Services**. +5. Go to **Private Service Connect > Published Services**. -1. Select **+ Publish Service**. - 1. Under **Load Balancer Type**: +6. Select **+ Publish Service**. - 1. Select **Internal passthrough Network Load Balancer**. + 1. Under **Load Balancer Type**: - 1. In the **Internal load balancer** field, paste the load balancer name (`a58262cd80b234a3aa917b719e69843f`). - 1. 
For **Service Name**, enter the published service a name (`p-mckwlbakq5`). - 1. For **Subnets**, select **Reserve New Subnet**. + 1. Select **Internal passthrough Network Load Balancer**. -1. In the Reserve subnet for Private Service Connect window, enter the following details, and then select **Add**. - 1. For **Name**, use the name of the Postgres cluster (`p-mckwlbakq5`). + 2. In the **Internal load balancer** field, paste the load balancer name (`a58262cd80b234a3aa917b719e69843f`). + 2. For **Service Name**, enter a name for the published service (`p-mckwlbakq5`). + 3. For **Subnets**, select **Reserve New Subnet**. - 1. For **IPv4 range**, assign the CIDR for the field IPv4 range, for example, `10.247.214.0/29`. - !!! Note "Recommendations for IP range" - - Allocate at least 8 IP addresses to the CIDR. The subnet mask must not be greater than 29. - - Avoid overlap with other reserved IP ranges by not allocating too many IP addresses at one time. - - If you encounter the error "This IPv4 address range overlaps with a subnet you already added. Enter an address range that doesn't overlap.", use another CIDR block until no error is returned. +7. In the Reserve subnet for Private Service Connect window, enter the following details, and then select **Add**. -1. (Optional) To accept connections automatically, add the consumer (where the client app resides) Google Cloud project ID (`test-001`). + 1. For **Name**, use the name of the Postgres cluster (`p-mckwlbakq5`). -1. Select **Add Service** and get the name of the service attachment. You might need to select the newly created published service to find the name of the service attachment. (`projects/development-001/regions/us-central1/serviceAttachments/p-mckwlbakq5`). + 2. For **IPv4 range**, assign a CIDR for the **IPv4 range** field, for example, `10.247.214.0/29`. + !!! Note "Recommendations for IP range" + + - Allocate at least 8 IP addresses to the CIDR. The subnet mask must not be greater than 29. 
+ - Avoid overlap with other reserved IP ranges by not allocating too many IP addresses at one time. + - If you encounter the error "This IPv4 address range overlaps with a subnet you already added. Enter an address range that doesn't overlap.", use another CIDR block until no error is returned. + +8. (Optional) To accept connections automatically, add the consumer (where the client app resides) Google Cloud project ID (`test-001`). + +9. Select **Add Service** and get the name of the service attachment. You might need to select the newly created published service to find the name of the service attachment. (`projects/development-001/regions/us-central1/serviceAttachments/p-mckwlbakq5`). #### Step 2: Create a connected endpoint for the VM client/application !!! Note + Create a connected endpoint for the VM client/application in the Google Cloud project where your VM client/application resides. -1. From the Google Cloud console, switch over to the project where your VM client/application resides (`test`). - -1. To get the VPC of your VM (`client-app-vpc`), go to **Compute Engine > VM Instances**. Under **Network Interface**, note the network information. - -1. To create an endpoint with the VPC, go to **Network Services > Private Service Connect**. Under **Connected Endpoints**, select **+ Connect Endpoint**. - 1. For the target, select **Published service**, and use the service attachment captured earlier (`projects/development-001/regions/us-central1/serviceAttachments/p-mckwlbakq5`). - - 1. For the endpoint name, use the name of your VM client/application (`test-app-1`). - 1. For the network (VPC), use the name of your VM client’s VPC (`client-app-vpc`). - 1. For the subnetwork, use your VM client’s subnet (`client-app-subnet`). - !!! 
Note - If no subnet is available, create a subnet in the VPC for the region where your Postgres cluster was created as shown in [this knowledge base article](https://support.biganimal.com/hc/en-us/articles/20383247227801-GCP-Connect-to-BigAnimal-private-cluster-using-GCP-Private-Service-Connect#h_01H4NMNNSFQXNTX78W08Q3G39K). - 1. For the IP address, create an IP address, or choose an existing IP that isn't used by the other endpoints. - 1. Enable **Global Access**. - !!! Note - If your VM is running in a different region from BigAnimal, then always enable **Global Access**. -1. Select **Add Endpoint**. - -1. Check to see if the endpoint status is Accepted, and obtain the IP address. - !!! Note - If the endpoint status is Pending, see [this knowledge base article](https://support.biganimal.com/hc/en-us/articles/20383247227801-GCP-Connect-to-BigAnimal-private-cluster-using-GCP-Private-Service-Connect#h_01H4NMPGXCSC9V30WNESV52FAV). - -1. Connect to your BigAnimal cluster from your client application using the endpoint IP address (for example, `psql "postgres://edb_admin@:5432/edb_admin?sslmode=require"`). +1. From the Google Cloud console, switch over to the project where your VM client/application resides (`test`). + +2. To get the VPC of your VM (`client-app-vpc`), go to **Compute Engine > VM Instances**. Under **Network Interface**, note the network information. + +3. To create an endpoint with the VPC, go to **Network Services > Private Service Connect**. Under **Connected Endpoints**, select **+ Connect Endpoint**. + + 1. For the target, select **Published service**, and use the service attachment captured earlier (`projects/development-001/regions/us-central1/serviceAttachments/p-mckwlbakq5`). + + 2. For the endpoint name, use the name of your VM client/application (`test-app-1`). + + 3. For the network (VPC), use the name of your VM client’s VPC (`client-app-vpc`). + + 4. For the subnetwork, use your VM client’s subnet (`client-app-subnet`). + !!! 
Note + + If no subnet is available, create a subnet in the VPC for the region where your Postgres cluster was created as shown in [this knowledge base article](https://support.biganimal.com/hc/en-us/articles/20383247227801-GCP-Connect-to-BigAnimal-private-cluster-using-GCP-Private-Service-Connect#h_01H4NMNNSFQXNTX78W08Q3G39K). + + 5. For the IP address, create a new IP address or choose an existing one that isn't used by other endpoints. + + 6. Enable **Global Access**. + !!! Note + + If your VM is running in a different region from BigAnimal, then always enable **Global Access**. + +4. Select **Add Endpoint**. + +5. Check to see if the endpoint status is Accepted, and obtain the IP address. + !!! Note + + If the endpoint status is Pending, see [this knowledge base article](https://support.biganimal.com/hc/en-us/articles/20383247227801-GCP-Connect-to-BigAnimal-private-cluster-using-GCP-Private-Service-Connect#h_01H4NMPGXCSC9V30WNESV52FAV). + +6. Connect to your BigAnimal cluster from your client application using the endpoint IP address (for example, `psql "postgres://edb_admin@<endpoint-ip>:5432/edb_admin?sslmode=require"`). #### Step 3: (Optional) Set up a private DNS zone diff --git a/product_docs/docs/biganimal/release/using_cluster/03_modifying_your_cluster/index.mdx b/product_docs/docs/biganimal/release/using_cluster/03_modifying_your_cluster/index.mdx index e8abf58baf3..46f5652d15f 100644 --- a/product_docs/docs/biganimal/release/using_cluster/03_modifying_your_cluster/index.mdx +++ b/product_docs/docs/biganimal/release/using_cluster/03_modifying_your_cluster/index.mdx @@ -9,55 +9,56 @@ You can modify your cluster by modifying its [configuration settings](#modify-yo You can also modify your cluster by installing Postgres extensions. See [Postgres extensions](/biganimal/release/overview/extensions_tools) for more information. - ## Modify your cluster's configuration settings -1. 
Sign in to the [BigAnimal portal](https://portal.biganimal.com). + +2. From the [Clusters](https://portal.biganimal.com/clusters) page, select the name of the cluster you want to edit. -2. From the [Clusters](https://portal.biganimal.com/clusters) page, select the name of the cluster you want to edit. +3. From the top-right corner of the **Cluster Info** panel, select **Edit Cluster**. -3. From the top-right corner of the **Cluster Info** panel, select **Edit Cluster**. +4. You can modify the following settings on the corresponding tab of the Edit Cluster page. -4. You can modify the following settings on the corresponding tab of the Edit Cluster page. + !!! Note - !!! Note - Any changes made to the cluster's instance type, volume type, or volume properties aren't automatically applied to replica settings. To avoid lag during replication, ensure that replica instance and storage types are at least as large as the source cluster's instance and storage types. See [Modify a faraway replica](/biganimal/latest/using_cluster/managing_replicas/#modify-a-faraway-replica). + Any changes made to the cluster's instance type, volume type, or volume properties aren't automatically applied to replica settings. To avoid lag during replication, ensure that replica instance and storage types are at least as large as the source cluster's instance and storage types. See [Modify a faraway replica](/biganimal/latest/using_cluster/managing_replicas/#modify-a-faraway-replica). 
- | Settings | Tab | Notes | - | ---------------------------------------------------- | ------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | - | Cluster type | **Cluster Info** | You can't switch from a single-node cluster or a high-availability cluster to a distributed high-availability cluster or vice versa. | - | Number of replicas (for a high-availability cluster) | **Cluster Info** | — | - | Cluster name and password | **Cluster Settings** | — | - | Instance type | **Cluster Settings** | Changing the instance type can incur higher cloud infrastructure charges. | - | Volume type | **Cluster Settings** | You can't switch between the io2 and io2 Block Express volume types in an AWS cluster. | - | Volume properties | **Cluster Settings** | It can take up to six hours to tune IOPS or resize the disks of your cluster because AWS requires a cooldown period after volume modifications, as explained in [Limitations](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/modify-volume-requirements.html). The volume properties are disabled and can't be modified while this is in progress. | - | Networking type (public or private) | **Cluster Settings** | If you're using Azure and previously set up a private link and want to change to a public network, you must remove the private link resources before making the change. | - | Nodes (for a distributed high-availability cluster) | **Data Groups** | After you create your cluster, you can't change the number of data nodes. | - | Database configuration parameters | **DB Configuration** | If you're using faraway replicas, only a small subset of parameters are editable. 
These parameters need to be modified in the replica when increased in the replica's source cluster. See [Modify a faraway replica](/biganimal/latest/using_cluster/managing_replicas/#modify-a-faraway-replica) for details. | - | Retention period for backups | **Additional Settings** | — | - | Custom maintenance window | **Additional Settings** | Set or modify a maintenance window in which maintenance upgrades occur for the cluster. See [Maintenance](/biganimal/latest/getting_started/creating_a_cluster/#maintenance). | - | Read-only workloads | **Additional Settings** | Enabling read-only workloads can incur higher cloud infrastructure charges. | - | PgBouncer | **Additional Settings** | Enabling PgBouncer incurs additional infrastructure costs that depend on your cloud provider. See [PgBouncer costs](../../pricing_and_billing/#pgbouncer-costs). | - | Identity and Access Management (IAM) Authentication | **Additional Settings** | Turn on the ability to log in to Postgres using AWS IAM credentials. You must then run a command to add each user’s credentials to a role that uses IAM authentication in Postgres. See [IAM authentication for Postgres](../01_postgres_access/#iam-authentication-for-postgres). | - | Superuser access | **Additional Settings** | Disabling the option removes superuser access for edb_admin, but any other superusers existing in the database retain their superuser privileges. 
| + | Settings | Tab | Notes | + | ---------------------------------------------------- | ----------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | + | Cluster type | **Cluster Info** | You can't switch from a single-node cluster or a high-availability cluster to a distributed high-availability cluster or vice versa. | + | Number of replicas (for a high-availability cluster) | **Cluster Info** | — | + | Cluster name and password | **Cluster Settings** | — | + | Instance type | **Cluster Settings** | Changing the instance type can incur higher cloud infrastructure charges. | + | Volume type | **Cluster Settings** | You can't switch between the io2 and io2 Block Express volume types in an AWS cluster. | + | Volume properties | **Cluster Settings** | It can take up to six hours to tune IOPS or resize the disks of your cluster because AWS requires a cooldown period after volume modifications, as explained in [Limitations](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/modify-volume-requirements.html). The volume properties are disabled and can't be modified while this is in progress. | + | Networking type (public or private) | **Cluster Settings** | If you're using Azure and previously set up a private link and want to change to a public network, you must remove the private link resources before making the change. | + | Nodes (for a distributed high-availability cluster) | **Data Groups** | After you create your cluster, you can't change the number of data nodes. | + | Database configuration parameters | **DB Configuration** | If you're using faraway replicas, only a small subset of parameters are editable. 
These parameters need to be modified in the replica when increased in the replica's source cluster. See [Modify a faraway replica](/biganimal/latest/using_cluster/managing_replicas/#modify-a-faraway-replica) for details. | + | Retention period for backups | **Additional Settings** | — | + | Custom maintenance window | **Additional Settings** | Set or modify a maintenance window in which maintenance upgrades occur for the cluster. See [Maintenance](/edb-postgres-ai/cloud-service/getting_started/creating_cluster/creating_a_cluster/#maintenance). | + | Read-only workloads | **Additional Settings** | Enabling read-only workloads can incur higher cloud infrastructure charges. | + | PgBouncer | **Additional Settings** | Enabling PgBouncer incurs additional infrastructure costs that depend on your cloud provider. See [PgBouncer costs](../../pricing_and_billing/#pgbouncer-costs). | + | Identity and Access Management (IAM) Authentication | **Additional Settings** | Turn on the ability to log in to Postgres using AWS IAM credentials. You must then run a command to add each user’s credentials to a role that uses IAM authentication in Postgres. See [IAM authentication for Postgres](../01_postgres_access/#iam-authentication-for-postgres). | + | Superuser access | **Additional Settings** | Disabling the option removes superuser access for edb_admin, but any other superusers existing in the database retain their superuser privileges. | 5. Save your changes. - !!! Note - Saving changes might require restarting the database. + !!! Note + + Saving changes might require restarting the database. ## Modify a data group You can modify the data groups in your distributed high-availability cluster by editing the configuration settings. -1. Sign in to the [BigAnimal portal](https://portal.biganimal.com). +1. Sign in to the [BigAnimal portal](https://portal.biganimal.com). -1. On the Clusters page, select the data group you want to edit. Data groups appear under the cluster they reside in. 
+2. On the Clusters page, select the data group you want to edit. Data groups appear under the cluster they reside in. -1. Select **Edit** next to the data group. +3. Select **Edit** next to the data group. -1. Edit the cluster settings in the **Data Groups** tab. See the table in [Modify your cluster configuration settings](#modify-your-clusters-configuration-settings). +4. Edit the cluster settings in the **Data Groups** tab. See the table in [Modify your cluster configuration settings](#modify-your-clusters-configuration-settings). -1. Select **Save**. +5. Select **Save**. -1. In the popup window, confirm your changes. +6. In the popup window, confirm your changes. diff --git a/product_docs/docs/biganimal/release/using_cluster/fault_injection_testing/index.mdx b/product_docs/docs/biganimal/release/using_cluster/fault_injection_testing/index.mdx index 0fc58c2f5b3..0ab7ab33b50 100644 --- a/product_docs/docs/biganimal/release/using_cluster/fault_injection_testing/index.mdx +++ b/product_docs/docs/biganimal/release/using_cluster/fault_injection_testing/index.mdx @@ -12,19 +12,19 @@ the availability and recovery of the cluster. Before using fault injection testing, ensure you meet the following requirements: -+ You've connected your BigAnimal cloud account with your Azure subscription. See [Setting up your Azure Marketplace account](/biganimal/latest/getting_started/02_azure_market_setup/) for more information. -+ You have permissions in your Azure subscription to view and delete VMs and also the ability to view Kubernetes pods via Azure Kubernetes Service RBAC Reader. -+ You have PGD CLI installed. See [Installing PGD CLI](/pgd/latest/cli/installing_cli/#) for more information. -+ You've created a `pgd-cli-config.yml` file in your home directory. See [Configuring PGD CLI](/pgd/latest/cli/configuring_cli/) for more information. +- You've connected your BigAnimal cloud account with your Azure subscription. 
See [Setting up your Azure Marketplace account](/biganimal/latest/getting_started/02_azure_market_setup/) for more information. +- You have permissions in your Azure subscription to view and delete VMs and also the ability to view Kubernetes pods via Azure Kubernetes Service RBAC Reader. +- You have PGD CLI installed. See [Installing PGD CLI](/pgd/latest/cli/installing/) for more information. +- You've created a `pgd-cli-config.yml` file in your home directory. See [Configuring PGD CLI](/pgd/latest/cli/configuring_cli/) for more information. ## Fault injection testing steps Fault injection testing consists of the following steps: -1. Verifying cluster health -2. Determining the write leader node for your cluster -3. Deleting a write leader node from your cluster -4. Monitoring cluster health +1. Verifying cluster health +2. Determining the write leader node for your cluster +3. Deleting a write leader node from your cluster +4. Monitoring cluster health ### Verifying cluster health @@ -54,7 +54,6 @@ For help with a specific command and its parameters, enter `pgd help ` with your EDB subscription token in the following command: +To [install the PGD CLI](/pgd/latest/cli/installing/), for Debian and Ubuntu machines, replace `` with your EDB subscription token in the following command: ```bash curl -1sLf 'https://downloads.enterprisedb.com/<your-token>/postgres_distributed/setup.deb.sh' | sudo -E bash @@ -28,15 +28,16 @@ sudo yum install edb-pgd5-cli To connect to your distributed high-availability BigAnimal cluster using the PGD CLI, you need to [discover the database connection string](/pgd/latest/cli/discover_connections/). From your BigAnimal console: -1. Log in to the [BigAnimal clusters](https://portal.biganimal.com/clusters) view. -1. To show only clusters that work with PGD CLI, in the filter, set **Cluster Type** to **Distributed High Availability**. -1. Select your cluster. -1. In the view of your cluster, select the **Connect** tab. -1. 
Copy the read/write URI from the connection info. This is your connection string. +1. Log in to the [BigAnimal clusters](https://portal.biganimal.com/clusters) view. +2. To show only clusters that work with PGD CLI, in the filter, set **Cluster Type** to **Distributed High Availability**. +3. Select your cluster. +4. In the view of your cluster, select the **Connect** tab. +5. Copy the read/write URI from the connection info. This is your connection string. ### Using the PGD CLI with your database connection string -!!! Important +!!!Important + PGD doesn't prompt for interactive passwords. Accordingly, you need a [`.pgpass` file](https://www.postgresql.org/docs/current/libpq-pgpass.html) properly configured to allow access to the cluster. Your BigAnimal cluster's connection information page has all the information needed for the file. Without a properly configured `.pgpass`, you receive a database connection error when using a PGD CLI command, even when using the correct database connection string with the `--dsn` flag. @@ -50,10 +51,11 @@ pgd show-nodes --dsn "" ## PGD commands in BigAnimal -!!! Note +!!!Note + Three EDB Postgres Distributed CLI commands don't work with distributed high-availability BigAnimal clusters: `create-proxy`, `delete-proxy`, and `alter-proxy-option`. These commands are managed by BigAnimal, as BigAnimal runs on Kubernetes. It's a technical best practice to have the Kubernetes operator handle these functions. !!! - + The examples that follow show the most common PGD CLI commands with a BigAnimal cluster. ### `pgd check-health` @@ -90,7 +92,6 @@ p-mbx2p83u9n-a-3 2604177211 p-mbx2p83u9n-a data ACTIVE ACTIVE Up `pgd show-groups` returns all groups in your distributed high-availability BigAnimal cluster. 
It also notes the node that's the current write leader of each group: - ``` $ pgd show-groups --dsn "postgres://edb_admin@p-mbx2p83u9n-a.pg.biganimal.io:5432/bdrdb?sslmode=require" __OUTPUT__ @@ -105,7 +106,7 @@ p-mbx2p83u9n-a 2800873689 data world true true p-mbx2p83 `pgd switchover` manually changes the write leader of the group and can be used to simulate a [failover](/pgd/latest/quickstart/further_explore_failover). -``` +``` $ pgd switchover --group-name world --node-name p-mbx2p83u9n-a-2 --dsn "postgres://edb_admin@p-mbx2p83u9n-a.pg.biganimal.io:5432/bdrdb?sslmode=require" __OUTPUT__ switchover is complete diff --git a/product_docs/docs/pgd/5/reference/conflict_functions.mdx b/product_docs/docs/pgd/5/reference/conflict_functions.mdx index fff0d5bd099..c0fcf9ab035 100644 --- a/product_docs/docs/pgd/5/reference/conflict_functions.mdx +++ b/product_docs/docs/pgd/5/reference/conflict_functions.mdx @@ -30,7 +30,7 @@ The recognized methods for conflict detection are: ### Notes -For more information about the difference between `column_commit_timestamp` and `column_modify_timestamp` conflict detection methods, see [Current versus commit timestamp](../consistency/column-level-conflicts/03_timestamps/#comparing-column_modify_timestamp-and-column_commit_timestamp). +For more information about the difference between `column_commit_timestamp` and `column_modify_timestamp` conflict detection methods, see [Current versus commit timestamp](../consistency/column-level-conflicts/03_timestamps). This function uses the same replication mechanism as `DDL` statements. This means the replication is affected by the [ddl filters](../repsets/#ddl-replication-filtering) configuration.
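As a hedged illustration of how a conflict detection method from this page is applied, the sketch below switches a table to column-level commit-timestamp detection. It assumes the `bdr.alter_table_conflict_detection(relation, method, column_name)` form from the PGD reference and a hypothetical `my_app.orders` table; verify the exact signature against your PGD version before use.

```sql
-- Hypothetical table and column names. Because this function uses the
-- DDL replication mechanism, the call is subject to the configured
-- DDL replication filters, as noted above.
SELECT bdr.alter_table_conflict_detection(
    relation    := 'my_app.orders'::regclass,
    method      := 'column_commit_timestamp',
    column_name := 'cmt_ts'
);
```

Run this on one node only; like DDL, it replicates to the rest of the cluster through the same mechanism.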