diff --git a/docs/Acquiring-Preparing-Modeling-Data/Acquiring-and-Preparing-Data-in-the-Data-Builder/accelerate-table-data-access-with-in-memory-storage-407d1df.md b/docs/Acquiring-Preparing-Modeling-Data/Acquiring-and-Preparing-Data-in-the-Data-Builder/accelerate-table-data-access-with-in-memory-storage-407d1df.md index 6b5dbd9..d976ac7 100644 --- a/docs/Acquiring-Preparing-Modeling-Data/Acquiring-and-Preparing-Data-in-the-Data-Builder/accelerate-table-data-access-with-in-memory-storage-407d1df.md +++ b/docs/Acquiring-Preparing-Modeling-Data/Acquiring-and-Preparing-Data-in-the-Data-Builder/accelerate-table-data-access-with-in-memory-storage-407d1df.md @@ -31,7 +31,7 @@ By default, table data is stored on disk. You can improve performance by enablin - In-Memory Storage + *Store Table Data in Memory* diff --git a/docs/Acquiring-Preparing-Modeling-Data/Acquiring-and-Preparing-Data-in-the-Data-Builder/acquiring-data-in-the-data-builder-1f15a29.md b/docs/Acquiring-Preparing-Modeling-Data/Acquiring-and-Preparing-Data-in-the-Data-Builder/acquiring-data-in-the-data-builder-1f15a29.md index baaaedd..0a4dc44 100644 --- a/docs/Acquiring-Preparing-Modeling-Data/Acquiring-and-Preparing-Data-in-the-Data-Builder/acquiring-data-in-the-data-builder-1f15a29.md +++ b/docs/Acquiring-Preparing-Modeling-Data/Acquiring-and-Preparing-Data-in-the-Data-Builder/acquiring-data-in-the-data-builder-1f15a29.md @@ -208,7 +208,7 @@ All the objects you import or create in the *Data Builder* are listed on the *Da - Local Table \(see [Creating a Local Table](creating-a-local-table-2509fe4.md)\) - Graphical View \(see [Creating a Graphical View](../creating-a-graphical-view-27efb47.md)\) - SQL View \(see [Creating an SQL View](../creating-an-sql-view-81920e4.md)\) - - Data Access Control \(see [Create a "Simple Values" Data Access Control](https://help.sap.com/viewer/9f36ca35bc6145e4acdef6b4d852d560/DEV_CURRENT/en-US/5246328ec59045cb9c2aa693daee2557.html "Users with the DW Space Administrator role (or equivalent privileges) can create data access controls in which criteria are defined as simple values. Each user can only see the records that match any of the single values she is authorized for in the permissions entity.") :arrow_upper_right:\) + - Data Access Control \(see [Create a "Single Values" Data Access Control](https://help.sap.com/viewer/9f36ca35bc6145e4acdef6b4d852d560/DEV_CURRENT/en-US/5246328ec59045cb9c2aa693daee2557.html "Users with the DW Space Administrator role (or equivalent privileges) can create data access controls in which criteria are defined as single values. Each user can only see the records that match any of the single values she is authorized for in the permissions entity.") :arrow_upper_right:\) - Analytic Model \(see [Creating an Analytic Model](../Modeling-Data-in-the-Data-Builder/creating-an-analytic-model-e5fbe9e.md)\) - Task Chain \(see [Creating a Task Chain](creating-a-task-chain-d1afbc2.md)\) diff --git a/docs/Acquiring-Preparing-Modeling-Data/Acquiring-and-Preparing-Data-in-the-Data-Builder/add-a-source-7b50e8e.md b/docs/Acquiring-Preparing-Modeling-Data/Acquiring-and-Preparing-Data-in-the-Data-Builder/add-a-source-7b50e8e.md index 32712ac..4e2d1db 100644 --- a/docs/Acquiring-Preparing-Modeling-Data/Acquiring-and-Preparing-Data-in-the-Data-Builder/add-a-source-7b50e8e.md +++ b/docs/Acquiring-Preparing-Modeling-Data/Acquiring-and-Preparing-Data-in-the-Data-Builder/add-a-source-7b50e8e.md @@ -28,8 +28,11 @@ Add a source to read data from. 
You can add multiple sources and combine them to > ### Note: > - You cannot use views with input parameters as sources in a data flow. - > - When browsing a remote file storage such as Amazon Simple Storage Service, Google Cloud Storage, or Microsoft Azure Blob Storage, you can only select files of type JSON/JSONL, CSV, XLS/XLSX, ORC, or PARQUET. - > - Local tables with delta capture can be added as source tables. However only the active records will be used. See [Capturing Delta Changes in Your Local Table](capturing-delta-changes-in-your-local-table-154bdff.md) + > - When browsing a remote file storage such as Amazon Simple Storage Service, Google Cloud Storage, or Microsoft Azure Blob Storage, you can only select files of type JSON/JSONL, CSV, XLS/XLSX, ORC, or PARQUET. Note that each cloud provider has its own naming convention for defining bucket and object names. These conventions should be followed accordingly to avoid any source-related issue. Even though Flowagent-based operators accept most special characters, we recommend that only alphanumeric characters are used \(no multibyte characters\). This will avoid any undesired issue because all sources work well with such characters. Furthermore, some special characters such as ", +, and , are not allowed by our Flowagent File Producer operator and should not be used. + > - Local tables with delta capture can be added as source tables. However, only the active records will be used. See [Capturing Delta Changes in Your Local Table](capturing-delta-changes-in-your-local-table-154bdff.md) + + > ### Restriction: + > If you add an excel file, you must ensure that the file size does not exceed 50 MB. 4. Click the source node to display its properties in the side panel, and complete the properties in the *General* section: @@ -50,7 +53,7 @@ Add a source to read data from. You can add multiple sources and combine them to - Label + *Label* @@ -62,19 +65,50 @@ Add a source to read data from. You can add multiple sources and combine them to - Business Name / Technical Name / Type / Connection + *Business Name / Technical Name / Type / Connection* + + + + + \[read-only\] Provide information to identify the source table. + + + + + + + Package - \[read-only\] Provide information to identify the source. + Select the package to which the object belongs. + + Packages are used to group related objects in order to facilitate their transport between tenants. + + > ### Note: + > Once a package is selected, it cannot be changed here. Only a user with the DW Space Administrator role \(or equivalent privileges\) can modify a package assignment in the *Packages* editor. + + For more information, see [Creating Packages to Export](https://help.sap.com/viewer/9f36ca35bc6145e4acdef6b4d852d560/DEV_CURRENT/en-US/24aba84ceeb3416881736f70f02e3a0a.html "Users with the DW Space Administrator role can create packages to model groups of related objects for transport between tenants. Modelers can add objects to packages via the Package field, which appears in editors when a package is created in their space. Once a package is complete and validated, the space administrator can export it to the Content Network. The structure of your package is preserved and, as the objects it contains evolve, you can easily export updated versions of it.") :arrow_upper_right:. - Status + *Type* + + + + + Type of object. For example: a local table. + + + + + + + *Status* @@ -88,12 +122,12 @@ Add a source to read data from. 
You can add multiple sources and combine them to - Use As + *Use As* - \[local tables\] Specifies whether the table is used as a *Source* or *Target* in the data flow. + \[read-only\] Specifies whether the table is used as a Source or Target in the data flow. > ### Note: > Changing the use of a table will reset all its properties. @@ -188,7 +222,7 @@ Add a source to read data from. You can add multiple sources and combine them to - Control Fetch Size + *Control Fetch Size* @@ -200,7 +234,7 @@ Add a source to read data from. You can add multiple sources and combine them to - Fetch Size \(Number of Rows\) + *Fetch Size \(Number of Rows\)* @@ -212,7 +246,7 @@ Add a source to read data from. You can add multiple sources and combine them to - Batch Query + *Batch Query* @@ -224,13 +258,30 @@ Add a source to read data from. You can add multiple sources and combine them to - Fail Run on String Truncation + *Fail Run on String Truncation* Fails the data flow run if string truncation is detected while fetching the source columns. This property is available only for CSV, JSON, and Excel files. + + + + + + *Enable ODBC Tracing* + + + + + : \[SAP HANA connection only\] Enable this option to create a new log file in the vflow graph pod with HANA ODBC debug logs. + + > ### Caution: + > Enabling this option must be used for troubleshooting purposes only, as it uses a lot of system resources. + + + diff --git a/docs/Acquiring-Preparing-Modeling-Data/Acquiring-and-Preparing-Data-in-the-Data-Builder/add-a-target-ab490fb.md b/docs/Acquiring-Preparing-Modeling-Data/Acquiring-and-Preparing-Data-in-the-Data-Builder/add-a-target-ab490fb.md index ad303cf..1eed1bf 100644 --- a/docs/Acquiring-Preparing-Modeling-Data/Acquiring-and-Preparing-Data-in-the-Data-Builder/add-a-target-ab490fb.md +++ b/docs/Acquiring-Preparing-Modeling-Data/Acquiring-and-Preparing-Data-in-the-Data-Builder/add-a-target-ab490fb.md @@ -31,70 +31,8 @@ Select a target \(connection and container\) to define the target environment fo To narrow down the selection, start typing a part of the container name in the *Search* field. -4. If you select a cloud storage provider as the target, a list of additional options is displayed. Choose the relevant ones for your use case. +4. Review the target settings and properties and change or complete them as appropriate. - - - - - - - - - - - - - - -
- - Option - - - - Description - -
- - **Group Delta By** - - - - For *Load Type* of *Initial and Delta*: Choose *None*, *Date*, or *Hour*. - - Specifies whether to create additional folders for sorting updates based on the date or hour. - -
- - **File Type** - - - - *csv*: - - - *File Delimiter*: Specifies the character to use as a delimiter for columns in CSV files. - - *File Header*: Specifies whether CSV files include a header row with the column names. - - *parquet*: - - - *File Compression*: Specifies the compression algorithm to use for parquet files \(*None*, *GZIP*, *Snappy*\). - - *json*: - - - *Encoding*: Generated JSON files are encoded in UTF-8 format. - - - *Orient*: Specifies the internal structure of the produced JSON files:*"records"*: \[\{column -\> value\}, ... ,\{column -\> value\}\] - - *jsonlines*: - - - *Encoding*: Generated JSON Lines files are encoded in UTF-8 format. - - - - -
- -5. \(Optional\) To modify the throughput, you can change the number of replication threads. To do so, choose \(Browse source settings\), replace the default value of 10 with the value you want to use, and save your change. + For more information, see [Configure Your Replication Flow](configure-your-replication-flow-3f5ba0c.md). diff --git a/docs/Acquiring-Preparing-Modeling-Data/Acquiring-and-Preparing-Data-in-the-Data-Builder/add-or-create-a-target-table-0fa7805.md b/docs/Acquiring-Preparing-Modeling-Data/Acquiring-and-Preparing-Data-in-the-Data-Builder/add-or-create-a-target-table-0fa7805.md index 157fdda..d835850 100644 --- a/docs/Acquiring-Preparing-Modeling-Data/Acquiring-and-Preparing-Data-in-the-Data-Builder/add-or-create-a-target-table-0fa7805.md +++ b/docs/Acquiring-Preparing-Modeling-Data/Acquiring-and-Preparing-Data-in-the-Data-Builder/add-or-create-a-target-table-0fa7805.md @@ -51,7 +51,7 @@ Add a target table to write data to. You can only have one target table in a dat - Label + *Label* @@ -63,7 +63,7 @@ Add a target table to write data to. You can only have one target table in a dat - Business Name / Technical Name / Type / Connection + *Business Name / Technical Name / Type / Connection* @@ -75,12 +75,43 @@ Add a target table to write data to. You can only have one target table in a dat - Mode + Package - Specifies the mode with which to write to the target table or modify the data in the target table. + Select the package to which the object belongs. + + Packages are used to group related objects in order to facilitate their transport between tenants. + + > ### Note: + > Once a package is selected, it cannot be changed here. Only a user with the DW Space Administrator role \(or equivalent privileges\) can modify a package assignment in the *Packages* editor. + + For more information, see [Creating Packages to Export](https://help.sap.com/viewer/9f36ca35bc6145e4acdef6b4d852d560/DEV_CURRENT/en-US/24aba84ceeb3416881736f70f02e3a0a.html "Users with the DW Space Administrator role can create packages to model groups of related objects for transport between tenants. Modelers can add objects to packages via the Package field, which appears in editors when a package is created in their space. Once a package is complete and validated, the space administrator can export it to the Content Network. The structure of your package is preserved and, as the objects it contains evolve, you can easily export updated versions of it.") :arrow_upper_right:. + + + + + + + *Type* + + + + + Type of object. For example: a local table. + + + + + + + *Mode* + + + + + Specifies the mode with which to write to the target table or modify the data in the target table.. You can choose between: @@ -88,8 +119,8 @@ Add a target table to write data to. You can only have one target table in a dat - *Truncate*: Erase the existing data in the target table and replace it with the data obtained from the data flow. - *Delete*: Delete records in the target table based on the columns mapped in the target. All the column mappings are considered for the match condition and if the value of these columns match the record in the target table, the record will be deleted. - > ### Restriction: - > The *Delete* mode fails for a target table when a mapped column contains NULL values. + > ### Note: + > The Delete mode fails for a target table when a mapped column contains NULL values. @@ -99,27 +130,25 @@ Add a target table to write data to. 
You can only have one target table in a dat - Update Records By Primary Key \(UPSERT\) + *Update Records By Primary Key \(UPSERT\)* - \[*Append* mode\] Instructs the flow to update, where appropriate, existing target table records that can be identified by their primary keys. - - If this option is not selected then all source records \(including those that are already present in the target table\) are appended, which may cause duplicate key errors. + \[Append mode\] Instructs the flow to update, where appropriate, existing target table records that can be identified by their primary keys. If this option is not selected then all source records \(including those that are already present in the target table\) are appended, which may cause duplicate key errors. > ### Note: - > - For working with data lake, UPSERT / APPEND operations via virtual tables are not supported \(see [Working with Data Lake](https://help.sap.com/viewer/9f36ca35bc6145e4acdef6b4d852d560/DEV_CURRENT/en-US/93d0b5d4faa24777a4b78513f7ed6172.html "Assign a SAP Datasphere space to access and work with SAP HANA Cloud, data lake.") :arrow_upper_right:\). + > - When working with data lake, only APPEND \(UPSERT option not selected\) and TRUNCATE modes are supported. APPEND \(with UPSERT option selected\) and DELETE modes are not supported. > - If no primary key is defined in the target table, then all records will be appended. - This feature is based on the UPSERT statement \(see [Upsert](https://help.sap.com/viewer/a4ae14a90e33416a90edc658d94a5c06/Cloud/en-US/972f970f9c0942d89c528f8ecc5a4977.html) in the *SQL Reference for SAP Vora in SAP Data Intelligence* guide\). + This feature is based on the UPSERT statement \(see Upsert in the SQL Reference for SAP Vora in SAP Data Intelligence guide\). - Status + *Status* @@ -133,12 +162,12 @@ Add a target table to write data to. You can only have one target table in a dat - Use As + *Use As* - Specifies whether the table is used as a *Source* or *Target* in the data flow. + \[read-only\] Specifies whether the table is used as a Source or Target in the data flow. > ### Note: > Changing the use of a table will reset all its properties. @@ -181,7 +210,7 @@ Add a target table to write data to. You can only have one target table in a dat - Control Batch Size + *Control Batch Size* @@ -193,13 +222,30 @@ Add a target table to write data to. You can only have one target table in a dat - Batch Size \(Number of Rows\) + *Batch Size \(Number of Rows\)* Specifies the amount of data being read. That is, it indicates the number of rows that will be fetched from the source in each request. + + + + + + *Enable ODBC Tracing* + + + + + : \[SAP HANA connection only\] Enable this option to create a new log file in the vflow graph pod with HANA ODBC debug logs. + + > ### Caution: + > Enabling this option must be used for troubleshooting purposes only, as it uses a lot of system resources. 
+
+
+


diff --git a/docs/Acquiring-Preparing-Modeling-Data/Acquiring-and-Preparing-Data-in-the-Data-Builder/configure-your-replication-flow-3f5ba0c.md b/docs/Acquiring-Preparing-Modeling-Data/Acquiring-and-Preparing-Data-in-the-Data-Builder/configure-your-replication-flow-3f5ba0c.md
index 5833437..27ec27b 100644
--- a/docs/Acquiring-Preparing-Modeling-Data/Acquiring-and-Preparing-Data-in-the-Data-Builder/configure-your-replication-flow-3f5ba0c.md
+++ b/docs/Acquiring-Preparing-Modeling-Data/Acquiring-and-Preparing-Data-in-the-Data-Builder/configure-your-replication-flow-3f5ba0c.md
@@ -1,16 +1,20 @@
+
+

# Configure Your Replication Flow

-Define general settings for your replication flow, such as the load type.
+Define settings and properties for your replication flow and individual replication objects.



## Procedure

-1. Click *Settings*.
+1. Go to the *Settings* tab of the canvas to review the general settings \(load type and truncate\) and change them as appropriate.
+
+    Alternatively, you can select the relevant replication object and review its settings and properties in the side panel.

-2. Select the relevant load type for your replication flow:
+2. Select the relevant load type:

    - *Initial Only*: Load all selected data once.

@@ -34,4 +38,26 @@ Define general settings for your replication flow, such as the load type.

        If the target structure does not yet exist or is empty, you can ignore the *Truncate* setting.

+4. Click \(Browse target settings\) to review the target settings **at replication flow level** and change them as appropriate.
+
+    - Replication Thread Limit: You can change the default value as appropriate.
+
+    - Overwrite Target Settings at Object Level: \[only relevant if the target is a cloud storage provider\] By default, any settings that you have made at replication object level are kept intact if you make a different setting at replication flow level. To change this, enable this option.
+
+    - Additional settings that are only relevant for a specific target type and can be made for the replication flow itself as well as for individual replication objects. For more information, see
+
+        - [Using a Cloud Storage Provider As the Target](using-a-cloud-storage-provider-as-the-target-43d93a2.md)
+
+        - [Using Google BigQuery As the Target](using-google-bigquery-as-the-target-56d4472.md)
+
+        - [Using Apache Kafka As the Target](using-apache-kafka-as-the-target-6df55db.md).
+
+
+
+5. Review the settings for each replication object and change or complete them as appropriate.
+
+    To do so, select the relevant replication object. Its properties are then displayed in the side panel.
+
+    For a list of properties and further information, see [Creating a Replication Flow](creating-a-replication-flow-25e2bd7.md).
+
diff --git a/docs/Acquiring-Preparing-Modeling-Data/Acquiring-and-Preparing-Data-in-the-Data-Builder/create-an-input-parameter-a6fb3e7.md b/docs/Acquiring-Preparing-Modeling-Data/Acquiring-and-Preparing-Data-in-the-Data-Builder/create-an-input-parameter-a6fb3e7.md
index 0b07b0f..ca25380 100644
--- a/docs/Acquiring-Preparing-Modeling-Data/Acquiring-and-Preparing-Data-in-the-Data-Builder/create-an-input-parameter-a6fb3e7.md
+++ b/docs/Acquiring-Preparing-Modeling-Data/Acquiring-and-Preparing-Data-in-the-Data-Builder/create-an-input-parameter-a6fb3e7.md
@@ -4,7 +4,7 @@

# Create an Input Parameter

-Create input parameters in your data flows for use in projection operator filter conditions or calculated columns.
When you want to staring a data flow run, you are prompted to enter a value and this value is used to filter the data to be loaded.
+Create input parameters in your data flows for use in projection operator filter conditions or calculated columns. When you want to start a data flow run, you are prompted to enter a value and this value is used to filter the data to be loaded.



@@ -33,40 +33,36 @@ Create input parameters in your data flows for use in projection operator filter



-    Name
+    Name



-    Enter a descriptive name to help users identify the object. This name can be changed at any time.
+    Enter a descriptive name to help users identify the object. This name can be changed at any time.




-    Data Type
+    Data Type



-    Displays *string* as the currently available type.
+    Displays *string* as the currently available type.




-    Default Value
+    Default Value



-    \[optional\] Enter a default value for the input parameter. Each time the user is required to enter a value for the parameter, they can accept the default value or override it. The values must be entered inside of single quotes, for example, 'Germany'.
-
-    The default value is used whenever the data flow is run as part of a schedule or task chain.
-
-    You can enter `CURRENT_DATE()` or `CURRENT_TIME()` to obtain the current UTC date or timestamp at runtime.
+    \[optional\] Enter a default value for the input parameter. Each time the user is required to enter a value for the parameter, they can accept the default value or override it. The values must be entered inside of single quotes, for example, 'Germany'. The default value is used whenever the data flow is run as part of a schedule or task chain. You can enter `CURRENT_DATE()` or `CURRENT_TIME()` to obtain the current UTC date or timestamp at runtime.



diff --git a/docs/Acquiring-Preparing-Modeling-Data/Acquiring-and-Preparing-Data-in-the-Data-Builder/creating-a-data-flow-e30fd14.md b/docs/Acquiring-Preparing-Modeling-Data/Acquiring-and-Preparing-Data-in-the-Data-Builder/creating-a-data-flow-e30fd14.md
index 28b3e6d..9846221 100644
--- a/docs/Acquiring-Preparing-Modeling-Data/Acquiring-and-Preparing-Data-in-the-Data-Builder/creating-a-data-flow-e30fd14.md
+++ b/docs/Acquiring-Preparing-Modeling-Data/Acquiring-and-Preparing-Data-in-the-Data-Builder/creating-a-data-flow-e30fd14.md
@@ -23,7 +23,7 @@ Create a data flow to move and transform data in an intuitive graphical interfac

1. In the side navigation area, click ![](../Creating-Finding-Sharing-Objects/images/Data_Builder_f73dc45.png) \(*Data Builder*\), select a space if necessary, and click *New Data Flow* to open the editor.

-2. Add one or more objects from the *Source Browser* panel on the left of the screen as sources \(see [Add a Source](add-a-source-7b50e8e.md)\).
+2. Drag one or more source objects from the *Source Browser* and drop them into the diagram \(see [Add a Source](add-a-source-7b50e8e.md)\).

    > ### Restriction:
    > - Data flows support loading data exclusively to local tables in the SAP Datasphere repository.
@@ -84,6 +84,44 @@ Create a data flow to move and transform data in an intuitive graphical interfac



+
+
+    \(Add Table\)
+
+
+
+    Create a new target table.
+
+
+
+
+    \(Impact and Lineage Analysis\)
+
+
+
+    Open the Impact and Lineage Analysis diagram. This diagram enables you to understand the lineage and impacts of the selected object.
+ + \(see [Impact and Lineage Analysis](../Creating-Finding-Sharing-Objects/impact-and-lineage-analysis-9da4892.md).\) + + + + + + + \(Open in New Tab\) + + + + + Open the selected entity in its own editor in a new tab. + @@ -108,65 +146,128 @@ Create a data flow to move and transform data in an intuitive graphical interfac > ### Tip: > In the toolbar, you can use \(Auto Layout\) and :desktop_computer: to organize the objects in your canvas. -5. Add or create a target table that the data flow will write its data to \(see [Add or Create a Target Table](add-or-create-a-target-table-0fa7805.md)\). - -6. Click \(Save\) to save the data flow: - - - *Save* to save the data flow. - - *Save As* to create a local a copy of the data flow. The data flow must have been previously saved at least once. The *Save* dialog opens. Enter new business and technical names and click *Save*. - -7. Click \(Deploy\) to deploy the data flow: - - - Newly created data flows are deployed for the first time. - - Data flows that have changes to deploy are redeployed. - - With deployment, you will be able to save draft version of your data flow without affecting the execution. - - > ### Note: - > In very rare situations, an error may occur the first time that a scheduled data flow is run in a space. In this case, you should run the data flow manually once. Subsequent scheduled runs will not require further intervention. - -8. To create schedule for the data flow, click \(Schedule\): - - - *Create Schedule*: Create a schedule to run the data flow asynchronously and recurrently in the background according to the settings defined in the schedule. - - For more information, see [Scheduling Data Integration Tasks](https://help.sap.com/viewer/9f36ca35bc6145e4acdef6b4d852d560/DEV_CURRENT/en-US/7fa07621d9c0452a978cb2cc8e4cd2b1.html "Schedule data integration tasks to run periodically at a specified date or time.") :arrow_upper_right:. - - - *Edit Schedule*: Change how the schedule is specified, or change the owner of the schedule. - - For more information, see [Take Over the Ownership of a Schedule](https://help.sap.com/viewer/9f36ca35bc6145e4acdef6b4d852d560/DEV_CURRENT/en-US/4b660c0395454bd0923f732eef4ee4b2.html "Per default, the user who creates a task schedule owns the schedule which means that the job scheduling component runs the task on the owner's behalf according to the defined schedule. You can assign the ownership of the schedule to yourself.") :arrow_upper_right:. - - - *Delete Schedule*: Delete the schedule if necessary . +5. Add or create a target table that the data flow will write its data to. > ### Note: - > For optimal performance, it is recommended that you consider staggering the scheduled run time of tasks such as data flows and task chains that may contain these tasks. There is a limit on how many tasks can be started at the same time. If you come close to this limit, scheduled task runs may be delayed and, if you go beyond the limit, some scheduled task runs might even be skipped. + > You can only have one target table in a data flow. + + For more information, see [Add or Create a Target Table](add-or-create-a-target-table-0fa7805.md). + +6. Click on the canva and review your data flow properties in the right panel: + + - Under *General*: + + + + + + + + + + + + + + + + + + + + + + + +
+ + Property + + + + Description + +
+ + Business Name + + + + Enter a descriptive name to help users identify the object. This name can be changed at any time. + +
+ + Technical Name + + + + Displays the name used in scripts and code, synchronized by default with the *Business Name*. + + To override the default technical name, enter a new one in the field. Technical names can contain only alphanumeric characters and underscores. -9. To run the data flow, click *Run*. - - If the data flow contains input parameters, a dialog box appears prompting the user to enter a value for each input parameter. You can either keep the default value or override it \(see [Create an Input Parameter](create-an-input-parameter-a6fb3e7.md)\). + > ### Note: + > Once the object is saved, the technical name can no longer be modified. -10. To view the *Run Status* section in the data flow properties panel, click the diagram background. -11. To see more details about the run, open the *Data Flow Monitor* by clicking \(Open in Data Integration Monitor\). + +
+ + Package + + + + Select the package to which the object belongs. - > ### Note: - > - The initialization time for executing a data flow takes an average of 20 seconds even with smaller data loads causing longer runtime for the data flow. - > - Metrics are displayed only for source and target tables and can be used for further analysis. - > - In your data flow, if a source view or an underlying view of the source view has data access controls applied to it, then no data is read from the view during the execution of the data flow. This results in incorrect data or no data in the output. - > - > For more information, see [Securing Data with Data Access Controls](https://help.sap.com/viewer/9f36ca35bc6145e4acdef6b4d852d560/DEV_CURRENT/en-US/a032e51c730147c7a1fcac125b4cfe14.html "Data access controls allow you to apply row-level security to your objects. When a data access control is applied to a data layer view or a business layer object, any user viewing its data will see only the rows for which they are authorized, based on the specified criteria.") :arrow_upper_right:. + Packages are used to group related objects in order to facilitate their transport between tenants. -12. You can configure more properties for a data flow, in the *Advanced Properties* section of the *Properties* panel. + > ### Note: + > Once a package is selected, it cannot be changed here. Only a user with the DW Space Administrator role \(or equivalent privileges\) can modify a package assignment in the *Packages* editor. + + For more information, see [Creating Packages to Export](https://help.sap.com/viewer/9f36ca35bc6145e4acdef6b4d852d560/DEV_CURRENT/en-US/24aba84ceeb3416881736f70f02e3a0a.html "Users with the DW Space Administrator role can create packages to model groups of related objects for transport between tenants. Modelers can add objects to packages via the Package field, which appears in editors when a package is created in their space. Once a package is complete and validated, the space administrator can export it to the Content Network. The structure of your package is preserved and, as the objects it contains evolve, you can easily export updated versions of it.") :arrow_upper_right:. + +
+ + Status + + + + \[read-only\] Displays the deployment and error status of the object. + + For more information, see [Saving and Deploying Objects](../Creating-Finding-Sharing-Objects/saving-and-deploying-objects-7c0b560.md). + +
- Under Run Status:

    Displays the status of the flow run:

    - *Running*: The flow is currently running.
    - *Completed*: The flow completed successfully.
    - *Failed*: Something went wrong during the flow run and it could not be completed. Go to the details screen of your flow run and check the logs to identify the issue.

- Under Input Parameters: Create a new input parameter or modify an existing one. For more information, see [Create an Input Parameter](create-an-input-parameter-a6fb3e7.md).
- Under Advanced Properties:
    - *Dynamic Memory Allocation*: You can allocate memory usage manually. Set the *Expected Data Volume* to *Small*, *Medium*, or *Large*.

        > ### Note:
        > - Dynamic memory allocation should be done only if your data flow run is facing out-of-memory failures.
        > - If multiple data flows are scheduled with *Expected Data Volume* as large, it doesn't alter/increase the actual memory needed for the data flow. However, the execution engine will allocate enough memory to handle such a large volume of data and only after successful memory allocation, the data flow run is started.
        >
        > The execution engine allocates memory based on the data volume configured and the complexity of the operations performed in the data flow.

    - *Automatic restart on run failure*: Set this option to restart the data flow automatically if there are any failures or system upgrades, for example.


7. Click \(Save\) to save your data flow:

    - *Save* to save the data flow.
    - *Save As* to create a local copy of the data flow. The data flow must have been previously saved at least once. The *Save* dialog opens. Enter new business and technical names and click *Save*.

8. Click \(Deploy\) to deploy the data flow:

    - Newly created data flows are deployed for the first time.
    - Data flows that have changes to deploy are redeployed.

    With deployment, you will be able to save draft versions of your data flow without affecting the execution.

diff --git a/docs/Acquiring-Preparing-Modeling-Data/Acquiring-and-Preparing-Data-in-the-Data-Builder/creating-a-local-table-2509fe4.md b/docs/Acquiring-Preparing-Modeling-Data/Acquiring-and-Preparing-Data-in-the-Data-Builder/creating-a-local-table-2509fe4.md
index eaa0895..de592a8 100644
--- a/docs/Acquiring-Preparing-Modeling-Data/Acquiring-and-Preparing-Data-in-the-Data-Builder/creating-a-local-table-2509fe4.md
+++ b/docs/Acquiring-Preparing-Modeling-Data/Acquiring-and-Preparing-Data-in-the-Data-Builder/creating-a-local-table-2509fe4.md
@@ -170,7 +170,7 @@ This procedure explains how to create an empty table by defining its columns.
Yo - *Associations* - Create associations to other entities \(see [Create an Association](../create-an-association-66c6998.md)\). - *Business Purpose* - Provide a description, purpose, contacts, and tags to help other users understand your entity. - - *Table Services* - Enable the *In-Memory Storage* option to store the table data directly in memory \(see [Accelerate Table Data Access with In-Memory Storage](accelerate-table-data-access-with-in-memory-storage-407d1df.md)\). + - *Table Services* - Enable the *Memory Storage* option to store the table data directly in memory \(see [Accelerate Table Data Access with In-Memory Storage](accelerate-table-data-access-with-in-memory-storage-407d1df.md)\). > ### Note: > If the connection of your remote table source is configured as data access: *Remote Only,* you can navigate only to the *Remote Table Statistics* monitor. diff --git a/docs/Acquiring-Preparing-Modeling-Data/Acquiring-and-Preparing-Data-in-the-Data-Builder/creating-a-local-table-from-a-csv-file-8bba251.md b/docs/Acquiring-Preparing-Modeling-Data/Acquiring-and-Preparing-Data-in-the-Data-Builder/creating-a-local-table-from-a-csv-file-8bba251.md index 8f14c90..60b605d 100644 --- a/docs/Acquiring-Preparing-Modeling-Data/Acquiring-and-Preparing-Data-in-the-Data-Builder/creating-a-local-table-from-a-csv-file-8bba251.md +++ b/docs/Acquiring-Preparing-Modeling-Data/Acquiring-and-Preparing-Data-in-the-Data-Builder/creating-a-local-table-from-a-csv-file-8bba251.md @@ -24,7 +24,7 @@ Import a `.csv` file to create a table and fill it with the data from the file. 2. Click *Select Source File*, navigate to, and select the CSV file you want to upload. > ### Note: - > The file extension must be `*.csv`. The file size must not exceed 200 MB. + > The file must have the extension `*.csv` and contain Unicode text only. The file size must not exceed 25 MB. 3. Review the following options, and then click *Upload* to open your file in SAP Datasphere: diff --git a/docs/Acquiring-Preparing-Modeling-Data/Acquiring-and-Preparing-Data-in-the-Data-Builder/creating-a-replication-flow-25e2bd7.md b/docs/Acquiring-Preparing-Modeling-Data/Acquiring-and-Preparing-Data-in-the-Data-Builder/creating-a-replication-flow-25e2bd7.md index ad9442c..ef65045 100644 --- a/docs/Acquiring-Preparing-Modeling-Data/Acquiring-and-Preparing-Data-in-the-Data-Builder/creating-a-replication-flow-25e2bd7.md +++ b/docs/Acquiring-Preparing-Modeling-Data/Acquiring-and-Preparing-Data-in-the-Data-Builder/creating-a-replication-flow-25e2bd7.md @@ -14,7 +14,7 @@ You can use replication flows to copy data from the following source objects: - CDS views \(in ABAP-based SAP systems\) that are enabled for extraction -- Tables that have a unique key \(primary key\) +- Tables that have a primary key - Objects from ODP providers, such as extractors or SAP BW artifacts @@ -37,22 +37,207 @@ For more information about available connection types, sources, and targets, see 2. Select a source connection and a source container, then add source objects \(see [Add a Source](add-a-source-7496380.md)\). - The *Details* side panel shows the properties of your replication flow. If you select an object in the canvas, the side panel changes to show the properties of the selected object. - + The side panel shows the properties of your replication flow. Complete the missing information as appropriate: + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ + Property + + + + Description + +
+ + Business Name + + + + Enter a descriptive name to help users identify the object. This name can be changed at any time. + +
+ + Technical Name + + + + Displays the name used in scripts and code, synchronized by default with the *Business Name*. + +
+ + Package + + + + Select the package to which the object belongs. + + Packages are used to group related objects in order to facilitate their transport between tenants. + + > ### Note: + > Once a package is selected, it cannot be changed here. Only a user with the DW Space Administrator role \(or equivalent privileges\) can modify a package assignment in the *Packages* editor. + + For more information, see [Creating Packages to Export](https://help.sap.com/viewer/9f36ca35bc6145e4acdef6b4d852d560/DEV_CURRENT/en-US/24aba84ceeb3416881736f70f02e3a0a.html "Users with the DW Space Administrator role can create packages to model groups of related objects for transport between tenants. Modelers can add objects to packages via the Package field, which appears in editors when a package is created in their space. Once a package is complete and validated, the space administrator can export it to the Content Network. The structure of your package is preserved and, as the objects it contains evolve, you can easily export updated versions of it.") :arrow_upper_right:. + +
+ + Status + + + + \[read-only\] Displays the deployment and error status of the object. + + For more information, see [Saving and Deploying Objects](../Creating-Finding-Sharing-Objects/saving-and-deploying-objects-7c0b560.md). + +
+ + Delta Load Interval + + + + \[only relevant for load type `Initial and Delta`\] Define the time interval for replicating changes from the source to the target. + + For more information, see [Configure Your Replication Flow](configure-your-replication-flow-3f5ba0c.md). + +
+ + Run Status + + + + \[read-only\] Displays the overall status of the replication flow run, for example `Not Run Yet`. + + For more detailed information, go to the flow monitor. + +
+ 3. Select a target connection and target container \(see [Add a Target](add-a-target-ab490fb.md)\). -4. Review the default settings for *Load Type* and *Truncate* and change them if necessary \(see [Configure Your Replication Flow](configure-your-replication-flow-3f5ba0c.md)\). - -5. Define filters and mapping as required for your use case: - - - Define filters to delimit the scope of your replication flow \(see [Define Filters](define-filters-5a6ef36.md)\). - - - Define mappings to specify where you want the data from the source to go in the target \(see [Define Mapping](define-mapping-2c7948f.md)\). - - -6. Click \(Save\). A dialog box appears. Enter a business name and a technical name for your replication flow. - - When you enter a business name, the system automatically suggests a corresponding technical name, but you can change the technical name if required. Both names must be unique within the space you work in. +4. Click \(Target Settings\)to review the default target settings for your replication flow and change or complete them as appropriate \(see [Configure Your Replication Flow](configure-your-replication-flow-3f5ba0c.md)\). + +5. Select a replication object in the canvas to review its properties in the side panel and change or complete them as appropriate: + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ + Property + + + + Description + +
+ + Projections + + + + Add a projection to define a filter or mappings. + +
+ + Delta Capture + + + + \[only relevant for local tables\] Keep track of changes in your data source. + + For more information, see[Capturing Delta Changes in Your Local Table](capturing-delta-changes-in-your-local-table-154bdff.md). + +
+ + Load Type + + + + Specify how you want to load the data \(initial only or initial and delta\). + + For more information, see [Configure Your Replication Flow](configure-your-replication-flow-3f5ba0c.md). + +
+ + Truncate + + + + Decide whether you want to delete any existing content in the target. + + For more information, see [Configure Your Replication Flow](configure-your-replication-flow-3f5ba0c.md). + +
+ + Target Columns + + + + Lists the target column names. A key symbol next to a column name indicates that this column is a key column. + +
+ + > ### Note: + > Some further properties are only relevant for specific types of targets. You can find a list of these properties in the detailed information for the respective targets: + > + > - [Using a Cloud Storage Provider As the Target](using-a-cloud-storage-provider-as-the-target-43d93a2.md) + > + > - [Using Google BigQuery As the Target](using-google-bigquery-as-the-target-56d4472.md) + > + > - [Using Apache Kafka As the Target](using-apache-kafka-as-the-target-6df55db.md). + +6. Click \(Save\). 7. Click \(Deploy\) to make your replication flow ready to run. diff --git a/docs/Acquiring-Preparing-Modeling-Data/Acquiring-and-Preparing-Data-in-the-Data-Builder/creating-a-transformation-flow-f7161e6.md b/docs/Acquiring-Preparing-Modeling-Data/Acquiring-and-Preparing-Data-in-the-Data-Builder/creating-a-transformation-flow-f7161e6.md index 7e79a74..30b02fd 100644 --- a/docs/Acquiring-Preparing-Modeling-Data/Acquiring-and-Preparing-Data-in-the-Data-Builder/creating-a-transformation-flow-f7161e6.md +++ b/docs/Acquiring-Preparing-Modeling-Data/Acquiring-and-Preparing-Data-in-the-Data-Builder/creating-a-transformation-flow-f7161e6.md @@ -96,7 +96,7 @@ Creating a transformation flow involves two important steps: - Load Type + Load Type @@ -133,18 +133,4 @@ Creating a transformation flow involves two important steps: - Transformation flows that have changes to deploy are redeployed. - > ### Note: - > You will need to navigate to the *View Transform Editor* to check the validation messages for the transformation flow. - -7. \[optional\] To create a schedule for the transformation flow, click \(Schedule\). For example, it may make sense to replicate delta changes to the target table at regular intervals. For more information, see [Schedule a Data Integration Task](https://help.sap.com/viewer/9f36ca35bc6145e4acdef6b4d852d560/DEV_CURRENT/en-US/7c11059ed3314e1fb753736b7867512c.html "You can schedule or unschedule data integration tasks such as remote table replication, persisting views, or data flow execution. You may also pause and then later resume execution of scheduled tasks.") :arrow_upper_right: - -8. Click \(Run\) to start your transformation flow. - - You can also start your transformation flows in the *Data Integration Monitor*. - - > ### Note: - > You can cancel a running transformation flow in the *Data Integration Monitor*. For more information, see [Cancel a Transformation Flow Run](https://help.sap.com/viewer/9f36ca35bc6145e4acdef6b4d852d560/DEV_CURRENT/en-US/ab885f05210f4a52aebe8306c8cad083.html "You want to cancel a transformation flow that is running.") :arrow_upper_right:. - -9. To see more details about the run, open the *Flows* monitor by clicking \(Open in Data Integration Monitor\). 
diff --git a/docs/Acquiring-Preparing-Modeling-Data/Acquiring-and-Preparing-Data-in-the-Data-Builder/images/Refresh_MassValidation_b399f2c.jpg b/docs/Acquiring-Preparing-Modeling-Data/Acquiring-and-Preparing-Data-in-the-Data-Builder/images/Refresh_MassValidation_b399f2c.jpg
new file mode 100644
index 0000000..99b979e
Binary files /dev/null and b/docs/Acquiring-Preparing-Modeling-Data/Acquiring-and-Preparing-Data-in-the-Data-Builder/images/Refresh_MassValidation_b399f2c.jpg differ
diff --git a/docs/Acquiring-Preparing-Modeling-Data/Acquiring-and-Preparing-Data-in-the-Data-Builder/images/ValidateRemoteTables_d882b35.jpg b/docs/Acquiring-Preparing-Modeling-Data/Acquiring-and-Preparing-Data-in-the-Data-Builder/images/ValidateRemoteTables_d882b35.jpg
new file mode 100644
index 0000000..20887ee
Binary files /dev/null and b/docs/Acquiring-Preparing-Modeling-Data/Acquiring-and-Preparing-Data-in-the-Data-Builder/images/ValidateRemoteTables_d882b35.jpg differ
diff --git a/docs/Acquiring-Preparing-Modeling-Data/Acquiring-and-Preparing-Data-in-the-Data-Builder/process-source-changes-for-several-remote-tables-4e0be16.md b/docs/Acquiring-Preparing-Modeling-Data/Acquiring-and-Preparing-Data-in-the-Data-Builder/process-source-changes-for-several-remote-tables-4e0be16.md
new file mode 100644
index 0000000..040055e
--- /dev/null
+++ b/docs/Acquiring-Preparing-Modeling-Data/Acquiring-and-Preparing-Data-in-the-Data-Builder/process-source-changes-for-several-remote-tables-4e0be16.md
@@ -0,0 +1,49 @@
+
+
+
+
# Process Source Changes for Several Remote Tables

Identify available table structure updates for all tables sharing the same source connection, and avoid errors and impacts on dependent objects and runtimes in SAP Datasphere resulting from these updates, such as view runs, remote table replications, or deployment.



## Context

Before you can process source changes for several remote tables, you must fulfill the following prerequisites:

- The remote tables are already saved in your space.
- They are connected via SAP HANA smart data integration or SAP HANA smart data access.

When changes are made in source models, they might not be reflected immediately in SAP Datasphere. This can result in errors and impacts on dependent objects and runtimes, and you need to do a refresh to get these updates into your remote table definition.

To identify source changes for several remote tables sharing the same connection, you can either proceed from the *Repository Explorer* or from the *Data Builder* landing page:



## Procedure

1. Click Validate Remote Tables.

2. From the *Validate Remote Tables* window, select the relevant source connection and then the remote tables you want to check, and click *Validate*:

    ![](images/ValidateRemoteTables_d882b35.jpg)

    > ### Note:
    > From the *Repository Explorer*, if you have remote tables in several spaces, you will be prompted to select a space.

    SAP Datasphere will compare the source and the target remote tables, and will refresh the status: Remote tables with incompatible changes will get the status *Runtime Error* \(for example a column has been removed\), whereas remote tables with compatible changes will get the status *Changes to Deploy* \(for example a non-key column has been added\). Remote tables that are not deployed at this time and have any changes in the source will get the status *Design Time Error*.

3. To apply the changes, open the remote table in the table editor.
The editor automatically opens a window to allow you to proceed with the changes:

    ![](images/Refresh_MassValidation_b399f2c.jpg)

    The object status set by *Validate Remote Tables* is only an indicator that a certain remote table requires action in the *Table Editor*. As there may be a delay between the *Validate* action and the time you open the *Table Editor*, you must perform the *Refresh* action in the *Table Editor* to integrate the actual source changes into your remote table.

4. Select the changes you want to apply to your remote table.

5. Redeploy the remote table.


diff --git a/docs/Acquiring-Preparing-Modeling-Data/Acquiring-and-Preparing-Data-in-the-Data-Builder/process-source-changes-in-the-table-editor-622328b.md b/docs/Acquiring-Preparing-Modeling-Data/Acquiring-and-Preparing-Data-in-the-Data-Builder/process-source-changes-in-the-table-editor-622328b.md
index 391aa1f..b103af5 100644
--- a/docs/Acquiring-Preparing-Modeling-Data/Acquiring-and-Preparing-Data-in-the-Data-Builder/process-source-changes-in-the-table-editor-622328b.md
+++ b/docs/Acquiring-Preparing-Modeling-Data/Acquiring-and-Preparing-Data-in-the-Data-Builder/process-source-changes-in-the-table-editor-622328b.md
@@ -22,7 +22,7 @@ Identify available table structure updates in your data sources and resolve conf
> ### Restriction:
> In case you remove columns from the remote table definition compared to the source object \(remote table having less columns than the source entity\), real-time replications don't work for remote tables connected via SAP HANA smart data access or Cloud Connector for SAP HANA on-premise versions lower than 2.0 SPS06.

-Keeping your data up-to-date can sometimes be a challenge for modelers. When an update is available in your data source, you can do a refresh of your table structure in the *Data Builder* .
+Keeping your data up-to-date can sometimes be a challenge for modelers. When an update is available in your data source, you can do a refresh of your table structure in the *Data Builder* or click Validate Remote Tables from the *Data Builder* landing page. For more information, see [Process Source Changes for Several Remote Tables](process-source-changes-for-several-remote-tables-4e0be16.md).

> ### Note:
> This refresh is a manual action from the *Table Editor*. When you click *Refresh*, you will receive a notification of any structural changes in the remote source and can then decide whether to proceed and import the changes or cancel.
diff --git a/docs/Acquiring-Preparing-Modeling-Data/Acquiring-and-Preparing-Data-in-the-Data-Builder/processing-changes-to-source-and-target-tables-705292c.md b/docs/Acquiring-Preparing-Modeling-Data/Acquiring-and-Preparing-Data-in-the-Data-Builder/processing-changes-to-source-and-target-tables-705292c.md
index e6c0de7..249493d 100644
--- a/docs/Acquiring-Preparing-Modeling-Data/Acquiring-and-Preparing-Data-in-the-Data-Builder/processing-changes-to-source-and-target-tables-705292c.md
+++ b/docs/Acquiring-Preparing-Modeling-Data/Acquiring-and-Preparing-Data-in-the-Data-Builder/processing-changes-to-source-and-target-tables-705292c.md
@@ -4,5 +4,5 @@

If source or target tables of your transformation flow are modified, you can review the changes and decide how your transformation flow should handle the changes.

-It is only possible to review target table changes in the *Transformation Flow Editor* and source table changs in the *View Transform Editor*.
For more information, see [Process Target Changes in the Transformation Flow Editor](process-target-changes-in-the-transformation-flow-editor-75ab3ef.md) and [Process Source Changes in the View Transform Editor](process-source-changes-in-the-view-transform-editor-098ada1.md).
+It is only possible to review target table changes in the *Transformation Flow Editor* and source table changes in the *View Transform Editor*. For more information, see [Process Target Changes in the Transformation Flow Editor](process-target-changes-in-the-transformation-flow-editor-75ab3ef.md) and [Process Source Changes in the View Transform Editor](process-source-changes-in-the-view-transform-editor-098ada1.md).

diff --git a/docs/Acquiring-Preparing-Modeling-Data/Acquiring-and-Preparing-Data-in-the-Data-Builder/review-and-edit-imported-table-properties-75cea7b.md b/docs/Acquiring-Preparing-Modeling-Data/Acquiring-and-Preparing-Data-in-the-Data-Builder/review-and-edit-imported-table-properties-75cea7b.md
index ecc6cba..50722f0 100644
--- a/docs/Acquiring-Preparing-Modeling-Data/Acquiring-and-Preparing-Data-in-the-Data-Builder/review-and-edit-imported-table-properties-75cea7b.md
+++ b/docs/Acquiring-Preparing-Modeling-Data/Acquiring-and-Preparing-Data-in-the-Data-Builder/review-and-edit-imported-table-properties-75cea7b.md
@@ -4,7 +4,7 @@

# Review and Edit Imported Table Properties

-Provide business-friendly names for your table and its columns, identify its semantic usage, enable replication and in-memory storage, and set other properties.
+Provide business-friendly names for your table and its columns, identify its semantic usage, enable replication and memory storage, and set other properties.



diff --git a/docs/Acquiring-Preparing-Modeling-Data/Acquiring-and-Preparing-Data-in-the-Data-Builder/running-a-flow-5b591d4.md b/docs/Acquiring-Preparing-Modeling-Data/Acquiring-and-Preparing-Data-in-the-Data-Builder/running-a-flow-5b591d4.md
new file mode 100644
index 0000000..d5e5745
--- /dev/null
+++ b/docs/Acquiring-Preparing-Modeling-Data/Acquiring-and-Preparing-Data-in-the-Data-Builder/running-a-flow-5b591d4.md
@@ -0,0 +1,64 @@
+
+
+
+
# Running a Flow

Once your flow is configured and deployed, you can run it.

To run a flow, you have 3 main options depending on your flow type:

- Click *Run* to start a direct run.
- Click *Schedule* to run your data flow or your transformation flow at a later time, or on a regular basis \(this is not available for *Replication Flow*\).
- Add the flow to a task chain. This is only valid for a data flow. For more information, see [Creating a Task Chain](creating-a-task-chain-d1afbc2.md).





## Run a Direct Flow

Once you have completed the flow configuration and saved it, you can run it. Click \(Run\) to start the process to acquire and transform data as per your defined settings. Once completed, the *Run Status* section in the property panel is updated. You can navigate to the *Flows* monitor to get more details on the run. See [Monitoring Flows](https://help.sap.com/viewer/9f36ca35bc6145e4acdef6b4d852d560/DEV_CURRENT/en-US/b661ea0766a24c7d839df950330a89fd.html "In the Flows monitor, you can find all the deployed flows per space.") :arrow_upper_right:.

> ### Note:
> Regarding data flows:
>
> - If your data flow contains input parameters, a dialog box appears prompting the user to enter a value for each input parameter. You can either keep the default value or override it \(see [Create an Input Parameter](create-an-input-parameter-a6fb3e7.md)\).
+> +> - The initialization time for running a data flow takes an average of 20 seconds even with smaller data loads causing longer runtime for the data flow. +> - Metrics are displayed only for source and target tables and can be used for further analysis. They are available in the *Flows* monitor. +> - If a source view or an underlying view of the source view has data access controls applied to it, then no data is read from the view during the execution of the data flow. This results in incorrect data or no data in the output. +> +> For more information, see [Securing Data with Data Access Controls](https://help.sap.com/viewer/9f36ca35bc6145e4acdef6b4d852d560/DEV_CURRENT/en-US/a032e51c730147c7a1fcac125b4cfe14.html "Data access controls allow you to apply row-level security to your objects. When a data access control is applied to a data layer view or a business layer object, any user viewing its data will see only the rows for which they are authorized, based on the specified criteria.") :arrow_upper_right:. + +> ### Note: +> Regarding transformation flows: +> +> You can cancel a running transformation flow in the *Data Integration Monitor*. For more information, see [Cancel a Transformation Flow Run](https://help.sap.com/viewer/9f36ca35bc6145e4acdef6b4d852d560/DEV_CURRENT/en-US/ab885f05210f4a52aebe8306c8cad083.html "You want to cancel a transformation flow that is running.") :arrow_upper_right:. + + + + + +## Create a Schedule to Run Your Flow + +You can also create a schedule to run your data flow or your transformation flow on regular basis. Click \(Schedule\) and define your schedule. For more information, see [Scheduling Data Integration Tasks](https://help.sap.com/viewer/9f36ca35bc6145e4acdef6b4d852d560/DEV_CURRENT/en-US/7fa07621d9c0452a978cb2cc8e4cd2b1.html "Schedule data integration tasks to run periodically at a specified date or time.") :arrow_upper_right:. + +> ### Note: +> For optimal performance, it is recommended that you consider staggering the scheduled run time of tasks such as data flows and task chains that may contain these tasks. There is a limit on how many tasks can be started at the same time. If you come close to this limit, scheduled task runs may be delayed and, if you go beyond the limit, some scheduled task runs might even be skipped. + +> ### Note: +> In very rare situations, an error may occur the first time that a scheduled data flow is run in a space. In this case, you should run the data flow manually once. Subsequent scheduled runs will not require further intervention. + + + + + +## Run Your Data Flow Using a Task Chain + +You can run a data flow using a task chain. For more information, see [Creating a Task Chain](creating-a-task-chain-d1afbc2.md). + +Alternatively, you can also run your flows from the \( Data Integration Monitor\). + +From the *Flows* monitor, you can check details of your runs, and perform other actions on your flows. For more information, see [Monitoring Flows](https://help.sap.com/viewer/9f36ca35bc6145e4acdef6b4d852d560/DEV_CURRENT/en-US/b661ea0766a24c7d839df950330a89fd.html "In the Flows monitor, you can find all the deployed flows per space.") :arrow_upper_right:. 
+ diff --git a/docs/Acquiring-Preparing-Modeling-Data/Acquiring-and-Preparing-Data-in-the-Data-Builder/using-a-cloud-storage-provider-as-the-target-43d93a2.md b/docs/Acquiring-Preparing-Modeling-Data/Acquiring-and-Preparing-Data-in-the-Data-Builder/using-a-cloud-storage-provider-as-the-target-43d93a2.md index 7d01171..78dabec 100644 --- a/docs/Acquiring-Preparing-Modeling-Data/Acquiring-and-Preparing-Data-in-the-Data-Builder/using-a-cloud-storage-provider-as-the-target-43d93a2.md +++ b/docs/Acquiring-Preparing-Modeling-Data/Acquiring-and-Preparing-Data-in-the-Data-Builder/using-a-cloud-storage-provider-as-the-target-43d93a2.md @@ -1,21 +1,28 @@ + + # Using a Cloud Storage Provider As the Target If you use a cloud storage provider as the target for your replication flow, you need to consider additional specifics and conditions. +> ### Note: +> You can only use a non-SAP target for a replication flow if your admin has assigned capacity units to Premium Outbound Integration. For more information, see [Configure the Size of Your SAP Datasphere Tenant](https://help.sap.com/docs/SAP_DATASPHERE/9f804b8efa8043539289f42f372c4862/33f8ef4ec359409fb75925a68c23ebc3.html). + This topic contains the following sections: -- [Loading Data to Cloud Storage Providers](using-a-cloud-storage-provider-as-the-target-43d93a2.md#loio43d93a27150a4a218e3df14e3abdf456__section_General_AllNonSAPTargets) +- [Available Targets](using-a-cloud-storage-provider-as-the-target-43d93a2.md#loio43d93a27150a4a218e3df14e3abdf456__section_ReplTargets_NonSAPTargets) -- [Premium Outbound Integration for Non-SAP Targets](using-a-cloud-storage-provider-as-the-target-43d93a2.md#loio43d93a27150a4a218e3df14e3abdf456__section_PremiumOutbound) +- [Additional Properties](using-a-cloud-storage-provider-as-the-target-43d93a2.md#loio43d93a27150a4a218e3df14e3abdf456__section_ReplFlow_NonSAP_Targets_Properties) +- [Files](using-a-cloud-storage-provider-as-the-target-43d93a2.md#loio43d93a27150a4a218e3df14e3abdf456__section_ReplFlow_Files) - -## Loading Data to Cloud Storage Providers + + +## Available Targets The following cloud storage providers can be used as the target of a replication flow. @@ -30,7 +37,137 @@ The following cloud storage providers can be used as the target of a replication For more information about the corresponding connection types, see [Integrating Data via Connections](https://help.sap.com/docs/SAP_DATASPHERE/be5967d099974c69b77f4549425ca4c0/eb85e157ab654152bd68a8714036e463.html). -Upon completion of the initial load, the system writes a `_SUCCESS` file. Downstream applications that can access the object store directly can use this file to verify that the replication completed successfully without having to check the replication flow status. This `_SUCCESS` file is provided for all cloud storage providers listed above except BigQuery. + + + + +## Additional Properties + +In addition to the properties that apply to all targets \(see the list in [Creating a Replication Flow](creating-a-replication-flow-25e2bd7.md)\), there are the following properties that are only relevant for cloud storage providers. You can review and change these properties for the replication flow itself \(by choosing \(Browse target settings\)\) or for individual replication objects \(by selecting an object so that its properties get displayed in the side panel\). + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +Property + + + +Description + +
+ +Group Delta by + + + +\[only relevant for load type Initial and Delta\] Select *None* \[default\], *Date*, or *Hour* to specify whether you want to create additional folders for sorting updates based on the date or hour. + +
+ +File Type + + + +Select the file type. You can choose between CSV, JSON, JSONLines, and Parquet \[default\]. + +For JSON and JSON Lines, generated files are encoded in UTF-8 format. + +
+ +Enable Apache Spark Compatibility + + + +\[only relevant for file type Parquet\] Enable this option to convert and store time data type columns to int64 in the Parquet files. The int64 data type represents microseconds after midnight. This conversion allows the columns to be consumed by Apache Spark. + +
+ +File Compression + + + +\[only relevant for file type Parquet\] Select the file compression type.You can choose between None \[default\], Gzip, and Snappy. + +
+ +Suppress Duplicates + + + +Enable this option to avoid duplicate records in your target file. During initial load, if a data record already exists in the target, the default system behavior with cloud storage provider targets is to write this record to the target once again, which results in duplicate records. If this is not the desired behavior for your use case, you can change it by enabling this option. + +
+ +Delimiter + + + +\[only relevant for file type CSV\] Select the character you want to use to separate the columns from each other. You can choose between Comma \[default\], Colon, Pipe, Semicolon, and Tab. + +
+ +Header Line + + + +\[only relevant for file type CSV\] Select True \[default\] or False to specify whether you want the file to have a header line or not. + +
+ +Orient + + + +\[only relevant for file type JSON\] Select the internal structure for the generated JSON files. Currently the only available option is Records: \[\{column -\> value\}, ... ,\{column -\> value\}\] + +
+ + + +
+ +## Files + +Running a replication flow with a cloud storage target creates various files and structures: + +Upon completion of the initial load, the system writes a `_SUCCESS` file. Downstream applications that can access the object store directly can use this file to verify that the replication completed successfully without having to check the replication flow status. The `.sap.partfile.metadata` objects include metadata information for the replication object. It exists in the root of each data file directory and is created and referenced as part of replication flow processing. Additionally, you can leverage these objects for interpreting and processing the data files. @@ -58,11 +195,3 @@ Each file contains the source columns as defined in the mapping for the replicat - *\_\_sequence\_number*: An integer value that reflects the sequential order of the delta row in relation to other deltas. This column is empty for initial load rows and is not populated for all source systems \(for example, ABAP\). - *\_\_timestamp*: The UTC date and time the system wrote the row. - - - - -## Premium Outbound Integration for Non-SAP Targets - -You can only use a non-SAP target for a replication flow if your admin has assigned capacity units to Premium Outbound Integration. For more information, see [Configure the Size of Your SAP Datasphere Tenant](https://help.sap.com/docs/SAP_DATASPHERE/9f804b8efa8043539289f42f372c4862/33f8ef4ec359409fb75925a68c23ebc3.html). - diff --git a/docs/Acquiring-Preparing-Modeling-Data/Acquiring-and-Preparing-Data-in-the-Data-Builder/using-apache-kafka-as-the-target-6df55db.md b/docs/Acquiring-Preparing-Modeling-Data/Acquiring-and-Preparing-Data-in-the-Data-Builder/using-apache-kafka-as-the-target-6df55db.md index 58cf8e2..1004856 100644 --- a/docs/Acquiring-Preparing-Modeling-Data/Acquiring-and-Preparing-Data-in-the-Data-Builder/using-apache-kafka-as-the-target-6df55db.md +++ b/docs/Acquiring-Preparing-Modeling-Data/Acquiring-and-Preparing-Data-in-the-Data-Builder/using-apache-kafka-as-the-target-6df55db.md @@ -1,5 +1,7 @@ + + # Using Apache Kafka As the Target If you use Apache Kafka as the target for your replication flow, you need to consider the following additional specifics and conditions. @@ -7,29 +9,94 @@ If you use Apache Kafka as the target for your replication flow, you need to con > ### Note: > You can only use a non-SAP target for a replication flow if your admin has assigned capacity units to Premium Outbound Integration. For more information, see [Configure the Size of Your SAP Datasphere Tenant](https://help.sap.com/docs/SAP_DATASPHERE/9f804b8efa8043539289f42f372c4862/33f8ef4ec359409fb75925a68c23ebc3.html). -You can only replicate to **new** target objects \(Kafka topics\). +This topic contains the following sections: + +- [Additional Properties](using-apache-kafka-as-the-target-6df55db.md#loio6df55db4028842c1b1866e709ffef456__section_ReplFlow_Kafka_Properties) + +- [Further Information](using-apache-kafka-as-the-target-6df55db.md#loio6df55db4028842c1b1866e709ffef456__section_ReplFlow_Kafka_Info) + + + + + + +## Additional Properties + +In addition to the properties that apply to all targets \(see the list in [Creating a Replication Flow](creating-a-replication-flow-25e2bd7.md)\), there are the following properties that are only relevant for Apache Kafka. 
You can review and change these properties for the replication flow itself \(by choosing \(Browse target settings\)\) or for individual replication objects \(by selecting an object so that its properties get displayed in the side panel\). + + + + + + + + + + + + + + + + + + + + + + + +
+ +Property + + -You can use the following **formats**: +Description -- Apache Avro +
-- JSON +Number of Partitions + -You can choose from the following **compression types**: +Enter the number of Kafka partitions to be used for the Kafka topic. -- No Compression +
-- Gzip +Replication Factor -- Snappy + -- LZ4 +Enter the Kafka replication factor to be used for the Kafka topic. -- Zstandard +
+Message Encoder -The **target container** is automatically set to "/" and cannot be changed. + + +Select the message format for the Kafka topic You can choose between AVRO and JSON \[default\]. + +
+ +File Compression + + + +Select the file compression type. You can choose between No Compression \[default\], Gzip, Snappy, LZ4, and ZStandard. + +
+ + + +
+ +## Further Information + +If you want to use an **existing Kafka topic** as the target, keep in mind that schema registry is currently not supported. + +The **target container** is automatically set to "/" because Kafka does not have a superordinate container layer. You can **rename** target objects \(Kafka topics\). The following conditions apply: @@ -38,3 +105,5 @@ You can **rename** target objects \(Kafka topics\). The following conditions app - The maximum length for the new name is 249 characters. +For more information and examples, see also [SAP Datasphere Replication Flows Blog Series Part 3 – Integration with Kafka.](https://blogs.sap.com/2023/12/04/sap-datasphere-replication-flows-blog-series-part-3-integration-with-kafka/) + diff --git a/docs/Acquiring-Preparing-Modeling-Data/Acquiring-and-Preparing-Data-in-the-Data-Builder/using-google-bigquery-as-the-target-56d4472.md b/docs/Acquiring-Preparing-Modeling-Data/Acquiring-and-Preparing-Data-in-the-Data-Builder/using-google-bigquery-as-the-target-56d4472.md index 9c07a50..3f6c1df 100644 --- a/docs/Acquiring-Preparing-Modeling-Data/Acquiring-and-Preparing-Data-in-the-Data-Builder/using-google-bigquery-as-the-target-56d4472.md +++ b/docs/Acquiring-Preparing-Modeling-Data/Acquiring-and-Preparing-Data-in-the-Data-Builder/using-google-bigquery-as-the-target-56d4472.md @@ -6,22 +6,38 @@ If you use Google BigQuery as the target for your replication flow, you need to You can only use a non-SAP target for a replication flow if your admin has assigned capacity units to Premium Outbound Integration. For more information, see [Configure the Size of Your SAP Datasphere Tenant](https://help.sap.com/docs/SAP_DATASPHERE/9f804b8efa8043539289f42f372c4862/33f8ef4ec359409fb75925a68c23ebc3.html). -The system always uses **write mode** *Append*. +This topic contains the following sections: -The *Truncate* setting is not available. +- [General Properties](using-google-bigquery-as-the-target-56d4472.md#loio56d4472a0e1f44d58e07ca26ab666328__section_ReplFlow_GBQ_General) -The system automatically adds the following columns to the **schema of the target table**: +- [Target Tables](using-google-bigquery-as-the-target-56d4472.md#loio56d4472a0e1f44d58e07ca26ab666328__section_ReplFlow_GBQ_TargetTables) -- OPERATION\_FLAG +- [Target Columns](using-google-bigquery-as-the-target-56d4472.md#loio56d4472a0e1f44d58e07ca26ab666328__section_ReplFlow_GBQ_TargetColumns) -- IS\_DELETED +- [Data Types](using-google-bigquery-as-the-target-56d4472.md#loio56d4472a0e1f44d58e07ca26ab666328__section_ReplFlow_GBQ_DataTypes) -- RECORDSTAMP +- [Primary Key](using-google-bigquery-as-the-target-56d4472.md#loio56d4472a0e1f44d58e07ca26ab666328__section_ReplFlow_GBQ_PrimaryKey) +- [SQL Statement](using-google-bigquery-as-the-target-56d4472.md#loio56d4472a0e1f44d58e07ca26ab666328__section_ReplFlow_GBQ_SQL) -These columns are needed to capture changes in the source so that they can be replicated to the target, and you cannot change their names or settings. -**Target table names** may only contain the following special characters: + + + + +## General Properties + +The system always uses **write mode** *Append*. + +The *Truncate* setting is not available. + + + + + +## Target Tables + +Target table names may only contain the following special characters: - Hyphen \(-\) @@ -30,7 +46,26 @@ These columns are needed to capture changes in the source so that they can be re - Space \( \) -The maximum length for **target column names** is 300 characters. 
+The maximum length for target column names is 300 characters. + +If the **target structure already exists** in Google BigQuery, you cannot make changes such as renaming a column or changing a data type in SAP Datasphere. You need to do this directly in Google BigQuery. + + + + + +## Target Columns + +The system automatically adds the following columns to the **schema of the target table**: + +- OPERATION\_FLAG + +- IS\_DELETED + +- RECORDSTAMP + + +These columns are needed to capture changes in the source so that they can be replicated to the target, and you cannot change their names or settings. In the following cases, target column names have to be changed so that they can be used for replication into Google BigQuery. You can change the names for the relevant target columns manually or choose *Auto-Projection*. @@ -41,9 +76,13 @@ In the following cases, target column names have to be changed so that they can - Some column name prefixes are reserved within Google BigQuery, for example \_TABLE\_, \_FILE\_, or \_PARTITION\_. If a target column name contains any of these prefixes, Auto-Projection adds AUTOPREFIX\_ in front of the existing name. -If the **target structure already exists** in Google BigQuery, you cannot make changes such as renaming a column or changing a data type in SAP Datasphere. You need to do this directly in Google BigQuery. -Google BigQuery does not support the **data types** DECFLOAT16, DECFLOAT34 and UINT64. The system automatically converts DECFLOAT16 and DECFLOAT34 into DECIMAL\(38,9\) and UINT64 into DECIMAL\(20,0\). The first value in brackets is the precision \(total number of digits\), the second one is the scale \(number of digits after the decimal point\). + + + +## Data Types + +Google BigQuery does not support the data types DECFLOAT16, DECFLOAT34 and UINT64. The system automatically converts DECFLOAT16 and DECFLOAT34 into DECIMAL\(38,9\) and UINT64 into DECIMAL\(20,0\). The first value in brackets is the precision \(total number of digits\), the second one is the scale \(number of digits after the decimal point\). For the DECFLOAT data types, the following conditions apply: @@ -60,6 +99,12 @@ For the DECFLOAT data types, the following conditions apply: For data type UINT64, the number of digits before the decimal point \(precision minus scale\) must be 20 or greater. + + + + +## Primary Key + A **primary key** can be created in Google BigQuery if the following prerequisites are met: - There are no more than **16 key columns**. @@ -86,5 +131,11 @@ A **primary key** can be created in Google BigQuery if the following prerequisit If either of these prerequisites is not met and you still run a replication flow, **none** of the columns in the target gets defined as a primary key column. + + + + +## SQL Statement + You can view and copy the **SQL Create Table statement** of a replication object so that you can modify and run it directly in Google BigQuery. This can be useful, for example, if you want to change partitions and clusters for a new target object. To do so, choose View SQL Create Table Statement or Copy SQL Create Table Statement from the context menu for the relevant target object. 
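The conversion and renaming rules described above lend themselves to a compact illustration. The following Python sketch is not part of SAP Datasphere or of the change set above; the function names and the sample column name are assumptions made purely for illustration, and only the documented prefix rule of Auto-Projection is shown.

```python
# Sketch of the documented type conversions for Google BigQuery targets.
# DECIMAL(p,s): p = precision (total number of digits), s = scale (digits after the decimal point).
CONVERSIONS = {
    "DECFLOAT16": "DECIMAL(38,9)",
    "DECFLOAT34": "DECIMAL(38,9)",
    "UINT64": "DECIMAL(20,0)",
}

def bigquery_target_type(source_type: str) -> str:
    """Return the data type that would be used on the Google BigQuery side."""
    return CONVERSIONS.get(source_type, source_type)

# Column name prefixes reserved by Google BigQuery; Auto-Projection prepends AUTOPREFIX_.
RESERVED_PREFIXES = ("_TABLE_", "_FILE_", "_PARTITION_")

def auto_projection_name(column_name: str) -> str:
    """Rename a column that starts with a reserved prefix (simplified illustration)."""
    if column_name.upper().startswith(RESERVED_PREFIXES):
        return "AUTOPREFIX_" + column_name
    return column_name

print(bigquery_target_type("DECFLOAT16"))     # DECIMAL(38,9)
print(auto_projection_name("_PARTITION_ID"))  # AUTOPREFIX__PARTITION_ID
```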
diff --git a/docs/Acquiring-Preparing-Modeling-Data/Buisiness-Builder/authorization-scenario-46d8c42.md b/docs/Acquiring-Preparing-Modeling-Data/Buisiness-Builder/authorization-scenario-46d8c42.md index d0a63f3..77b9a3a 100644 --- a/docs/Acquiring-Preparing-Modeling-Data/Buisiness-Builder/authorization-scenario-46d8c42.md +++ b/docs/Acquiring-Preparing-Modeling-Data/Buisiness-Builder/authorization-scenario-46d8c42.md @@ -17,7 +17,7 @@ In the Business Builder, modelers can create, assign, and consume authorization [Securing Data with Data Access Controls](https://help.sap.com/viewer/9f36ca35bc6145e4acdef6b4d852d560/DEV_CURRENT/en-US/a032e51c730147c7a1fcac125b4cfe14.html "Data access controls allow you to apply row-level security to your objects. When a data access control is applied to a data layer view or a business layer object, any user viewing its data will see only the rows for which they are authorized, based on the specified criteria.") :arrow_upper_right: -[Create a "Simple Values" Data Access Control](https://help.sap.com/viewer/9f36ca35bc6145e4acdef6b4d852d560/DEV_CURRENT/en-US/5246328ec59045cb9c2aa693daee2557.html "Users with the DW Space Administrator role (or equivalent privileges) can create data access controls in which criteria are defined as simple values. Each user can only see the records that match any of the single values she is authorized for in the permissions entity.") :arrow_upper_right: +[Create a "Single Values" Data Access Control](https://help.sap.com/viewer/9f36ca35bc6145e4acdef6b4d852d560/DEV_CURRENT/en-US/5246328ec59045cb9c2aa693daee2557.html "Users with the DW Space Administrator role (or equivalent privileges) can create data access controls in which criteria are defined as single values. Each user can only see the records that match any of the single values she is authorized for in the permissions entity.") :arrow_upper_right: [Apply a Data Access Control](../apply-a-data-access-control-8f79fc8.md "You can apply one or more data access controls to a view to control the data that users will see based on the specified criteria.") diff --git a/docs/Acquiring-Preparing-Modeling-Data/Creating-Finding-Sharing-Objects/evaluating-catalog-assets-dc061a2.md b/docs/Acquiring-Preparing-Modeling-Data/Creating-Finding-Sharing-Objects/evaluating-catalog-assets-dc061a2.md index 80a5fb6..91f9738 100644 --- a/docs/Acquiring-Preparing-Modeling-Data/Creating-Finding-Sharing-Objects/evaluating-catalog-assets-dc061a2.md +++ b/docs/Acquiring-Preparing-Modeling-Data/Creating-Finding-Sharing-Objects/evaluating-catalog-assets-dc061a2.md @@ -21,7 +21,7 @@ You must be assigned one of the following: > ### Note: - > To see the details of any terms, tags, or KPIs, the role must also have the *Read* permission for each of the following privileges: *Catalog Glossary Object*, *Catalog Tag Hierarchy*, and *Catalog KPI Object*. + > To see the details of any terms, tags, data products, or KPIs, the role must also have the *Read* permission for each of the following privileges: *Catalog Glossary Object*, *Catalog Tag Hierarchy*, and *Catalog KPI Object*. > ### Tip: @@ -567,3 +567,8 @@ The catalog automatically detects the change in real time: - If you edited an existing file, the metadata for the asset is automatically updated. 
+**Related Information** + + +[Evaluating your Data Product](../evaluating-your-data-product-335f49b.md "Each data product has a dedicated page that describes the data product in detail to allow a transparent elaboration.") + diff --git a/docs/Acquiring-Preparing-Modeling-Data/Creating-Finding-Sharing-Objects/finding-and-accessing-data-in-the-catalog-1047825.md b/docs/Acquiring-Preparing-Modeling-Data/Creating-Finding-Sharing-Objects/finding-and-accessing-data-in-the-catalog-1047825.md index 2180ec7..3472c18 100644 --- a/docs/Acquiring-Preparing-Modeling-Data/Creating-Finding-Sharing-Objects/finding-and-accessing-data-in-the-catalog-1047825.md +++ b/docs/Acquiring-Preparing-Modeling-Data/Creating-Finding-Sharing-Objects/finding-and-accessing-data-in-the-catalog-1047825.md @@ -4,7 +4,7 @@ # Finding and Accessing Data in the Catalog -Discover data by searching and filtering results. Mark your favorite assets, terms, and key performance indicators \(KPIs\). +Discover data by searching and filtering results. Mark your favorite assets, listed data products, terms, and key performance indicators \(KPIs\). The catalog provides an effective data governance strategy by bringing together an organized inventory of business metadata and data assets to enable business and technical users to unlock the full potential of their enterprise data. The catalog is a central place to discover, classify, understand, and prepare all the data in your enterprise. @@ -24,7 +24,7 @@ You can search for assets by clicking \(*Catal ## Search by Entering a String -You can find objects globally by using the search bar and entering all or part of the characters in a term, asset, or KPI. Enter one or more characters in the *Search* field and press *Enter* \(or click *Search*\). +You can find objects globally by using the search bar and entering all or part of the characters in a term, asset, listed data product, or KPI. Enter one or more characters in the *Search* field and press *Enter* \(or click *Search*\). As you type, the field will begin proposing objects and search strings. Select an object to open it directly. Click on a string to trigger a search on it. @@ -71,6 +71,18 @@ Shows only assets. An asset is any data or analytic object made available in the +Data Products + + + + +Shows only listed data products. Data products must be listed in Data Marketplace before they appear in the catalog. A data product is either free or purchased data from a third-party provider that you can use in this product. 
+ + + + + + Terms diff --git a/docs/Acquiring-Preparing-Modeling-Data/Creating-Finding-Sharing-Objects/repository-explorer-f8ce0b4.md b/docs/Acquiring-Preparing-Modeling-Data/Creating-Finding-Sharing-Objects/repository-explorer-f8ce0b4.md index b88630c..5630e96 100644 --- a/docs/Acquiring-Preparing-Modeling-Data/Creating-Finding-Sharing-Objects/repository-explorer-f8ce0b4.md +++ b/docs/Acquiring-Preparing-Modeling-Data/Creating-Finding-Sharing-Objects/repository-explorer-f8ce0b4.md @@ -268,7 +268,7 @@ You can act on objects in the list in the following ways: - SQL View \(see [Creating an SQL View](../creating-an-sql-view-81920e4.md)\) - Entity-Relationship Model \(see [Creating an Entity-Relationship Model](../creating-an-entity-relationship-model-a91c042.md)\) - Data Flow \(see [Creating a Data Flow](../Acquiring-and-Preparing-Data-in-the-Data-Builder/creating-a-data-flow-e30fd14.md)\) - - Data Access Control \(see [Create a "Simple Values" Data Access Control](https://help.sap.com/viewer/9f36ca35bc6145e4acdef6b4d852d560/DEV_CURRENT/en-US/5246328ec59045cb9c2aa693daee2557.html "Users with the DW Space Administrator role (or equivalent privileges) can create data access controls in which criteria are defined as simple values. Each user can only see the records that match any of the single values she is authorized for in the permissions entity.") :arrow_upper_right:\) + - Data Access Control \(see [Create a "Single Values" Data Access Control](https://help.sap.com/viewer/9f36ca35bc6145e4acdef6b4d852d560/DEV_CURRENT/en-US/5246328ec59045cb9c2aa693daee2557.html "Users with the DW Space Administrator role (or equivalent privileges) can create data access controls in which criteria are defined as single values. Each user can only see the records that match any of the single values she is authorized for in the permissions entity.") :arrow_upper_right:\) - Currency Conversion Views \(see [Enabling Currency Conversion with TCUR\* Tables and Views](enabling-currency-conversion-with-tcur-tables-and-views-b462239.md)\) @@ -302,7 +302,7 @@ You can act on objects in the list in the following ways: - Local Table \(see [Creating a Local Table](../Acquiring-and-Preparing-Data-in-the-Data-Builder/creating-a-local-table-2509fe4.md)\) - Graphical View \(see [Creating a Graphical View](../creating-a-graphical-view-27efb47.md)\) - SQL View \(see [Creating an SQL View](../creating-an-sql-view-81920e4.md)\) - - Data Access Control \(see [Create a "Simple Values" Data Access Control](https://help.sap.com/viewer/9f36ca35bc6145e4acdef6b4d852d560/DEV_CURRENT/en-US/5246328ec59045cb9c2aa693daee2557.html "Users with the DW Space Administrator role (or equivalent privileges) can create data access controls in which criteria are defined as simple values. Each user can only see the records that match any of the single values she is authorized for in the permissions entity.") :arrow_upper_right:\) + - Data Access Control \(see [Create a "Single Values" Data Access Control](https://help.sap.com/viewer/9f36ca35bc6145e4acdef6b4d852d560/DEV_CURRENT/en-US/5246328ec59045cb9c2aa693daee2557.html "Users with the DW Space Administrator role (or equivalent privileges) can create data access controls in which criteria are defined as single values. 
Each user can only see the records that match any of the single values she is authorized for in the permissions entity.") :arrow_upper_right:\)
  - Analytic Model \(see [Creating an Analytic Model](../Modeling-Data-in-the-Data-Builder/creating-an-analytic-model-e5fbe9e.md)\)
  - Task Chain \(see [Creating a Task Chain](../Acquiring-and-Preparing-Data-in-the-Data-Builder/creating-a-task-chain-d1afbc2.md)\)

diff --git a/docs/Acquiring-Preparing-Modeling-Data/Modeling-Data-in-the-Data-Builder/add-measures-e4cc3e8.md b/docs/Acquiring-Preparing-Modeling-Data/Modeling-Data-in-the-Data-Builder/add-measures-e4cc3e8.md
index 5b17445..2bfeb73 100644
--- a/docs/Acquiring-Preparing-Modeling-Data/Modeling-Data-in-the-Data-Builder/add-measures-e4cc3e8.md
+++ b/docs/Acquiring-Preparing-Modeling-Data/Modeling-Data-in-the-Data-Builder/add-measures-e4cc3e8.md
@@ -40,7 +40,7 @@ There are different types of measures to choose from:

6. For count distinct measures, select the dimensions for which this measure shall be applied.

-7. For calculated measures, restricted measures, and fact source measures, you can define an exception aggregation.
+7. For calculated measures, restricted measures, and fact source measures, you can define an exception aggregation. The exception aggregation determines how a measure is aggregated with regard to one or more dimensions. A dimension is needed for exception aggregation in order to define the granularity with which the aggregation rule is applied. For more information, see [Aggregation and Exception Aggregation](aggregation-and-exception-aggregation-88ca394.md).

8. Decide whether your measure should be an *Auxiliary Measure*. An auxiliary measure can be used for further calculation but it will be hidden in the story in SAP Analytics Cloud.

diff --git a/docs/Acquiring-Preparing-Modeling-Data/Modeling-Data-in-the-Data-Builder/aggregation-and-exception-aggregation-88ca394.md b/docs/Acquiring-Preparing-Modeling-Data/Modeling-Data-in-the-Data-Builder/aggregation-and-exception-aggregation-88ca394.md
new file mode 100644
index 0000000..036644b
--- /dev/null
+++ b/docs/Acquiring-Preparing-Modeling-Data/Modeling-Data-in-the-Data-Builder/aggregation-and-exception-aggregation-88ca394.md
@@ -0,0 +1,225 @@
+
+
+# Aggregation and Exception Aggregation
+
+Aggregation is a process to summarize measures, for example, by grouping values of multiple rows into a single result.
+
+The standard aggregation is used to aggregate with respect to any dimension not in the query drill-down. Exception aggregation is always performed in addition to standard aggregation. It defines the exception: it is used to aggregate with respect to the defined dimensions for exception aggregation. You can use exception aggregation, for example, if warehouse stock cannot be totaled up over time, or if a counter counts the number of values of a specific dimension.
+
+A measure is calculated in the following order: first the standard aggregation, then the formula, and then the exception aggregation.
+
+> ### Note:
+> Using many dimensions in an exception aggregation will have an impact on performance. This is because more data must be processed after the main aggregation, which uses the SAP HANA built-in engines and data structures. While these engines can use optimized data storage and processes, the exception aggregation is more generic and slower than the low-level aggregation.
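Before the business example below, here is a minimal pandas sketch of the same idea. It is an illustration only: the DataFrame, column names, and values are invented and are not part of SAP Datasphere. It shows a standard SUM aggregation over the drill-down dimensions together with a count-distinct exception aggregation over the CUSTOMER dimension, even though CUSTOMER is not part of the result set.

```python
import pandas as pd

# Hypothetical fact data: one row per posting.
facts = pd.DataFrame({
    "YEAR":       [2022, 2022, 2022, 2022],
    "STATE":      ["Colorado", "Colorado", "Florida", "Florida"],
    "CUSTOMER":   ["C1", "C2", "C1", "C3"],
    "NET_AMOUNT": [100.0, 200.0, 50.0, 75.0],
})

drilldown = ["STATE"]  # dimensions in the query drill-down

# Standard aggregation: SUM over every dimension that is not in the drill-down.
net_amount = facts.groupby(drilldown)["NET_AMOUNT"].sum()

# Exception aggregation: COUNT DISTINCT over the exception-aggregation dimension
# CUSTOMER, evaluated per drill-down member although CUSTOMER is not displayed.
customers = facts.groupby(drilldown)["CUSTOMER"].nunique()

# A formula on top of both results: average net amount per customer.
result = pd.DataFrame({
    "NET_AMOUNT": net_amount,
    "CUSTOMERS": customers,
    "AVG_PER_CUSTOMER": net_amount / customers,
})
print(result)
```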
+ + + + + +## Example + +Count number of customer within a year: + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +Count Number of Customers per Year \(2022\) + + + +NetAmount 2022 + + + +Number of Customers in 2022 + + + +Average NetAmount per Customer + +
+ +Colorado + + + +55,871.75 $ + + + +35 + + + +1,596.34 $ + +
+ +Conneticut + + + +7,924.00 $ + + + +5 + + + +1,584.80 $ + +
+ +Delaware + + + +35,217.10 $ + + + +7 + + + +5,031.01 $ + +
+ +Florida + + + +10,634.35 $ + + + +14 + + + +759.60 $ + +
+ +Florida + + + +47,895.40 $ + + + +25 + + + +1,915.82 $ + +
+ +Florida + + + +582.85 $ + + + +1 + + + +582.85 $ + +
+ +Maryland + + + +1,521.15 $ + + + +4 + + + +380.29 $ + +
+ +Pennsylvania + + + +53,748.20 $ + + + +23 + + + +2,336.88 $ + +
The above query shows the companies, the revenue of 2022, and the number of customers within this year. The special aspect here is that the "counting of the customers" does not need to have the CUSTOMER dimension itself within the current result set. The counting is done via exception aggregation over the exception aggregation dimension CUSTOMER. This allows you to leave the dimension CUSTOMER out of the result set. Based on this, you can do further analysis, for example with a formula that calculates the average revenue per customer in the year.

diff --git a/docs/Acquiring-Preparing-Modeling-Data/Modeling-Data-in-the-Data-Builder/create-a-restricted-measure-bfb43dd.md b/docs/Acquiring-Preparing-Modeling-Data/Modeling-Data-in-the-Data-Builder/create-a-restricted-measure-bfb43dd.md
index 246aa1c..9f4fd33 100644
--- a/docs/Acquiring-Preparing-Modeling-Data/Modeling-Data-in-the-Data-Builder/create-a-restricted-measure-bfb43dd.md
+++ b/docs/Acquiring-Preparing-Modeling-Data/Modeling-Data-in-the-Data-Builder/create-a-restricted-measure-bfb43dd.md
@@ -16,7 +16,13 @@ With a restricted measure, you can restrict available dimensions.

3. Enter your expression in the formula editor. An atomic expression usually has the format` (e.g. Country = 'Germany').`

-4. You can enable constant selection for the restricted measure. You can choose to set it for all dimensions or just for selected dimensions. A measure with constant selection will not be impacted by filters or drill-downs on the constant dimensions.
+4. You can define an exception aggregation. The exception aggregation determines how a measure is aggregated with regard to one or more dimensions. A dimension is needed for exception aggregation in order to define the granularity with which the aggregation rule is applied.
+
+    Exception aggregation is supported for restricted measures on fact sources. You can use the exception aggregation to calculate the stock for a given point in time. You can also define no restrictions for the restricted measure, and just use it for the exception aggregation.
+
+    For more information, see [Aggregation and Exception Aggregation](aggregation-and-exception-aggregation-88ca394.md).
+
+5. You can enable constant selection for the restricted measure. You can choose to set it for all dimensions or just for selected dimensions. A measure with constant selection will not be impacted by filters or drill-downs on the constant dimensions.

Enabling constant selection is useful for comparing a single value with several different values. For example, you could create a restricted measure for sales in 2022, and then compare sales in 2022 with sales for all other years in the same table.

diff --git a/docs/Acquiring-Preparing-Modeling-Data/Modeling-Data-in-the-Data-Builder/creating-a-hierarchy-with-directory-36c39ee.md b/docs/Acquiring-Preparing-Modeling-Data/Modeling-Data-in-the-Data-Builder/creating-a-hierarchy-with-directory-36c39ee.md
index 24ef3fc..f93520e 100644
--- a/docs/Acquiring-Preparing-Modeling-Data/Modeling-Data-in-the-Data-Builder/creating-a-hierarchy-with-directory-36c39ee.md
+++ b/docs/Acquiring-Preparing-Modeling-Data/Modeling-Data-in-the-Data-Builder/creating-a-hierarchy-with-directory-36c39ee.md
@@ -2,7 +2,12 @@

# Creating a Hierarchy with Directory

-Select a *Semantic Usage* of *Hierarchy with Directory* to indicate that your entity contains one or more parent-child hierarchies and has an association to a directory dimension containing a list of the hierarchies.
These types of hierarchy entities can include nodes from multiple dimensions \(for example, country, cost center group, and cost center\) and are commonly imported from SAP S/4HANA Cloud and SAP BW systems. +Select a *Semantic Usage* of *Hierarchy with Directory* to indicate that your entity contains one or more parent-child hierarchies and has an association to a directory dimension containing a list of the hierarchies. + +These types of hierarchy entities can include nodes from multiple dimensions \(for example, country, cost center group, and cost center\) and are commonly imported from SAP S/4HANA Cloud and SAP BW systems. + +> ### Note: +> For SAP BW systems, you must have defined an SAP BW Bridge connection to get the *Hierarchy with Directory*. For more information, see [Preparing Connectivity for for ODP Source Systems in SAP BW Bridge](https://help.sap.com/viewer/e2d2b48377c14490b55466b5f1872640/DEV_CURRENT/en-US/18d69431efb34a71b1939eb5a071db08.html "") :arrow_upper_right:. This topic contains the following sections: diff --git a/docs/Acquiring-Preparing-Modeling-Data/Modeling-Data-in-the-Data-Builder/creating-an-external-hierarchy-dbac7a8.md b/docs/Acquiring-Preparing-Modeling-Data/Modeling-Data-in-the-Data-Builder/creating-an-external-hierarchy-dbac7a8.md index be964fb..0559b5e 100644 --- a/docs/Acquiring-Preparing-Modeling-Data/Modeling-Data-in-the-Data-Builder/creating-an-external-hierarchy-dbac7a8.md +++ b/docs/Acquiring-Preparing-Modeling-Data/Modeling-Data-in-the-Data-Builder/creating-an-external-hierarchy-dbac7a8.md @@ -77,18 +77,6 @@ Select a *Semantic Usage* of *Hierarchy* to indicate that your entity contains p - - - - - - Expose for Consumption - - - - - In order to consume your hierarchy in SAP Analytics Cloud, you must enable the *Expose for Consumption* switch. - diff --git a/docs/Acquiring-Preparing-Modeling-Data/Modeling-Data-in-the-Data-Builder/modeling-data-in-the-data-builder-5c1e3d4.md b/docs/Acquiring-Preparing-Modeling-Data/Modeling-Data-in-the-Data-Builder/modeling-data-in-the-data-builder-5c1e3d4.md index b33717f..ed7231d 100644 --- a/docs/Acquiring-Preparing-Modeling-Data/Modeling-Data-in-the-Data-Builder/modeling-data-in-the-data-builder-5c1e3d4.md +++ b/docs/Acquiring-Preparing-Modeling-Data/Modeling-Data-in-the-Data-Builder/modeling-data-in-the-data-builder-5c1e3d4.md @@ -218,7 +218,7 @@ All the objects you import or create in the *Data Builder* are listed on the *Da - Local Table \(see [Creating a Local Table](../Acquiring-and-Preparing-Data-in-the-Data-Builder/creating-a-local-table-2509fe4.md)\) - Graphical View \(see [Creating a Graphical View](../creating-a-graphical-view-27efb47.md)\) - SQL View \(see [Creating an SQL View](../creating-an-sql-view-81920e4.md)\) - - Data Access Control \(see [Create a "Simple Values" Data Access Control](https://help.sap.com/viewer/9f36ca35bc6145e4acdef6b4d852d560/DEV_CURRENT/en-US/5246328ec59045cb9c2aa693daee2557.html "Users with the DW Space Administrator role (or equivalent privileges) can create data access controls in which criteria are defined as simple values. 
Each user can only see the records that match any of the single values she is authorized for in the permissions entity.") :arrow_upper_right:\) + - Data Access Control \(see [Create a "Single Values" Data Access Control](https://help.sap.com/viewer/9f36ca35bc6145e4acdef6b4d852d560/DEV_CURRENT/en-US/5246328ec59045cb9c2aa693daee2557.html "Users with the DW Space Administrator role (or equivalent privileges) can create data access controls in which criteria are defined as single values. Each user can only see the records that match any of the single values she is authorized for in the permissions entity.") :arrow_upper_right:\) - Analytic Model \(see [Creating an Analytic Model](creating-an-analytic-model-e5fbe9e.md)\) - Task Chain \(see [Creating a Task Chain](../Acquiring-and-Preparing-Data-in-the-Data-Builder/creating-a-task-chain-d1afbc2.md)\) diff --git a/docs/Acquiring-Preparing-Modeling-Data/index.md b/docs/Acquiring-Preparing-Modeling-Data/index.md index 6ea2c74..a597fa5 100644 --- a/docs/Acquiring-Preparing-Modeling-Data/index.md +++ b/docs/Acquiring-Preparing-Modeling-Data/index.md @@ -48,6 +48,7 @@ - [Replicate Remote Table Data](Acquiring-and-Preparing-Data-in-the-Data-Builder/replicate-remote-table-data-7e258a7.md) - [Accelerate Table Data Access with In-Memory Storage](Acquiring-and-Preparing-Data-in-the-Data-Builder/accelerate-table-data-access-with-in-memory-storage-407d1df.md) - [Process Source Changes in the Table Editor](Acquiring-and-Preparing-Data-in-the-Data-Builder/process-source-changes-in-the-table-editor-622328b.md) + - [Process Source Changes for Several Remote Tables](Acquiring-and-Preparing-Data-in-the-Data-Builder/process-source-changes-for-several-remote-tables-4e0be16.md) - [Modify or Duplicate Remote Tables](Acquiring-and-Preparing-Data-in-the-Data-Builder/modify-or-duplicate-remote-tables-8c3632f.md) - [Creating a Local Table](Acquiring-and-Preparing-Data-in-the-Data-Builder/creating-a-local-table-2509fe4.md) - [Columns](Acquiring-and-Preparing-Data-in-the-Data-Builder/columns-8f0f40d.md) @@ -102,6 +103,7 @@ - [Processing Changes to Source and Target Tables](Acquiring-and-Preparing-Data-in-the-Data-Builder/processing-changes-to-source-and-target-tables-705292c.md) - [Process Target Changes in the Transformation Flow Editor](Acquiring-and-Preparing-Data-in-the-Data-Builder/process-target-changes-in-the-transformation-flow-editor-75ab3ef.md) - [Process Source Changes in the View Transform Editor](Acquiring-and-Preparing-Data-in-the-Data-Builder/process-source-changes-in-the-view-transform-editor-098ada1.md) + - [Running a Flow](Acquiring-and-Preparing-Data-in-the-Data-Builder/running-a-flow-5b591d4.md) - [Creating a Task Chain](Acquiring-and-Preparing-Data-in-the-Data-Builder/creating-a-task-chain-d1afbc2.md) - [Preparing Data in the Data Builder](preparing-data-in-the-data-builder-f2e359c.md) - [Creating a Graphical View](creating-a-graphical-view-27efb47.md) @@ -171,6 +173,7 @@ - [Add Measures](Modeling-Data-in-the-Data-Builder/add-measures-e4cc3e8.md) - [Create a Currency Conversion Measure](Modeling-Data-in-the-Data-Builder/create-a-currency-conversion-measure-ec00efb.md) - [Create a Restricted Measure](Modeling-Data-in-the-Data-Builder/create-a-restricted-measure-bfb43dd.md) + - [Aggregation and Exception Aggregation](Modeling-Data-in-the-Data-Builder/aggregation-and-exception-aggregation-88ca394.md) - [Add a Variable](Modeling-Data-in-the-Data-Builder/add-a-variable-cdd8fa0.md) - [Using the Data 
Preview](Modeling-Data-in-the-Data-Builder/using-the-data-preview-9f1fa73.md) - [Analytical Datasets \(Deprecated\)](Modeling-Data-in-the-Data-Builder/analytical-datasets-deprecated-70dab71.md) diff --git a/docs/Acquiring-Preparing-Modeling-Data/preparing-data-in-the-data-builder-f2e359c.md b/docs/Acquiring-Preparing-Modeling-Data/preparing-data-in-the-data-builder-f2e359c.md index 88ac94d..ad7b653 100644 --- a/docs/Acquiring-Preparing-Modeling-Data/preparing-data-in-the-data-builder-f2e359c.md +++ b/docs/Acquiring-Preparing-Modeling-Data/preparing-data-in-the-data-builder-f2e359c.md @@ -152,7 +152,7 @@ All the objects you import or create in the *Data Builder* are listed on the *Da - Local Table \(see [Creating a Local Table](Acquiring-and-Preparing-Data-in-the-Data-Builder/creating-a-local-table-2509fe4.md)\) - Graphical View \(see [Creating a Graphical View](creating-a-graphical-view-27efb47.md)\) - SQL View \(see [Creating an SQL View](creating-an-sql-view-81920e4.md)\) - - Data Access Control \(see [Create a "Simple Values" Data Access Control](https://help.sap.com/viewer/9f36ca35bc6145e4acdef6b4d852d560/DEV_CURRENT/en-US/5246328ec59045cb9c2aa693daee2557.html "Users with the DW Space Administrator role (or equivalent privileges) can create data access controls in which criteria are defined as simple values. Each user can only see the records that match any of the single values she is authorized for in the permissions entity.") :arrow_upper_right:\) + - Data Access Control \(see [Create a "Single Values" Data Access Control](https://help.sap.com/viewer/9f36ca35bc6145e4acdef6b4d852d560/DEV_CURRENT/en-US/5246328ec59045cb9c2aa693daee2557.html "Users with the DW Space Administrator role (or equivalent privileges) can create data access controls in which criteria are defined as single values. Each user can only see the records that match any of the single values she is authorized for in the permissions entity.") :arrow_upper_right:\) - Analytic Model \(see [Creating an Analytic Model](Modeling-Data-in-the-Data-Builder/creating-an-analytic-model-e5fbe9e.md)\) - Task Chain \(see [Creating a Task Chain](Acquiring-and-Preparing-Data-in-the-Data-Builder/creating-a-task-chain-d1afbc2.md)\) diff --git a/docs/Acquiring-Preparing-Modeling-Data/sql-functions-reference-6d624a1.md b/docs/Acquiring-Preparing-Modeling-Data/sql-functions-reference-6d624a1.md index f0f2d64..ab9c072 100644 --- a/docs/Acquiring-Preparing-Modeling-Data/sql-functions-reference-6d624a1.md +++ b/docs/Acquiring-Preparing-Modeling-Data/sql-functions-reference-6d624a1.md @@ -267,7 +267,6 @@ SAP Datasphere supports the following window and window aggregation functions, w - `LAG` - `LAST_VALUE` - `LEAD` -- `LINEAR_APPROX` - `MAX` - `MEDIAN` - `MIN` diff --git a/docs/Administering/Creating-Spaces-and-Allocating-Storage/allocate-storage-to-a-space-f414c3d.md b/docs/Administering/Creating-Spaces-and-Allocating-Storage/allocate-storage-to-a-space-f414c3d.md index 1bbcd5b..eb61251 100644 --- a/docs/Administering/Creating-Spaces-and-Allocating-Storage/allocate-storage-to-a-space-f414c3d.md +++ b/docs/Administering/Creating-Spaces-and-Allocating-Storage/allocate-storage-to-a-space-f414c3d.md @@ -2,7 +2,7 @@ # Allocate Storage to a Space -Use the *Storage Assignment* properties to allocate disk and in-memory storage to the space and to choose whether it will have access to the SAP HANA data lake. 
+Use the *Space Storage* properties to allocate disk and memory storage to the space and to choose whether it will have access to the SAP HANA data lake. @@ -10,17 +10,17 @@ Use the *Storage Assignment* properties to allocate disk and in-memory storage t SAP Datasphere supports data tiering using the features of SAP HANA Cloud: -- In-Memory Storage \(hot data\) - Keep your most recent, frequently-accessed, and mission-critical data loaded constantly in memory to maximize real-time processing and analytics speeds. +- Memory Storage \(hot data\) - Keep your most recent, frequently-accessed, and mission-critical data loaded constantly in memory to maximize real-time processing and analytics speeds. When you persist a view, the persisted data is stored in memory \(see [Persist View Data](https://help.sap.com/viewer/24f836070a704022a40c15442163e5cf/DEV_CURRENT/en-US/9bd12cf116ae40e09cdba8b60cf75e11.html "Improve the performance while working with views by persisting the view data, and scheduling regular updates to keep your data up-to-date.") :arrow_upper_right:\). - Disk \(warm data\) - Store master data and less recent transactional data on disk to reduce storage costs. - When you load data to a local table or replicate data to a remote table in SAP Datasphere, the data is stored on disk by default, but you can load it in memory by activating the *In-Memory Storage* switch \(see [Accelerate Table Data Access with In-Memory Storage](https://help.sap.com/viewer/24f836070a704022a40c15442163e5cf/DEV_CURRENT/en-US/407d1dff76a842699ea08c17eb8748dd.html "By default, table data is stored on disk. You can improve performance by enabling in-memory storage.") :arrow_upper_right:\). + When you load data to a local table or replicate data to a remote table in SAP Datasphere, the data is stored on disk by default, but you can load it in memory by activating the *Memory Storage* switch \(see [Accelerate Table Data Access with In-Memory Storage](https://help.sap.com/viewer/24f836070a704022a40c15442163e5cf/DEV_CURRENT/en-US/407d1dff76a842699ea08c17eb8748dd.html "By default, table data is stored on disk. You can improve performance by enabling in-memory storage.") :arrow_upper_right:\). - Data Lake \(cold data\) - Store historical data that is infrequently accessed in the data lake. With its low cost and high scalability, the data lake is also suitable for storing vast quantities of raw structured and unstructured data, including IoT data. For more information, see [Integrating Data to and From SAP HANA Cloud Data Lake](https://help.sap.com/viewer/9f36ca35bc6145e4acdef6b4d852d560/DEV_CURRENT/en-US/e84545bd205b4f9f9c1731144c7d3075.html "Connect your SAP Datasphere space with SAP HANA Cloud, data lake to store and gain access to large amounts of data.") :arrow_upper_right:. -You can allocate specific amounts of in-memory and disk storage to a space or disable the *Enable Space Quota* option, and allow the space to consume all the storage it needs, up to the total amount available in your tenant. +You can allocate specific amounts of memory and disk storage to a space or disable the *Enable Space Quota* option, and allow the space to consume all the storage it needs, up to the total amount available in your tenant. @@ -30,7 +30,7 @@ You can allocate specific amounts of in-memory and disk storage to a space or di 1. In the side navigation area, click ![](../images/Space_Management_a868247.png) \(*Space Management*\), locate your space tile, and click *Edit* to open it. -2. 
Use the *Storage Assignment* properties to allocate disk and in-memory storage to the space and to choose whether it will have access to the SAP HANA data lake. +2. Use the *Space Storage* properties to allocate disk and memory storage to the space and to choose whether it will have access to the SAP HANA data lake. @@ -54,9 +54,9 @@ You can allocate specific amounts of in-memory and disk storage to a space or di
- Disable this option to allow the space to consume any amount of disk and in-memory space up to the total amounts available in your tenant. + Disable this option to allow the space to consume any amount of disk and memory space up to the total amounts available in your tenant. - If this option was disabled and then subsequently re-enabled, the *Disk* and *In-Memory* properties are initialized to the minimum values required by the current contents of the space. + If this option was disabled and then subsequently re-enabled, the *Disk* and *Memory* properties are initialized to the minimum values required by the current contents of the space. Default: Enabled @@ -70,7 +70,7 @@ You can allocate specific amounts of in-memory and disk storage to a space or di - Enter the amount of disk storage allocated to the space in GB. You can use the buttons to change the amount by whole GBs or enter fractional values in increments of 100MB by hand. + Enter the amount of disk storage allocated to the space in GB. You can use the buttons to change the amount by whole GBs or enter fractional values in increments of 100 MB by hand. Default: 2 GB @@ -79,12 +79,12 @@ You can allocate specific amounts of in-memory and disk storage to a space or di
- In-Memory \(GB\) + Memory \(GB\) - Enter the amount of in-memory storage allocated to the space in GB. You can use the buttons to change the amount by whole GBs or enter fractional values in increments of 100MB by hand. + Enter the amount of memory storage allocated to the space in GB. You can use the buttons to change the amount by whole GBs or enter fractional values in increments of 100 MB by hand. Default: 1 GB @@ -107,7 +107,7 @@ You can allocate specific amounts of in-memory and disk storage to a space or di
> ### Note: - > If a space exceeds its allocations of in-memory or disk storage, it will be locked until a user of the space deletes the excess data or an administrator assigns additional storage. See [Lock or Unlock Your Space](https://help.sap.com/viewer/9f36ca35bc6145e4acdef6b4d852d560/DEV_CURRENT/en-US/c05b6a6d06db427dbdd3041d61fd5840.html "If a space exceeds its assigned storage or if the audit logs enabled in the space consume too much disk storage, the space is automatically locked.") :arrow_upper_right:. + > If a space exceeds its allocations of memory or disk storage, it will be locked until a user of the space deletes the excess data or an administrator assigns additional storage. See [Lock or Unlock Your Space](https://help.sap.com/viewer/9f36ca35bc6145e4acdef6b4d852d560/DEV_CURRENT/en-US/c05b6a6d06db427dbdd3041d61fd5840.html "If a space exceeds its assigned storage or if the audit logs enabled in the space consume too much disk storage, the space is automatically locked.") :arrow_upper_right:. 3. Click *Save* to save your changes to the space, or *Deploy* to save and immediately make the changes available to users assigned to the space. diff --git a/docs/Administering/Creating-Spaces-and-Allocating-Storage/create-a-space-bbd41b8.md b/docs/Administering/Creating-Spaces-and-Allocating-Storage/create-a-space-bbd41b8.md index add496b..7c6be7d 100644 --- a/docs/Administering/Creating-Spaces-and-Allocating-Storage/create-a-space-bbd41b8.md +++ b/docs/Administering/Creating-Spaces-and-Allocating-Storage/create-a-space-bbd41b8.md @@ -177,7 +177,7 @@ Create a space, allocate storage, and set the space priority and statement limit -4. \[optional\] Use the *Storage Assignment* properties to allocate disk and in-memory storage to the space and to choose whether it will have access to the SAP HANA data lake. +4. \[optional\] Use the *Space Storage* properties to allocate disk and memory storage to the space and to choose whether it will have access to the SAP HANA data lake. For more information, see [Allocate Storage to a Space](allocate-storage-to-a-space-f414c3d.md). diff --git a/docs/Administering/Creating-Spaces-and-Allocating-Storage/creating-spaces-and-allocating-storage-2ace657.md b/docs/Administering/Creating-Spaces-and-Allocating-Storage/creating-spaces-and-allocating-storage-2ace657.md index 2bdbd29..8e8a5dd 100644 --- a/docs/Administering/Creating-Spaces-and-Allocating-Storage/creating-spaces-and-allocating-storage-2ace657.md +++ b/docs/Administering/Creating-Spaces-and-Allocating-Storage/creating-spaces-and-allocating-storage-2ace657.md @@ -4,7 +4,7 @@ All data acquisition, preparation, and modeling happens inside spaces. A space is a secure area - space data cannot be accessed outside the space unless it is shared to another space or exposed for consumption. -An administrator must create one or more spaces. They allocate disk and in-memory storage to the space, set its priority, and can limit how much memory and how many threads its statements can consume. +An administrator must create one or more spaces. They allocate disk and memory storage to the space, set its priority, and can limit how much memory and how many threads its statements can consume. 
If the administrator assigns one or more space administrators via a scoped role, they can then manage users, create connections to source systems, secure data with data access controls, and manage other aspects of the space \(see [Managing Your Space](https://help.sap.com/viewer/9f36ca35bc6145e4acdef6b4d852d560/DEV_CURRENT/en-US/268ea7e3e8d448deaab420219477064d.html "All data acquisition, preparation, and modeling happens inside spaces. A space is a secure area - space data cannot be accessed outside the space unless it is shared to another space or exposed for consumption.") :arrow_upper_right:\). diff --git a/docs/Administering/Creating-Spaces-and-Allocating-Storage/images/Tenant_Storage_Values_197b139.jpg b/docs/Administering/Creating-Spaces-and-Allocating-Storage/images/Tenant_Storage_Values_197b139.jpg new file mode 100644 index 0000000..16cf81e Binary files /dev/null and b/docs/Administering/Creating-Spaces-and-Allocating-Storage/images/Tenant_Storage_Values_197b139.jpg differ diff --git a/docs/Administering/Creating-Spaces-and-Allocating-Storage/images/Tenant_Storage_Values_197b139.png b/docs/Administering/Creating-Spaces-and-Allocating-Storage/images/Tenant_Storage_Values_197b139.png deleted file mode 100644 index 9627560..0000000 Binary files a/docs/Administering/Creating-Spaces-and-Allocating-Storage/images/Tenant_Storage_Values_197b139.png and /dev/null differ diff --git a/docs/Administering/Creating-Spaces-and-Allocating-Storage/monitor-tenant-and-space-storage-39b08d3.md b/docs/Administering/Creating-Spaces-and-Allocating-Storage/monitor-tenant-and-space-storage-39b08d3.md index 8e2a90f..1d96e6a 100644 --- a/docs/Administering/Creating-Spaces-and-Allocating-Storage/monitor-tenant-and-space-storage-39b08d3.md +++ b/docs/Administering/Creating-Spaces-and-Allocating-Storage/monitor-tenant-and-space-storage-39b08d3.md @@ -7,12 +7,12 @@ You can see the total storage available and the amount assigned to and used by s > ### Note: > You can also see the information below in the *System Monitor*. For more information, see [Monitoring SAP Datasphere](../Monitoring-SAP-Datasphere/monitoring-sap-datasphere-28910cd.md). -![](images/Tenant_Storage_Values_197b139.png) +![](images/Tenant_Storage_Values_197b139.jpg) The following information is available: -- *Used Disk* - Shows the total amount of disk storage used. Hover over this bar to see a breakdown between: - - *Space Data*: All data that is stored in spaces. +- *Disk Used for Storage* - Shows the total amount of disk storage used. Hover over this bar to see a breakdown between: + - *Data in Spaces*: All data that is stored in spaces. - *Audit Log Data*: Data related to audit logs \(see [Audit Logging](https://help.sap.com/viewer/0c3780ad05fd417fa27b98418535debd/cloud/en-US/c78a7c2a3cec4b0897db294d74e00d9b.html "Audit logs are records of read or change actions performed in the database. They allow you to see who did what and when.") :arrow_upper_right:\). @@ -24,7 +24,7 @@ The following information is available: - *Administrative Data*: Data used to administer the tenant and all spaces \(such as space quota, space version\). Includes all information stored in the central schemas \(DWC\_GLOBAL, DWC\_GLOBAL\_LOG, DWC\_TENANT\_OWNER\). -- *Assigned Disk* - Shows the total amount of disk storage assigned to all spaces. -- *Used In-Memory* - Shows the total amount of in-memory storage used in all spaces. -- *Assigned In-Memory* - Shows the total amount of in-memory storage assigned to all spaces. 
+- *Disk Assigned for Storage* - Shows the total amount of disk storage assigned to all spaces. +- *Memory Used for Storage* -Shows the total amount of memory storage used in all spaces. +- *Memory Assigned for Storage* - Shows the total amount of memory storage assigned to all spaces. diff --git a/docs/Administering/Creating-and-Configuring-Your-Tenant/creating-and-configuring-your-sap-datasphere-tenant-2f80b57.md b/docs/Administering/Creating-and-Configuring-Your-Tenant/creating-and-configuring-your-sap-datasphere-tenant-2f80b57.md index 244a40b..13afb0d 100644 --- a/docs/Administering/Creating-and-Configuring-Your-Tenant/creating-and-configuring-your-sap-datasphere-tenant-2f80b57.md +++ b/docs/Administering/Creating-and-Configuring-Your-Tenant/creating-and-configuring-your-sap-datasphere-tenant-2f80b57.md @@ -733,6 +733,48 @@ Supported +Europe \(Switzerland\) + + + + +5600 GB + + + + +13824 GB + + + + +Supported + + + + +90 TB + + + + +7200 h/month + + + + +20.5 GB/h + + + + +2100 h/month + + + + + + US West diff --git a/docs/Administering/Managing-Users-and-Roles/create-a-scoped-role-to-assign-privileges-to-users-in-spaces-b5c4e0b.md b/docs/Administering/Managing-Users-and-Roles/create-a-scoped-role-to-assign-privileges-to-users-in-spaces-b5c4e0b.md index e7b197e..9c01ced 100644 --- a/docs/Administering/Managing-Users-and-Roles/create-a-scoped-role-to-assign-privileges-to-users-in-spaces-b5c4e0b.md +++ b/docs/Administering/Managing-Users-and-Roles/create-a-scoped-role-to-assign-privileges-to-users-in-spaces-b5c4e0b.md @@ -244,7 +244,7 @@ To assign users to a scoped role, the users must be created beforehand. - To individually select users and assign them to spaces, click \(Add User Assignment\), then *Select Users*. Select one or more users in the wizard *Assign Users* and click *Next Step*. By default, the added users are automatically assigned to all the spaces included in the scoped role. If you want to modify this, select the one or more spaces to which you want to assign the users. Click *Next Step* and *Save*. > ### Note: - > You can also add a user to a scoped role from the In the side navigation area, click \(*Security*\) ** \> ** \(*Users*\). The user is automatically assigned to all the spaces included in the scoped role. + > You can also add a user to a scoped role from the \(*Users*\) area. In such a case, the user is automatically assigned to all the spaces included in the scoped role. See [Assign Users to a Role](assign-users-to-a-role-57a7880.md). - To assign all users included in the scoped role to one or more spaces. To do so, click \(Add User Assignment\), then *All Users of Current Role*. Select one or more spaces in the wizard *Assign Users* and click *Next Step* and *Save*. diff --git a/docs/Administering/Managing-Users-and-Roles/enabling-a-custom-saml-identity-provider-9b26536.md b/docs/Administering/Managing-Users-and-Roles/enabling-a-custom-saml-identity-provider-9b26536.md index fc6ed2d..df95db2 100644 --- a/docs/Administering/Managing-Users-and-Roles/enabling-a-custom-saml-identity-provider-9b26536.md +++ b/docs/Administering/Managing-Users-and-Roles/enabling-a-custom-saml-identity-provider-9b26536.md @@ -12,13 +12,11 @@ By default, SAP Cloud Identity Authentication is used by SAP Datasphere. SAP Dat ## Prerequisites +SAP Datasphere can be hosted on non-SAP data centers. + - You must have an IdP that supports SAML 2.0 protocol. - You must be able to configure your IdP. - You must be assigned to the *System Owner* role. 
For more information, see [Transfer the System Owner Role](transfer-the-system-owner-role-b3d19a1.md).
-- SAP Datasphere can be hosted either on SAP data centers or on non-SAP data centers. Determine which environment SAP Datasphere is hosted in by inspecting your SAP Datasphere URL:
- - A single-digit number, for example us1 or jp1, indicates an SAP data center.
- - A two-digit number, for example eu10 or us30, indicates a non-SAP data center.
-
- If your users are connecting from Apple devices using the mobile app, the certificate used by your IdP must be compatible with Apple's App Transport Security \(ATS\) feature.

> ### Note:
@@ -33,7 +31,7 @@ By default, SAP Cloud Identity Authentication is used by SAP Datasphere. SAP Dat


## Procedure

-1. From the side navigation, go to \(*System*\) → \(*Administration*\) →*Security* .
+1. From the side navigation, go to \(*System*\) → \(*Administration*\) → *Security*.

    If you've provisioned SAP Datasphere prior to version 2021.03, you'll see a different UI and need to go to \(*Product Switch*\) → \(*Analytics*\) → \(*System*\) → \(*Administration*\) → *Security*.

diff --git a/docs/Administering/Managing-Users-and-Roles/roles-and-privileges-by-app-and-feature-2d8b7d0.md b/docs/Administering/Managing-Users-and-Roles/roles-and-privileges-by-app-and-feature-2d8b7d0.md
index 440d59c..536ea9a 100644
--- a/docs/Administering/Managing-Users-and-Roles/roles-and-privileges-by-app-and-feature-2d8b7d0.md
+++ b/docs/Administering/Managing-Users-and-Roles/roles-and-privileges-by-app-and-feature-2d8b7d0.md
@@ -978,7 +978,7 @@ Assets

Search for an asset and view the detailed information for it.

-See [Finding and Accessing Data in the Catalog](https://help.sap.com/viewer/24f836070a704022a40c15442163e5cf/DEV_CURRENT/en-US/10478251045b43e782fa15e0f3e113b0.html "Discover data by searching and filtering results. Mark your favorite assets, terms, and key performance indicators (KPIs).") :arrow_upper_right:
+See [Finding and Accessing Data in the Catalog](https://help.sap.com/viewer/24f836070a704022a40c15442163e5cf/DEV_CURRENT/en-US/10478251045b43e782fa15e0f3e113b0.html "Discover data by searching and filtering results. Mark your favorite assets, listed data products, terms, and key performance indicators (KPIs).") :arrow_upper_right:

diff --git a/docs/Administering/Managing-Users-and-Roles/standard-roles-delivered-with-sap-datasphere-a50a51d.md b/docs/Administering/Managing-Users-and-Roles/standard-roles-delivered-with-sap-datasphere-a50a51d.md
index 31f3cf7..ac01342 100644
--- a/docs/Administering/Managing-Users-and-Roles/standard-roles-delivered-with-sap-datasphere-a50a51d.md
+++ b/docs/Administering/Managing-Users-and-Roles/standard-roles-delivered-with-sap-datasphere-a50a51d.md
@@ -23,7 +23,7 @@ In the side navigation area, click \(*Securit

- **DW Administrator** - Can create users, roles and spaces and has other administration privileges across the SAP Datasphere tenant. Cannot access any of the apps \(such as the *Data Builder*\).
- Roles providing privileges to work in SAP Datasphere spaces:
- - **DW Space Administrator** \(template\) - Can manage all aspects of the spaces users are assigned to \(except the *Storage Assignment* and *Workload Management* properties\) and can create data access controls.
+ - **DW Space Administrator** \(template\) - Can manage all aspects of the spaces users are assigned to \(except the *Space Storage* and *Workload Management* properties\) and can create data access controls.
- *DW Scoped Space Administrator* - This predefined scoped role is based on the DW Space Administrator role and inherits its privileges and permissions. > ### Note: diff --git a/docs/Administering/Monitoring-SAP-Datasphere/monitoring-sap-datasphere-28910cd.md b/docs/Administering/Monitoring-SAP-Datasphere/monitoring-sap-datasphere-28910cd.md index 4f11b65..737d60d 100644 --- a/docs/Administering/Monitoring-SAP-Datasphere/monitoring-sap-datasphere-28910cd.md +++ b/docs/Administering/Monitoring-SAP-Datasphere/monitoring-sap-datasphere-28910cd.md @@ -21,7 +21,7 @@ For example, you can see all the errors \(such as failed tasks and out-of-memory You can monitor out-of-memory errors and other information that are related to SAP HANA database SQL statements, depending on what you've specified in \(Configuration\) → *Monitoring*: -- If *Enable Expensive Statement Tracing* is not selected, then in *System Monitor* \> *Dashboard*, you cannot see the widgets about out-of-memory errors and about other information related to statements. For example, you cannot see the widgets: *Out-of-Memory Errors*, *Top 5 Statements by Memory Consumption*. +- If *Enable Expensive Statement Tracing* is not selected, then in *System Monitor* \> *Dashboard*, you cannot see the widgets about out-of-memory errors and about other information related to statements. For example, you cannot see the widgets: *Out-of-Memory Errors*, *Top 5 Statements by Processing Memory Consumption*. - If *Enable Expensive Statement Tracing* is not selected and none of the threshold options is selected, then in the tables of *System Monitor* \> *Logs*, you cannot see any information about out-of-memory errors and about other information related to statements. For example, no information is displayed in the columns *Peak Memory \(MiB\)* and *Peak CPU \(ms\)*. @@ -51,11 +51,11 @@ The out-of-memory widgets and top-memory consumption widgets help you to identif The following information is available in the *Dashboard* tab: -- *Disk Used by Spaces* - Shows the total amount of disk storage used in all spaces, out of the total amount of disk storage assigned to all spaces. +- *Disk Used by Spaces for Storage* - Shows the total amount of disk storage used in all spaces, out of the total amount of disk storage assigned to all spaces. - In *Storage Distribution*, you can see a breakdown between: + In *Disk Storage Used*, you can see a breakdown between: - - *Space Data*: All data that is stored in spaces. + - *Data in Spaces*: All data that is stored in spaces. - *Audit Log Data*: Data related to audit logs \(see [Audit Logging](https://help.sap.com/viewer/0c3780ad05fd417fa27b98418535debd/cloud/en-US/c78a7c2a3cec4b0897db294d74e00d9b.html "Audit logs are records of read or change actions performed in the database. They allow you to see who did what and when.") :arrow_upper_right:\). @@ -67,9 +67,9 @@ The following information is available in the *Dashboard* tab: - *Administrative Data*: Data used to administer the tenant and all spaces \(such as space quota, space version\). Includes all information stored in the central schemas \(DWC\_GLOBAL, DWC\_GLOBAL\_LOG, DWC\_TENANT\_OWNER\). -- *Memory Used by Spaces* - Shows the total amount of memory storage used in all spaces, out of the total amount of memory storage assigned to all spaces. -- *Disk Assigned to Spaces* - Shows the total amount of disk storage assigned to all spaces. -- *Memory Assigned to Spaces* - Shows the total amount of memory storage assigned to all spaces. 
+- *Memory Used by Spaces for Storage* - Shows the total amount of memory storage used in all spaces, out of the total amount of memory storage assigned to all spaces. +- *Disk Assigned to Spaces for Storage* - Shows the total amount of disk storage assigned to all spaces. +- *Memory Assigned to Spaces for Storage* - Shows the total amount of memory storage assigned to all spaces. For each of the key indicator widgets listed below, you can see detailed information by clicking the link *View Logs*, which takes you to the *Logs* tab. @@ -204,70 +204,70 @@ Shows the 5 tasks whose run duration time was the longest in the last 48 hours. -*Top 5 Tasks by Memory Consumption* +*Top 5 Tasks by Processing Memory Consumption* *Last 24 Hours* -Shows the 5 tasks whose memory consumption was the highest in the last 24 hours. +Shows the 5 tasks whose processing memory consumption was the highest in the last 24 hours. -*Top 5 Tasks by Memory Consumption* +*Top 5 Tasks by Processing Memory Consumption* *Last 48 Hours* -Shows the 5 tasks whose memory consumption was the highest in the last 48 hours. +Shows the 5 tasks whose processing memory consumption was the highest in the last 48 hours. -*Top 5 Statements by Memory Consumption* +*Top 5 Statements by Processing Memory Consumption* *Last 24 Hours* -Shows the 5 statements whose memory consumption was the highest in the last 24 hours. +Shows the 5 statements whose processing memory consumption was the highest in the last 24 hours. -*Top 5 Statements by Memory Consumption* +*Top 5 Statements by Processing Memory Consumption* *Last 48 Hours* -Shows the 5 statements whose memory consumption was the highest in the last 48 hours. +Shows the 5 statements whose processing memory consumption was the highest in the last 48 hours. -*Top 5 MDS Requests by Memory Consumption* +*Top 5 MDS Requests by Processing Memory Consumption* *Last 24 Hours* -Shows the 5 SAP HANA multi-dimensional services \(MDS\) requests \(used for example in SAP Analytics Cloud consumption\), whose memory consumption is the highest. +Shows the 5 SAP HANA multi-dimensional services \(MDS\) requests \(used for example in SAP Analytics Cloud consumption\), whose processing memory consumption is the highest. @@ -493,7 +493,7 @@ Shows the maximum amount of memory \(in MiB\) the task has used during the runti -*SAP HANA Peak CPU* +*SAP HANA CPU Time* @@ -801,7 +801,7 @@ Shows the maximum amount of memory \(in MiB\) the statement has used during the -*SAP HANA Peak CPU* +*SAP HANA CPU Time* @@ -937,7 +937,7 @@ You can control the tables in *Tasks* and *Statements* in the following ways: > - You can enter filter or sort values in multiple columns. > ### Note: - > If you filter on one of the following columns and you enter a number, use the “.” \(period\) character as the decimal separator, regardless of the decimal separator used in the number formatting that you’ve chosen in the general user settings \(*Settings* \> *Language & Region*\): *SAP HANA Peak Memory*, *SAP HANA Peak CPU*, *SAP HANA Used Memory* and *SAP HANA Used Disk*. + > If you filter on one of the following columns and you enter a number, use the “.” \(period\) character as the decimal separator, regardless of the decimal separator used in the number formatting that you’ve chosen in the general user settings \(*Settings* \> *Language & Region*\): *SAP HANA Peak Memory*, *SAP HANA CPU Time*, *SAP HANA Used Memory* and *SAP HANA Used Disk*. 3. 
Click *Clear Filter* in the filter strip or \(Remove Filter\)in the *Define Filter* dialog to remove the filter. diff --git a/docs/Administering/Preparing-Connectivity/prepare-connectivity-to-apache-kafka-1483ceb.md b/docs/Administering/Preparing-Connectivity/prepare-connectivity-to-apache-kafka-1483ceb.md new file mode 100644 index 0000000..6dcbf1a --- /dev/null +++ b/docs/Administering/Preparing-Connectivity/prepare-connectivity-to-apache-kafka-1483ceb.md @@ -0,0 +1,4 @@ + + +# Prepare Connectivity to Apache Kafka + diff --git a/docs/Administering/Preparing-Connectivity/preparing-connectivity-for-connections-bffbd58.md b/docs/Administering/Preparing-Connectivity/preparing-connectivity-for-connections-bffbd58.md index 203ae0c..29dcef2 100644 --- a/docs/Administering/Preparing-Connectivity/preparing-connectivity-for-connections-bffbd58.md +++ b/docs/Administering/Preparing-Connectivity/preparing-connectivity-for-connections-bffbd58.md @@ -186,7 +186,7 @@ yes \(Outbound IP Address\) -no +yes @@ -250,6 +250,53 @@ n/a +[Apache Kafka Connections](https://help.sap.com/viewer/9f36ca35bc6145e4acdef6b4d852d560/DEV_CURRENT/en-US/1992c6b7154c4bc080d83c8977382ff4.html "Use the connection to connect to and access data from an Apache Kafka cluster.") :arrow_upper_right: + + + + +no + + + + +no + + + + +yes + + + + +no + + + + +no + + + + +yes + + + + +no + + + + +[Prepare Connectivity to Apache Kafka](prepare-connectivity-to-apache-kafka-1483ceb.md) + + + + + + [Cloud Data Integration Connections](https://help.sap.com/viewer/9f36ca35bc6145e4acdef6b4d852d560/DEV_CURRENT/en-US/cd33107246f446628f9baff56faf5a1b.html "Use a Cloud Data Integration connection to access data from SAP cloud applications which provide OData-based APIs for data integration and have a Cloud Data Integration (CDI) provider service implemented.") :arrow_upper_right: @@ -797,7 +844,7 @@ no -no +yes @@ -891,7 +938,7 @@ no -no +yes diff --git a/docs/Administering/administering-sap-datasphere-70ee87c.md b/docs/Administering/administering-sap-datasphere-70ee87c.md index 2a5ef65..e977895 100644 --- a/docs/Administering/administering-sap-datasphere-70ee87c.md +++ b/docs/Administering/administering-sap-datasphere-70ee87c.md @@ -59,7 +59,7 @@ You must assign one or more roles to each of your users via scoped roles and glo - **DW Administrator** - Can create users, roles and spaces and has other administration privileges across the SAP Datasphere tenant. Cannot access any of the apps \(such as the *Data Builder*\). - Roles providing privileges to work in SAP Datasphere spaces: - - **DW Space Administrator** \(template\) - Can manage all aspects of the spaces users are assigned to \(except the *Storage Assignment* and *Workload Management* properties\) and can create data access controls. + - **DW Space Administrator** \(template\) - Can manage all aspects of the spaces users are assigned to \(except the *Space Storage* and *Workload Management* properties\) and can create data access controls. - *DW Scoped Space Administrator* - This predefined scoped role is based on the DW Space Administrator role and inherits its privileges and permissions. > ### Note: @@ -98,7 +98,7 @@ You must assign one or more roles to each of your users via scoped roles and glo All data acquisition, preparation, and modeling happens inside spaces. A space is a secure area - space data cannot be accessed outside the space unless it is shared to another space or exposed for consumption. -An administrator must create one or more spaces. 
They allocate disk and in-memory storage to the space, set its priority, and can limit how much memory and how many threads its statements can consume. See [Creating Spaces and Allocating Storage](Creating-Spaces-and-Allocating-Storage/creating-spaces-and-allocating-storage-2ace657.md). +An administrator must create one or more spaces. They allocate disk and memory storage to the space, set its priority, and can limit how much memory and how many threads its statements can consume. See [Creating Spaces and Allocating Storage](Creating-Spaces-and-Allocating-Storage/creating-spaces-and-allocating-storage-2ace657.md). diff --git a/docs/Administering/index.md b/docs/Administering/index.md index 3f4318d..7a961ca 100644 --- a/docs/Administering/index.md +++ b/docs/Administering/index.md @@ -61,6 +61,7 @@ - [Upload Third-Party ODBC Drivers \(Required for Data Flows\)](Preparing-Connectivity/upload-third-party-odbc-drivers-required-for-data-flows-b9b5579.md) - [Prepare Connectivity to Adverity](Preparing-Connectivity/prepare-connectivity-to-adverity-a37a758.md) - [Prepare Connectivity to Amazon Athena](Preparing-Connectivity/prepare-connectivity-to-amazon-athena-8d80f60.md) + - [Prepare Connectivity to Apache Kafka](Preparing-Connectivity/prepare-connectivity-to-apache-kafka-1483ceb.md) - [Prepare Connectivity to Amazon Redshift](Preparing-Connectivity/prepare-connectivity-to-amazon-redshift-519b2db.md) - [Prepare Connectivity for Cloud Data Integration](Preparing-Connectivity/prepare-connectivity-for-cloud-data-integration-b6fd8de.md) - [Prepare Connectivity for Generic JDBC](Preparing-Connectivity/prepare-connectivity-for-generic-jdbc-648fabf.md) diff --git a/docs/Integrating-data-and-managing-spaces/Data-Integration-Monitor/cancel-a-transformation-flow-run-ab885f0.md b/docs/Integrating-data-and-managing-spaces/Data-Integration-Monitor/cancel-a-transformation-flow-run-ab885f0.md index b2f45e5..0fda30a 100644 --- a/docs/Integrating-data-and-managing-spaces/Data-Integration-Monitor/cancel-a-transformation-flow-run-ab885f0.md +++ b/docs/Integrating-data-and-managing-spaces/Data-Integration-Monitor/cancel-a-transformation-flow-run-ab885f0.md @@ -26,4 +26,6 @@ You can use the *Cancel Run* button to stop a transformation flow run that is cu If the run has been cancelled successfully, the status of the transformation flow run will have the value *Failed \(Canceled\)*. + Cancelling a transformation flow involves rolling back database transactions. Therefore, it might take some time until the transformation flow is cancelled and the status changes to *Failed \(Canceled\)*. 
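The added note above explains that cancelling a transformation flow has to wait for open database transactions to be rolled back before the status can change. Purely as an illustration of that sequencing (a hypothetical sketch using Python's built-in sqlite3 module, not SAP Datasphere code):

```python
import sqlite3

def cancel_run(connection: sqlite3.Connection) -> str:
    """Illustrative only: the status flips to 'Failed (Canceled)' after the rollback."""
    # Undo everything the run has written so far. For a long-running flow this
    # rollback can itself take a while, which is why the status change is not immediate.
    connection.rollback()
    return "Failed (Canceled)"

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE target (id INTEGER)")
conn.execute("INSERT INTO target VALUES (1)")  # work done by the run so far
print(cancel_run(conn))                        # Failed (Canceled)
print(conn.execute("SELECT COUNT(*) FROM target").fetchone())  # (0,) - the insert was rolled back
```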
+

diff --git a/docs/Integrating-data-and-managing-spaces/Data-Integration-Monitor/data-persistence-and-job-execution-settings-d04f5dd.md b/docs/Integrating-data-and-managing-spaces/Data-Integration-Monitor/data-persistence-and-processing-mode-d04f5dd.md
similarity index 87%
rename from docs/Integrating-data-and-managing-spaces/Data-Integration-Monitor/data-persistence-and-job-execution-settings-d04f5dd.md
rename to docs/Integrating-data-and-managing-spaces/Data-Integration-Monitor/data-persistence-and-processing-mode-d04f5dd.md
index fe8b1c7..34b925b 100644
--- a/docs/Integrating-data-and-managing-spaces/Data-Integration-Monitor/data-persistence-and-job-execution-settings-d04f5dd.md
+++ b/docs/Integrating-data-and-managing-spaces/Data-Integration-Monitor/data-persistence-and-processing-mode-d04f5dd.md
@@ -1,10 +1,10 @@


-# Data Persistence and Job Execution Settings
+# Data Persistence and Processing Mode

-From the *Views* Details screen, you can change the settings of the job execution.
+From the *Views* Details screen, you can change the settings of the processing mode.

-While persisting a view, an SQL procedure is called in SAP HANA. The SQL procedure can be executed synchronously or asynchronously. In the settings tab of the *Views* - Details screen, you can change the settings to optimize the job execution to persist a view following your needs:
+While persisting a view, an SQL procedure is called in SAP HANA. The SQL procedure can be executed synchronously or asynchronously. In the settings tab of the *Views* - Details screen, you can change the settings to optimize the processing mode used when persisting a view, according to your needs:

- Default \(asynchronous, may change in future\)

diff --git a/docs/Integrating-data-and-managing-spaces/Data-Integration-Monitor/persisting-and-monitoring-views-9af04c9.md b/docs/Integrating-data-and-managing-spaces/Data-Integration-Monitor/persisting-and-monitoring-views-9af04c9.md
index 694d918..384a2b0 100644
--- a/docs/Integrating-data-and-managing-spaces/Data-Integration-Monitor/persisting-and-monitoring-views-9af04c9.md
+++ b/docs/Integrating-data-and-managing-spaces/Data-Integration-Monitor/persisting-and-monitoring-views-9af04c9.md
@@ -316,7 +316,7 @@ Shows the records of the persisted views.



-*Used In-Memory \(MiB\)*
+*Memory Used for Storage \(MiB\)*



@@ -328,7 +328,7 @@ Tracks how much size the view is using in your memory.



-*Used Disk \(MiB\)*
+*Disk Used for Storage \(MiB\)*



diff --git a/docs/Integrating-data-and-managing-spaces/Data-Integration-Monitor/replicating-data-and-monitoring-remote-tables-4dd95d7.md b/docs/Integrating-data-and-managing-spaces/Data-Integration-Monitor/replicating-data-and-monitoring-remote-tables-4dd95d7.md
index f2e8374..4bcfff2 100644
--- a/docs/Integrating-data-and-managing-spaces/Data-Integration-Monitor/replicating-data-and-monitoring-remote-tables-4dd95d7.md
+++ b/docs/Integrating-data-and-managing-spaces/Data-Integration-Monitor/replicating-data-and-monitoring-remote-tables-4dd95d7.md
@@ -310,7 +310,7 @@ Displays the next scheduled run \(if a schedule is set for the remote table\).



-*Used In-Memory \(MiB\)*
+*Memory Used for Storage \(MiB\)*



@@ -322,7 +322,7 @@ Displays the size occupied by the remote table data in memory.
-*Used Disk \(MiB\)* +*Disk Used for Storage \(MiB\)* diff --git a/docs/Integrating-data-and-managing-spaces/Data-Integration-Monitor/watermarks-890897f.md b/docs/Integrating-data-and-managing-spaces/Data-Integration-Monitor/watermarks-890897f.md new file mode 100644 index 0000000..d96745e --- /dev/null +++ b/docs/Integrating-data-and-managing-spaces/Data-Integration-Monitor/watermarks-890897f.md @@ -0,0 +1,33 @@ + + +# Watermarks + +When you run a transformation flow that loads delta changes to a target table, the system uses a watermark \(a timestamp\) to track the data that has been transferred. + +On the *Delta Capture Settings* tab of the *Data Integration Monitor*, you can view the watermark for a source table. Note that if a transformation flow run does not load delta data to a target table, no source table information is displayed. + + + +
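The two sections that follow spell out the exact rules for how the watermark is calculated and when it can be reset. As an aid to reading them, here is a minimal, hypothetical Python sketch that mirrors those rules; the names (`DeltaLoadSketch`, `run`, `reset_watermark`, `change_timestamp`) are illustrative only and are not SAP Datasphere APIs.

```python
from datetime import datetime, timezone

class DeltaLoadSketch:
    """Illustrative only: mirrors the watermark rules described in this topic."""

    def __init__(self):
        self.watermark = None  # None = no watermark yet, or the watermark was reset

    def run(self, source_rows, target_rows, open_transaction_starts=()):
        """Transfer delta records to the target and advance the watermark."""
        run_start = datetime.now(timezone.utc)

        # Only rows changed at or after the watermark are transferred; with no
        # watermark (initial load or after a reset), every row is transferred.
        delta = [row for row in source_rows
                 if self.watermark is None
                 or row["change_timestamp"] >= self.watermark]
        target_rows.extend(delta)

        # After a successful run the watermark becomes the run start time,
        # unless transactions were still open when the run started, in which
        # case the earliest of their start timestamps is used instead.
        self.watermark = (min(open_transaction_starts)
                          if open_transaction_starts else run_start)
        return len(delta)

    def reset_watermark(self):
        """Force the next run to transfer all data again (load type Initial and Delta)."""
        self.watermark = None
```

The point the sketch tries to make visible is that the watermark only advances after a successful run, and it advances to the run's start time (or to the earliest open-transaction start), so records that changed while the run was in progress should be picked up by the next run rather than skipped.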
+
+## How the System Calculates the Watermark
+
+When you run a transformation flow that loads delta changes to a target table, the system only transfers data that has a change timestamp greater than or equal to the watermark. Once the transformation flow run has completed successfully, the system sets the watermark to the start time of the transformation flow run. Note that if open transactions exist in the source table at the time the transformation flow run started, then the earliest start timestamp of these transactions is used as the watermark.
+
+
+
+
+## Resetting the Watermark
+
+If you reset a watermark, the system will transfer all data to the target table the next time the transformation flow runs \(using the load type *Initial and Delta*\). This means that you do not need to redeploy the transformation flow and use the load type *Initial Only*.
+
+Resetting the watermark can make sense in the following situations:
+
+- If a table that can capture delta changes is joined with a second table, and columns in the second table have been updated, you can reset the watermark to ensure that these changes are reflected in records that have already been transferred.
+
+- If corrupt data is present in the target table, you can reset the watermark for the transformation flow to ensure that the latest data from the source table is loaded to the target table.
+
+
+On the *Delta Capture Settings* tab of the *Data Integration Monitor*, you can reset the watermark for a transformation flow. Choose the *Reset Watermark* button.
+
diff --git a/docs/Integrating-data-and-managing-spaces/Integrating-Data-Via-Connections/google-bigquery-connections-30ed77d.md b/docs/Integrating-data-and-managing-spaces/Integrating-Data-Via-Connections/google-bigquery-connections-30ed77d.md
index da9e0e3..d61dd5b 100644
--- a/docs/Integrating-data-and-managing-spaces/Integrating-Data-Via-Connections/google-bigquery-connections-30ed77d.md
+++ b/docs/Integrating-data-and-managing-spaces/Integrating-Data-Via-Connections/google-bigquery-connections-30ed77d.md
@@ -126,7 +126,7 @@ Enter the ID of the Google Cloud project to which you want to connect. You can f


-\[optional\] Enter an additional location for browsing datasets. It can be a region or multi-region, for example `us-west1`, `asia-east1`, or `EU`.
+\[optional\] Enter an additional location. By default, SAP Datasphere connects to BigQuery datasets only from Google's default location - `US`. Datasets from any other location will only be available in the *Data Builder* if you enter the additional location here.
+
+A location can be a region, for example `us-west1` or `asia-east1`, or a multi-region, for example `EU`.
+
+> ### Note:
+> If you provide an invalid location, the connection validation still passes, but failures will happen when you're using the connection in the *Data Builder*.
+
+

diff --git a/docs/Integrating-data-and-managing-spaces/Integrating-Data-Via-Connections/microsoft-sql-server-connections-a13c8ab.md b/docs/Integrating-data-and-managing-spaces/Integrating-Data-Via-Connections/microsoft-sql-server-connections-a13c8ab.md
index ab68676..2dcca1d 100644
--- a/docs/Integrating-data-and-managing-spaces/Integrating-Data-Via-Connections/microsoft-sql-server-connections-a13c8ab.md
+++ b/docs/Integrating-data-and-managing-spaces/Integrating-Data-Via-Connections/microsoft-sql-server-connections-a13c8ab.md
@@ -2,7 +2,7 @@

# Microsoft SQL Server Connections

-Use a *Microsoft SQL Server* connection to access data from a Microsoft SQL Server \(on-premise\).
+Use a *Microsoft SQL Server* connection to access data from a Microsoft SQL Server database \(on-premise\). @@ -110,7 +110,7 @@ Enter the Microsoft SQL Server database name. -Select the Microsoft SQL Server version. +Select the Microsoft SQL Server version. Supported versions are Microsoft SQL Server 2012 to 2022 \(default is 2022\). diff --git a/docs/Integrating-data-and-managing-spaces/Integrating-Data-Via-Connections/sap-hana-connections-e6b63f1.md b/docs/Integrating-data-and-managing-spaces/Integrating-Data-Via-Connections/sap-hana-connections-e6b63f1.md index 9612051..dc6c7b0 100644 --- a/docs/Integrating-data-and-managing-spaces/Integrating-Data-Via-Connections/sap-hana-connections-e6b63f1.md +++ b/docs/Integrating-data-and-managing-spaces/Integrating-Data-Via-Connections/sap-hana-connections-e6b63f1.md @@ -511,7 +511,7 @@ If set to *false*, the host name used for the connection is used for verificatio > > - When using SAP HANA smart data access via Cloud Connector for remote tables: To validate the server certificate, the certificate must have been uploaded to SAP Datasphere. > -> For more information, see [Upload Certificates (Required for Remote Tables)](https://help.sap.com/viewer/935116dd7c324355803d4b85809cec97/DEV_CURRENT/en-US/46f5467adc5242deb1f6b68083e72994.html "To enable a secure SSL/TLS-based connection for a connection type that supports remote tables but doesn't use a Data Provisioning Agent, you need to upload a server certificate to SAP Datasphere.") :arrow_upper_right:. +> For more information, see [Upload Certificates](https://help.sap.com/viewer/935116dd7c324355803d4b85809cec97/DEV_CURRENT/en-US/46f5467adc5242deb1f6b68083e72994.html "Secure SSL/TLS-based connections require the upload server certificates to SAP Datasphere.") :arrow_upper_right:. diff --git a/docs/Integrating-data-and-managing-spaces/Integrating-Data-to-and-From-HANA-Cloud/working-with-data-lake-93d0b5d.md b/docs/Integrating-data-and-managing-spaces/Integrating-Data-to-and-From-HANA-Cloud/working-with-data-lake-93d0b5d.md index b5623f4..019b473 100644 --- a/docs/Integrating-data-and-managing-spaces/Integrating-Data-to-and-From-HANA-Cloud/working-with-data-lake-93d0b5d.md +++ b/docs/Integrating-data-and-managing-spaces/Integrating-Data-to-and-From-HANA-Cloud/working-with-data-lake-93d0b5d.md @@ -21,7 +21,7 @@ You can assign a space that connects to data lake. Tables in the data lake can t > ### Note: > - Currently, DDL statements in your data lake are not audited. > -> - For data flows, UPSERT / APPEND operations via virtual tables are not supported. +> - For data flows, only APPEND \(UPSERT option not selected\) and TRUNCATE modes are supported. APPEND \(with UPSERT option selected\) and DELETE modes are not supported. 
diff --git a/docs/Integrating-data-and-managing-spaces/images/DWC_Monitoring_fb786ae.jpg b/docs/Integrating-data-and-managing-spaces/images/DWC_Monitoring_fb786ae.jpg index 5f5427b..f50f47c 100644 Binary files a/docs/Integrating-data-and-managing-spaces/images/DWC_Monitoring_fb786ae.jpg and b/docs/Integrating-data-and-managing-spaces/images/DWC_Monitoring_fb786ae.jpg differ diff --git a/docs/Integrating-data-and-managing-spaces/images/Monitor_Drop-Down_1669061.jpg b/docs/Integrating-data-and-managing-spaces/images/Monitor_Drop-Down_1669061.jpg index a5421d7..8b29d6d 100644 Binary files a/docs/Integrating-data-and-managing-spaces/images/Monitor_Drop-Down_1669061.jpg and b/docs/Integrating-data-and-managing-spaces/images/Monitor_Drop-Down_1669061.jpg differ diff --git a/docs/Integrating-data-and-managing-spaces/index.md b/docs/Integrating-data-and-managing-spaces/index.md index 44ef63e..5e8667c 100644 --- a/docs/Integrating-data-and-managing-spaces/index.md +++ b/docs/Integrating-data-and-managing-spaces/index.md @@ -69,7 +69,7 @@ - [Resuming Real-Time Replication After a Fail](Data-Integration-Monitor/resuming-real-time-replication-after-a-fail-fc0bfbe.md) - [Partitioning Remote Table Data Loads](Data-Integration-Monitor/partitioning-remote-table-data-loads-a218d27.md) - [Persisting and Monitoring Views](Data-Integration-Monitor/persisting-and-monitoring-views-9af04c9.md) - - [Data Persistence and Job Execution Settings](Data-Integration-Monitor/data-persistence-and-job-execution-settings-d04f5dd.md) + - [Data Persistence and Processing Mode](Data-Integration-Monitor/data-persistence-and-processing-mode-d04f5dd.md) - [Persisted Views and Data Access Control](Data-Integration-Monitor/persisted-views-and-data-access-control-7a4a983.md) - [Creating Partitions for Your Persisted Views](Data-Integration-Monitor/creating-partitions-for-your-persisted-views-9b1b595.md) - [Exploring Views with View Analyzer](Data-Integration-Monitor/exploring-views-with-view-analyzer-8921e5a.md) @@ -77,6 +77,7 @@ - [Monitoring Flows](Data-Integration-Monitor/monitoring-flows-b661ea0.md) - [Working With Existing Replication Flow Runs](Data-Integration-Monitor/working-with-existing-replication-flow-runs-da62e1e.md) - [Cancel a Transformation Flow Run](Data-Integration-Monitor/cancel-a-transformation-flow-run-ab885f0.md) + - [Watermarks](Data-Integration-Monitor/watermarks-890897f.md) - [Monitoring Remote Queries](Data-Integration-Monitor/monitoring-remote-queries-806d7f0.md) - [Creating Statistics for Your Remote Tables](Data-Integration-Monitor/creating-statistics-for-your-remote-tables-e4120bb.md) - [Monitoring Task Chains](Data-Integration-Monitor/monitoring-task-chains-4142201.md) diff --git a/docs/Integrating-data-and-managing-spaces/lock-or-unlock-your-space-c05b6a6.md b/docs/Integrating-data-and-managing-spaces/lock-or-unlock-your-space-c05b6a6.md index ff380b1..e4546c1 100644 --- a/docs/Integrating-data-and-managing-spaces/lock-or-unlock-your-space-c05b6a6.md +++ b/docs/Integrating-data-and-managing-spaces/lock-or-unlock-your-space-c05b6a6.md @@ -14,13 +14,13 @@ When a space is locked, users assigned to the space can continue to create and m ## Space Locked as it Has Exceeded its Assigned Storage -If a space exceeds its allocations of in-memory or disk storage, it will be locked until a space user deletes the excess data or an administrator assigns additional storage. 
+If a space exceeds its allocations of memory or disk storage, it will be locked until a space user deletes the excess data or an administrator assigns additional storage. In this situation, these actions are possible: - Users assigned to a space can delete data to bring the space back under the limit of its assigned storage. - A space administrator can use the *Unlock* button on the *Space Management* page or on the space page to unlock the space for a 24-hour grace period, in case urgent changes must be deployed. -- An administrator can assign more disk and/or in-memory storage to the space \(see [Allocate Storage to a Space](https://help.sap.com/viewer/935116dd7c324355803d4b85809cec97/DEV_CURRENT/en-US/f414c3d62bfe49b38e2cfdd7b4e7d786.html "Use the Storage Assignment properties to allocate disk and in-memory storage to the space and to choose whether it will have access to the SAP HANA data lake.") :arrow_upper_right:\). +- An administrator can assign more disk and/or memory storage to the space \(see [Allocate Storage to a Space](https://help.sap.com/viewer/935116dd7c324355803d4b85809cec97/DEV_CURRENT/en-US/f414c3d62bfe49b38e2cfdd7b4e7d786.html "Use the Storage Assignment properties to allocate disk and in-memory storage to the space and to choose whether it will have access to the SAP HANA data lake.") :arrow_upper_right:\). diff --git a/docs/Integrating-data-and-managing-spaces/managing-your-space-268ea7e.md b/docs/Integrating-data-and-managing-spaces/managing-your-space-268ea7e.md index e7eff55..556cd70 100644 --- a/docs/Integrating-data-and-managing-spaces/managing-your-space-268ea7e.md +++ b/docs/Integrating-data-and-managing-spaces/managing-your-space-268ea7e.md @@ -4,7 +4,7 @@ All data acquisition, preparation, and modeling happens inside spaces. A space is a secure area - space data cannot be accessed outside the space unless it is shared to another space or exposed for consumption. -An administrator must create one or more spaces. They allocate disk and in-memory storage to the space, set its priority, and can limit how much memory and how many threads its statements can consume. See [Creating Spaces and Allocating Storage](https://help.sap.com/viewer/935116dd7c324355803d4b85809cec97/DEV_CURRENT/en-US/2ace657356d54199b0b87d2327b1a70b.html "All data acquisition, preparation, and modeling happens inside spaces. A space is a secure area - space data cannot be accessed outside the space unless it is shared to another space or exposed for consumption.") :arrow_upper_right:. +An administrator must create one or more spaces. They allocate disk and memory storage to the space, set its priority, and can limit how much memory and how many threads its statements can consume. See [Creating Spaces and Allocating Storage](https://help.sap.com/viewer/935116dd7c324355803d4b85809cec97/DEV_CURRENT/en-US/2ace657356d54199b0b87d2327b1a70b.html "All data acquisition, preparation, and modeling happens inside spaces. A space is a secure area - space data cannot be accessed outside the space unless it is shared to another space or exposed for consumption.") :arrow_upper_right:. If an administrator has assigned you the role of space administrator for a certain space via a scoped role, you can assign other users, create connections to source systems, secure data with data access controls, and manage other aspects of the space. 
diff --git a/docs/Integrating-data-and-managing-spaces/monitor-your-space-storage-consumption-94fe6c1.md b/docs/Integrating-data-and-managing-spaces/monitor-your-space-storage-consumption-94fe6c1.md index 576336d..45ed2cb 100644 --- a/docs/Integrating-data-and-managing-spaces/monitor-your-space-storage-consumption-94fe6c1.md +++ b/docs/Integrating-data-and-managing-spaces/monitor-your-space-storage-consumption-94fe6c1.md @@ -23,7 +23,7 @@ Example -See the amount of disk storage and in-memory storage used in your space. +See the amount of disk storage and memory storage used in your space. For more information about storage capacity, see [Allocate Storage to a Space](https://help.sap.com/viewer/935116dd7c324355803d4b85809cec97/DEV_CURRENT/en-US/f414c3d62bfe49b38e2cfdd7b4e7d786.html "Use the Storage Assignment properties to allocate disk and in-memory storage to the space and to choose whether it will have access to the SAP HANA data lake.") :arrow_upper_right:. diff --git a/docs/Integrating-data-and-managing-spaces/review-your-space-status-b2915bf.md b/docs/Integrating-data-and-managing-spaces/review-your-space-status-b2915bf.md index e7ce2be..8430e89 100644 --- a/docs/Integrating-data-and-managing-spaces/review-your-space-status-b2915bf.md +++ b/docs/Integrating-data-and-managing-spaces/review-your-space-status-b2915bf.md @@ -82,7 +82,7 @@ If your space exceeds it's storage quota, it might change to a locked state. A space is locked: -- if the space exceeds its allocations of in-memory or disk storage, +- if the space exceeds its allocations of memory or disk storage, - if the audit logs consume too much disk storage, - if a space administrator has manually locked the space.