Update from SAP DITA CMS (squashed):
commit d22e72d1fe70723a20bc5a9129932952901a9bb3
Author: REDACTED
Date:   Thu Aug 17 20:31:53 2023 +0200

    Update from SAP DITA CMS ( 2023-08-17_20:31:52 )

    Project: loioaf2fcb3e6dd448f3af3c0ff9c70daaf9 (pju1689063851215.project)

    * Project map: loioaf2fcb3e6dd448f3af3c0ff9c70daaf9 (hgo1690215305779.ditamap)

    * Output: loioc25299a38b6448f889a43b42c9e5897d

    * Buildable map: loio678695d903b546e5947af69e56ed42b8 ()

    * Language: en-US

commit 08ebe25559c5bc916a23f6c05f29b4bc01be31aa
Author: REDACTED
Date:   Thu Aug 17 20:30:40 2023 +0200

    Update from SAP DITA CMS ( 2023-08-17_20:30:39 )

    Project: loioaf2fcb3e6dd448f3af3c0ff9c70daaf9 (pju1689063851215.project)

    * Project map: loioaf2fcb3e6dd448f3af3c0ff9c70daaf9 (hgo1690215305779.ditamap)

    * Output: loiob8faae83b519439fb4ea9d0eb1a5f26e

    * Buildable map: loio4e1c1e1d5d1947f5875e93e7597c4f4c ()

    * Language: en-US

##################################################
[Remaining squash message was removed before commit...]
ditaccms-bot committed Aug 23, 2023
1 parent 01262a2 commit e2746d8
Showing 140 changed files with 1,278 additions and 451 deletions.
@@ -16,15 +16,15 @@ This topic contains the following sections:
- [Create and Import Objects to Receive and Prepare Data](acquiring-data-in-the-data-builder-1f15a29.md#loio1f15a29a25354ec28392ab10ca4e9350__section_create_tables)
- [Create Objects and Act On Existing Objects](acquiring-data-in-the-data-builder-1f15a29.md#loio1f15a29a25354ec28392ab10ca4e9350__section_tools)

-Space administrators and integrators prepare connections and other sources to allow modelers to acquire data \(see [Integrating Data and Managing Spaces in SAP Datasphere](https://help.sap.com/viewer/be5967d099974c69b77f4549425ca4c0/cloud/en-US/8f98d3c917f94452bafe288055b60b35.html "Users with the DW Space Administrator or DW Integrator role can create connections to source systems and databases and can schedule and monitor data replication and other data integration tasks. Space administrators use other methods to integrate data into their space and are responsible for maintaining the list of space members and monitoring and managing the space. They can create data access controls to secure data, and can transport content between tenants.") :arrow_upper_right:\).
+Space administrators and integrators prepare connections and other sources to allow modelers to acquire data \(see [Integrating Data and Managing Spaces in SAP Datasphere](https://help.sap.com/viewer/9f36ca35bc6145e4acdef6b4d852d560/DEV_CURRENT/en-US/8f98d3c917f94452bafe288055b60b35.html "Users with the DW Space Administrator or DW Integrator role can create connections to source systems and databases and can schedule and monitor data replication and other data integration tasks. Space administrators use other methods to integrate data into their space and are responsible for maintaining the list of space members and monitoring and managing the space. They can create data access controls to secure data, and can transport content between tenants.") :arrow_upper_right:\).



<a name="loio1f15a29a25354ec28392ab10ca4e9350__section_federate_replicate"/>

## Federate and Replicate Data in Remote Tables

-Many connections \(including most connections to SAP systems\) support importing remote tables to federate or replicate data \(see [Integrating Data via Connections](https://help.sap.com/viewer/be5967d099974c69b77f4549425ca4c0/cloud/en-US/eb85e157ab654152bd68a8714036e463.html "Connections provide access to data from a wide range of sources, cloud as well as on-premise sources, SAP as well as Non-SAP sources, and partner tools. They allow space members to use objects from the connected source to acquire, prepare and access data from those sources in SAP Datasphere. To connect to different sources, SAP Datasphere provides different connection types.") :arrow_upper_right:\).
+Many connections \(including most connections to SAP systems\) support importing remote tables to federate or replicate data \(see [Integrating Data via Connections](https://help.sap.com/viewer/9f36ca35bc6145e4acdef6b4d852d560/DEV_CURRENT/en-US/eb85e157ab654152bd68a8714036e463.html "Connections provide access to data from a wide range of sources, cloud as well as on-premise sources, SAP as well as Non-SAP sources, and partner tools. They allow space members to use objects from the connected source to acquire, prepare and access data from those sources in SAP Datasphere. To connect to different sources, SAP Datasphere provides different connection types.") :arrow_upper_right:\).

You can import remote tables to make the data available in your space from the *Data Builder* start page, in an entity-relationship model, or directly as a source in a view.

@@ -45,15 +45,15 @@ You can import remote tables to make the data available in your space from the *

## Extract, Transform, and Load Data with Data Flows

-Many connections \(including most connections to SAP systems\) support loading data to SAP Datasphere via data flows \(see [Integrating Data via Connections](https://help.sap.com/viewer/be5967d099974c69b77f4549425ca4c0/cloud/en-US/eb85e157ab654152bd68a8714036e463.html "Connections provide access to data from a wide range of sources, cloud as well as on-premise sources, SAP as well as Non-SAP sources, and partner tools. They allow space members to use objects from the connected source to acquire, prepare and access data from those sources in SAP Datasphere. To connect to different sources, SAP Datasphere provides different connection types.") :arrow_upper_right:\).
+Many connections \(including most connections to SAP systems\) support loading data to SAP Datasphere via data flows \(see [Integrating Data via Connections](https://help.sap.com/viewer/9f36ca35bc6145e4acdef6b4d852d560/DEV_CURRENT/en-US/eb85e157ab654152bd68a8714036e463.html "Connections provide access to data from a wide range of sources, cloud as well as on-premise sources, SAP as well as Non-SAP sources, and partner tools. They allow space members to use objects from the connected source to acquire, prepare and access data from those sources in SAP Datasphere. To connect to different sources, SAP Datasphere provides different connection types.") :arrow_upper_right:\).

Data flows support a wide range of extract, transform, and load \(ETL\) operations.

- To get started: In the side navigation area, click ![](../Creating-Finding-Sharing-Objects/images/Data_Builder_f73dc45.png) \(*Data Builder*\), select a space if necessary, and click *New Data Flow* to open the editor. See [Creating a Data Flow](creating-a-data-flow-e30fd14.md).
- To add a source to your data flow, drag it from the *Source Browser* \(see [Using the Source Browser](../using-the-source-browser-7d2b21d.md)\).
- In addition to connections, data flows can load and transform data from the following kinds of sources:
-- Open SQL schemas \(see [Integrating Data via Database Users/Open SQL Schemas](https://help.sap.com/viewer/be5967d099974c69b77f4549425ca4c0/cloud/en-US/3de55a78a4614deda589633baea28645.html "Create a database user in your space to read and write directly to the SAP HANA Cloud database on which SAP Datasphere runs. Each database user has an Open SQL schema, which is attached to a space schema and provides a secure method for exchanging data with the space.") :arrow_upper_right:\)
-- HDI containers \(see [Exchanging Data with SAP SQL Data Warehousing HDI Containers](https://help.sap.com/viewer/be5967d099974c69b77f4549425ca4c0/cloud/en-US/1aec7ca95af24208a61c1a444b249d95.html "Use SAP SQL Data Warehousing to build calculation views and other SAP HANA Cloud HDI objects directly in your SAP Datasphere run-time database and then exchange data between your HDI containers and your SAP Datasphere spaces. SAP SQL Data Warehousing can be used to bring existing HDI objects into your SAP Datasphere environment, and to allow users familiar with the HDI tools to leverage advanced SAP HANA Cloud features.") :arrow_upper_right:\).
+- Open SQL schemas \(see [Integrating Data via Database Users/Open SQL Schemas](https://help.sap.com/viewer/9f36ca35bc6145e4acdef6b4d852d560/DEV_CURRENT/en-US/3de55a78a4614deda589633baea28645.html "Create a database user in your space to read and write directly to the SAP HANA Cloud database on which SAP Datasphere runs. Each database user has an Open SQL schema, which is attached to a space schema and provides a secure method for exchanging data with the space.") :arrow_upper_right:\)
+- HDI containers \(see [Exchanging Data with SAP SQL Data Warehousing HDI Containers](https://help.sap.com/viewer/9f36ca35bc6145e4acdef6b4d852d560/DEV_CURRENT/en-US/1aec7ca95af24208a61c1a444b249d95.html "Use SAP SQL Data Warehousing to build calculation views and other SAP HANA Cloud HDI objects directly in your SAP Datasphere run-time database and then exchange data between your HDI containers and your SAP Datasphere spaces. SAP SQL Data Warehousing can be used to bring existing HDI objects into your SAP Datasphere environment, and to allow users familiar with the HDI tools to leverage advanced SAP HANA Cloud features.") :arrow_upper_right:\).
- Objects that are already in the SAP Datasphere repository \(see [Add Objects from the Repository](../add-objects-from-the-repository-13fcecd.md)\).

- Data flows load data into local tables.
@@ -100,7 +100,7 @@ You can import data from a CSV file to create a new local table \(see [Creating

Purchase data products from providers and download them directly into your space \(see [Purchasing Data from Data Marketplace](../purchasing-data-from-data-marketplace-4096fb8.md)\).

-You can become a data provider and offer your own data products for sale in Data Marketplace via the Data Sharing Cockpit \(see [Data Marketplace - Data Provider&apos;s Guide](https://help.sap.com/viewer/e4059f908d16406492956e5dbcf142dc/cloud/en-US/e479b7b4c95741c7a7a1d42397984c7e.html "Start with Data Marketplace as a data provider.") :arrow_upper_right:\).
+You can become a data provider and offer your own data products for sale in Data Marketplace via the Data Sharing Cockpit \(see [Data Marketplace - Data Provider&apos;s Guide](https://help.sap.com/viewer/bb1899f0b39f415b9de29a845873d7af/DEV_CURRENT/en-US/e479b7b4c95741c7a7a1d42397984c7e.html "Start with Data Marketplace as a data provider.") :arrow_upper_right:\).



@@ -111,7 +111,7 @@ You can become a data provider and offer your own data products for sale in Data
You can create and import empty tables and views to receive and prepare data:

- You can create an empty local table ready to receive data from a CSV file or from a data flow \(see [Creating a Local Table](creating-a-local-table-2509fe4.md)\).
-- You can import business content prepared by SAP and partners to support end-to-end business scenarios \(see [Importing SAP and Partner Business Content from the Content Network](https://help.sap.com/viewer/be5967d099974c69b77f4549425ca4c0/cloud/en-US/400078d689bf4454b3fc977a4e201c2f.html "Users with the DW Administrator or DW Space Administrator role can import business content and sample content from SAP and partners from the Content Network.") :arrow_upper_right:\).
+- You can import business content prepared by SAP and partners to support end-to-end business scenarios \(see [Importing SAP and Partner Business Content from the Content Network](https://help.sap.com/viewer/9f36ca35bc6145e4acdef6b4d852d560/DEV_CURRENT/en-US/400078d689bf4454b3fc977a4e201c2f.html "Users with the DW Administrator or DW Space Administrator role can import business content and sample content from SAP and partners from the Content Network.") :arrow_upper_right:\).
- You can import object definitions from a CSN/JSON file \(see [Importing Objects from a CSN/JSON File](../Creating-Finding-Sharing-Objects/importing-objects-from-a-csn-json-file-23599e6.md)\).
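
A local table defined in the *Data Builder* corresponds to a plain columnar table in the space's schema on the underlying SAP HANA Cloud database. As a rough sketch only (the editor generates the actual definition, and all names here are hypothetical), an empty table ready to receive CSV or data flow output has this general shape:

```sql
-- Rough SQL equivalent of an empty local table defined in the Data Builder
-- (hypothetical names; the Data Builder creates the real definition for you).
CREATE COLUMN TABLE "SALES_ACTUALS" (
    "ORDER_ID"   NVARCHAR(10) PRIMARY KEY,  -- key column
    "ORDER_DATE" DATE,
    "AMOUNT"     DECIMAL(15,2)
);
```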


@@ -215,7 +215,7 @@ All the objects you import or create in the *Data Builder* are listed on the *Da
- Local Table \(see [Creating a Local Table](creating-a-local-table-2509fe4.md)\)
- Graphical View \(see [Creating a Graphical View](../creating-a-graphical-view-27efb47.md)\)
- SQL View \(see [Creating an SQL View](../creating-an-sql-view-81920e4.md)\)
-- Data Access Control \(see [Create a Data Access Control](https://help.sap.com/viewer/be5967d099974c69b77f4549425ca4c0/cloud/en-US/5246328ec59045cb9c2aa693daee2557.html "Space administrators can create data access controls to define criteria on which data can be displayed to users.") :arrow_upper_right:\)
+- Data Access Control \(see [Create a Data Access Control](https://help.sap.com/viewer/9f36ca35bc6145e4acdef6b4d852d560/DEV_CURRENT/en-US/5246328ec59045cb9c2aa693daee2557.html "Space administrators can create data access controls to define criteria on which data can be displayed to users.") :arrow_upper_right:\)
- Analytic Model \(see [Creating an Analytic Model](../Modeling-Data-in-the-Data-Builder/creating-an-analytic-model-e5fbe9e.md)\)
- Task Chain \(see [Creating a Task Chain](creating-a-task-chain-d1afbc2.md)\)

@@ -19,9 +19,9 @@ Add a target table to write data to. You can only have one target table in a dat
- Drag an existing table from the *Source Browser*, drop it onto the canvas, and click the floating *Target* button to use it as the target table. You can choose:
- On the *Repository* tab - A local table.
- On the *Sources* tab:
-- A table in an Open SQL schema - if the space is granted write access to the table or schema \(see [Allow the Space to Write to the Open SQL Schema](https://help.sap.com/viewer/be5967d099974c69b77f4549425ca4c0/cloud/en-US/7eaa370fe4624dea9f182ee9c9ab645f.html "To grant the space write privileges in the Open SQL schema, use the GRANT_PRIVILEGE_TO_SPACE stored procedure. Once this is done, data flows running in the space can select tables in the Open SQL schema as targets and write data to them.") :arrow_upper_right:\).
-- A table in a database user group schema - if the space is granted write access to the table or schema \(see [Allow a Space to Write to the Database User Group Schema](https://help.sap.com/viewer/9f804b8efa8043539289f42f372c4862/cloud/en-US/5b27e03849fe4c7182bcb4274f010e90.html "To grant a space write privileges in the database user group schema, use the GRANT_PRIVILEGE_TO_SPACE stored procedure. Once this is done, data flows running in the space can select tables in the schema as targets and write data to them.") :arrow_upper_right:\).
-- A table in an SAP HDI container - if the space is granted write access to the table or schema \(see [Allow Your Space to Write to Your HDI Container](https://help.sap.com/viewer/be5967d099974c69b77f4549425ca4c0/cloud/en-US/aa3627f987d04b5f95fec1c45083dde9.html "To allow data flows in your SAP Datasphere space to use tables in your HDI container as targets, you must set the appropriate roles and add the container to your space.") :arrow_upper_right:\).
+- A table in an Open SQL schema - if the space is granted write access to the table or schema \(see [Allow the Space to Write to the Open SQL Schema](https://help.sap.com/viewer/9f36ca35bc6145e4acdef6b4d852d560/DEV_CURRENT/en-US/7eaa370fe4624dea9f182ee9c9ab645f.html "To grant the space write privileges in the Open SQL schema, use the GRANT_PRIVILEGE_TO_SPACE stored procedure. Once this is done, data flows running in the space can select tables in the Open SQL schema as targets and write data to them.") :arrow_upper_right:\).
+- A table in a database user group schema - if the space is granted write access to the table or schema \(see [Allow a Space to Write to the Database User Group Schema](https://help.sap.com/viewer/935116dd7c324355803d4b85809cec97/DEV_CURRENT/en-US/5b27e03849fe4c7182bcb4274f010e90.html "To grant a space write privileges in the database user group schema, use the GRANT_PRIVILEGE_TO_SPACE stored procedure. Once this is done, data flows running in the space can select tables in the schema as targets and write data to them.") :arrow_upper_right:\).
+- A table in an SAP HDI container - if the space is granted write access to the table or schema \(see [Allow Your Space to Write to Your HDI Container](https://help.sap.com/viewer/9f36ca35bc6145e4acdef6b4d852d560/DEV_CURRENT/en-US/aa3627f987d04b5f95fec1c45083dde9.html "To allow data flows in your SAP Datasphere space to use tables in your HDI container as targets, you must set the appropriate roles and add the container to your space.") :arrow_upper_right:\).
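
The pages linked above describe the exact steps; as a minimal sketch, granting write access follows this general pattern. The procedure name GRANT_PRIVILEGE_TO_SPACE comes from the linked documentation, while the schema qualifier, parameter names, and object names below are assumptions to verify against those pages:

```sql
-- Minimal sketch: allow space MY_SPACE to use a table in an Open SQL schema
-- as a data flow target. Run as the database user who owns the schema.
-- The procedure name comes from the documentation linked above; the
-- DWC_GLOBAL qualifier, parameter names, and object names are assumptions.
CALL "DWC_GLOBAL"."GRANT_PRIVILEGE_TO_SPACE"(
    OPERATION   => 'GRANT',
    PRIVILEGE   => 'INSERT',         -- write privilege for data flow targets
    SCHEMA_NAME => 'MYUSER#OSQL',    -- hypothetical Open SQL schema
    OBJECT_NAME => 'SALES_STAGING',  -- hypothetical target table
    SPACE_ID    => 'MY_SPACE'        -- hypothetical space ID
);
```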



@@ -21,12 +21,9 @@ Define general settings for your replication flow, such as the load type.

- If *Truncate* is marked for a **database table**, when you start the replication run, the system deletes the table content, but leaves the table structure intact and fills it with the relevant data from the source.

-If not, the system tries to insert the data from the source after the existing data in the target, which can lead to issues with duplicate values in key fields.

-- If *Truncate* is marked for a **data store table** \(such as SAP HANA Cloud, Data Lake\), when you start the replication run, the system deletes the table completely \(data and structure\) and re-creates it based on the source data.

-If not, the system inserts the data from the source after the existing data in the target, which is the desired behavior for these tables in the majority of cases.
+If not, the system inserts new data records after the existing data in the target. For data records that already exist in the target and have been changed in the source, the system updates the target records with the changed data from the source using the UPSERT mode.

- For data store objects \(for example from the data lake component of SAP HANA Cloud\), *Truncate* must always be set. \(If you still try to run a replication flow for an existing target without the *Truncate* option, you get an error message.\) When you start the replication run, the system deletes the object completely \(data and structure\) and re-creates it based on the source data.

If the target structure does not yet exist or is empty, you can ignore the *Truncate* setting.
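
To make the two behaviors concrete, here is a minimal SQL sketch in SAP HANA syntax with hypothetical names; the replication flow performs the equivalent operations internally, so this illustrates the semantics rather than something you run yourself:

```sql
-- Hypothetical target table that already contains one row.
CREATE COLUMN TABLE "TGT" ("ID" INT PRIMARY KEY, "AMOUNT" DECIMAL(10,2));
INSERT INTO "TGT" VALUES (1, 100.00);

-- With Truncate (database table): content is deleted, structure is kept,
-- and the table is refilled from the source.
TRUNCATE TABLE "TGT";
INSERT INTO "TGT" VALUES (1, 120.00);

-- Without Truncate: new keys are appended and existing keys are updated in
-- place (the UPSERT mode described above).
UPSERT "TGT" VALUES (1, 150.00) WITH PRIMARY KEY;  -- updates the row with ID 1
UPSERT "TGT" VALUES (2, 80.00) WITH PRIMARY KEY;   -- inserts a new row, ID 2
```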
