
Commit

Update from SAP DITA CMS (squashed):
commit 60f43cae9254b38dd1c0c4a6ce1220d991d23532
Author: REDACTED
Date:   Mon Feb 26 23:53:28 2024 +0000

    Update from SAP DITA CMS 2024-02-26 23:53:28
    Project: dita-all/sst1706691030144
    Project map: af2fcb3e6dd448f3af3c0ff9c70daaf9.ditamap
    Language: en-US

commit fd38cf2c39ca44d0f8b1eaa7caed8aeac8419797
Author: REDACTED
Date:   Mon Feb 26 23:53:25 2024 +0000

    Update from SAP DITA CMS 2024-02-26 23:53:25
    Project: dita-all/sst1706691030144
    Project map: af2fcb3e6dd448f3af3c0ff9c70daaf9.ditamap
    Language: en-US

commit b1b7be30411913d8e816f2f858c3708a154a64aa
Author: REDACTED
Date:   Mon Feb 26 23:53:13 2024 +0000

    Update from SAP DITA CMS 2024-02-26 23:53:13
    Project: dita-all/sst1706691030144
    Project map: af2fcb3e6dd448f3af3c0ff9c70daaf9.ditamap
    Language: en-US

commit bc48af740dd943fdc3545b3f9ece96f62e2d66f7
Author: REDACTED
Date:   Mon Feb 26 14:59:15 2024 +0000

    Update from SAP DITA CMS 2024-02-26 14:59:15

##################################################
[Remaining squash message was removed before commit...]
ditaccms-bot committed Feb 27, 2024
1 parent ecf55c7 commit 83cdc73
Showing 267 changed files with 787 additions and 770 deletions.
@@ -42,6 +42,6 @@ By default, table data is stored on disk. You can improve performance by enablin
</tr>
</table>

3. Click <span class="SAP-icons"></span> \(Deploy\) to deploy the table and move its data into in-memory storage.
3. Click <span class="SAP-icons-V5"></span> \(Deploy\) to deploy the table and move its data into in-memory storage.


@@ -122,7 +122,7 @@ You can become a data provider and offer your own data products for sale in Data
You can create and import empty tables and views to receive and prepare data:

- You can create an empty local table ready to receive data from a CSV file or from a data flow \(see [Creating a Local Table](creating-a-local-table-2509fe4.md)\).
- You can import business content prepared by SAP and partners to support end-to-end business scenarios \(see [Importing SAP and Partner Business Content from the Content Network](https://help.sap.com/viewer/9f36ca35bc6145e4acdef6b4d852d560/DEV_CURRENT/en-US/400078d689bf4454b3fc977a4e201c2f.html "Users with the DW Administrator or DW Space Administrator role can import business content and sample content from SAP and partners from the Content Network.") :arrow_upper_right:\).
- You can import business content prepared by SAP and partners to support end-to-end business scenarios \(see [Importing SAP and Partner Business Content from the Content Network](https://help.sap.com/viewer/9f36ca35bc6145e4acdef6b4d852d560/DEV_CURRENT/en-US/400078d689bf4454b3fc977a4e201c2f.html "Users with the DW Administrator global role (or users with both a scoped DW Space Administrator role and a global role providing the Lifecycle privilege), can use the Semantic Onboarding app to import business content and sample content from SAP and partners published to the Content Network.") :arrow_upper_right:\).
- You can import object definitions from a CSN/JSON file \(see [Importing Objects from a CSN/JSON File](../Creating-Finding-Sharing-Objects/importing-objects-from-a-csn-json-file-23599e6.md)\).


@@ -135,7 +135,7 @@ All the objects you import or create in the *Data Builder* are listed on the *Da

- Click one of the tabs to filter the list by object type.
- Click a tile to create a new object.
- Click <span class="FPA-icons"></span> \(Show filters\) to filter the list on collections and search by criteria. Click *Show More* to open a dialog with additional filter options.
- Click <span class="FPA-icons-V3"></span> \(Show filters\) to filter the list on collections and search by criteria. Click *Show More* to open a dialog with additional filter options.
- Enter a string in the *Search* field to filter the list on business and technical names and users.
- Click a column header to sort or filter the list by values in the column.
- Select one or more objects and use any of the following tools:
@@ -14,9 +14,22 @@ Define the source for your replication flow \(connection, container, and objects

- Connections are created by your system administration. You can only use a data source if a connection has been created for it in your space and if you have the necessary authorizations.

- Containers are the parent objects that hold the data. For a CDS view, for example, the container is the relevant CDS root folder.
- Containers are the parent objects that hold the data:

- Replication objects are the datasets that you choose for replication, for example CDS views or database tables.
- For CDS view entities, the container is called SQL\_SERVICE.

- For standard CDS views, the container is the CDS root folder \(CDS\_EXTRACTION\).

If a standard CDS view for which replication is enabled is not shown in the CDS\_EXTRACTION folder, make sure that the user in the source connection has the required authorizations. For connections to an SAP S/4HANA Cloud source system, this might mean that the user must be assigned to an authorization group that contains the CDS view \(as described in [Integrating CDS Views Using ABAP CDS Pipeline](https://help.sap.com/docs/SAP_S4HANA_CLOUD/0f69f8fb28ac4bf48d2b57b9637e81fa/f509eddda867452db9631dae1ae442a3.html?version=2308.503)\).

- For database tables, the container is the schema that includes the table.

- For SAP Landscape Transformation Replication Server \(SLT\), the container is the relevant mass transfer ID. Make sure that it is available in SLT before you start creating your replication flow.


- You can use the SQL service exposure from SAP BTP, ABAP environment, or SAP S/4HANA Cloud, respectively, to replicate custom and standard CDS view entities if your system administration has created the relevant communication arrangements. For more information, see [Data Consumption using SAP Datasphere](https://help.sap.com/docs/btp/sap-business-technology-platform/data-consumption-using-sap-datasphere). For information about the relevant integration scenario, see [Integrating SQL Services using SAP Datasphere](https://help.sap.com/docs/btp/sap-business-technology-platform/integrating-sql-services-using-sap-datasphere).

- Replication objects are the datasets that you choose for replication, for example individual CDS view entities or database tables.



@@ -27,22 +40,21 @@ Define the source for your replication flow \(connection, container, and objects

If you are not sure which one to choose, or if none of the connections in the list is suitable for your purposes, contact your administrator.

Alternatively, you can get started by clicking <span class="FPA-icons"></span> \(Browse source connections\)
Alternatively, you can get started by clicking <span class="FPA-icons-V3"></span> \(Browse source connections\).

2. Choose *Select Source Container*. A list of available source containers appears. Select the relevant one for your use case.

To narrow down the selection, start typing a part of the folder name in the *Search* field.

If you choose SAP Datasphere as the source connection, the source container is automatically defined as the space you are in. In addition, the load type is automatically set to *Initial* because *Initial and Delta* is not supported for SAP Datasphere as the source.
- If you choose SAP Datasphere as the source connection, the source container is automatically defined as the space you are in. In addition, the load type is automatically set to *Initial* because *Initial and Delta* is not supported for SAP Datasphere as the source.

- If CDS view entities have been made available using the SQL service exposure in SAP BTP, ABAP environment, you find these entities in a folder called SQL\_SERVICE.

For SAP Landscape Transformation Replication Server \(SLT\), the container is the relevant mass transfer ID. Make sure that it is available in SLT before you start creating your replication flow.

3. Choose *Add Source Objects*. A list of available objects appears. Select the relevant ones for your use case and choose *Next*. A list of the objects you selected appears.

> ### Note:
> - The list only shows objects for which replication is supported. For example, when replicating data from an SAP S/4HANA source you will only be shown CDS views that have the required annotations for data extraction.
>
> If a CDS view for which replication is enabled is not shown in the CDS\_EXTRACTION folder, please ensure that the user in the source connection has the required authorizations. For connections to an SAP S/4HANA Cloud source system, this might mean that the user must be assigned to an authorization group that contains the CDS view \(as described in [Integrating CDS Views Using ABAP CDS Pipeline](https://help.sap.com/docs/SAP_S4HANA_CLOUD/0f69f8fb28ac4bf48d2b57b9637e81fa/f509eddda867452db9631dae1ae442a3.html?version=2308.503)\).
> - The list only shows objects for which replication is supported. For example, if you select SAP S/4HANA Cloud as the source and the folder SQL\_SERVICE as the container, you will only be shown CDS view entities that have the required annotations for data extraction.
>
> - If you use SAP Datasphere as the source connection, your source objects must be local tables that have been deployed, are **not** enabled for delta capturing, and have a primary key.
@@ -140,15 +140,15 @@ Add a source to read data from. You can add multiple sources and combine them to

5. In the *Columns* section, review the columns of the source.

- To view the data type of a column, hover over it and click <span class="FPA-icons"></span>
- If the source is a CSV, JSON, or Excel file, you can change the data type of a column and its corresponding properties by hovering over the column and clicking <span class="FPA-icons"></span> \(Menu\)*Edit Column*.
- To view the data type of a column, hover over it and click <span class="FPA-icons-V3"></span>.
- If the source is a CSV, JSON, or Excel file, you can change the data type of a column and its corresponding properties by hovering over the column, clicking <span class="FPA-icons-V3"></span> \(Menu\), and selecting *Edit Column*.

> ### Note:
> For CSV, JSON, or Excel files, the schema is inferred from a subset of the data, so verify the suggested data types and adjust them as needed to cover all of the data.
- For a remote source, you can control the data being read.
- To add/delete columns from source, click <span class="FPA-icons"></span>/<span class="FPA-icons"></span> \(Delete\) respectively.
- To add new/removed columns from the source based on latest metadata, click <span class="SAP-icons"></span> \(Refresh to fetch column changes from source\)
- To add or delete columns from the source, click <span class="FPA-icons-V3"></span> or <span class="FPA-icons-V3"></span> \(Delete\), respectively.
- To pick up columns that have been added to or removed from the source based on the latest metadata, click <span class="SAP-icons-V5"></span> \(Refresh to fetch column changes from source\).


6. \[optional\] For CDI, OData, and ABAP CDS sources, complete the *Source Filters* section to define filter conditions for the consumption of their data:
@@ -167,7 +167,7 @@ Add a source to read data from. You can add multiple sources and combine them to

`<column_name>` is a column in the target table where data is being replicated, normally of type `date` or `timestamp`.
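
For instance, a date-based source filter restricts which records are read from the source. As a minimal sketch (the table name `SALESORDERS` and column name `LASTCHANGEDATE` are purely illustrative, not taken from the product), the condition is comparable to the `WHERE` clause of a query such as:

```sql
-- Illustrative only: read just the records changed on or after 1 January 2024,
-- the kind of restriction a source filter on a date or timestamp column expresses.
SELECT *
FROM "SALESORDERS"
WHERE "LASTCHANGEDATE" >= '2024-01-01 00:00:00';
```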

7. \[optional\] For source tables from OData remote connections, you can add or edit the parameters of a selected source table in the *OData Parameters* section. OData parameters allow you to define additional information or constraints to control how the data is returned from the OData connection. Click <span class="FPA-icons"></span> \(Define Parameters\) to open the *OData Parameters* dialog. You can:
7. \[optional\] For source tables from OData remote connections, you can add or edit the parameters of a selected source table in the *OData Parameters* section. OData parameters allow you to define additional information or constraints to control how the data is returned from the OData connection. Click <span class="FPA-icons-V3"></span> \(Define Parameters\) to open the *OData Parameters* dialog. You can:

- Manually enter a custom parameter according to URI conventions \(see [URI Conventions](https://www.odata.org/documentation/odata-version-2-0/uri-conventions/)\).

@@ -178,7 +178,7 @@ Add a source to read data from. You can add multiple sources and combine them to

In the *OData Parameters* section, enable the *Apply Parameters on Preview* toggle if you want to see how the changes in parameter impact the table preview.
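
As a sketch of what manually entered custom parameters can look like, the following query options follow the OData version 2 URI conventions linked above; the entity fields and values are invented for illustration:

```
$filter=Country eq 'US' and OrderDate ge datetime'2024-01-01T00:00:00'
$select=OrderID,Country,Revenue
$top=1000
```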

8. \[optional\] For source tables from OData remote connections, you can edit the depth properties of a selected source table in the *OData Properties* section. Click <span class="FPA-icons"></span> \(Edit OData Properties\) to open the *OData Properties* dialog. You can set the depth of the source table to either 1 or 2.
8. \[optional\] For source tables from OData remote connections, you can edit the depth properties of a selected source table in the *OData Properties* section. Click <span class="FPA-icons-V3"></span> \(Edit OData Properties\) to open the *OData Properties* dialog. You can set the depth of the source table to either 1 or 2.

The depth of an OData object refers to the level of related entities that are included in the response when querying the OData service. The depth is by default set to 1 so that only the properties of the requested entity are returned. You can change the depth to 2 to include a second level. Depth is useful when you want to optimize performance by controlling the amount of data returned in a single request.
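
In plain OData terms, and only as an analogy, depth 1 is comparable to requesting the entity set on its own, while depth 2 is comparable to also expanding one level of related entities (the `SalesOrders` and `Items` names below are hypothetical):

```
# Depth 1: only the properties of the requested entities
GET /odata/SalesOrders

# Depth 2: also include one level of related entities
GET /odata/SalesOrders?$expand=Items
```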

@@ -102,11 +102,18 @@ Add a source table to read data from. You can add multiple source tables and com

To load delta changes from a source table \(including deleted records\), the delta capture setting must be enabled for the table. It is only possible to load delta changes from one source table. If you want to load delta changes from a source table, you need to ensure that the value of the option *Delta Capture* is *On* for both the source table and for the target table. In the *Transformation Flow Properties* panel, ensure that the load type *Initial and Delta* is selected.

4. In the *Columns* section, you can view the columns of the source table. To check the data type of a column, hover over it and click <span class="FPA-icons"></span>.
4. In the *Columns* section, you can view the columns of the source table. To check the data type of a column, hover over it and click <span class="FPA-icons-V3"></span>.

If the delta capture setting is enabled for a table, two additional columns are present in the table to track changes. For more information, see [Capturing Delta Changes in Your Local Table](capturing-delta-changes-in-your-local-table-154bdff.md).

> ### Note:
> If the delta capture setting is enabled for a source table, the columns *Change Date* and *Change Type* are automatically mapped to these columns in the target table. Mapping these columns \(or a calculated column that contains the content of these columns\) to any other target column is not permitted. For more information, see [Capturing Delta Changes in Your Local Table](capturing-delta-changes-in-your-local-table-154bdff.md).
> ### Note:
> The *Change Type* column does not support null values. Ensure that no null values are written to the *Change Type* column of the target table.
>
> **Example**
>
> You add two source tables. The delta capture setting is enabled for one table and disabled for the other. You create a union for these two tables. In the target table, the *Change Type* column will contain null values. In this case, running the transformation flow will result in errors.
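
As a rough SQL analogy of why this fails (the table and column names below are illustrative stand-ins for the delta capture columns, not the actual generated names):

```sql
-- Source A has delta capture enabled, source B does not, so the union has to
-- pad the missing delta columns for B with NULL. Rows coming from B would then
-- write NULL into the target's Change Type column, which is not allowed.
SELECT "ID", "AMOUNT", "CHANGE_TYPE", "CHANGE_DATE"
FROM "SOURCE_A_WITH_DELTA"
UNION ALL
SELECT "ID", "AMOUNT", NULL AS "CHANGE_TYPE", NULL AS "CHANGE_DATE"
FROM "SOURCE_B_WITHOUT_DELTA";
```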

@@ -21,7 +21,7 @@ Select a target \(connection and container\) to define the target environment fo

## Procedure

1. Choose <span class="FPA-icons"></span> \(Browse target connections\). A list of available target connections appears.
1. Choose <span class="FPA-icons-V3"></span> \(Browse target connections\). A list of available target connections appears.

2. Select the relevant one for your use case.

@@ -113,13 +113,13 @@ A transformation flow writes data to a target table. You can create a new target
- To delete a mapping, select it and click *Delete*. To delete all mappings, click *Remove all Mappings*.

- You can, at any time, click <span class="FPA-icons"></span> \(Auto Map\) to relaunch automapping based on names and datatypes.
- You can, at any time, click <span class="FPA-icons-V3"></span> \(Auto Map\) to relaunch automapping based on names and datatypes.


> ### Note:
> If the delta capture setting is enabled for the target table of a transformation flow, the *Change Date* column of the target table will always contain the time that the transformation flow was run, even if an incoming column is mapped to the *Change Date* column.
4. In the *Columns* section, view the columns of the target table. To check the data type of a column, hover over it and click <span class="FPA-icons"></span>.
4. In the *Columns* section, view the columns of the target table. To check the data type of a column, hover over it and click <span class="FPA-icons-V3"></span>.

If the delta capture setting is enabled for a table, two additional columns are present in the table to track changes. For more information, see [Capturing Delta Changes in Your Local Table](capturing-delta-changes-in-your-local-table-154bdff.md).

@@ -12,7 +12,7 @@ Add a target table to write data to. You can only have one target table in a dat

1. Add a target table to the data flow in one of the following ways:

- Click <span class="FPA-icons"></span> \(Add Table\) in the toolbar, and drag and drop it onto the canvas to create a new target table
- Click <span class="FPA-icons-V3"></span> \(Add Table\) in the toolbar, and drag and drop it onto the canvas to create a new target table.

The target table is added to your data flow. The text **New** at the bottom-right of the operator shows that the table is newly added.

@@ -189,10 +189,10 @@ Add a target table to write data to. You can only have one target table in a dat

- To delete a mapping, select it and click *Delete*. To delete all mappings, click *Remove all Mappings*.

- You can, at any time, click <span class="FPA-icons"></span> \(Auto Map\) to relaunch automapping based on names and datatypes.
- You can, at any time, click <span class="FPA-icons-V3"></span> \(Auto Map\) to relaunch automapping based on names and datatypes.


5. In the *Columns* section, view the columns of the target table. To check the data type of a column, hover over it and click <span class="FPA-icons"></span>.
5. In the *Columns* section, view the columns of the target table. To check the data type of a column, hover over it and click <span class="FPA-icons-V3"></span>.

6. \[optional\] In the *Advanced* section, configure the following properties:

@@ -25,7 +25,7 @@ Add an *Aggregation* node to perform `SUM`, `COUNT`, `MIN`, and `MAX` calculatio

For example, if you want to aggregate `Revenue` per `Country`, you should exclude all columns except `Revenue` and `Country`.

3. Select the projection node in order to display its context tools, and click <span class="FPA-icons"></span> \(Aggregation\).
3. Select the projection node in order to display its context tools, and click <span class="FPA-icons-V3"></span> \(Aggregation\).

![](images/Graphical_View_Editor_-_Aggregate_Data_90e85f4.gif) An *Aggregation* node is created, its symbol is selected, and its properties are displayed in the side panel.

@@ -67,8 +67,8 @@ Add an *Aggregation* node to perform `SUM`, `COUNT`, `MIN`, and `MAX` calculatio
- Total revenues of more than 1m, enter `SUM(Revenue) > 1000000`
- Total revenues for the US only, enter `Country='US'`

When working on a large expression, click <span class="FPA-icons"></span> \(Enter Full Screen\) to expand the expression editor.
When working on a large expression, click <span class="FPA-icons-V3"></span> \(Enter Full Screen\) to expand the expression editor.
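
Purely as an illustration, the aggregation and filter described above (total `Revenue` per `Country`, keeping only totals over 1m) correspond roughly to the following query; the `Sales` source name is hypothetical:

```sql
-- Rough SQL equivalent of an Aggregation node with Country as a group-by
-- column, SUM(Revenue) as the aggregated measure, and a filter on the total.
SELECT "Country", SUM("Revenue") AS "Revenue"
FROM "Sales"
GROUP BY "Country"
HAVING SUM("Revenue") > 1000000;
```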

7. Click <span class="FPA-icons"></span> \(Preview Data\) to open the *Data Preview* panel and review the data output by this node. For more information, see [Viewing or Previewing Data in Data Builder Objects](../viewing-or-previewing-data-in-data-builder-objects-b338e4a.md).
7. Click <span class="FPA-icons-V3"></span> \(Preview Data\) to open the *Data Preview* panel and review the data output by this node. For more information, see [Viewing or Previewing Data in Data Builder Objects](../viewing-or-previewing-data-in-data-builder-objects-b338e4a.md).

