**docs/data-factory/dataflow-gen2-deployment-pipelines.md**

ms.reviewer: whhender
ms.author: miescobar
author: ptyx507x
ms.topic: conceptual
ms.date: 09/22/2025
ms.custom: dataflows
ai-usage: ai-assisted
---

# Dataflow Gen2 and deployment pipelines

>[!NOTE]
>Fabric deployment pipelines support [Dataflow Gen2 with CI/CD support](dataflow-gen2-cicd-and-git-integration.md). This article provides guidance on how to use Dataflow Gen2 with deployment pipelines.

Microsoft Fabric provides tools for Continuous Integration/Continuous Deployment (CI/CD) and Application Lifecycle Management (ALM). These tools help teams build, test, and deploy data solutions with consistency and governance.

Dataflow Gen2 with CI/CD support integrates dataflows into [Fabric deployment pipelines](/fabric/cicd/deployment-pipelines/intro-to-deployment-pipelines). This integration automates build, test, and deployment stages. It provides consistent, version-controlled delivery of dataflows and improves reliability by embedding Dataflow Gen2 into Fabric's pipeline orchestration.

This article provides guidance on solution architectures for your Dataflow and related Fabric items. You can use this guidance to build a deployment pipeline that fits your needs.

While there are many goals with deployment pipelines, this article focuses on two specific goals:

- **Consistency**: Keep your Dataflow's mashup script unchanged across all deployment stages.
- **Stage-specific configuration**: Use dynamic references for data sources and destinations that adapt to each stage (Dev, Test, Prod).

## Solution architectures

A good solution architecture works for your Dataflow Gen2 and extends through your overall Fabric solution.

The following list covers the available solution architectures when using a Dataflow Gen2:

- **Parameterized Dataflow Gen2**: Using the [public parameters mode](dataflow-parameters.md), you can parameterize Dataflow components—such as logic, sources, or destinations—and pass runtime values to dynamically adapt the Dataflow based on the pipeline stage.
- **Variable libraries inside a Dataflow Gen2**: Using the [variable libraries integration with Dataflow Gen2](dataflow-gen2-variable-library-integration.md), you can reference variables throughout your Dataflow. These variables are evaluated at runtime based on values stored in the library, enabling dynamic behavior aligned with the pipeline stage. A sketch of the kind of stage-specific values both approaches manage follows this list.

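
To make this concrete, the following JSON is a minimal, hypothetical sketch of the stage-specific values either approach could supply. The names (`LakehouseId`, `SourceFolder`) and IDs are invented for illustration; your own Dataflow defines its own parameters or library variables. With the parameterized approach, an orchestrator passes the value for the current stage at refresh time; with a variable library, the value set active in each workspace supplies it.

```json
{
  "Dev":  { "LakehouseId": "00000000-0000-0000-0000-0000000000a1", "SourceFolder": "landing/dev" },
  "Test": { "LakehouseId": "00000000-0000-0000-0000-0000000000b2", "SourceFolder": "landing/test" },
  "Prod": { "LakehouseId": "00000000-0000-0000-0000-0000000000c3", "SourceFolder": "landing/prod" }
}
```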
The main difference between these two approaches is how they pass values at runtime. A parameterized Dataflow requires a process through either the REST API or the [Fabric pipeline Dataflow activity](dataflow-activity.md) to pass values. The variable libraries integration with Dataflow Gen2 requires a variable library at the workspace level and the correct variables referenced inside the Dataflow.
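
As a rough sketch of the parameterized approach, the request body below shows the general shape of a payload you might pass when triggering an on-demand refresh of a Dataflow Gen2 (CI/CD) item through the Fabric REST API (for example, `POST .../workspaces/{workspaceId}/items/{dataflowId}/jobs/instances?jobType=Refresh`). The parameter names reuse the hypothetical values from the earlier sketch, and the exact job type and payload schema are assumptions here; confirm them against the public parameters and REST API documentation before relying on this shape.

```json
{
  "executionData": {
    "parameters": [
      { "parameterName": "LakehouseId", "type": "Text", "value": "00000000-0000-0000-0000-0000000000c3" },
      { "parameterName": "SourceFolder", "type": "Text", "value": "landing/prod" }
    ]
  }
}
```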
Both options are valid, and each has its own considerations and limitations. We recommend evaluating how you want your workflow to operate and how it fits into your overall Fabric solution.

## General considerations

Here are things to consider when using a Dataflow Gen2 inside a Fabric deployment pipeline:

- **Default references**: Dataflow Gen2 creates absolute references to Fabric items (for example, Lakehouses, Warehouses) by default. Review your Dataflow to identify which references should remain fixed and which should be adapted dynamically across environments.
- **Connection behavior**: Dataflow Gen2 doesn't support dynamic reconfiguration of data source connections. If your Dataflow connects to sources like SQL databases using parameters (for example, server name, database name), those connections are statically bound and can't be altered using workspace variables or parameterization.
- **Git integration scope**: We recommend enabling Git integration only for the first stage (typically Dev). Once the mashup script is authored and committed, subsequent stages can use deployment pipelines without Git.
- **Use Fabric pipelines to orchestrate**: A [Dataflow activity in pipelines](dataflow-activity.md) can help you orchestrate the run of your Dataflow and pass parameters using an intuitive user interface. You can also use the [variable library integration with pipelines](variable-library-integration-with-data-pipelines.md) to retrieve the values from the variables and pass those values to the Dataflow parameters at runtime.
- **Deployment rules compatibility**: Currently, deployment rules can modify certain item properties but don't support altering Dataflow connections or mashup logic. Plan your architecture accordingly.
- **Testing across stages**: Always validate Dataflow behavior in each stage after deployment. Differences in data sources, permissions, or variable values can lead to unexpected results.

:::image type="content" source="../media/migrate-pipeline-powershell-upgrade/verify-installation-module.png" alt-text="Screenshot of the module command output.":::

description: Map your Azure Data Factory Linked Service to your Fabric Connection
author: whhender
ms.author: whhender
ms.reviewer: ssrinivasara
ms.topic: include
ms.date: 09/17/2025
---
```json
[
  {
    "type": "LinkedServiceToConnectionId",
    "key": "<ADF LinkedService Name>",
    "value": "<Fabric Connection ID>"
  }
]
```
- The `type` is the type of mapping to perform. It's usually `LinkedServiceToConnectionId`, but you might also use [other types in special cases.](../migrate-pipelines-how-to-add-connections-to-resolutions-file.md#resolution-types)
- The `key` depends on the `type` you're using. For `LinkedServiceToConnectionId`, the `key` is the name of the [ADF linked service](/azure/data-factory/concepts-linked-services) that you want to map.
- The `value` is the GUID of the Fabric connection you want to map to. You can [find the GUID in settings of the Fabric connection](../migrate-pipelines-how-to-add-connections-to-resolutions-file.md#get-the-guid-for-your-connection).

So, for example, if you have two ADF linked services named `MyAzureBlobStorage` and `MySQLServer` that you want to map to Fabric connections, your file would look like this:
```json
[
  {
    "type": "LinkedServiceToConnectionId",
    "key": "MyAzureBlobStorage",
    "value": "aaaa0000-bb11-2222-33cc-444444dddddd"
  },
  {
    "type": "LinkedServiceToConnectionId",
    "key": "MySQLServer",
    "value": "bbbb1111-cc22-3333-44dd-555555eeeeee"
  }
]
```
Create your **Resolutions.json** file using this structure and save it somewhere on your machine so that PowerShell can access it.