
Commit 17d88bf

Merge pull request #2231 from MicrosoftDocs/main638941755061905528sync_temp
For protected branches, the push strategy should use a PR and merge to the target branch to work around a git push error
2 parents 3f487bd + a6897c3 commit 17d88bf

35 files changed: +1245 -93 lines

docs/data-factory/.openpublishing.redirection.data-factory.json

Lines changed: 0 additions & 5 deletions
@@ -65,11 +65,6 @@
  "redirect_url": "/fabric/data-factory/known-issue-staging-item",
  "redirect_document_id": true
  },
- {
- "source_path_from_root": "/docs/data-factory/migrate-pipelines-powershell-upgrade-module-for-azure-data-factory-to-fabric.md",
- "redirect_url": "/fabric/data-factory/migrate-from-azure-data-factory",
- "redirect_document_id": true
- },
  {
  "source_path_from_root": "/docs/data-factory/connector-azure-blob-storage-dataflows.md",
  "redirect_url": "/fabric/data-factory/connector-azure-blob-storage",

docs/data-factory/dataflow-gen2-deployment-pipelines.md

Lines changed: 20 additions & 19 deletions
@@ -5,46 +5,47 @@ ms.reviewer: whhender
  ms.author: miescobar
  author: ptyx507x
  ms.topic: conceptual
- ms.date: 9/19/2025
+ ms.date: 09/22/2025
  ms.custom: dataflows
+ ai-usage: ai-assisted
  ---

  # Dataflow Gen2 and deployment pipelines

  >[!NOTE]
- >Fabric deployment pipelines support [Dataflow Gen2 with CI/CD support](dataflow-gen2-cicd-and-git-integration.md). This article aims to provide general concepts and guidance on how to best use Dataflow Gen2 with deployment pipelines.
+ >Fabric deployment pipelines support [Dataflow Gen2 with CI/CD support](dataflow-gen2-cicd-and-git-integration.md). This article provides guidance on how to use Dataflow Gen2 with deployment pipelines.

- Microsoft Fabric offers a robust set of tools for implementing Continuous Integration/Continuous Deployment (CI/CD) and Application Lifecycle Management (ALM). These capabilities empower teams to build, test, and deploy data solutions with speed, consistency, and governance.
+ Microsoft Fabric provides tools for Continuous Integration/Continuous Deployment (CI/CD) and Application Lifecycle Management (ALM). These tools help teams build, test, and deploy data solutions with consistency and governance.

- Dataflow Gen2 with CI/CD support enables seamless integration of dataflows into [Fabric deployment pipelines](/fabric/cicd/deployment-pipelines/intro-to-deployment-pipelines). It automates build, test, and deployment stages, ensuring consistent, version-controlled delivery of dataflows. It accelerates development cycles, improves reliability, and simplifies management by embedding Dataflow Gen2 directly into Fabric's end-to-end pipeline orchestration.
+ Dataflow Gen2 with CI/CD support integrates dataflows into [Fabric deployment pipelines](/fabric/cicd/deployment-pipelines/intro-to-deployment-pipelines). This integration automates build, test, and deployment stages. It provides consistent, version-controlled delivery of dataflows and improves reliability by embedding Dataflow Gen2 into Fabric's pipeline orchestration.

- This article provides guidance on the different solution architectures for your Dataflow and related Fabric items to build a deployment pipeline tailored to your needs.
+ This article provides guidance on solution architectures for your Dataflow and related Fabric items. You can use this guidance to build a deployment pipeline that fits your needs.

  While there are many goals with deployment pipelines, this article focuses on two specific goals:

- * **Consistency**: Ensure the mashup script of your Dataflow remains unchanged across all deployment stages.
- * **Stage-specific configuration**: Use dynamic references for data sources and destinations that adapt to each stage (Dev, Test, Prod).
+ - **Consistency**: Keep your Dataflow's mashup script unchanged across all deployment stages.
+ - **Stage-specific configuration**: Use dynamic references for data sources and destinations that adapt to each stage (Dev, Test, Prod).

  ## Solution architectures

- A good solution architecture enables you to not only have something that works for your Dataflow Gen2, but also components that extend through your overall Fabric solution.
+ A good solution architecture works for your Dataflow Gen2 and extends through your overall Fabric solution.

  The following list covers the available solution architectures when using a Dataflow Gen2:

- * **Parameterized Dataflow Gen2**: Using the [public parameters mode](dataflow-parameters.md), you can parameterize Dataflow components—such as logic, sources, or destinations—and pass runtime values to dynamically adapt the Dataflow based on the pipeline stage.
- * **Variable libraries inside a Dataflow Gen2**: Using the [variable libraries integration with Dataflow Gen2](dataflow-gen2-variable-library-integration.md), you can reference variables throughout your Dataflow. These variables are evaluated at runtime based on values stored in the library, enabling dynamic behavior aligned with the pipeline stage.
+ - **Parameterized Dataflow Gen2**: Using the [public parameters mode](dataflow-parameters.md), you can parameterize Dataflow components—such as logic, sources, or destinations—and pass runtime values to dynamically adapt the Dataflow based on the pipeline stage.
+ - **Variable libraries inside a Dataflow Gen2**: Using the [variable libraries integration with Dataflow Gen2](dataflow-gen2-variable-library-integration.md), you can reference variables throughout your Dataflow. These variables are evaluated at runtime based on values stored in the library, enabling dynamic behavior aligned with the pipeline stage.

- The main differences between these two relies on how a parameterized Dataflow requires setting a process through either the REST API or the [Fabric pipeline Dataflow activity](dataflow-activity.md) to pass values for runtime whereas the variable libraries integration with Dataflow Gen2 relies on a variable library being available at the workspace level and the correct variables being referenced inside the Dataflow.
+ The main difference between these two approaches is how they pass values at runtime. A parameterized Dataflow requires a process through either the REST API or the [Fabric pipeline Dataflow activity](dataflow-activity.md) to pass values. The variable libraries integration with Dataflow Gen2 requires a variable library at the workspace level and the correct variables referenced inside the Dataflow.

- While both options are valid, each has its own set of considerations and limitations. We recommend doing an assessment as to how you'd like your workflow to be and how such workflow would fit into your overall Fabric solution.
+ Both options are valid, and each has its own considerations and limitations. We recommend evaluating how you want your workflow to work and how it fits into your overall Fabric solution.

  ## General considerations

- The following are a collection of things to consider when using a Dataflow Gen2 inside a Fabric deployment pipeline:
+ Here are things to consider when using a Dataflow Gen2 inside a Fabric deployment pipeline:

- * **Default References**: Dataflow Gen2 creates absolute references to Fabric items (for example, Lakehouses, Warehouses) by default. Review your Dataflow to identify which references should remain fixed and which should be adapted dynamically across environments.
- * **Connection Behavior**: Dataflow Gen2 doesn't support dynamic reconfiguration of data source connections. If your Dataflow connects to sources like SQL databases using parameters (for example, server name, database name), those connections are statically bound and can't be altered using workspace variables or parameterization.
- * **Git Integration Scope**: As a general recommendation, only the first stage (typically Dev) needs Git integration enabled. Once the mashup script is authored and committed, subsequent stages can use deployment pipelines without Git.
- * **Use Fabric pipelines to orchestrate**: A [Dataflow activity in pipelines](dataflow-activity.md) can help you orchestrate the run of your Dataflow and pass parameters using an intuitive user interface. You can also use the [variable library integration with pipelines](variable-library-integration-with-data-pipelines.md) to retrieve the values from the variables and pass those values to the Dataflow parameters at runtime.
- * **Deployment Rules Compatibility**: While deployment rules can modify certain item properties, they don't currently support altering Dataflow connections or mashup logic. Plan your architecture accordingly.
- * **Testing Across Stages**: Always validate Dataflow behavior in each stage after deployment. Differences in data sources, permissions, or variable values can lead to unexpected results.
+ - **Default references**: Dataflow Gen2 creates absolute references to Fabric items (for example, Lakehouses, Warehouses) by default. Review your Dataflow to identify which references should remain fixed and which should be adapted dynamically across environments.
+ - **Connection behavior**: Dataflow Gen2 doesn't support dynamic reconfiguration of data source connections. If your Dataflow connects to sources like SQL databases using parameters (for example, server name, database name), those connections are statically bound and can't be altered using workspace variables or parameterization.
+ - **Git integration scope**: We recommend enabling Git integration only for the first stage (typically Dev). Once the mashup script is authored and committed, subsequent stages can use deployment pipelines without Git.
+ - **Use Fabric pipelines to orchestrate**: A [Dataflow activity in pipelines](dataflow-activity.md) can help you orchestrate the run of your Dataflow and pass parameters using an intuitive user interface. You can also use the [variable library integration with pipelines](variable-library-integration-with-data-pipelines.md) to retrieve the values from the variables and pass those values to the Dataflow parameters at runtime.
+ - **Deployment rules compatibility**: Currently, deployment rules can modify certain item properties but don't support altering Dataflow connections or mashup logic. Plan your architecture accordingly.
+ - **Testing across stages**: Always validate Dataflow behavior in each stage after deployment. Differences in data sources, permissions, or variable values can lead to unexpected results.
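
The revised article above contrasts two ways to supply stage-specific values: passing parameters at runtime through the REST API or a pipeline Dataflow activity, versus reading them from a workspace variable library. As a rough, hypothetical sketch of the first approach, the PowerShell call below triggers a Dataflow Gen2 run and hands it parameter values. The job endpoint pattern, the `jobType` value, the shape of the `executionData` payload, and the parameter names are assumptions to verify against the current Fabric REST API reference; the IDs and token are placeholders.

```PowerShell
# Hypothetical sketch: run a Dataflow Gen2 and pass parameter values at runtime.
# The endpoint pattern, jobType value, and executionData shape are assumptions;
# verify them against the Fabric REST API documentation before relying on them.
$workspaceId = "00000000-0000-0000-0000-000000000000"   # placeholder
$dataflowId  = "11111111-1111-1111-1111-111111111111"   # placeholder
$token       = "<bearer token for https://api.fabric.microsoft.com>"

$body = @{
    executionData = @{
        parameters = @{
            ServerName   = "test-sql.database.windows.net"   # assumed parameter names
            DatabaseName = "SalesDb"
        }
    }
} | ConvertTo-Json -Depth 5

Invoke-RestMethod -Method Post `
    -Uri "https://api.fabric.microsoft.com/v1/workspaces/$workspaceId/items/$dataflowId/jobs/instances?jobType=Refresh" `
    -Headers @{ Authorization = "Bearer $token" } `
    -ContentType "application/json" `
    -Body $body
```

The variable library approach described in the article needs no such call: the Dataflow reads the library values in place when it runs in each stage.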
Lines changed: 58 additions & 0 deletions
@@ -0,0 +1,58 @@
+ ---
+ title: Prepare the Environment for Fabric Pipeline Upgrade
+ description: Steps to prepare the environment for pipeline upgrade
+ author: whhender
+ ms.author: whhender
+ ms.reviewer: ssrinivasara
+ ms.topic: include
+ ms.date: 09/16/2025
+ ---
+
+ Before you start upgrading pipelines, [verify](#verify-your-installation) your environment has the required tools and modules:
+
+ - [PowerShell 7.4.2 (x64) or later](#install-powershell-742-x64-or-later)
+ - [FabricPipelineUpgrade module](#install-and-import-the-fabricpipelineupgrade-module)
+
+ ### Install PowerShell 7.4.2 (x64) or later
+
+ You need **PowerShell 7.4.2** or later on your machine.
+
+ [Download PowerShell](/powershell/scripting/install/installing-powershell-on-windows)
+
+ ### Install and import the FabricPipelineUpgrade module
+
+ 1. Open PowerShell 7 (x64).
+
+ 1. Select the Start menu, search for **PowerShell 7**, open the app's context menu, and select **Run as administrator**.
+
+ :::image type="content" source="../media/migrate-pipeline-powershell-upgrade/powershell-icon.png" alt-text="Screenshot of the PowerShell icon.":::
+
+ 1. In the elevated PowerShell window, install the module from the PowerShell Gallery:
+
+ ```PowerShell
+ Install-Module Microsoft.FabricPipelineUpgrade -Repository PSGallery -SkipPublisherCheck
+ ```
+
+ 1. Import the module into your session:
+
+ ```PowerShell
+ Import-Module Microsoft.FabricPipelineUpgrade
+ ```
+
+ 1. If you see a signing or execution policy error, run this command and then import the module again:
+
+ ```PowerShell
+ Set-ExecutionPolicy -Scope Process -ExecutionPolicy Bypass
+ ```
+
+ ### Verify your installation
+
+ Run this command to confirm the module loaded correctly:
+
+ ```PowerShell
+ Get-Command -Module Microsoft.FabricPipelineUpgrade
+ ```
+ :::image type="content" source="../media/migrate-pipeline-powershell-upgrade/verify-installation-module.png" alt-text="Screenshot of the module command output.":::
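
Because the include above requires PowerShell 7.4.2 (x64) or later, a small preflight check can save a failed install. The following is a minimal sketch, not part of the documented steps:

```PowerShell
# Minimal preflight sketch: confirm the current session is PowerShell 7.4.2 (x64) or later
# before running Install-Module. Not part of the documented steps above.
if ($PSVersionTable.PSVersion -lt [version]'7.4.2') {
    Write-Warning "PowerShell $($PSVersionTable.PSVersion) detected; install 7.4.2 (x64) or later first."
}
elseif (-not [Environment]::Is64BitProcess) {
    Write-Warning "This session is 32-bit; open the 64-bit (x64) PowerShell 7 host instead."
}
else {
    Write-Host "PowerShell $($PSVersionTable.PSVersion) (x64) meets the prerequisite."
}
```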
Lines changed: 42 additions & 0 deletions
@@ -0,0 +1,42 @@
+ ---
+ title: Basics to Create a Resolution File
+ description: Map your Azure Data Factory Linked Service to your Fabric Connection
+ author: whhender
+ ms.author: whhender
+ ms.reviewer: ssrinivasara
+ ms.topic: include
+ ms.date: 09/17/2025
+ ---
+
+ ```json
+ [
+   {
+     "type": "LinkedServiceToConnectionId",
+     "key": "<ADF LinkedService Name>",
+     "value": "<Fabric Connection ID>"
+   }
+ ]
+ ```
+
+ - The `type` is the type of mapping to perform. It's usually `LinkedServiceToConnectionId`, but you might also use [other types in special cases.](../migrate-pipelines-how-to-add-connections-to-resolutions-file.md#resolution-types)
+ - The `key` depends on the `type` you're using. For `LinkedServiceToConnectionId`, the `key` is the name of the [ADF linked service](/azure/data-factory/concepts-linked-services) that you want to map.
+ - The `value` is the GUID of the Fabric connection you want to map to. You can [find the GUID in settings of the Fabric connection](../migrate-pipelines-how-to-add-connections-to-resolutions-file.md#get-the-guid-for-your-connection).
+
+ So, for example, if you have two ADF linked services named `MyAzureBlobStorage` and `MySQLServer` that you want to map to Fabric connections, your file would look like this:
+
+ ```json
+ [
+   {
+     "type": "LinkedServiceToConnectionId",
+     "key": "MyAzureBlobStorage",
+     "value": "aaaa0000-bb11-2222-33cc-444444dddddd"
+   },
+   {
+     "type": "LinkedServiceToConnectionId",
+     "key": "MySQLServer",
+     "value": "bbbb1111-cc22-3333-44dd-555555eeeeee"
+   }
+ ]
+ ```
+
+ Create your **Resolutions.json** file using this structure and save it somewhere on your machine so that PowerShell can access it.
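
The include ends by asking you to save a **Resolutions.json** file that PowerShell can reach. As a convenience, the same two example mappings from the include can also be generated from PowerShell; this is a sketch, and the output path is only an illustration:

```PowerShell
# Sketch: build the two example mappings shown above and write Resolutions.json.
# The output path is illustrative; use any location your PowerShell session can access.
$resolutions = @(
    [ordered]@{ type = 'LinkedServiceToConnectionId'; key = 'MyAzureBlobStorage'; value = 'aaaa0000-bb11-2222-33cc-444444dddddd' }
    [ordered]@{ type = 'LinkedServiceToConnectionId'; key = 'MySQLServer';        value = 'bbbb1111-cc22-3333-44dd-555555eeeeee' }
)
ConvertTo-Json -InputObject $resolutions -Depth 3 | Set-Content -Path "$HOME\Resolutions.json"
```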
Six image files changed (binary content not shown in the diff).
