docs/data-factory/connector-amazon-s3-compatible-overview.md (+7 −12)

@@ -14,19 +14,14 @@ ms.custom:
 This Amazon S3 Compatible connector is supported in Data Factory for [!INCLUDE [product-name](../includes/product-name.md)] with the following capabilities.

-## Support in Dataflow Gen2
+## Supported capabilities

-Data Factory in [!INCLUDE [product-name](../includes/product-name.md)] doesn't currently support Amazon S3 Compatible connectors in Dataflow Gen2.
+The Amazon S3 Compatible connector supports the following capabilities in a pipeline:

-To learn more about the copy activity configuration for Amazon S3 Compatible in data pipelines, go to [Configure in a data pipeline copy activity](connector-amazon-s3-compatible-copy-activity.md).
+To learn more about the copy activity configuration for Amazon S3 Compatible in a pipeline, go to [Configure in a pipeline copy activity](connector-amazon-s3-compatible-copy-activity.md).
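The "S3 Compatible" in the connector's name means any object store that speaks the Amazon S3 API at a custom endpoint (for example, MinIO or Ceph). A minimal sketch of that idea outside Fabric, with made-up endpoint and credential values — the connector performs the equivalent wiring for you:

```python
# Illustration of what "Amazon S3 Compatible" means: the standard S3 client
# pointed at a non-AWS endpoint. The endpoint and credentials are hypothetical.
def s3_compatible_client_kwargs(endpoint_url: str, access_key: str, secret_key: str) -> dict:
    # The only difference from plain Amazon S3 is the explicit endpoint_url.
    return {
        "service_name": "s3",
        "endpoint_url": endpoint_url,
        "aws_access_key_id": access_key,
        "aws_secret_access_key": secret_key,
    }

# With boto3 installed, a client would be created like this:
# import boto3
# client = boto3.client(**s3_compatible_client_kwargs(
#     "https://minio.example.com", "ACCESS_KEY", "SECRET_KEY"))
# client.list_objects_v2(Bucket="my-bucket")
```

The sketch only shows the shape of the connection; in Fabric you supply the service URL and keys in the connector settings instead.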
docs/data-factory/connector-greenplum-for-pipeline-overview.md (+6 −10)

@@ -13,17 +13,13 @@ ms.custom:
 The Greenplum for Pipeline connector is supported in Data Factory for [!INCLUDE [product-name](../includes/product-name.md)] with the following capabilities.

-## Support in Dataflow Gen2
+## Supported capabilities

-Data Factory in Microsoft Fabric doesn't currently support Greenplum for Pipeline in Dataflow Gen2.
+The Greenplum for Pipeline connector supports the following capabilities in a pipeline:

-To learn more about the copy activity configuration for Greenplum for Pipeline in data pipelines, go to [Configure in a Data pipeline copy activity](connector-greenplum-for-pipeline-copy-activity.md).
+To learn more about the copy activity configuration for Greenplum for Pipeline in a pipeline, go to [Configure in a pipeline copy activity](connector-greenplum-for-pipeline-copy-activity.md).
docs/data-factory/connector-salesforce-service-cloud-overview.md (+6 −10)

@@ -14,17 +14,13 @@ ms.custom:
 The Salesforce Service Cloud connector is supported in Data Factory for [!INCLUDE [product-name](../includes/product-name.md)] with the following capabilities.

-## Support in Dataflow Gen2
+## Supported capabilities

-Data Factory in Microsoft Fabric doesn't currently support Salesforce reports in Dataflow Gen2.
+To learn more about the copy activity configuration for Salesforce Service Cloud in pipelines, go to [Configure in a pipeline copy activity](connector-salesforce-service-cloud-copy-activity.md).

 |**Lookup activity**| On-premises (version 3000.238.11 or above) | Basic |

-To learn about the copy activity configuration for Vertica in pipelines, go to [Configure Vertica in a copy activity](connector-vertica-copy-activity.md).
 > To use the Vertica connector in data pipelines, install the [Vertica ODBC driver](https://www.vertica.com/download/vertica/client-drivers/) on the computer running the on-premises data gateway. For detailed steps, go to [Prerequisites](connector-vertica-copy-activity.md#prerequisites).

+## Related content

+To learn about the copy activity configuration for Vertica in data pipelines, go to [Configure Vertica in a copy activity](connector-vertica-copy-activity.md).
docs/data-factory/decision-guide-data-movement.md (+5 −5)

@@ -13,17 +13,17 @@ ai-usage: ai-assisted
 Microsoft Fabric gives you several ways to bring data into Fabric, based on what you need. Today, you can use **Mirroring**, **Copy activities in Pipelines**, or **Copy job**. Each option offers a different level of control and complexity, so you can pick what fits your scenario best.

-Mirroring is designed to be simple and free, but it won't cover every advanced scenario. Copy activities in pipelines give you powerful data ingestion features, but they require you to build and manage pipelines. Copy job fills the gap between these options. It gives you more flexibility and control than Mirroring, plus native support for both batch and incremental copying, without the complexity of building pipelines.
+Mirroring is designed to be a simple and free solution for mirroring a database to Fabric, but it won't cover every advanced scenario. Copy activities in pipelines give you fully customizable data ingestion features, but they require you to build and manage pipelines yourself. Copy job fills the gap between these two options. It gives you more flexibility and control than Mirroring, plus native support for both batch and incremental copying, without the complexity of building pipelines.

 :::image type="content" source="media/decision-guide-data-movement/decision-guide-data-movement.svg" alt-text="Screenshot of a data movement strategy decision tree, comparing mirroring, copy job, and copy activity." lightbox="media/decision-guide-data-movement/decision-guide-data-movement.svg":::

 ## Key concepts

-- **Mirroring** gives you a **simple and free** way to copy operational data into Fabric for analytics. It's optimized for ease of use with minimal setup, and it writes to a single, read-only destination in OneLake.
+- **Mirroring** gives you a **simple and free** way to mirror operational data into Fabric for analytics. It's optimized for ease of use with minimal setup, and it writes to a single, read-only destination in OneLake.

-- **Copy activities in Pipelines** are built for users who need **orchestrated, pipeline-based data ingestion workflows**. You can customize them extensively and add transformation logic, but you need to define and manage pipeline components.
+- **Copy activities in Pipelines** are built for users who need **orchestrated, pipeline-based data ingestion workflows**. You can customize them extensively and add transformation logic, but you need to define and manage pipeline components yourself, including tracking the state of the last run for incremental copy.

-- **Copy Job** gives you a complete data ingestion experience from any source to any destination. It **makes data ingestion easier with native support for both batch and incremental copying, so you don't need to build pipelines**, while still giving you access to many advanced options. It supports many sources and destinations and works well when you want more control than Mirroring but less complexity than managing pipelines with Copy activity.
+- **Copy Job** gives you a complete data ingestion experience from any source to any destination. It **makes data ingestion easier with native support for multiple delivery styles, including bulk copy, incremental copy, and change data capture (CDC) replication, so you don't need to build pipelines**, while still giving you access to many advanced options. It supports many sources and destinations and works well when you want more control than Mirroring but less complexity than managing pipelines with Copy activity.
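The Copy activities bullet notes that pipeline authors must track the state of the last run themselves for incremental copy. A toy watermark sketch (all names hypothetical; a real pipeline would persist the watermark in durable storage such as a control table) shows what that bookkeeping looks like:

```python
from datetime import datetime, timezone

class WatermarkStore:
    """Toy stand-in for wherever a pipeline persists run state (a table, a file, ...)."""
    def __init__(self):
        # Start before any plausible row timestamp so the first run copies everything.
        self.last_run = datetime.min.replace(tzinfo=timezone.utc)

    def get(self):
        return self.last_run

    def set(self, value):
        self.last_run = value

def incremental_copy(rows, store):
    """Copy only rows modified since the previous run, then advance the watermark."""
    watermark = store.get()
    new_rows = [r for r in rows if r["modified"] > watermark]
    if new_rows:
        store.set(max(r["modified"] for r in new_rows))
    return new_rows  # in a real pipeline, these rows would be written to the sink
```

Copy job performs this state tracking natively; with a plain copy activity, the equivalent of `WatermarkStore` is something you design, deploy, and maintain yourself.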
 ## Data movement decision guide

@@ -86,4 +86,4 @@ Now that you have an idea of which data movement strategy to use, you can get st
 - [Get started with Mirroring](/fabric/mirroring/overview)
 - [Create a Copy Job](/fabric/data-factory/create-copy-job)
-- [Create a Copy Activity](/fabric/data-factory/copy-data-activity)
+- [Create a Copy Activity](/fabric/data-factory/copy-data-activity)
docs/data-science/ai-functions/overview.md (+10 −6)

@@ -33,12 +33,16 @@ You can incorporate these functions as part of data-science and data-engineering
 - To use AI functions with the built-in AI endpoint in Fabric, your administrator needs to enable [the tenant switch for Copilot and other features that are powered by Azure OpenAI](../../admin/service-admin-portal-copilot.md).
 - Depending on your location, you might need to enable a tenant setting for cross-geo processing. Learn more about [available regions for Azure OpenAI Service](../../get-started/copilot-fabric-overview.md#available-regions-for-azure-openai-service).
-- You also need an F2 or later edition or a P edition. If you use a trial edition, you can bring your own Azure Open AI resource.
+- You need a paid Fabric capacity (F2 or higher, or any P edition). Bring-your-own Azure OpenAI resources aren't supported on the Fabric trial edition.
+
+> [!IMPORTANT]
+>
+> The Fabric trial edition doesn't support bring-your-own Azure OpenAI resources for AI functions. To connect a custom Azure OpenAI endpoint, upgrade to an F2 (or higher) or P capacity.

 > [!NOTE]
 >
 > - AI functions are supported in [Fabric Runtime 1.3](../../data-engineering/runtime-1-3.md) and later.
-> - AI functions use the *gpt-4o-mini (2024-07-18)* model by default. Learn more about [billing and consumption rates](../ai-services/ai-services-overview.md).
+> - Unless you configure a different model, AI functions default to *gpt-4o-mini (2024-07-18)*. Learn more about [billing and consumption rates](../ai-services/ai-services-overview.md).
 > - Most of the AI functions are optimized for use on English-language texts.

 ## Getting started with AI functions

@@ -109,7 +113,7 @@ Each of the following functions allows you to invoke the built-in AI endpoint in
 ### Calculate similarity with ai.similarity

-The `ai.similarity` function invokes AI to compare input text values with a single common text value, or with pairwise text values in another column. The output similarity score values are relative, and they can range from `-1` (opposites) to `1` (identical). A score of `0` indicates that the values are unrelated in meaning. Get [detailed instructions](./similarity.md) about the use of `ai.similarity`.
+The `ai.similarity` function compares each input text value either to one common reference text or to the corresponding value in another column (pairwise mode). The output similarity score values are relative, and they can range from `-1` (opposites) to `1` (identical). A score of `0` indicates that the values are unrelated in meaning. Get [detailed instructions](./similarity.md) about the use of `ai.similarity`.

 #### Sample usage
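The −1 to 1 range described above is the standard cosine-similarity scale. As a plain-Python illustration only — `ai.similarity` scores come from the underlying model, not from this formula or these toy vectors:

```python
import math

def cosine_similarity(a, b):
    # Cosine of the angle between two vectors:
    # 1 = same direction (identical), -1 = opposite, 0 = orthogonal (unrelated).
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

print(cosine_similarity([1.0, 0.0], [-1.0, 0.0]))  # opposite vectors score -1.0
```

The endpoints behave the same way in `ai.similarity`: near 1 for texts with matching meaning, near −1 for opposed meaning, and near 0 for unrelated texts.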
@@ -242,7 +246,7 @@ The `ai.extract` function invokes AI to scan input text and extract specific typ