diff --git a/Documentation/4.1/Raven.Documentation.Pages/server/ongoing-tasks/.docs.json b/Documentation/4.1/Raven.Documentation.Pages/server/ongoing-tasks/.docs.json
index f541e53106..5b5b95fae1 100644
--- a/Documentation/4.1/Raven.Documentation.Pages/server/ongoing-tasks/.docs.json
+++ b/Documentation/4.1/Raven.Documentation.Pages/server/ongoing-tasks/.docs.json
@@ -1,19 +1,25 @@
[
- {
- "Path": "general-info.markdown",
- "Name": "General Info",
- "DiscussionId": "d24b385e-5eae-4171-b35d-277157b8e933",
- "Mappings": []
- },
- {
- "Path": "/etl",
- "Name": "ETL",
- "Mappings": []
- },
- {
- "Path": "backup-overview.markdown",
- "Name": "Backup Overview",
- "DiscussionId": "2b3b34b3-04f0-4c1a-bfcf-40a9ed314f8a",
- "Mappings": []
- }
-]
\ No newline at end of file
+ {
+ "Path": "general-info.markdown",
+ "Name": "General Info",
+ "LastSupportedVersion": "4.1",
+ "DiscussionId": "d24b385e-5eae-4171-b35d-277157b8e933",
+ "Mappings": [
+ {
+ "Version": 4.2,
+ "Key": "studio/database/tasks/ongoing-tasks/general-info"
+ }
+ ]
+ },
+ {
+ "Path": "/etl",
+ "Name": "ETL",
+ "Mappings": []
+ },
+ {
+ "Path": "backup-overview.markdown",
+ "Name": "Backup Overview",
+ "DiscussionId": "2b3b34b3-04f0-4c1a-bfcf-40a9ed314f8a",
+ "Mappings": []
+ }
+]
diff --git a/Documentation/5.2/Raven.Documentation.Pages/client-api/operations/what-are-operations.dotnet.markdown b/Documentation/5.2/Raven.Documentation.Pages/client-api/operations/what-are-operations.dotnet.markdown
index 537fd6d313..0a288c7515 100644
--- a/Documentation/5.2/Raven.Documentation.Pages/client-api/operations/what-are-operations.dotnet.markdown
+++ b/Documentation/5.2/Raven.Documentation.Pages/client-api/operations/what-are-operations.dotnet.markdown
@@ -9,8 +9,8 @@
and the **[Session](../../client-api/session/what-is-a-session-and-how-does-it-work)**.
They in turn are built on top of the lower-level __Operations__ and __RavenCommands__ API.
-* __RavenDB provides access to this lower-level API__, so that instead of using the higher session API,
- you can generate requests directly to the server by executing operations on the DocumentStore.
+* **RavenDB provides direct access to this lower-level API**, allowing you to send requests
+ directly to the server via DocumentStore Operations instead of using the higher-level Session API.
* In this page:
* [Why use operations](../../client-api/operations/what-are-operations#why-use-operations)
diff --git a/Documentation/5.2/Raven.Documentation.Pages/client-api/operations/what-are-operations.js.markdown b/Documentation/5.2/Raven.Documentation.Pages/client-api/operations/what-are-operations.js.markdown
index e1b51c3310..a64faacb47 100644
--- a/Documentation/5.2/Raven.Documentation.Pages/client-api/operations/what-are-operations.js.markdown
+++ b/Documentation/5.2/Raven.Documentation.Pages/client-api/operations/what-are-operations.js.markdown
@@ -9,8 +9,8 @@
and the **[Session](../../client-api/session/what-is-a-session-and-how-does-it-work)**.
They in turn are built on top of the lower-level __Operations__ and __RavenCommands__ API.
-* __RavenDB provides access to this lower-level API__, so that instead of using the higher session API,
- you can generate requests directly to the server by executing operations on the DocumentStore.
+* **RavenDB provides direct access to this lower-level API**, allowing you to send requests
+ directly to the server via DocumentStore Operations instead of using the higher-level Session API.
* In this page:
* [Why use operations](../../client-api/operations/what-are-operations#why-use-operations)
diff --git a/Documentation/5.2/Raven.Documentation.Pages/server/ongoing-tasks/etl/olap.markdown b/Documentation/5.2/Raven.Documentation.Pages/server/ongoing-tasks/etl/olap.markdown
index 663f430a4c..d95b5287b6 100644
--- a/Documentation/5.2/Raven.Documentation.Pages/server/ongoing-tasks/etl/olap.markdown
+++ b/Documentation/5.2/Raven.Documentation.Pages/server/ongoing-tasks/etl/olap.markdown
@@ -59,8 +59,10 @@ The OLAP connection string contains the configurations for each destination of t
| `GoogleCloudSettings` | Settings for Google Cloud Platform. |
| `FTPSettings` | Settings for File Transfer Protocol. |
-{NOTE: ETL Destination Settings}
-
+{NOTE: }
+
+#### ETL destination settings
+
This is the list of different settings objects that the `OlapConnectionString` object can contain.
#### `LocalSettings`
diff --git a/Documentation/5.2/Raven.Documentation.Pages/server/security/authorization/security-clearance-and-permissions.markdown b/Documentation/5.2/Raven.Documentation.Pages/server/security/authorization/security-clearance-and-permissions.markdown
index 55fc1115b6..9d3f9acd22 100644
--- a/Documentation/5.2/Raven.Documentation.Pages/server/security/authorization/security-clearance-and-permissions.markdown
+++ b/Documentation/5.2/Raven.Documentation.Pages/server/security/authorization/security-clearance-and-permissions.markdown
@@ -125,7 +125,7 @@ The following operations are **forbidden**:
- Creating documents or modifying existing documents
- Changing any configurations or settings
-- Creating or modifying [ongoing tasks](../../../server/ongoing-tasks/general-info)
+- Creating or modifying [ongoing tasks](../../../studio/database/tasks/ongoing-tasks/general-info)
- Defining [static indexes](../../../indexes/creating-and-deploying#static-indexes) (the database will create
[auto-indexes](../../../indexes/creating-and-deploying#auto-indexes) if there is no existing index that satisfies a query.)
diff --git a/Documentation/5.3/Raven.Documentation.Pages/client-api/operations/maintenance/etl/add-etl.markdown b/Documentation/5.3/Raven.Documentation.Pages/client-api/operations/maintenance/etl/add-etl.dotnet.markdown
similarity index 100%
rename from Documentation/5.3/Raven.Documentation.Pages/client-api/operations/maintenance/etl/add-etl.markdown
rename to Documentation/5.3/Raven.Documentation.Pages/client-api/operations/maintenance/etl/add-etl.dotnet.markdown
diff --git a/Documentation/5.3/Raven.Documentation.Pages/client-api/operations/maintenance/etl/add-etl.java.markdown b/Documentation/5.3/Raven.Documentation.Pages/client-api/operations/maintenance/etl/add-etl.java.markdown
new file mode 100644
index 0000000000..76ca2f6077
--- /dev/null
+++ b/Documentation/5.3/Raven.Documentation.Pages/client-api/operations/maintenance/etl/add-etl.java.markdown
@@ -0,0 +1,52 @@
+# Operations: How to Add ETL
+
+You can add an ETL task by using **AddEtlOperation**.
+
+## Syntax
+
+{CODE:java add_etl_operation@ClientApi\Operations\AddEtl.java /}
+
+| Parameter | Type | Description |
+| ------------- | ----- | ---- |
+| **configuration** | `EtlConfiguration<T>` | The ETL configuration, where `T` is the connection string type |
+
+## Example - Add Raven ETL
+
+{NOTE: Secure servers}
+ To [connect to secure RavenDB servers](../../../../server/security/authentication/certificate-management#enabling-communication-between-servers:-importing-and-exporting-certificates)
+ you need to:
+
+ 1. Export the server certificate from the source server.
+ 2. Install it as a client certificate on the destination server.
+
+ This can be done in the RavenDB Studio -> Server Management -> [Certificates view](../../../../server/security/authentication/certificate-management#studio-certificates-management-view).
+{NOTE/}
+
+
+{CODE:java add_raven_etl@ClientApi\Operations\AddEtl.java /}
+
+## Example - Add SQL ETL
+
+{CODE:java add_sql_etl@ClientApi\Operations\AddEtl.java /}
+
+## Example - Add OLAP ETL
+
+{CODE:java add_olap_etl@ClientApi\Operations\AddEtl.java /}
+
+## Related Articles
+
+### ETL
+
+- [Basics](../../../../server/ongoing-tasks/etl/basics)
+- [RavenDB ETL](../../../../server/ongoing-tasks/etl/raven)
+- [SQL ETL](../../../../server/ongoing-tasks/etl/sql)
+
+### Studio
+
+- [RavenDB ETL Task](../../../../studio/database/tasks/ongoing-tasks/ravendb-etl-task)
+
+### Connection Strings
+
+- [Add](../../../../client-api/operations/maintenance/connection-strings/add-connection-string)
+- [Get](../../../../client-api/operations/maintenance/connection-strings/get-connection-string)
+- [Remove](../../../../client-api/operations/maintenance/connection-strings/remove-connection-string)
diff --git a/Documentation/5.3/Raven.Documentation.Pages/client-api/operations/maintenance/etl/add-etl.js.markdown b/Documentation/5.3/Raven.Documentation.Pages/client-api/operations/maintenance/etl/add-etl.js.markdown
new file mode 100644
index 0000000000..d14085530c
--- /dev/null
+++ b/Documentation/5.3/Raven.Documentation.Pages/client-api/operations/maintenance/etl/add-etl.js.markdown
@@ -0,0 +1,41 @@
+# Operations: How to Add ETL
+
+You can add an ETL task by using **AddEtlOperation**.
+
+## Usage
+
+{CODE:nodejs add_etl_operation@client-api\Operations\AddEtl.js /}
+
+| Parameter | Type | Description |
+| ------------- | ----- | ---- |
+| **configuration** | `EtlConfiguration<T>` | The ETL configuration, where `T` is the connection string type |
+
+## Example - Add Raven ETL
+
+{CODE:nodejs add_raven_etl@client-api\Operations\AddEtl.js /}
+
+## Example - Add SQL ETL
+
+{CODE:nodejs add_sql_etl@client-api\Operations\AddEtl.js /}
+
+## Example - Add OLAP ETL
+
+{CODE:nodejs add_olap_etl@client-api\Operations\AddEtl.js /}
+
+## Related Articles
+
+### ETL
+
+- [Basics](../../../../server/ongoing-tasks/etl/basics)
+- [RavenDB ETL](../../../../server/ongoing-tasks/etl/raven)
+- [SQL ETL](../../../../server/ongoing-tasks/etl/sql)
+
+### Studio
+
+- [RavenDB ETL Task](../../../../studio/database/tasks/ongoing-tasks/ravendb-etl-task)
+
+### Connection Strings
+
+- [Add](../../../../client-api/operations/maintenance/connection-strings/add-connection-string)
+- [Get](../../../../client-api/operations/maintenance/connection-strings/get-connection-string)
+- [Remove](../../../../client-api/operations/maintenance/connection-strings/remove-connection-string)
diff --git a/Documentation/5.3/Raven.Documentation.Pages/client-api/operations/what-are-operations.dotnet.markdown b/Documentation/5.3/Raven.Documentation.Pages/client-api/operations/what-are-operations.dotnet.markdown
index 780e16ad5e..1507d524d3 100644
--- a/Documentation/5.3/Raven.Documentation.Pages/client-api/operations/what-are-operations.dotnet.markdown
+++ b/Documentation/5.3/Raven.Documentation.Pages/client-api/operations/what-are-operations.dotnet.markdown
@@ -9,8 +9,8 @@
and the **[Session](../../client-api/session/what-is-a-session-and-how-does-it-work)**.
They in turn are built on top of the lower-level __Operations__ and __RavenCommands__ API.
-* __RavenDB provides access to this lower-level API__, so that instead of using the higher session API,
- you can generate requests directly to the server by executing operations on the DocumentStore.
+* **RavenDB provides direct access to this lower-level API**, allowing you to send requests
+ directly to the server via DocumentStore Operations instead of using the higher-level Session API.
* In this page:
* [Why use operations](../../client-api/operations/what-are-operations#why-use-operations)
diff --git a/Documentation/5.3/Raven.Documentation.Pages/client-api/operations/what-are-operations.js.markdown b/Documentation/5.3/Raven.Documentation.Pages/client-api/operations/what-are-operations.js.markdown
index 831b17b41a..9e7318b321 100644
--- a/Documentation/5.3/Raven.Documentation.Pages/client-api/operations/what-are-operations.js.markdown
+++ b/Documentation/5.3/Raven.Documentation.Pages/client-api/operations/what-are-operations.js.markdown
@@ -9,8 +9,8 @@
and the **[Session](../../client-api/session/what-is-a-session-and-how-does-it-work)**.
They in turn are built on top of the lower-level __Operations__ and __RavenCommands__ API.
-* __RavenDB provides access to this lower-level API__, so that instead of using the higher session API,
- you can generate requests directly to the server by executing operations on the DocumentStore.
+* **RavenDB provides direct access to this lower-level API**, allowing you to send requests
+ directly to the server via DocumentStore Operations instead of using the higher-level Session API.
* In this page:
* [Why use operations](../../client-api/operations/what-are-operations#why-use-operations)
diff --git a/Documentation/5.3/Raven.Documentation.Pages/studio/database/tasks/ongoing-tasks/olap-etl-task.markdown b/Documentation/5.3/Raven.Documentation.Pages/studio/database/tasks/ongoing-tasks/olap-etl-task.markdown
index 6317f54d9c..7eaabfca0b 100644
--- a/Documentation/5.3/Raven.Documentation.Pages/studio/database/tasks/ongoing-tasks/olap-etl-task.markdown
+++ b/Documentation/5.3/Raven.Documentation.Pages/studio/database/tasks/ongoing-tasks/olap-etl-task.markdown
@@ -1,6 +1,5 @@
# OLAP ETL Task
-
----
+---
{NOTE: }
diff --git a/Documentation/5.3/Samples/java/src/test/java/net/ravendb/ClientApi/Operations/AddEtl.java b/Documentation/5.3/Samples/java/src/test/java/net/ravendb/ClientApi/Operations/AddEtl.java
new file mode 100644
index 0000000000..4c739e9ecb
--- /dev/null
+++ b/Documentation/5.3/Samples/java/src/test/java/net/ravendb/ClientApi/Operations/AddEtl.java
@@ -0,0 +1,122 @@
+package net.ravendb.ClientApi.Operations;
+
+import net.ravendb.client.documents.DocumentStore;
+import net.ravendb.client.documents.IDocumentStore;
+import net.ravendb.client.documents.operations.etl.*;
+import net.ravendb.client.documents.operations.etl.olap.OlapConnectionString;
+import net.ravendb.client.documents.operations.etl.olap.OlapEtlConfiguration;
+import net.ravendb.client.documents.operations.etl.sql.SqlConnectionString;
+import net.ravendb.client.documents.operations.etl.sql.SqlEtlConfiguration;
+import net.ravendb.client.documents.operations.etl.sql.SqlEtlTable;
+
+import java.util.Arrays;
+
+public class AddEtl {
+
+ private interface IFoo {
+ /*
+ //region add_etl_operation
+ public AddEtlOperation(EtlConfiguration configuration);
+ //endregion
+ */
+ }
+
+ public AddEtl() {
+ try (IDocumentStore store = new DocumentStore()) {
+ //region add_raven_etl
+ RavenEtlConfiguration configuration = new RavenEtlConfiguration();
+ configuration.setName("Employees ETL");
+ Transformation transformation = new Transformation();
+ transformation.setName("Script #1");
+ transformation.setScript("loadToEmployees ({\n" +
+ " Name: this.FirstName + ' ' + this.LastName,\n" +
+ " Title: this.Title\n" +
+ "});");
+
+ configuration.setTransforms(Arrays.asList(transformation));
+ AddEtlOperation operation = new AddEtlOperation<>(configuration);
+ AddEtlOperationResult result = store.maintenance().send(operation);
+ //endregion
+ }
+
+ try (IDocumentStore store = new DocumentStore()) {
+ //region add_sql_etl
+ SqlEtlConfiguration configuration = new SqlEtlConfiguration();
+ SqlEtlTable table1 = new SqlEtlTable();
+ table1.setTableName("Orders");
+ table1.setDocumentIdColumn("Id");
+ table1.setInsertOnlyMode(false);
+
+ SqlEtlTable table2 = new SqlEtlTable();
+ table2.setTableName("OrderLines");
+ table2.setDocumentIdColumn("OrderId");
+ table2.setInsertOnlyMode(false);
+
+ configuration.setSqlTables(Arrays.asList(table1, table2));
+ configuration.setName("Order to SQL");
+ configuration.setConnectionStringName("sql-connection-string-name");
+
+ Transformation transformation = new Transformation();
+ transformation.setName("Script #1");
+ transformation.setCollections(Arrays.asList("Orders"));
+ transformation.setScript("var orderData = {\n" +
+ " Id: id(this),\n" +
+ " OrderLinesCount: this.Lines.length,\n" +
+ " TotalCost: 0\n" +
+ "};\n" +
+ "\n" +
+ " for (var i = 0; i < this.Lines.length; i++) {\n" +
+ " var line = this.Lines[i];\n" +
+ " orderData.TotalCost += line.PricePerUnit;\n" +
+ "\n" +
+ " // Load to SQL table 'OrderLines'\n" +
+ " loadToOrderLines({\n" +
+ " OrderId: id(this),\n" +
+ " Qty: line.Quantity,\n" +
+ " Product: line.Product,\n" +
+ " Cost: line.PricePerUnit\n" +
+ " });\n" +
+ " }\n" +
+ " orderData.TotalCost = Math.round(orderData.TotalCost * 100) / 100;\n" +
+ "\n" +
+ " // Load to SQL table 'Orders'\n" +
+ " loadToOrders(orderData)");
+
+ configuration.setTransforms(Arrays.asList(transformation));
+
+ AddEtlOperation operation = new AddEtlOperation<>(configuration);
+
+ AddEtlOperationResult result = store.maintenance().send(operation);
+ //endregion
+ }
+
+ try (IDocumentStore store = new DocumentStore()) {
+ //region add_olap_etl
+ OlapEtlConfiguration configuration = new OlapEtlConfiguration();
+
+ configuration.setName("Orders ETL");
+ configuration.setConnectionStringName("olap-connection-string-name");
+
+ Transformation transformation = new Transformation();
+ transformation.setName("Script #1");
+ transformation.setCollections(Arrays.asList("Orders"));
+ transformation.setScript("var orderDate = new Date(this.OrderedAt);\n"+
+ "var year = orderDate.getFullYear();\n"+
+ "var month = orderDate.getMonth();\n"+
+ "var key = new Date(year, month);\n"+
+ "loadToOrders(key, {\n"+
+ " Company : this.Company,\n"+
+ " ShipVia : this.ShipVia\n"+
+ "})"
+ );
+
+ configuration.setTransforms(Arrays.asList(transformation));
+
+ AddEtlOperation operation = new AddEtlOperation(configuration);
+
+ AddEtlOperationResult result = store.maintenance().send(operation);
+ //endregion
+ }
+
+ }
+}
diff --git a/Documentation/5.3/Samples/nodejs/client-api/Operations/AddEtl.js b/Documentation/5.3/Samples/nodejs/client-api/Operations/AddEtl.js
new file mode 100644
index 0000000000..dc880c095b
--- /dev/null
+++ b/Documentation/5.3/Samples/nodejs/client-api/Operations/AddEtl.js
@@ -0,0 +1,90 @@
+
+import {
+ AddEtlOperation,
+ DocumentStore, OlapEtlConfiguration,
+ RavenEtlConfiguration,
+ SqlEtlConfiguration,
+ SqlEtlTable,
+ Transformation
+} from 'ravendb';
+import { EtlConfiguration } from 'ravendb';
+
+ let urls, database, authOptions;
+
+ class T {
+ }
+
+{
+ //document_store_creation
+ const store = new DocumentStore(["http://localhost:8080"], "Northwind2");
+ store.initialize();
+ const session = store.openSession();
+ let configuration;
+ let etlConfiguration;
+
+
+ //region add_etl_operation
+ const operation = new AddEtlOperation(etlConfiguration);
+ //endregion
+
+
+ //region add_raven_etl
+ const etlConfigurationRvn = Object.assign(new RavenEtlConfiguration(), {
+ connectionStringName: "raven-connection-string-name",
+ disabled: false,
+ name: "etlRvn"
+ });
+
+ const transformationRvn = {
+ applyToAllDocuments: true,
+ name: "Script #1"
+ };
+
+ etlConfigurationRvn.transforms = [transformationRvn];
+
+ const operationRvn = new AddEtlOperation(etlConfigurationRvn);
+ const etlResultRvn = await store.maintenance.send(operationRvn);
+ //endregion
+
+
+ //region add_sql_etl
+ const transformation = {
+ applyToAllDocuments: true,
+ name: "Script #1"
+ };
+
+ const table1 = {
+ documentIdColumn: "Id",
+ insertOnlyMode: false,
+ tableName: "Users"
+ };
+
+ const etlConfigurationSql = Object.assign(new SqlEtlConfiguration(), {
+ connectionStringName: "sql-connection-string-name",
+ disabled: false,
+ name: "etlSql",
+ transforms: [transformation],
+ sqlTables: [table1]
+ });
+
+ const operationSql = new AddEtlOperation(etlConfigurationSql);
+ const etlResult = await store.maintenance.send(operationSql);
+ //endregion
+
+ //region add_olap_etl
+ const transformationOlap = {
+ applyToAllDocuments: true,
+ name: "Script #1"
+ };
+
+ const etlConfigurationOlap = Object.assign(new OlapEtlConfiguration(), {
+ connectionStringName: "olap-connection-string-name",
+ disabled: false,
+ name: "etlOlap",
+ transforms: [transformationOlap],
+ });
+
+ const operationOlap = new AddEtlOperation(etlConfigurationOlap);
+ const etlResultOlap = await store.maintenance.send(operationOlap);
+ //endregion
+}
\ No newline at end of file
diff --git a/Documentation/5.4/Raven.Documentation.Pages/client-api/operations/maintenance/indexes/delete-index.dotnet.markdown b/Documentation/5.4/Raven.Documentation.Pages/client-api/operations/maintenance/indexes/delete-index.dotnet.markdown
index ab59125d7a..a0d816769d 100644
--- a/Documentation/5.4/Raven.Documentation.Pages/client-api/operations/maintenance/indexes/delete-index.dotnet.markdown
+++ b/Documentation/5.4/Raven.Documentation.Pages/client-api/operations/maintenance/indexes/delete-index.dotnet.markdown
@@ -29,8 +29,8 @@
{CODE syntax@ClientApi\Operations\Maintenance\Indexes\DeleteIndex.cs /}
-| Parameters | Type | Description |
-|- | - | - |
+| Parameter | Type | Description |
+|---------------|----------|-------------------------|
| **indexName** | `string` | Name of index to delete |
{PANEL/}
diff --git a/Documentation/5.4/Raven.Documentation.Pages/client-api/operations/what-are-operations.dotnet.markdown b/Documentation/5.4/Raven.Documentation.Pages/client-api/operations/what-are-operations.dotnet.markdown
index 15aee886c6..670483064a 100644
--- a/Documentation/5.4/Raven.Documentation.Pages/client-api/operations/what-are-operations.dotnet.markdown
+++ b/Documentation/5.4/Raven.Documentation.Pages/client-api/operations/what-are-operations.dotnet.markdown
@@ -9,8 +9,8 @@
and the **[Session](../../client-api/session/what-is-a-session-and-how-does-it-work)**.
They in turn are built on top of the lower-level **Operations** and **Commands** API.
-* **RavenDB provides direct access to this lower-level API**, allowing direct delivery of requests
- to the server via document store operations instead of using the higher-level Session API.
+* **RavenDB provides direct access to this lower-level API**, allowing you to send requests
+ directly to the server via DocumentStore Operations instead of using the higher-level Session API.
* In this page:
* [Why use operations](../../client-api/operations/what-are-operations#why-use-operations)
diff --git a/Documentation/5.4/Raven.Documentation.Pages/client-api/operations/what-are-operations.js.markdown b/Documentation/5.4/Raven.Documentation.Pages/client-api/operations/what-are-operations.js.markdown
index c93b195e4e..e35d90c965 100644
--- a/Documentation/5.4/Raven.Documentation.Pages/client-api/operations/what-are-operations.js.markdown
+++ b/Documentation/5.4/Raven.Documentation.Pages/client-api/operations/what-are-operations.js.markdown
@@ -9,8 +9,8 @@
and the **[Session](../../client-api/session/what-is-a-session-and-how-does-it-work)**.
They in turn are built on top of the lower-level **Operations** and **Commands** API.
-* **RavenDB provides direct access to this lower-level API**, allowing direct delivery of requests
- to the server via document store operations instead of using the higher-level session API.
+* **RavenDB provides direct access to this lower-level API**, allowing you to send requests
+ directly to the server via DocumentStore Operations instead of using the higher-level Session API.
* In this page:
* [Why use operations](../../client-api/operations/what-are-operations#why-use-operations)
diff --git a/Documentation/5.4/Raven.Documentation.Pages/client-api/operations/what-are-operations.python.markdown b/Documentation/5.4/Raven.Documentation.Pages/client-api/operations/what-are-operations.python.markdown
index 018b7cc698..ccfa934148 100644
--- a/Documentation/5.4/Raven.Documentation.Pages/client-api/operations/what-are-operations.python.markdown
+++ b/Documentation/5.4/Raven.Documentation.Pages/client-api/operations/what-are-operations.python.markdown
@@ -9,8 +9,8 @@
and the **[session](../../client-api/session/what-is-a-session-and-how-does-it-work)**.
They in turn are built on top of the lower-level **Operations** and **Commands** API.
-* **RavenDB provides direct access to this lower-level API**, allowing direct delivery of requests
- to the server via document store operations instead of using the higher-level session API.
+* **RavenDB provides direct access to this lower-level API**, allowing you to send requests
+ directly to the server via DocumentStore Operations instead of using the higher-level Session API.
* In this page:
* [Why use operations](../../client-api/operations/what-are-operations#why-use-operations)
diff --git a/Documentation/5.4/Raven.Documentation.Pages/server/extensions/refresh.dotnet.markdown b/Documentation/5.4/Raven.Documentation.Pages/server/extensions/refresh.dotnet.markdown
index 2ca8e2f40e..05eadb4a23 100644
--- a/Documentation/5.4/Raven.Documentation.Pages/server/extensions/refresh.dotnet.markdown
+++ b/Documentation/5.4/Raven.Documentation.Pages/server/extensions/refresh.dotnet.markdown
@@ -92,4 +92,3 @@ How to set a document to refresh 1 hour from now:
### Server
- [What is a Change Vector](../../server/clustering/replication/change-vector)
-- [Ongoing Tasks: General Info](../../server/ongoing-tasks/general-info)
diff --git a/Documentation/5.4/Raven.Documentation.Pages/server/extensions/refresh.js.markdown b/Documentation/5.4/Raven.Documentation.Pages/server/extensions/refresh.js.markdown
index 4291fe7227..2cf5cf3bd8 100644
--- a/Documentation/5.4/Raven.Documentation.Pages/server/extensions/refresh.js.markdown
+++ b/Documentation/5.4/Raven.Documentation.Pages/server/extensions/refresh.js.markdown
@@ -95,4 +95,3 @@ Alternatively, document refreshing can also be configured in the studio, under _
### Server
- [What is a Change Vector](../../server/clustering/replication/change-vector)
-- [Ongoing Tasks: General Info](../../server/ongoing-tasks/general-info)
diff --git a/Documentation/5.4/Raven.Documentation.Pages/server/ongoing-tasks/etl/queue-etl/rabbit-mq.markdown b/Documentation/5.4/Raven.Documentation.Pages/server/ongoing-tasks/etl/queue-etl/rabbit-mq.markdown
index a9e947a7eb..ced7c90ac2 100644
--- a/Documentation/5.4/Raven.Documentation.Pages/server/ongoing-tasks/etl/queue-etl/rabbit-mq.markdown
+++ b/Documentation/5.4/Raven.Documentation.Pages/server/ongoing-tasks/etl/queue-etl/rabbit-mq.markdown
@@ -73,7 +73,6 @@ for (var i = 0; i < this.Lines.length; i++) {
// Attributes: Id, PartitionKey, Type, Source
loadToOrders(orderData, `users`, {
Id: id(this),
- PartitionKey: id(this),
Type: 'special-promotion',
Source: '/promotion-campaigns/summer-sale'
});
diff --git a/Documentation/5.4/Samples/csharp/Raven.Documentation.Samples/Server/OngoingTasks/ETL/Queue/Queue.cs b/Documentation/5.4/Samples/csharp/Raven.Documentation.Samples/Server/OngoingTasks/ETL/Queue/Queue.cs
index 84328d688b..3c3cdce609 100644
--- a/Documentation/5.4/Samples/csharp/Raven.Documentation.Samples/Server/OngoingTasks/ETL/Queue/Queue.cs
+++ b/Documentation/5.4/Samples/csharp/Raven.Documentation.Samples/Server/OngoingTasks/ETL/Queue/Queue.cs
@@ -1,18 +1,18 @@
using System;
-using System.Collections.Generic;
-using Raven.Client.Documents;
+using System.Collections.Generic;
+using Raven.Client.Documents;
using Raven.Client.Documents.Operations.Backups;
-using Raven.Client.Documents.Operations.ConnectionStrings;
-using Raven.Client.Documents.Operations.ETL;
+using Raven.Client.Documents.Operations.ConnectionStrings;
+using Raven.Client.Documents.Operations.ETL;
using Raven.Client.Documents.Operations.ETL.ElasticSearch;
using Raven.Client.Documents.Operations.ETL.OLAP;
using Raven.Client.Documents.Operations.ETL.Queue;
using Raven.Client.Documents.Operations.ETL.SQL;
-namespace Raven.Documentation.Samples.Server.OngoingTasks.ETL.Queue
-{
- public class ConnectionStrings
+namespace Raven.Documentation.Samples.Server.OngoingTasks.ETL.Queue
+{
+ public class ConnectionStrings
{
- private interface IFoo
+ private interface IFoo
{
#region QueueBrokerType
public enum QueueBrokerType
@@ -23,13 +23,13 @@ public enum QueueBrokerType
}
#endregion
- }
-
- public ConnectionStrings()
- {
- using (var store = new DocumentStore())
- {
- #region add_rabbitMQ_connection-string
+ }
+
+ public ConnectionStrings()
+ {
+ using (var store = new DocumentStore())
+ {
+ #region add_rabbitMQ_connection-string
var res = store.Maintenance.Send(
new PutConnectionStringOperation(
new QueueConnectionString
@@ -42,9 +42,9 @@ public ConnectionStrings()
#endregion
}
- using (var store = new DocumentStore())
+ using (var store = new DocumentStore())
{
- #region add_kafka_connection-string
+ #region add_kafka_connection-string
var res = store.Maintenance.Send(
new PutConnectionStringOperation(
new QueueConnectionString
@@ -241,7 +241,6 @@ public void AddRabbitmqEtlTask()
loadToOrders(orderData, `routingKey`, {
Id: id(this),
- PartitionKey: id(this),
Type: 'special-promotion',
Source: '/promotion-campaigns/summer-sale'
});",
@@ -310,7 +309,6 @@ public void AddRabbitmqEtlTaskDeleteProcessedDocuments()
loadToOrders(orderData, `routingKey`, {
Id: id(this),
- PartitionKey: id(this),
Type: 'special-promotion',
Source: '/promotion-campaigns/summer-sale'
});",
@@ -387,4 +385,4 @@ public class CloudEventAttributes
#endregion
}
}
-}
+}
diff --git a/Documentation/6.0/Raven.Documentation.Pages/server/ongoing-tasks/backup-overview.markdown b/Documentation/6.0/Raven.Documentation.Pages/server/ongoing-tasks/backup-overview.markdown
index 415c68bb84..781e6df1e9 100644
--- a/Documentation/6.0/Raven.Documentation.Pages/server/ongoing-tasks/backup-overview.markdown
+++ b/Documentation/6.0/Raven.Documentation.Pages/server/ongoing-tasks/backup-overview.markdown
@@ -81,7 +81,7 @@ Backed-up data includes both database-level and cluster-level contents, as detai
| [Compare-exchange values](../../client-api/operations/compare-exchange/overview) | ✔ | ✔ |
| [Identities](../../client-api/document-identifiers/working-with-document-identifiers#identities) | ✔ | ✔ |
| [Indexes](../../indexes/creating-and-deploying) | Index definitions are saved and used to rebuild indexes during database restoration | ✔ |
-| [Ongoing Tasks configuration](../../server/ongoing-tasks/general-info) | ✔ | ✔ |
+| [Ongoing Tasks configuration](../../studio/database/tasks/ongoing-tasks/general-info) | ✔ | ✔ |
| [Subscriptions](../../client-api/data-subscriptions/what-are-data-subscriptions) | ✔ | ✔ |
{PANEL/}
diff --git a/Documentation/6.0/Raven.Documentation.Pages/server/ongoing-tasks/etl/queue-etl/rabbit-mq.markdown b/Documentation/6.0/Raven.Documentation.Pages/server/ongoing-tasks/etl/queue-etl/rabbit-mq.markdown
index b2a55710aa..9921020b25 100644
--- a/Documentation/6.0/Raven.Documentation.Pages/server/ongoing-tasks/etl/queue-etl/rabbit-mq.markdown
+++ b/Documentation/6.0/Raven.Documentation.Pages/server/ongoing-tasks/etl/queue-etl/rabbit-mq.markdown
@@ -73,7 +73,6 @@ for (var i = 0; i < this.Lines.length; i++) {
// Attributes: Id, PartitionKey, Type, Source
loadToOrders(orderData, `users`, {
Id: id(this),
- PartitionKey: id(this),
Type: 'special-promotion',
Source: '/promotion-campaigns/summer-sale'
});
diff --git a/Documentation/6.0/Raven.Documentation.Pages/sharding/etl.markdown b/Documentation/6.0/Raven.Documentation.Pages/sharding/etl.markdown
index 88919edf57..0b47882964 100644
--- a/Documentation/6.0/Raven.Documentation.Pages/sharding/etl.markdown
+++ b/Documentation/6.0/Raven.Documentation.Pages/sharding/etl.markdown
@@ -116,7 +116,7 @@ and relevant code samples [here](../server/ongoing-tasks/etl/olap#athena-example
{PANEL: Retrieving Shard-Specific ETL Task Info}
-* The [GetOngoingTaskInfoOperation](../server/ongoing-tasks/general-info#get-ongoing-task-info-operation)
+* The [GetOngoingTaskInfoOperation](../client-api/operations/maintenance/ongoing-tasks/ongoing-task-operations)
store operation can be used on a non-sharded database to retrieve a task's information.
* `GetOngoingTaskInfoOperation` can also be used on a sharded database.
diff --git a/Documentation/6.0/Raven.Documentation.Pages/studio/database/tasks/ongoing-tasks/external-replication-task.markdown b/Documentation/6.0/Raven.Documentation.Pages/studio/database/tasks/ongoing-tasks/external-replication-task.markdown
index 226f82f9dc..e8c8d054b0 100644
--- a/Documentation/6.0/Raven.Documentation.Pages/studio/database/tasks/ongoing-tasks/external-replication-task.markdown
+++ b/Documentation/6.0/Raven.Documentation.Pages/studio/database/tasks/ongoing-tasks/external-replication-task.markdown
@@ -1,4 +1,4 @@
-# Studio: External Replication Task
+# External Replication Task
---
{NOTE: }
diff --git a/Documentation/6.0/Raven.Documentation.Pages/studio/database/tasks/ongoing-tasks/kafka-etl-task.markdown b/Documentation/6.0/Raven.Documentation.Pages/studio/database/tasks/ongoing-tasks/kafka-etl-task.markdown
index 38889c718d..5b9e812be3 100644
--- a/Documentation/6.0/Raven.Documentation.Pages/studio/database/tasks/ongoing-tasks/kafka-etl-task.markdown
+++ b/Documentation/6.0/Raven.Documentation.Pages/studio/database/tasks/ongoing-tasks/kafka-etl-task.markdown
@@ -1,4 +1,4 @@
-# Studio: Kafka ETL Task
+# Kafka ETL Task
---
{NOTE: }
@@ -7,11 +7,14 @@
* **Extract** selected data from RavenDB documents
* **Transform** the data to new JSON objects and add the new objects to CloudEvents messages.
* **Load** the messages to **topics** of a Kafka broker.
+
* Messages enqueued in Kafka topics are added at the queue's tail.
When the messages reach the queue's head, Kafka clients can access and consume them.
+
* Kafka ETL tasks transfer **documents only**.
Document extensions like attachments, counters, or time series, are not transferred.
-* This page explains how to create a Kafka ETL task using Studio.
+
+* This page explains how to create a Kafka ETL task using the Studio.
[Learn here](../../../../server/ongoing-tasks/etl/queue-etl/kafka) how to define a Kafka ETL task using code.
* In this page:
diff --git a/Documentation/6.0/Raven.Documentation.Pages/studio/database/tasks/ongoing-tasks/kafka-queue-sink.markdown b/Documentation/6.0/Raven.Documentation.Pages/studio/database/tasks/ongoing-tasks/kafka-queue-sink.markdown
index b98d60bb99..515498264e 100644
--- a/Documentation/6.0/Raven.Documentation.Pages/studio/database/tasks/ongoing-tasks/kafka-queue-sink.markdown
+++ b/Documentation/6.0/Raven.Documentation.Pages/studio/database/tasks/ongoing-tasks/kafka-queue-sink.markdown
@@ -1,4 +1,4 @@
-# Studio: Kafka Queue Sink Task
+# Kafka Queue Sink Task
---
{NOTE: }
@@ -15,7 +15,7 @@
can read batches of JSON formatted enqueued messages from Kafka topics, construct
documents using user-defined scripts, and store the documents in RavenDB collections.
-* This page explains how to create a Kafka queue sink task using Studio.
+* This page explains how to create a Kafka queue sink task using the Studio.
Learn more about RavenDB queue sinks [here](../../../../server/ongoing-tasks/queue-sink/overview).
Learn how to define a Kafka queue sink using the API [here](../../../../server/ongoing-tasks/queue-sink/kafka-queue-sink).
diff --git a/Documentation/6.0/Raven.Documentation.Pages/studio/database/tasks/ongoing-tasks/rabbitmq-etl-task.markdown b/Documentation/6.0/Raven.Documentation.Pages/studio/database/tasks/ongoing-tasks/rabbitmq-etl-task.markdown
index 5b65bc75c2..21181597e9 100644
--- a/Documentation/6.0/Raven.Documentation.Pages/studio/database/tasks/ongoing-tasks/rabbitmq-etl-task.markdown
+++ b/Documentation/6.0/Raven.Documentation.Pages/studio/database/tasks/ongoing-tasks/rabbitmq-etl-task.markdown
@@ -1,4 +1,4 @@
-# Studio: RabbitMQ ETL Task
+# RabbitMQ ETL Task
---
{NOTE: }
@@ -7,13 +7,16 @@
* **Extract** selected data from RavenDB documents
* **Transform** the data to new JSON objects and add the new objects to CloudEvents messages.
* **Load** the messages to a **RabbitMQ exchange**.
+
* The RabbitMQ exchange enqueues incoming messages at the tail of RabbitMQ queue/s.
When the enqueued messages advance to the queue head, RabbitMQ clients can access and consume them.
+
* RabbitMQ ETL tasks transfer **documents only**.
Document extensions like attachments, counters, or time series, are not transferred.
-* This page explains how to create a RabbitMQ ETL task using Studio.
+
+* This page explains how to create a RabbitMQ ETL task using the Studio.
[Learn here](../../../../server/ongoing-tasks/etl/queue-etl/rabbit-mq) how to define a RabbitMQ ETL task using code.
-
+
* In this page:
* [Open RabbitMQ ETL Task View](../../../../studio/database/tasks/ongoing-tasks/rabbitmq-etl-task#open-rabbitmq-etl-task-view)
* [Define RabbitMQ ETL Task](../../../../studio/database/tasks/ongoing-tasks/rabbitmq-etl-task#define-rabbitmq-etl-task)
diff --git a/Documentation/6.0/Raven.Documentation.Pages/studio/database/tasks/ongoing-tasks/rabbitmq-queue-sink.markdown b/Documentation/6.0/Raven.Documentation.Pages/studio/database/tasks/ongoing-tasks/rabbitmq-queue-sink.markdown
index 4858b8bf5c..fa6b2e2e16 100644
--- a/Documentation/6.0/Raven.Documentation.Pages/studio/database/tasks/ongoing-tasks/rabbitmq-queue-sink.markdown
+++ b/Documentation/6.0/Raven.Documentation.Pages/studio/database/tasks/ongoing-tasks/rabbitmq-queue-sink.markdown
@@ -1,4 +1,4 @@
-# Studio: RabbitMQ Queue Sink Task
+# RabbitMQ Queue Sink Task
---
{NOTE: }
@@ -17,7 +17,7 @@
queues, construct documents using user-defined scripts, and store the
documents in RavenDB collections.
-* This page explains how to create a RabbitMQ sink task using Studio.
+* This page explains how to create a RabbitMQ sink task using the Studio.
Learn more about RavenDB queue sinks [here](../../../../server/ongoing-tasks/queue-sink/overview).
Learn how to define a RabbitMQ queue sink using the API [here](../../../../server/ongoing-tasks/queue-sink/rabbit-mq-queue-sink).
diff --git a/Documentation/6.0/Samples/csharp/Raven.Documentation.Samples/Server/OngoingTasks/ETL/Queue/Queue.cs b/Documentation/6.0/Samples/csharp/Raven.Documentation.Samples/Server/OngoingTasks/ETL/Queue/Queue.cs
index 84328d688b..3c3cdce609 100644
--- a/Documentation/6.0/Samples/csharp/Raven.Documentation.Samples/Server/OngoingTasks/ETL/Queue/Queue.cs
+++ b/Documentation/6.0/Samples/csharp/Raven.Documentation.Samples/Server/OngoingTasks/ETL/Queue/Queue.cs
@@ -1,18 +1,18 @@
using System;
-using System.Collections.Generic;
-using Raven.Client.Documents;
+using System.Collections.Generic;
+using Raven.Client.Documents;
using Raven.Client.Documents.Operations.Backups;
-using Raven.Client.Documents.Operations.ConnectionStrings;
-using Raven.Client.Documents.Operations.ETL;
+using Raven.Client.Documents.Operations.ConnectionStrings;
+using Raven.Client.Documents.Operations.ETL;
using Raven.Client.Documents.Operations.ETL.ElasticSearch;
using Raven.Client.Documents.Operations.ETL.OLAP;
using Raven.Client.Documents.Operations.ETL.Queue;
using Raven.Client.Documents.Operations.ETL.SQL;
-namespace Raven.Documentation.Samples.Server.OngoingTasks.ETL.Queue
-{
- public class ConnectionStrings
+namespace Raven.Documentation.Samples.Server.OngoingTasks.ETL.Queue
+{
+ public class ConnectionStrings
{
- private interface IFoo
+ private interface IFoo
{
#region QueueBrokerType
public enum QueueBrokerType
@@ -23,13 +23,13 @@ public enum QueueBrokerType
}
#endregion
- }
-
- public ConnectionStrings()
- {
- using (var store = new DocumentStore())
- {
- #region add_rabbitMQ_connection-string
+ }
+
+ public ConnectionStrings()
+ {
+ using (var store = new DocumentStore())
+ {
+ #region add_rabbitMQ_connection-string
var res = store.Maintenance.Send(
new PutConnectionStringOperation(
new QueueConnectionString
@@ -42,9 +42,9 @@ public ConnectionStrings()
#endregion
}
- using (var store = new DocumentStore())
+ using (var store = new DocumentStore())
{
- #region add_kafka_connection-string
+ #region add_kafka_connection-string
var res = store.Maintenance.Send(
new PutConnectionStringOperation(
new QueueConnectionString
@@ -241,7 +241,6 @@ public void AddRabbitmqEtlTask()
loadToOrders(orderData, `routingKey`, {
Id: id(this),
- PartitionKey: id(this),
Type: 'special-promotion',
Source: '/promotion-campaigns/summer-sale'
});",
@@ -310,7 +309,6 @@ public void AddRabbitmqEtlTaskDeleteProcessedDocuments()
loadToOrders(orderData, `routingKey`, {
Id: id(this),
- PartitionKey: id(this),
Type: 'special-promotion',
Source: '/promotion-campaigns/summer-sale'
});",
@@ -387,4 +385,4 @@ public class CloudEventAttributes
#endregion
}
}
-}
+}
diff --git a/Documentation/6.1/Raven.Documentation.Pages/client-api/operations/maintenance/.docs.json b/Documentation/6.1/Raven.Documentation.Pages/client-api/operations/maintenance/.docs.json
index c9e23fe685..a42d579072 100644
--- a/Documentation/6.1/Raven.Documentation.Pages/client-api/operations/maintenance/.docs.json
+++ b/Documentation/6.1/Raven.Documentation.Pages/client-api/operations/maintenance/.docs.json
@@ -31,6 +31,11 @@
"Name": "Indexes",
"Mappings": []
},
+ {
+ "Path": "/ongoing-tasks",
+ "Name": "Ongoing Tasks",
+ "Mappings": []
+ },
{
"Path": "/backup",
"Name": "Backup",
diff --git a/Documentation/6.1/Raven.Documentation.Pages/client-api/operations/maintenance/connection-strings/.docs.json b/Documentation/6.1/Raven.Documentation.Pages/client-api/operations/maintenance/connection-strings/.docs.json
index c456490232..f700310ad7 100644
--- a/Documentation/6.1/Raven.Documentation.Pages/client-api/operations/maintenance/connection-strings/.docs.json
+++ b/Documentation/6.1/Raven.Documentation.Pages/client-api/operations/maintenance/connection-strings/.docs.json
@@ -5,16 +5,16 @@
"DiscussionId": "1dc3a5d2-72c3-49f7-8864-44af79d69e95",
"Mappings": []
},
- {
- "Path": "remove-connection-string.markdown",
- "Name": "Remove Connection String",
- "DiscussionId": "1074768c-807a-4d29-8c61-6facceb7bd67",
- "Mappings": []
- },
{
"Path": "get-connection-string.markdown",
"Name": "Get Connection String",
"DiscussionId": "c59f5b9b-dbc4-460e-840b-c13a355d37c8",
"Mappings": []
+ },
+ {
+ "Path": "remove-connection-string.markdown",
+ "Name": "Remove Connection String",
+ "DiscussionId": "1074768c-807a-4d29-8c61-6facceb7bd67",
+ "Mappings": []
}
]
diff --git a/Documentation/6.1/Raven.Documentation.Pages/client-api/operations/maintenance/connection-strings/add-connection-string.dotnet.markdown b/Documentation/6.1/Raven.Documentation.Pages/client-api/operations/maintenance/connection-strings/add-connection-string.dotnet.markdown
new file mode 100644
index 0000000000..3bf847aa41
--- /dev/null
+++ b/Documentation/6.1/Raven.Documentation.Pages/client-api/operations/maintenance/connection-strings/add-connection-string.dotnet.markdown
@@ -0,0 +1,163 @@
+# Add Connection String Operation
+---
+
+{NOTE: }
+
+* Use the [PutConnectionStringOperation](../../../../client-api/operations/maintenance/connection-strings/add-connection-string#the%C2%A0putconnectionstringoperation%C2%A0method) method to define a connection string in your database.
+
+* In this page:
+ * [Add a RavenDB connection string](../../../../client-api/operations/maintenance/connection-strings/add-connection-string#add-a-ravendb-connection-string)
+ * [Add an SQL connection string](../../../../client-api/operations/maintenance/connection-strings/add-connection-string#add-an-sql-connection-string)
+ * [Add an OLAP connection string](../../../../client-api/operations/maintenance/connection-strings/add-connection-string#add-an-olap-connection-string)
+ * [Add an Elasticsearch connection string](../../../../client-api/operations/maintenance/connection-strings/add-connection-string#add-an-elasticsearch-connection-string)
+ * [Add a Kafka connection string](../../../../client-api/operations/maintenance/connection-strings/add-connection-string#add-a-kafka-connection-string)
+ * [Add a RabbitMQ connection string](../../../../client-api/operations/maintenance/connection-strings/add-connection-string#add-a-rabbitmq-connection-string)
+ * [Add an Azure Queue Storage connection string](../../../../client-api/operations/maintenance/connection-strings/add-connection-string#add-an-azure-queue-storage-connection-string)
+ * [The PutConnectionStringOperation method](../../../../client-api/operations/maintenance/connection-strings/add-connection-string#the%C2%A0putconnectionstringoperation%C2%A0method)
+
+{NOTE/}
+
+---
+
+{PANEL: Add a RavenDB connection string}
+
+The RavenDB connection string is used by RavenDB's [RavenDB ETL Task](../../../../server/ongoing-tasks/etl/raven).
+
+---
+
+Example:
+{CODE add_raven_connection_string@ClientApi\Operations\Maintenance\ConnectionStrings\AddConnectionStrings.cs /}
+
+---
+
+Syntax:
+{CODE:csharp raven_connection_string@ClientApi\Operations\Maintenance\ConnectionStrings\AddConnectionStrings.cs /}
+
+{NOTE: }
+
+**Secure servers**
+
+To [connect to secure RavenDB servers](../../../../server/security/authentication/certificate-management#enabling-communication-between-servers:-importing-and-exporting-certificates)
+you need to:
+1. Export the server certificate from the source server.
+2. Install it as a client certificate on the destination server.
+
+This can be done from the [Certificates view](../../../../server/security/authentication/certificate-management#studio-certificates-management-view) in the Studio.
+
+{NOTE/}
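+
+For orientation, a minimal inline sketch of the same operation is shown below. It assumes an
+already-initialized `DocumentStore` named `store`; the connection string name, database name, and URL are placeholders.
+
+```csharp
+// RavenConnectionString is in the Raven.Client.Documents.Operations.ETL namespace,
+// PutConnectionStringOperation in Raven.Client.Documents.Operations.ConnectionStrings.
+var connectionString = new RavenConnectionString
+{
+    Name = "raven-connection-string-name",                           // placeholder name
+    Database = "TargetDatabase",                                      // placeholder target database
+    TopologyDiscoveryUrls = new[] { "https://target-raven-server" }   // placeholder URL(s)
+};
+
+var putResult = store.Maintenance.Send(
+    new PutConnectionStringOperation<RavenConnectionString>(connectionString));
+```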
+
+{PANEL/}
+
+{PANEL: Add an SQL connection string}
+
+The SQL connection string is used by RavenDB's [SQL ETL Task](../../../../server/ongoing-tasks/etl/sql).
+
+---
+
+Example:
+{CODE add_sql_connection_string@ClientApi\Operations\Maintenance\ConnectionStrings\AddConnectionStrings.cs /}
+
+---
+
+Syntax:
+{CODE sql_connection_string@ClientApi\Operations\Maintenance\ConnectionStrings\AddConnectionStrings.cs /}
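+
+A minimal inline sketch with placeholder values, assuming an initialized `DocumentStore` named `store`:
+
+```csharp
+var connectionString = new SqlConnectionString
+{
+    Name = "sql-connection-string-name",
+    // Provider-specific connection string and factory name (placeholder values)
+    ConnectionString = @"Data Source=mySqlServer;Initial Catalog=MyDatabase;User ID=sa;Password=secret",
+    FactoryName = "Microsoft.Data.SqlClient"
+};
+
+store.Maintenance.Send(new PutConnectionStringOperation<SqlConnectionString>(connectionString));
+```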
+
+{PANEL/}
+
+{PANEL: Add an OLAP connection string}
+
+The OLAP connection string is used by RavenDB's [OLAP ETL Task](../../../../server/ongoing-tasks/etl/olap).
+
+---
+
+**To a local machine** - example:
+
+{CODE add_olap_connection_string_1@ClientApi\Operations\Maintenance\ConnectionStrings\AddConnectionStrings.cs /}
+
+---
+
+**To a cloud-based server** - example:
+
+* The following example shows a connection string for an Amazon AWS destination.
+* Adjust the parameters as needed if you are using other cloud-based servers (e.g., Google, Azure, Glacier, S3, FTP).
+* The available parameters are listed in [ETL destination settings](../../../../server/ongoing-tasks/etl/olap#etl-destination-settings).
+
+{CODE add_olap_connection_string_2@ClientApi\Operations\Maintenance\ConnectionStrings\AddConnectionStrings.cs /}
+
+---
+
+Syntax:
+{CODE olap_connection_string@ClientApi\Operations\Maintenance\ConnectionStrings\AddConnectionStrings.cs /}
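+
+A minimal inline sketch for a local destination, assuming an initialized `DocumentStore` named `store`
+(`LocalSettings` comes from the `Raven.Client.Documents.Operations.Backups` namespace; the folder path is a placeholder):
+
+```csharp
+var connectionString = new OlapConnectionString
+{
+    Name = "olap-connection-string-name",
+    LocalSettings = new LocalSettings
+    {
+        FolderPath = @"C:\olap-output"   // placeholder destination folder
+    }
+};
+
+store.Maintenance.Send(new PutConnectionStringOperation<OlapConnectionString>(connectionString));
+```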
+
+{PANEL/}
+
+{PANEL: Add an Elasticsearch connection string}
+
+The Elasticsearch connection string is used by RavenDB's [Elasticsearch ETL Task](../../../../server/ongoing-tasks/etl/elasticsearch).
+
+---
+
+Example:
+{CODE add_elasticsearch_connection_string@ClientApi\Operations\Maintenance\ConnectionStrings\AddConnectionStrings.cs /}
+
+---
+
+Syntax:
+{CODE elasticsearch_connection_string@ClientApi\Operations\Maintenance\ConnectionStrings\AddConnectionStrings.cs /}
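+
+A minimal inline sketch with a placeholder node URL, assuming an initialized `DocumentStore` named `store`:
+
+```csharp
+var connectionString = new ElasticSearchConnectionString
+{
+    Name = "elasticsearch-connection-string-name",
+    Nodes = new[] { "http://localhost:9200" }   // placeholder Elasticsearch node URL(s)
+};
+
+store.Maintenance.Send(new PutConnectionStringOperation<ElasticSearchConnectionString>(connectionString));
+```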
+
+{PANEL/}
+
+{PANEL: Add a Kafka connection string}
+
+The Kafka connection string is used by RavenDB's [Kafka Queue ETL Task](../../../../server/ongoing-tasks/etl/queue-etl/kafka).
+Learn how to add a Kafka connection string in section [Add a Kafka connection string](../../../../server/ongoing-tasks/etl/queue-etl/kafka#add-a-kafka-connection-string).
+
+{PANEL/}
+
+{PANEL: Add a RabbitMQ connection string}
+
+The RabbitMQ connection string is used by RavenDB's [RabbitMQ Queue ETL Task](../../../../server/ongoing-tasks/etl/queue-etl/rabbit-mq).
+Learn how to add a RabbitMQ connection string in section [Add a RabbitMQ connection string](../../../../server/ongoing-tasks/etl/queue-etl/rabbit-mq#add-a-rabbitmq-connection-string).
+
+{PANEL/}
+
+{PANEL: Add an Azure Queue Storage connection string}
+
+The Azure Queue Storage connection string is used by RavenDB's [Azure Queue Storage ETL Task](../../../../server/ongoing-tasks/etl/queue-etl/azure-queue).
+Learn how to add an Azure Queue Storage connection string in section [Add an Azure Queue Storage connection string](../../../../server/ongoing-tasks/etl/queue-etl/azure-queue#add-an-azure-queue-storage-connection-string).
+
+{PANEL/}
+
+{PANEL: The `PutConnectionStringOperation` method}
+
+{CODE put_connection_string@ClientApi\Operations\Maintenance\ConnectionStrings\AddConnectionStrings.cs /}
+
+| Parameter | Type | Description |
+|----------------------|---------------------------------|---------------------------------------------------------------------------------------------------------------|
+| **connectionString** | `RavenConnectionString` | Object that defines the RavenDB connection string. |
+| **connectionString** | `SqlConnectionString` | Object that defines the SQL connection string. |
+| **connectionString** | `OlapConnectionString` | Object that defines the OLAP connection string. |
+| **connectionString** | `ElasticSearchConnectionString` | Object that defines the Elasticsearch connection string. |
+| **connectionString** | `QueueConnectionString` | Object that defines the connection string for the Queue ETL tasks (Kafka, RabbitMQ, and Azure Queue Storage). |
+
+{CODE:csharp connection_string_class@ClientApi\Operations\Maintenance\ConnectionStrings\AddConnectionStrings.cs /}
+
+{PANEL/}
+
+## Related Articles
+
+### Connection Strings
+
+- [Get](../../../../client-api/operations/maintenance/connection-strings/get-connection-string)
+- [Remove](../../../../client-api/operations/maintenance/connection-strings/remove-connection-string)
+
+### ETL (Extract, Transform, Load) Tasks
+
+- [Operations: How to Add ETL](../../../../client-api/operations/maintenance/etl/add-etl)
+- [Ongoing Tasks: ETL Basics](../../../../server/ongoing-tasks/etl/basics)
+
+### External Replication
+
+- [External Replication Task](../../../../studio/database/tasks/ongoing-tasks/external-replication-task)
+- [How Replication Works](../../../../server/clustering/replication/replication)
+
diff --git a/Documentation/6.1/Raven.Documentation.Pages/client-api/operations/maintenance/connection-strings/get-connection-string.dotnet.markdown b/Documentation/6.1/Raven.Documentation.Pages/client-api/operations/maintenance/connection-strings/get-connection-string.dotnet.markdown
new file mode 100644
index 0000000000..f26e691e9c
--- /dev/null
+++ b/Documentation/6.1/Raven.Documentation.Pages/client-api/operations/maintenance/connection-strings/get-connection-string.dotnet.markdown
@@ -0,0 +1,62 @@
+# Get Connection String Operation
+---
+
+{NOTE: }
+
+* Use `GetConnectionStringsOperation` to retrieve properties for a specific connection string
+  or for all connection strings defined in the database.
+
+* To learn how to create a new connection string, see [Add Connection String Operation](../../../../client-api/operations/maintenance/connection-strings/add-connection-string).
+
+* In this page:
+  * [Get connection string by name and type](../../../../client-api/operations/maintenance/connection-strings/get-connection-string#get-connection-string-by-name-and-type)
+  * [Get all connection strings](../../../../client-api/operations/maintenance/connection-strings/get-connection-string#get-all-connection-strings)
+  * [Syntax](../../../../client-api/operations/maintenance/connection-strings/get-connection-string#syntax)
+
+{NOTE/}
+
+---
+
+{PANEL: Get connection string by name and type}
+
+The following example retrieves a RavenDB Connection String:
+
+{CODE get_connection_string_by_name@ClientApi\Operations\Maintenance\ConnectionStrings\GetConnectionStrings.cs /}
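+
+A minimal inline sketch of the same retrieval, assuming an initialized `DocumentStore` named `store`
+and a placeholder connection string name:
+
+```csharp
+var operation = new GetConnectionStringsOperation(
+    "raven-connection-string-name",     // placeholder name
+    ConnectionStringType.Raven);
+
+GetConnectionStringsResult result = store.Maintenance.Send(operation);
+
+// The matching connection string (if defined) is keyed by its name
+var ravenConnectionString = result.RavenConnectionStrings["raven-connection-string-name"];
+```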
+
+{PANEL/}
+
+{PANEL: Get all connection strings}
+
+{CODE get_all_connection_strings@ClientApi\Operations\Maintenance\ConnectionStrings\GetConnectionStrings.cs /}
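+
+A minimal inline sketch, assuming an initialized `DocumentStore` named `store`:
+
+```csharp
+// Sending the operation with no arguments returns every connection string defined on the database
+GetConnectionStringsResult result = store.Maintenance.Send(new GetConnectionStringsOperation());
+
+// e.g. list the names of the defined RavenDB connection strings
+foreach (var name in result.RavenConnectionStrings.Keys)
+{
+    Console.WriteLine(name);   // requires `using System;`
+}
+```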
+
+{PANEL/}
+
+{PANEL: Syntax}
+
+{CODE syntax_1@ClientApi\Operations\Maintenance\ConnectionStrings\GetConnectionStrings.cs /}
+
+| Parameter | Type | Description |
+|--------------------------|------------------------|---------------------------------------------------------------------------------|
+| **connectionStringName** | `string` | Connection string name |
+| **type** | `ConnectionStringType` | Connection string type:<br>`Raven`, `Sql`, `Olap`, `ElasticSearch`, or `Queue` |
+
+{CODE syntax_2@ClientApi\Operations\Maintenance\ConnectionStrings\GetConnectionStrings.cs /}
+
+| Return value of `store.Maintenance.Send(GetConnectionStringsOperation)` | |
+|--------------------------------------------------------------------------|-----------------------------------------------------------------------|
+| `GetConnectionStringsResult` | Object containing all the connection strings defined in the database |
+
+{CODE syntax_3@ClientApi\Operations\Maintenance\ConnectionStrings\GetConnectionStrings.cs /}
+
+{NOTE: }
+Detailed syntax for each connection string type is available in the article [Add Connection String Operation](../../../../client-api/operations/maintenance/connection-strings/add-connection-string).
+{NOTE/}
+
+{PANEL/}
+
+## Related Articles
+
+### Connection Strings
+
+- [Add](../../../../client-api/operations/maintenance/connection-strings/add-connection-string)
+- [Remove](../../../../client-api/operations/maintenance/connection-strings/remove-connection-string)
diff --git a/Documentation/6.1/Raven.Documentation.Pages/client-api/operations/maintenance/connection-strings/remove-connection-string.dotnet.markdown b/Documentation/6.1/Raven.Documentation.Pages/client-api/operations/maintenance/connection-strings/remove-connection-string.dotnet.markdown
new file mode 100644
index 0000000000..e2e196fe00
--- /dev/null
+++ b/Documentation/6.1/Raven.Documentation.Pages/client-api/operations/maintenance/connection-strings/remove-connection-string.dotnet.markdown
@@ -0,0 +1,39 @@
+# Remove Connection String Operation
+---
+
+{NOTE: }
+
+* Use `RemoveConnectionStringOperation` to remove a connection string definition from the database.
+
+* In this page:
+  * [Remove connection string](../../../../client-api/operations/maintenance/connection-strings/remove-connection-string#remove-connection-string)
+  * [Syntax](../../../../client-api/operations/maintenance/connection-strings/remove-connection-string#syntax)
+
+{NOTE/}
+
+---
+
+{PANEL: Remove connection string}
+
+The following example removes a RavenDB Connection String.
+
+{CODE remove_raven_connection_string@ClientApi\Operations\Maintenance\ConnectionStrings\RemoveConnectionStrings.cs /}
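+
+A minimal inline sketch, assuming an initialized `DocumentStore` named `store` and a placeholder connection string name:
+
+```csharp
+// The Name property identifies which connection string to remove
+var connectionString = new RavenConnectionString { Name = "raven-connection-string-name" };
+
+store.Maintenance.Send(
+    new RemoveConnectionStringOperation<RavenConnectionString>(connectionString));
+```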
+
+{PANEL/}
+
+{PANEL: Syntax}
+
+{CODE remove_connection_string@ClientApi\Operations\Maintenance\ConnectionStrings\RemoveConnectionStrings.cs /}
+
+| Parameter | Type | Description |
+|----------------------|-------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| **connectionString** | `T` | Connection string to remove:<br>`RavenConnectionString`<br>`SqlConnectionString`<br>`OlapConnectionString`<br>`ElasticSearchConnectionString`<br>`QueueConnectionString` |
+
+{PANEL/}
+
+## Related Articles
+
+### Connection Strings
+
+- [Get](../../../../client-api/operations/maintenance/connection-strings/get-connection-string)
+- [Add](../../../../client-api/operations/maintenance/connection-strings/add-connection-string)
diff --git a/Documentation/6.1/Raven.Documentation.Pages/client-api/operations/maintenance/etl/add-etl.dotnet.markdown b/Documentation/6.1/Raven.Documentation.Pages/client-api/operations/maintenance/etl/add-etl.dotnet.markdown
new file mode 100644
index 0000000000..b3cf1b5eef
--- /dev/null
+++ b/Documentation/6.1/Raven.Documentation.Pages/client-api/operations/maintenance/etl/add-etl.dotnet.markdown
@@ -0,0 +1,155 @@
+# Add ETL Operation
+---
+
+{NOTE: }
+
+* Use the `AddEtlOperation` method to add a new ongoing ETL task to your database.
+
+* To learn about ETL (Extract, Transform, Load) ongoing tasks, see the article [ETL Basics](../../../../server/ongoing-tasks/etl/basics).
+ To learn how to manage ETL tasks from the Studio, see [Ongoing tasks - overview](../../../../studio/database/tasks/ongoing-tasks/general-info).
+
+* In this page:
+
+ * [Add RavenDB ETL task](../../../../client-api/operations/maintenance/etl/add-etl#add-ravendb-etl-task)
+ * [Add SQL ETL task](../../../../client-api/operations/maintenance/etl/add-etl#add-sql-etl-task)
+ * [Add OLAP ETL task](../../../../client-api/operations/maintenance/etl/add-etl#add-olap-etl-task)
+ * [Add Elasticsearch ETL task](../../../../client-api/operations/maintenance/etl/add-etl#add-elasticsearch-etl-task)
+ * [Add Kafka ETL task](../../../../client-api/operations/maintenance/etl/add-etl#add-kafka-etl-task)
+ * [Add RabbitMQ ETL task](../../../../client-api/operations/maintenance/etl/add-etl#add-rabbitmq-etl-task)
+ * [Add Azure Queue Storage ETL task](../../../../client-api/operations/maintenance/etl/add-etl#add-azure-queue-storage-etl-task)
+ * [Syntax](../../../../client-api/operations/maintenance/etl/add-etl#syntax)
+
+{NOTE/}
+
+---
+
+{PANEL: Add RavenDB ETL task}
+
+* Learn about the RavenDB ETL task in article **[RavenDB ETL task](../../../../server/ongoing-tasks/etl/raven)**.
+* Learn how to define a connection string for the RavenDB ETL task in section **[Add a RavenDB connection string](../../../../client-api/operations/maintenance/connection-strings/add-connection-string#add-a-ravendb-connection-string)**.
+* To manage the RavenDB ETL task from the Studio, see **[Studio: RavenDB ETL task](../../../../studio/database/tasks/ongoing-tasks/ravendb-etl-task)**.
+
+---
+
+The following example adds a RavenDB ETL task:
+
+{CODE add_raven_etl@ClientApi\Operations\Maintenance\Etl\AddEtl.cs /}
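+
+A minimal inline sketch of such a task definition, assuming an initialized `DocumentStore` named `store`,
+`using System.Collections.Generic;`, and the `Raven.Client.Documents.Operations.ETL` namespace
+(the task name, connection string name, and script are placeholders):
+
+```csharp
+var configuration = new RavenEtlConfiguration
+{
+    Name = "Employees ETL",                                  // task name
+    ConnectionStringName = "raven-connection-string-name",   // an existing RavenDB connection string
+    Transforms = new List<Transformation>
+    {
+        new Transformation
+        {
+            Name = "Script #1",
+            Collections = new List<string> { "Employees" },
+            Script = @"loadToEmployees({
+                           Name: this.FirstName + ' ' + this.LastName,
+                           Title: this.Title
+                       });"
+        }
+    }
+};
+
+var result = store.Maintenance.Send(
+    new AddEtlOperation<RavenConnectionString>(configuration));
+```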
+
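+As a rough, self-contained sketch of the same flow (not taken from the samples project; the task name, transform script, and the assumed pre-existing connection string name "raven-connection-string" are illustrative):
+
+{CODE-BLOCK:csharp}
+// Define the RavenDB ETL configuration
+var configuration = new RavenEtlConfiguration
+{
+    // Name of a RavenDB connection string that was already added to the database
+    ConnectionStringName = "raven-connection-string",
+    Name = "Employees ETL",
+    Transforms =
+    {
+        new Transformation
+        {
+            Name = "Script #1",
+            Collections = { "Employees" },
+            Script = @"loadToEmployees({
+                           Name: this.FirstName + ' ' + this.LastName,
+                           Title: this.Title
+                       });"
+        }
+    }
+};
+
+// Add the ETL task to the database
+AddEtlOperationResult result = store.Maintenance.Send(
+    new AddEtlOperation<RavenConnectionString>(configuration));
+{CODE-BLOCK/}
+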
+{PANEL/}
+
+{PANEL: Add SQL ETL task}
+
+* Learn about the SQL ETL task in article **[SQL ETL task](../../../../server/ongoing-tasks/etl/sql)**.
+* Learn how to define a connection string for the SQL ETL task in section **[Add a SQL connection string](../../../../client-api/operations/maintenance/connection-strings/add-connection-string#add-an-sql-connection-string)**.
+
+---
+
+The following example adds an SQL ETL task:
+
+{CODE add_sql_etl@ClientApi\Operations\Maintenance\Etl\AddEtl.cs /}
+
+{PANEL/}
+
+{PANEL: Add OLAP ETL task}
+
+* Learn about the OLAP ETL task in article **[OLAP ETL task](../../../../server/ongoing-tasks/etl/olap)**.
+* Learn how to define a connection string for the OLAP ETL task in section **[Add an OLAP connection string](../../../../client-api/operations/maintenance/connection-strings/add-connection-string#add-an-olap-connection-string)**.
+* To manage the OLAP ETL task from the Studio, see **[Studio: OLAP ETL task](../../../../studio/database/tasks/ongoing-tasks/olap-etl-task)**.
+
+---
+
+The following example adds an OLAP ETL task:
+
+{CODE add_olap_etl@ClientApi\Operations\Maintenance\Etl\AddEtl.cs /}
+
+{PANEL/}
+
+{PANEL: Add Elasticsearch ETL task}
+
+* Learn about the Elasticsearch ETL task in article **[Elasticsearch ETL task](../../../../server/ongoing-tasks/etl/elasticsearch)**.
+* Learn how to define a connection string for the Elasticsearch ETL task in section **[Add an Elasticsearch connection string](../../../../client-api/operations/maintenance/connection-strings/add-connection-string#add-an-elasticsearch-connection-string)**.
+* To manage the Elasticsearch ETL task from the Studio, see **[Studio: Elasticsearch ETL task](../../../../studio/database/tasks/ongoing-tasks/elasticsearch-etl-task)**.
+
+---
+
+The following example adds an Elasticsearch ETL task:
+
+{CODE add_elasticsearch_etl@ClientApi\Operations\Maintenance\Etl\AddEtl.cs /}
+
+{PANEL/}
+
+{PANEL: Add Kafka ETL task}
+
+* Learn about the Kafka ETL task in article **[Kafka ETL task](../../../../server/ongoing-tasks/etl/queue-etl/kafka)**.
+* Learn how to define a connection string for the Kafka ETL task in section **[Add a Kafka connection string](../../../../server/ongoing-tasks/etl/queue-etl/kafka#add-a-kafka-connection-string)**.
+* To manage the Kafka ETL task from the Studio, see **[Studio: Kafka ETL task](../../../../studio/database/tasks/ongoing-tasks/kafka-etl-task)**.
+
+---
+
+* Examples showing how to add a Kafka ETL task are available in section **[Add a Kafka ETL task](../../../../server/ongoing-tasks/etl/queue-etl/kafka#add-a-kafka-etl-task)**.
+
+{PANEL/}
+
+{PANEL: Add RabbitMQ ETL task}
+
+* Learn about the RabbitMQ ETL task in article **[RabbitMQ ETL task](../../../../server/ongoing-tasks/etl/queue-etl/rabbit-mq)**.
+* Learn how to define a connection string for the RabbitMQ ETL task in section **[Add a RabbitMQ connection string](../../../../server/ongoing-tasks/etl/queue-etl/rabbit-mq#add-a-rabbitmq-connection-string)**.
+* To manage the RabbitMQ ETL task from the Studio, see **[Studio: RabbitMQ ETL task](../../../../studio/database/tasks/ongoing-tasks/rabbitmq-etl-task)**.
+
+---
+
+* Examples showing how to add a RabbitMQ ETL task are available in section **[Add a RabbitMQ ETL task](../../../../server/ongoing-tasks/etl/queue-etl/rabbit-mq#add-a-rabbitmq-etl-task)**.
+
+{PANEL/}
+
+{PANEL: Add Azure Queue Storage ETL task}
+
+* Learn about the Azure Queue Storage ETL task in article **[Azure Queue Storage ETL task](../../../../server/ongoing-tasks/etl/queue-etl/azure-queue)**.
+* Learn how to define a connection string for the Azure Queue Storage ETL task in section
+ **[Add an Azure Queue Storage connection string](../../../../server/ongoing-tasks/etl/queue-etl/azure-queue#add-an-azure-queue-storage-connection-string)**.
+* To manage the Azure Queue Storage ETL task from the Studio, see **[Studio: Azure Queue Storage ETL task](../../../../studio/database/tasks/ongoing-tasks/azure-queue-storage-etl)**.
+
+---
+
+* Examples showing how to add an Azure Queue Storage ETL task are available in section **[Add an Azure Queue Storage ETL task](../../../../server/ongoing-tasks/etl/queue-etl/azure-queue#add-an-azure-queue-storage-etl-task)**.
+
+{PANEL/}
+
+{PANEL: Syntax}
+
+{CODE add_etl_operation@ClientApi\Operations\Maintenance\Etl\AddEtl.cs /}
+
+| Parameter | Type | Description |
+|-------------------|-----------------------|----------------------------------------------------------------------|
+| **configuration** | `EtlConfiguration<T>` | The ETL configuration object, where `T` is the connection string type |
+
+{PANEL/}
+
+## Related Articles
+
+### Server
+
+- [ETL basics](../../../../server/ongoing-tasks/etl/basics)
+- [RavenDB ETL](../../../../server/ongoing-tasks/etl/raven)
+- [SQL ETL](../../../../server/ongoing-tasks/etl/sql)
+- [OLAP ETL](../../../../server/ongoing-tasks/etl/olap)
+- [Elasticsearch ETL](../../../../server/ongoing-tasks/etl/elasticsearch)
+- [Kafka ETL](../../../../server/ongoing-tasks/etl/queue-etl/kafka)
+- [RabbitMQ ETL](../../../../server/ongoing-tasks/etl/queue-etl/rabbit-mq)
+- [Azure Queue Storage ETL](../../../../server/ongoing-tasks/etl/queue-etl/azure-queue)
+
+### Studio
+
+- [Ongoing tasks - overview](../../../../studio/database/tasks/ongoing-tasks/general-info)
+- [RavenDB ETL Task](../../../../studio/database/tasks/ongoing-tasks/ravendb-etl-task)
+- [OLAP ETL Task](../../../../studio/database/tasks/ongoing-tasks/olap-etl-task)
+- [Elasticsearch ETL Task](../../../../studio/database/tasks/ongoing-tasks/elasticsearch-etl-task)
+- [Kafka ETL Task](../../../../studio/database/tasks/ongoing-tasks/kafka-etl-task)
+- [RabbitMQ ETL Task](../../../../studio/database/tasks/ongoing-tasks/rabbitmq-etl-task)
+- [Azure Queue Storage ETL Task](../../../../studio/database/tasks/ongoing-tasks/azure-queue-storage-etl)
+
+### Connection Strings
+
+- [Add](../../../../client-api/operations/maintenance/connection-strings/add-connection-string)
+- [Get](../../../../client-api/operations/maintenance/connection-strings/get-connection-string)
+- [Remove](../../../../client-api/operations/maintenance/connection-strings/remove-connection-string)
diff --git a/Documentation/6.1/Raven.Documentation.Pages/client-api/operations/maintenance/etl/add-etl.java.markdown b/Documentation/6.1/Raven.Documentation.Pages/client-api/operations/maintenance/etl/add-etl.java.markdown
new file mode 100644
index 0000000000..05c71dc16c
--- /dev/null
+++ b/Documentation/6.1/Raven.Documentation.Pages/client-api/operations/maintenance/etl/add-etl.java.markdown
@@ -0,0 +1,78 @@
+# Add ETL Operation
+---
+
+{NOTE: }
+
+* Use the `AddEtlOperation` method to add a new ongoing ETL task to your database.
+
+* To learn about ETL (Extract, Transform, Load) ongoing tasks, see the article [ETL Basics](../../../../server/ongoing-tasks/etl/basics).
+ To learn how to manage ETL tasks from the Studio, see [Ongoing tasks - overview](../../../../studio/database/tasks/ongoing-tasks/general-info).
+
+* In this page:
+ * [Example - add Raven ETL](../../../../client-api/operations/maintenance/etl/add-etl#example---add-raven-etl)
+ * [Example - add SQL ETL](../../../../client-api/operations/maintenance/etl/add-etl#example---add-sql-etl)
+ * [Example - add OLAP ETL](../../../../client-api/operations/maintenance/etl/add-etl#example---add-olap-etl)
+ * [Syntax](../../../../client-api/operations/maintenance/etl/add-etl#syntax)
+
+{NOTE/}
+
+---
+
+{PANEL: Example - add Raven ETL}
+
+{CODE:java add_raven_etl@ClientApi\Operations\Maintenance\Etl\AddEtl.java /}
+
+{NOTE: }
+
+**Secure servers**:
+
+To [connect secure RavenDB servers](../../../../server/security/authentication/certificate-management#enabling-communication-between-servers:-importing-and-exporting-certificates),
+you need to:
+
+1. Export the server certificate from the source server.
+2. Install it as a client certificate on the destination server.
+
+This can be done in the RavenDB Studio -> Server Management -> [Certificates view](../../../../server/security/authentication/certificate-management#studio-certificates-management-view).
+{NOTE/}
+
+{PANEL/}
+
+{PANEL: Example - add SQL ETL}
+
+{CODE:java add_sql_etl@ClientApi\Operations\Maintenance\Etl\AddEtl.java /}
+
+{PANEL/}
+
+{PANEL: Example - add OLAP ETL}
+
+{CODE:java add_olap_etl@ClientApi\Operations\Maintenance\Etl\AddEtl.java /}
+
+{PANEL/}
+
+{PANEL: Syntax}
+
+{CODE:java add_etl_operation@ClientApi\Operations\Maintenance\Etl\AddEtl.java /}
+
+| Parameter | Type | Description |
+|-------------------|-----------------------|-------------------------------------------------------|
+| **configuration** | `EtlConfiguration<T>` | The ETL configuration object, where `T` is the connection string type |
+
+{PANEL/}
+
+## Related Articles
+
+### ETL
+
+- [Basics](../../../../server/ongoing-tasks/etl/basics)
+- [RavenDB ETL](../../../../server/ongoing-tasks/etl/raven)
+- [SQL ETL](../../../../server/ongoing-tasks/etl/sql)
+
+### Studio
+
+- [RavenDB ETL Task](../../../../studio/database/tasks/ongoing-tasks/ravendb-etl-task)
+
+### Connection Strings
+
+- [Add](../../../../client-api/operations/maintenance/connection-strings/add-connection-string)
+- [Get](../../../../client-api/operations/maintenance/connection-strings/get-connection-string)
+- [Remove](../../../../client-api/operations/maintenance/connection-strings/remove-connection-string)
diff --git a/Documentation/6.1/Raven.Documentation.Pages/client-api/operations/maintenance/etl/add-etl.js.markdown b/Documentation/6.1/Raven.Documentation.Pages/client-api/operations/maintenance/etl/add-etl.js.markdown
new file mode 100644
index 0000000000..9e337f26a4
--- /dev/null
+++ b/Documentation/6.1/Raven.Documentation.Pages/client-api/operations/maintenance/etl/add-etl.js.markdown
@@ -0,0 +1,67 @@
+# Add ETL Operation
+---
+
+{NOTE: }
+
+* Use the `AddEtlOperation` method to add a new ongoing ETL task to your database.
+
+* To learn about ETL (Extract, Transform, Load) ongoing tasks, see the article [ETL Basics](../../../../server/ongoing-tasks/etl/basics).
+ To learn how to manage ETL tasks from the Studio, see [Ongoing tasks - overview](../../../../studio/database/tasks/ongoing-tasks/general-info).
+
+* In this page:
+ * [Example - add Raven ETL](../../../../client-api/operations/maintenance/etl/add-etl#example---add-raven-etl)
+ * [Example - add SQL ETL](../../../../client-api/operations/maintenance/etl/add-etl#example---add-sql-etl)
+ * [Example - add OLAP ETL](../../../../client-api/operations/maintenance/etl/add-etl#example---add-olap-etl)
+ * [Syntax](../../../../client-api/operations/maintenance/etl/add-etl#syntax)
+
+{NOTE/}
+
+---
+
+{PANEL: Example - add Raven ETL}
+
+{CODE:nodejs add_raven_etl@client-api\operations\maintenance\etl\addEtl.js /}
+
+{PANEL/}
+
+{PANEL: Example - add SQL ETL}
+
+{CODE:nodejs add_sql_etl@client-api\operations\maintenance\etl\addEtl.js /}
+
+{PANEL/}
+
+{PANEL: Example - add OLAP ETL}
+
+{CODE:nodejs add_olap_etl@client-api\operations\maintenance\etl\addEtl.js /}
+
+{PANEL/}
+
+{PANEL: Syntax}
+
+{CODE:nodejs add_etl_operation@client-api\operations\maintenance\etl\addEtl.js /}
+
+| Parameter | Type | Description |
+|-------------------|---------------------------|-----------------------------------|
+| **configuration** | `EtlConfiguration` object | The ETL task configuration to add |
+
+{CODE:nodejs syntax@client-api\operations\maintenance\etl\addEtl.js /}
+
+{PANEL/}
+
+## Related Articles
+
+### ETL
+
+- [Basics](../../../../server/ongoing-tasks/etl/basics)
+- [RavenDB ETL](../../../../server/ongoing-tasks/etl/raven)
+- [SQL ETL](../../../../server/ongoing-tasks/etl/sql)
+
+### Studio
+
+- [RavenDB ETL Task](../../../../studio/database/tasks/ongoing-tasks/ravendb-etl-task)
+
+### Connection Strings
+
+- [Add](../../../../client-api/operations/maintenance/connection-strings/add-connection-string)
+- [Get](../../../../client-api/operations/maintenance/connection-strings/get-connection-string)
+- [Remove](../../../../client-api/operations/maintenance/connection-strings/remove-connection-string)
diff --git a/Documentation/6.1/Raven.Documentation.Pages/client-api/operations/maintenance/ongoing-tasks/.docs.json b/Documentation/6.1/Raven.Documentation.Pages/client-api/operations/maintenance/ongoing-tasks/.docs.json
new file mode 100644
index 0000000000..887e78d82b
--- /dev/null
+++ b/Documentation/6.1/Raven.Documentation.Pages/client-api/operations/maintenance/ongoing-tasks/.docs.json
@@ -0,0 +1,8 @@
+[
+ {
+ "Path": "ongoing-task-operations.markdown",
+ "Name": "Ongoing Task Operations",
+ "DiscussionId": "b56f5a63-46cf-4e61-956e-4c3e7592274b",
+ "Mappings": []
+ }
+]
diff --git a/Documentation/6.1/Raven.Documentation.Pages/client-api/operations/maintenance/ongoing-tasks/ongoing-task-operations.dotnet.markdown b/Documentation/6.1/Raven.Documentation.Pages/client-api/operations/maintenance/ongoing-tasks/ongoing-task-operations.dotnet.markdown
new file mode 100644
index 0000000000..d23865acb2
--- /dev/null
+++ b/Documentation/6.1/Raven.Documentation.Pages/client-api/operations/maintenance/ongoing-tasks/ongoing-task-operations.dotnet.markdown
@@ -0,0 +1,83 @@
+# Ongoing Task Operations
+---
+
+{NOTE: }
+
+* Once an ongoing task is created, it can be managed via the Client API [Operations](../../../../client-api/operations/what-are-operations).
+ You can get task info, toggle the task state (enable, disable), or delete the task.
+
+* Ongoing tasks can also be managed via the [Tasks list view](../../../../studio/database/tasks/ongoing-tasks/general-info#ongoing-tasks---view) in the Studio.
+
+* In this page:
+ * [Get ongoing task info](../../../../client-api/operations/maintenance/ongoing-tasks/ongoing-task-operations#get-ongoing-task-info)
+ * [Delete ongoing task](../../../../client-api/operations/maintenance/ongoing-tasks/ongoing-task-operations#delete-ongoing-task)
+ * [Toggle ongoing task state](../../../../client-api/operations/maintenance/ongoing-tasks/ongoing-task-operations#toggle-ongoing-task-state)
+ * [Syntax](../../../../client-api/operations/maintenance/ongoing-tasks/ongoing-task-operations#syntax)
+
+{NOTE/}
+
+{PANEL: Get ongoing task info}
+
+For the examples in this article, let's create a simple external replication ongoing task:
+
+{CODE:csharp create_task@ClientApi\Operations\Maintenance\OngoingTasks\OngoingTaskOperations.cs /}
+
+---
+
+Use `GetOngoingTaskInfoOperation` to get information about an ongoing task.
+
+{CODE-TABS}
+{CODE-TAB:csharp:Sync get@ClientApi\Operations\Maintenance\OngoingTasks\OngoingTaskOperations.cs /}
+{CODE-TAB:csharp:Async get_async@ClientApi\Operations\Maintenance\OngoingTasks\OngoingTaskOperations.cs /}
+{CODE-TABS/}
+
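+A minimal sketch of the synchronous call (not taken from the samples project; the task ID value is illustrative):
+
+{CODE-BLOCK:csharp}
+long taskId = 1; // illustrative - use the ID returned when the task was created
+
+// Get info about the external replication task
+var taskInfo = (OngoingTaskReplication)store.Maintenance.Send(
+    new GetOngoingTaskInfoOperation(taskId, OngoingTaskType.Replication));
+
+// e.g. inspect the task state and its destination database
+var taskState = taskInfo.TaskState;
+var destinationDatabase = taskInfo.DestinationDatabase;
+{CODE-BLOCK/}
+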
+{PANEL/}
+
+{PANEL: Delete ongoing task}
+
+Use `DeleteOngoingTaskOperation` to remove an ongoing task from the list of tasks assigned to the database.
+
+{CODE-TABS}
+{CODE-TAB:csharp:Sync delete@ClientApi\Operations\Maintenance\OngoingTasks\OngoingTaskOperations.cs /}
+{CODE-TAB:csharp:Async delete_async@ClientApi\Operations\Maintenance\OngoingTasks\OngoingTaskOperations.cs /}
+{CODE-TABS/}
+
+{PANEL/}
+
+{PANEL: Toggle ongoing task state}
+
+Use `ToggleOngoingTaskStateOperation` to enable or disable an ongoing task.
+
+{CODE-TABS}
+{CODE-TAB:csharp:Sync toggle@ClientApi\Operations\Maintenance\OngoingTasks\OngoingTaskOperations.cs /}
+{CODE-TAB:csharp:Async toggle_async@ClientApi\Operations\Maintenance\OngoingTasks\OngoingTaskOperations.cs /}
+{CODE-TABS/}
+
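+For example, disabling the replication task created above could look roughly like this (the task ID value is illustrative):
+
+{CODE-BLOCK:csharp}
+long taskId = 1; // illustrative - use the ID returned when the task was created
+
+// Pass 'true' to disable the task, 'false' to enable it again
+store.Maintenance.Send(
+    new ToggleOngoingTaskStateOperation(taskId, OngoingTaskType.Replication, true));
+{CODE-BLOCK/}
+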
+{PANEL/}
+
+{PANEL: Syntax}
+
+{CODE syntax_1@ClientApi\Operations\Maintenance\OngoingTasks\OngoingTaskOperations.cs /}
+
+{CODE syntax_2@ClientApi\Operations\Maintenance\OngoingTasks\OngoingTaskOperations.cs /}
+
+{CODE syntax_3@ClientApi\Operations\Maintenance\OngoingTasks\OngoingTaskOperations.cs /}
+
+| Parameter | Type | Description |
+|--------------|-------------------|--------------------------------------------------------|
+| **taskId** | `long` | Task ID |
+| **taskName** | `string` | Task name |
+| **taskType** | `OngoingTaskType` | Task type |
+| **disable**  | `bool`            | `true` - disable the task <br> `false` - enable the task |
+
+{CODE syntax_4@ClientApi\Operations\Maintenance\OngoingTasks\OngoingTaskOperations.cs /}
+
+| Return value of `store.Maintenance.Send(GetOngoingTaskInfoOperation)` | |
+|-------------------------------------------------------------------------|----------------------------------------|
+| `OngoingTaskReplication` | Object with information about the task |
+
+{CODE syntax_5@ClientApi\Operations\Maintenance\OngoingTasks\OngoingTaskOperations.cs /}
+
+{PANEL/}
diff --git a/Documentation/6.1/Raven.Documentation.Pages/client-api/operations/what-are-operations.dotnet.markdown b/Documentation/6.1/Raven.Documentation.Pages/client-api/operations/what-are-operations.dotnet.markdown
new file mode 100644
index 0000000000..af348d44cd
--- /dev/null
+++ b/Documentation/6.1/Raven.Documentation.Pages/client-api/operations/what-are-operations.dotnet.markdown
@@ -0,0 +1,442 @@
+# What are Operations
+
+---
+
+{NOTE: }
+
+* The RavenDB Client API is built with the notion of layers.
+ At the top, and what you will usually interact with, are the **[DocumentStore](../../client-api/what-is-a-document-store)**
+ and the **[Session](../../client-api/session/what-is-a-session-and-how-does-it-work)**.
+ They in turn are built on top of the lower-level **Operations** and **Commands** API.
+
+* **RavenDB provides direct access to this lower-level API**, allowing you to send requests
+ directly to the server via DocumentStore Operations instead of using the higher-level Session API.
+
+* In this page:
+ * [Why use operations](../../client-api/operations/what-are-operations#why-use-operations)
+ * [How operations work](../../client-api/operations/what-are-operations#how-operations-work)
+
+ * **Operation types**:
+ * [Common operations](../../client-api/operations/what-are-operations#common-operations)
+ * [Maintenance operations](../../client-api/operations/what-are-operations#maintenance-operations)
+ * [Server-maintenance operations](../../client-api/operations/what-are-operations#server-maintenance-operations)
+ * [Manage lengthy operations](../../client-api/operations/what-are-operations#manage-lengthy-operations)
+ * [Wait for completion](../../client-api/operations/what-are-operations#wait-for-completion)
+ * [Kill operation](../../client-api/operations/what-are-operations#kill-operation)
+
+{NOTE/}
+
+---
+
+{PANEL: Why use operations}
+
+* Operations provide **management functionality** that is Not available in the context of the session, for example:
+ * Create/delete a database
+ * Execute administrative tasks
+ * Assign permissions
+ * Change server configuration, and more.
+
+* The operations are executed on the DocumentStore and are Not part of the session transaction.
+
+* There are some client tasks, such as patching documents, that can be carried out either via the Session ([session.Advanced.Patch()](../../client-api/operations/patching/single-document#array-manipulation))
+ or via an Operation on the DocumentStore ([PatchOperation](../../client-api/operations/patching/single-document#operations-api)).
+
+{PANEL/}
+
+{PANEL: How operations work}
+
+* **Sending the request**:
+ Each Operation is an encapsulation of a `RavenCommand`.
+ The RavenCommand creates the HTTP request message to be sent to the relevant server endpoint.
+ The DocumentStore `OperationExecutor` sends the request and processes the results.
+* **Target node**:
+ By default, the operation will be executed on the server node that is defined by the [client configuration](../../client-api/configuration/load-balance/overview#client-logic-for-choosing-a-node).
+ However, server-maintenance operations can be executed on a specific node by using the [ForNode](../../client-api/operations/how-to/switch-operations-to-a-different-node) method.
+* **Target database**:
+ By default, operations work on the default database defined in the DocumentStore.
+  However, common operations & maintenance operations can operate on a different database by using the [ForDatabase](../../client-api/operations/how-to/switch-operations-to-a-different-database) method (see the sketch below).
+* **Transaction scope**:
+ Operations execute as a single-node transaction.
+ If needed, data will then replicate to the other nodes in the database-group.
+* **Background operations**:
+ Some operations may take a long time to complete and can be awaited for completion.
+ Learn more [below](../../client-api/operations/what-are-operations#wait-for-completion).
+
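+To illustrate the target-database and target-node options above, here is a brief sketch (the database name and node tag are illustrative):
+
+{CODE-BLOCK:csharp}
+// Execute a maintenance operation on a database other than the store's default
+var stats = store.Maintenance
+    .ForDatabase("AnotherDatabase")
+    .Send(new GetStatisticsOperation());
+
+// Execute a server-maintenance operation on a specific node
+var buildNumber = store.Maintenance.Server
+    .ForNode("A")
+    .Send(new GetBuildNumberOperation());
+{CODE-BLOCK/}
+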
+{PANEL/}
+
+{PANEL: Common operations}
+
+* All common operations implement the `IOperation` interface.
+ The operation is executed within the **database scope**.
+ Use [ForDatabase](../../client-api/operations/how-to/switch-operations-to-a-different-database) to operate on a specific database other than the default defined in the store.
+
+* These operations include set-based operations such as _PatchOperation_, _CounterBatchOperation_,
+ document-extensions related operations such as getting/putting an attachment, and more.
+ See all available operations [below](../../client-api/operations/what-are-operations#operations-list).
+
+* To execute a common operation request,
+ use the `Send` method on the `Operations` property in the DocumentStore.
+
+#### Example:
+
+{CODE-TABS}
+{CODE-TAB:csharp:Sync operations_ex@ClientApi\Operations\WhatAreOperations.cs /}
+{CODE-TAB:csharp:Async operations_ex_async@ClientApi\Operations\WhatAreOperations.cs /}
+{CODE-TABS/}
+
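+Another minimal sketch of a common operation (the document ID "users/1-A" and counter name "Likes" are illustrative):
+
+{CODE-BLOCK:csharp}
+// Get the value of the "Likes" counter on document "users/1-A"
+CountersDetail counters = store.Operations.Send(
+    new GetCountersOperation("users/1-A", "Likes"));
+
+long likes = counters.Counters[0].TotalValue;
+{CODE-BLOCK/}
+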
+##### Syntax:
+
+{CODE-TABS}
+{CODE-TAB:csharp:Sync operations_send@ClientApi\Operations\WhatAreOperations.cs /}
+{CODE-TAB:csharp:Async operations_send_async@ClientApi\Operations\WhatAreOperations.cs /}
+{CODE-TABS/}
+
+{NOTE: }
+
+ **The following common operations are available:**
+
+---
+
+* **Attachments**:
+ [PutAttachmentOperation](../../client-api/operations/attachments/put-attachment)
+ [GetAttachmentOperation](../../client-api/operations/attachments/get-attachment)
+ [DeleteAttachmentOperation](../../client-api/operations/attachments/delete-attachment)
+
+* **Counters**:
+ [CounterBatchOperation](../../client-api/operations/counters/counter-batch)
+ [GetCountersOperation](../../client-api/operations/counters/get-counters)
+
+* **Time series**:
+ [TimeSeriesBatchOperation](../../document-extensions/timeseries/client-api/operations/append-and-delete)
+ [GetMultipleTimeSeriesOperation](../../document-extensions/timeseries/client-api/operations/get)
+ [GetTimeSeriesOperation](../../document-extensions/timeseries/client-api/operations/get)
+ GetTimeSeriesStatisticsOperation
+
+* **Revisions**:
+ [GetRevisionsOperation](../../document-extensions/revisions/client-api/operations/get-revisions)
+
+* **Patching**:
+ [PatchOperation](../../client-api/operations/patching/single-document)
+ [PatchByQueryOperation](../../client-api/operations/patching/set-based)
+
+* **Delete by query**:
+ [DeleteByQueryOperation](../../client-api/operations/common/delete-by-query)
+
+* **Compare-exchange**:
+ [PutCompareExchangeValueOperation](../../client-api/operations/compare-exchange/put-compare-exchange-value)
+ [GetCompareExchangeValueOperation](../../client-api/operations/compare-exchange/get-compare-exchange-value)
+ [GetCompareExchangeValuesOperation](../../client-api/operations/compare-exchange/get-compare-exchange-values)
+ [DeleteCompareExchangeValueOperation](../../client-api/operations/compare-exchange/delete-compare-exchange-value)
+
+{NOTE/}
+{PANEL/}
+
+{PANEL: Maintenance operations}
+
+* All maintenance operations implement the `IMaintenanceOperation` interface.
+ The operation is executed within the **database scope**.
+ Use [ForDatabase](../../client-api/operations/how-to/switch-operations-to-a-different-database) to operate on a specific database other than the default defined in the store.
+
+* These operations include database management operations such as setting client configuration,
+ managing indexes & ongoing-tasks operations, getting stats, and more.
+ See all available maintenance operations [below](../../client-api/operations/what-are-operations#maintenance-list).
+
+* To execute a maintenance operation request,
+ use the `Send` method on the `Maintenance` property in the DocumentStore.
+
+#### Example:
+
+{CODE-TABS}
+{CODE-TAB:csharp:Sync maintenance_ex@ClientApi\Operations\WhatAreOperations.cs /}
+{CODE-TAB:csharp:Async maintenance_ex_async@ClientApi\Operations\WhatAreOperations.cs /}
+{CODE-TABS/}
+
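+For instance, a rough sketch of another maintenance call:
+
+{CODE-BLOCK:csharp}
+// Get the names of the first 10 indexes defined in the database
+string[] indexNames = store.Maintenance.Send(new GetIndexNamesOperation(0, 10));
+{CODE-BLOCK/}
+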
+##### Syntax:
+
+{CODE-TABS}
+{CODE-TAB:csharp:Sync maintenance_send@ClientApi\Operations\WhatAreOperations.cs /}
+{CODE-TAB:csharp:Async maintenance_send_async@ClientApi\Operations\WhatAreOperations.cs /}
+{CODE-TABS/}
+
+{NOTE: }
+
+ **The following maintenance operations are available:**
+
+---
+
+* **Statistics**:
+ [GetStatisticsOperation](../../client-api/operations/maintenance/get-stats#get-database-stats)
+ [GetDetailedStatisticsOperation](../../client-api/operations/maintenance/get-stats#get-detailed-database-stats)
+ [GetCollectionStatisticsOperation](../../client-api/operations/maintenance/get-stats#get-collection-stats)
+ [GetDetailedCollectionStatisticsOperation](../../client-api/operations/maintenance/get-stats#get-detailed-collection-stats)
+
+* **Client Configuration**:
+ [PutClientConfigurationOperation](../../client-api/operations/maintenance/configuration/put-client-configuration)
+ [GetClientConfigurationOperation](../../client-api/operations/maintenance/configuration/get-client-configuration)
+
+* **Indexes**:
+ [PutIndexesOperation](../../client-api/operations/maintenance/indexes/put-indexes)
+ [SetIndexesLockOperation](../../client-api/operations/maintenance/indexes/set-index-lock)
+ [SetIndexesPriorityOperation](../../client-api/operations/maintenance/indexes/set-index-priority)
+ [GetIndexErrorsOperation](../../client-api/operations/maintenance/indexes/get-index-errors)
+ [GetIndexOperation](../../client-api/operations/maintenance/indexes/get-index)
+ [GetIndexesOperation](../../client-api/operations/maintenance/indexes/get-indexes)
+ [GetTermsOperation](../../client-api/operations/maintenance/indexes/get-terms)
+ GetIndexPerformanceStatisticsOperation
+ GetIndexStatisticsOperation
+ GetIndexesStatisticsOperation
+ GetIndexingStatusOperation
+ GetIndexStalenessOperation
+ [GetIndexNamesOperation](../../client-api/operations/maintenance/indexes/get-index-names)
+ [StartIndexOperation](../../client-api/operations/maintenance/indexes/start-index)
+ [StartIndexingOperation](../../client-api/operations/maintenance/indexes/start-indexing)
+ [StopIndexOperation](../../client-api/operations/maintenance/indexes/stop-index)
+ [StopIndexingOperation](../../client-api/operations/maintenance/indexes/stop-indexing)
+ [ResetIndexOperation](../../client-api/operations/maintenance/indexes/reset-index)
+ [DeleteIndexOperation](../../client-api/operations/maintenance/indexes/delete-index)
+ [DeleteIndexErrorsOperation](../../client-api/operations/maintenance/indexes/delete-index-errors)
+ [DisableIndexOperation](../../client-api/operations/maintenance/indexes/disable-index)
+ [EnableIndexOperation](../../client-api/operations/maintenance/indexes/enable-index)
+ [IndexHasChangedOperation](../../client-api/operations/maintenance/indexes/index-has-changed)
+
+* **Analyzers**:
+ PutAnalyzersOperation
+ DeleteAnalyzerOperation
+
+* **Ongoing tasks**:
+ [GetOngoingTaskInfoOperation](../../client-api/operations/maintenance/ongoing-tasks/ongoing-task-operations#get-ongoing-task-info)
+ [ToggleOngoingTaskStateOperation](../../client-api/operations/maintenance/ongoing-tasks/ongoing-task-operations#toggle-ongoing-task-state)
+ [DeleteOngoingTaskOperation](../../client-api/operations/maintenance/ongoing-tasks/ongoing-task-operations#delete-ongoing-task)
+
+* **ETL tasks**:
+ AddEtlOperation
+ UpdateEtlOperation
+ [ResetEtlOperation](../../client-api/operations/maintenance/etl/reset-etl)
+
+* **Replication tasks**:
+ PutPullReplicationAsHubOperation
+ GetPullReplicationTasksInfoOperation
+ GetReplicationHubAccessOperation
+ GetReplicationPerformanceStatisticsOperation
+ RegisterReplicationHubAccessOperation
+ UnregisterReplicationHubAccessOperation
+ UpdateExternalReplicationOperation
+ UpdatePullReplicationAsSinkOperation
+
+* **Backup**:
+ BackupOperation
+ GetPeriodicBackupStatusOperation
+ StartBackupOperation
+ UpdatePeriodicBackupOperation
+
+* **Connection strings**:
+ [PutConnectionStringOperation](../../client-api/operations/maintenance/connection-strings/add-connection-string)
+ [RemoveConnectionStringOperation](../../client-api/operations/maintenance/connection-strings/remove-connection-string)
+ [GetConnectionStringsOperation](../../client-api/operations/maintenance/connection-strings/get-connection-string)
+
+* **Transaction recording**:
+ StartTransactionsRecordingOperation
+ StopTransactionsRecordingOperation
+ ReplayTransactionsRecordingOperation
+
+* **Database settings**:
+ [PutDatabaseSettingsOperation](../../client-api/operations/maintenance/configuration/database-settings-operation#put-database-settings-operation)
+ [GetDatabaseSettingsOperation](../../client-api/operations/maintenance/configuration/database-settings-operation#get-database-settings-operation)
+
+* **Identities**:
+ [GetIdentitiesOperation](../../client-api/operations/maintenance/identities/get-identities)
+ [NextIdentityForOperation](../../client-api/operations/maintenance/identities/increment-next-identity)
+ [SeedIdentityForOperation](../../client-api/operations/maintenance/identities/seed-identity)
+
+* **Time series**:
+ ConfigureTimeSeriesOperation
+ ConfigureTimeSeriesPolicyOperation
+ ConfigureTimeSeriesValueNamesOperation
+ RemoveTimeSeriesPolicyOperation
+
+* **Revisions**:
+ [ConfigureRevisionsOperation](../../document-extensions/revisions/client-api/operations/configure-revisions)
+
+* **Sorters**:
+ [PutSortersOperation](../../client-api/operations/maintenance/sorters/put-sorter)
+ DeleteSorterOperation
+
+* **Misc**:
+ ConfigureExpirationOperation
+ ConfigureRefreshOperation
+ UpdateDocumentsCompressionConfigurationOperation
+ DatabaseHealthCheckOperation
+ GetOperationStateOperation
+ CreateSampleDataOperation
+
+{NOTE/}
+{PANEL/}
+
+{PANEL: Server-maintenance operations}
+
+* All server-maintenance operations implement the `IServerOperation` interface.
+ The operation is executed within the **server scope**.
+ Use [ForNode](../../client-api/operations/how-to/switch-operations-to-a-different-node) to operate on a specific node other than the default defined in the client configuration.
+
+* These operations include server management and configuration operations.
+ See all available operations [below](../../client-api/operations/what-are-operations#server-list).
+
+* To execute a server-maintenance operation request,
+ use the `Send` method on the `Maintenance.Server` property in the DocumentStore.
+
+#### Example:
+
+{CODE-TABS}
+{CODE-TAB:csharp:Sync server_ex@ClientApi\Operations\WhatAreOperations.cs /}
+{CODE-TAB:csharp:Async server_ex_async@ClientApi\Operations\WhatAreOperations.cs /}
+{CODE-TABS/}
+
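+For instance, a rough sketch of a server-maintenance call that lists database names:
+
+{CODE-BLOCK:csharp}
+// Get up to 25 database names from the server
+string[] databaseNames = store.Maintenance.Server.Send(
+    new GetDatabaseNamesOperation(0, 25));
+{CODE-BLOCK/}
+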
+##### Syntax:
+
+{CODE-TABS}
+{CODE-TAB:csharp:Sync server_send@ClientApi\Operations\WhatAreOperations.cs /}
+{CODE-TAB:csharp:Async server_send_async@ClientApi\Operations\WhatAreOperations.cs /}
+{CODE-TABS/}
+
+{NOTE: }
+
+ **The following server-maintenance operations are available:**
+
+---
+
+* **Client certificates**:
+ [PutClientCertificateOperation](../../client-api/operations/server-wide/certificates/put-client-certificate)
+ [CreateClientCertificateOperation](../../client-api/operations/server-wide/certificates/create-client-certificate)
+ [GetCertificatesOperation](../../client-api/operations/server-wide/certificates/get-certificates)
+ [DeleteCertificateOperation](../../client-api/operations/server-wide/certificates/delete-certificate)
+ EditClientCertificateOperation
+ GetCertificateMetadataOperation
+ ReplaceClusterCertificateOperation
+
+* **Server-wide client configuration**:
+ [PutServerWideClientConfigurationOperation](../../client-api/operations/server-wide/configuration/put-serverwide-client-configuration)
+ [GetServerWideClientConfigurationOperation](../../client-api/operations/server-wide/configuration/get-serverwide-client-configuration)
+
+* **Database management**:
+ [CreateDatabaseOperation](../../client-api/operations/server-wide/create-database)
+ [DeleteDatabasesOperation](../../client-api/operations/server-wide/delete-database)
+ [ToggleDatabasesStateOperation](../../client-api/operations/server-wide/toggle-databases-state)
+ [GetDatabaseNamesOperation](../../client-api/operations/server-wide/get-database-names)
+ [AddDatabaseNodeOperation](../../client-api/operations/server-wide/add-database-node)
+ [PromoteDatabaseNodeOperation](../../client-api/operations/server-wide/promote-database-node)
+ [ReorderDatabaseMembersOperation](../../client-api/operations/server-wide/reorder-database-members)
+ [CompactDatabaseOperation](../../client-api/operations/server-wide/compact-database)
+ GetDatabaseRecordOperation
+ SetDatabasesLockOperation
+ CreateDatabaseOperationWithoutNameValidation
+ SetDatabaseDynamicDistributionOperation
+ ModifyDatabaseTopologyOperation
+ UpdateDatabaseOperation
+ UpdateUnusedDatabasesOperation
+
+* **Server-wide ongoing tasks**:
+ DeleteServerWideTaskOperation
+ ToggleServerWideTaskStateOperation
+
+* **Server-wide replication tasks**:
+ PutServerWideExternalReplicationOperation
+ GetServerWideExternalReplicationOperation
+ GetServerWideExternalReplicationsOperation
+
+* **Server-wide backup tasks**:
+ PutServerWideBackupConfigurationOperation
+ GetServerWideBackupConfigurationOperation
+ GetServerWideBackupConfigurationsOperation
+ RestoreBackupOperation
+
+* **Server-wide analyzers**:
+ PutServerWideAnalyzersOperation
+ DeleteServerWideAnalyzerOperation
+
+* **Server-wide sorters**:
+ [PutServerWideSortersOperation](../../client-api/operations/server-wide/sorters/put-sorter-server-wide)
+ DeleteServerWideSorterOperation
+
+* **Logs & debug**:
+ SetLogsConfigurationOperation
+ GetLogsConfigurationOperation
+ GetClusterDebugInfoPackageOperation
+ [GetBuildNumberOperation](../../client-api/operations/server-wide/get-build-number)
+ GetServerWideOperationStateOperation
+
+* **Traffic watch**:
+ PutTrafficWatchConfigurationOperation
+ GetTrafficWatchConfigurationOperation
+
+* **Revisions**:
+ [ConfigureRevisionsForConflictsOperation](../../document-extensions/revisions/client-api/operations/conflict-revisions-configuration)
+
+* **Misc**:
+ ModifyConflictSolverOperation
+ OfflineMigrationOperation
+
+{NOTE/}
+{PANEL/}
+
+{PANEL: Manage lengthy operations}
+
+* Some operations that run in the server background may take a long time to complete.
+
+* For operations that implement an interface with result type `OperationIdResult`,
+  executing the operation via the `Send` method returns an `Operation` object,
+  which can be **awaited for completion** or **aborted (killed)**.
+
+---
+
+#### Wait for completion:
+
+{CODE-TABS}
+{CODE-TAB:csharp:With_Timeout wait_timeout_ex@ClientApi\Operations\WhatAreOperations.cs /}
+{CODE-TAB:csharp:With_Timout_async wait_timeout_ex_async@ClientApi\Operations\WhatAreOperations.cs /}
+{CODE-TAB:csharp:With_CancellationToken wait_token_ex@ClientApi\Operations\WhatAreOperations.cs /}
+{CODE-TAB:csharp:With_CancellationToken_async wait_token_ex_async@ClientApi\Operations\WhatAreOperations.cs /}
+{CODE-TABS/}
+
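+The overall pattern looks roughly like this (the delete-by-query filter is illustrative):
+
+{CODE-BLOCK:csharp}
+// Send a lengthy operation - an 'Operation' object is returned
+Operation operation = store.Operations.Send(
+    new DeleteByQueryOperation("from Orders where Freight > 50"));
+
+// Wait up to 30 seconds for the operation to complete on the server
+operation.WaitForCompletion(TimeSpan.FromSeconds(30));
+{CODE-BLOCK/}
+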
+##### Syntax:
+
+{CODE-TABS}
+{CODE-TAB:csharp:Sync waitForCompletion_syntax@ClientApi\Operations\WhatAreOperations.cs /}
+{CODE-TAB:csharp:Async waitForCompletion_syntax_async@ClientApi\Operations\WhatAreOperations.cs /}
+{CODE-TABS/}
+
+| Parameter | Type | Description |
+|-------------|---------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| **timeout** | `TimeSpan` | **When a timespan is specified** - <br> The server will throw a `TimeoutException` if the operation has Not completed within the specified time frame. <br> The operation itself continues to run in the background; no rollback action takes place. <br> **When `null`** - <br> `WaitForCompletion` will wait for the operation to complete indefinitely. |
+| **token** | `CancellationToken` | **When a cancellation token is specified** - <br> The server will throw a `TimeoutException` if the operation has Not completed by the time of cancellation. <br> The operation itself continues to run in the background; no rollback action takes place. |
+
+| Return type | |
+|--------------------|-------------------------------|
+| `IOperationResult` | The operation result content. |
+
+#### Kill operation:
+
+{CODE-TABS}
+{CODE-TAB:csharp:Kill kill_ex@ClientApi\Operations\WhatAreOperations.cs /}
+{CODE-TAB:csharp:Kill_async kill_ex_async@ClientApi\Operations\WhatAreOperations.cs /}
+{CODE-TABS/}
+
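+Roughly, aborting a running operation looks like this (the patch script is illustrative):
+
+{CODE-BLOCK:csharp}
+// Send a lengthy patch-by-query operation
+Operation operation = store.Operations.Send(
+    new PatchByQueryOperation("from Orders update { this.Freight = 0; }"));
+
+// Abort the operation while it is still running on the server
+operation.Kill();
+{CODE-BLOCK/}
+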
+##### Syntax:
+
+{CODE kill_syntax@ClientApi\Operations\WhatAreOperations.cs /}
+
+| Parameter | Type | Description |
+|-------------|---------------------|----------------------------------------------------------------------|
+| **token**   | `CancellationToken` | An optional cancellation token that can be used to abort the `KillAsync` call |
+
+{PANEL/}
+
+## Related articles
+
+### Document Store
+
+- [What is a Document Store](../../client-api/what-is-a-document-store)
+
+### Operations
+
+- [How to Switch Operations to a Different Database](../../client-api/operations/how-to/switch-operations-to-a-different-database)
+- [How to Switch Operations to a Different Node](../../client-api/operations/how-to/switch-operations-to-a-different-node)
diff --git a/Documentation/6.1/Raven.Documentation.Pages/client-api/operations/what-are-operations.java.markdown b/Documentation/6.1/Raven.Documentation.Pages/client-api/operations/what-are-operations.java.markdown
new file mode 100644
index 0000000000..192129872d
--- /dev/null
+++ b/Documentation/6.1/Raven.Documentation.Pages/client-api/operations/what-are-operations.java.markdown
@@ -0,0 +1,145 @@
+# What are Operations
+
+The RavenDB client API is built with the notion of layers. At the top, and what you will usually interact with, are the **[DocumentStore](../../client-api/what-is-a-document-store)** and the **[DocumentSession](../../client-api/session/what-is-a-session-and-how-does-it-work)**.
+
+They, in turn, are built on top of the notion of Operations and Commands.
+
+Operations are an encapsulation of a set of low-level commands that are used to manipulate data, execute administrative tasks, and change the server configuration.
+
+They are available in the DocumentStore under the **operations()**, **maintenance()**, and **maintenance().server()** methods.
+
+{PANEL:Common Operations}
+
+Common operations include set-based operations for [Patching](../../client-api/operations/patching/set-based) or for removing documents by query (read more [here](../../client-api/operations/common/delete-by-query)).
+They also cover distributed [Compare Exchange](../../client-api/operations/compare-exchange/overview) operations and the management of [Attachments](../../client-api/operations/attachments/get-attachment) and [Counters](../../client-api/operations/counters/counter-batch).
+
+### How to Send an Operation
+
+To execute an operation, use the `send` or `sendAsync` methods. The available overloads are:
+{CODE-TABS}
+{CODE-TAB:java:Sync Client_Operations_api@ClientApi\Operations\WhatAreOperations.java /}
+{CODE-TAB:java:Async Client_Operations_api_async@ClientApi\Operations\WhatAreOperations.java /}
+{CODE-TABS/}
+
+### The following operations are available:
+
+#### Compare Exchange
+
+* [CompareExchange](../../client-api/operations/compare-exchange/overview)
+
+#### Attachments
+
+* [GetAttachmentOperation](../../client-api/operations/attachments/get-attachment)
+* [PutAttachmentOperation](../../client-api/operations/attachments/put-attachment)
+* [DeleteAttachmentOperation](../../client-api/operations/attachments/delete-attachment)
+
+#### Patching
+
+* [PatchByQueryOperation](../../client-api/operations/patching/set-based)
+* [PatchOperation](../../client-api/operations/patching/single-document)
+
+
+#### Counters
+
+* [CounterBatchOperation](../../client-api/operations/counters/counter-batch)
+* [GetCountersOperation](../../client-api/operations/counters/get-counters)
+
+
+#### Misc
+
+* [DeleteByQueryOperation](../../client-api/operations/common/delete-by-query)
+
+### Example - Get Attachment
+
+{CODE:java Client_Operations_1@ClientApi\Operations\WhatAreOperations.java /}
+
+{PANEL/}
+
+{PANEL:Maintenance Operations}
+
+Maintenance operations include operations for changing the database configuration at runtime and for managing indexes.
+
+### How to Send an Operation
+
+{CODE:java Maintenance_Operations_api@ClientApi\Operations\WhatAreOperations.java /}
+
+### The following maintenance operations are available:
+
+#### Client Configuration
+
+* [PutClientConfigurationOperation](../../client-api/operations/maintenance/configuration/put-client-configuration)
+* [GetClientConfigurationOperation](../../client-api/operations/maintenance/configuration/get-client-configuration)
+
+#### Indexing
+
+* [DeleteIndexOperation](../../client-api/operations/maintenance/indexes/delete-index)
+* [DisableIndexOperation](../../client-api/operations/maintenance/indexes/disable-index)
+* [EnableIndexOperation](../../client-api/operations/maintenance/indexes/enable-index)
+* [ResetIndexOperation](../../client-api/operations/maintenance/indexes/reset-index)
+* [SetIndexesLockOperation](../../client-api/operations/maintenance/indexes/set-index-lock)
+* [SetIndexesPriorityOperation](../../client-api/operations/maintenance/indexes/set-index-priority)
+* [StartIndexOperation](../../client-api/operations/maintenance/indexes/start-index)
+* [StartIndexingOperation](../../client-api/operations/maintenance/indexes/start-indexing)
+* [StopIndexOperation](../../client-api/operations/maintenance/indexes/stop-index)
+* [StopIndexingOperation](../../client-api/operations/maintenance/indexes/stop-indexing)
+* [GetIndexErrorsOperation](../../client-api/operations/maintenance/indexes/get-index-errors)
+* [GetIndexOperation](../../client-api/operations/maintenance/indexes/get-index)
+* [GetIndexesOperation](../../client-api/operations/maintenance/indexes/get-indexes)
+* [GetTermsOperation](../../client-api/operations/maintenance/indexes/get-terms)
+* [IndexHasChangedOperation](../../client-api/operations/maintenance/indexes/index-has-changed)
+* [PutIndexesOperation](../../client-api/operations/maintenance/indexes/put-indexes)
+
+#### Misc
+
+* [GetCollectionStatisticsOperation](../../client-api/operations/maintenance/get-stats)
+* [GetStatisticsOperation](../../client-api/operations/maintenance/get-stats)
+* [GetIdentitiesOperation](../../client-api/operations/maintenance/identities/get-identities)
+
+### Example - Stop Index
+
+{CODE:java Maintenance_Operations_1@ClientApi\Operations\WhatAreOperations.java /}
+
+{PANEL/}
+
+{PANEL:Server Operations}
+
+This type of operation covers various administrative tasks and miscellaneous server-wide configuration operations.
+
+### How to Send an Operation
+
+{CODE-TABS}
+{CODE-TAB:java:Sync Server_Operations_api@ClientApi\Operations\WhatAreOperations.java /}
+{CODE-TAB:java:Async Server_Operations_api_async@ClientApi\Operations\WhatAreOperations.java /}
+{CODE-TABS/}
+
+### The following server-wide operations are available:
+
+
+#### Cluster Management
+
+* [CreateDatabaseOperation](../../client-api/operations/server-wide/create-database)
+* [DeleteDatabasesOperation](../../client-api/operations/server-wide/delete-database)
+
+#### Miscellaneous
+
+* [GetDatabaseNamesOperation](../../client-api/operations/server-wide/get-database-names)
+
+### Example - Get Build Number
+
+{CODE:java Server_Operations_1@ClientApi\Operations\WhatAreOperations.java /}
+
+{PANEL/}
+
+## Remarks
+
+{NOTE By default, operations available via `store.operations` or `store.maintenance` work on the default database that was set up for that store. To switch operations to a different database available on that server, use the **[forDatabase](../../client-api/operations/how-to/switch-operations-to-a-different-database)** method. /}
+
+## Related articles
+
+### Document Store
+
+- [What is a Document Store](../../client-api/what-is-a-document-store)
+
+### Operations
+
+- [How to Switch Operations to a Different Database](../../client-api/operations/how-to/switch-operations-to-a-different-database)
diff --git a/Documentation/6.1/Raven.Documentation.Pages/client-api/operations/what-are-operations.js.markdown b/Documentation/6.1/Raven.Documentation.Pages/client-api/operations/what-are-operations.js.markdown
new file mode 100644
index 0000000000..7317085784
--- /dev/null
+++ b/Documentation/6.1/Raven.Documentation.Pages/client-api/operations/what-are-operations.js.markdown
@@ -0,0 +1,431 @@
+# What are Operations
+
+---
+
+{NOTE: }
+
+* The RavenDB Client API is built with the notion of layers.
+ At the top, and what you will usually interact with, are the **[DocumentStore](../../client-api/what-is-a-document-store)**
+ and the **[Session](../../client-api/session/what-is-a-session-and-how-does-it-work)**.
+ They in turn are built on top of the lower-level **Operations** and **Commands** API.
+
+* **RavenDB provides direct access to this lower-level API**, allowing you to send requests
+ directly to the server via DocumentStore Operations instead of using the higher-level Session API.
+
+* In this page:
+ * [Why use operations](../../client-api/operations/what-are-operations#why-use-operations)
+ * [How operations work](../../client-api/operations/what-are-operations#how-operations-work)
+
+ * __Operation types__:
+ * [Common operations](../../client-api/operations/what-are-operations#common-operations)
+ * [Maintenance operations](../../client-api/operations/what-are-operations#maintenance-operations)
+ * [Server-maintenance operations](../../client-api/operations/what-are-operations#server-maintenance-operations)
+ * [Manage lengthy operations](../../client-api/operations/what-are-operations#manage-lengthy-operations)
+ * [Wait for completion](../../client-api/operations/what-are-operations#wait-for-completion)
+ * [Kill operation](../../client-api/operations/what-are-operations#killOperation)
+
+{NOTE/}
+
+---
+
+{PANEL: Why use operations}
+
+* Operations provide __management functionality__ that is Not available in the context of the session, for example:
+ * Create/delete a database
+ * Execute administrative tasks
+ * Assign permissions
+ * Change server configuration, and more.
+
+* The operations are executed on the DocumentStore and are Not part of the session transaction.
+
+* There are some client tasks, such as patching documents, that can be carried out either via the Session ([session.advanced.patch()](../../client-api/operations/patching/single-document#array-manipulation))
+ or via an Operation on the DocumentStore ([PatchOperation](../../client-api/operations/patching/single-document#operations-api)).
+
+{PANEL/}
+
+{PANEL: How operations work}
+
+* __Sending the request__:
+ Each Operation creates an HTTP request message to be sent to the relevant server endpoint.
+ The DocumentStore `OperationExecutor` sends the request and processes the results.
+* __Target node__:
+ By default, the operation will be executed on the server node that is defined by the [client configuration](../../client-api/configuration/load-balance/overview#client-logic-for-choosing-a-node).
+ However, server-maintenance operations can be executed on a specific node by using the [forNode](../../client-api/operations/how-to/switch-operations-to-a-different-node) method.
+* __Target database__:
+ By default, operations work on the default database defined in the DocumentStore.
+ However, common operations & maintenance operations can operate on a different database by using the [forDatabase](../../client-api/operations/how-to/switch-operations-to-a-different-database) method.
+* __Transaction scope__:
+ Operations execute as a single-node transaction.
+ If needed, data will then replicate to the other nodes in the database-group.
+* __Background operations__:
+ Some operations may take a long time to complete and can be awaited for completion.
+ Learn more [below](../../client-api/operations/what-are-operations#wait-for-completion).
+
+{PANEL/}
+
+{PANEL: Common operations}
+
+{NOTE: }
+
+* All common operations implement the `IOperation` interface.
+ The operation is executed within the __database scope__.
+ Use [forDatabase](../../client-api/operations/how-to/switch-operations-to-a-different-database) to operate on a specific database other than the default defined in the store.
+
+* These operations include set-based operations such as _PatchOperation_, _CounterBatchOperation_,
+ document-extensions related operations such as getting/putting an attachment, and more.
+ See all available operations [below](../../client-api/operations/what-are-operations#operations-list).
+
+* To execute a common operation request,
+ use the `send` method on the `operations` property in the DocumentStore.
+
+__Example__:
+
+{CODE:nodejs operations_ex@client-api\operations\whatAreOperations.js /}
+
+{NOTE/}
+
+{NOTE: }
+
+__Send syntax__:
+
+{CODE:nodejs operations_send@client-api\operations\whatAreOperations.js /}
+
+{NOTE/}
+
+{NOTE: }
+
+ __The following common operations are available:__
+
+---
+
+* __Attachments__:
+ [PutAttachmentOperation](../../client-api/operations/attachments/put-attachment)
+ [GetAttachmentOperation](../../client-api/operations/attachments/get-attachment)
+ [DeleteAttachmentOperation](../../client-api/operations/attachments/delete-attachment)
+
+* __Counters__:
+ [CounterBatchOperation](../../client-api/operations/counters/counter-batch)
+ [GetCountersOperation](../../client-api/operations/counters/get-counters)
+
+* __Time series__:
+ TimeSeriesBatchOperation
+ GetMultipleTimeSeriesOperation
+ GetTimeSeriesOperation
+ GetTimeSeriesStatisticsOperation
+
+* __Revisions__:
+ [GetRevisionsOperation](../../document-extensions/revisions/client-api/operations/get-revisions)
+
+* __Patching__:
+ [PatchOperation](../../client-api/operations/patching/single-document)
+ [PatchByQueryOperation](../../client-api/operations/patching/set-based)
+
+* __Delete by query__:
+ [DeleteByQueryOperation](../../client-api/operations/common/delete-by-query)
+
+* __Compare-exchange__:
+ PutCompareExchangeValueOperation
+ GetCompareExchangeValueOperation
+ GetCompareExchangeValuesOperation
+ DeleteCompareExchangeValueOperation
+
+{NOTE/}
+{PANEL/}
+
+{PANEL: Maintenance operations}
+
+{NOTE: }
+
+* All maintenance operations implement the `IMaintenanceOperation` interface.
+ The operation is executed within the __database scope__.
+ Use [forDatabase](../../client-api/operations/how-to/switch-operations-to-a-different-database) to operate on a specific database other than the default defined in the store.
+
+* These operations include database management operations such as setting client configuration,
+ managing indexes & ongoing-tasks operations, getting stats, and more.
+ See all available maintenance operations [below](../../client-api/operations/what-are-operations#maintenance-list).
+
+* To execute a maintenance operation request,
+ use the `send` method on the `maintenance` property in the DocumentStore.
+
+__Example__:
+
+{CODE:nodejs maintenance_ex@client-api\operations\whatAreOperations.js /}
+
+{NOTE/}
+
+{NOTE: }
+
+__Send syntax__:
+
+{CODE:nodejs maintenance_send@client-api\operations\whatAreOperations.js /}
+
+{NOTE/}
+
+{NOTE: }
+
+ __The following maintenance operations are available:__
+
+---
+
+* __Statistics__:
+ [GetStatisticsOperation](../../client-api/operations/maintenance/get-stats#get-database-stats)
+ [GetDetailedStatisticsOperation](../../client-api/operations/maintenance/get-stats#get-detailed-database-stats)
+ [GetCollectionStatisticsOperation](../../client-api/operations/maintenance/get-stats#get-collection-stats)
+ [GetDetailedCollectionStatisticsOperation](../../client-api/operations/maintenance/get-stats#get-detailed-collection-stats)
+
+* __Client Configuration__:
+ [PutClientConfigurationOperation](../../client-api/operations/maintenance/configuration/put-client-configuration)
+ [GetClientConfigurationOperation](../../client-api/operations/maintenance/configuration/get-client-configuration)
+
+* __Indexes__:
+ [PutIndexesOperation](../../client-api/operations/maintenance/indexes/put-indexes)
+ [SetIndexesLockOperation](../../client-api/operations/maintenance/indexes/set-index-lock)
+ [SetIndexesPriorityOperation](../../client-api/operations/maintenance/indexes/set-index-priority)
+ [GetIndexErrorsOperation](../../client-api/operations/maintenance/indexes/get-index-errors)
+ [GetIndexOperation](../../client-api/operations/maintenance/indexes/get-index)
+ [GetIndexesOperation](../../client-api/operations/maintenance/indexes/get-indexes)
+ [GetTermsOperation](../../client-api/operations/maintenance/indexes/get-terms)
+ GetIndexPerformanceStatisticsOperation
+ GetIndexStatisticsOperation
+ GetIndexesStatisticsOperation
+ GetIndexingStatusOperation
+ GetIndexStalenessOperation
+ [GetIndexNamesOperation](../../client-api/operations/maintenance/indexes/get-index-names)
+ [StartIndexOperation](../../client-api/operations/maintenance/indexes/start-index)
+ [StartIndexingOperation](../../client-api/operations/maintenance/indexes/start-indexing)
+ [StopIndexOperation](../../client-api/operations/maintenance/indexes/stop-index)
+ [StopIndexingOperation](../../client-api/operations/maintenance/indexes/stop-indexing)
+ [ResetIndexOperation](../../client-api/operations/maintenance/indexes/reset-index)
+ [DeleteIndexOperation](../../client-api/operations/maintenance/indexes/delete-index)
+ [DeleteIndexErrorsOperation](../../client-api/operations/maintenance/indexes/delete-index-errors)
+ [DisableIndexOperation](../../client-api/operations/maintenance/indexes/disable-index)
+ [EnableIndexOperation](../../client-api/operations/maintenance/indexes/enable-index)
+ [IndexHasChangedOperation](../../client-api/operations/maintenance/indexes/index-has-changed)
+
+* __Analyzers__:
+ PutAnalyzersOperation
+ DeleteAnalyzerOperation
+
+* **Ongoing tasks**:
+ [GetOngoingTaskInfoOperation](../../client-api/operations/maintenance/ongoing-tasks/ongoing-task-operations#get-ongoing-task-info)
+ [ToggleOngoingTaskStateOperation](../../client-api/operations/maintenance/ongoing-tasks/ongoing-task-operations#toggle-ongoing-task-state)
+ [DeleteOngoingTaskOperation](../../client-api/operations/maintenance/ongoing-tasks/ongoing-task-operations#delete-ongoing-task)
+
+* __ETL tasks__:
+ AddEtlOperation
+ UpdateEtlOperation
+ [ResetEtlOperation](../../client-api/operations/maintenance/etl/reset-etl)
+
+* __Replication tasks__:
+ PutPullReplicationAsHubOperation
+ GetPullReplicationTasksInfoOperation
+ GetReplicationHubAccessOperation
+ GetReplicationPerformanceStatisticsOperation
+ RegisterReplicationHubAccessOperation
+ UnregisterReplicationHubAccessOperation
+ UpdateExternalReplicationOperation
+ UpdatePullReplicationAsSinkOperation
+
+* __Backup__:
+ BackupOperation
+ GetPeriodicBackupStatusOperation
+ StartBackupOperation
+ UpdatePeriodicBackupOperation
+
+* __Connection strings__:
+ [PutConnectionStringOperation](../../client-api/operations/maintenance/connection-strings/add-connection-string)
+ [RemoveConnectionStringOperation](../../client-api/operations/maintenance/connection-strings/remove-connection-string)
+ [GetConnectionStringsOperation](../../client-api/operations/maintenance/connection-strings/get-connection-string)
+
+* __Transaction recording__:
+ StartTransactionsRecordingOperation
+ StopTransactionsRecordingOperation
+ ReplayTransactionsRecordingOperation
+
+* __Database settings__:
+ [PutDatabaseSettingsOperation](../../client-api/operations/maintenance/configuration/database-settings-operation#put-database-settings-operation)
+ [GetDatabaseSettingsOperation](../../client-api/operations/maintenance/configuration/database-settings-operation#get-database-settings-operation)
+
+* __Identities__:
+ [GetIdentitiesOperation](../../client-api/operations/maintenance/identities/get-identities)
+ [NextIdentityForOperation](../../client-api/operations/maintenance/identities/increment-next-identity)
+ [SeedIdentityForOperation](../../client-api/operations/maintenance/identities/seed-identity)
+
+* __Time series__:
+ ConfigureTimeSeriesOperation
+ ConfigureTimeSeriesPolicyOperation
+ ConfigureTimeSeriesValueNamesOperation
+ RemoveTimeSeriesPolicyOperation
+
+* __Revisions__:
+ [ConfigureRevisionsOperation](../../document-extensions/revisions/client-api/operations/configure-revisions)
+
+* __Sorters__:
+ [PutSortersOperation](../../client-api/operations/maintenance/sorters/put-sorter)
+ DeleteSorterOperation
+
+* __Misc__:
+ ConfigureExpirationOperation
+ ConfigureRefreshOperation
+ UpdateDocumentsCompressionConfigurationOperation
+ DatabaseHealthCheckOperation
+ GetOperationStateOperation
+ CreateSampleDataOperation
+
+{NOTE/}
+{PANEL/}
+
+{PANEL: Server-maintenance operations}
+
+{NOTE: }
+
+* All server-maintenance operations implement the `IServerOperation` interface.
+ The operation is executed within the __server scope__.
+ Use [forNode](../../client-api/operations/how-to/switch-operations-to-a-different-node) to operate on a specific node other than the default defined in the client configuration.
+
+* These operations include server management and configuration operations.
+ See all available operations [below](../../client-api/operations/what-are-operations#server-list).
+
+* To execute a server-maintenance operation request,
+ use the `send` method on the `maintenance.server` property in the DocumentStore.
+
+__Example__:
+
+{CODE:nodejs server_ex@client-api\operations\whatAreOperations.js /}
+
+{NOTE/}
+
+{NOTE: }
+
+__Send syntax__:
+
+{CODE:nodejs server_send@client-api\operations\whatAreOperations.js /}
+
+{NOTE/}
+
+{NOTE: }
+
+ __The following server-maintenance operations are available:__
+
+---
+
+* __Client certificates__:
+ [PutClientCertificateOperation](../../client-api/operations/server-wide/certificates/put-client-certificate)
+ [CreateClientCertificateOperation](../../client-api/operations/server-wide/certificates/create-client-certificate)
+ [GetCertificatesOperation](../../client-api/operations/server-wide/certificates/get-certificates)
+ [DeleteCertificateOperation](../../client-api/operations/server-wide/certificates/delete-certificate)
+ EditClientCertificateOperation
+ GetCertificateMetadataOperation
+ ReplaceClusterCertificateOperation
+
+* __Server-wide client configuration__:
+ [PutServerWideClientConfigurationOperation](../../client-api/operations/server-wide/configuration/put-serverwide-client-configuration)
+ [GetServerWideClientConfigurationOperation](../../client-api/operations/server-wide/configuration/get-serverwide-client-configuration)
+
+* __Database management__:
+ [CreateDatabaseOperation](../../client-api/operations/server-wide/create-database)
+ [DeleteDatabasesOperation](../../client-api/operations/server-wide/delete-database)
+ [ToggleDatabasesStateOperation](../../client-api/operations/server-wide/toggle-databases-state)
+ [GetDatabaseNamesOperation](../../client-api/operations/server-wide/get-database-names)
+ [AddDatabaseNodeOperation](../../client-api/operations/server-wide/add-database-node)
+ [PromoteDatabaseNodeOperation](../../client-api/operations/server-wide/promote-database-node)
+ [ReorderDatabaseMembersOperation](../../client-api/operations/server-wide/reorder-database-members)
+ [CompactDatabaseOperation](../../client-api/operations/server-wide/compact-database)
+ GetDatabaseRecordOperation
+ SetDatabasesLockOperation
+ CreateDatabaseOperationWithoutNameValidation
+ SetDatabaseDynamicDistributionOperation
+ ModifyDatabaseTopologyOperation
+ UpdateDatabaseOperation
+ UpdateUnusedDatabasesOperation
+
+* __Server-wide ongoing tasks__:
+ DeleteServerWideTaskOperation
+ ToggleServerWideTaskStateOperation
+
+* __Server-wide replication tasks__:
+ PutServerWideExternalReplicationOperation
+ GetServerWideExternalReplicationOperation
+ GetServerWideExternalReplicationsOperation
+
+* __Server-wide backup tasks__:
+ PutServerWideBackupConfigurationOperation
+ GetServerWideBackupConfigurationOperation
+ GetServerWideBackupConfigurationsOperation
+ RestoreBackupOperation
+
+* __Server-wide analyzers__:
+ PutServerWideAnalyzersOperation
+ DeleteServerWideAnalyzerOperation
+
+* __Server-wide sorters__:
+ [PutServerWideSortersOperation](../../client-api/operations/server-wide/sorters/put-sorter-server-wide)
+ DeleteServerWideSorterOperation
+
+* __Logs & debug__:
+ SetLogsConfigurationOperation
+ GetLogsConfigurationOperation
+ GetClusterDebugInfoPackageOperation
+ [GetBuildNumberOperation](../../client-api/operations/server-wide/get-build-number)
+ GetServerWideOperationStateOperation
+
+* __Traffic watch__:
+ PutTrafficWatchConfigurationOperation
+ GetTrafficWatchConfigurationOperation
+
+* __Revisions__:
+ [ConfigureRevisionsForConflictsOperation](../../document-extensions/revisions/client-api/operations/conflict-revisions-configuration)
+
+* __Misc__:
+ ModifyConflictSolverOperation
+ OfflineMigrationOperation
+
+{NOTE/}
+{PANEL/}
+
+{PANEL: Manage lengthy operations}
+
+* Some operations that run in the server background may take a long time to complete.
+
+* For operations that implement an interface with the `OperationIdResult` type,
+  executing the operation via the `send` method returns a promise of an `OperationCompletionAwaiter` object,
+  which can then be __awaited for completion__ or __aborted (killed)__.
+
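+For illustration, here is a minimal sketch of this flow (assuming an existing `documentStore` and a long-running `PatchByQueryOperation`; the full examples follow below):
+
+{CODE-BLOCK:javascript}
+const { PatchByQueryOperation } = require("ravendb");
+
+async function patchAllOrders(documentStore) {
+    // Sending an operation that returns an OperationIdResult
+    // yields an OperationCompletionAwaiter object
+    const operation = await documentStore.operations.send(
+        new PatchByQueryOperation("from Orders update { this.Processed = true; }"));
+
+    // Either wait for the server-side operation to complete...
+    await operation.waitForCompletion();
+
+    // ...or abort (kill) it while it is still running:
+    // await operation.kill();
+}
+{CODE-BLOCK/}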
+---
+
+{NOTE: }
+
+ __Wait for completion__:
+
+{CODE:nodejs wait_ex@client-api\operations\whatAreOperations.js /}
+
+{NOTE/}
+
+{NOTE: }
+
+ __Kill operation__:
+
+{CODE:nodejs kill_ex@client-api\operations\whatAreOperations.js /}
+
+{NOTE/}
+
+{NOTE: }
+
+##### Syntax:
+
+{CODE:nodejs wait_kill_syntax@client-api\operations\whatAreOperations.js /}
+
+{NOTE/}
+
+{PANEL/}
+
+## Related articles
+
+### Document Store
+
+- [What is a Document Store](../../client-api/what-is-a-document-store)
+
+### Operations
+
+- [How to Switch Operations to a Different Database](../../client-api/operations/how-to/switch-operations-to-a-different-database)
+- [How to Switch Operations to a Different Node](../../client-api/operations/how-to/switch-operations-to-a-different-node)
diff --git a/Documentation/6.1/Raven.Documentation.Pages/client-api/operations/what-are-operations.python.markdown b/Documentation/6.1/Raven.Documentation.Pages/client-api/operations/what-are-operations.python.markdown
new file mode 100644
index 0000000000..f1c36683a7
--- /dev/null
+++ b/Documentation/6.1/Raven.Documentation.Pages/client-api/operations/what-are-operations.python.markdown
@@ -0,0 +1,373 @@
+# What are Operations
+
+---
+
+{NOTE: }
+
+* The RavenDB Client API is built with the notion of layers.
+ At the top, and what you will usually interact with, are the **[documentStore](../../client-api/what-is-a-document-store)**
+ and the **[session](../../client-api/session/what-is-a-session-and-how-does-it-work)**.
+ They in turn are built on top of the lower-level **Operations** and **Commands** API.
+
+* **RavenDB provides direct access to this lower-level API**, allowing you to send requests
+ directly to the server via DocumentStore Operations instead of using the higher-level Session API.
+
+* In this page:
+ * [Why use operations](../../client-api/operations/what-are-operations#why-use-operations)
+ * [How operations work](../../client-api/operations/what-are-operations#how-operations-work)
+
+ * **Operation types**:
+ * [Common operations](../../client-api/operations/what-are-operations#common-operations)
+ * [Maintenance operations](../../client-api/operations/what-are-operations#maintenance-operations)
+ * [Server-maintenance operations](../../client-api/operations/what-are-operations#server-maintenance-operations)
+ * [Manage lengthy operations](../../client-api/operations/what-are-operations#manage-lengthy-operations)
+ * [Wait for completion](../../client-api/operations/what-are-operations#wait-for-completion)
+ * [Kill operation](../../client-api/operations/what-are-operations#kill-operation)
+
+{NOTE/}
+
+---
+
+{PANEL: Why use operations}
+
+* Operations provide **management functionality** that is Not available in the context of the session, for example:
+ * Create/delete a database
+ * Execute administrative tasks
+ * Assign permissions
+ * Change server configuration, and more.
+
+* The operations are executed on the DocumentStore and are Not part of the session transaction.
+
+* There are some client tasks, such as patching documents, that can be carried out either via the Session
+ ([session.advanced.patch()](../../client-api/operations/patching/single-document#array-manipulation))
+ or via an Operation on the DocumentStore ([PatchOperation](../../client-api/operations/patching/single-document#operations-api)).
+
+{PANEL/}
+
+{PANEL: How operations work}
+
+* **Sending the request**:
+ Each Operation is an encapsulation of a `RavenCommand`.
+ The RavenCommand creates the HTTP request message to be sent to the relevant server endpoint.
+ The DocumentStore `OperationExecutor` sends the request and processes the results.
+* **Target node**:
+ By default, the operation will be executed on the server node that is defined by the [client configuration](../../client-api/configuration/load-balance/overview#client-logic-for-choosing-a-node).
+ However, server-maintenance operations can be executed on a specific node by using the [for_node](../../client-api/operations/how-to/switch-operations-to-a-different-node) method.
+* **Target database**:
+ By default, operations work on the default database defined in the DocumentStore.
+ However, common operations & maintenance operations can operate on a different database by using the [for_database](../../client-api/operations/how-to/switch-operations-to-a-different-database) method.
+* **Transaction scope**:
+ Operations execute as a single-node transaction.
+ If needed, data will then replicate to the other nodes in the database-group.
+* **Background operations**:
+ Some operations may take a long time to complete and can be awaited for completion.
+ Learn more [below](../../client-api/operations/what-are-operations#wait-for-completion).
+
+{PANEL/}
+
+{PANEL: Common operations}
+
+* All common operations implement the `IOperation` interface.
+ The operation is executed within the **database scope**.
+ Use [for_database](../../client-api/operations/how-to/switch-operations-to-a-different-database) to operate on a specific database other than the default defined in the store.
+
+* These operations include set-based operations such as _PatchOperation_, _CounterBatchOperation_,
+ document-extensions related operations such as getting/putting an attachment, and more.
+ See all available operations [below](../../client-api/operations/what-are-operations#operations-list).
+
+* To execute a common operation request,
+ use the `send` method on the `operations` property in the DocumentStore.
+
+#### Example:
+
+{CODE:python operations_ex@ClientApi\Operations\WhatAreOperations.py /}
+
+##### Syntax:
+
+{CODE:python operations_send@ClientApi\Operations\WhatAreOperations.py /}
+
+{NOTE: }
+
+ **The following common operations are available:**
+
+---
+
+* **Attachments**:
+ [PutAttachmentOperation](../../client-api/operations/attachments/put-attachment)
+ [GetAttachmentOperation](../../client-api/operations/attachments/get-attachment)
+ [DeleteAttachmentOperation](../../client-api/operations/attachments/delete-attachment)
+
+* **Counters**:
+ [CounterBatchOperation](../../client-api/operations/counters/counter-batch)
+ [GetCountersOperation](../../client-api/operations/counters/get-counters)
+
+* **Time series**:
+ [TimeSeriesBatchOperation](../../document-extensions/timeseries/client-api/operations/append-and-delete)
+ [GetMultipleTimeSeriesOperation](../../document-extensions/timeseries/client-api/operations/get)
+ [GetTimeSeriesOperation](../../document-extensions/timeseries/client-api/operations/get)
+ GetTimeSeriesStatisticsOperation
+
+* **Revisions**:
+ [GetRevisionsOperation](../../document-extensions/revisions/client-api/operations/get-revisions)
+
+* **Patching**:
+ [PatchOperation](../../client-api/operations/patching/single-document)
+ [PatchByQueryOperation](../../client-api/operations/patching/set-based)
+
+* **Delete by query**:
+ [DeleteByQueryOperation](../../client-api/operations/common/delete-by-query)
+
+* **Compare-exchange**:
+ [PutCompareExchangeValueOperation](../../client-api/operations/compare-exchange/put-compare-exchange-value)
+ [GetCompareExchangeValueOperation](../../client-api/operations/compare-exchange/get-compare-exchange-value)
+ [GetCompareExchangeValuesOperation](../../client-api/operations/compare-exchange/get-compare-exchange-values)
+ [DeleteCompareExchangeValueOperation](../../client-api/operations/compare-exchange/delete-compare-exchange-value)
+
+{NOTE/}
+{PANEL/}
+
+{PANEL: Maintenance operations}
+
+* All maintenance operations implement the `IMaintenanceOperation` interface.
+ The operation is executed within the **database scope**.
+ Use [for_database](../../client-api/operations/how-to/switch-operations-to-a-different-database) to operate on a specific database other than the default defined in the store.
+
+* These operations include database management operations such as setting client configuration,
+ managing indexes & ongoing-tasks operations, getting stats, and more.
+ See all available maintenance operations [below](../../client-api/operations/what-are-operations#maintenance-list).
+
+* To execute a maintenance operation request,
+ use the `send` method on the `maintenance` property in the DocumentStore.
+
+#### Example:
+
+{CODE:python maintenance_ex@ClientApi\Operations\WhatAreOperations.py /}
+
+##### Syntax:
+
+{CODE:python maintenance_send@ClientApi\Operations\WhatAreOperations.py /}
+
+{NOTE: }
+
+ **The following maintenance operations are available:**
+
+---
+
+* **Statistics**:
+ [GetStatisticsOperation](../../client-api/operations/maintenance/get-stats#get-database-stats)
+ [GetDetailedStatisticsOperation](../../client-api/operations/maintenance/get-stats#get-detailed-database-stats)
+ [GetCollectionStatisticsOperation](../../client-api/operations/maintenance/get-stats#get-collection-stats)
+ [GetDetailedCollectionStatisticsOperation](../../client-api/operations/maintenance/get-stats#get-detailed-collection-stats)
+
+* **Client Configuration**:
+ [PutClientConfigurationOperation](../../client-api/operations/maintenance/configuration/put-client-configuration)
+ [GetClientConfigurationOperation](../../client-api/operations/maintenance/configuration/get-client-configuration)
+
+* **Indexes**:
+ [PutIndexesOperation](../../client-api/operations/maintenance/indexes/put-indexes)
+ [SetIndexesLockOperation](../../client-api/operations/maintenance/indexes/set-index-lock)
+ [SetIndexesPriorityOperation](../../client-api/operations/maintenance/indexes/set-index-priority)
+ [GetIndexErrorsOperation](../../client-api/operations/maintenance/indexes/get-index-errors)
+ [GetIndexOperation](../../client-api/operations/maintenance/indexes/get-index)
+ [GetIndexesOperation](../../client-api/operations/maintenance/indexes/get-indexes)
+ [GetTermsOperation](../../client-api/operations/maintenance/indexes/get-terms)
+ GetIndexPerformanceStatisticsOperation
+ GetIndexStatisticsOperation
+ GetIndexesStatisticsOperation
+ GetIndexingStatusOperation
+ GetIndexStalenessOperation
+ [GetIndexNamesOperation](../../client-api/operations/maintenance/indexes/get-index-names)
+ [StartIndexOperation](../../client-api/operations/maintenance/indexes/start-index)
+ [StartIndexingOperation](../../client-api/operations/maintenance/indexes/start-indexing)
+ [StopIndexOperation](../../client-api/operations/maintenance/indexes/stop-index)
+ [StopIndexingOperation](../../client-api/operations/maintenance/indexes/stop-indexing)
+ [ResetIndexOperation](../../client-api/operations/maintenance/indexes/reset-index)
+ [DeleteIndexOperation](../../client-api/operations/maintenance/indexes/delete-index)
+ [DeleteIndexErrorsOperation](../../client-api/operations/maintenance/indexes/delete-index-errors)
+ [DisableIndexOperation](../../client-api/operations/maintenance/indexes/disable-index)
+ [EnableIndexOperation](../../client-api/operations/maintenance/indexes/enable-index)
+ [IndexHasChangedOperation](../../client-api/operations/maintenance/indexes/index-has-changed)
+
+* **Analyzers**:
+ PutAnalyzersOperation
+ DeleteAnalyzerOperation
+
+* **Ongoing tasks**:
+ [GetOngoingTaskInfoOperation](../../client-api/operations/maintenance/ongoing-tasks/ongoing-task-operations#get-ongoing-task-info)
+ [ToggleOngoingTaskStateOperation](../../client-api/operations/maintenance/ongoing-tasks/ongoing-task-operations#toggle-ongoing-task-state)
+ [DeleteOngoingTaskOperation](../../client-api/operations/maintenance/ongoing-tasks/ongoing-task-operations#delete-ongoing-task)
+
+* **ETL tasks**:
+ AddEtlOperation
+ UpdateEtlOperation
+ [ResetEtlOperation](../../client-api/operations/maintenance/etl/reset-etl)
+
+* **Replication tasks**:
+ PutPullReplicationAsHubOperation
+ GetPullReplicationTasksInfoOperation
+ GetReplicationHubAccessOperation
+ GetReplicationPerformanceStatisticsOperation
+ RegisterReplicationHubAccessOperation
+ UnregisterReplicationHubAccessOperation
+ UpdateExternalReplicationOperation
+ UpdatePullReplicationAsSinkOperation
+
+* **Backup**:
+ BackupOperation
+ GetPeriodicBackupStatusOperation
+ StartBackupOperation
+ UpdatePeriodicBackupOperation
+
+* **Connection strings**:
+ [PutConnectionStringOperation](../../client-api/operations/maintenance/connection-strings/add-connection-string)
+ [RemoveConnectionStringOperation](../../client-api/operations/maintenance/connection-strings/remove-connection-string)
+ [GetConnectionStringsOperation](../../client-api/operations/maintenance/connection-strings/get-connection-string)
+
+* **Transaction recording**:
+ StartTransactionsRecordingOperation
+ StopTransactionsRecordingOperation
+ ReplayTransactionsRecordingOperation
+
+* **Database settings**:
+ [PutDatabaseSettingsOperation](../../client-api/operations/maintenance/configuration/database-settings-operation#put-database-settings-operation)
+ [GetDatabaseSettingsOperation](../../client-api/operations/maintenance/configuration/database-settings-operation#get-database-settings-operation)
+
+* **Identities**:
+ [GetIdentitiesOperation](../../client-api/operations/maintenance/identities/get-identities)
+ [NextIdentityForOperation](../../client-api/operations/maintenance/identities/increment-next-identity)
+ [SeedIdentityForOperation](../../client-api/operations/maintenance/identities/seed-identity)
+
+* **Time series**:
+ ConfigureTimeSeriesOperation
+ ConfigureTimeSeriesPolicyOperation
+ ConfigureTimeSeriesValueNamesOperation
+ RemoveTimeSeriesPolicyOperation
+
+* **Revisions**:
+ [ConfigureRevisionsOperation](../../document-extensions/revisions/client-api/operations/configure-revisions)
+
+* **Sorters**:
+ [PutSortersOperation](../../client-api/operations/maintenance/sorters/put-sorter)
+ DeleteSorterOperation
+
+* **Misc**:
+ ConfigureExpirationOperation
+ ConfigureRefreshOperation
+ UpdateDocumentsCompressionConfigurationOperation
+ DatabaseHealthCheckOperation
+ GetOperationStateOperation
+ CreateSampleDataOperation
+
+{NOTE/}
+{PANEL/}
+
+{PANEL: Server-maintenance operations}
+
+* All server-maintenance operations implement the `IServerOperation` interface.
+ The operation is executed within the **server scope**.
+ Use [for_node](../../client-api/operations/how-to/switch-operations-to-a-different-node) to operate on a specific node other than the default defined in the client configuration.
+
+* These operations include server management and configuration operations.
+ See all available operations [below](../../client-api/operations/what-are-operations#server-list).
+
+* To execute a server-maintenance operation request,
+ use the `send` method on the `maintenance.server` property in the DocumentStore.
+
+#### Example:
+
+{CODE:python server_ex@ClientApi\Operations\WhatAreOperations.py /}
+
+##### Syntax:
+
+{CODE:python server_send@ClientApi\Operations\WhatAreOperations.py /}
+
+{NOTE: }
+
+ **The following server-maintenance operations are available:**
+
+---
+
+* **Client certificates**:
+ [PutClientCertificateOperation](../../client-api/operations/server-wide/certificates/put-client-certificate)
+ [CreateClientCertificateOperation](../../client-api/operations/server-wide/certificates/create-client-certificate)
+ [GetCertificatesOperation](../../client-api/operations/server-wide/certificates/get-certificates)
+ [DeleteCertificateOperation](../../client-api/operations/server-wide/certificates/delete-certificate)
+ EditClientCertificateOperation
+ GetCertificateMetadataOperation
+ ReplaceClusterCertificateOperation
+
+* **Server-wide client configuration**:
+ [PutServerWideClientConfigurationOperation](../../client-api/operations/server-wide/configuration/put-serverwide-client-configuration)
+ [GetServerWideClientConfigurationOperation](../../client-api/operations/server-wide/configuration/get-serverwide-client-configuration)
+
+* **Database management**:
+ [CreateDatabaseOperation](../../client-api/operations/server-wide/create-database)
+ [DeleteDatabasesOperation](../../client-api/operations/server-wide/delete-database)
+ [ToggleDatabasesStateOperation](../../client-api/operations/server-wide/toggle-databases-state)
+ [GetDatabaseNamesOperation](../../client-api/operations/server-wide/get-database-names)
+ [AddDatabaseNodeOperation](../../client-api/operations/server-wide/add-database-node)
+ [PromoteDatabaseNodeOperation](../../client-api/operations/server-wide/promote-database-node)
+ [ReorderDatabaseMembersOperation](../../client-api/operations/server-wide/reorder-database-members)
+ [CompactDatabaseOperation](../../client-api/operations/server-wide/compact-database)
+ GetDatabaseRecordOperation
+ SetDatabasesLockOperation
+ CreateDatabaseOperationWithoutNameValidation
+ SetDatabaseDynamicDistributionOperation
+ ModifyDatabaseTopologyOperation
+ UpdateDatabaseOperation
+ UpdateUnusedDatabasesOperation
+
+* **Server-wide ongoing tasks**:
+ DeleteServerWideTaskOperation
+ ToggleServerWideTaskStateOperation
+
+* **Server-wide replication tasks**:
+ PutServerWideExternalReplicationOperation
+ GetServerWideExternalReplicationOperation
+ GetServerWideExternalReplicationsOperation
+
+* **Server-wide backup tasks**:
+ PutServerWideBackupConfigurationOperation
+ GetServerWideBackupConfigurationOperation
+ GetServerWideBackupConfigurationsOperation
+ RestoreBackupOperation
+
+* **Server-wide analyzers**:
+ PutServerWideAnalyzersOperation
+ DeleteServerWideAnalyzerOperation
+
+* **Server-wide sorters**:
+ [PutServerWideSortersOperation](../../client-api/operations/server-wide/sorters/put-sorter-server-wide)
+ DeleteServerWideSorterOperation
+
+* **Logs & debug**:
+ SetLogsConfigurationOperation
+ GetLogsConfigurationOperation
+ GetClusterDebugInfoPackageOperation
+ [GetBuildNumberOperation](../../client-api/operations/server-wide/get-build-number)
+ GetServerWideOperationStateOperation
+
+* **Traffic watch**:
+ PutTrafficWatchConfigurationOperation
+ GetTrafficWatchConfigurationOperation
+
+* **Revisions**:
+ [ConfigureRevisionsForConflictsOperation](../../document-extensions/revisions/client-api/operations/conflict-revisions-configuration)
+
+* **Misc**:
+ ModifyConflictSolverOperation
+ OfflineMigrationOperation
+
+{NOTE/}
+{PANEL/}
+
+## Related articles
+
+### Document Store
+
+- [What is a Document Store](../../client-api/what-is-a-document-store)
+
+### Operations
+
+- [How to Switch Operations to a Different Database](../../client-api/operations/how-to/switch-operations-to-a-different-database)
+- [How to Switch Operations to a Different Node](../../client-api/operations/how-to/switch-operations-to-a-different-node)
diff --git a/Documentation/6.1/Raven.Documentation.Pages/server/configuration/etl-configuration.markdown b/Documentation/6.1/Raven.Documentation.Pages/server/configuration/etl-configuration.markdown
new file mode 100644
index 0000000000..8a22a1f70d
--- /dev/null
+++ b/Documentation/6.1/Raven.Documentation.Pages/server/configuration/etl-configuration.markdown
@@ -0,0 +1,133 @@
+# Configuration: ETL Options
+---
+
+{NOTE: }
+
+* In this page:
+ * [ETL.ExtractAndTransformTimeoutInSec](../../server/configuration/etl-configuration#etl.extractandtransformtimeoutinsec)
+ * [ETL.MaxBatchSizeInMb](../../server/configuration/etl-configuration#etl.maxbatchsizeinmb)
+ * [ETL.MaxFallbackTimeInSec](../../server/configuration/etl-configuration#etl.maxfallbacktimeinsec)
+ * [ETL.MaxNumberOfExtractedDocuments](../../server/configuration/etl-configuration#etl.maxnumberofextracteddocuments)
+ * [ETL.MaxNumberOfExtractedItems](../../server/configuration/etl-configuration#etl.maxnumberofextracteditems)
+ * [ETL.OLAP.MaxNumberOfExtractedDocuments](../../server/configuration/etl-configuration#etl.olap.maxnumberofextracteddocuments)
+ * [ETL.Queue.AzureQueueStorage.TimeToLiveInSec](../../server/configuration/etl-configuration#etl.queue.azurequeuestorage.timetoliveinsec)
+ * [ETL.Queue.AzureQueueStorage.VisibilityTimeoutInSec](../../server/configuration/etl-configuration#etl.queue.azurequeuestorage.visibilitytimeoutinsec)
+ * [ETL.Queue.Kafka.InitTransactionsTimeoutInSec](../../server/configuration/etl-configuration#etl.queue.kafka.inittransactionstimeoutinsec)
+ * [ETL.SQL.CommandTimeoutInSec](../../server/configuration/etl-configuration#etl.sql.commandtimeoutinsec)
+
+{NOTE/}
+
+---
+
+{PANEL: ETL.ExtractAndTransformTimeoutInSec}
+
+Number of seconds after which extraction and transformation will end and loading will start.
+
+- **Type**: `int`
+- **Default**: `30`
+- **Scope**: Server-wide or per database
+
+{PANEL/}
+
+{PANEL: ETL.MaxNumberOfExtractedDocuments}
+
+* Max number of extracted documents in an ETL batch.
+* If the value is not set, or is set to null, the number of extracted documents falls back to the `ETL.MaxNumberOfExtractedItems` value.
+
+---
+
+- **Type**: `int`
+- **Default**: `8192`
+- **Scope**: Server-wide or per database
+
+{PANEL/}
+
+{PANEL: ETL.MaxBatchSizeInMb}
+
+* Maximum size in megabytes of a batch of data (documents and attachments) that will be sent to the destination as a single batch after transformation.
+* If the value is not set, or is set to null, the batch size is not limited.
+
+---
+
+- **Type**: `Size`
+- **Size Unit**: `Megabytes`
+- **Default**: `64`
+- **Scope**: Server-wide or per database
+
+{PANEL/}
+
+{PANEL: ETL.MaxFallbackTimeInSec}
+
+* Maximum number of seconds the ETL process will be in a fallback mode after a load connection failure to a destination.
+* The fallback mode means suspending the process.
+
+---
+
+- **Type**: `int`
+- **Default**: `900`
+- **Scope**: Server-wide or per database
+
+{PANEL/}
+
+{PANEL: ETL.MaxNumberOfExtractedItems}
+
+* Max number of extracted items (documents, counters, etc) in an ETL batch.
+* If the value is not set, or is set to null, the number of extracted items in an ETL batch is not limited.
+
+---
+
+- **Type**: `int`
+- **Default**: `8192`
+- **Scope**: Server-wide or per database
+
+{PANEL/}
+
+{PANEL: ETL.OLAP.MaxNumberOfExtractedDocuments}
+
+Max number of extracted documents in an OLAP ETL batch.
+
+- **Type**: `int`
+- **Default**: `64 * 1024`
+- **Scope**: Server-wide or per database
+
+{PANEL/}
+
+{PANEL: ETL.Queue.AzureQueueStorage.TimeToLiveInSec}
+
+Lifespan of a message in the queue.
+
+- **Type**: `int`
+- **Default**: `604800` (7 days)
+- **Scope**: Server-wide or per database
+
+{PANEL/}
+
+{PANEL: ETL.Queue.AzureQueueStorage.VisibilityTimeoutInSec}
+
+How long a message is hidden after being retrieved but not deleted.
+
+- **Type**: `int`
+- **Default**: `0`
+- **Scope**: Server-wide or per database
+
+{PANEL/}
+
+{PANEL: ETL.Queue.Kafka.InitTransactionsTimeoutInSec}
+
+Timeout to initialize transactions for the Kafka producer.
+
+- **Type**: `int`
+- **Default**: `60`
+- **Scope**: Server-wide or per database
+
+{PANEL/}
+
+{PANEL: ETL.SQL.CommandTimeoutInSec}
+
+Number of seconds after which the SQL command will timeout.
+
+- **Type**: `int`
+- **Default**: `null` (use provider default)
+- **Scope**: Server-wide or per database
+
+{PANEL/}
diff --git a/Documentation/6.1/Raven.Documentation.Pages/server/ongoing-tasks/.docs.json b/Documentation/6.1/Raven.Documentation.Pages/server/ongoing-tasks/.docs.json
index d536290f32..a42106293a 100644
--- a/Documentation/6.1/Raven.Documentation.Pages/server/ongoing-tasks/.docs.json
+++ b/Documentation/6.1/Raven.Documentation.Pages/server/ongoing-tasks/.docs.json
@@ -1,12 +1,8 @@
[
{
- "Path": "/etl",
- "Name": "ETL",
- "Mappings": []
- },
- {
- "Path": "/queue-sink",
- "Name": "Queue Sink",
+ "Path": "external-replication.markdown",
+ "Name": "External Replication",
+ "DiscussionId": "70d71f23-2b91-4c03-9171-115aa91d18fa",
"Mappings": []
},
{
@@ -31,9 +27,13 @@
"Mappings": []
},
{
- "Path": "external-replication.markdown",
- "Name": "External Replication",
- "DiscussionId": "70d71f23-2b91-4c03-9171-115aa91d18fa",
+ "Path": "/etl",
+ "Name": "ETL",
+ "Mappings": []
+ },
+ {
+ "Path": "/queue-sink",
+ "Name": "Queue Sink",
"Mappings": []
}
]
diff --git a/Documentation/6.1/Raven.Documentation.Pages/server/ongoing-tasks/etl/basics.markdown b/Documentation/6.1/Raven.Documentation.Pages/server/ongoing-tasks/etl/basics.markdown
new file mode 100644
index 0000000000..55227dea8b
--- /dev/null
+++ b/Documentation/6.1/Raven.Documentation.Pages/server/ongoing-tasks/etl/basics.markdown
@@ -0,0 +1,283 @@
+# Ongoing Tasks: ETL Basics
+---
+
+{NOTE: }
+
+* **ETL (Extract, Transform & Load)** is a three-stage RavenDB process that transfers data from a RavenDB database to an external target.
+ The data can be filtered and transformed along the way.
+
+* The external target can be:
+ * Another RavenDB database instance (outside of the [Database Group](../../../studio/database/settings/manage-database-group))
+ * A relational database
+ * Elasticsearch
+ * OLAP (Online Analytical Processing)
+ * A message broker such as Apache Kafka, RabbitMQ, or Azure Queue Storage
+
+* ETL can be used on [sharded](../../../sharding/etl) and non-sharded databases alike.
+ Learn more about how ETL works on a sharded database [here](../../../sharding/etl).
+
+* In this page:
+ * [Why use ETL](../../../server/ongoing-tasks/etl/basics#why-use-etl)
+ * [Defining ETL Tasks](../../../server/ongoing-tasks/etl/basics#defining-etl-tasks)
+ * [ETL Stages:](../../../server/ongoing-tasks/etl/basics#etl-stages)
+ * [Extract](../../../server/ongoing-tasks/etl/basics#extract)
+ * [Transform](../../../server/ongoing-tasks/etl/basics#transform)
+ * [Load](../../../server/ongoing-tasks/etl/basics#load)
+ * [Troubleshooting](../../../server/ongoing-tasks/etl/basics#troubleshooting)
+{NOTE/}
+
+---
+
+{PANEL: Why use ETL}
+
+* **Share relevant data**
+ Send data in a well-defined format to match specific requirements, ensuring only relevant data is transmitted
+ (e.g., sending data to an existing reporting solution).
+
+* **Protect your data - Share partial data**
+ Limit access to sensitive data. Details that should remain private can be filtered out as you can share partial data.
+
+* **Reduce system calls**
+ Distribute data across related services within your system architecture, allowing each service to access its _own copy_ of the data without cross-service calls
+ (e.g., sharing a product catalog among multiple stores).
+
+* **Transform the data**
+ * Modify content sent as needed with JavaScript code.
+ * Multiple documents can be sent from a single source document.
+ * Data can be transformed to match the target destination's model.
+
+* **Aggregate your data**
+ Data sent from multiple locations can be aggregated in a central server
+ (e.g., aggregating sales data from point of sales systems for centralized calculations).
+
+{PANEL/}
+
+{PANEL: Defining ETL Tasks}
+
+* The following ETL tasks can be defined:
+ * [RavenDB ETL](../../../server/ongoing-tasks/etl/raven) - send data to another _RavenDB database_
+ * [SQL ETL](../../../server/ongoing-tasks/etl/sql) - send data to an _SQL database_
+ * [OLAP ETL](../../../server/ongoing-tasks/etl/OLAP) - send data to an _OLAP destination_
+ * [Elasticsearch ETL](../../../server/ongoing-tasks/etl/elasticsearch) - send data to an _Elasticsearch destination_
+ * [Kafka ETL](../../../server/ongoing-tasks/etl/queue-etl/kafka) - send data to a _Kafka message broker_
+  * [RabbitMQ ETL](../../../server/ongoing-tasks/etl/queue-etl/rabbit-mq) - send data to a _RabbitMQ exchange_
+ * [Azure Queue Storage ETL](../../../server/ongoing-tasks/etl/queue-etl/azure-queue) - send data to an _Azure Queue Storage message queue_
+
+* All ETL tasks can be defined from the Client API or from the [Studio](../../../studio/database/tasks/ongoing-tasks/general-info).
+
+* The destination address and access options are set using a pre-defined **connection string**, simplifying deployment across different environments.
+ For example, with RavenDB ETL, multiple URLs can be configured in the connection string since the target database can reside on multiple nodes within the Database Group of the destination cluster.
+ If one of the destination nodes is unavailable, RavenDB automatically executes the ETL process against another node specified in the connection string.
+ Learn more in the [Connection Strings](../../../client-api/operations/maintenance/connection-strings/add-connection-string) article.
+
+{PANEL/}
+
+{PANEL: ETL Stages}
+
+ETL's three stages are:
+
+* [Extract](../../../server/ongoing-tasks/etl/basics#extract) - Extract the documents from the database
+* [Transform](../../../server/ongoing-tasks/etl/basics#transform) - Transform & filter the documents data according to the supplied script (optional)
+* [Load](../../../server/ongoing-tasks/etl/basics#load) - Load (write) the transformed data into the target destination
+
+---
+
+### Extract
+
+The ETL process starts with retrieving the documents from the database.
+You can choose which documents will be processed by the next two stages (Transform and Load).
+
+The possible options are:
+
+* Documents from a single collection
+* Documents from multiple collections
+* All documents
+
+---
+
+### Transform
+
+* This stage transforms and filters the extracted documents according to a provided script.
+ Any transformation can be done so that only relevant data is shared.
+ The script is written in JavaScript and its input is a document.
+
+* A task can be provided with multiple transformation scripts.
+ Different scripts run in separate processes, allowing multiple scripts to run in parallel.
+
+* You can perform any transformation and send only the data you are interested in sharing.
+  The following is an example of a RavenDB ETL script that processes documents from the "Employees" collection:
+
+ {CODE-BLOCK:javascript}
+
+var managerName = null;
+
+if (this.ReportsTo !== null)
+{
+ var manager = load(this.ReportsTo);
+ managerName = manager.FirstName + " " + manager.LastName;
+}
+
+// Load the object to a target destination by the name of "EmployeesWithManager"
+loadToEmployeesWithManager({
+ Name: this.FirstName + " " + this.LastName,
+    Title: this.Title,
+ BornOn: new Date(this.Birthday).getFullYear(),
+ Manager: managerName
+});
+ {CODE-BLOCK/}
+
+{NOTE: }
+
+#### Syntax
+
+In addition to the ECMAScript 5.1 API,
+RavenDB introduces the following functions and members that can be used in the transformation script:
+
+|---|---|---|
+| `this` | object | The current document (with metadata) |
+| `id(document)` | function | Returns the document ID |
+| `load(id)` | function | Load another document. <br> This will increase the maximum number of allowed steps in a script. <br> **Note**: <br> Changes made to the other _loaded_ document will Not trigger the ETL process. |
+
+Specific ETL functions:
+
+|---|---|---|
+| `loadTo` | function | Load an object to the specified target. <br> This command has several syntax options, see details [below](../../../server/ongoing-tasks/etl/basics#themethod). <br> **Note:** <br> An object will only be sent to the destination if the `loadTo` method is called. |
+| Attachments: |||
+| `loadAttachment(name)` | function | Load an attachment of the current document. |
+| `hasAttachment(name)` | function | Check if an attachment with a given name exists for the current document. |
+| `getAttachments()` | function | Get a collection of attachment details for the current document. Each item has the following properties: <br> `Name`, `Hash`, `ContentType`, `Size`. |
+| `.addAttachment([name,] attachmentRef)` | function | Add an attachment to a transformed document that will be sent to a target. <br> For details specific to Raven ETL, refer to this [section](../../../server/ongoing-tasks/etl/raven#attachments). |
+
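+As a rough sketch of how these functions might be combined
+(assuming a RavenDB ETL task whose target collection is named "Employees" and source documents that may carry a "photo" attachment):
+
+{CODE-BLOCK:javascript}
+// Load the transformed object to the "Employees" target and keep a reference to it
+var employee = loadToEmployees({
+    Name: this.FirstName + " " + this.LastName
+});
+
+// Add the source document's "photo" attachment to the transformed document, if it exists
+if (hasAttachment("photo")) {
+    employee.addAttachment(loadAttachment("photo"));
+}
+{CODE-BLOCK/}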
+{NOTE/}
+
+{NOTE: }
+
+#### The `loadTo` method
+
+{INFO: }
+An object will only be sent to the destination if the `loadTo` method is called.
+{INFO /}
+
+To specify which target to load the data into, use either of the following overloads in your script.
+The two methods are equivalent, offering alternative syntax.
+
+* **`loadTo<TargetName>(obj, {attributes})`**
+ * Here the target is specified as part of the function name.
+ * The _<TargetName>_ in this syntax is Not a variable and cannot be used as one,
+ it is simply a string literal of the target's name.
+
+* **`loadTo('TargetName', obj, {attributes})`**
+ * Here the target is passed as an argument to the method.
+ * Separating the target name from the `loadTo` function name makes it possible to include symbols like `'-'` and `'.'` in target names.
+    This is not possible when the `loadTo<TargetName>` syntax is used because including special characters in the name of a JavaScript function makes it invalid.
+ * This syntax may vary for some ETL types.
+ Find the accurate syntax for each ETL type in the type's specific documentation.
+
+---
+
+For each ETL type, the target must be:
+
+ * RavenDB ETL: a _collection_ name
+ * SQL ETL: a _table_ name
+ * OLAP ETL: a _folder_ name
+ * Elasticsearch ETL: an _index_ name
+ * Kafka ETL: a _topic_ name
+ * RabbitMQ ETL: an _exchange_ name
+ * Azure Queue Storage ETL: a _queue_ name
+
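+For example, assuming a RavenDB ETL task that targets a collection named "Employees",
+the following two calls would be equivalent:
+
+{CODE-BLOCK:javascript}
+// Target specified as part of the function name:
+loadToEmployees({ Name: this.FirstName + " " + this.LastName });
+
+// Target passed as an argument:
+loadTo('Employees', { Name: this.FirstName + " " + this.LastName });
+{CODE-BLOCK/}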
+{NOTE/}
+
+{INFO: }
+
+#### Batch processing
+
+Documents are extracted and transformed by the ETL process in a batch manner.
+The number of documents processed depends on the following configuration limits:
+
+* [`ETL.ExtractAndTransformTimeoutInSec`](../../../server/configuration/etl-configuration#etl.extractandtransformtimeoutinsec) (default: 30 sec)
+ Time-frame for the extraction and transformation stages (in seconds), after which the loading stage will start.
+
+* [`ETL.MaxNumberOfExtractedDocuments`](../../../server/configuration/etl-configuration#etl.maxnumberofextracteddocuments) (default: 8192)
+ Maximum number of extracted documents in an ETL batch.
+
+* [`ETL.MaxNumberOfExtractedItems`](../../../server/configuration/etl-configuration#etl.maxnumberofextracteditems) (default: 8192)
+ Maximum number of extracted items (documents, counters) in an ETL batch.
+
+* [`ETL.MaxBatchSizeInMb`](../../../server/configuration/etl-configuration#etl.maxbatchsizeinmb) (default: 64 MB)
+ Maximum size of an ETL batch in MB.
+
+{INFO/}
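+
+These limits can also be overridden per database.
+Below is a rough sketch using the Node.js client, assuming `PutDatabaseSettingsOperation` accepts the database name and a key/value object of configuration settings:
+
+{CODE-BLOCK:javascript}
+const { PutDatabaseSettingsOperation } = require("ravendb");
+
+async function limitEtlBatches(documentStore) {
+    // Override the ETL batch limits for the current database
+    const settings = {
+        "ETL.MaxNumberOfExtractedDocuments": "4096",
+        "ETL.MaxBatchSizeInMb": "32"
+    };
+
+    await documentStore.maintenance.send(
+        new PutDatabaseSettingsOperation(documentStore.database, settings));
+
+    // Note: the database typically needs to be reloaded (e.g. disabled and then re-enabled)
+    // before the customized settings take effect.
+}
+{CODE-BLOCK/}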
+
+---
+
+### Load
+
+* Loading the results to the target destination is the last stage.
+
+* In contrast to [Replication](../../../server/clustering/replication/replication),
+ ETL is a push-only process that _writes_ data to the destination whenever documents from the relevant collections are changed. **Existing entries on the target will always be overwritten**.
+
+* Updates are implemented by executing consecutive DELETEs and INSERTs.
+  When a document is modified, the delete command is sent before the new data is inserted, and both are processed under the same transaction on the destination side.
+  This applies to all ETL types, with two exceptions:
+  * In RavenDB ETL, when documents are loaded to **the same** collection, there is no need to send a DELETE because the document on the other side has the same identifier and is simply updated.
+  * In SQL ETL, you can configure the task to use inserts only, which is a viable option for append-only systems.
+
+{NOTE: }
+
+**Securing ETL Processes for Encrypted Databases**:
+
+If your RavenDB database is encrypted, then by default you must not send data in an ETL process over a non-encrypted channel.
+This means that the connection to the target must be secured:
+
+- In RavenDB ETL, the URL of the destination server must use HTTPS
+  (the server certificate of the source server needs to be registered as a client certificate on the destination server).
+- In SQL ETL, the connection string to the SQL database must specify an encrypted connection (the exact syntax is specific to each SQL engine).
+
+This validation can be turned off by selecting the _Allow ETL on a non-encrypted communication channel_ option in the Studio,
+or by setting `AllowEtlOnNonEncryptedChannel` if the task is defined using the Client API.
+Please note that in such cases, your data, while encrypted at rest, _won't_ be protected in transit.
+
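+As an illustrative sketch from the Client API side
+(a Node.js sketch; it assumes the client's `RavenEtlConfiguration` type with camelCase property names, used together with `AddEtlOperation`, and hypothetical task and connection string names):
+
+{CODE-BLOCK:javascript}
+const { RavenEtlConfiguration, AddEtlOperation } = require("ravendb");
+
+async function addEtlOverNonEncryptedChannel(documentStore) {
+    const etlConfig = new RavenEtlConfiguration();
+    etlConfig.name = "Orders ETL";
+    etlConfig.connectionStringName = "target-db-cs"; // a pre-defined RavenDB connection string
+    etlConfig.transforms = [{
+        name: "Script #1",
+        collections: ["Orders"],
+        script: "loadToOrders(this);"
+    }];
+
+    // Explicitly allow ETL over a non-encrypted channel -
+    // the data will NOT be protected in transit
+    etlConfig.allowEtlOnNonEncryptedChannel = true;
+
+    await documentStore.maintenance.send(new AddEtlOperation(etlConfig));
+}
+{CODE-BLOCK/}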
+{NOTE/}
+
+{PANEL/}
+
+{PANEL: Troubleshooting}
+
+ETL errors and warnings are [logged to files](../../../server/troubleshooting/logging) and displayed in the notification center panel.
+You will be notified if any of the following events happen:
+
+- Connection error to the target
+- JS script is invalid
+- Transformation error
+- Load error
+- Slow SQL was detected
+
+
+**Fallback Mode**:
+If the ETL process cannot proceed with the load stage (e.g. it can't connect to the destination), it enters fallback mode.
+Fallback mode means suspending the process and retrying it periodically.
+The fallback time starts at 5 seconds and is doubled on every consecutive error, based on the time that has passed since the last error,
+but it never exceeds the [`ETL.MaxFallbackTimeInSec`](../../../server/configuration/etl-configuration#etl.maxfallbacktimeinsec) configuration value (default: 900 sec).
+
+While the process is in fallback mode, the _Reconnect_ state is shown in the Studio.
+
+{PANEL/}
+
+## Related Articles
+
+### ETL
+- [RavenDB ETL Task](../../../server/ongoing-tasks/etl/raven)
+- [SQL ETL Task](../../../server/ongoing-tasks/etl/sql)
+- [OLAP ETL Task](../../../server/ongoing-tasks/etl/olap)
+- [Elasticsearch ETL Task](../../../server/ongoing-tasks/etl/elasticsearch)
+- [Kafka ETL Task](../../../server/ongoing-tasks/etl/queue-etl/kafka)
+- [RabbitMQ ETL Task](../../../server/ongoing-tasks/etl/queue-etl/rabbit-mq)
+- [Azure Queue Storage ETL Task](../../../server/ongoing-tasks/etl/queue-etl/azure-queue)
+
+### Studio
+- [Define RavenDB ETL Task in Studio](../../../studio/database/tasks/ongoing-tasks/ravendb-etl-task)
+
+### Sharding
+- [Sharding Overview](../../../sharding/overview)
+- [Sharding: ETL](../../../sharding/etl)
diff --git a/Documentation/6.1/Raven.Documentation.Pages/server/ongoing-tasks/etl/queue-etl/.docs.json b/Documentation/6.1/Raven.Documentation.Pages/server/ongoing-tasks/etl/queue-etl/.docs.json
index c10a24719e..4da23deceb 100644
--- a/Documentation/6.1/Raven.Documentation.Pages/server/ongoing-tasks/etl/queue-etl/.docs.json
+++ b/Documentation/6.1/Raven.Documentation.Pages/server/ongoing-tasks/etl/queue-etl/.docs.json
@@ -1,20 +1,26 @@
[
- {
- "Path": "overview.markdown",
- "Name": "Overview",
- "DiscussionId": "e8884541-a5ef-493e-8ba6-411a9b1a1c8a",
- "Mappings": []
- },
- {
- "Path": "kafka.markdown",
- "Name": "Kafka ETL",
- "DiscussionId": "98400991-8eb2-41f5-b364-d315890458f8",
- "Mappings": []
- },
- {
- "Path": "rabbit-mq.markdown",
- "Name": "RabbitMQ ETL",
- "DiscussionId": "1b5b21e5-5e51-4717-a467-4a31694271cf",
- "Mappings": []
- }
-]
\ No newline at end of file
+ {
+ "Path": "overview.markdown",
+ "Name": "Overview",
+ "DiscussionId": "e8884541-a5ef-493e-8ba6-411a9b1a1c8a",
+ "Mappings": []
+ },
+ {
+ "Path": "kafka.markdown",
+ "Name": "Kafka ETL",
+ "DiscussionId": "98400991-8eb2-41f5-b364-d315890458f8",
+ "Mappings": []
+ },
+ {
+ "Path": "rabbit-mq.markdown",
+ "Name": "RabbitMQ ETL",
+ "DiscussionId": "1b5b21e5-5e51-4717-a467-4a31694271cf",
+ "Mappings": []
+ },
+ {
+ "Path": "azure-queue.markdown",
+ "Name": "Azure Queue Storage ETL",
+ "DiscussionId": "1b5b21e5-5e51-4717-a467-4a31694271cf",
+ "Mappings": []
+ }
+]
diff --git a/Documentation/6.1/Raven.Documentation.Pages/server/ongoing-tasks/etl/queue-etl/azure-queue.markdown b/Documentation/6.1/Raven.Documentation.Pages/server/ongoing-tasks/etl/queue-etl/azure-queue.markdown
new file mode 100644
index 0000000000..455f1fea4a
--- /dev/null
+++ b/Documentation/6.1/Raven.Documentation.Pages/server/ongoing-tasks/etl/queue-etl/azure-queue.markdown
@@ -0,0 +1,207 @@
+# Queue ETL: Azure Queue Storage
+---
+
+{NOTE: }
+
+* Azure Queue Storage is a Microsoft Azure service for storing and retrieving large numbers of messages,
+  enabling applications to communicate by sending and receiving messages asynchronously.
+ Each message in a queue can be up to 64 KB in size, and a queue can contain millions of messages,
+ providing a robust and scalable solution for data processing.
+
+* Create an **Azure Queue Storage ETL Task** to:
+ * Extract data from a RavenDB database
+ * Transform the data using one or more custom scripts
+ * Load the resulting JSON object to an Azure Queue destination as a CloudEvents message
+
+* Utilizing this task allows RavenDB to act as an event producer in an Azure Queue architecture.
+
+* [Azure Functions](https://learn.microsoft.com/en-us/azure/azure-functions/functions-overview?pivots=programming-language-csharp)
+ can be triggered to consume and process messages that are sent to Azure queues,
+ enabling powerful and flexible workflows.
+ The message visibility period and life span in the Queue can be customized through these [ETL configuration options](../../../../server/configuration/etl-configuration#etl.queue.azurequeuestorage.timetoliveinsec).
+
+* Read more about Azure Queue Storage in the platform's [official documentation](https://learn.microsoft.com/en-us/azure/storage/queues/storage-queues-introduction).
+
+---
+
+* This article focuses on how to create an Azure Queue Storage ETL task using the Client API.
+ To define an Azure Queue Storage ETL task from the Studio, see [Studio: Azure Queue Storage ETL Task](../../../../studio/database/tasks/ongoing-tasks/azure-queue-storage-etl).
+ For an **overview of Queue ETL tasks**, see [Queue ETL tasks overview](../../../../server/ongoing-tasks/etl/queue-etl/overview).
+
+* In this page:
+ * [Add an Azure Queue Storage connection string](../../../../server/ongoing-tasks/etl/queue-etl/azure-queue#add-an-azure-queue-storage-connection-string)
+ * [Authentication methods](../../../../server/ongoing-tasks/etl/queue-etl/azure-queue#authentication-methods)
+ * [Example](../../../../server/ongoing-tasks/etl/queue-etl/azure-queue#example)
+ * [Syntax](../../../../server/ongoing-tasks/etl/queue-etl/azure-queue#syntax)
+ * [Add an Azure Queue Storage ETL task](../../../../server/ongoing-tasks/etl/queue-etl/azure-queue#add-an-azure-queue-storage-etl-task)
+ * [Example](../../../../server/ongoing-tasks/etl/queue-etl/azure-queue#example-basic)
+ * [Delete processed documents](../../../../server/ongoing-tasks/etl/queue-etl/azure-queue#delete-processed-documents)
+ * [Syntax](../../../../server/ongoing-tasks/etl/queue-etl/azure-queue#syntax-1)
+ * [The transformation script](../../../../server/ongoing-tasks/etl/queue-etl/azure-queue#the-transformation-script)
+ * [The loadTo method](../../../../server/ongoing-tasks/etl/queue-etl/azure-queue#the-loadto-method)
+
+{NOTE/}
+
+---
+
+{PANEL: Add an Azure Queue Storage connection string}
+
+Prior to setting up the ETL task, define a connection string that the task will use to access your Azure account.
+The connection string includes the authorization credentials required to connect.
+
+---
+
+#### Authentication methods:
+There are three authentication methods available:
+
+* **Connection string**
+ * Provide a single string that includes all the options required to connect to your Azure account.
+ Learn more about Azure Storage connection strings [here](https://learn.microsoft.com/en-us/azure/storage/common/storage-configure-connection-string).
+ * Note: the following connection string parameters are mandatory:
+ * `AccountName`
+ * `AccountKey`
+ * `DefaultEndpointsProtocol`
+ * `QueueEndpoint` (when using http protocol)
+* **Entra ID**
+ * Use the Entra ID authorization method to achieve enhanced security by leveraging Microsoft Entra’s robust identity solutions.
+ * This approach minimizes the risks associated with exposed credentials commonly found in connection strings and enables
+ more granular control through [Role-Based Access Controls](https://learn.microsoft.com/en-us/azure/role-based-access-control/).
+* **Passwordless**
+ * This authorization method requires the machine to be pre-authorized and can only be used in self-hosted mode.
+ * Passwordless authorization works only when the account on the machine is assigned the Storage Account Queue Data Contributor role; the Contributor role alone is inadequate.
+
+---
+
+#### Example:
+
+{CODE add_azure_connection_string@Server\OngoingTasks\ETL\Queue\AzureQueueStorageEtl.cs /}
+
+---
+
+#### Syntax:
+
+{CODE queue_connection_string@Server\OngoingTasks\ETL\Queue\AzureQueueStorageEtl.cs /}
+{CODE queue_broker_type@Server\OngoingTasks\ETL\Queue\AzureQueueStorageEtl.cs /}
+{CODE azure_con_str_settings@Server\OngoingTasks\ETL\Queue\AzureQueueStorageEtl.cs /}
+
+{PANEL/}
+
+{PANEL: Add an Azure Queue Storage ETL task}
+
+{NOTE: }
+
+ __Example__:
+
+---
+
+* In this example, the Azure Queue Storage ETL Task will -
+ * Extract source documents from the "Orders" collection in RavenDB.
+ * Process each "Order" document using a defined script that creates a new `orderData` object.
+ * Load the `orderData` object to the "OrdersQueue" in an Azure Queue Storage.
+* For more details about the script and the `loadTo` method, see the [transformation script](../../../../server/ongoing-tasks/etl/queue-etl/azure-queue#the-transformation-script) section below.
+
+{CODE add_azure_etl_task@Server\OngoingTasks\ETL\Queue\AzureQueueStorageEtl.cs /}
+
+{NOTE/}
+{NOTE: }
+
+ __Delete processed documents__:
+
+---
+
+* You have the option to delete documents from your RavenDB database once they have been processed by the Queue ETL task.
+
+* Set the optional `Queues` property in your ETL configuration with the list of Azure queues for which processed documents should be deleted.
+
+{CODE azure_delete_documents@Server\OngoingTasks\ETL\Queue\AzureQueueStorageEtl.cs /}
+
+{NOTE/}
+
+---
+
+#### Syntax
+
+{CODE etl_configuration@Server\OngoingTasks\ETL\Queue\AzureQueueStorageEtl.cs /}
+
+{PANEL/}
+
+{PANEL: The transformation script}
+
+The [basic characteristics](../../../../server/ongoing-tasks/etl/basics) of an Azure Queue Storage ETL script are similar to those of other ETL types.
+The script defines what data to **extract** from the source document, how to **transform** this data,
+and which Azure Queue to **load** it to.
+
+---
+
+#### The loadTo method
+
+To specify which Azure queue to load the data into, use either of the following methods in your script.
+The two methods are equivalent, offering alternative syntax:
+
+* **`loadTo<QueueName>(obj, {attributes})`**
+ * Here the target is specified as part of the function name.
+ * The target _<QueueName>_ in this syntax is Not a variable and cannot be used as one,
+ it is simply a string literal of the target's name.
+
+* **`loadTo('QueueName', obj, {attributes})`**
+ * Here the target is passed as an argument to the method.
+ * Separating the target name from the `loadTo` command makes it possible to include symbols like `'-'` and `'.'` in target names.
+    This is not possible when the `loadTo<QueueName>` syntax is used because including special characters in the name of a JavaScript function makes it invalid.
+
+ | Parameter | Type | Description |
+ |----------------|--------|----------------------------------------------------------------------------------------------------------------------------------|
+ | **QueueName** | string | The name of the Azure Queue |
+ | **obj** | object | The object to transfer |
+ | **attributes** | object | An object with optional & required [CloudEvents attributes](../../../../server/ongoing-tasks/etl/queue-etl/overview#cloudevents) |
+
+For example, the following two calls, which load data to "OrdersQueue", are equivalent:
+
+* `loadToOrdersQueue(obj, {attributes})`
+* `loadTo('OrdersQueue', obj, {attributes})`
+
+---
+
+A sample script that processes documents from the Orders collection:
+
+{CODE-BLOCK: JavaScript}
+// Create an orderData object
+// ==========================
+var orderData = {
+ Id: id(this),
+ OrderLinesCount: this.Lines.length,
+ TotalCost: 0
+};
+
+// Update the orderData's TotalCost field
+// ======================================
+for (var i = 0; i < this.Lines.length; i++) {
+ var line = this.Lines[i];
+ var cost = (line.Quantity * line.PricePerUnit) * ( 1 - line.Discount);
+ orderData.TotalCost += cost;
+}
+
+// Load the object to the "OrdersQueue" in Azure
+// =============================================
+loadToOrdersQueue(orderData, {
+ Id: id(this),
+ Type: 'com.example.promotions',
+ Source: '/promotion-campaigns/summer-sale'
+})
+{CODE-BLOCK/}
+
+{PANEL/}
+
+## Related Articles
+
+### Server
+
+- [ETL Basics](../../../../server/ongoing-tasks/etl/basics)
+- [Queue ETL Overview](../../../../server/ongoing-tasks/etl/queue-etl/overview)
+- [RabbitMQ ETL](../../../../server/ongoing-tasks/etl/queue-etl/rabbit-mq)
+- [Kafka ETL](../../../../server/ongoing-tasks/etl/queue-etl/kafka)
+
+### Studio
+
+- [Studio: Kafka ETL Task](../../../../studio/database/tasks/ongoing-tasks/kafka-etl-task)
+- [Studio: RabbitMQ ETL Task](../../../../studio/database/tasks/ongoing-tasks/rabbitmq-etl-task)
+- [Studio: Azure Queue Storage ETL Task](../../../../studio/database/tasks/ongoing-tasks/azure-queue-storage-etl)
diff --git a/Documentation/6.1/Raven.Documentation.Pages/server/ongoing-tasks/etl/queue-etl/images/overview_stats.png b/Documentation/6.1/Raven.Documentation.Pages/server/ongoing-tasks/etl/queue-etl/images/overview_stats.png
new file mode 100644
index 0000000000..df41727416
Binary files /dev/null and b/Documentation/6.1/Raven.Documentation.Pages/server/ongoing-tasks/etl/queue-etl/images/overview_stats.png differ
diff --git a/Documentation/6.1/Raven.Documentation.Pages/server/ongoing-tasks/etl/queue-etl/kafka.markdown b/Documentation/6.1/Raven.Documentation.Pages/server/ongoing-tasks/etl/queue-etl/kafka.markdown
new file mode 100644
index 0000000000..c7b2ffc82f
--- /dev/null
+++ b/Documentation/6.1/Raven.Documentation.Pages/server/ongoing-tasks/etl/queue-etl/kafka.markdown
@@ -0,0 +1,179 @@
+# Queue ETL: Apache Kafka
+---
+
+{NOTE: }
+
+* Apache Kafka is a distributed, high-performance, transactional messaging platform that remains performant
+ as the number of messages it needs to process increases and the number of events it needs to stream climbs to the big-data zone.
+
+* Create a **Kafka ETL Task** to:
+ * Extract data from a RavenDB database
+ * Transform the data using one or more custom scripts
+ * Load the resulting JSON object to a Kafka destination as a CloudEvents message
+
+* Utilizing this task allows RavenDB to act as an event producer in a Kafka architecture.
+
+* Read more about Kafka clusters, brokers, topics, partitions, and other related subjects,
+ in the platform's [official documentation](https://kafka.apache.org/documentation/#gettingStarted).
+
+---
+
+* This article focuses on how to create a Kafka ETL task using the Client API.
+ To define a Kafka ETL task from the Studio, see [Studio: Kafka ETL Task](../../../../studio/database/tasks/ongoing-tasks/kafka-etl-task).
+ For an **overview of Queue ETL tasks**, see [Queue ETL tasks overview](../../../../server/ongoing-tasks/etl/queue-etl/overview).
+
+* In this page:
+ * [Add a Kafka connection string](../../../../server/ongoing-tasks/etl/queue-etl/kafka#add-a-kafka-connection-string)
+    * [Example](../../../../server/ongoing-tasks/etl/queue-etl/kafka#example)
+ * [Syntax](../../../../server/ongoing-tasks/etl/queue-etl/kafka#syntax)
+ * [Add a Kafka ETL task](../../../../server/ongoing-tasks/etl/queue-etl/kafka#add-a-kafka-etl-task)
+ * [Example - basic](../../../../server/ongoing-tasks/etl/queue-etl/kafka#example-basic)
+ * [Example - delete processed documents](../../../../server/ongoing-tasks/etl/queue-etl/kafka#delete-processed-documents)
+ * [Syntax](../../../../server/ongoing-tasks/etl/queue-etl/kafka#syntax-1)
+ * [The transformation script](../../../../server/ongoing-tasks/etl/queue-etl/kafka#the-transformation-script)
+ * [The loadTo method](../../../../server/ongoing-tasks/etl/queue-etl/kafka#the-loadto-method)
+
+{NOTE/}
+
+---
+
+{PANEL: Add a Kafka connection string}
+
+Before setting up the ETL task, define a connection string that the task will use to connect to the message broker's bootstrap servers.
+
+---
+
+#### Example
+
+{CODE add_kafka_connection_string@Server\OngoingTasks\ETL\Queue\KafkaEtl.cs /}
+
+---
+
+#### Syntax
+
+{CODE queue_connection_string@Server\OngoingTasks\ETL\Queue\KafkaEtl.cs /}
+{CODE queue_broker_type@Server\OngoingTasks\ETL\Queue\KafkaEtl.cs /}
+{CODE kafka_con_str_settings@Server\OngoingTasks\ETL\Queue\KafkaEtl.cs /}
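+
+---
+
+For orientation, below is a minimal inline sketch that puts the syntax above together,
+assuming a single Kafka broker listening on `localhost:9092`; the connection string name
+and the bootstrap server address are placeholders, not values from the referenced sample file.
+
+{CODE-BLOCK:csharp}
+using Raven.Client.Documents;
+using Raven.Client.Documents.Operations.ConnectionStrings;
+using Raven.Client.Documents.Operations.ETL.Queue;
+
+using (var store = new DocumentStore())
+{
+    // Define a 'Queue' connection string with the Kafka broker type
+    var kafkaConStr = new QueueConnectionString
+    {
+        Name = "KafkaConStr",                      // Placeholder connection string name
+        BrokerType = QueueBrokerType.Kafka,
+        KafkaConnectionSettings = new KafkaConnectionSettings
+        {
+            BootstrapServers = "localhost:9092"    // Placeholder bootstrap servers
+        }
+    };
+
+    // Deploy the connection string to the server
+    store.Maintenance.Send(
+        new PutConnectionStringOperation<QueueConnectionString>(kafkaConStr));
+}
+{CODE-BLOCK/}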
+
+{PANEL/}
+
+{PANEL: Add a Kafka ETL task}
+
+{NOTE: }
+
+ __Example - basic__:
+
+---
+
+* In this example, the Kafka ETL Task will -
+ * Extract source documents from the "Orders" collection in RavenDB.
+ * Process each "Order" document using a defined script that creates a new `orderData` object.
+ * Load the `orderData` object to the "OrdersTopic" in a Kafka broker.
+* For more details about the script and the `loadTo` method, see the [transformation script](../../../../server/ongoing-tasks/etl/queue-etl/kafka#the-transformation-script) section below.
+
+{CODE add_kafka_etl_task@Server\OngoingTasks\ETL\Queue\KafkaEtl.cs /}
+
+{NOTE/}
+{NOTE: }
+
+ __Example - delete processed documents__:
+
+---
+
+* You have the option to delete documents from your RavenDB database once they have been processed by the Queue ETL task.
+
+* Set the optional `Queues` property in your ETL configuration with the list of Kafka topics for which processed documents should be deleted.
+
+{CODE kafka_delete_documents@Server\OngoingTasks\ETL\Queue\KafkaEtl.cs /}
+
+{NOTE/}
+
+---
+
+#### Syntax
+
+{CODE etl_configuration@Server\OngoingTasks\ETL\Queue\KafkaEtl.cs /}
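+
+---
+
+To see how these configuration properties fit together, here is a minimal hedged sketch that combines
+the two examples above; the task, script, connection string, and topic names are placeholders,
+and the transformation script is abbreviated.
+
+{CODE-BLOCK:csharp}
+using System.Collections.Generic;
+using Raven.Client.Documents;
+using Raven.Client.Documents.Operations.ETL;
+using Raven.Client.Documents.Operations.ETL.Queue;
+
+using (var store = new DocumentStore())
+{
+    var etlConfig = new QueueEtlConfiguration
+    {
+        Name = "KafkaEtlTask",                     // Placeholder task name
+        ConnectionStringName = "KafkaConStr",      // Placeholder connection string name
+        BrokerType = QueueBrokerType.Kafka,
+        Transforms =
+        {
+            new Transformation
+            {
+                Name = "Script #1",
+                Collections = { "Orders" },
+                Script = @"var orderData = {
+                               Id: id(this),
+                               OrderLinesCount: this.Lines.length
+                           };
+                           loadToOrdersTopic(orderData, { Id: id(this) });"
+            }
+        },
+
+        // Optional - delete documents from RavenDB once they are loaded to the 'OrdersTopic' topic
+        Queues = new List<EtlQueue>
+        {
+            new EtlQueue { Name = "OrdersTopic", DeleteProcessedDocuments = true }
+        }
+    };
+
+    // Deploy the task to the server
+    store.Maintenance.Send(new AddEtlOperation<QueueConnectionString>(etlConfig));
+}
+{CODE-BLOCK/}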
+
+{PANEL/}
+
+{PANEL: The transformation script}
+
+The [basic characteristics](../../../../server/ongoing-tasks/etl/basics) of a Kafka ETL script are similar to those of other ETL types.
+The script defines what data to **extract** from the source document, how to **transform** this data,
+and which Kafka Topic to **load** it to.
+
+---
+
+#### The loadTo method
+
+To specify which Kafka topic to load the data into, use either of the following methods in your script.
+The two methods are equivalent, offering alternative syntax:
+
+ * **`loadTo<TopicName>(obj, {attributes})`**
+ * Here the target is specified as part of the function name.
+ * The target _<TopicName>_ in this syntax is Not a variable and cannot be used as one,
+ it is simply a string literal of the target's name.
+
+ * **`loadTo('TopicName', obj, {attributes})`**
+ * Here the target is passed as an argument to the method.
+ * Separating the target name from the `loadTo` command makes it possible to include symbols like `'-'` and `'.'` in target names.
+    This is not possible when the `loadTo<TopicName>` syntax is used because including special characters in the name of a JavaScript function makes it invalid.
+
+ | Parameter | Type | Description |
+ |----------------|--------|----------------------------------------------------------------------------------------------------------------------------------|
+ | **TopicName** | string | The name of the Kafka topic |
+ | **obj** | object | The object to transfer |
+ | **attributes** | object | An object with optional & required [CloudEvents attributes](../../../../server/ongoing-tasks/etl/queue-etl/overview#cloudevents) |
+
+For example, the following two calls, which load data to "OrdersTopic", are equivalent:
+
+ * `loadToOrdersTopic(obj, {attributes})`
+ * `loadTo('OrdersTopic', obj, {attributes})`
+
+---
+
+A sample script that processes documents from the Orders collection:
+
+{CODE-BLOCK: JavaScript}
+// Create an orderData object
+// ==========================
+var orderData = {
+ Id: id(this),
+ OrderLinesCount: this.Lines.length,
+ TotalCost: 0
+};
+
+// Update the orderData's TotalCost field
+// ======================================
+for (var i = 0; i < this.Lines.length; i++) {
+ var line = this.Lines[i];
+ var cost = (line.Quantity * line.PricePerUnit) * ( 1 - line.Discount);
+ orderData.TotalCost += cost;
+}
+
+// Load the object to the "OrdersTopic" in Kafka
+// =============================================
+loadToOrdersTopic(orderData, {
+ Id: id(this),
+ PartitionKey: id(this),
+ Type: 'com.example.promotions',
+ Source: '/promotion-campaigns/summer-sale'
+})
+{CODE-BLOCK/}
+
+{PANEL/}
+
+## Related Articles
+
+### Server
+
+- [ETL Basics](../../../../server/ongoing-tasks/etl/basics)
+- [Queue ETL Overview](../../../../server/ongoing-tasks/etl/queue-etl/overview)
+- [RabbitMQ ETL](../../../../server/ongoing-tasks/etl/queue-etl/rabbit-mq)
+- [Azure Queue Storage ETL](../../../../server/ongoing-tasks/etl/queue-etl/azure-queue)
+
+### Studio
+
+- [Studio: Kafka ETL Task](../../../../studio/database/tasks/ongoing-tasks/kafka-etl-task)
+- [Studio: RabbitMQ ETL Task](../../../../studio/database/tasks/ongoing-tasks/rabbitmq-etl-task)
+- [Studio: Azure Queue Storage ETL Task](../../../../studio/database/tasks/ongoing-tasks/azure-queue-storage-etl)
diff --git a/Documentation/6.1/Raven.Documentation.Pages/server/ongoing-tasks/etl/queue-etl/overview.markdown b/Documentation/6.1/Raven.Documentation.Pages/server/ongoing-tasks/etl/queue-etl/overview.markdown
new file mode 100644
index 0000000000..f59d76b6d7
--- /dev/null
+++ b/Documentation/6.1/Raven.Documentation.Pages/server/ongoing-tasks/etl/queue-etl/overview.markdown
@@ -0,0 +1,145 @@
+# Queue ETL Overview
+---
+
+{NOTE: }
+
+* Message brokers are high-throughput, distributed messaging services that host data they receive
+ from **producer** applications and serve it to **consumer** clients via FIFO data queues.
+
+* RavenDB can operate as a _Producer_ within this architecture for the following message brokers:
+ * **Apache Kafka**
+ * **RabbitMQ**
+ * **Azure Queue Storage**
+
+* This functionality is achieved by defining [Queue ETL tasks](../../../../server/ongoing-tasks/etl/queue-etl/overview#queue-etl-tasks) within a RavenDB database.
+
+* RavenDB can also function as a _Consumer_.
+ To learn about RavenDB's role as a _Consumer_ please refer to the [Queue Sink section](../../../../server/ongoing-tasks/queue-sink/overview).
+
+* In this page:
+ * [Queue ETL tasks](../../../../server/ongoing-tasks/etl/queue-etl/overview#queue-etl-tasks)
+ * [Data delivery](../../../../server/ongoing-tasks/etl/queue-etl/overview#data-delivery)
+ * [What is transferred](../../../../server/ongoing-tasks/etl/queue-etl/overview#what-is-transferred)
+ * [How are messages produced and consumed](../../../../server/ongoing-tasks/etl/queue-etl/overview#how-are-messages-produced-and-consumed)
+ * [Idempotence and message duplication](../../../../server/ongoing-tasks/etl/queue-etl/overview#idempotence-and-message-duplication)
+ * [CloudEvents](../../../../server/ongoing-tasks/etl/queue-etl/overview#cloudevents)
+ * [Task statistics](../../../../server/ongoing-tasks/etl/queue-etl/overview#task-statistics)
+
+{NOTE/}
+
+---
+
+{PANEL: Queue ETL tasks}
+
+RavenDB produces messages to broker queues via the following Queue ETL tasks:
+
+* **Kafka ETL Task**
+ You can define a Kafka ETL Task from the [Studio](../../../../studio/database/tasks/ongoing-tasks/kafka-etl-task)
+ or using the [Client API](../../../../server/ongoing-tasks/etl/queue-etl/kafka).
+* **RabbitMQ ETL Task**
+ You can define a RabbitMQ ETL Task from the [Studio](../../../../studio/database/tasks/ongoing-tasks/rabbitmq-etl-task)
+ or using the [Client API](../../../../server/ongoing-tasks/etl/queue-etl/rabbit-mq).
+* **Azure Queue Storage ETL Task**
+ You can define an Azure Queue Storage ETL Task from the [Studio](../../../../studio/database/tasks/ongoing-tasks/azure-queue-storage-etl)
+ or using the [Client API](../../../../server/ongoing-tasks/etl/queue-etl/azure-queue).
+
+---
+
+These ETL tasks:
+
+* **Extract** selected data from RavenDB documents from specified collections.
+* **Transform** the data to new JSON objects.
+* Wrap the JSON objects as [CloudEvents messages](https://cloudevents.io) and **Load** them to the designated message broker.
+
+{PANEL/}
+
+{PANEL: Data delivery}
+
+{NOTE: }
+
+#### What is transferred
+
+* **Documents only**
+ A Queue ETL task transfers documents only.
+  Document extensions such as attachments, counters, or time series will not be transferred.
+* **CloudEvents messages**
+ JSON objects produced by the task's transformation script are wrapped
+ and delivered as [CloudEvents Messages](../../../../server/ongoing-tasks/etl/queue-etl/overview#cloudevents).
+
+{NOTE/}
+{NOTE: }
+
+#### How are messages produced and consumed
+
+* The Queue ETL task will send the messages it produces to the target using a **connection string**,
+ which specifies the destination and credentials required to authorize the connection.
+ Find the specific syntax for defining a connection string per task in each task's documentation.
+* Each message will be added to the tail of its assigned queue according to the transformation script.
+  As earlier messages are processed, the message advances to the head of the queue, becoming available to consumers.
+* RavenDB publishes messages to the designated brokers using [transactions and batches](../../../../server/ongoing-tasks/etl/basics#batch-processing),
+ creating a batch of messages and opening a transaction to the destination queue for the batch.
+
+{NOTE/}
+{NOTE: }
+
+#### Idempotence and message duplication
+
+* RavenDB is an **idempotent producer**, which typically does not send duplicate messages to queues.
+* However, it is possible that duplicate messages will be sent to the broker.
+ For example:
+ Different nodes of a RavenDB cluster are regarded as different producers by the broker.
+ If the node responsible for the ETL task fails while sending a batch of messages,
+ the new responsible node may resend messages that were already received by the broker.
+* Therefore, if processing each message only once is important to the consumer,
+ it is **the consumer's responsibility** to verify the uniqueness of each consumed message.
+
+{NOTE/}
+{PANEL/}
+
+{PANEL: CloudEvents}
+
+* After preparing a JSON object that needs to be sent to a message broker,
+ the ETL task wraps it as a CloudEvents message using the [CloudEvents Library](https://cloudevents.io).
+
+* To do that, the JSON object is provided with additional [required attributes](https://github.com/cloudevents/spec/blob/main/cloudevents/spec.md#required-attributes),
+ added as headers to the message, including:
+
+ | Attribute | Type | Description | Default Value |
+ |-----------------|----------|-----------------------------------------------------------------------------------------------------------|------------------------------------------------------|
+ | **id** | `string` | [Event identifier](https://github.com/cloudevents/spec/blob/main/cloudevents/spec.md#id) | The document Change Vector |
+ | **type** | `string` | [Event type](https://github.com/cloudevents/spec/blob/main/cloudevents/spec.md#type) | "ravendb.etl.put" |
+ | **source** | `string` | [Event context](https://github.com/cloudevents/spec/blob/main/cloudevents/spec.md#source-1) | `//` |
+
+* The optional 'partitionkey' attribute can also be added.
+ Currently, it is only implemented by [Kafka ETL](../../../../server/ongoing-tasks/etl/queue-etl/kafka).
+
+ | Optional Attribute | Type | Description | Default Value |
+ |----------------------|------------|----------------------------------------------------------------------------------------------------------------------------------------------|------------------|
+ | **partitionkey** | `string` | [Events relationship/grouping definition](https://github.com/cloudevents/spec/blob/main/cloudevents/extensions/partitioning.md#partitionkey) | The document ID |
+
+{PANEL/}
+
+{PANEL: Task statistics}
+
+Use the Studio's [Ongoing tasks stats view](../../../../studio/database/stats/ongoing-tasks-stats/overview) to see various statistics related to data extraction, transformation,
+and loading to the target broker.
+
+![Queue Brokers Stats](images/overview_stats.png "Ongoing tasks stats view")
+
+{PANEL/}
+
+
+## Related Articles
+
+### Server
+
+- [ETL Basics](../../../../server/ongoing-tasks/etl/basics)
+- [Kafka ETL](../../../../server/ongoing-tasks/etl/queue-etl/kafka)
+- [RabbitMQ ETL](../../../../server/ongoing-tasks/etl/queue-etl/rabbit-mq)
+- [Azure Queue Storage ETL](../../../../server/ongoing-tasks/etl/queue-etl/azure-queue)
+
+### Studio
+
+- [Studio: Kafka ETL Task](../../../../studio/database/tasks/ongoing-tasks/kafka-etl-task)
+- [Studio: RabbitMQ ETL Task](../../../../studio/database/tasks/ongoing-tasks/rabbitmq-etl-task)
+- [Studio: Azure Queue Storage ETL Task](../../../../studio/database/tasks/ongoing-tasks/azure-queue-storage-etl)
diff --git a/Documentation/6.1/Raven.Documentation.Pages/server/ongoing-tasks/etl/queue-etl/rabbit-mq.markdown b/Documentation/6.1/Raven.Documentation.Pages/server/ongoing-tasks/etl/queue-etl/rabbit-mq.markdown
new file mode 100644
index 0000000000..4db551a0a1
--- /dev/null
+++ b/Documentation/6.1/Raven.Documentation.Pages/server/ongoing-tasks/etl/queue-etl/rabbit-mq.markdown
@@ -0,0 +1,201 @@
+# Queue ETL: RabbitMQ
+---
+
+{NOTE: }
+
+* RabbitMQ exchanges are designed to disperse data to multiple queues,
+ creating a flexible data channeling system that can easily handle complex message streaming scenarios.
+
+* Create a **RabbitMQ ETL Task** to:
+ * Extract data from a RavenDB database
+ * Transform the data using one or more custom scripts
+ * Load the resulting JSON object to a RabbitMQ destination as a CloudEvents message
+
+* Utilizing this task allows RavenDB to act as an event producer in a RabbitMQ architecture.
+
+* Read more about RabbitMQ in the platform's [official documentation](https://www.rabbitmq.com/).
+
+---
+
+* This article focuses on how to create a RabbitMQ ETL task using the Client API.
+ To define a RabbitMQ ETL task from the Studio see [Studio: RabbitMQ ETL Task](../../../../studio/database/tasks/ongoing-tasks/rabbitmq-etl-task).
+ For an **overview of Queue ETL tasks**, see [Queue ETL tasks overview](../../../../server/ongoing-tasks/etl/queue-etl/overview).
+
+* In this page:
+ * [Add a RabbitMQ connection string](../../../../server/ongoing-tasks/etl/queue-etl/rabbit-mq#add-a-rabbitmq-connection-string)
+      * [Example](../../../../server/ongoing-tasks/etl/queue-etl/rabbit-mq#example)
+ * [Syntax](../../../../server/ongoing-tasks/etl/queue-etl/rabbit-mq#syntax)
+ * [Add a RabbitMQ ETL task](../../../../server/ongoing-tasks/etl/queue-etl/rabbit-mq#add-a-rabbitmq-etl-task)
+ * [Example - basic](../../../../server/ongoing-tasks/etl/queue-etl/rabbit-mq#example-basic)
+ * [Example - delete processed documents](../../../../server/ongoing-tasks/etl/queue-etl/rabbit-mq#delete-processed-documents)
+ * [Syntax](../../../../server/ongoing-tasks/etl/queue-etl/rabbit-mq#syntax-1)
+ * [The transformation script](../../../../server/ongoing-tasks/etl/queue-etl/rabbit-mq#the-transformation-script)
+ * [The loadTo method](../../../../server/ongoing-tasks/etl/queue-etl/rabbit-mq#the-loadto-method)
+ * [Available method overloads](../../../../server/ongoing-tasks/etl/queue-etl/rabbit-mq#available-method-overloads)
+
+{NOTE/}
+
+---
+
+{PANEL: Add a RabbitMQ connection string}
+
+Before setting up the ETL task, define a connection string that the task will use to connect to RabbitMQ.
+
+---
+
+#### Example
+
+{CODE add_rabbitMq_connection_string@Server\OngoingTasks\ETL\Queue\RabbitMqEtl.cs /}
+
+---
+
+#### Syntax
+
+{CODE queue_connection_string@Server\OngoingTasks\ETL\Queue\RabbitMqEtl.cs /}
+{CODE queue_broker_type@Server\OngoingTasks\ETL\Queue\RabbitMqEtl.cs /}
+{CODE rabbitMq_con_str_settings@Server\OngoingTasks\ETL\Queue\RabbitMqEtl.cs /}
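+
+---
+
+For orientation, below is a minimal inline sketch that puts the syntax above together,
+assuming a local RabbitMQ broker reachable via a standard AMQP URI; the connection string name
+and the AMQP URI are placeholders, not values from the referenced sample file.
+
+{CODE-BLOCK:csharp}
+using Raven.Client.Documents;
+using Raven.Client.Documents.Operations.ConnectionStrings;
+using Raven.Client.Documents.Operations.ETL.Queue;
+
+using (var store = new DocumentStore())
+{
+    // Define a 'Queue' connection string with the RabbitMQ broker type
+    var rabbitMqConStr = new QueueConnectionString
+    {
+        Name = "RabbitMqConStr",                   // Placeholder connection string name
+        BrokerType = QueueBrokerType.RabbitMq,
+        RabbitMqConnectionSettings = new RabbitMqConnectionSettings
+        {
+            // Placeholder AMQP URI of a local RabbitMQ broker
+            ConnectionString = "amqp://guest:guest@localhost:5672/"
+        }
+    };
+
+    // Deploy the connection string to the server
+    store.Maintenance.Send(
+        new PutConnectionStringOperation<QueueConnectionString>(rabbitMqConStr));
+}
+{CODE-BLOCK/}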
+
+{PANEL/}
+
+{PANEL: Add a RabbitMQ ETL task}
+
+{NOTE: }
+
+ __Example - basic__:
+
+---
+
+* In this example, the RabbitMQ ETL Task will -
+ * Extract source documents from the "Orders" collection in RavenDB.
+ * Process each "Order" document using a defined script that creates a new `orderData` object.
+ * Load the `orderData` object to the "OrdersExchange" in a RabbitMQ broker.
+* For more details about the script and the `loadTo` method overloads, see the [transformation script](../../../../server/ongoing-tasks/etl/queue-etl/rabbit-mq#the-transformation-script) section below.
+
+{CODE add_rabbitMq_etl_task@Server\OngoingTasks\ETL\Queue\RabbitMqEtl.cs /}
+
+{NOTE/}
+{NOTE: }
+
+ __Example - delete processed documents__:
+
+---
+
+* You have the option to delete documents from your RavenDB database once they have been processed by the Queue ETL task.
+
+* Set the optional `Queues` property in your ETL configuration with the list of RabbitMQ queues for which processed documents should be deleted.
+
+{CODE rabbitMq_delete_documents@Server\OngoingTasks\ETL\Queue\RabbitMqEtl.cs /}
+
+{NOTE/}
+
+---
+
+#### Syntax
+
+{CODE etl_configuration@Server\OngoingTasks\ETL\Queue\RabbitMqEtl.cs /}
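+
+---
+
+To see how these configuration properties fit together, here is a minimal hedged sketch that combines
+the two examples above; the task, script, connection string, exchange, and queue names are placeholders,
+and the transformation script is abbreviated.
+
+{CODE-BLOCK:csharp}
+using System.Collections.Generic;
+using Raven.Client.Documents;
+using Raven.Client.Documents.Operations.ETL;
+using Raven.Client.Documents.Operations.ETL.Queue;
+
+using (var store = new DocumentStore())
+{
+    var etlConfig = new QueueEtlConfiguration
+    {
+        Name = "RabbitMqEtlTask",                  // Placeholder task name
+        ConnectionStringName = "RabbitMqConStr",   // Placeholder connection string name
+        BrokerType = QueueBrokerType.RabbitMq,
+        Transforms =
+        {
+            new Transformation
+            {
+                Name = "Script #1",
+                Collections = { "Orders" },
+                Script = @"var orderData = {
+                               Id: id(this),
+                               OrderLinesCount: this.Lines.length
+                           };
+                           loadToOrdersExchange(orderData, 'users-queue', { Id: id(this) });"
+            }
+        },
+
+        // Optional - delete documents from RavenDB once they are loaded to the 'users-queue' queue
+        Queues = new List<EtlQueue>
+        {
+            new EtlQueue { Name = "users-queue", DeleteProcessedDocuments = true }
+        }
+    };
+
+    // Deploy the task to the server
+    store.Maintenance.Send(new AddEtlOperation<QueueConnectionString>(etlConfig));
+}
+{CODE-BLOCK/}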
+
+{PANEL/}
+
+{PANEL: The transformation script}
+
+The [basic characteristics](../../../../server/ongoing-tasks/etl/basics) of a RabbitMQ ETL script are similar to those of other ETL types.
+The script defines what data to **extract** from the source document, how to **transform** this data,
+and which RabbitMQ Exchange to **load** it to.
+
+---
+
+#### The loadTo method
+
+To specify which RabbitMQ Exchange to load the data into, use either of the following methods in your script.
+The two methods are equivalent, offering alternative syntax:
+
+ * **`loadTo<ExchangeName>(obj, 'routingKey', {attributes})`**
+ * Here the target is specified as part of the function name.
+ * The target _<ExchangeName>_ in this syntax is Not a variable and cannot be used as one,
+ it is simply a string literal of the target's name.
+
+ * **`loadTo('ExchangeName', obj, 'routingKey', {attributes})`**
+ * Here the target is passed as an argument to the method.
+ * Separating the target name from the `loadTo` command makes it possible to include symbols like `'-'` and `'.'` in target names.
+    This is not possible when the `loadTo<ExchangeName>` syntax is used because including special characters in the name of a JavaScript function makes it invalid.
+
+ | Parameter | Type | Description |
+ |------------------|---------|------------------------------------------------------------------------------------------------------------------------------|
+ | **ExchangeName** | string | The name of the RabbitMQ exchange. |
+ | **obj** | object | The object to transfer. |
+ | **routingKey** | string | The RabbitMQ exchange evaluates this attribute to determine how to route the message to queues based on the exchange type. |
+ | **attributes** | object | An object with [CloudEvents attributes](../../../../server/ongoing-tasks/etl/queue-etl/overview#cloudevents). |
+
+For example, the following two calls, which load data to the Orders exchange, are equivalent:
+
+ * `loadToOrdersExchange(obj, 'users', {attributes})`
+ * `loadTo('OrdersExchange', obj, 'users', {attributes})`
+
+---
+
+#### Available method overloads
+
+ * `loadTo('', obj, 'routingKey', {attributes})`
+ When replacing the exchange name with an empty string,
+ the message will be routed using the routingKey via the default exchange, which is predefined by the broker.
+
+ * `loadTo<ExchangeName>(obj)`
+   `loadTo<ExchangeName>(obj, {attributes})`
+   When omitting the routingKey, message delivery will depend on the exchange type.
+
+ * `loadTo<ExchangeName>(obj, 'routingKey')`
+ When omitting the attributes, default attribute values will be assigned.
+
+---
+
+{NOTE: }
+If no exchange is defined in the RabbitMQ platform, RavenDB will create a default exchange of the **Fanout** type.
+In this case, all routing keys will be ignored, and messages will be distributed to all bound queues.
+{NOTE/}
+
+---
+
+A sample script that processes documents from the Orders collection:
+
+{CODE-BLOCK: JavaScript}
+// Create an orderData object
+// ==========================
+var orderData = {
+ Id: id(this),
+ OrderLinesCount: this.Lines.length,
+ TotalCost: 0
+};
+
+// Update the orderData's TotalCost field
+// ======================================
+for (var i = 0; i < this.Lines.length; i++) {
+ var line = this.Lines[i];
+ var cost = (line.Quantity * line.PricePerUnit) * ( 1 - line.Discount);
+ orderData.TotalCost += cost;
+}
+
+// Load the object to "OrdersExchange" in RabbitMQ
+// ===============================================
+loadToOrdersExchange(orderData, 'users-queue', {
+ Id: id(this),
+ Type: 'com.example.promotions',
+ Source: '/promotion-campaigns/summer-sale'
+})
+{CODE-BLOCK/}
+
+{PANEL/}
+
+## Related Articles
+
+### Server
+
+- [ETL Basics](../../../../server/ongoing-tasks/etl/basics)
+- [Queue ETL Overview](../../../../server/ongoing-tasks/etl/queue-etl/overview)
+- [Kafka ETL](../../../../server/ongoing-tasks/etl/queue-etl/kafka)
+- [Azure Queue Storage ETL](../../../../server/ongoing-tasks/etl/queue-etl/azure-queue)
+
+### Studio
+
+- [Studio: RabbitMQ ETL Task](../../../../studio/database/tasks/ongoing-tasks/rabbitmq-etl-task)
+- [Studio: Kafka ETL Task](../../../../studio/database/tasks/ongoing-tasks/kafka-etl-task)
+- [Studio: Azure Queue Storage ETL Task](../../../../studio/database/tasks/ongoing-tasks/azure-queue-storage-etl)
diff --git a/Documentation/6.1/Raven.Documentation.Pages/studio/database/tasks/ongoing-tasks/.docs.json b/Documentation/6.1/Raven.Documentation.Pages/studio/database/tasks/ongoing-tasks/.docs.json
index c9c6231889..6afa71e6fe 100644
--- a/Documentation/6.1/Raven.Documentation.Pages/studio/database/tasks/ongoing-tasks/.docs.json
+++ b/Documentation/6.1/Raven.Documentation.Pages/studio/database/tasks/ongoing-tasks/.docs.json
@@ -1,7 +1,7 @@
[
{
"Path": "general-info.markdown",
- "Name": "General Info",
+ "Name": "Overview",
"DiscussionId": "3ebce840-1ea8-4f9e-ae50-0275661a3c1d",
"Mappings": []
},
@@ -42,14 +42,14 @@
"Mappings": []
},
{
- "Path": "kafka-etl-task.markdown",
- "Name": "Kafka ETL Task",
+ "Path": "azure-queue-storage-etl.markdown",
+ "Name": "Azure Queue Storage ETL Task",
"DiscussionId": "89c14b90-9ddd-426d-b0f4-60a49d5865c0",
"Mappings": []
},
{
- "Path": "rabbitmq-etl-task.markdown",
- "Name": "RabbitMQ ETL Task",
+ "Path": "kafka-etl-task.markdown",
+ "Name": "Kafka ETL Task",
"DiscussionId": "89c14b90-9ddd-426d-b0f4-60a49d5865c0",
"Mappings": []
},
@@ -59,6 +59,12 @@
"DiscussionId": "89c14b90-9ddd-426d-b0f4-60a49d5865c0",
"Mappings": []
},
+ {
+ "Path": "rabbitmq-etl-task.markdown",
+ "Name": "RabbitMQ ETL Task",
+ "DiscussionId": "89c14b90-9ddd-426d-b0f4-60a49d5865c0",
+ "Mappings": []
+ },
{
"Path": "rabbitmq-queue-sink.markdown",
"Name": "RabbitMQ Queue Sink",
diff --git a/Documentation/6.1/Raven.Documentation.Pages/studio/database/tasks/ongoing-tasks/azure-queue-storage-etl.markdown b/Documentation/6.1/Raven.Documentation.Pages/studio/database/tasks/ongoing-tasks/azure-queue-storage-etl.markdown
new file mode 100644
index 0000000000..15e1971945
--- /dev/null
+++ b/Documentation/6.1/Raven.Documentation.Pages/studio/database/tasks/ongoing-tasks/azure-queue-storage-etl.markdown
@@ -0,0 +1,220 @@
+# Azure Queue Storage ETL Task
+---
+
+{NOTE: }
+
+* The RavenDB **Azure Queue Storage ETL task** -
+ * **Extracts** selected data from RavenDB documents from specified collections.
+  * **Transforms** the data into JSON objects.
+ * Wraps the JSON objects as [CloudEvents messages](https://cloudevents.io) and **Loads** them to an Azure Queue Storage.
+
+* The Azure Queue Storage ETL task transfers **documents only**.
+ Document extensions like attachments, counters, time series, and revisions are not sent.
+  The maximum message size in Azure Queue Storage is 64KB; documents larger than this will not be loaded.
+
+* The Azure Queue Storage enqueues incoming messages at the tail of a queue.
+ [Azure Functions](https://learn.microsoft.com/en-us/azure/azure-functions/functions-bindings-storage-queue-trigger?tabs=python-v2%2Cisolated-process%2Cnodejs-v4%2Cextensionv5&pivots=programming-language-csharp)
+  can be triggered to access and consume a message once it advances to the head of the queue.
+
+---
+
+* This page explains how to create an Azure Queue Storage ETL task using the Studio.
+  [Learn here](../../../../server/ongoing-tasks/etl/queue-etl/azure-queue) how to define an Azure Queue Storage ETL task using the Client API.
+
+* In this page:
+ * [Open Azure Queue Storage ETL task view](../../../../studio/database/tasks/ongoing-tasks/azure-queue-storage-etl#open-azure-queue-storage-etl-task-view)
+ * [Define Azure Queue Storage ETL task](../../../../studio/database/tasks/ongoing-tasks/azure-queue-storage-etl#define-azure-queue-storage-etl-task)
+ * [Authentication method](../../../../studio/database/tasks/ongoing-tasks/azure-queue-storage-etl#authentication-method)
+ * [Options per queue](../../../../studio/database/tasks/ongoing-tasks/azure-queue-storage-etl#options-per-queue)
+ * [Add transformation script](../../../../studio/database/tasks/ongoing-tasks/azure-queue-storage-etl#add-transformation-script)
+
+{NOTE/}
+
+---
+
+{PANEL: Open Azure Queue Storage ETL task view}
+
+![Ongoing Tasks View](images/queue/add-ongoing-task.png "Add Ongoing Task")
+
+1. **Ongoing Tasks**
+ Click to open the ongoing tasks view.
+2. **Add a Database Task**
+ Click to create a new ongoing task.
+
+---
+
+![Define ETL Task](images/queue/aqs-task-selection.png "Define ETL Task")
+
+{PANEL/}
+
+{PANEL: Define Azure Queue Storage ETL task}
+
+![Define Azure Queue Storage ETL Task](images/queue/aqs-etl-define-task.png "Define Azure Queue Storage ETL Task")
+
+1. **Task Name** (Optional)
+ * Enter a name for your task
+ * If no name is provided, the server will create a name based on the defined connection string name,
+ e.g. *"Queue ETL to <ConStrName>"*
+
+2. **Task State**
+ Select the task state:
+   * Enabled - The task runs in the background, transforming and sending documents as defined in this view.
+   * Disabled - No documents are transformed and sent.
+
+3. **Set Responsible Node** (Optional)
+ * Select a node from the [Database Group](../../../../studio/database/settings/manage-database-group) to be responsible for this task.
+ * If no node is selected, the cluster will assign a responsible node (see [Members Duties](../../../../studio/database/settings/manage-database-group#database-group-topology---members-duties)).
+
+4. **Create new Azure Queue Storage connection String**
+ * The connection string contains the necessary information to connect to an Azure storage account.
+ Toggle OFF to select an existing connection string from the list, or toggle ON to create a new one.
+ * **Name** - Enter a name for the connection string.
+ * **Authentication method** - Select the authentication method by which to connect to an Azure storage account.
+ Learn more about the available authentication methods [below](../../../../studio/database/tasks/ongoing-tasks/azure-queue-storage-etl#authentication-method).
+
+5. **Test Connection**
+ After defining the connection string, click to test the connection to the Azure storage account.
+
+6. **Add Transformation Script**
+   The data that is sent can be filtered and modified by multiple JavaScript transformation scripts that are added to the task.
+ Click to add a transformation script.
+
+7. **Advanced**
+ Click to open the [advanced section](../../../../studio/database/tasks/ongoing-tasks/azure-queue-storage-etl#options-per-queue) where you can configure the deletion of RavenDB documents per queue.
+
+{PANEL/}
+
+{PANEL: Authentication method}
+
+The available authentication methods to an Azure storage account are:
+
+---
+
+* **Connection String**
+
+  * A single string that includes all the options required to connect to the Azure storage account.
+    Learn more about Azure Storage connection strings [here](https://learn.microsoft.com/en-us/azure/storage/common/storage-configure-connection-string).
+    A minimal example of such a connection string is shown below this list.
+ * The following connection string parameters are mandatory:
+ * `AccountName`
+ * `AccountKey`
+ * `DefaultEndpointsProtocol`
+ * `QueueEndpoint` (when using http protocol)
+
+ ![Connection string method](images/queue/aqs-etl-conn-str-method.png "Connection string method")
+
+* **Entra ID**
+
+ * Use the Entra ID authorization method to achieve enhanced security by leveraging Microsoft Entra’s robust identity solutions.
+ * This approach minimizes the risks associated with exposed credentials commonly found in connection strings and enables more granular control through [Role-Based Access Controls](https://learn.microsoft.com/en-us/azure/role-based-access-control/).
+
+ ![Entra ID method](images/queue/aqs-etl-entra-id-method.png "Entra ID method")
+
+* **Passwordless**
+
+ * This authorization method requires the machine to be pre-authorized and can only be used in self-hosted mode.
+ * Passwordless authorization works only when the account on the machine is assigned the Storage Account Queue Data Contributor role; the Contributor role alone is inadequate.
+
+ ![Passwordless method](images/queue/aqs-etl-passwordless-method.png "passwordless method")
+
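+As a minimal illustration of the mandatory Connection String parameters listed above,
+the sketch below composes such a string as a C# value; the account name and key are hypothetical
+placeholders and must be replaced with your own storage account's values.
+
+{CODE-BLOCK:csharp}
+// Hypothetical values - replace the account name and key with your own
+var azureQueueStorageConnectionString =
+    "DefaultEndpointsProtocol=https;" +
+    "AccountName=mystorageaccount;" +
+    "AccountKey=<base64-account-key>";
+{CODE-BLOCK/}
+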
+{PANEL/}
+
+{PANEL: Options per queue}
+
+You can configure the ETL process to delete documents from RavenDB that have already been sent to the queues.
+
+![Options per queue](images/queue/aqs-etl-advanced.png "Advanced options")
+
+1. **Add queue options**
+ Click to add a per-queue option.
+2. **Queue Name**
+   Enter the name of the Azure Queue Storage queue that the documents are loaded to.
+3. **Delete processed documents**
+   Enable this option to remove documents from the RavenDB database once they have been processed and loaded to the Azure Queue Storage queue.
+4. **Delete queue option**
+ Click to delete the queue option from the list.
+
+{PANEL/}
+
+{PANEL: Add transformation script}
+
+![Add or Edit Transformation Script](images/queue/add-or-edit-script.png "Add or edit transformation script")
+
+1. **Add transformation script**
+ Click to add a new transformation script that will process documents from RavenDB collection(s).
+2. **Edit transformation script**
+ Click to edit this script.
+3. **Delete script**
+ Click to remove this script.
+
+---
+
+![Define Transformation Script](images/queue/aqs-etl-transformation-script.png "Define transform script")
+
+1. **Script Name** - Enter a name for the script (Optional).
+ A default name will be generated if no name is entered, e.g. _Script_1_.
+
+2. **Script** - Edit the transformation script.
+ Sample script:
+
+ {CODE-BLOCK:javascript}
+ {
+ // Define a "document object" whose contents will be extracted from RavenDB documents
+ // and sent to the Azure Queue Storage. e.g. 'var orderData':
+ var orderData = {
+ // Verify that one of the properties of this object is given the value 'id(this)'.
+ Id: id(this), // Property with RavenDB document ID
+ OrderLinesCount: this.Lines.length,
+ TotalCost: 0
+ };
+
+ for (var i = 0; i < this.Lines.length; i++) {
+ var line = this.Lines[i];
+ var cost = (line.Quantity * line.PricePerUnit) * ( 1 - line.Discount);
+ orderData.TotalCost += cost;
+ }
+
+ // Use the 'loadTo' method
+ // to transfer the document object to the Azure Queue destination.
+ loadToOrders(orderData, { // Load to a Queue by the name of "Orders" with optional params
+ Id: id(this),
+ Type: 'com.github.users',
+ Source: '/registrations/direct-signup'
+ });
+ }
+ {CODE-BLOCK/}
+
+3. **Syntax** - Click for a transformation script syntax sample.
+
+4. **Collections**
+ * **Select (or enter) a collection**
+ Type or select the names of the RavenDB collections your script is using.
+ * **Collections Selected**
+ A list of collections that were already selected.
+
+5. **Apply script to documents from beginning of time (Reset)**
+ * This toggle is available only when editing an existing script.
+ * When this option is **enabled**:
+ The script will be executed over all existing documents in the specified collections the first time the task runs.
+ * When this option is **disabled**:
+ The script will be executed only over new and modified documents.
+
+6. **Add/Update** - Click to add a new script or update an existing script.
+ **Cancel** - Click to cancel your changes.
+
+7. **Test Script** - Click to test the transformation script.
+
+{PANEL/}
+
+## Related Articles
+
+### Server
+
+- [ETL Basics](../../../../server/ongoing-tasks/etl/basics)
+- [Queue ETL Overview](../../../../server/ongoing-tasks/etl/queue-etl/overview)
+- [Kafka ETL](../../../../server/ongoing-tasks/etl/queue-etl/kafka)
+- [RabbitMQ ETL](../../../../server/ongoing-tasks/etl/queue-etl/rabbit-mq)
+- [Azure Queue Storage ETL](../../../../server/ongoing-tasks/etl/queue-etl/azure-queue)
+
+### Studio
+
+- [Studio: Kafka ETL Task](../../../../studio/database/tasks/ongoing-tasks/kafka-etl-task)
diff --git a/Documentation/6.1/Raven.Documentation.Pages/studio/database/tasks/ongoing-tasks/general-info.markdown b/Documentation/6.1/Raven.Documentation.Pages/studio/database/tasks/ongoing-tasks/general-info.markdown
new file mode 100644
index 0000000000..8b574c006f
--- /dev/null
+++ b/Documentation/6.1/Raven.Documentation.Pages/studio/database/tasks/ongoing-tasks/general-info.markdown
@@ -0,0 +1,109 @@
+# Ongoing Tasks - Overview
+---
+
+{NOTE: }
+
+* Ongoing tasks are **work tasks** defined for the database.
+
+* Each task is assigned a responsible node from the [Database Group nodes](../../../../studio/database/settings/manage-database-group) to handle the work.
+ * If not specified by the user, the cluster decides which node will be responsible for the task. See [Members Duties](../../../../studio/database/settings/manage-database-group#database-group-topology---members-duties).
+  * If a node is down, the cluster will reassign the work to another node for the duration of the outage.
+
+* Once enabled, an **ongoing task** runs in the background,
+ and its responsible node executes the defined task work whenever relevant data changes occur.
+
+* In this page:
+ * [The ongoing tasks](../../../../studio/database/tasks/ongoing-tasks/general-info#the-ongoing-tasks)
+ * [The ongoing tasks list - View](../../../../studio/database/tasks/ongoing-tasks/general-info#the-ongoing-tasks-list---view)
+ * [The ongoing tasks list - Actions](../../../../studio/database/tasks/ongoing-tasks/general-info#the-ongoing-tasks-list---actions)
+
+{NOTE/}
+
+---
+
+{PANEL: The ongoing tasks}
+
+The available ongoing tasks are:
+
+![Figure 3. Ongoing Tasks New Task](images/task-list-1.png "Add ongoing task")
+
+**Replication:**
+
+* **[External Replication](../../../../studio/database/tasks/ongoing-tasks/external-replication-task)**
+ Create a live replica of your database in another RavenDB database in another cluster.
+ This replication is initiated by the source database.
+* **[Hub/Sink Replication](../../../../studio/database/tasks/ongoing-tasks/hub-sink-replication/overview)**
+ Create a live replica of your database, or a part of it, in another RavenDB database.
+ The replication is initiated by the *Sink* task.
+ The replication can be *bidirectional* or limited to a *single direction*.
+ The replication can be *filtered* to allow the delivery of selected documents.
+
+**Backups & Subscriptions:**
+
+* **[Backup](../../../../studio/database/tasks/backup-task)**
+ Schedule a backup or a snapshot of the database at a specified point in time.
+* **[Subscription](../../../../client-api/data-subscriptions/what-are-data-subscriptions)**
+ Send batches of documents that match a pre-defined query for processing on a client.
+
+**ETL (RavenDB => Target):**
+
+* **[RavenDB ETL](../../../../studio/database/tasks/ongoing-tasks/ravendb-etl-task)**
+ Write all or chosen database documents to another RavenDB database.
+ Data can be filtered and modified with transformation scripts.
+* **[SQL ETL](../../../../server/ongoing-tasks/etl/sql)**
+ Write the database data to a relational database.
+ Data can be filtered and modified with transformation scripts.
+* **[OLAP ETL](../../../../studio/database/tasks/ongoing-tasks/olap-etl-task)**
+ Convert database data to the _Parquet_ file format for OLAP purposes.
+ Data can be filtered and modified with transformation scripts.
+* **[Elasticsearch ETL](../../../../studio/database/tasks/ongoing-tasks/elasticsearch-etl-task)**
+ Write all or chosen database documents to an Elasticsearch destination.
+ Data can be filtered and modified with transformation scripts.
+* **[Kafka ETL](../../../../studio/database/tasks/ongoing-tasks/kafka-etl-task)**
+ Write all or chosen database documents to topics of a Kafka broker.
+ Data can be filtered and modified with transformation scripts.
+* **[RabbitMQ ETL](../../../../studio/database/tasks/ongoing-tasks/rabbitmq-etl-task)**
+ Write all or chosen database documents to a RabbitMQ exchange.
+ Data can be filtered and modified with transformation scripts.
+* **[Azure Queue Storage ETL](../../../../studio/database/tasks/ongoing-tasks/azure-queue-storage-etl)**
+ Write all or chosen database documents to Azure Queue Storage.
+ Data can be filtered and modified with transformation scripts.
+
+**Sink (Source => RavenDB):**
+
+* **[Kafka Sink](../../../../studio/database/tasks/ongoing-tasks/kafka-queue-sink)**
+ Consume and process incoming messages from Kafka topics.
+ Add scripts to Load, Put, or Delete documents in RavenDB based on the incoming messages.
+* **[RabbitMQ Sink](../../../../studio/database/tasks/ongoing-tasks/rabbitmq-queue-sink)**
+ Consume and process incoming messages from RabbitMQ queues.
+ Add scripts to Load, Put, or Delete documents in RavenDB based on the incoming messages.
+
+{PANEL/}
+
+{PANEL: The ongoing tasks list - View}
+
+![Figure 1. Ongoing Tasks View](images/task-list-2.png "Ongoing tasks list for database DB1")
+
+1. Navigate to **Tasks > Ongoing Tasks**
+
+2. The list of the current tasks defined for the database.
+
+3. The task name.
+
+4. The node that is currently responsible for executing the task.
+
+{PANEL/}
+
+{PANEL: The ongoing tasks list - Actions}
+
+![Figure 2. Ongoing Tasks Actions](images/task-list-3.png "Ongoing tasks - actions")
+
+1. **Add Task** - Create a new task for the database.
+2. **Enable / Disable** the task.
+3. **Details** - Click to see a short task details summary in this view.
+4. **Edit** - Click to edit the task.
+5. **Delete** the task.
+
+The ongoing tasks can also be managed via the Client API. See [Ongoing tasks operations](../../../../client-api/operations/maintenance/ongoing-tasks/ongoing-task-operations).
+
+{PANEL/}
diff --git a/Documentation/6.1/Raven.Documentation.Pages/studio/database/tasks/ongoing-tasks/images/queue/add-ongoing-task.png b/Documentation/6.1/Raven.Documentation.Pages/studio/database/tasks/ongoing-tasks/images/queue/add-ongoing-task.png
new file mode 100644
index 0000000000..126649b259
Binary files /dev/null and b/Documentation/6.1/Raven.Documentation.Pages/studio/database/tasks/ongoing-tasks/images/queue/add-ongoing-task.png differ
diff --git a/Documentation/6.1/Raven.Documentation.Pages/studio/database/tasks/ongoing-tasks/images/queue/add-or-edit-script.png b/Documentation/6.1/Raven.Documentation.Pages/studio/database/tasks/ongoing-tasks/images/queue/add-or-edit-script.png
new file mode 100644
index 0000000000..515079735d
Binary files /dev/null and b/Documentation/6.1/Raven.Documentation.Pages/studio/database/tasks/ongoing-tasks/images/queue/add-or-edit-script.png differ
diff --git a/Documentation/6.1/Raven.Documentation.Pages/studio/database/tasks/ongoing-tasks/images/queue/aqs-etl-advanced.png b/Documentation/6.1/Raven.Documentation.Pages/studio/database/tasks/ongoing-tasks/images/queue/aqs-etl-advanced.png
new file mode 100644
index 0000000000..b3f8f8a04b
Binary files /dev/null and b/Documentation/6.1/Raven.Documentation.Pages/studio/database/tasks/ongoing-tasks/images/queue/aqs-etl-advanced.png differ
diff --git a/Documentation/6.1/Raven.Documentation.Pages/studio/database/tasks/ongoing-tasks/images/queue/aqs-etl-conn-str-method.png b/Documentation/6.1/Raven.Documentation.Pages/studio/database/tasks/ongoing-tasks/images/queue/aqs-etl-conn-str-method.png
new file mode 100644
index 0000000000..c58c437e25
Binary files /dev/null and b/Documentation/6.1/Raven.Documentation.Pages/studio/database/tasks/ongoing-tasks/images/queue/aqs-etl-conn-str-method.png differ
diff --git a/Documentation/6.1/Raven.Documentation.Pages/studio/database/tasks/ongoing-tasks/images/queue/aqs-etl-define-task.png b/Documentation/6.1/Raven.Documentation.Pages/studio/database/tasks/ongoing-tasks/images/queue/aqs-etl-define-task.png
new file mode 100644
index 0000000000..3db0e36e41
Binary files /dev/null and b/Documentation/6.1/Raven.Documentation.Pages/studio/database/tasks/ongoing-tasks/images/queue/aqs-etl-define-task.png differ
diff --git a/Documentation/6.1/Raven.Documentation.Pages/studio/database/tasks/ongoing-tasks/images/queue/aqs-etl-entra-id-method.png b/Documentation/6.1/Raven.Documentation.Pages/studio/database/tasks/ongoing-tasks/images/queue/aqs-etl-entra-id-method.png
new file mode 100644
index 0000000000..b42638f85f
Binary files /dev/null and b/Documentation/6.1/Raven.Documentation.Pages/studio/database/tasks/ongoing-tasks/images/queue/aqs-etl-entra-id-method.png differ
diff --git a/Documentation/6.1/Raven.Documentation.Pages/studio/database/tasks/ongoing-tasks/images/queue/aqs-etl-passwordless-method.png b/Documentation/6.1/Raven.Documentation.Pages/studio/database/tasks/ongoing-tasks/images/queue/aqs-etl-passwordless-method.png
new file mode 100644
index 0000000000..d08495d6f5
Binary files /dev/null and b/Documentation/6.1/Raven.Documentation.Pages/studio/database/tasks/ongoing-tasks/images/queue/aqs-etl-passwordless-method.png differ
diff --git a/Documentation/6.1/Raven.Documentation.Pages/studio/database/tasks/ongoing-tasks/images/queue/aqs-etl-transformation-script.png b/Documentation/6.1/Raven.Documentation.Pages/studio/database/tasks/ongoing-tasks/images/queue/aqs-etl-transformation-script.png
new file mode 100644
index 0000000000..f40b5509b2
Binary files /dev/null and b/Documentation/6.1/Raven.Documentation.Pages/studio/database/tasks/ongoing-tasks/images/queue/aqs-etl-transformation-script.png differ
diff --git a/Documentation/6.1/Raven.Documentation.Pages/studio/database/tasks/ongoing-tasks/images/queue/aqs-task-selection.png b/Documentation/6.1/Raven.Documentation.Pages/studio/database/tasks/ongoing-tasks/images/queue/aqs-task-selection.png
new file mode 100644
index 0000000000..96ad8c8efc
Binary files /dev/null and b/Documentation/6.1/Raven.Documentation.Pages/studio/database/tasks/ongoing-tasks/images/queue/aqs-task-selection.png differ
diff --git a/Documentation/6.1/Raven.Documentation.Pages/studio/database/tasks/ongoing-tasks/images/queue/ongoing-tasks.png b/Documentation/6.1/Raven.Documentation.Pages/studio/database/tasks/ongoing-tasks/images/queue/ongoing-tasks.png
new file mode 100644
index 0000000000..f9099c453d
Binary files /dev/null and b/Documentation/6.1/Raven.Documentation.Pages/studio/database/tasks/ongoing-tasks/images/queue/ongoing-tasks.png differ
diff --git a/Documentation/6.1/Raven.Documentation.Pages/studio/database/tasks/ongoing-tasks/images/task-list-1.png b/Documentation/6.1/Raven.Documentation.Pages/studio/database/tasks/ongoing-tasks/images/task-list-1.png
new file mode 100644
index 0000000000..00f120d0be
Binary files /dev/null and b/Documentation/6.1/Raven.Documentation.Pages/studio/database/tasks/ongoing-tasks/images/task-list-1.png differ
diff --git a/Documentation/6.1/Raven.Documentation.Pages/studio/database/tasks/ongoing-tasks/images/task-list-2.png b/Documentation/6.1/Raven.Documentation.Pages/studio/database/tasks/ongoing-tasks/images/task-list-2.png
new file mode 100644
index 0000000000..08d858233c
Binary files /dev/null and b/Documentation/6.1/Raven.Documentation.Pages/studio/database/tasks/ongoing-tasks/images/task-list-2.png differ
diff --git a/Documentation/6.1/Raven.Documentation.Pages/studio/database/tasks/ongoing-tasks/images/task-list-3.png b/Documentation/6.1/Raven.Documentation.Pages/studio/database/tasks/ongoing-tasks/images/task-list-3.png
new file mode 100644
index 0000000000..8aaedb49e3
Binary files /dev/null and b/Documentation/6.1/Raven.Documentation.Pages/studio/database/tasks/ongoing-tasks/images/task-list-3.png differ
diff --git a/Documentation/6.1/Samples/csharp/Raven.Documentation.Samples/ClientApi/Operations/Maintenance/ConnectionStrings/AddConnectionStrings.cs b/Documentation/6.1/Samples/csharp/Raven.Documentation.Samples/ClientApi/Operations/Maintenance/ConnectionStrings/AddConnectionStrings.cs
new file mode 100644
index 0000000000..ef4f606e87
--- /dev/null
+++ b/Documentation/6.1/Samples/csharp/Raven.Documentation.Samples/ClientApi/Operations/Maintenance/ConnectionStrings/AddConnectionStrings.cs
@@ -0,0 +1,219 @@
+using Raven.Client.Documents;
+using Raven.Client.Documents.Operations.Backups;
+using Raven.Client.Documents.Operations.ConnectionStrings;
+using Raven.Client.Documents.Operations.ETL;
+using Raven.Client.Documents.Operations.ETL.ElasticSearch;
+using Raven.Client.Documents.Operations.ETL.OLAP;
+using Raven.Client.Documents.Operations.ETL.SQL;
+
+namespace Raven.Documentation.Samples.ClientApi.Operations.Maintenance.ConnectionStrings
+{
+ public class AddConnectionStrings
+ {
+ public AddConnectionStrings()
+ {
+ using (var store = new DocumentStore())
+ {
+ #region add_raven_connection_string
+ // Define a connection string to a RavenDB database destination
+ // ============================================================
+ var ravenDBConStr = new RavenConnectionString
+ {
+ Name = "ravendb-connection-string-name",
+ Database = "target-database-name",
+ TopologyDiscoveryUrls = new[] { "https://rvn2:8080" }
+ };
+
+ // Deploy (send) the connection string to the server via the PutConnectionStringOperation
+ // ======================================================================================
+                var PutConnectionStringOp = new PutConnectionStringOperation<RavenConnectionString>(ravenDBConStr);
+ PutConnectionStringResult connectionStringResult = store.Maintenance.Send(PutConnectionStringOp);
+ #endregion
+ }
+
+ using (var store = new DocumentStore())
+ {
+ #region add_sql_connection_string
+ // Define a connection string to a SQL database destination
+ // ========================================================
+ var sqlConStr = new SqlConnectionString
+ {
+ Name = "sql-connection-string-name",
+
+ // Define destination factory name
+ FactoryName = "MySql.Data.MySqlClient",
+
+ // Define the destination database
+ // May also need to define authentication and encryption parameters
+ // By default, encrypted databases are sent over encrypted channels
+ ConnectionString = "host=127.0.0.1;user=root;database=Northwind"
+ };
+
+ // Deploy (send) the connection string to the server via the PutConnectionStringOperation
+ // ======================================================================================
+                var PutConnectionStringOp = new PutConnectionStringOperation<SqlConnectionString>(sqlConStr);
+ PutConnectionStringResult connectionStringResult = store.Maintenance.Send(PutConnectionStringOp);
+ #endregion
+ }
+
+ using (var store = new DocumentStore())
+ {
+ #region add_olap_connection_string_1
+ // Define a connection string to a local OLAP destination
+ // ======================================================
+ OlapConnectionString olapConStr = new OlapConnectionString
+ {
+ Name = "olap-connection-string-name",
+ LocalSettings = new LocalSettings
+ {
+ FolderPath = "path-to-local-folder"
+ }
+ };
+
+ // Deploy (send) the connection string to the server via the PutConnectionStringOperation
+ // ======================================================================================
+                var PutConnectionStringOp = new PutConnectionStringOperation<OlapConnectionString>(olapConStr);
+ PutConnectionStringResult connectionStringResult = store.Maintenance.Send(PutConnectionStringOp);
+ #endregion
+ }
+
+ using (var store = new DocumentStore())
+ {
+ #region add_olap_connection_string_2
+ // Define a connection string to an AWS OLAP destination
+ // =====================================================
+ var olapConStr = new OlapConnectionString
+ {
+ Name = "myOlapConnectionStringName",
+ S3Settings = new S3Settings
+ {
+ BucketName = "myBucket",
+ RemoteFolderName = "my/folder/name",
+ AwsAccessKey = "myAccessKey",
+ AwsSecretKey = "myPassword",
+ AwsRegionName = "us-east-1"
+ }
+ };
+
+ // Deploy (send) the connection string to the server via the PutConnectionStringOperation
+ // ======================================================================================
+                var PutConnectionStringOp = new PutConnectionStringOperation<OlapConnectionString>(olapConStr);
+ PutConnectionStringResult connectionStringResult = store.Maintenance.Send(PutConnectionStringOp);
+ #endregion
+ }
+
+ using (var store = new DocumentStore())
+ {
+ #region add_elasticsearch_connection_string
+ // Define a connection string to an Elasticsearch destination
+ // ==========================================================
+ var elasticSearchConStr = new ElasticSearchConnectionString
+ {
+ Name = "elasticsearch-connection-string-name",
+
+ // Elasticsearch Nodes URLs
+ Nodes = new[] { "http://localhost:9200" },
+
+ // Authentication Method
+ Authentication = new Raven.Client.Documents.Operations.ETL.ElasticSearch.Authentication
+ {
+ Basic = new BasicAuthentication
+ {
+ Username = "John",
+ Password = "32n4j5kp8"
+ }
+ }
+ };
+
+ // Deploy (send) the connection string to the server via the PutConnectionStringOperation
+ // ======================================================================================
+ var PutConnectionStringOp =
+                    new PutConnectionStringOperation<ElasticSearchConnectionString>(elasticSearchConStr);
+ PutConnectionStringResult connectionStringResult = store.Maintenance.Send(PutConnectionStringOp);
+ #endregion
+ }
+ }
+
+ public class Foo
+ {
+ #region raven_connection_string
+ public class RavenConnectionString : ConnectionString
+ {
+ public override ConnectionStringType Type => ConnectionStringType.Raven;
+
+ public string Database { get; set; } // Target database name
+ public string[] TopologyDiscoveryUrls; // List of server urls in the target RavenDB cluster
+ }
+ #endregion
+
+ #region sql_connection_string
+ public class SqlConnectionString : ConnectionString
+ {
+ public override ConnectionStringType Type => ConnectionStringType.Sql;
+
+ public string ConnectionString { get; set; }
+ public string FactoryName { get; set; }
+ }
+ #endregion
+
+ #region olap_connection_string
+ public class OlapConnectionString : ConnectionString
+ {
+ public override ConnectionStringType Type => ConnectionStringType.Olap;
+
+ public LocalSettings LocalSettings { get; set; }
+ public S3Settings S3Settings { get; set; }
+ public AzureSettings AzureSettings { get; set; }
+ public GlacierSettings GlacierSettings { get; set; }
+ public GoogleCloudSettings GoogleCloudSettings { get; set; }
+ public FtpSettings FtpSettings { get; set; }
+ }
+ #endregion
+
+ #region elasticsearch_connection_string
+ public class ElasticsearchConnectionString : ConnectionString
+ {
+ public override ConnectionStringType Type => ConnectionStringType.ElasticSearch;
+
+ public string Nodes { get; set; }
+ public string Authentication { get; set; }
+ public string Basic { get; set; }
+ public string Username { get; set; }
+ public string Password { get; set; }
+ }
+ #endregion
+
+ #region connection_string_class
+ // All the connection string class types inherit from this abstract ConnectionString class:
+ // ========================================================================================
+
+ public abstract class ConnectionString
+ {
+ // A name for the connection string
+ public string Name { get; set; }
+
+ // The connection string type
+ public abstract ConnectionStringType Type { get; }
+ }
+
+ public enum ConnectionStringType
+ {
+ Raven,
+ Sql,
+ Olap,
+ ElasticSearch,
+ Queue
+ }
+ #endregion
+
+ private interface IFoo
+ {
+ /*
+ #region put_connection_string
+ public PutConnectionStringOperation(T connectionString)
+ #endregion
+ */
+ }
+ }
+ }
+}
diff --git a/Documentation/6.1/Samples/csharp/Raven.Documentation.Samples/ClientApi/Operations/Maintenance/ConnectionStrings/GetConnectionStrings.cs b/Documentation/6.1/Samples/csharp/Raven.Documentation.Samples/ClientApi/Operations/Maintenance/ConnectionStrings/GetConnectionStrings.cs
new file mode 100644
index 0000000000..a100dd4a38
--- /dev/null
+++ b/Documentation/6.1/Samples/csharp/Raven.Documentation.Samples/ClientApi/Operations/Maintenance/ConnectionStrings/GetConnectionStrings.cs
@@ -0,0 +1,112 @@
+using System.Collections.Generic;
+using Raven.Client.Documents;
+using Raven.Client.Documents.Operations.ConnectionStrings;
+using Raven.Client.Documents.Operations.ETL;
+using Raven.Client.Documents.Operations.ETL.ElasticSearch;
+using Raven.Client.Documents.Operations.ETL.OLAP;
+using Raven.Client.Documents.Operations.ETL.Queue;
+using Raven.Client.Documents.Operations.ETL.SQL;
+
+namespace Raven.Documentation.Samples.ClientApi.Operations.Maintenance.ConnectionStrings
+{
+ public class GetConnectionStrings
+ {
+ public GetConnectionStrings()
+ {
+ #region get_connection_string_by_name
+ using (var store = new DocumentStore())
+ {
+ // Request to get a specific connection string, pass its name and type:
+ // ====================================================================
+ var getRavenConStrOp =
+ new GetConnectionStringsOperation("ravendb-connection-string-name", ConnectionStringType.Raven);
+
+ GetConnectionStringsResult connectionStrings = store.Maintenance.Send(getRavenConStrOp);
+
+ // Access results:
+ // ===============
+                Dictionary<string, RavenConnectionString> ravenConnectionStrings =
+ connectionStrings.RavenConnectionStrings;
+
+ var numberOfRavenConnectionStrings = ravenConnectionStrings.Count;
+ var ravenConStr = ravenConnectionStrings["ravendb-connection-string-name"];
+
+ var targetUrls = ravenConStr.TopologyDiscoveryUrls;
+ var targetDatabase = ravenConStr.Database;
+ }
+ #endregion
+
+ #region get_all_connection_strings
+ using (var store = new DocumentStore())
+ {
+ // Get all connection strings:
+ // ===========================
+ var getAllConStrOp = new GetConnectionStringsOperation();
+ GetConnectionStringsResult allConnectionStrings = store.Maintenance.Send(getAllConStrOp);
+
+ // Access results:
+ // ===============
+
+ // RavenDB
+                Dictionary<string, RavenConnectionString> ravenConnectionStrings =
+ allConnectionStrings.RavenConnectionStrings;
+
+ // SQL
+                Dictionary<string, SqlConnectionString> sqlConnectionStrings =
+ allConnectionStrings.SqlConnectionStrings;
+
+ // OLAP
+                Dictionary<string, OlapConnectionString> olapConnectionStrings =
+ allConnectionStrings.OlapConnectionStrings;
+
+ // Elasticsearch
+                Dictionary<string, ElasticSearchConnectionString> elasticsearchConnectionStrings =
+ allConnectionStrings.ElasticSearchConnectionStrings;
+
+ // Access the Queue ETL connection strings in a similar manner:
+ // ============================================================
+                Dictionary<string, QueueConnectionString> queueConnectionStrings =
+ allConnectionStrings.QueueConnectionStrings;
+
+ var kafkaConStr = queueConnectionStrings["kafka-connection-string-name"];
+ }
+ #endregion
+ }
+
+ private interface IFoo
+ {
+ /*
+ #region syntax_1
+ public GetConnectionStringsOperation()
+ public GetConnectionStringsOperation(string connectionStringName, ConnectionStringType type)
+ #endregion
+ */
+
+ /*
+ #region syntax_2
+ public enum ConnectionStringType
+ {
+ Raven,
+ Sql,
+ Olap,
+ ElasticSearch,
+ Queue
+ }
+ #endregion
+ */
+
+ /*
+ #region syntax_3
+ public class GetConnectionStringsResult
+ {
+                public Dictionary<string, RavenConnectionString> RavenConnectionStrings { get; set; }
+                public Dictionary<string, SqlConnectionString> SqlConnectionStrings { get; set; }
+                public Dictionary<string, OlapConnectionString> OlapConnectionStrings { get; set; }
+                public Dictionary<string, ElasticSearchConnectionString> ElasticSearchConnectionStrings { get; set; }
+                public Dictionary<string, QueueConnectionString> QueueConnectionStrings { get; set; }
+ }
+ #endregion
+ */
+ }
+ }
+}
diff --git a/Documentation/6.1/Samples/csharp/Raven.Documentation.Samples/ClientApi/Operations/Maintenance/ConnectionStrings/RemoveConnectionStrings.cs b/Documentation/6.1/Samples/csharp/Raven.Documentation.Samples/ClientApi/Operations/Maintenance/ConnectionStrings/RemoveConnectionStrings.cs
new file mode 100644
index 0000000000..917f0db17c
--- /dev/null
+++ b/Documentation/6.1/Samples/csharp/Raven.Documentation.Samples/ClientApi/Operations/Maintenance/ConnectionStrings/RemoveConnectionStrings.cs
@@ -0,0 +1,41 @@
+using Raven.Client.Documents;
+using Raven.Client.Documents.Operations.ConnectionStrings;
+using Raven.Client.Documents.Operations.ETL;
+namespace Raven.Documentation.Samples.ClientApi.Operations
+{
+ public class RemoveConnectionStrings
+ {
+ public RemoveConnectionStrings()
+ {
+ using (var store = new DocumentStore())
+ {
+ #region remove_raven_connection_string
+ var ravenConnectionString = new RavenConnectionString()
+ {
+ // Note:
+ // Only the 'Name' property of the connection string is needed for the remove operation.
+ // Other properties are not considered.
+ Name = "ravendb-connection-string-name"
+ };
+
+ // Define the remove connection string operation,
+ // pass the connection string to be removed.
+ var removeConStrOp
+                    = new RemoveConnectionStringOperation<RavenConnectionString>(ravenConnectionString);
+
+ // Execute the operation by passing it to Maintenance.Send
+ store.Maintenance.Send(removeConStrOp);
+ #endregion
+ }
+ }
+
+ private interface IFoo
+ {
+ /*
+ #region remove_connection_string
+ public RemoveConnectionStringOperation(T connectionString)
+ #endregion
+ */
+ }
+ }
+}
diff --git a/Documentation/6.1/Samples/csharp/Raven.Documentation.Samples/ClientApi/Operations/Maintenance/Etl/AddEtl.cs b/Documentation/6.1/Samples/csharp/Raven.Documentation.Samples/ClientApi/Operations/Maintenance/Etl/AddEtl.cs
new file mode 100644
index 0000000000..c02b8c4b09
--- /dev/null
+++ b/Documentation/6.1/Samples/csharp/Raven.Documentation.Samples/ClientApi/Operations/Maintenance/Etl/AddEtl.cs
@@ -0,0 +1,219 @@
+using Raven.Client.Documents;
+using Raven.Client.Documents.Operations.ETL;
+using Raven.Client.Documents.Operations.ETL.ElasticSearch;
+using Raven.Client.Documents.Operations.ETL.OLAP;
+using Raven.Client.Documents.Operations.ETL.SQL;
+
+namespace Raven.Documentation.Samples.ClientApi.Operations.Maintenance.Etl
+{
+ public class AddEtl
+ {
+ public AddEtl()
+ {
+ using (var store = new DocumentStore())
+ {
+ #region add_raven_etl
+ // Define the RavenDB ETL task configuration object
+ // ================================================
+ var ravenEtlConfig = new RavenEtlConfiguration
+ {
+ Name = "task-name",
+ ConnectionStringName = "raven-connection-string-name",
+ Transforms =
+ {
+ new Transformation
+ {
+ // The script name
+ Name = "script-name",
+
+ // RavenDB collections the script uses
+ Collections = { "Employees" },
+
+ // The transformation script
+ Script = @"loadToEmployees ({
+ Name: this.FirstName + ' ' + this.LastName,
+ Title: this.Title
+ });"
+ }
+ },
+
+ // Do not prevent task failover to another node (optional)
+ PinToMentorNode = false
+ };
+
+ // Define the AddEtlOperation
+ // ==========================
+                var operation = new AddEtlOperation<RavenConnectionString>(ravenEtlConfig);
+
+ // Execute the operation by passing it to Maintenance.Send
+ // =======================================================
+ AddEtlOperationResult result = store.Maintenance.Send(operation);
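+
+                // The ID of the newly added ETL task can be read from the result
+                // (assuming the TaskId property on AddEtlOperationResult):
+                var newTaskId = result.TaskId;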
+ #endregion
+ }
+
+ using (var store = new DocumentStore())
+ {
+ #region add_sql_etl
+ // Define the SQL ETL task configuration object
+ // ============================================
+ var sqlEtlConfig = new SqlEtlConfiguration
+ {
+ Name = "task-name",
+ ConnectionStringName = "sql-connection-string-name",
+ SqlTables =
+ {
+ new SqlEtlTable {TableName = "Orders", DocumentIdColumn = "Id", InsertOnlyMode = false},
+ new SqlEtlTable {TableName = "OrderLines", DocumentIdColumn = "OrderId", InsertOnlyMode = false},
+ },
+ Transforms =
+ {
+ new Transformation
+ {
+ Name = "script-name",
+ Collections = { "Orders" },
+ Script = @"var orderData = {
+ Id: id(this),
+ OrderLinesCount: this.Lines.length,
+ TotalCost: 0
+ };
+
+ for (var i = 0; i < this.Lines.length; i++) {
+ var line = this.Lines[i];
+ orderData.TotalCost += line.PricePerUnit;
+
+ // Load to SQL table 'OrderLines'
+ loadToOrderLines({
+ OrderId: id(this),
+ Qty: line.Quantity,
+ Product: line.Product,
+ Cost: line.PricePerUnit
+ });
+ }
+ orderData.TotalCost = Math.round(orderData.TotalCost * 100) / 100;
+
+ // Load to SQL table 'Orders'
+ loadToOrders(orderData)"
+ }
+ },
+
+ // Do not prevent task failover to another node (optional)
+ PinToMentorNode = false
+ };
+
+ // Define the AddEtlOperation
+ // ===========================
+                var operation = new AddEtlOperation<SqlConnectionString>(sqlEtlConfig);
+
+ // Execute the operation by passing it to Maintenance.Send
+ // =======================================================
+ AddEtlOperationResult result = store.Maintenance.Send(operation);
+ #endregion
+ }
+
+ using (var store = new DocumentStore())
+ {
+ #region add_olap_etl
+ // Define the OLAP ETL task configuration object
+ // =============================================
+ var olapEtlConfig = new OlapEtlConfiguration
+ {
+ Name = "task-name",
+ ConnectionStringName = "olap-connection-string-name",
+ Transforms =
+ {
+ new Transformation
+ {
+ Name = "script-name",
+ Collections = {"Orders"},
+ Script = @"var orderDate = new Date(this.OrderedAt);
+ var year = orderDate.getFullYear();
+ var month = orderDate.getMonth();
+ var key = new Date(year, month);
+ loadToOrders(key, {
+ Company : this.Company,
+ ShipVia : this.ShipVia
+ })"
+ }
+ }
+ };
+
+ // Define the AddEtlOperation
+ // ==========================
+                var operation = new AddEtlOperation<OlapConnectionString>(olapEtlConfig);
+
+ // Execute the operation by passing it to Maintenance.Send
+ // =======================================================
+ AddEtlOperationResult result = store.Maintenance.Send(operation);
+ #endregion
+ }
+
+ using (var store = new DocumentStore())
+ {
+ #region add_elasticsearch_etl
+ // Define the Elasticsearch ETL task configuration object
+ // ======================================================
+ var elasticsearchEtlConfig = new ElasticSearchEtlConfiguration
+ {
+ Name = "task-name",
+ ConnectionStringName = "elasticsearch-connection-string-name",
+ ElasticIndexes =
+ {
+ // Define Elasticsearch Indexes
+ new ElasticSearchIndex
+ {
+ // Elasticsearch Index name
+ IndexName = "orders",
+ // The Elasticsearch document property that will contain the source RavenDB document id.
+ // Make sure this property is also defined inside the transform script.
+ DocumentIdProperty = "DocId",
+ InsertOnlyMode = false
+ },
+ new ElasticSearchIndex
+ {
+ IndexName = "lines",
+ DocumentIdProperty = "OrderLinesCount",
+ // If true, don't send _delete_by_query before appending docs
+ InsertOnlyMode = true
+ }
+ },
+ Transforms =
+ {
+ new Transformation()
+ {
+ Collections = { "Orders" },
+ Script = @"var orderData = {
+ DocId: id(this),
+ OrderLinesCount: this.Lines.length,
+ TotalCost: 0
+ };
+
+ // Write the `orderData` as a document to the Elasticsearch 'orders' index
+ loadToOrders(orderData);",
+
+ Name = "script-name"
+ }
+ }
+ };
+
+ // Define the AddEtlOperation
+ // ==========================
+                var operation = new AddEtlOperation<ElasticSearchConnectionString>(elasticsearchEtlConfig);
+
+ // Execute the operation by passing it to Maintenance.Send
+ // =======================================================
+ store.Maintenance.Send(operation);
+ #endregion
+ }
+ }
+
+ private interface IFoo
+ {
+ /*
+ #region add_etl_operation
+            public AddEtlOperation(EtlConfiguration<T> configuration)
+ #endregion
+ */
+ }
+ }
+}
+
diff --git a/Documentation/6.1/Samples/csharp/Raven.Documentation.Samples/ClientApi/Operations/Maintenance/OngoingTasks/OngoingTaskOperations.cs b/Documentation/6.1/Samples/csharp/Raven.Documentation.Samples/ClientApi/Operations/Maintenance/OngoingTasks/OngoingTaskOperations.cs
new file mode 100644
index 0000000000..2cbe5e184d
--- /dev/null
+++ b/Documentation/6.1/Samples/csharp/Raven.Documentation.Samples/ClientApi/Operations/Maintenance/OngoingTasks/OngoingTaskOperations.cs
@@ -0,0 +1,190 @@
+using System;
+using System.Threading.Tasks;
+using Raven.Client.Documents;
+using Raven.Client.Documents.Operations.OngoingTasks;
+using Raven.Client.Documents.Operations.Replication;
+
+namespace Raven.Documentation.Samples.ClientApi.Operations.Maintenance.OngoingTasks
+{
+ public class OngoingTaskOperations
+ {
+ public void CreateTask()
+ {
+ using (var store = new DocumentStore())
+ {
+ #region create_task
+ // Define a simple External Replication task
+                var taskDefinition = new ExternalReplication
+ {
+ Name = "MyExtRepTask",
+ ConnectionStringName = "MyConnectionStringName"
+ };
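+
+                // Note: the task references a connection string by name.
+                // "MyConnectionStringName" is expected to already exist on the server
+                // (it can be created with PutConnectionStringOperation).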
+
+ // Deploy the task to the server
+                var taskOp = new UpdateExternalReplicationOperation(taskDefinition);
+ var sendResult = store.Maintenance.Send(taskOp);
+
+ // The task ID is available in the send result
+ var taskId = sendResult.TaskId;
+ #endregion
+ }
+ }
+
+ public void GetTaskInfo()
+ {
+ using (var store = new DocumentStore())
+ {
+                var taskId = 1; // Placeholder value for compilation purposes
+
+ #region get
+ // Define the get task operation, pass:
+ // * The ongoing task ID or the task name
+ // * The task type
+ var getTaskInfoOp = new GetOngoingTaskInfoOperation(taskId, OngoingTaskType.Replication);
+
+ // Execute the operation by passing it to Maintenance.Send
+ var taskInfo = (OngoingTaskReplication)store.Maintenance.Send(getTaskInfoOp);
+
+ // Access the task info
+ var taskState = taskInfo.TaskState;
+ var taskDelayTime = taskInfo.DelayReplicationFor;
+                var destinationUrls = taskInfo.TopologyDiscoveryUrls;
+ // ...
+ #endregion
+ }
+ }
+
+ public async Task GetTaskInfoAsync()
+ {
+ using (var store = new DocumentStore())
+ {
+ var taskId = 1;
+
+ #region get_async
+ var getTaskInfoOp = new GetOngoingTaskInfoOperation(taskId, OngoingTaskType.Replication);
+ var taskInfo = (OngoingTaskReplication) await store.Maintenance.SendAsync(getTaskInfoOp);
+
+ var taskState = taskInfo.TaskState;
+ var taskDelayTime = taskInfo.DelayReplicationFor;
+                var destinationUrls = taskInfo.TopologyDiscoveryUrls;
+ // ...
+ #endregion
+ }
+ }
+
+ public void ToggleTask()
+ {
+ using (var store = new DocumentStore())
+ {
+ var taskId = 1;
+
+ #region toggle
+                // Define the toggle task operation, pass:
+                // * The ongoing task ID
+                // * The task type
+                // * A boolean value: 'true' to disable the task, 'false' to enable it
+ var toggleTaskOp = new ToggleOngoingTaskStateOperation(taskId, OngoingTaskType.Replication, true);
+
+ // Execute the operation by passing it to Maintenance.Send
+ store.Maintenance.Send(toggleTaskOp);
+ #endregion
+ }
+ }
+
+ public async Task ToggleTaskAsync()
+ {
+ using (var store = new DocumentStore())
+ {
+ var taskId = 1;
+
+ #region toggle_async
+ var toggleTaskOp = new ToggleOngoingTaskStateOperation(taskId, OngoingTaskType.Replication, true);
+ await store.Maintenance.SendAsync(toggleTaskOp);
+ #endregion
+ }
+ }
+
+ public void DeleteTask()
+ {
+ using (var store = new DocumentStore())
+ {
+ var taskId = 1;
+
+ #region delete
+ // Define the delete task operation, pass:
+ // * The ongoing task ID
+ // * The task type
+ var deleteTaskOp = new DeleteOngoingTaskOperation(taskId, OngoingTaskType.Replication);
+
+ // Execute the operation by passing it to Maintenance.Send
+ store.Maintenance.Send(deleteTaskOp);
+ #endregion
+ }
+ }
+
+ public async Task DeleteTaskAsync()
+ {
+ using (var store = new DocumentStore())
+ {
+ var taskId = 1;
+
+ #region delete_async
+ var deleteTaskOp = new DeleteOngoingTaskOperation(taskId, OngoingTaskType.Replication);
+ await store.Maintenance.SendAsync(deleteTaskOp);
+ #endregion
+ }
+ }
+
+ private interface IFoo
+ {
+ /*
+ #region syntax_1
+ // Get
+ public GetOngoingTaskInfoOperation(long taskId, OngoingTaskType type);
+ public GetOngoingTaskInfoOperation(string taskName, OngoingTaskType type);
+ #endregion
+
+ #region syntax_2
+ // Delete
+ public DeleteOngoingTaskOperation(long taskId, OngoingTaskType taskType);
+ #endregion
+
+ #region syntax_3
+ // Toggle
+ public ToggleOngoingTaskStateOperation(long taskId, OngoingTaskType type, bool disable);
+ #endregion
+ */
+ }
+
+ /*
+ #region syntax_4
+        public enum OngoingTaskType
+ {
+ Replication,
+ RavenEtl,
+ SqlEtl,
+ OlapEtl,
+ ElasticSearchEtl,
+ QueueEtl,
+ Backup,
+ Subscription,
+ PullReplicationAsHub,
+ PullReplicationAsSink,
+ QueueSink,
+ }
+ #endregion
+ */
+
+ #region syntax_5
+ public sealed class OngoingTaskReplication : OngoingTask
+ {
+ public OngoingTaskReplication() => this.TaskType = OngoingTaskType.Replication;
+ public string DestinationUrl { get; set; }
+ public string[] TopologyDiscoveryUrls { get; set; }
+ public string DestinationDatabase { get; set; }
+ public string ConnectionStringName { get; set; }
+ public TimeSpan DelayReplicationFor { get; set; }
+ }
+ #endregion
+ }
+}
diff --git a/Documentation/6.1/Samples/csharp/Raven.Documentation.Samples/ClientApi/Operations/WhatAreOperations.cs b/Documentation/6.1/Samples/csharp/Raven.Documentation.Samples/ClientApi/Operations/WhatAreOperations.cs
new file mode 100644
index 0000000000..e1c9f4a726
--- /dev/null
+++ b/Documentation/6.1/Samples/csharp/Raven.Documentation.Samples/ClientApi/Operations/WhatAreOperations.cs
@@ -0,0 +1,361 @@
+using System;
+using System.Threading;
+using System.Threading.Tasks;
+using Raven.Client.Documents;
+using Raven.Client.Documents.Indexes;
+using Raven.Client.Documents.Operations;
+using Raven.Client.Documents.Operations.Counters;
+using Raven.Client.Documents.Operations.Indexes;
+using Raven.Client.Documents.Session;
+using Raven.Client.ServerWide.Operations;
+using Xunit;
+
+namespace Raven.Documentation.Samples.ClientApi.Operations
+{
+ public class WhatAreOperations
+ {
+ public interface ISendSyntax
+ {
+ #region operations_send
+ // Available overloads:
+ void Send(IOperation operation, SessionInfo sessionInfo = null);
+            TResult Send<TResult>(IOperation<TResult> operation, SessionInfo sessionInfo = null);
+            Operation Send(IOperation<OperationIdResult> operation, SessionInfo sessionInfo = null);
+
+ PatchStatus Send(PatchOperation operation);
+            PatchOperation.Result<TEntity> Send<TEntity>(PatchOperation<TEntity> operation);
+ #endregion
+
+ #region operations_send_async
+ // Available overloads:
+ Task SendAsync(IOperation operation,
+ CancellationToken token = default(CancellationToken), SessionInfo sessionInfo = null);
+            Task<TResult> SendAsync<TResult>(IOperation<TResult> operation,
+ CancellationToken token = default(CancellationToken), SessionInfo sessionInfo = null);
+            Task<Operation> SendAsync(IOperation<OperationIdResult> operation,
+ CancellationToken token = default(CancellationToken), SessionInfo sessionInfo = null);
+
+            Task<PatchStatus> SendAsync(PatchOperation operation,
+ CancellationToken token = default(CancellationToken));
+            Task<PatchOperation.Result<TEntity>> SendAsync<TEntity>(PatchOperation<TEntity> operation,
+ CancellationToken token = default(CancellationToken));
+ #endregion
+
+ #region maintenance_send
+ // Available overloads:
+ void Send(IMaintenanceOperation operation);
+            TResult Send<TResult>(IMaintenanceOperation<TResult> operation);
+            Operation Send(IMaintenanceOperation<OperationIdResult> operation);
+ #endregion
+
+ #region maintenance_send_async
+ // Available overloads:
+ Task SendAsync(IMaintenanceOperation operation,
+ CancellationToken token = default(CancellationToken));
+            Task<TResult> SendAsync<TResult>(IMaintenanceOperation<TResult> operation,
+ CancellationToken token = default(CancellationToken));
+            Task<Operation> SendAsync(IMaintenanceOperation<OperationIdResult> operation,
+ CancellationToken token = default(CancellationToken));
+ #endregion
+
+ #region server_send
+ // Available overloads:
+ void Send(IServerOperation operation);
+            TResult Send<TResult>(IServerOperation<TResult> operation);
+            Operation Send(IServerOperation<OperationIdResult> operation);
+ #endregion
+
+ #region server_send_async
+ // Available overloads:
+ Task SendAsync(IServerOperation operation,
+ CancellationToken token = default(CancellationToken));
+            Task<TResult> SendAsync<TResult>(IServerOperation<TResult> operation,
+ CancellationToken token = default(CancellationToken));
+            Task<Operation> SendAsync(IServerOperation<OperationIdResult> operation,
+ CancellationToken token = default(CancellationToken));
+ #endregion
+
+ /*
+ #region waitForCompletion_syntax
+ // Available overloads:
+ public IOperationResult WaitForCompletion(TimeSpan? timeout = null)
+ public IOperationResult WaitForCompletion(CancellationToken token)
+
+            public TResult WaitForCompletion<TResult>(TimeSpan? timeout = null)
+                where TResult : IOperationResult
+            public TResult WaitForCompletion<TResult>(CancellationToken token)
+                where TResult : IOperationResult
+ #endregion
+ */
+
+ /*
+ #region waitForCompletion_syntax_async
+ // Available overloads:
+            public Task<IOperationResult> WaitForCompletionAsync(TimeSpan? timeout = null)
+            public Task<IOperationResult> WaitForCompletionAsync(CancellationToken token)
+
+            public async Task<TResult> WaitForCompletionAsync<TResult>(TimeSpan? timeout = null)
+                where TResult : IOperationResult
+            public async Task<TResult> WaitForCompletionAsync<TResult>(CancellationToken token)
+                where TResult : IOperationResult
+ #endregion
+ */
+
+ /*
+ #region kill_syntax
+ // Available overloads:
+ public void Kill()
+ public async Task KillAsync(CancellationToken token = default)
+ #endregion
+ */
+ }
+
+ public void Examples()
+ {
+ using var documentStore = new DocumentStore
+ {
+ Urls = new[] { "http://localhost:8080" },
+ Database = "Northwind"
+ };
+
+ #region operations_ex
+ // Define operation, e.g. get all counters info for a document
+            IOperation<CountersDetail> getCountersOp = new GetCountersOperation("products/1-A");
+
+ // Execute the operation by passing the operation to Operations.Send
+ CountersDetail allCountersResult = documentStore.Operations.Send(getCountersOp);
+
+ // Access the operation result
+ int numberOfCounters = allCountersResult.Counters.Count;
+ #endregion
+
+ #region maintenance_ex
+ // Define operation, e.g. stop an index
+ IMaintenanceOperation stopIndexOp = new StopIndexOperation("Orders/ByCompany");
+
+ // Execute the operation by passing the operation to Maintenance.Send
+ documentStore.Maintenance.Send(stopIndexOp);
+
+ // This specific operation returns void
+ // You can send another operation to verify the index running status
+            IMaintenanceOperation<IndexStats> indexStatsOp = new GetIndexStatisticsOperation("Orders/ByCompany");
+ IndexStats indexStats = documentStore.Maintenance.Send(indexStatsOp);
+ IndexRunningStatus status = indexStats.Status; // will be "Paused"
+ #endregion
+
+ #region server_ex
+ // Define operation, e.g. get the server build number
+            IServerOperation<BuildNumber> getBuildNumberOp = new GetBuildNumberOperation();
+
+ // Execute the operation by passing the operation to Maintenance.Server.Send
+ BuildNumber buildNumberResult = documentStore.Maintenance.Server.Send(getBuildNumberOp);
+
+ // Access the operation result
+ int version = buildNumberResult.BuildVersion;
+ #endregion
+
+ #region kill_ex
+ // Define operation, e.g. delete all discontinued products
+ // Note: This operation implements interface: 'IOperation'
+            IOperation<OperationIdResult> deleteByQueryOp =
+ new DeleteByQueryOperation("from Products where Discontinued = true");
+
+ // Execute the operation
+ // Send returns an 'Operation' object that can be 'killed'
+ Operation operation = documentStore.Operations.Send(deleteByQueryOp);
+
+ // Call 'Kill' to abort operation
+ operation.Kill();
+ #endregion
+ }
+
+ #region wait_timeout_ex
+        public void WaitForCompletionWithTimeout(
+ TimeSpan timeout,
+ DocumentStore documentStore)
+ {
+ // Define operation, e.g. delete all discontinued products
+ // Note: This operation implements interface: 'IOperation'
+            IOperation<OperationIdResult> deleteByQueryOp =
+ new DeleteByQueryOperation("from Products where Discontinued = true");
+
+ // Execute the operation
+ // Send returns an 'Operation' object that can be awaited on
+ Operation operation = documentStore.Operations.Send(deleteByQueryOp);
+
+ try
+ {
+ // Call method 'WaitForCompletion' to wait for the operation to complete.
+ // If a timeout is specified, the method will only wait for the specified time frame.
+ BulkOperationResult result =
+ (BulkOperationResult)operation.WaitForCompletion(timeout);
+
+ // The operation has finished within the specified timeframe
+ long numberOfItemsDeleted = result.Total; // Access the operation result
+ }
+ catch (TimeoutException e)
+ {
+ // The operation did Not finish within the specified timeframe
+ }
+ }
+ #endregion
+
+ #region wait_token_ex
+ public void WaitForCompletionWithCancellationToken(
+ CancellationToken token,
+ DocumentStore documentStore)
+ {
+ // Define operation, e.g. delete all discontinued products
+ // Note: This operation implements interface: 'IOperation'
+            IOperation<OperationIdResult> deleteByQueryOp =
+ new DeleteByQueryOperation("from Products where Discontinued = true");
+
+ // Execute the operation
+ // Send returns an 'Operation' object that can be awaited on
+ Operation operation = documentStore.Operations.Send(deleteByQueryOp);
+
+ try
+ {
+ // Call method 'WaitForCompletion' to wait for the operation to complete.
+ // Pass a CancellationToken in order to stop waiting upon a cancellation request.
+ BulkOperationResult result =
+ (BulkOperationResult)operation.WaitForCompletion(token);
+
+ // The operation has finished, no cancellation request was made
+ long numberOfItemsDeleted = result.Total; // Access the operation result
+ }
+ catch (TimeoutException e)
+ {
+ // The operation did Not finish at cancellation time
+ }
+ }
+ #endregion
+
+ public async Task ExamplesAsync()
+ {
+ using var documentStore = new DocumentStore
+ {
+ Urls = new[] { "http://localhost:8080" },
+ Database = "Northwind"
+ };
+
+ #region operations_ex_async
+ // Define operation, e.g. get all counters info for a document
+            IOperation<CountersDetail> getCountersOp = new GetCountersOperation("products/1-A");
+
+ // Execute the operation by passing the operation to Operations.Send
+ CountersDetail allCountersResult = await documentStore.Operations.SendAsync(getCountersOp);
+
+ // Access the operation result
+ int numberOfCounters = allCountersResult.Counters.Count;
+ #endregion
+
+ #region maintenance_ex_async
+ // Define operation, e.g. stop an index
+ IMaintenanceOperation stopIndexOp = new StopIndexOperation("Orders/ByCompany");
+
+ // Execute the operation by passing the operation to Maintenance.Send
+ await documentStore.Maintenance.SendAsync(stopIndexOp);
+
+ // This specific operation returns void
+ // You can send another operation to verify the index running status
+            IMaintenanceOperation<IndexStats> indexStatsOp = new GetIndexStatisticsOperation("Orders/ByCompany");
+ IndexStats indexStats = await documentStore.Maintenance.SendAsync(indexStatsOp);
+ IndexRunningStatus status = indexStats.Status; // will be "Paused"
+ #endregion
+
+ #region server_ex_async
+ // Define operation, e.g. get the server build number
+            IServerOperation<BuildNumber> getBuildNumberOp = new GetBuildNumberOperation();
+
+ // Execute the operation by passing the operation to Maintenance.Server.Send
+ BuildNumber buildNumberResult = await documentStore.Maintenance.Server.SendAsync(getBuildNumberOp);
+
+ // Access the operation result
+ int version = buildNumberResult.BuildVersion;
+ #endregion
+
+ #region kill_ex_async
+ // Define operation, e.g. delete all discontinued products
+ // Note: This operation implements interface: 'IOperation'
+            IOperation<OperationIdResult> deleteByQueryOp =
+ new DeleteByQueryOperation("from Products where Discontinued = true");
+
+ // Execute the operation
+ // SendAsync returns an 'Operation' object that can be 'killed'
+ Operation operation = await documentStore.Operations.SendAsync(deleteByQueryOp);
+
+ // Call 'KillAsync' to abort operation
+ await operation.KillAsync();
+
+ // Assert that operation is no longer running
+ await Assert.ThrowsAsync(() =>
+ operation.WaitForCompletionAsync(TimeSpan.FromSeconds(30)));
+ #endregion
+ }
+
+ #region wait_timeout_ex_async
+        public async Task WaitForCompletionWithTimeoutAsync(
+ TimeSpan timeout,
+ DocumentStore documentStore)
+ {
+ // Define operation, e.g. delete all discontinued products
+ // Note: This operation implements interface: 'IOperation'
+            IOperation<OperationIdResult> deleteByQueryOp =
+ new DeleteByQueryOperation("from Products where Discontinued = true");
+
+ // Execute the operation
+ // SendAsync returns an 'Operation' object that can be awaited on
+ Operation operation = await documentStore.Operations.SendAsync(deleteByQueryOp);
+
+ try
+ {
+ // Call method 'WaitForCompletionAsync' to wait for the operation to complete.
+ // If a timeout is specified, the method will only wait for the specified time frame.
+ BulkOperationResult result =
+ await operation.WaitForCompletionAsync(timeout)
+ .ConfigureAwait(false) as BulkOperationResult;
+
+ // The operation has finished within the specified timeframe
+ long numberOfItemsDeleted = result.Total; // Access the operation result
+ }
+ catch (TimeoutException e)
+ {
+ // The operation did Not finish within the specified timeframe
+ }
+ }
+ #endregion
+
+ #region wait_token_ex_async
+ public async Task WaitForCompletionWithCancellationTokenAsync(
+ CancellationToken token,
+ DocumentStore documentStore)
+ {
+ // Define operation, e.g. delete all discontinued products
+ // Note: This operation implements interface: 'IOperation'
+            IOperation<OperationIdResult> deleteByQueryOp =
+ new DeleteByQueryOperation("from Products where Discontinued = true");
+
+ // Execute the operation
+ // SendAsync returns an 'Operation' object that can be awaited on
+ Operation operation = await documentStore.Operations.SendAsync(deleteByQueryOp);
+
+ try
+ {
+ // Call method 'WaitForCompletionAsync' to wait for the operation to complete.
+ // Pass a CancellationToken in order to stop waiting upon a cancellation request.
+ BulkOperationResult result =
+ await operation.WaitForCompletionAsync(token)
+ .ConfigureAwait(false) as BulkOperationResult;
+
+ // The operation has finished, no cancellation request was made
+ long numberOfItemsDeleted = result.Total; // Access the operation result
+ }
+ catch (TimeoutException e)
+ {
+ // The operation did Not finish at cancellation time
+ }
+ }
+ #endregion
+ }
+}
diff --git a/Documentation/6.1/Samples/csharp/Raven.Documentation.Samples/Server/OngoingTasks/ETL/Queue/AzureQueueStorageEtl.cs b/Documentation/6.1/Samples/csharp/Raven.Documentation.Samples/Server/OngoingTasks/ETL/Queue/AzureQueueStorageEtl.cs
new file mode 100644
index 0000000000..54d4bf8c55
--- /dev/null
+++ b/Documentation/6.1/Samples/csharp/Raven.Documentation.Samples/Server/OngoingTasks/ETL/Queue/AzureQueueStorageEtl.cs
@@ -0,0 +1,241 @@
+using System.Collections.Generic;
+using Raven.Client.Documents;
+using Raven.Client.Documents.Operations.ConnectionStrings;
+using Raven.Client.Documents.Operations.ETL;
+using Raven.Client.Documents.Operations.ETL.Queue;
+namespace Raven.Documentation.Samples.Server.OngoingTasks.ETL.Queue
+{
+ public class AzureQueueStorageEtl
+ {
+ public void AddConnectionString()
+ {
+ using (var store = new DocumentStore())
+ {
+ #region add_azure_connection_string
+ // Prepare the connection string:
+ // ==============================
+ var conStr = new QueueConnectionString
+ {
+ // Provide a name for this connection string
+ Name = "myAzureQueueConStr",
+
+ // Set the broker type
+ BrokerType = QueueBrokerType.AzureQueueStorage,
+
+ // In this example we provide a simple string for the connection string
+ AzureQueueStorageConnectionSettings = new AzureQueueStorageConnectionSettings()
+ {
+ ConnectionString = @"DefaultEndpointsProtocol=https;
+ AccountName=myAccountName;
+ AccountKey=myAccountKey;
+ EndpointSuffix=core.windows.net"
+ }
+ };
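+
+                // Note: instead of a raw connection string, authentication can also be
+                // configured through the EntraId or Passwordless settings of
+                // AzureQueueStorageConnectionSettings (see the class definition below).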
+
+ // Deploy (send) the connection string to the server via the PutConnectionStringOperation:
+ // =======================================================================================
+ var res = store.Maintenance.Send(
+                    new PutConnectionStringOperation<QueueConnectionString>(conStr));
+ #endregion
+ }
+ }
+
+ public void AddAzureQueueStorageEtlTask()
+ {
+ using (var store = new DocumentStore())
+ {
+ #region add_azure_etl_task
+ // Define a transformation script for the task:
+ // ============================================
+ Transformation transformation = new Transformation
+ {
+ // Define the input collections
+ Collections = { "Orders" },
+ ApplyToAllDocuments = false,
+
+ // The transformation script
+ Name = "scriptName",
+ Script = @"// Create an orderData object
+ // ==========================
+ var orderData = {
+ Id: id(this),
+ OrderLinesCount: this.Lines.length,
+ TotalCost: 0
+ };
+
+ // Update the orderData's TotalCost field
+ // ======================================
+ for (var i = 0; i < this.Lines.length; i++) {
+ var line = this.Lines[i];
+ var cost = (line.Quantity * line.PricePerUnit) * ( 1 - line.Discount);
+ orderData.TotalCost += cost;
+ }
+
+ // Load the object to the 'OrdersQueue' in Azure
+ // =============================================
+ loadToOrdersQueue(orderData, {
+ Id: id(this),
+ Type: 'com.example.promotions',
+ Source: '/promotion-campaigns/summer-sale'
+ });"
+ };
+
+ // Define the Azure Queue Storage ETL task:
+ // ========================================
+ var etlTask = new QueueEtlConfiguration()
+ {
+ BrokerType = QueueBrokerType.AzureQueueStorage,
+
+ Name = "myAzureQueueEtlTaskName",
+ ConnectionStringName = "myAzureQueueConStr",
+
+ Transforms = { transformation },
+
+ // Set to false to allow task failover to another node if current one is down
+ PinToMentorNode = false
+ };
+
+ // Deploy (send) the task to the server via the AddEtlOperation:
+ // =============================================================
+                store.Maintenance.Send(new AddEtlOperation<QueueConnectionString>(etlTask));
+ #endregion
+ }
+ }
+
+ public void DeleteProcessedDocuments()
+ {
+ using (var store = new DocumentStore())
+ {
+ Transformation transformation = new Transformation(); // Defined here only for compilation purposes
+
+ #region azure_delete_documents
+ var etlTask = new QueueEtlConfiguration()
+ {
+ BrokerType = QueueBrokerType.AzureQueueStorage,
+
+ Name = "myAzureQueueEtlTaskName",
+ ConnectionStringName = "myAzureQueueConStr",
+
+ Transforms = { transformation },
+
+ // Define whether to delete documents from RavenDB after they are sent to the target queue
+                    Queues = new List<EtlQueue>()
+ {
+ new()
+ {
+ // The name of the Azure queue
+ Name = "OrdersQueue",
+
+ // When set to 'true',
+ // documents that were processed by the transformation script will be deleted
+ // from RavenDB after the message is loaded to the "OrdersQueue" in Azure.
+ DeleteProcessedDocuments = true
+ }
+ }
+ };
+
+                store.Maintenance.Send(new AddEtlOperation<QueueConnectionString>(etlTask));
+ #endregion
+ }
+ }
+
+ private interface IFoo
+ {
+ #region queue_broker_type
+ public enum QueueBrokerType
+ {
+ None,
+ Kafka,
+ RabbitMq,
+ AzureQueueStorage
+ }
+ #endregion
+ }
+
+ private class Definition
+ {
+ #region queue_connection_string
+ public class QueueConnectionString : ConnectionString
+ {
+ // Set the broker type to QueueBrokerType.AzureQueueStorage
+ // for an Azure Queue Storage connection string
+ public QueueBrokerType BrokerType { get; set; }
+
+ // Configure this when setting a connection string for Kafka
+ public KafkaConnectionSettings KafkaConnectionSettings { get; set; }
+
+ // Configure this when setting a connection string for RabbitMQ
+ public RabbitMqConnectionSettings RabbitMqConnectionSettings { get; set; }
+
+ // Configure this when setting a connection string for Azure Queue Storage
+ public AzureQueueStorageConnectionSettings AzureQueueStorageConnectionSettings { get; set; }
+ }
+ #endregion
+
+ public abstract class ConnectionString
+ {
+ public string Name { get; set; }
+ }
+
+ #region azure_con_str_settings
+ public class AzureQueueStorageConnectionSettings
+ {
+ public EntraId EntraId { get; set; }
+ public string ConnectionString { get; set; }
+ public Passwordless Passwordless { get; set; }
+ }
+
+ public class EntraId
+ {
+ public string StorageAccountName { get; set; }
+ public string TenantId { get; set; }
+ public string ClientId { get; set; }
+ public string ClientSecret { get; set; }
+ }
+
+ public class Passwordless
+ {
+ public string StorageAccountName { get; set; }
+ }
+ #endregion
+
+ #region etl_configuration
+ public class QueueEtlConfiguration
+ {
+ // Set to QueueBrokerType.AzureQueueStorage to define an Azure Queue Storage ETL task
+ public QueueBrokerType BrokerType { get; set; }
+ // The ETL task name
+ public string Name { get; set; }
+ // The registered connection string name
+ public string ConnectionStringName { get; set; }
+ // List of transformation scripts
+            public List<Transformation> Transforms { get; set; }
+            // Optional configuration per queue
+            public List<EtlQueue> Queues { get; set; }
+ // Set to 'false' to allow task failover to another node if current one is down
+ public bool PinToMentorNode { get; set; }
+ }
+
+ public class Transformation
+ {
+ // The script name
+ public string Name { get; set; }
+ // The source RavenDB collections that serve as the input for the script
+            public List<string> Collections { get; set; }
+ // Set whether to apply the script on all collections
+ public bool ApplyToAllDocuments { get; set; }
+ // The script itself
+ public string Script { get; set; }
+ }
+
+ public class EtlQueue
+ {
+ // The Azure queue name
+ public string Name { get; set; }
+ // Delete processed documents when set to 'true'
+ public bool DeleteProcessedDocuments { get; set; }
+ }
+ #endregion
+ }
+ }
+}
diff --git a/Documentation/6.1/Samples/csharp/Raven.Documentation.Samples/Server/OngoingTasks/ETL/Queue/KafkaEtl.cs b/Documentation/6.1/Samples/csharp/Raven.Documentation.Samples/Server/OngoingTasks/ETL/Queue/KafkaEtl.cs
new file mode 100644
index 0000000000..8f14c4e05c
--- /dev/null
+++ b/Documentation/6.1/Samples/csharp/Raven.Documentation.Samples/Server/OngoingTasks/ETL/Queue/KafkaEtl.cs
@@ -0,0 +1,227 @@
+using System.Collections.Generic;
+using Raven.Client.Documents;
+using Raven.Client.Documents.Operations.ConnectionStrings;
+using Raven.Client.Documents.Operations.ETL;
+using Raven.Client.Documents.Operations.ETL.Queue;
+namespace Raven.Documentation.Samples.Server.OngoingTasks.ETL.Queue
+{
+ public class KafkaEtl
+ {
+ public void AddConnectionString()
+ {
+ using (var store = new DocumentStore())
+ {
+ #region add_kafka_connection_string
+ // Prepare the connection string:
+ // ==============================
+ var conStr = new QueueConnectionString
+ {
+ // Provide a name for this connection string
+ Name = "myKafkaConStr",
+
+ // Set the broker type
+ BrokerType = QueueBrokerType.Kafka,
+
+ // Configure the connection details
+ KafkaConnectionSettings = new KafkaConnectionSettings()
+ { BootstrapServers = "localhost:9092" }
+ };
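+
+                // Optionally, additional client options could be supplied via
+                // KafkaConnectionSettings.ConnectionOptions (a Dictionary<string, string>).
+                // The "security.protocol" key below is only an illustrative assumption:
+                //
+                // conStr.KafkaConnectionSettings.ConnectionOptions =
+                //     new Dictionary<string, string> { ["security.protocol"] = "SSL" };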
+
+ // Deploy (send) the connection string to the server via the PutConnectionStringOperation:
+ // =======================================================================================
+ var res = store.Maintenance.Send(
+                    new PutConnectionStringOperation<QueueConnectionString>(conStr));
+ #endregion
+ }
+ }
+
+ public void AddKafkaEtlTask()
+ {
+ using (var store = new DocumentStore())
+ {
+ #region add_kafka_etl_task
+ // Define a transformation script for the task:
+ // ============================================
+ Transformation transformation = new Transformation
+ {
+ // Define the input collections
+ Collections = { "Orders" },
+ ApplyToAllDocuments = false,
+
+ // The transformation script
+ Name = "scriptName",
+ Script = @"// Create an orderData object
+ // ==========================
+ var orderData = {
+ Id: id(this),
+ OrderLinesCount: this.Lines.length,
+ TotalCost: 0
+ };
+
+ // Update the orderData's TotalCost field
+ // ======================================
+ for (var i = 0; i < this.Lines.length; i++) {
+ var line = this.Lines[i];
+ var cost = (line.Quantity * line.PricePerUnit) * ( 1 - line.Discount);
+ orderData.TotalCost += cost;
+ }
+
+ // Load the object to the 'OrdersTopic' in Kafka
+ // =============================================
+ loadToOrdersTopic(orderData, {
+ Id: id(this),
+ PartitionKey: id(this),
+ Type: 'com.example.promotions',
+ Source: '/promotion-campaigns/summer-sale'
+ });"
+ };
+
+ // Define the Kafka ETL task:
+ // ==========================
+ var etlTask = new QueueEtlConfiguration()
+ {
+ BrokerType = QueueBrokerType.Kafka,
+
+ Name = "myKafkaEtlTaskName",
+ ConnectionStringName = "myKafkaConStr",
+
+ Transforms = { transformation },
+
+ // Set to false to allow task failover to another node if current one is down
+ PinToMentorNode = false
+ };
+
+ // Deploy (send) the task to the server via the AddEtlOperation:
+ // =============================================================
+                store.Maintenance.Send(new AddEtlOperation<QueueConnectionString>(etlTask));
+ #endregion
+ }
+ }
+
+ public void DeleteProcessedDocuments()
+ {
+ using (var store = new DocumentStore())
+ {
+ Transformation transformation = new Transformation(); // Defined here only for compilation purposes
+
+ #region kafka_delete_documents
+ var etlTask = new QueueEtlConfiguration()
+ {
+ BrokerType = QueueBrokerType.Kafka,
+
+ Name = "myKafkaEtlTaskName",
+ ConnectionStringName = "myKafkaConStr",
+
+ Transforms = { transformation },
+
+ // Define whether to delete documents from RavenDB after they are sent to the target topic
+                    Queues = new List<EtlQueue>()
+ {
+ new()
+ {
+ // The name of the Kafka topic
+ Name = "OrdersTopic",
+
+ // When set to 'true',
+ // documents that were processed by the transformation script will be deleted
+ // from RavenDB after the message is loaded to the "OrdersTopic" in Kafka.
+ DeleteProcessedDocuments = true
+ }
+ }
+ };
+
+                store.Maintenance.Send(new AddEtlOperation<QueueConnectionString>(etlTask));
+ #endregion
+ }
+ }
+
+ private interface IFoo
+ {
+ #region queue_broker_type
+ public enum QueueBrokerType
+ {
+ None,
+ Kafka,
+ RabbitMq,
+ AzureQueueStorage
+ }
+ #endregion
+ }
+
+ private class Definition
+ {
+ #region queue_connection_string
+ public class QueueConnectionString : ConnectionString
+ {
+ // Set the broker type to QueueBrokerType.Kafka for a Kafka connection string
+ public QueueBrokerType BrokerType { get; set; }
+
+ // Configure this when setting a connection string for Kafka
+ public KafkaConnectionSettings KafkaConnectionSettings { get; set; }
+
+ // Configure this when setting a connection string for RabbitMQ
+ public RabbitMqConnectionSettings RabbitMqConnectionSettings { get; set; }
+
+ // Configure this when setting a connection string for Azure Queue Storage
+ public AzureQueueStorageConnectionSettings AzureQueueStorageConnectionSettings { get; set; }
+ }
+ #endregion
+
+ public abstract class ConnectionString
+ {
+ public string Name { get; set; }
+ }
+
+ #region kafka_con_str_settings
+ public class KafkaConnectionSettings
+ {
+            // A comma-separated list of "host:port" URLs to the Kafka brokers
+ public string BootstrapServers { get; set; }
+
+ // Various configuration options
+            public Dictionary<string, string> ConnectionOptions { get; set; }
+
+ public bool UseRavenCertificate { get; set; }
+ }
+ #endregion
+
+ #region etl_configuration
+ public class QueueEtlConfiguration
+ {
+ // Set to QueueBrokerType.Kafka to define a Kafka ETL task
+ public QueueBrokerType BrokerType { get; set; }
+ // The ETL task name
+ public string Name { get; set; }
+ // The registered connection string name
+ public string ConnectionStringName { get; set; }
+ // List of transformation scripts
+            public List<Transformation> Transforms { get; set; }
+            // Optional configuration per queue
+            public List<EtlQueue> Queues { get; set; }
+ // Set to 'false' to allow task failover to another node if current one is down
+ public bool PinToMentorNode { get; set; }
+ }
+
+ public class Transformation
+ {
+ // The script name
+ public string Name { get; set; }
+ // The source RavenDB collections that serve as the input for the script
+            public List<string> Collections { get; set; }
+ // Set whether to apply the script on all collections
+ public bool ApplyToAllDocuments { get; set; }
+ // The script itself
+ public string Script { get; set; }
+ }
+
+ public class EtlQueue
+ {
+ // The Kafka topic name
+ public string Name { get; set; }
+ // Delete processed documents when set to 'true'
+ public bool DeleteProcessedDocuments { get; set; }
+ }
+ #endregion
+ }
+ }
+}
diff --git a/Documentation/6.1/Samples/csharp/Raven.Documentation.Samples/Server/OngoingTasks/ETL/Queue/RabbitMqEtl.cs b/Documentation/6.1/Samples/csharp/Raven.Documentation.Samples/Server/OngoingTasks/ETL/Queue/RabbitMqEtl.cs
new file mode 100644
index 0000000000..b879983321
--- /dev/null
+++ b/Documentation/6.1/Samples/csharp/Raven.Documentation.Samples/Server/OngoingTasks/ETL/Queue/RabbitMqEtl.cs
@@ -0,0 +1,229 @@
+using System.Collections.Generic;
+using Raven.Client.Documents;
+using Raven.Client.Documents.Operations.ConnectionStrings;
+using Raven.Client.Documents.Operations.ETL;
+using Raven.Client.Documents.Operations.ETL.Queue;
+namespace Raven.Documentation.Samples.Server.OngoingTasks.ETL.Queue
+{
+ public class RabbitMqEtl
+ {
+ void AddConnectionString()
+ {
+ using (var store = new DocumentStore())
+ {
+ #region add_rabbitMq_connection_string
+ // Prepare the connection string:
+ // ==============================
+ var conStr = new QueueConnectionString
+ {
+ // Provide a name for this connection string
+ Name = "myRabbitMqConStr",
+
+ // Set the broker type
+ BrokerType = QueueBrokerType.RabbitMq,
+
+ // Configure the connection details
+ RabbitMqConnectionSettings = new RabbitMqConnectionSettings()
+ { ConnectionString = "amqp://guest:guest@localhost:49154" }
+ };
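+
+                // Note: the connection string above uses the standard AMQP URI format,
+                // i.e. amqp://user:password@host:port (or amqps:// for TLS).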
+
+ // Deploy (send) the connection string to the server via the PutConnectionStringOperation:
+ // =======================================================================================
+ var res = store.Maintenance.Send(
+                    new PutConnectionStringOperation<QueueConnectionString>(conStr));
+ #endregion
+ }
+ }
+
+ public void AddRabbitmqEtlTask()
+ {
+ using (var store = new DocumentStore())
+ {
+ #region add_rabbitMq_etl_task
+ // Define a transformation script for the task:
+ // ============================================
+ Transformation transformation = new Transformation
+ {
+ // Define the input collections
+ Collections = { "Orders" },
+ ApplyToAllDocuments = false,
+
+ // The transformation script
+ Name = "scriptName",
+ Script = @"// Create an orderData object
+ // ==========================
+ var orderData = {
+ Id: id(this),
+ OrderLinesCount: this.Lines.length,
+ TotalCost: 0
+ };
+
+ // Update the orderData's TotalCost field
+ // ======================================
+ for (var i = 0; i < this.Lines.length; i++) {
+ var line = this.Lines[i];
+ var cost = (line.Quantity * line.PricePerUnit) * ( 1 - line.Discount);
+ orderData.TotalCost += cost;
+ }
+
+ // Load the object to the 'OrdersExchange' in RabbitMQ
+ // ===================================================
+ loadToOrdersExchange(orderData, `routingKey`, {
+ Id: id(this),
+ Type: 'com.example.promotions',
+ Source: '/promotion-campaigns/summer-sale'
+ });"
+ };
+
+ // Define the RabbitMQ ETL task:
+ // =============================
+ var etlTask = new QueueEtlConfiguration()
+ {
+ BrokerType = QueueBrokerType.RabbitMq,
+
+ Name = "myRabbitMqEtlTaskName",
+ ConnectionStringName = "myRabbitMqConStr",
+
+ Transforms = { transformation },
+
+                    // Set to false to have the RabbitMQ client library declare the queue if it does not exist
+ SkipAutomaticQueueDeclaration = false,
+
+ // Set to false to allow task failover to another node if current one is down
+ PinToMentorNode = false
+ };
+
+ // Deploy (send) the task to the server via the AddEtlOperation:
+ // =============================================================
+                store.Maintenance.Send(new AddEtlOperation<QueueConnectionString>(etlTask));
+ #endregion
+ }
+ }
+
+ public void DeleteProcessedDocuments()
+ {
+ using (var store = new DocumentStore())
+ {
+ Transformation transformation = new Transformation(); // Defined here only for compilation purposes
+
+ #region rabbitMq_delete_documents
+ var etlTask = new QueueEtlConfiguration()
+ {
+ BrokerType = QueueBrokerType.RabbitMq,
+
+ Name = "myRabbitMqEtlTaskName",
+ ConnectionStringName = "myRabbitMqConStr",
+
+ Transforms = { transformation },
+
+ // Define whether to delete documents from RavenDB after they are sent to RabbitMQ
+                    Queues = new List<EtlQueue>()
+ {
+ new()
+ {
+ // The name of the target queue
+ Name = "OrdersQueue",
+
+ // When set to 'true',
+ // documents that were processed by the transformation script will be deleted
+ // from RavenDB after the message is loaded to the "OrdersQueue" in RabbitMQ.
+ DeleteProcessedDocuments = true
+ }
+ }
+ };
+
+                store.Maintenance.Send(new AddEtlOperation<QueueConnectionString>(etlTask));
+ #endregion
+ }
+ }
+
+ private interface IFoo
+ {
+ #region queue_broker_type
+ public enum QueueBrokerType
+ {
+ None,
+ Kafka,
+ RabbitMq,
+ AzureQueueStorage
+ }
+ #endregion
+ }
+
+ private class Definition
+ {
+ #region queue_connection_string
+ public class QueueConnectionString : ConnectionString
+ {
+ // Set the broker type to QueueBrokerType.RabbitMq for a RabbitMQ connection string
+ public QueueBrokerType BrokerType { get; set; }
+
+ // Configure this when setting a connection string for Kafka
+ public KafkaConnectionSettings KafkaConnectionSettings { get; set; }
+
+ // Configure this when setting a connection string for RabbitMQ
+ public RabbitMqConnectionSettings RabbitMqConnectionSettings { get; set; }
+
+ // Configure this when setting a connection string for Azure Queue Storage
+ public AzureQueueStorageConnectionSettings AzureQueueStorageConnectionSettings { get; set; }
+ }
+ #endregion
+
+ public abstract class ConnectionString
+ {
+ public string Name { get; set; }
+ }
+
+ #region rabbitMq_con_str_settings
+ public sealed class RabbitMqConnectionSettings
+ {
+ // A single string that specifies the RabbitMQ exchange connection details
+ public string ConnectionString { get; set; }
+ }
+ #endregion
+
+ #region etl_configuration
+ public class QueueEtlConfiguration
+ {
+ // Set to QueueBrokerType.RabbitMq to define a RabbitMQ ETL task
+ public QueueBrokerType BrokerType { get; set; }
+ // The ETL task name
+ public string Name { get; set; }
+ // The registered connection string name
+ public string ConnectionStringName { get; set; }
+ // List of transformation scripts
+            public List<Transformation> Transforms { get; set; }
+            // Optional configuration per queue
+            public List<EtlQueue> Queues { get; set; }
+ // Set to 'false' to allow task failover to another node if current one is down
+ public bool PinToMentorNode { get; set; }
+
+            // Set to 'false' to have the RabbitMQ client library declare the queue if it does not exist.
+ // Set to 'true' to skip automatic queue declaration,
+ // use this option when you prefer to define Exchanges, Queues & Bindings manually.
+ public bool SkipAutomaticQueueDeclaration { get; set; }
+ }
+
+ public class Transformation
+ {
+ // The script name
+ public string Name { get; set; }
+ // The source RavenDB collections that serve as the input for the script
+            public List<string> Collections { get; set; }
+ // Set whether to apply the script on all collections
+ public bool ApplyToAllDocuments { get; set; }
+ // The script itself
+ public string Script { get; set; }
+ }
+
+ public class EtlQueue
+ {
+ // The RabbitMQ target queue name
+ public string Name { get; set; }
+ // Delete processed documents when set to 'true'
+ public bool DeleteProcessedDocuments { get; set; }
+ }
+ #endregion
+ }
+ }
+}
diff --git a/Documentation/6.1/Samples/java/src/test/java/net/ravendb/ClientApi/Operations/Maintenance/Etl/AddEtl.java b/Documentation/6.1/Samples/java/src/test/java/net/ravendb/ClientApi/Operations/Maintenance/Etl/AddEtl.java
new file mode 100644
index 0000000000..4c739e9ecb
--- /dev/null
+++ b/Documentation/6.1/Samples/java/src/test/java/net/ravendb/ClientApi/Operations/Maintenance/Etl/AddEtl.java
@@ -0,0 +1,122 @@
+package net.ravendb.ClientApi.Operations.Maintenance.Etl;
+
+import net.ravendb.client.documents.DocumentStore;
+import net.ravendb.client.documents.IDocumentStore;
+import net.ravendb.client.documents.operations.etl.*;
+import net.ravendb.client.documents.operations.etl.olap.OlapConnectionString;
+import net.ravendb.client.documents.operations.etl.olap.OlapEtlConfiguration;
+import net.ravendb.client.documents.operations.etl.sql.SqlConnectionString;
+import net.ravendb.client.documents.operations.etl.sql.SqlEtlConfiguration;
+import net.ravendb.client.documents.operations.etl.sql.SqlEtlTable;
+
+import java.util.Arrays;
+
+public class AddEtl {
+
+ private interface IFoo {
+ /*
+ //region add_etl_operation
+        public AddEtlOperation(EtlConfiguration<T> configuration);
+ //endregion
+ */
+ }
+
+ public AddEtl() {
+ try (IDocumentStore store = new DocumentStore()) {
+ //region add_raven_etl
+ RavenEtlConfiguration configuration = new RavenEtlConfiguration();
+ configuration.setName("Employees ETL");
+ Transformation transformation = new Transformation();
+ transformation.setName("Script #1");
+ transformation.setScript("loadToEmployees ({\n" +
+ " Name: this.FirstName + ' ' + this.LastName,\n" +
+ " Title: this.Title\n" +
+ "});");
+
+ configuration.setTransforms(Arrays.asList(transformation));
+            AddEtlOperation<RavenConnectionString> operation = new AddEtlOperation<>(configuration);
+ AddEtlOperationResult result = store.maintenance().send(operation);
+ //endregion
+ }
+
+ try (IDocumentStore store = new DocumentStore()) {
+ //region add_sql_etl
+ SqlEtlConfiguration configuration = new SqlEtlConfiguration();
+ SqlEtlTable table1 = new SqlEtlTable();
+ table1.setTableName("Orders");
+ table1.setDocumentIdColumn("Id");
+ table1.setInsertOnlyMode(false);
+
+ SqlEtlTable table2 = new SqlEtlTable();
+ table2.setTableName("OrderLines");
+ table2.setDocumentIdColumn("OrderId");
+ table2.setInsertOnlyMode(false);
+
+ configuration.setSqlTables(Arrays.asList(table1, table2));
+ configuration.setName("Order to SQL");
+ configuration.setConnectionStringName("sql-connection-string-name");
+
+ Transformation transformation = new Transformation();
+ transformation.setName("Script #1");
+ transformation.setCollections(Arrays.asList("Orders"));
+ transformation.setScript("var orderData = {\n" +
+ " Id: id(this),\n" +
+ " OrderLinesCount: this.Lines.length,\n" +
+ " TotalCost: 0\n" +
+ "};\n" +
+ "\n" +
+ " for (var i = 0; i < this.Lines.length; i++) {\n" +
+ " var line = this.Lines[i];\n" +
+ " orderData.TotalCost += line.PricePerUnit;\n" +
+ "\n" +
+ " // Load to SQL table 'OrderLines'\n" +
+ " loadToOrderLines({\n" +
+ " OrderId: id(this),\n" +
+ " Qty: line.Quantity,\n" +
+ " Product: line.Product,\n" +
+ " Cost: line.PricePerUnit\n" +
+ " });\n" +
+ " }\n" +
+ " orderData.TotalCost = Math.round(orderData.TotalCost * 100) / 100;\n" +
+ "\n" +
+ " // Load to SQL table 'Orders'\n" +
+ " loadToOrders(orderData)");
+
+ configuration.setTransforms(Arrays.asList(transformation));
+
+            AddEtlOperation<SqlConnectionString> operation = new AddEtlOperation<>(configuration);
+
+ AddEtlOperationResult result = store.maintenance().send(operation);
+ //endregion
+ }
+
+ try (IDocumentStore store = new DocumentStore()) {
+ //region add_olap_etl
+ OlapEtlConfiguration configuration = new OlapEtlConfiguration();
+
+ configuration.setName("Orders ETL");
+ configuration.setConnectionStringName("olap-connection-string-name");
+
+ Transformation transformation = new Transformation();
+ transformation.setName("Script #1");
+ transformation.setCollections(Arrays.asList("Orders"));
+ transformation.setScript("var orderDate = new Date(this.OrderedAt);\n"+
+ "var year = orderDate.getFullYear();\n"+
+ "var month = orderDate.getMonth();\n"+
+ "var key = new Date(year, month);\n"+
+ "loadToOrders(key, {\n"+
+ " Company : this.Company,\n"+
+ " ShipVia : this.ShipVia\n"+
+ "})"
+ );
+
+ configuration.setTransforms(Arrays.asList(transformation));
+
+            AddEtlOperation<OlapConnectionString> operation = new AddEtlOperation<>(configuration);
+
+ AddEtlOperationResult result = store.maintenance().send(operation);
+ //endregion
+ }
+
+ }
+}
diff --git a/Documentation/6.1/Samples/java/src/test/java/net/ravendb/ClientApi/Operations/WhatAreOperations.java b/Documentation/6.1/Samples/java/src/test/java/net/ravendb/ClientApi/Operations/WhatAreOperations.java
new file mode 100644
index 0000000000..be5519c6f8
--- /dev/null
+++ b/Documentation/6.1/Samples/java/src/test/java/net/ravendb/ClientApi/Operations/WhatAreOperations.java
@@ -0,0 +1,101 @@
+package net.ravendb.ClientApi.Operations;
+
+import net.ravendb.client.documents.DocumentStore;
+import net.ravendb.client.documents.IDocumentStore;
+import net.ravendb.client.documents.attachments.AttachmentType;
+import net.ravendb.client.documents.operations.CompactDatabaseOperation;
+import net.ravendb.client.documents.operations.DeleteByQueryOperation;
+import net.ravendb.client.documents.operations.Operation;
+import net.ravendb.client.documents.operations.attachments.CloseableAttachmentResult;
+import net.ravendb.client.documents.operations.attachments.GetAttachmentOperation;
+import net.ravendb.client.documents.operations.configuration.GetClientConfigurationOperation;
+import net.ravendb.client.documents.operations.indexes.StopIndexOperation;
+import net.ravendb.client.documents.queries.IndexQuery;
+import net.ravendb.client.serverwide.CompactSettings;
+
+public class WhatAreOperations {
+
+ private interface IFoo {
+ /*
+ //region Client_Operations_api
+ public void send(IVoidOperation operation)
+
+ public void send(IVoidOperation operation, SessionInfo sessionInfo)
+
+        public <TResult> TResult send(IOperation<TResult> operation)
+
+        public <TResult> TResult send(IOperation<TResult> operation, SessionInfo sessionInfo)
+
+ public PatchStatus send(PatchOperation operation, SessionInfo sessionInfo)
+
+        public <TEntity> PatchOperation.Result<TEntity> send(Class<TEntity> entityClass, PatchOperation operation, SessionInfo sessionInfo)
+ //endregion
+
+ //region Client_Operations_api_async
+ public Operation sendAsync(IOperation operation)
+
+ public Operation sendAsync(IOperation operation, SessionInfo sessionInfo)
+ //endregion
+
+ //region Maintenance_Operations_api
+ public void send(IVoidMaintenanceOperation operation)
+
+        public <TResult> TResult send(IMaintenanceOperation<TResult> operation)
+ //endregion
+
+ //region Maintenance_Operations_api_async
+ public Operation sendAsync(IMaintenanceOperation operation)
+ //endregion
+
+ //region Server_Operations_api
+ public void send(IVoidServerOperation operation)
+
+        public <TResult> TResult send(IServerOperation<TResult> operation)
+ //endregion
+
+ //region Server_Operations_api_async
+ public Operation sendAsync(IServerOperation operation)
+ //endregion
+ */
+ }
+
+ public WhatAreOperations() {
+ try (IDocumentStore store = new DocumentStore()) {
+ //region Client_Operations_1
+ try (CloseableAttachmentResult fetchedAttachment = store
+ .operations()
+ .send(new GetAttachmentOperation("users/1", "file.txt", AttachmentType.DOCUMENT, null))) {
+ // do stuff with the attachment stream --> fetchedAttachment.data
+ }
+ //endregion
+
+ {
+ //region Client_Operations_1_async
+ IndexQuery indexQuery = new IndexQuery();
+ indexQuery.setQuery("from users where Age == 5");
+ DeleteByQueryOperation operation = new DeleteByQueryOperation(indexQuery);
+ Operation asyncOp = store.operations().sendAsync(operation);
+ //endregion
+ }
+
+ //region Maintenance_Operations_1
+ store.maintenance().send(new StopIndexOperation("Orders/ByCompany"));
+ //endregion
+
+ //region Server_Operations_1
+ GetClientConfigurationOperation.Result result
+ = store.maintenance().send(new GetClientConfigurationOperation());
+ //endregion
+
+ {
+ //region Server_Operations_1_async
+ CompactSettings settings = new CompactSettings();
+ settings.setDocuments(true);
+ settings.setDatabaseName("Northwind");
+ Operation operation = store.maintenance().server().sendAsync(new CompactDatabaseOperation(settings));
+ operation.waitForCompletion();
+ //endregion
+ }
+ }
+ }
+}
diff --git a/Documentation/6.1/Samples/nodejs/client-api/operations/maintenance/etl/AddEtl.js b/Documentation/6.1/Samples/nodejs/client-api/operations/maintenance/etl/AddEtl.js
new file mode 100644
index 0000000000..5c3fb9d4bb
--- /dev/null
+++ b/Documentation/6.1/Samples/nodejs/client-api/operations/maintenance/etl/AddEtl.js
@@ -0,0 +1,99 @@
+import {
+ AddEtlOperation,
+ DocumentStore, OlapEtlConfiguration,
+ RavenEtlConfiguration,
+ SqlEtlConfiguration,
+ SqlEtlTable,
+ Transformation
+} from 'ravendb';
+import { EtlConfiguration } from 'ravendb';
+
+let urls, database, authOptions;
+
+class T {
+}
+
+{
+ //document_store_creation
+ const store = new DocumentStore(["http://localhost:8080"], "Northwind2");
+ store.initialize();
+ const session = store.openSession();
+ let configuration;
+ let etlConfiguration;
+
+ //region add_etl_operation
+ const operation = new AddEtlOperation(etlConfiguration);
+ //endregion
+
+ //region add_raven_etl
+ const etlConfigurationRvn = Object.assign(new RavenEtlConfiguration(), {
+ connectionStringName: "raven-connection-string-name",
+ disabled: false,
+ name: "etlRvn"
+ });
+
+ const transformationRvn = {
+ applyToAllDocuments: true,
+ name: "Script #1"
+ };
+
+ etlConfigurationRvn.transforms = [transformationRvn];
+
+ const operationRvn = new AddEtlOperation(etlConfigurationRvn);
+ const etlResultRvn = await store.maintenance.send(operationRvn);
+ //endregion
+
+
+ //region add_sql_etl
+ const transformation = {
+ applyToAllDocuments: true,
+ name: "Script #1"
+ };
+
+ const table1 = {
+ documentIdColumn: "Id",
+ insertOnlyMode: false,
+ tableName: "Users"
+ };
+
+ const etlConfigurationSql = Object.assign(new SqlEtlConfiguration(), {
+ connectionStringName: "sql-connection-string-name",
+ disabled: false,
+ name: "etlSql",
+ transforms: [transformation],
+ sqlTables: [table1]
+ });
+
+ const operationSql = new AddEtlOperation(etlConfigurationSql);
+ const etlResult = await store.maintenance.send(operationSql);
+ //endregion
+
+ //region add_olap_etl
+ const transformationOlap = {
+ applyToAllDocuments: true,
+ name: "Script #1"
+ };
+
+ const etlConfigurationOlap = Object.assign(new OlapEtlConfiguration(), {
+ connectionStringName: "olap-connection-string-name",
+ disabled: false,
+ name: "etlOlap",
+ transforms: [transformationOlap],
+ });
+
+ const operationOlap = new AddEtlOperation(etlConfigurationOlap);
+ const etlResultOlap = await store.maintenance.send(operationOlap);
+ //endregion
+}
+
+//region syntax
+class EtlConfiguration {
+ taskId?; // number
+ name; // string
+    mentorNode?; // string
+ connectionStringName; // string
+ transforms; // Transformation[]
+ disabled?; // boolean
+ allowEtlOnNonEncryptedChannel?; // boolean
+}
+//endregion
diff --git a/Documentation/6.1/Samples/nodejs/client-api/operations/whatAreOperations.js b/Documentation/6.1/Samples/nodejs/client-api/operations/whatAreOperations.js
new file mode 100644
index 0000000000..497c650bb9
--- /dev/null
+++ b/Documentation/6.1/Samples/nodejs/client-api/operations/whatAreOperations.js
@@ -0,0 +1,93 @@
+import { DocumentStore, GetCountersOperation, StopIndexOperation, GetIndexStatisticsOperation, GetBuildNumberOperation, DeleteByQueryOperation } from "ravendb";
+
+const documentStore = new DocumentStore();
+
+async function operations() {
+ //region operations_ex
+ // Define operation, e.g. get all counters info for a document
+ const getCountersOp = new GetCountersOperation("products/1-A");
+
+ // Execute the operation by passing the operation to operations.send
+ const allCountersResult = await documentStore.operations.send(getCountersOp);
+
+ // Access the operation result
+ const numberOfCounters = allCountersResult.counters.length;
+ //endregion
+
+ //region maintenance_ex
+ // Define operation, e.g. stop an index
+ const stopIndexOp = new StopIndexOperation("Orders/ByCompany");
+
+ // Execute the operation by passing the operation to maintenance.send
+ await documentStore.maintenance.send(stopIndexOp);
+
+ // This specific operation returns void
+ // You can send another operation to verify the index running status
+ const indexStatsOp = new GetIndexStatisticsOperation("Orders/ByCompany");
+ const indexStats = await documentStore.maintenance.send(indexStatsOp);
+ const status = indexStats.status; // will be "Paused"
+ //endregion
+
+ //region server_ex
+ // Define operation, e.g. get the server build number
+ const getBuildNumberOp = new GetBuildNumberOperation();
+
+ // Execute the operation by passing the operation to maintenance.server.send
+ const buildNumberResult = await documentStore.maintenance.server.send(getBuildNumberOp);
+
+ // Access the operation result
+ const version = buildNumberResult.buildVersion;
+ //endregion
+
+ //region wait_ex
+ // Define operation, e.g. delete all discontinued products
+ // Note: This operation implements interface: 'IOperation'
+ const deleteByQueryOp = new DeleteByQueryOperation("from Products where Discontinued = true");
+
+ // Execute the operation
+ // 'send' returns an object that can be awaited on
+ const asyncOperation = await documentStore.operations.send(deleteByQueryOp);
+
+ // Call method 'waitForCompletion' to wait for the operation to complete
+ await asyncOperation.waitForCompletion();
+ //endregion
+
+ {
+ //region kill_ex
+ // Define operation, e.g. delete all discontinued products
+ // Note: This operation implements interface: 'IOperation'
+ const deleteByQueryOp = new DeleteByQueryOperation("from Products where Discontinued = true");
+
+ // Execute the operation
+ // 'send' returns an object that can be 'killed'
+ const asyncOperation = await documentStore.operations.send(deleteByQueryOp);
+
+ // Call method 'kill' to abort operation
+ await asyncOperation.kill();
+ //endregion
+ }
+}
+
+{
+ //region operations_send
+ // Available overloads:
+ await send(operation);
+ await send(operation, sessionInfo);
+ await send(operation, sessionInfo, documentType);
+
+ await send(patchOperation);
+ await send(patchOperation, sessionInfo);
+ await send(patchOperation, sessionInfo, resultType);
+ //endregion
+
+ //region maintenance_send
+ await send(operation);
+ //endregion
+
+ //region server_send
+ await send(operation);
+ //endregion
+
+ //region wait_kill_syntax
+ await waitForCompletion();
+ await kill();
+ //endregion
+}
diff --git a/Documentation/6.1/Samples/python/ClientApi/Operations/WhatAreOperations.py b/Documentation/6.1/Samples/python/ClientApi/Operations/WhatAreOperations.py
new file mode 100644
index 0000000000..b86c46e82f
--- /dev/null
+++ b/Documentation/6.1/Samples/python/ClientApi/Operations/WhatAreOperations.py
@@ -0,0 +1,151 @@
+import datetime
+from typing import TypeVar, Optional, Union, Generic
+
+from ravendb import (
+ PatchOperation,
+ PatchStatus,
+ DocumentStore,
+ StopIndexOperation,
+ GetIndexStatisticsOperation,
+ DeleteByQueryOperation,
+ RavenCommand,
+)
+from ravendb.documents.conventions import DocumentConventions
+from ravendb.documents.operations.counters import GetCountersOperation
+from ravendb.documents.operations.definitions import (
+ IOperation,
+ OperationIdResult,
+ _T,
+ VoidMaintenanceOperation,
+ MaintenanceOperation,
+)
+from ravendb.documents.operations.operation import Operation
+from ravendb.documents.session.misc import SessionInfo
+from ravendb.http.http_cache import HttpCache
+from ravendb.serverwide.operations.common import ServerOperation, GetBuildNumberOperation
+
+from examples_base import ExampleBase
+
+_Operation_T = TypeVar("_Operation_T")
+_T_OperationResult = TypeVar("_T_OperationResult")
+
+
+class WhatAreOperations(ExampleBase):
+ class SendSyntax:
+ # region operations_send
+ # Available overloads:
+ def send(self, operation: IOperation[_Operation_T], session_info: SessionInfo = None) -> _Operation_T: ...
+
+ def send_async(self, operation: IOperation[OperationIdResult]) -> Operation: ...
+
+ def send_patch_operation(self, operation: PatchOperation, session_info: SessionInfo) -> PatchStatus: ...
+
+ def send_patch_operation_with_entity_class(
+ self, entity_class: _T, operation: PatchOperation, session_info: Optional[SessionInfo] = None
+ ) -> PatchOperation.Result[_T]: ...
+
+ # endregion
+
+ class MaintenanceSyntax:
+ # region maintenance_send
+ def send(
+ self, operation: Union[VoidMaintenanceOperation, MaintenanceOperation[_Operation_T]]
+ ) -> Optional[_Operation_T]: ...
+
+ def send_async(self, operation: MaintenanceOperation[OperationIdResult]) -> Operation: ...
+
+ # endregion
+
+ # region ioperation
+
+ # (The name starts with a capital I to mirror the other clients' APIs - it is the equivalent of an interface)
+ class IOperation(Generic[_Operation_T]):
+ def get_command(
+ self, store: "DocumentStore", conventions: "DocumentConventions", cache: HttpCache
+ ) -> "RavenCommand[_Operation_T]":
+ pass
+
+ # endregion
+
+ class ServerSend:
+ # region server_send
+ def send(self, operation: ServerOperation[_T_OperationResult]) -> Optional[_T_OperationResult]: ...
+
+ def send_async(self, operation: ServerOperation[OperationIdResult]) -> Operation: ...
+
+ # endregion
+
+ def test_examples(self):
+ with self.embedded_server.get_document_store("WhatAreOperations") as store:
+ # region operations_ex
+ # Define operation, e.g. get all counters info for a document
+ get_counters_op = GetCountersOperation("products/1-A")
+
+ # Execute the operation by passing the operation to operations.send
+ all_counters_result = store.operations.send(get_counters_op)
+
+ # Access the operation result
+ number_of_counters = len(all_counters_result.counters)
+ # endregion
+
+ # region maintenance_ex
+ # Define operation, e.g. stop an index
+ stop_index_op = StopIndexOperation("Orders/ByCompany")
+
+ # Execute the operation by passing the operation to maintenance.send
+ store.maintenance.send(stop_index_op)
+
+ # This specific operation returns void
+ # You can send another operation to verify the index running status
+ index_stats_op = GetIndexStatisticsOperation("Orders/ByCompany")
+ index_stats = store.maintenance.send(index_stats_op)
+ status = index_stats.status # will be "Paused"
+ # endregion
+
+ # region server_ex
+ # Define operation, e.g. get the server build number
+ get_build_number_op = GetBuildNumberOperation()
+
+ # Execute the operation by passing to maintenance.server.send
+ build_number_result = store.maintenance.server.send(get_build_number_op)
+
+ # Access the operation result
+ version = build_number_result.build_version
+ # endregion
+
+ # region kill_ex
+ # Define operation, e.g. delete all discontinued products
+ # Note: This operation inherits from: IOperation[OperationIdResult]
+
+ delete_by_query_op = DeleteByQueryOperation("from Products where Discontinued = true")
+
+ # Execute the operation
+ # Send returns an 'Operation' object that can be 'killed'
+ operation = store.operations.send_async(delete_by_query_op)
+
+ # Call 'kill' to abort operation
+ # todo: skip it - wait for the merge of the ticket linked in the checklist
+ operation.kill()
+ # endregion
+
+ # region wait_timeout_ex
+
+ def wait_for_completion_with_timeout(timeout: datetime.timedelta, document_store: DocumentStore):
+ # Define operation, e.g. delete all discontinued products
+ # Note: This operation inherits from: 'IOperation[OperationIdResult]'
+ delete_by_query_op = DeleteByQueryOperation("from Products where Discontinued = true")
+
+ # Execute the operation
+ # Send returns an 'Operation' object that can be awaited on
+ operation = document_store.operations.send_async(delete_by_query_op)
+
+ try:
+ # Call method 'WaitForCompletion' to wait for the operation to complete.
+ # If a timeout is specified, the method will only wait for the specified time frame.
+ result = operation.wait_for_completion_with_timeout(timeout) # todo: skip, wait for the merge
+
+ # The operation has finished within the specified timeframe
+ number_of_items_deleted = result.total
+ except TimeoutError:
+ # The operation did not finish within the specified timeframe
+ ...
+
+ # endregion