diff --git a/articles/active-directory/authentication/howto-sspr-reporting.md b/articles/active-directory/authentication/howto-sspr-reporting.md index 651d797d7c01d..f685829c956a2 100644 --- a/articles/active-directory/authentication/howto-sspr-reporting.md +++ b/articles/active-directory/authentication/howto-sspr-reporting.md @@ -47,15 +47,14 @@ In the Azure portal experience, we have improved the way that you can view passw 1. Browse to the [Azure portal](https://portal.azure.com). 2. Select **All services** in the left pane. 3. Search for **Azure Active Directory** in the list of services and select it. -4. Select **Users and groups**. -5. Select **Audit Logs** from the **Users and groups** menu. This shows you all of the audit events that occurred against all the users in your directory. You can filter this view to see all the password-related events. -6. To filter this view to see only the password-reset-related events, select the **Filter** button at the top of the pane. -7. From the **Filter** menu, select the **Category** drop-down list, and change it to the **Self-service Password Management** category type. -8. Optionally, further filter the list by choosing the specific **Activity** you're interested in. +4. Select **Users** from the Manage section. +5. Select **Audit Logs** from the **Users** blade. This shows you all of the audit events that occurred against all the users in your directory. You can filter this view to see all the password-related events. +6. From the **Filter** menu at the top of the pane, select the **Service** drop-down list, and change it to the **Self-service Password Management** service type. +7. Optionally, further filter the list by choosing the specific **Activity** you're interested in. ### Converged registration (preview) -If you are participating in the public preview of converged registration, information regarding user activity in the audit logs will be found under the category **Authentication Methods**. 
+If you are participating in the public preview of converged registration, information regarding user activity in the audit logs will be found under the service **Authentication Methods**. ## Description of the report columns in the Azure portal diff --git a/articles/active-directory/develop/quickstart-v2-aspnet-core-webapp.md b/articles/active-directory/develop/quickstart-v2-aspnet-core-webapp.md index 893dac474eb5e..db6f746f58fa2 100644 --- a/articles/active-directory/develop/quickstart-v2-aspnet-core-webapp.md +++ b/articles/active-directory/develop/quickstart-v2-aspnet-core-webapp.md @@ -52,9 +52,9 @@ In this quickstart, you'll learn how an ASP.NET Core web app can sign in persona > 1. Select **New registration**. > 1. When the **Register an application** page appears, enter your application's registration information: > - In the **Name** section, enter a meaningful application name that will be displayed to users of the app, for example `AspNetCore-Quickstart`. -> - In **Reply URL**, add `https://localhost:44321/`, and select **Register**. +> - In **Redirect URI**, add `https://localhost:44321/`, and select **Register**. > 1. Select the **Authentication** menu, and then add the following information: -> - In **Reply URL**, add `https://localhost:44321/signin-oidc`, and select **Register**. +> - In **Redirect URIs**, add `https://localhost:44321/signin-oidc`, and select **Save**. > - In the **Advanced settings** section, set **Logout URL** to `https://localhost:44321/signout-oidc`. > - Under **Implicit grant**, check **ID tokens**. > - Select **Save**. @@ -145,7 +145,7 @@ The line containing `.AddAzureAd` adds the Microsoft identity platform authentic > [!NOTE] -> Setting `ValidateIssuer = false` is a simplification for this quickstart. In real applications you need to validate the issuer +> Setting `ValidateIssuer = false` is a simplification for this quickstart. In real applications you need to validate the issuer. 
> See the samples to understand how to do that. ### Protect a controller or a controller's method diff --git a/articles/active-directory/develop/quickstart-v2-aspnet-webapp.md b/articles/active-directory/develop/quickstart-v2-aspnet-webapp.md index 7518be3ee7980..699fc32012251 100644 --- a/articles/active-directory/develop/quickstart-v2-aspnet-webapp.md +++ b/articles/active-directory/develop/quickstart-v2-aspnet-webapp.md @@ -51,7 +51,7 @@ In this quickstart, you'll learn how an ASP.NET web app can sign in personal acc > 1. Select **New registration**. > 1. When the **Register an application** page appears, enter your application's registration information: > - In the **Name** section, enter a meaningful application name that will be displayed to users of the app, for example `ASPNET-Quickstart`. -> - Add `https://localhost:44368/` in **Reply URL**, and click **Register**. +> - Add `https://localhost:44368/` in **Redirect URI**, and click **Register**. Select **Authentication** menu, set **ID tokens** under **Implicit Grant**, and then select **Save**. > [!div class="sxs-lookup" renderon="portal"] @@ -155,7 +155,7 @@ public void Configuration(IAppBuilder app) > [!NOTE] -> Setting `ValidateIssuer = false` is a simplification for this quickstart. In real applications you need to validate the issuer +> Setting `ValidateIssuer = false` is a simplification for this quickstart. In real applications you need to validate the issuer. > See the samples to understand how to do that. 
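[Editor's illustration, not part of the diff: both quickstart notes above defer issuer validation to the samples. As a purely illustrative sketch — not taken from either quickstart's sample code, and with `tenantId` as a hypothetical variable holding your Azure AD tenant ID — re-enabling validation with the `Microsoft.IdentityModel.Tokens` types might look like the following.]

```csharp
// Illustrative sketch only: validate the issuer instead of setting ValidateIssuer = false.
// tenantId is a placeholder for your Azure AD tenant ID (a GUID), supplied by your app's config.
TokenValidationParameters = new TokenValidationParameters
{
    ValidateIssuer = true,
    ValidIssuer = $"https://login.microsoftonline.com/{tenantId}/v2.0",
};
```

[For multi-tenant apps, issuer validation typically means checking the token's issuer against the set of allowed tenants rather than a single `ValidIssuer` value.]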
### Initiate an authentication challenge diff --git a/articles/active-directory/hybrid/whatis-aadc-admin-agent.md b/articles/active-directory/hybrid/whatis-aadc-admin-agent.md index 7d94d8c84e7e0..4e12dbf083376 100644 --- a/articles/active-directory/hybrid/whatis-aadc-admin-agent.md +++ b/articles/active-directory/hybrid/whatis-aadc-admin-agent.md @@ -37,16 +37,18 @@ The Microsoft Support Engineer cannot change any data in your system and cannot If you do not want the Microsoft service engineer to access your data for a support call you can disable this by modifying the service config file as described below: - 1. Open **C:\Program Files\Microsoft Azure AD Connect Administration Agent\AzureADConnectAdministrationAgentService.exe.config** in notepad. - 2. Disable **UserDataEnabled** setting as shown below. If **UserDataEnabled** setting exists and is set to true, then set it to false. If the setting does not exist, then add the setting as shown below. - `<appSettings> <add key="UserDataEnabled" value="false" /> </appSettings>` - 3. Save the config file. - 4. Restart Azure AD Connect Administration Agent service as shown below +1. Open **C:\Program Files\Microsoft Azure AD Connect Administration Agent\AzureADConnectAdministrationAgentService.exe.config** in Notepad. +2. Disable the **UserDataEnabled** setting as shown below. If the **UserDataEnabled** setting exists and is set to true, set it to false. If the setting does not exist, add it as shown below.
+
+   ```xml
+   <appSettings>
+       <add key="UserDataEnabled" value="false" />
+   </appSettings>
+   ```
+
+3. Save the config file.
+4. 
Restart Azure AD Connect Administration Agent service as shown below ![admin agent](media/whatis-aadc-admin-agent/adminagent2.png) diff --git a/articles/active-directory/manage-apps/application-proxy-release-version-history.md b/articles/active-directory/manage-apps/application-proxy-release-version-history.md index 307fad89c4871..189373df73ca5 100644 --- a/articles/active-directory/manage-apps/application-proxy-release-version-history.md +++ b/articles/active-directory/manage-apps/application-proxy-release-version-history.md @@ -26,7 +26,7 @@ Here is a list of related resources: Resource | Details --------- | --------- | How to enable Application Proxy | Pre-requisites for enabling Application Proxy and installing and registering a connector are described in this [tutorial](application-proxy-add-on-premises-application.md). -Understand Azure AD Application Proxy connectors | Find out more about [connector management](application-proxy-connectors.md) and how connectors auto-upgrade. +Understand Azure AD Application Proxy connectors | Find out more about [connector management](application-proxy-connectors.md) and how connectors [auto-upgrade](application-proxy-connectors.md#automatic-updates). Azure AD Application Proxy Connector Download | [Download the latest connector](https://download.msappproxy.net/subscription/d3c8b69d-6bf7-42be-a529-3fe9c2e70c90/connector/download). ## 1.5.612.0 @@ -85,4 +85,4 @@ If you're using an Application Proxy connector version earlier than 1.5.36.0, up ## Next steps - Learn more about [Remote access to on-premises applications through Azure AD Application Proxy](application-proxy.md). -- To start using Application Proxy, see [Tutorial: Add an on-premises application for remote access through Application Proxy](application-proxy-add-on-premises-application.md). 
\ No newline at end of file +- To start using Application Proxy, see [Tutorial: Add an on-premises application for remote access through Application Proxy](application-proxy-add-on-premises-application.md). diff --git a/articles/active-directory/saas-apps/coralogix-tutorial.md b/articles/active-directory/saas-apps/coralogix-tutorial.md index b5949cedfcdc9..b783a8f0594f1 100644 --- a/articles/active-directory/saas-apps/coralogix-tutorial.md +++ b/articles/active-directory/saas-apps/coralogix-tutorial.md @@ -24,131 +24,131 @@ In this tutorial, you learn how to integrate Coralogix with Azure Active Directo Integrating Coralogix with Azure AD provides you with the following benefits: * You can control in Azure AD who has access to Coralogix. -* You can enable your users to be automatically signed-in to Coralogix (Single Sign-On) with their Azure AD accounts. -* You can manage your accounts in one central location - the Azure portal. +* You can enable your users to be automatically signed in to Coralogix (single sign-on) with their Azure AD accounts. +* You can manage your accounts in one central location: the Azure portal. -If you want to know more details about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](https://docs.microsoft.com/azure/active-directory/active-directory-appssoaccess-whatis). +For more information about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](https://docs.microsoft.com/azure/active-directory/active-directory-appssoaccess-whatis). If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin. ## Prerequisites To configure Azure AD integration with Coralogix, you need the following items: -* An Azure AD subscription. 
If you don't have an Azure AD environment, you can get one-month trial [here](https://azure.microsoft.com/pricing/free-trial/) -* Coralogix single sign-on enabled subscription +- An Azure AD subscription. If you don't have an Azure AD environment, you can get a [one-month trial](https://azure.microsoft.com/pricing/free-trial/). +- A Coralogix single-sign-on-enabled subscription. ## Scenario description In this tutorial, you configure and test Azure AD single sign-on in a test environment. -* Coralogix supports **SP** initiated SSO +* Coralogix supports SP-initiated SSO. -## Adding Coralogix from the gallery +## Add Coralogix from the gallery -To configure the integration of Coralogix into Azure AD, you need to add Coralogix from the gallery to your list of managed SaaS apps. +To configure the integration of Coralogix into Azure AD, first add Coralogix from the gallery to your list of managed SaaS apps. -**To add Coralogix from the gallery, perform the following steps:** +To add Coralogix from the gallery, take the following steps: -1. In the **[Azure portal](https://portal.azure.com)**, on the left navigation panel, click **Azure Active Directory** icon. +1. In the [Azure portal](https://portal.azure.com), in the left pane, select the **Azure Active Directory** icon. ![The Azure Active Directory button](common/select-azuread.png) -2. Navigate to **Enterprise Applications** and then select the **All Applications** option. +2. Go to **Enterprise Applications**, and then select **All Applications**. ![The Enterprise applications blade](common/enterprise-applications.png) -3. To add new application, click **New application** button on the top of dialog. +3. To add a new application, select the **New application** button at the top of the dialog box. ![The New application button](common/add-new-app.png) -4. In the search box, type **Coralogix**, select **Coralogix** from result panel then click **Add** button to add the application. +4. 
In the search box, enter **Coralogix**. Select **Coralogix** from the results pane, and then select the **Add** button to add the application. ![Coralogix in the results list](common/search-new-app.png) ## Configure and test Azure AD single sign-on -In this section, you configure and test Azure AD single sign-on with Coralogix based on a test user called **Britta Simon**. -For single sign-on to work, a link relationship between an Azure AD user and the related user in Coralogix needs to be established. +In this section, you configure and test Azure AD single sign-on with Coralogix based on a test user called Britta Simon. +For single sign-on to work, you need to establish a link between an Azure AD user and the related user in Coralogix. -To configure and test Azure AD single sign-on with Coralogix, you need to complete the following building blocks: +To configure and test Azure AD single sign-on with Coralogix, first complete the following building blocks: -1. **[Configure Azure AD Single Sign-On](#configure-azure-ad-single-sign-on)** - to enable your users to use this feature. -2. **[Configure Coralogix Single Sign-On](#configure-coralogix-single-sign-on)** - to configure the Single Sign-On settings on application side. -3. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with Britta Simon. -4. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable Britta Simon to use Azure AD single sign-on. -5. **[Create Coralogix test user](#create-coralogix-test-user)** - to have a counterpart of Britta Simon in Coralogix that is linked to the Azure AD representation of user. -6. **[Test single sign-on](#test-single-sign-on)** - to verify whether the configuration works. +1. [Configure Azure AD single sign-on](#configure-azure-ad-single-sign-on) to enable your users to use this feature. +2. 
[Configure Coralogix single sign-on](#configure-coralogix-single-sign-on) to configure the single sign-on settings on the application side. +3. [Create an Azure AD test user](#create-an-azure-ad-test-user) to test Azure AD single sign-on with Britta Simon. +4. [Assign the Azure AD test user](#assign-the-azure-ad-test-user) to enable Britta Simon to use Azure AD single sign-on. +5. [Create a Coralogix test user](#create-a-coralogix-test-user) to have a counterpart of Britta Simon in Coralogix that is linked to the Azure AD representation of the user. +6. [Test single sign-on](#test-single-sign-on) to verify that the configuration works. ### Configure Azure AD single sign-on In this section, you enable Azure AD single sign-on in the Azure portal. -To configure Azure AD single sign-on with Coralogix, perform the following steps: +To configure Azure AD single sign-on with Coralogix, take the following steps: 1. In the [Azure portal](https://portal.azure.com/), on the **Coralogix** application integration page, select **Single sign-on**. ![Configure single sign-on link](common/select-sso.png) -2. On the **Select a Single sign-on method** dialog, select **SAML/WS-Fed** mode to enable single sign-on. +2. In the **Select a single sign-on method** dialog box, select **SAML** to enable single sign-on. ![Single sign-on select mode](common/select-saml-option.png) -3. On the **Set up Single Sign-On with SAML** page, click **Edit** icon to open **Basic SAML Configuration** dialog. +3. On the **Set up Single Sign-On with SAML** page, select the **Edit** icon to open the **Basic SAML Configuration** dialog box. ![Edit Basic SAML Configuration](common/edit-urls.png) -4. On the **Basic SAML Configuration** section, perform the following steps: +4. In the **Basic SAML Configuration** dialog box, take the following steps: ![Coralogix Domain and URLs single sign-on information](common/sp-identifier.png) - a. In the **Sign on URL** text box, type a URL using the following pattern: + a. 
In the **Sign on URL** box, enter a URL with the following pattern: `https://.coralogix.com` - b. In the **Identifier (Entity ID)** text box, type a URL: + b. In the **Identifier (Entity ID)** text box, enter a URL, such as: + + `https://api.coralogix.com/saml/metadata.xml` - | | - |--| - | `https://api.coralogix.com/saml/metadata.xml` | - | `https://aws-client-prod.coralogix.com/saml/metadata.xml` | + or + + `https://aws-client-prod.coralogix.com/saml/metadata.xml` > [!NOTE] - > The Sign on URL value is not real. Update the value with the actual Sign on URL. Contact [Coralogix Client support team](mailto:info@coralogix.com) to get the value. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal. + > The sign-on URL value isn't real. Update the value with the actual sign-on URL. Contact the [Coralogix Client support team](mailto:info@coralogix.com) to get the value. You can also refer to the patterns in the **Basic SAML Configuration** section in the Azure portal. -5. Coralogix application expects the SAML assertions in a specific format. Configure the following claims for this application. You can manage the values of these attributes from the **User Attributes** section on application integration page. On the **Set up Single Sign-On with SAML** page, click **Edit** button to open **User Attributes** dialog. +5. The Coralogix application expects the SAML assertions in a specific format. Configure the following claims for this application. You can manage the values of these attributes from the **User Attributes** section on the application integration page. On the **Set up Single Sign-On with SAML** page, select the **Edit** button to open the **User Attributes** dialog box. ![image](common/edit-attribute.png) -6. 
In the **User Claims** section on the **User Attributes** dialog, edit the claims by using **Edit icon** or add the claims by using **Add new claim** to configure SAML token attribute as shown in the image above and perform the following steps: +6. In the **User Claims** section in the **User Attributes** dialog box, edit the claims by using the **Edit** icon. You can also add the claims by using **Add new claim** to configure the SAML token attribute as shown in the previous image. Then take the following steps: - a. Click **Edit icon** to open the **Manage user claims** dialog. + a. Select the **Edit** icon to open the **Manage user claims** dialog box. ![image](./media/coralogix-tutorial/tutorial_usermail.png) - ![image](./media/coralogix-tutorial/tutorial_usermailedit.png) b. From the **Choose name identifier format** list, select **Email address**. c. From the **Source attribute** list, select **user.mail**. - d. Click **Save**. + d. Select **Save**. -7. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, click **Download** to download the **Federation Metadata XML** from the given options as per your requirement and save it on your computer. +7. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, select **Download** to download the **Federation Metadata XML** from the given options according to your requirements. Then save it on your computer. ![The Certificate download link](common/metadataxml.png) -8. On the **Set up Coralogix** section, copy the appropriate URL(s) as per your requirement. +8. In the **Set up Coralogix** section, copy the appropriate URL(s). ![Copy configuration URLs](common/copy-configuration-urls.png) a. Login URL - b. Azure Ad Identifier + b. Azure AD Identifier c. 
Logout URL -### Configure Coralogix Single Sign-On +### Configure Coralogix single sign-on -To configure single sign-on on **Coralogix** side, you need to send the downloaded **Federation Metadata XML** and appropriate copied URLs from Azure portal to [Coralogix support team](mailto:info@coralogix.com). They set this setting to have the SAML SSO connection set properly on both sides. +To configure single sign-on on the **Coralogix** side, send the downloaded **Federation Metadata XML** and copied URLs from the Azure portal to the [Coralogix support team](mailto:info@coralogix.com). They ensure that the SAML SSO connection is set properly on both sides. ### Create an Azure AD test user @@ -158,28 +158,27 @@ The objective of this section is to create a test user in the Azure portal calle ![The "Users and groups" and "All users" links](common/users.png) -2. Select **New user** at the top of the screen. +2. At the top of the screen, select **New user**. ![New user Button](common/new-user.png) -3. In the User properties, perform the following steps. +3. In the **User** dialog box, take the following steps. ![The User dialog box](common/user-properties.png) - a. In the **Name** field enter **BrittaSimon**. + a. In the **Name** field, enter **BrittaSimon**. - b. In the **User name** field type **brittasimon\@yourcompanydomain.extension** For example, BrittaSimon@contoso.com + b. In the **User name** field, enter "brittasimon@yourcompanydomain.extension". For example, in this case, you might enter "brittasimon@contoso.com". - c. Select **Show password** check box, and then write down the value that's displayed in the Password box. + c. Select the **Show password** check box, and then note the value that's displayed in the **Password** box. - d. Click **Create**. + d. Select **Create**. ### Assign the Azure AD test user In this section, you enable Britta Simon to use Azure single sign-on by granting access to Coralogix. -1. 
In the Azure portal, select **Enterprise Applications**, select **All applications**, then select **Coralogix**. +1. In the Azure portal, select **Enterprise Applications**, select **All applications**, and then select **Coralogix**. ![Enterprise applications blade](common/enterprise-applications.png) @@ -191,29 +190,29 @@ In this section, you enable Britta Simon to use Azure single sign-on by granting ![The "Users and groups" link](common/users-groups-blade.png) -4. Click the **Add user** button, then select **Users and groups** in the **Add Assignment** dialog. +4. Select the **Add user** button. Then select **Users and groups** in the **Add Assignment** dialog box. ![The Add Assignment pane](common/add-assign-user.png) -5. In the **Users and groups** dialog select **Britta Simon** in the Users list, then click the **Select** button at the bottom of the screen. +5. In the **Users and groups** dialog box, select **Britta Simon** in the users list. Then click the **Select** button at the bottom of the screen. -6. If you are expecting any role value in the SAML assertion then in the **Select Role** dialog select the appropriate role for the user from the list, then click the **Select** button at the bottom of the screen. +6. If you're expecting a role value in the SAML assertion, in the **Select Role** dialog box, select the appropriate role for the user from the list. Then click the **Select** button at the bottom of the screen. -7. In the **Add Assignment** dialog click the **Assign** button. +7. In the **Add Assignment** dialog box, select the **Assign** button. -### Create Coralogix test user +### Create a Coralogix test user -In this section, you create a user called Britta Simon in Coralogix. Work with [Coralogix support team](mailto:info@coralogix.com) to add the users in the Coralogix platform. Users must be created and activated before you use single sign-on. +In this section, you create a user called Britta Simon in Coralogix. 
Work with the [Coralogix support team](mailto:info@coralogix.com) to add the users in the Coralogix platform. You must create and activate users before you use single sign-on. ### Test single sign-on -In this section, you test your Azure AD single sign-on configuration using the Access Panel. +In this section, you test your Azure AD single sign-on configuration by using the MyApps portal. -When you click the Coralogix tile in the Access Panel, you should be automatically signed in to the Coralogix for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction). +When you select the Coralogix tile in the MyApps portal, you should be automatically signed in to Coralogix. For more information about the MyApps portal, see [What is the MyApps portal?](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction). 
-## Additional Resources +## Additional resources -- [List of tutorials on how to integrate SaaS apps with Azure Active Directory](https://docs.microsoft.com/azure/active-directory/active-directory-saas-tutorial-list) - [What is application access and single sign-on with Azure Active Directory?](https://docs.microsoft.com/azure/active-directory/active-directory-appssoaccess-whatis) diff --git a/articles/active-directory/saas-apps/signagelive-tutorial.md b/articles/active-directory/saas-apps/signagelive-tutorial.md index d9441b8daa827..37edfa2e3b78f 100644 --- a/articles/active-directory/saas-apps/signagelive-tutorial.md +++ b/articles/active-directory/saas-apps/signagelive-tutorial.md @@ -24,106 +24,107 @@ In this tutorial, you learn how to integrate Signagelive with Azure Active Direc Integrating Signagelive with Azure AD provides you with the following benefits: * You can control in Azure AD who has access to Signagelive. -* You can enable your users to be automatically signed-in to Signagelive (Single Sign-On) with their Azure AD accounts. +* You can enable your users to be automatically signed in to Signagelive (single sign-on) with their Azure AD accounts. -* You can manage your accounts in one central location - the Azure portal. +* You can manage your accounts in one central location: the Azure portal. -If you want to know more details about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](https://docs.microsoft.com/azure/active-directory/active-directory-appssoaccess-whatis). -If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin. 
+For more information about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](https://docs.microsoft.com/azure/active-directory/active-directory-appssoaccess-whatis). If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin. ## Prerequisites To configure Azure AD integration with Signagelive, you need the following items: -* An Azure AD subscription. If you don't have an Azure AD environment, you can get one-month trial [here](https://azure.microsoft.com/pricing/free-trial/) -* Signagelive single sign-on enabled subscription +* An Azure AD subscription. If you don't have an Azure AD environment, you can get a [one-month trial](https://azure.microsoft.com/pricing/free-trial/). +* A Signagelive single-sign-on-enabled subscription. ## Scenario description In this tutorial, you configure and test Azure AD single sign-on in a test environment. -* Signagelive supports **SP** initiated SSO +* Signagelive supports SP-initiated SSO. -## Adding Signagelive from the gallery +## Add Signagelive from the gallery -To configure the integration of Signagelive into Azure AD, you need to add Signagelive from the gallery to your list of managed SaaS apps. +To configure the integration of Signagelive into Azure AD, first add Signagelive from the gallery to your list of managed SaaS apps. -**To add Signagelive from the gallery, perform the following steps:** +To add Signagelive from the gallery, take the following steps: -1. In the **[Azure portal](https://portal.azure.com)**, on the left navigation panel, click **Azure Active Directory** icon. +1. In the [Azure portal](https://portal.azure.com), in the left pane, select the **Azure Active Directory** icon. ![The Azure Active Directory button](common/select-azuread.png) -2. Navigate to **Enterprise Applications** and then select the **All Applications** option. +2. 
Go to **Enterprise Applications**, and then select the **All Applications** option. ![The Enterprise applications blade](common/enterprise-applications.png) -3. To add new application, click **New application** button on the top of dialog. +3. To add a new application, select the **New application** button at the top of the dialog box. ![The New application button](common/add-new-app.png) -4. In the search box, type **Signagelive**, select **Signagelive** from result panel then click **Add** button to add the application. +4. In the search box, enter **Signagelive**. ![Signagelive in the results list](common/search-new-app.png) +5. Select **Signagelive** from the results pane, and then select the **Add** button to add the application. + ## Configure and test Azure AD single sign-on In this section, you configure and test Azure AD single sign-on with Signagelive based on a test user called **Britta Simon**. -For single sign-on to work, a link relationship between an Azure AD user and the related user in Signagelive needs to be established. +For single sign-on to work, you must establish a link between an Azure AD user and the related user in Signagelive. -To configure and test Azure AD single sign-on with Signagelive, you need to complete the following building blocks: +To configure and test Azure AD single sign-on with Signagelive, first complete the following building blocks: -1. **[Configure Azure AD Single Sign-On](#configure-azure-ad-single-sign-on)** - to enable your users to use this feature. -2. **[Configure Signagelive Single Sign-On](#configure-signagelive-single-sign-on)** - to configure the Single Sign-On settings on application side. -3. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with Britta Simon. -4. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable Britta Simon to use Azure AD single sign-on. -5. 
**[Create Signagelive test user](#create-signagelive-test-user)** - to have a counterpart of Britta Simon in Signagelive that is linked to the Azure AD representation of user. -6. **[Test single sign-on](#test-single-sign-on)** - to verify whether the configuration works. +1. [Configure Azure AD single sign-on](#configure-azure-ad-single-sign-on) to enable your users to use this feature. +2. [Configure Signagelive single sign-on](#configure-signagelive-single-sign-on) to configure the single sign-on settings on the application side. +3. [Create an Azure AD test user](#create-an-azure-ad-test-user) to test Azure AD single sign-on with Britta Simon. +4. [Assign the Azure AD test user](#assign-the-azure-ad-test-user) to enable Britta Simon to use Azure AD single sign-on. +5. [Create a Signagelive test user](#create-a-signagelive-test-user) to have a counterpart of Britta Simon in Signagelive that is linked to the Azure AD representation of the user. +6. [Test single sign-on](#test-single-sign-on) to verify that the configuration works. ### Configure Azure AD single sign-on In this section, you enable Azure AD single sign-on in the Azure portal. -To configure Azure AD single sign-on with Signagelive, perform the following steps: +To configure Azure AD single sign-on with Signagelive, take the following steps: 1. In the [Azure portal](https://portal.azure.com/), on the **Signagelive** application integration page, select **Single sign-on**. ![Configure single sign-on link](common/select-sso.png) -2. On the **Select a Single sign-on method** dialog, select **SAML/WS-Fed** mode to enable single sign-on. +2. In the **Select a single sign-on method** dialog box, select **SAML** to enable single sign-on. ![Single sign-on select mode](common/select-saml-option.png) -3. On the **Set up Single Sign-On with SAML** page, click **Edit** icon to open **Basic SAML Configuration** dialog. +3. 
On the **Set up single sign-on with SAML** page, select **Edit** to open the **Basic SAML Configuration** dialog box. ![Edit Basic SAML Configuration](common/edit-urls.png) -4. On the **Basic SAML Configuration** section, perform the following steps: +4. In the **Basic SAML Configuration** section, take the following steps: ![Signagelive Domain and URLs single sign-on information](common/sp-signonurl.png) - In the **Sign-on URL** text box, type a URL using the following pattern: + In the **Sign-on URL** box, enter a URL that uses the following pattern: `https://login.signagelive.com/sso/` > [!NOTE] - > The value is not real. Update the value with the actual Sign-On URL. Contact [Signagelive Client support team](mailto:support@signagelive.com) to get the value. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal. + > The value is not real. Update the value with the actual sign-on URL. To get the value, contact the [Signagelive Client support team](mailto:support@signagelive.com). You can also refer to the patterns that are shown in the **Basic SAML Configuration** section in the Azure portal. -5. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, click **Download** to download the **Certificate (Raw)** from the given options as per your requirement and save it on your computer. +5. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, select **Download** to download the **Certificate (Raw)** from the given options per your requirement. Then save it on your computer. ![The Certificate download link](common/certificateraw.png) -6. On the **Set up Signagelive** section, copy the appropriate URL(s) as per your requirement. +6. In the **Set up Signagelive** section, copy the URL(s) that you need. ![Copy configuration URLs](common/copy-configuration-urls.png) a. Login URL - b. Azure Ad Identifier + b. Azure AD Identifier c.
Logout URL -### Configure Signagelive Single Sign-On +### Configure Signagelive single sign-on -To configure single sign-on on **Signagelive** side, you need to send the downloaded **Certificate (Raw)** and appropriate copied URLs from Azure portal to [Signagelive support team](mailto:support@signagelive.com). They set this setting to have the SAML SSO connection set properly on both sides. +To configure single sign-on on the Signagelive side, send the downloaded **Certificate (Raw)** and copied URLs from the Azure portal to the [Signagelive support team](mailto:support@signagelive.com). They ensure that the SAML SSO connection is set properly on both sides. ### Create an Azure AD test user @@ -135,26 +136,25 @@ The objective of this section is to create a test user in the Azure portal calle 2. Select **New user** at the top of the screen. - ![New user Button](common/new-user.png) + ![New user button](common/new-user.png) -3. In the User properties, perform the following steps. +3. In the **User** dialog box, take the following steps. ![The User dialog box](common/user-properties.png) a. In the **Name** field, enter **BrittaSimon**. - b. In the **User name** field type **brittasimon\@yourcompanydomain.extension** - For example, BrittaSimon@contoso.com + b. In the **User name** field, enter "brittasimon@yourcompanydomain.extension". For example, in this case, you might enter "BrittaSimon@contoso.com". - c. Select **Show password** check box, and then write down the value that's displayed in the Password box. + c. Select the **Show password** check box, and then note the value that's displayed in the Password box. - d. Click **Create**. + d. Select **Create**. ### Assign the Azure AD test user In this section, you enable Britta Simon to use Azure single sign-on by granting access to Signagelive. -1. In the Azure portal, select **Enterprise Applications**, select **All applications**, then select **Signagelive**. +1.
In the Azure portal, select **Enterprise Applications**, select **All applications**, and then select **Signagelive**. ![Enterprise applications blade](common/enterprise-applications.png) @@ -166,29 +166,29 @@ In this section, you enable Britta Simon to use Azure single sign-on by granting ![The "Users and groups" link](common/users-groups-blade.png) -4. Click the **Add user** button, then select **Users and groups** in the **Add Assignment** dialog. +4. Select the **Add user** button. Then, in the **Add Assignment** dialog box, select **Users and groups**. ![The Add Assignment pane](common/add-assign-user.png) -5. In the **Users and groups** dialog select **Britta Simon** in the Users list, then click the **Select** button at the bottom of the screen. +5. In the **Users and groups** dialog box, in the **Users** list, select **Britta Simon**. Then click the **Select** button at the bottom of the screen. -6. If you are expecting any role value in the SAML assertion then in the **Select Role** dialog select the appropriate role for the user from the list, then click the **Select** button at the bottom of the screen. +6. If you are expecting a role value in the SAML assertion, then, in the **Select Role** dialog box, select the appropriate role for the user from the list. Next, click the **Select** button at the bottom of the screen. -7. In the **Add Assignment** dialog, click the **Assign** button. +7. In the **Add Assignment** dialog box, select the **Assign** button. -### Create Signagelive test user +### Create a Signagelive test user -In this section, you create a user called Britta Simon in Signagelive. Work with [Signagelive support team](mailto:support@signagelive.com) to add the users in the Signagelive platform. Users must be created and activated before you use single sign-on. +In this section, you create a user called Britta Simon in Signagelive. 
Work with the [Signagelive support team](mailto:support@signagelive.com) to add the users in the Signagelive platform. You must create and activate users before you use single sign-on. ### Test single sign-on -In this section, you test your Azure AD single sign-on configuration using the Access Panel. +In this section, you test your Azure AD single sign-on configuration by using the MyApps portal. -When you click the Signagelive tile in the Access Panel, you should be automatically signed in to the Signagelive for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction). +When you select the **Signagelive** tile in the MyApps portal, you should be automatically signed in. For more information about the MyApps portal, see [What is the MyApps portal?](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction). 
-## Additional Resources +## Additional resources -- [List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory](https://docs.microsoft.com/azure/active-directory/active-directory-saas-tutorial-list) +- [List of tutorials on how to integrate SaaS Apps with Azure Active Directory](https://docs.microsoft.com/azure/active-directory/active-directory-saas-tutorial-list) - [What is application access and single sign-on with Azure Active Directory?](https://docs.microsoft.com/azure/active-directory/active-directory-appssoaccess-whatis) diff --git a/articles/active-directory/saas-apps/trello-tutorial.md b/articles/active-directory/saas-apps/trello-tutorial.md index 1fa1e577794ad..1851578c18d78 100644 --- a/articles/active-directory/saas-apps/trello-tutorial.md +++ b/articles/active-directory/saas-apps/trello-tutorial.md @@ -24,109 +24,112 @@ In this tutorial, you learn how to integrate Trello with Azure Active Directory Integrating Trello with Azure AD provides you with the following benefits: * You can control in Azure AD who has access to Trello. -* You can enable your users to be automatically signed-in to Trello (Single Sign-On) with their Azure AD accounts. +* You can enable your users to be automatically signed in to Trello (single sign-on) with their Azure AD accounts. -* You can manage your accounts in one central location - the Azure portal. +* You can manage your accounts in one central location: the Azure portal. -If you want to know more details about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](https://docs.microsoft.com/azure/active-directory/active-directory-appssoaccess-whatis). +For more information about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](https://docs.microsoft.com/azure/active-directory/active-directory-appssoaccess-whatis).
If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin. ## Prerequisites To configure Azure AD integration with Trello, you need the following items: -* An Azure AD subscription. If you don't have an Azure AD environment, you can get one-month trial [here](https://azure.microsoft.com/pricing/free-trial/) -* Trello single sign-on enabled subscription +* An Azure AD subscription. If you don't have an Azure AD environment, you can get a [one-month trial](https://azure.microsoft.com/pricing/free-trial/). +* A Trello single-sign-on-enabled subscription. ## Scenario description In this tutorial, you configure and test Azure AD single sign-on in a test environment. -* Trello supports **SP and IDP** initiated SSO +* Trello supports SP- and IDP-initiated SSO -* Trello supports **Just In Time** user provisioning +* Trello supports Just In Time user provisioning -## Adding Trello from the gallery +## Add Trello from the gallery -To configure the integration of Trello into Azure AD, you need to add Trello from the gallery to your list of managed SaaS apps. +To configure the integration of Trello into Azure AD, first add Trello from the gallery to your list of managed SaaS apps. -**To add Trello from the gallery, perform the following steps:** +To add Trello from the gallery, take the following steps: -1. In the **[Azure portal](https://portal.azure.com)**, on the left navigation panel, click **Azure Active Directory** icon. +1. In the [Azure portal](https://portal.azure.com), in the left pane, select the **Azure Active Directory** icon. ![The Azure Active Directory button](common/select-azuread.png) -2. Navigate to **Enterprise Applications** and then select the **All Applications** option. +2. Select **Enterprise Applications**, and then select **All Applications**. ![The Enterprise applications blade](common/enterprise-applications.png) -3. 
To add new application, click **New application** button on the top of dialog. +3. To add a new application, select the **New application** button at the top of the dialog box. ![The New application button](common/add-new-app.png) -4. In the search box, type **Trello**, select **Trello** from result panel then click **Add** button to add the application. +4. In the search box, enter **Trello**, and then select **Trello** from the results pane. + +5. Select the **Add** button to add the application. ![Trello in the results list](common/search-new-app.png) ## Configure and test Azure AD single sign-on -In this section, you configure and test Azure AD single sign-on with [Application name] based on a test user called **Britta Simon**. -For single sign-on to work, a link relationship between an Azure AD user and the related user in [Application name] needs to be established. +In this section, you configure and test Azure AD single sign-on with Trello based on a test user called **Britta Simon**. + +For single sign-on to work, you need to establish a link between an Azure AD user and the related user in Trello. -To configure and test Azure AD single sign-on with [Application name], you need to complete the following building blocks: +To configure and test Azure AD single sign-on with Trello, you need to complete the following building blocks: -1. **[Configure Azure AD Single Sign-On](#configure-azure-ad-single-sign-on)** - to enable your users to use this feature. -2. **[Configure Trello Single Sign-On](#configure-trello-single-sign-on)** - to configure the Single Sign-On settings on application side. -3. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with Britta Simon. -4. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable Britta Simon to use Azure AD single sign-on. -5. 
**[Create Trello test user](#create-trello-test-user)** - to have a counterpart of Britta Simon in Trello that is linked to the Azure AD representation of user. -6. **[Test single sign-on](#test-single-sign-on)** - to verify whether the configuration works. +1. [Configure Azure AD single sign-on](#configure-azure-ad-single-sign-on) to enable your users to use this feature. +2. [Configure Trello single sign-on](#configure-trello-single-sign-on) to configure the single sign-on settings on the application side. +3. [Create an Azure AD test user](#create-an-azure-ad-test-user) to test Azure AD single sign-on with Britta Simon. +4. [Assign the Azure AD test user](#assign-the-azure-ad-test-user) to enable Britta Simon to use Azure AD single sign-on. +5. [Create a Trello test user](#create-a-trello-test-user) to have a counterpart of Britta Simon in Trello that is linked to the Azure AD representation of the user. +6. [Test single sign-on](#test-single-sign-on) to verify that the configuration works. ### Configure Azure AD single sign-on In this section, you enable Azure AD single sign-on in the Azure portal. > [!NOTE] -> You should get the **\** slug from Trello. If you don't have the slug value, contact [Trello support team](mailto:support@trello.com) to get the slug for you enterprise. +> You should get the **\** slug from Trello. If you don't have the slug value, contact the [Trello support team](mailto:support@trello.com) to get the slug for your enterprise. -To configure Azure AD single sign-on with [Application name], perform the following steps: +To configure Azure AD single sign-on with Trello, take the following steps: 1. In the [Azure portal](https://portal.azure.com/), on the **Trello** application integration page, select **Single sign-on**. ![Configure single sign-on link](common/select-sso.png) -2. On the **Select a Single sign-on method** dialog, select **SAML/WS-Fed** mode to enable single sign-on. +2. 
In the **Select a Single sign-on method** dialog box, select **SAML** to enable single sign-on. ![Single sign-on select mode](common/select-saml-option.png) -3. On the **Set up Single Sign-On with SAML** page, click **Edit** icon to open **Basic SAML Configuration** dialog. +3. On the **Set up Single Sign-On with SAML** page, select the **Edit** icon to open the **Basic SAML Configuration** dialog box. ![Edit Basic SAML Configuration](common/edit-urls.png) -4. On the **Basic SAML Configuration** section, If you wish to configure the application in **IDP** initiated mode, perform the following steps: +4. In the **Basic SAML Configuration** section, if you want to configure the application in IDP-initiated mode, take the following steps: - ![Trello Domain and URLs single sign-on information](common/idp-intiated.png) + ![Trello domain and URLs single sign-on information](common/idp-intiated.png) - a. In the **Identifier** text box, type a URL using the following pattern: + a. In the **Identifier** box, enter a URL by using the following pattern: `https://trello.com/auth/saml/metadata` - b. In the **Reply URL** text box, type a URL using the following pattern: + b. In the **Reply URL** box, enter a URL by using the following pattern: `https://trello.com/auth/saml/consume/` -5. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode: +5. Select **Set additional URLs**, and then take the following step if you want to configure the application in SP-initiated mode: - ![Trello Domain and URLs single sign-on information](common/metadata-upload-additional-signon.png) + ![Trello domain and URLs single sign-on information](common/metadata-upload-additional-signon.png) - In the **Sign-on URL** text box, type a URL using the following pattern: + In the **Sign-on URL** box, enter a URL by using the following pattern: `https://trello.com/auth/saml/login/` > [!NOTE] - > These values are not real.
Update these values with the actual Identifier, Reply URL and Sign-on URL. Contact [Trello Client support team](mailto:support@trello.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal. + > These values are not real. Update these values with the actual identifier, reply URL, and sign-on URL. Contact the [Trello Client support team](mailto:support@trello.com) to get these values. You can also refer to the patterns in the **Basic SAML Configuration** section in the Azure portal. -6. Trello application expects the SAML assertions in a specific format. Configure the following claims for this application. You can manage the values of these attributes from the **User Attributes** section on application integration page. On the **Set up Single Sign-On with SAML** page, click **Edit** button to open **User Attributes** dialog. +6. The Trello application expects the SAML assertions in a specific format. Configure the following claims for this application. You can manage the values of these attributes from the **User Attributes** section on the application integration page. On the **Set up Single Sign-On with SAML** page, select the **Edit** button to open the **User Attributes** dialog box. - ![image](common/edit-attribute.png) + ![User Attributes dialog box](common/edit-attribute.png) -7. In the **User Claims** section on the **User Attributes** dialog, configure SAML token attribute as shown in the image above and perform the following steps: +7. In the **User Claims** section in the **User Attributes** dialog box, configure the SAML token attribute as shown in the previous image. Then take the following steps: | Name | Source Attribute| | --- | --- | @@ -134,41 +137,41 @@ To configure Azure AD single sign-on with [Application name], perform the follow | User.FirstName | user.givenname | | User.LastName | user.surname | - a. Click **Add new claim** to open the **Manage user claims** dialog. 
+ a. Select **Add new claim** to open the **Manage user claims** dialog box. - ![image](common/new-save-attribute.png) + ![User claims dialog box](common/new-save-attribute.png) - ![image](common/new-attribute-details.png) + ![Manage user claims](common/new-attribute-details.png) - b. In the **Name** textbox, type the attribute name shown for that row. + b. In the **Name** box, enter the attribute name that's shown for that row. - c. Leave the **Namespace** blank. + c. Leave **Namespace** blank. - d. Select Source as **Attribute**. + d. For **Source**, select **Attribute**. - e. From the **Source attribute** list, type the attribute value shown for that row. + e. In the **Source attribute** list, enter the attribute value that's shown for that row. - f. Click **Ok** + f. Select **Ok**. - g. Click **Save**. + g. Select **Save**. -8. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, click **Download** to download the **Certificate (Base64)** from the given options as per your requirement and save it on your computer. +8. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, select **Download** to download the **Certificate (Base64)** from the given options as per your requirements. Then save it on your computer. ![The Certificate download link](common/certificatebase64.png) -9. On the **Set up Trello** section, copy the appropriate URL(s) as per your requirement. +9. In the **Set up Trello** section, copy the appropriate URL(s) according to your requirements. ![Copy configuration URLs](common/copy-configuration-urls.png) a. Login URL - b. Azure Ad Identifier + b. Azure AD identifier c. Logout URL -### Configure Trello Single Sign-On +### Configure Trello single sign-on -To configure single sign-on on **Trello** side, you need to send the downloaded **Certificate (Base64)** and appropriate copied URLs from Azure portal to [Trello support team](mailto:support@trello.com).
They set this setting to have the SAML SSO connection set properly on both sides. +To configure single sign-on on the Trello side, first send the downloaded **Certificate (Base64)** and copied URLs from the Azure portal to the [Trello support team](mailto:support@trello.com). They ensure that the SAML SSO connection is set properly on both sides. ### Create an Azure AD test user @@ -180,63 +183,62 @@ The objective of this section is to create a test user in the Azure portal calle 2. Select **New user** at the top of the screen. - ![New user Button](common/new-user.png) + ![New user button](common/new-user.png) -3. In the User properties, perform the following steps. +3. In the **User** dialog box, take the following steps. ![The User dialog box](common/user-properties.png) - a. In the **Name** field enter **BrittaSimon**. + a. In the **Name** field, enter **BrittaSimon**. - b. In the **User name** field type **brittasimon\@yourcompanydomain.extension** - For example, BrittaSimon@contoso.com + b. In the **User name** field, enter "brittasimon@yourcompanydomain.extension". For example, in this case, you might enter "BrittaSimon@contoso.com". - c. Select **Show password** check box, and then write down the value that's displayed in the Password box. + c. Select the **Show password** check box, and then note the value that's displayed in the **Password** box. - d. Click **Create**. + d. Select **Create**. ### Assign the Azure AD test user In this section, you enable Britta Simon to use Azure single sign-on by granting access to Trello. -1. In the Azure portal, select **Enterprise Applications**, select **All applications**, then select **Trello**. +1. In the Azure portal, select **Enterprise Applications**, select **All applications**, and then select **Trello**. - ![Enterprise applications blade](common/enterprise-applications.png) + ![Enterprise Applications blade](common/enterprise-applications.png) 2. In the applications list, select **Trello**. 
- ![The Trello link in the Applications list](common/all-applications.png) + ![The Trello link in the applications list](common/all-applications.png) 3. In the menu on the left, select **Users and groups**. ![The "Users and groups" link](common/users-groups-blade.png) -4. Click the **Add user** button, then select **Users and groups** in the **Add Assignment** dialog. +4. Select the **Add user** button. Then, in the **Add Assignment** dialog box, select **Users and groups**. ![The Add Assignment pane](common/add-assign-user.png) -5. In the **Users and groups** dialog select **Britta Simon** in the Users list, then click the **Select** button at the bottom of the screen. +5. In the **Users and groups** dialog box, select **Britta Simon** in the users list. Then click the **Select** button at the bottom of the screen. -6. If you are expecting any role value in the SAML assertion then in the **Select Role** dialog select the appropriate role for the user from the list, then click the **Select** button at the bottom of the screen. +6. If you are expecting any role value in the SAML assertion, then, in the **Select Role** dialog box, select the appropriate role for the user from the list. Then click the **Select** button at the bottom of the screen. -7. In the **Add Assignment** dialog click the **Assign** button. +7. In the **Add Assignment** dialog box, select the **Assign** button. -### Create Trello test user +### Create a Trello test user -In this section, a user called Britta Simon is created in Trello. Trello supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in Trello, a new one is created after authentication. +In this section, you create a user called Britta Simon in Trello. Trello supports Just in Time user provisioning, which is enabled by default. There is no action item for you in this section. 
If a user doesn't already exist in Trello, a new one is created after authentication. -> [!Note] -> If you need to create a user manually, Contact [Trello support team](mailto:support@trello.com). +> [!NOTE] +> If you need to create a user manually, contact the [Trello support team](mailto:support@trello.com). ### Test single sign-on -In this section, you test your Azure AD single sign-on configuration using the Access Panel. +In this section, you test your Azure AD single sign-on configuration by using the MyApps portal. -When you click the Trello tile in the Access Panel, you should be automatically signed in to the Trello for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction). +When you select the Trello tile in the MyApps portal, you should be automatically signed in to Trello. For more information about the MyApps portal, see [What is the MyApps portal?](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
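As a quick sanity check on the Trello SAML values described in this tutorial, the URL patterns and the visible claim mappings can be sketched in a few lines of Python. This is only an illustration: the enterprise name `contoso` is a made-up placeholder, and appending the enterprise name to the consume/login URLs is an assumption based on the trailing-slash patterns shown above — confirm the real values with the Trello support team.

```python
# Sketch of the Trello SAML URL patterns from the Basic SAML Configuration
# steps. "contoso" below is a hypothetical enterprise name, and appending it
# to the consume/login URLs is an assumption; Trello support provides the
# real values.
def trello_saml_urls(enterprise: str) -> dict:
    return {
        "identifier": "https://trello.com/auth/saml/metadata",
        "reply_url": f"https://trello.com/auth/saml/consume/{enterprise}",
        "sign_on_url": f"https://trello.com/auth/saml/login/{enterprise}",
    }

# Claim-to-source-attribute rows visible in the User Attributes table above.
TRELLO_CLAIMS = {
    "User.FirstName": "user.givenname",
    "User.LastName": "user.surname",
}

urls = trello_saml_urls("contoso")
print(urls["sign_on_url"])
```

A check like this can catch a copy-paste mistake (for example, a missing enterprise segment) before the values are saved in the Azure portal.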
-## Additional Resources +## Additional resources -- [List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory](https://docs.microsoft.com/azure/active-directory/active-directory-saas-tutorial-list) +- [List of tutorials on how to integrate SaaS Apps with Azure Active Directory](https://docs.microsoft.com/azure/active-directory/active-directory-saas-tutorial-list) - [What is application access and single sign-on with Azure Active Directory?](https://docs.microsoft.com/azure/active-directory/active-directory-appssoaccess-whatis) diff --git a/articles/active-directory/saas-apps/zscaler-three-provisioning-tutorial.md b/articles/active-directory/saas-apps/zscaler-three-provisioning-tutorial.md index 796402ad2e882..1062d4fe757bf 100644 --- a/articles/active-directory/saas-apps/zscaler-three-provisioning-tutorial.md +++ b/articles/active-directory/saas-apps/zscaler-three-provisioning-tutorial.md @@ -1,6 +1,6 @@ --- title: 'Tutorial: Configure Zscaler Three for automatic user provisioning with Azure Active Directory | Microsoft Docs' -description: Learn how to configure Azure Active Directory to automatically provision and de-provision user accounts to Zscaler Three. +description: In this tutorial, you'll learn how to configure Azure Active Directory to automatically provision and deprovision user accounts to Zscaler Three. services: active-directory documentationcenter: '' author: zchia @@ -13,151 +13,145 @@ ms.subservice: saas-app-tutorial ms.workload: identity ms.tgt_pltfrm: na ms.devlang: na -ms.topic: article +ms.topic: tutorial ms.date: 03/27/2019 ms.author: v-ant-msft --- # Tutorial: Configure Zscaler Three for automatic user provisioning -The objective of this tutorial is to demonstrate the steps to be performed in Zscaler Three and Azure Active Directory (Azure AD) to configure Azure AD to automatically provision and de-provision users and/or groups to Zscaler Three. 
+In this tutorial, you'll learn how to configure Azure Active Directory (Azure AD) to automatically provision and deprovision users and/or groups to Zscaler Three. > [!NOTE] -> This tutorial describes a connector built on top of the Azure AD User Provisioning Service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../active-directory-saas-app-provisioning.md). +> This tutorial describes a connector that's built on the Azure AD user provisioning service. For important details on what this service does and how it works, and answers to frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../active-directory-saas-app-provisioning.md). > -> This connector is currently in Public Preview. For more information on the general Microsoft Azure terms of use for Preview features, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). +> This connector is currently in Public Preview. For more information on the general Azure terms of use for Preview features, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). ## Prerequisites -The scenario outlined in this tutorial assumes that you already have the following: +To complete the steps outlined in this tutorial, you need the following: -* An Azure AD tenant -* A Zscaler Three tenant -* A user account in Zscaler Three with Admin permissions +* An Azure AD tenant. +* A Zscaler Three tenant. +* A user account in Zscaler Three with admin permissions. > [!NOTE] -> The Azure AD provisioning integration relies on the Zscaler Three SCIM API, which is available to Zscaler Three developers for accounts with the Enterprise package. 
+> The Azure AD provisioning integration relies on the Zscaler Three SCIM API, which is available for Enterprise accounts. ## Adding Zscaler Three from the gallery -Before configuring Zscaler Three for automatic user provisioning with Azure AD, you need to add Zscaler Three from the Azure AD application gallery to your list of managed SaaS applications. +Before you configure Zscaler Three for automatic user provisioning with Azure AD, you need to add Zscaler Three from the Azure AD application gallery to your list of managed SaaS applications. -**To add Zscaler Three from the Azure AD application gallery, perform the following steps:** +In the [Azure portal](https://portal.azure.com), in the left pane, select **Azure Active Directory**: -1. In the **[Azure portal](https://portal.azure.com)**, on the left navigation panel, click **Azure Active Directory** icon. +![Select Azure Active Directory](common/select-azuread.png) - ![The Azure Active Directory button](common/select-azuread.png) +Go to **Enterprise applications** and then select **All applications**: -2. Navigate to **Enterprise Applications** and then select the **All Applications** option. +![Enterprise applications](common/enterprise-applications.png) - ![The Enterprise applications blade](common/enterprise-applications.png) +To add an application, select **New application** at the top of the window: -3. To add new application, click **New application** button on the top of dialog. +![Select New application](common/add-new-app.png) - ![The New application button](common/add-new-app.png) +In the search box, enter **Zscaler Three**. Select **Zscaler Three** in the results and then select **Add**. -4. In the search box, type **Zscaler Three**, select **Zscaler Three** from result panel then click **Add** button to add the application.
+![Results list](common/search-new-app.png) - ![Zscaler Three in the results list](common/search-new-app.png) +## Assign users to Zscaler Three -## Assigning users to Zscaler Three +Azure AD users need to be assigned access to selected apps before they can use them. In the context of automatic user provisioning, only the users or groups that are assigned to an application in Azure AD are synchronized. -Azure Active Directory uses a concept called "assignments" to determine which users should receive access to selected apps. In the context of automatic user provisioning, only the users and/or groups that have been "assigned" to an application in Azure AD are synchronized. - -Before configuring and enabling automatic user provisioning, you should decide which users and/or groups in Azure AD need access to Zscaler Three. Once decided, you can assign these users and/or groups to Zscaler Three by following the instructions here: - -* [Assign a user or group to an enterprise app](https://docs.microsoft.com/azure/active-directory/active-directory-coreapps-assign-user-azure-portal) +Before you configure and enable automatic user provisioning, you should decide which users and/or groups in Azure AD need access to Zscaler Three. After you decide that, you can assign these users and groups to Zscaler Three by following the instructions in [Assign a user or group to an enterprise app](https://docs.microsoft.com/azure/active-directory/active-directory-coreapps-assign-user-azure-portal). ### Important tips for assigning users to Zscaler Three -* It is recommended that a single Azure AD user is assigned to Zscaler Three to test the automatic user provisioning configuration. Additional users and/or groups may be assigned later. +* We recommend that you first assign a single Azure AD user to Zscaler Three to test the automatic user provisioning configuration. You can assign more users and groups later.
-* When assigning a user to Zscaler Three, you must select any valid application-specific role (if available) in the assignment dialog. Users with the **Default Access** role are excluded from provisioning. +* When you assign a user to Zscaler Three, you need to select any valid application-specific role (if available) in the assignment dialog box. Users with the **Default Access** role are excluded from provisioning. -## Configuring automatic user provisioning to Zscaler Three +## Set up automatic user provisioning -This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users and/or groups in Zscaler Three based on user and/or group assignments in Azure AD. +This section guides you through the steps for configuring the Azure AD provisioning service to create, update, and disable users and groups in Zscaler Three based on user and group assignments in Azure AD. > [!TIP] -> You may also choose to enable SAML-based single sign-on for Zscaler Three, following the instructions provided in the [Zscaler Three single sign-on tutorial](zscaler-three-tutorial.md). Single sign-on can be configured independently of automatic user provisioning, though these two features compliment each other. - -### To configure automatic user provisioning for Zscaler Three in Azure AD: +> You might also want to enable SAML-based single sign-on for Zscaler Three. If you do, follow the instructions in the [Zscaler Three single sign-on tutorial](zscaler-three-tutorial.md). Single sign-on can be configured independently of automatic user provisioning, but the two features complement each other. -1. Sign in to the [Azure portal](https://portal.azure.com) and select **Enterprise Applications**, select **All applications**, then select **Zscaler Three**. +1. 
Sign in to the [Azure portal](https://portal.azure.com) and select **Enterprise applications** > **All applications** > **Zscaler Three**: - ![Enterprise applications blade](common/enterprise-applications.png) + ![Enterprise applications](common/enterprise-applications.png) -2. In the applications list, select **Zscaler Three**. +2. In the applications list, select **Zscaler Three**: - ![The Zscaler Three link in the Applications list](common/all-applications.png) + ![Applications list](common/all-applications.png) -3. Select the **Provisioning** tab. +3. Select the **Provisioning** tab: ![Zscaler Three Provisioning](./media/zscaler-three-provisioning-tutorial/provisioning-tab.png) -4. Set the **Provisioning Mode** to **Automatic**. +4. Set the **Provisioning Mode** to **Automatic**: - ![Zscaler Three Provisioning](./media/zscaler-three-provisioning-tutorial/provisioning-credentials.png) + ![Set the Provisioning Mode](./media/zscaler-three-provisioning-tutorial/provisioning-credentials.png) -5. Under the **Admin Credentials** section, input the **Tenant URL** and **Secret Token** of your Zscaler Three account as described in Step 6. +5. In the **Admin Credentials** section, enter the **Tenant URL** and **Secret Token** of your Zscaler Three account, as described in the next step. -6. To obtain the **Tenant URL** and **Secret Token**, navigate to **Administration > Authentication Settings** in the Zscaler Three portal user interface and click on **SAML** under **Authentication Type**. +6. To get the **Tenant URL** and **Secret Token**, go to **Administration** > **Authentication Settings** in the Zscaler Three portal and select **SAML** under **Authentication Type**: - ![Zscaler Three Provisioning](./media/zscaler-three-provisioning-tutorial/secret-token-1.png) + ![Zscaler Three Authentication Settings](./media/zscaler-three-provisioning-tutorial/secret-token-1.png) - Click on **Configure SAML** to open **Configuration SAML** options. 
+ Select **Configure SAML** to open the **Configure SAML** window: - ![Zscaler Three Provisioning](./media/zscaler-three-provisioning-tutorial/secret-token-2.png) + ![Configure SAML window](./media/zscaler-three-provisioning-tutorial/secret-token-2.png) - Select **Enable SCIM-Based Provisioning** to retrieve **Base URL** and **Bearer Token**, then save the settings. Copy the **Base URL** to **Tenant URL** and **Bearer Token** to **Secret Token** in the Azure portal. + Select **Enable SCIM-Based Provisioning** and copy the **Base URL** and **Bearer Token**, and then save the settings. In the Azure portal, paste the **Base URL** into the **Tenant URL** box and the **Bearer Token** into the **Secret Token** box. -7. Upon populating the fields shown in Step 5, click **Test Connection** to ensure Azure AD can connect to Zscaler Three. If the connection fails, ensure your Zscaler Three account has Admin permissions and try again. +7. After you enter the values in the **Tenant URL** and **Secret Token** boxes, select **Test Connection** to make sure Azure AD can connect to Zscaler Three. If the connection fails, make sure your Zscaler Three account has admin permissions and try again. - ![Zscaler Three Provisioning](./media/zscaler-three-provisioning-tutorial/test-connection.png) + ![Test the connection](./media/zscaler-three-provisioning-tutorial/test-connection.png) -8. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications and check the checkbox **Send an email notification when a failure occurs**. +8. In the **Notification Email** box, enter the email address of a person or group that should receive the provisioning error notifications. Select **Send an email notification when a failure occurs**: - ![Zscaler Three Provisioning](./media/zscaler-three-provisioning-tutorial/notification.png) + ![Set up notification email](./media/zscaler-three-provisioning-tutorial/notification.png) -9. 
Click **Save**. +9. Select **Save**. -10. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to Zscaler Three**. +10. In the **Mappings** section, select **Synchronize Azure Active Directory Users to ZscalerThree**: - ![Zscaler Three Provisioning](./media/zscaler-three-provisioning-tutorial/user-mappings.png) + ![Synchronize Azure AD users](./media/zscaler-three-provisioning-tutorial/user-mappings.png) -11. Review the user attributes that are synchronized from Azure AD to Zscaler Three in the **Attribute Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Zscaler Three for update operations. Select the **Save** button to commit any changes. +11. Review the user attributes that are synchronized from Azure AD to Zscaler Three in the **Attribute Mappings** section. The attributes selected as **Matching** properties are used to match the user accounts in Zscaler Three for update operations. Select **Save** to commit any changes. - ![Zscaler Three Provisioning](./media/zscaler-three-provisioning-tutorial/user-attribute-mappings.png) + ![Attribute Mappings](./media/zscaler-three-provisioning-tutorial/user-attribute-mappings.png) -12. Under the **Mappings** section, select **Synchronize Azure Active Directory Groups to Zscaler Three**. +12. In the **Mappings** section, select **Synchronize Azure Active Directory Groups to ZscalerThree**: - ![Zscaler Three Provisioning](./media/zscaler-three-provisioning-tutorial/group-mappings.png) + ![Synchronize Azure AD groups](./media/zscaler-three-provisioning-tutorial/group-mappings.png) -13. Review the group attributes that are synchronized from Azure AD to Zscaler Three in the **Attribute Mapping** section. The attributes selected as **Matching** properties are used to match the groups in Zscaler Three for update operations. Select the **Save** button to commit any changes. +13. 
Review the group attributes that are synchronized from Azure AD to Zscaler Three in the **Attribute Mappings** section. The attributes selected as **Matching** properties are used to match the groups in Zscaler Three for update operations. Select **Save** to commit any changes. - ![Zscaler Three Provisioning](./media/zscaler-three-provisioning-tutorial/group-attribute-mappings.png) + ![Attribute Mappings](./media/zscaler-three-provisioning-tutorial/group-attribute-mappings.png) -14. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](./../active-directory-saas-scoping-filters.md). +14. To configure scoping filters, refer to the instructions in the [Scoping filter tutorial](./../active-directory-saas-scoping-filters.md). -15. To enable the Azure AD provisioning service for Zscaler Three, change the **Provisioning Status** to **On** in the **Settings** section. +15. To enable the Azure AD provisioning service for Zscaler Three, change the **Provisioning Status** to **On** in the **Settings** section: - ![Zscaler Three Provisioning](./media/zscaler-three-provisioning-tutorial/provisioning-status.png) + ![Provisioning Status](./media/zscaler-three-provisioning-tutorial/provisioning-status.png) -16. Define the users and/or groups that you would like to provision to Zscaler Three by choosing the desired values in **Scope** in the **Settings** section. +16. Define the users and/or groups that you want to provision to Zscaler Three by choosing the values you want under **Scope** in the **Settings** section: - ![Zscaler Three Provisioning](./media/zscaler-three-provisioning-tutorial/scoping.png) + ![Scope values](./media/zscaler-three-provisioning-tutorial/scoping.png) -17. When you are ready to provision, click **Save**. +17. 
When you're ready to provision, select **Save**: - ![Zscaler Three Provisioning](./media/zscaler-three-provisioning-tutorial/save-provisioning.png) + ![Select Save](./media/zscaler-three-provisioning-tutorial/save-provisioning.png) -This operation starts the initial synchronization of all users and/or groups defined in **Scope** in the **Settings** section. The initial sync takes longer to perform than subsequent syncs, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running. You can use the **Synchronization Details** section to monitor progress and follow links to provisioning activity report, which describes all actions performed by the Azure AD provisioning service on Zscaler Three. +This operation starts the initial synchronization of all users and groups defined under **Scope** in the **Settings** section. The initial sync takes longer than subsequent syncs, which occur about every 40 minutes, as long as the Azure AD provisioning service is running. You can monitor progress in the **Synchronization Details** section. You can also follow links to a provisioning activity report, which describes all actions performed by the Azure AD provisioning service on Zscaler Three. -For more information on how to read the Azure AD provisioning logs, see [Reporting on automatic user account provisioning](../active-directory-saas-provisioning-reporting.md). +For information on how to read the Azure AD provisioning logs, see [Reporting on automatic user account provisioning](../active-directory-saas-provisioning-reporting.md). 
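After the initial cycle finishes, one way to spot-check the result is to query the same SCIM endpoint that provisioning writes to. The sketch below is an illustration only, using the Python standard library: the tenant URL, bearer token, and user name are placeholders, and the exact endpoint path depends on the Base URL your Zscaler Three tenant exposes. It builds a standard SCIM 2.0 filter query without sending it.

```python
from urllib.parse import quote, urljoin
from urllib.request import Request

# Placeholders: use the Base URL and Bearer Token from the Zscaler Three
# portal (the same values entered as Tenant URL and Secret Token above).
tenant_url = "https://scim.example.zscaler.net/v2/"
secret_token = "YOUR-BEARER-TOKEN"

# Standard SCIM 2.0 filter: look up one user by userName to confirm that
# the initial sync actually created the account.
scim_filter = 'userName eq "alice@contoso.com"'
url = urljoin(tenant_url, "Users") + "?filter=" + quote(scim_filter)

request = Request(url, headers={"Authorization": f"Bearer {secret_token}"})
print(request.full_url)
# A real check would send this request and inspect the totalResults field
# of the SCIM ListResponse that comes back.
```
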
## Additional resources -* [Managing user account provisioning for Enterprise Apps](../manage-apps/configure-automatic-user-provisioning-portal.md) +* [Managing user account provisioning for enterprise apps](../manage-apps/configure-automatic-user-provisioning-portal.md) * [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md) ## Next steps diff --git a/articles/active-directory/saas-apps/zscaler-two-provisioning-tutorial.md b/articles/active-directory/saas-apps/zscaler-two-provisioning-tutorial.md index 86b5f6d741320..c0042a8b64ee6 100644 --- a/articles/active-directory/saas-apps/zscaler-two-provisioning-tutorial.md +++ b/articles/active-directory/saas-apps/zscaler-two-provisioning-tutorial.md @@ -1,6 +1,6 @@ --- title: 'Tutorial: Configure Zscaler Two for automatic user provisioning with Azure Active Directory | Microsoft Docs' -description: Learn how to configure Azure Active Directory to automatically provision and de-provision user accounts to Zscaler Two Two. +description: In this tutorial, you'll learn how to configure Azure Active Directory to automatically provision and deprovision user accounts to Zscaler Two. services: active-directory documentationcenter: '' author: zchia @@ -13,152 +13,145 @@ ms.subservice: saas-app-tutorial ms.workload: identity ms.tgt_pltfrm: na ms.devlang: na -ms.topic: article +ms.topic: tutorial ms.date: 03/27/2019 ms.author: v-ant-msft --- # Tutorial: Configure Zscaler Two for automatic user provisioning -The objective of this tutorial is to demonstrate the steps to be performed in Zscaler Two and Azure Active Directory (Azure AD) to configure Azure AD to automatically provision and de-provision users and/or groups to Zscaler Two. +In this tutorial, you'll learn how to configure Azure Active Directory (Azure AD) to automatically provision and deprovision users and/or groups to Zscaler Two. 
> [!NOTE] -> This tutorial describes a connector built on top of the Azure AD User Provisioning Service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../active-directory-saas-app-provisioning.md). +> This tutorial describes a connector that's built on the Azure AD user provisioning service. For important details on what this service does and how it works, and answers to frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../active-directory-saas-app-provisioning.md). > - -> This connector is currently in Public Preview. For more information on the general Microsoft Azure terms of use for Preview features, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). +> This connector is currently in Public Preview. For more information on the general Azure terms of use for Preview features, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). ## Prerequisites -The scenario outlined in this tutorial assumes that you already have the following: +To complete the steps outlined in this tutorial, you need the following: -* An Azure AD tenant -* A Zscaler Two tenant -* A user account in Zscaler Two with Admin permissions +* An Azure AD tenant. +* A Zscaler Two tenant. +* A user account in Zscaler Two with admin permissions. > [!NOTE] -> The Azure AD provisioning integration relies on the Zscaler Two SCIM API, which is available to Zscaler Two developers for accounts with the Enterprise package. - -## Adding Zscaler Two from the gallery +> The Azure AD provisioning integration relies on the Zscaler Two SCIM API, which is available for Enterprise accounts. 
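To make the SCIM dependency in the note above concrete, the sketch below shows the general shape of a SCIM 2.0 User resource that a provisioning service sends when it creates an account. This is a hedged illustration, not Zscaler's documented contract: the endpoint host, token, and attribute set are placeholders, and the attributes your tenant accepts may differ.

```python
import json

# Illustrative values only; the real Base URL and Bearer Token come from the
# Zscaler portal (Administration > Authentication Settings > Configure SAML).
tenant_url = "https://scim.example.zscaler.net/v2"  # placeholder Base URL
secret_token = "YOUR-BEARER-TOKEN"                  # placeholder Bearer Token

# A minimal SCIM 2.0 User resource: the general shape of the object a
# provisioning service sends when a user comes into scope.
user = {
    "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"],
    "userName": "alice@contoso.com",
    "displayName": "Alice Example",
    "active": True,
}

headers = {
    "Authorization": f"Bearer {secret_token}",
    "Content-Type": "application/scim+json",
}

# The service would POST this body to {tenant_url}/Users.
body = json.dumps(user, indent=2)
print(body)
```
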
-Before configuring Zscaler Two for automatic user provisioning with Azure AD, you need to add Zscaler Two from the Azure AD application gallery to your list of managed SaaS applications. +## Add Zscaler Two from the gallery -**To add Zscaler Two from the Azure AD application gallery, perform the following steps:** +Before you configure Zscaler Two for automatic user provisioning with Azure AD, you need to add Zscaler Two from the Azure AD application gallery to your list of managed SaaS applications. -1. In the **[Azure portal](https://portal.azure.com)**, on the left navigation panel, click **Azure Active Directory** icon. +In the [Azure portal](https://portal.azure.com), in the left pane, select **Azure Active Directory**: - ![The Azure Active Directory button](common/select-azuread.png) +![Select Azure Active Directory](common/select-azuread.png) -2. Navigate to **Enterprise Applications** and then select the **All Applications** option. +Go to **Enterprise applications** and then select **All applications**: - ![The Enterprise applications blade](common/enterprise-applications.png) +![Enterprise applications](common/enterprise-applications.png) -3. To add new application, click **New application** button on the top of dialog. +To add an application, select **New application** at the top of the window: - ![The New application button](common/add-new-app.png) +![Select New application](common/add-new-app.png) -4. In the search box, type **Zscaler Two**, select **Zscaler Two** from result panel then click **Add** button to add the application. +In the search box, enter **Zscaler Two**. Select **Zscaler Two** in the results and then select **Add**. - ![Zscaler Two in the results list](common/search-new-app.png) +![Results list](common/search-new-app.png) -## Assigning users to Zscaler Two +## Assign users to Zscaler Two -Azure Active Directory uses a concept called "assignments" to determine which users should receive access to selected apps. 
In the context of automatic user provisioning, only the users and/or groups that have been "assigned" to an application in Azure AD are synchronized. +Azure AD users need to be assigned access to selected apps before they can use them. In the context of automatic user provisioning, only users or groups that are assigned to an application in Azure AD are synchronized. -Before configuring and enabling automatic user provisioning, you should decide which users and/or groups in Azure AD need access to Zscaler Two. Once decided, you can assign these users and/or groups to Zscaler Two by following the instructions here: - -* [Assign a user or group to an enterprise app](https://docs.microsoft.com/azure/active-directory/active-directory-coreapps-assign-user-azure-portal) +Before you configure and enable automatic user provisioning, you should decide which users and/or groups in Azure AD need access to Zscaler Two. After you decide that, you can assign these users and groups to Zscaler Two by following the instructions in [Assign a user or group to an enterprise app](https://docs.microsoft.com/azure/active-directory/active-directory-coreapps-assign-user-azure-portal). ### Important tips for assigning users to Zscaler Two -* It is recommended that a single Azure AD user is assigned to Zscaler Two to test the automatic user provisioning configuration. Additional users and/or groups may be assigned later. +* We recommend that you first assign a single Azure AD user to Zscaler Two to test the automatic user provisioning configuration. You can assign more users and groups later. -* When assigning a user to Zscaler Two, you must select any valid application-specific role (if available) in the assignment dialog. Users with the **Default Access** role are excluded from provisioning. +* When you assign a user to Zscaler Two, you need to select any valid application-specific role (if available) in the assignment dialog box. 
Users with the **Default Access** role are excluded from provisioning. -## Configuring automatic user provisioning to Zscaler Two +## Set up automatic user provisioning -This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users and/or groups in Zscaler Two based on user and/or group assignments in Azure AD. +This section guides you through the steps for configuring the Azure AD provisioning service to create, update, and disable users and groups in Zscaler Two based on user and group assignments in Azure AD. > [!TIP] -> You may also choose to enable SAML-based single sign-on for Zscaler Two, following the instructions provided in the [Zscaler Two single sign-on tutorial](zscaler-two-tutorial.md). Single sign-on can be configured independently of automatic user provisioning, though these two features compliment each other. - -### To configure automatic user provisioning for Zscaler Two in Azure AD: +> You might also want to enable SAML-based single sign-on for Zscaler Two. If you do, follow the instructions in the [Zscaler Two single sign-on tutorial](zscaler-two-tutorial.md). Single sign-on can be configured independently of automatic user provisioning, but the two features complement each other. -1. Sign in to the [Azure portal](https://portal.azure.com) and select **Enterprise Applications**, select **All applications**, then select **Zscaler Two**. +1. Sign in to the [Azure portal](https://portal.azure.com) and select **Enterprise applications** > **All applications** > **Zscaler Two**: - ![Enterprise applications blade](common/enterprise-applications.png) + ![Enterprise applications](common/enterprise-applications.png) -2. In the applications list, select **Zscaler Two**. +2. In the applications list, select **Zscaler Two**: - ![The Zscaler Two link in the Applications list](common/all-applications.png) + ![Applications list](common/all-applications.png) -3. Select the **Provisioning** tab. +3. 
Select the **Provisioning** tab: ![Zscaler Two Provisioning](./media/zscaler-two-provisioning-tutorial/provisioning-tab.png) -4. Set the **Provisioning Mode** to **Automatic**. +4. Set the **Provisioning Mode** to **Automatic**: - ![Zscaler Two Provisioning](./media/zscaler-two-provisioning-tutorial/provisioning-credentials.png) + ![Set the Provisioning Mode](./media/zscaler-two-provisioning-tutorial/provisioning-credentials.png) -5. Under the **Admin Credentials** section, input the **Tenant URL** and **Secret Token** of your Zscaler Two account as described in Step 6. +5. In the **Admin Credentials** section, enter the **Tenant URL** and **Secret Token** of your Zscaler Two account, as described in the next step. -6. To obtain the **Tenant URL** and **Secret Token**, navigate to **Administration > Authentication Settings** in the Zscaler Two portal user interface and click on **SAML** under **Authentication Type**. +6. To get the **Tenant URL** and **Secret Token**, go to **Administration** > **Authentication Settings** in the Zscaler Two portal and select **SAML** under **Authentication Type**: - ![Zscaler Two Provisioning](./media/zscaler-two-provisioning-tutorial/secret-token-1.png) + ![Zscaler Two Authentication Settings](./media/zscaler-two-provisioning-tutorial/secret-token-1.png) - Click on **Configure SAML** to open **Configuration SAML** options. + Select **Configure SAML** to open the **Configure SAML** window: - ![Zscaler Two Provisioning](./media/zscaler-two-provisioning-tutorial/secret-token-2.png) + ![Configure SAML window](./media/zscaler-two-provisioning-tutorial/secret-token-2.png) - Select **Enable SCIM-Based Provisioning** to retrieve **Base URL** and **Bearer Token**, then save the settings. Copy the **Base URL** to **Tenant URL** and **Bearer Token** to **Secret Token** in the Azure portal. + Select **Enable SCIM-Based Provisioning** and copy the **Base URL** and **Bearer Token**, and then save the settings. 
In the Azure portal, paste the **Base URL** into the **Tenant URL** box and the **Bearer Token** into the **Secret Token** box. -7. Upon populating the fields shown in Step 5, click **Test Connection** to ensure Azure AD can connect to Zscaler Two. If the connection fails, ensure your Zscaler Two account has Admin permissions and try again. +7. After you enter the values in the **Tenant URL** and **Secret Token** boxes, select **Test Connection** to make sure Azure AD can connect to Zscaler Two. If the connection fails, make sure your Zscaler Two account has admin permissions and try again. - ![Zscaler Two Provisioning](./media/zscaler-two-provisioning-tutorial/test-connection.png) + ![Test the connection](./media/zscaler-two-provisioning-tutorial/test-connection.png) -8. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications and check the checkbox **Send an email notification when a failure occurs**. +8. In the **Notification Email** box, enter the email address of a person or group that should receive the provisioning error notifications. Select **Send an email notification when a failure occurs**: - ![Zscaler Two Provisioning](./media/zscaler-two-provisioning-tutorial/notification.png) + ![Set up notification email](./media/zscaler-two-provisioning-tutorial/notification.png) -9. Click **Save**. +9. Select **Save**. -10. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to Zscaler Two**. +10. In the **Mappings** section, select **Synchronize Azure Active Directory Users to ZscalerTwo**: - ![Zscaler Two Provisioning](./media/zscaler-two-provisioning-tutorial/user-mappings.png) + ![Synchronize Azure AD users](./media/zscaler-two-provisioning-tutorial/user-mappings.png) -11. Review the user attributes that are synchronized from Azure AD to Zscaler Two in the **Attribute Mapping** section. 
The attributes selected as **Matching** properties are used to match the user accounts in Zscaler Two for update operations. Select the **Save** button to commit any changes. +11. Review the user attributes that are synchronized from Azure AD to Zscaler Two in the **Attribute Mappings** section. The attributes selected as **Matching** properties are used to match the user accounts in Zscaler Two for update operations. Select **Save** to commit any changes. - ![Zscaler Two Provisioning](./media/zscaler-two-provisioning-tutorial/user-attribute-mappings.png) + ![Attribute Mappings](./media/zscaler-two-provisioning-tutorial/user-attribute-mappings.png) -12. Under the **Mappings** section, select **Synchronize Azure Active Directory Groups to Zscaler Two**. +12. In the **Mappings** section, select **Synchronize Azure Active Directory Groups to ZscalerTwo**: - ![Zscaler Two Provisioning](./media/zscaler-two-provisioning-tutorial/group-mappings.png) + ![Synchronize Azure AD groups](./media/zscaler-two-provisioning-tutorial/group-mappings.png) -13. Review the group attributes that are synchronized from Azure AD to Zscaler Two in the **Attribute Mapping** section. The attributes selected as **Matching** properties are used to match the groups in Zscaler Two for update operations. Select the **Save** button to commit any changes. +13. Review the group attributes that are synchronized from Azure AD to Zscaler Two in the **Attribute Mappings** section. The attributes selected as **Matching** properties are used to match the groups in Zscaler Two for update operations. Select **Save** to commit any changes. - ![Zscaler Two Provisioning](./media/zscaler-two-provisioning-tutorial/group-attribute-mappings.png) + ![Attribute Mappings](./media/zscaler-two-provisioning-tutorial/group-attribute-mappings.png) -14. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](./../active-directory-saas-scoping-filters.md). +14. 
To configure scoping filters, refer to the instructions in the [Scoping filter tutorial](./../active-directory-saas-scoping-filters.md). -15. To enable the Azure AD provisioning service for Zscaler Two, change the **Provisioning Status** to **On** in the **Settings** section. +15. To enable the Azure AD provisioning service for Zscaler Two, change the **Provisioning Status** to **On** in the **Settings** section: - ![Zscaler Two Provisioning](./media/zscaler-two-provisioning-tutorial/provisioning-status.png) + ![Provisioning Status](./media/zscaler-two-provisioning-tutorial/provisioning-status.png) -16. Define the users and/or groups that you would like to provision to Zscaler Two by choosing the desired values in **Scope** in the **Settings** section. +16. Define the users and/or groups that you want to provision to Zscaler Two by choosing the values you want under **Scope** in the **Settings** section: - ![Zscaler Two Provisioning](./media/zscaler-two-provisioning-tutorial/scoping.png) + ![Scope values](./media/zscaler-two-provisioning-tutorial/scoping.png) -17. When you are ready to provision, click **Save**. +17. When you're ready to provision, select **Save**: - ![Zscaler Two Provisioning](./media/zscaler-two-provisioning-tutorial/save-provisioning.png) + ![Select Save](./media/zscaler-two-provisioning-tutorial/save-provisioning.png) -This operation starts the initial synchronization of all users and/or groups defined in **Scope** in the **Settings** section. The initial sync takes longer to perform than subsequent syncs, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running. You can use the **Synchronization Details** section to monitor progress and follow links to provisioning activity report, which describes all actions performed by the Azure AD provisioning service on Zscaler Two. +This operation starts the initial synchronization of all users and groups defined under **Scope** in the **Settings** section. 
The initial sync takes longer than subsequent syncs, which occur about every 40 minutes, as long as the Azure AD provisioning service is running. You can monitor progress in the **Synchronization Details** section. You can also follow links to a provisioning activity report, which describes all actions performed by the Azure AD provisioning service on Zscaler Two. -For more information on how to read the Azure AD provisioning logs, see [Reporting on automatic user account provisioning](../active-directory-saas-provisioning-reporting.md). +For information about how to read the Azure AD provisioning logs, see [Reporting on automatic user account provisioning](../active-directory-saas-provisioning-reporting.md). ## Additional resources -* [Managing user account provisioning for Enterprise Apps](../manage-apps/configure-automatic-user-provisioning-portal.md) +* [Managing user account provisioning for enterprise apps](../manage-apps/configure-automatic-user-provisioning-portal.md) * [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md) ## Next steps diff --git a/articles/active-directory/saas-apps/zscaler-zscloud-provisioning-tutorial.md b/articles/active-directory/saas-apps/zscaler-zscloud-provisioning-tutorial.md index e96c28c544316..ed275a815567c 100644 --- a/articles/active-directory/saas-apps/zscaler-zscloud-provisioning-tutorial.md +++ b/articles/active-directory/saas-apps/zscaler-zscloud-provisioning-tutorial.md @@ -1,6 +1,6 @@ --- title: 'Tutorial: Configure Zscaler ZSCloud for automatic user provisioning with Azure Active Directory | Microsoft Docs' -description: Learn how to configure Azure Active Directory to automatically provision and de-provision user accounts to Zscaler ZSCloud. +description: In this tutorial, you'll learn how to configure Azure Active Directory to automatically provision and deprovision user accounts to Zscaler ZSCloud. 
services: active-directory documentationcenter: '' author: zchia @@ -13,151 +13,145 @@ ms.subservice: saas-app-tutorial ms.workload: identity ms.tgt_pltfrm: na ms.devlang: na -ms.topic: article +ms.topic: tutorial ms.date: 03/27/2019 ms.author: v-ant-msft --- # Tutorial: Configure Zscaler ZSCloud for automatic user provisioning -The objective of this tutorial is to demonstrate the steps to be performed in Zscaler ZSCloud and Azure Active Directory (Azure AD) to configure Azure AD to automatically provision and de-provision users and/or groups to Zscaler ZSCloud. +In this tutorial, you'll learn how to configure Azure Active Directory (Azure AD) to automatically provision and deprovision users and/or groups to Zscaler ZSCloud. > [!NOTE] -> This tutorial describes a connector built on top of the Azure AD User Provisioning Service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../active-directory-saas-app-provisioning.md). -> -> This connector is currently in Public Preview. For more information on the general Microsoft Azure terms of use for Preview features, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). +> This tutorial describes a connector that's built on the Azure AD user provisioning service. For important details on what this service does and how it works, and answers to frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../active-directory-saas-app-provisioning.md). +> +> This connector is currently in Public Preview. For more information on the general Azure terms of use for Preview features, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). 
## Prerequisites -The scenario outlined in this tutorial assumes that you already have the following: +To complete the steps outlined in this tutorial, you need the following: -* An Azure AD tenant -* A Zscaler ZSCloud tenant -* A user account in Zscaler ZSCloud with Admin permissions +* An Azure AD tenant. +* A Zscaler ZSCloud tenant. +* A user account in Zscaler ZSCloud with admin permissions. > [!NOTE] -> The Azure AD provisioning integration relies on the Zscaler ZSCloud SCIM API, which is available to Zscaler ZSCloud developers for accounts with the Enterprise package. +> The Azure AD provisioning integration relies on the Zscaler ZSCloud SCIM API, which is available for Enterprise accounts. -## Adding Zscaler ZSCloud from the gallery +## Add Zscaler ZSCloud from the gallery -Before configuring Zscaler ZSCloud for automatic user provisioning with Azure AD, you need to add Zscaler ZSCloud from the Azure AD application gallery to your list of managed SaaS applications. +Before you configure Zscaler ZSCloud for automatic user provisioning with Azure AD, you need to add Zscaler ZSCloud from the Azure AD application gallery to your list of managed SaaS applications. -**To add Zscaler ZSCloud from the Azure AD application gallery, perform the following steps:** +In the [Azure portal](https://portal.azure.com), in the left pane, select **Azure Active Directory**: -1. In the **[Azure portal](https://portal.azure.com)**, on the left navigation panel, click **Azure Active Directory** icon. +![Select Azure Active Directory](common/select-azuread.png) - ![The Azure Active Directory button](common/select-azuread.png) +Go to **Enterprise applications** and then select **All applications**: -2. Navigate to **Enterprise Applications** and then select the **All Applications** option. 
+![Enterprise applications](common/enterprise-applications.png) - ![The Enterprise applications blade](common/enterprise-applications.png) +To add an application, select **New application** at the top of the window: -3. To add new application, click **New application** button on the top of dialog. +![Select New application](common/add-new-app.png) - ![The New application button](common/add-new-app.png) +In the search box, enter **Zscaler ZSCloud**. Select **Zscaler ZSCloud** in the results and then select **Add**. -4. In the search box, type **Zscaler ZSCloud**, select **Zscaler ZSCloud** from result panel then click **Add** button to add the application. +![Results list](common/search-new-app.png) - ![Zscaler ZSCloud in the results list](common/search-new-app.png) +## Assign users to Zscaler ZSCloud -## Assigning users to Zscaler ZSCloud +Azure AD users need to be assigned access to selected apps before they can use them. In the context of automatic user provisioning, only the users or groups that are assigned to an application in Azure AD are synchronized. -Azure Active Directory uses a concept called "assignments" to determine which users should receive access to selected apps. In the context of automatic user provisioning, only the users and/or groups that have been "assigned" to an application in Azure AD are synchronized. - -Before configuring and enabling automatic user provisioning, you should decide which users and/or groups in Azure AD need access to Zscaler ZSCloud. Once decided, you can assign these users and/or groups to Zscaler ZSCloud by following the instructions here: - -* [Assign a user or group to an enterprise app](https://docs.microsoft.com/azure/active-directory/active-directory-coreapps-assign-user-azure-portal) +Before you configure and enable automatic user provisioning, you should decide which users and/or groups in Azure AD need access to Zscaler ZSCloud. 
After you decide that, you can assign these users and groups to Zscaler ZSCloud by following the instructions in [Assign a user or group to an enterprise app](https://docs.microsoft.com/azure/active-directory/active-directory-coreapps-assign-user-azure-portal). ### Important tips for assigning users to Zscaler ZSCloud -* It is recommended that a single Azure AD user is assigned to Zscaler ZSCloud to test the automatic user provisioning configuration. Additional users and/or groups may be assigned later. +* We recommend that you first assign a single Azure AD user to Zscaler ZSCloud to test the automatic user provisioning configuration. You can assign more users and groups later. -* When assigning a user to Zscaler ZSCloud, you must select any valid application-specific role (if available) in the assignment dialog. Users with the **Default Access** role are excluded from provisioning. +* When you assign a user to Zscaler ZSCloud, you need to select any valid application-specific role (if available) in the assignment dialog box. Users with the **Default Access** role are excluded from provisioning. -## Configuring automatic user provisioning to Zscaler ZSCloud +## Set up automatic user provisioning -This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users and/or groups in Zscaler ZSCloud based on user and/or group assignments in Azure AD. +This section guides you through the steps for configuring the Azure AD provisioning service to create, update, and disable users and groups in Zscaler ZSCloud based on user and group assignments in Azure AD. > [!TIP] -> You may also choose to enable SAML-based single sign-on for Zscaler ZSCloud, following the instructions provided in the [Zscaler ZSCloud single sign-on tutorial](zscaler-zsCloud-tutorial.md). Single sign-on can be configured independently of automatic user provisioning, though these two features compliment each other. 
- -### To configure automatic user provisioning for Zscaler ZSCloud in Azure AD: +> You might also want to enable SAML-based single sign-on for Zscaler ZSCloud. If you do, follow the instructions in the [Zscaler ZSCloud single sign-on tutorial](zscaler-zsCloud-tutorial.md). Single sign-on can be configured independently of automatic user provisioning, but the two features complement each other. -1. Sign in to the [Azure portal](https://portal.azure.com) and select **Enterprise Applications**, select **All applications**, then select **Zscaler ZSCloud**. +1. Sign in to the [Azure portal](https://portal.azure.com) and select **Enterprise applications** > **All applications** > **Zscaler ZSCloud**: - ![Enterprise applications blade](common/enterprise-applications.png) + ![Enterprise applications](common/enterprise-applications.png) -2. In the applications list, select **Zscaler ZSCloud**. +2. In the applications list, select **Zscaler ZSCloud**: - ![The Zscaler ZSCloud link in the Applications list](common/all-applications.png) + ![Applications list](common/all-applications.png) -3. Select the **Provisioning** tab. +3. Select the **Provisioning** tab: ![Zscaler ZSCloud Provisioning](./media/zscaler-zscloud-provisioning-tutorial/provisioningtab.png) -4. Set the **Provisioning Mode** to **Automatic**. +4. Set the **Provisioning Mode** to **Automatic**: - ![Zscaler ZSCloud Provisioning](./media/zscaler-zscloud-provisioning-tutorial/provisioningcredentials.png) + ![Set the Provisioning Mode](./media/zscaler-zscloud-provisioning-tutorial/provisioningcredentials.png) -5. Under the **Admin Credentials** section, input the **Tenant URL** and **Secret Token** of your Zscaler ZSCloud account as described in Step 6. +5. In the **Admin Credentials** section, enter the **Tenant URL** and **Secret Token** of your Zscaler ZSCloud account, as described in the next step. -6. 
To obtain the **Tenant URL** and **Secret Token**, navigate to **Administration > Authentication Settings** in the Zscaler ZSCloud portal user interface and click on **SAML** under **Authentication Type**. +6. To get the **Tenant URL** and **Secret Token**, go to **Administration** > **Authentication Settings** in the Zscaler ZSCloud portal and select **SAML** under **Authentication Type**: - ![Zscaler ZSCloud Provisioning](./media/zscaler-zscloud-provisioning-tutorial/secrettoken1.png) + ![Zscaler ZSCloud Authentication Settings](./media/zscaler-zscloud-provisioning-tutorial/secrettoken1.png) - Click on **Configure SAML** to open **Configuration SAML** options. + Select **Configure SAML** to open the **Configure SAML** window: - ![Zscaler ZSCloud Provisioning](./media/zscaler-zscloud-provisioning-tutorial/secrettoken2.png) + ![Configure SAML window](./media/zscaler-zscloud-provisioning-tutorial/secrettoken2.png) - Select **Enable SCIM-Based Provisioning** to retrieve **Base URL** and **Bearer Token**, then save the settings. Copy the **Base URL** to **Tenant URL** and **Bearer Token** to **Secret Token** in the Azure portal. + Select **Enable SCIM-Based Provisioning** and copy the **Base URL** and **Bearer Token**, and then save the settings. In the Azure portal, paste the **Base URL** into the **Tenant URL** box and the **Bearer Token** into the **Secret Token** box. -7. Upon populating the fields shown in Step 5, click **Test Connection** to ensure Azure AD can connect to Zscaler ZSCloud. If the connection fails, ensure your Zscaler ZSCloud account has Admin permissions and try again. +7. After you enter the values in the **Tenant URL** and **Secret Token** boxes, select **Test Connection** to make sure Azure AD can connect to Zscaler ZSCloud. If the connection fails, make sure your Zscaler ZSCloud account has admin permissions and try again. 
- ![Zscaler ZSCloud Provisioning](./media/zscaler-zscloud-provisioning-tutorial/testconnection.png) + ![Test the connection](./media/zscaler-zscloud-provisioning-tutorial/testconnection.png) -8. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications and check the checkbox **Send an email notification when a failure occurs**. +8. In the **Notification Email** box, enter the email address of a person or group that should receive the provisioning error notifications. Select **Send an email notification when a failure occurs**: - ![Zscaler ZSCloud Provisioning](./media/zscaler-zscloud-provisioning-tutorial/Notification.png) + ![Set up notification email](./media/zscaler-zscloud-provisioning-tutorial/Notification.png) -9. Click **Save**. +9. Select **Save**. -10. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to Zscaler ZSCloud**. +10. In the **Mappings** section, select **Synchronize Azure Active Directory Users to ZscalerZSCloud**: - ![Zscaler ZSCloud Provisioning](./media/zscaler-zscloud-provisioning-tutorial/usermappings.png) + ![Synchronize Azure AD users](./media/zscaler-zscloud-provisioning-tutorial/usermappings.png) -11. Review the user attributes that are synchronized from Azure AD to Zscaler ZSCloud in the **Attribute Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Zscaler ZSCloud for update operations. Select the **Save** button to commit any changes. +11. Review the user attributes that are synchronized from Azure AD to Zscaler ZSCloud in the **Attribute Mappings** section. The attributes selected as **Matching** properties are used to match the user accounts in Zscaler ZSCloud for update operations. Select **Save** to commit any changes. 
- ![Zscaler ZSCloud Provisioning](./media/zscaler-zscloud-provisioning-tutorial/userattributemappings.png) + ![Attribute Mappings](./media/zscaler-zscloud-provisioning-tutorial/userattributemappings.png) -12. Under the **Mappings** section, select **Synchronize Azure Active Directory Groups to Zscaler ZSCloud**. +12. In the **Mappings** section, select **Synchronize Azure Active Directory Groups to ZscalerZSCloud**: - ![Zscaler ZSCloud Provisioning](./media/zscaler-zscloud-provisioning-tutorial/groupmappings.png) + ![Synchronize Azure AD groups](./media/zscaler-zscloud-provisioning-tutorial/groupmappings.png) -13. Review the group attributes that are synchronized from Azure AD to Zscaler ZSCloud in the **Attribute Mapping** section. The attributes selected as **Matching** properties are used to match the groups in Zscaler ZSCloud for update operations. Select the **Save** button to commit any changes. +13. Review the group attributes that are synchronized from Azure AD to Zscaler ZSCloud in the **Attribute Mappings** section. The attributes selected as **Matching** properties are used to match the groups in Zscaler ZSCloud for update operations. Select **Save** to commit any changes. - ![Zscaler ZSCloud Provisioning](./media/zscaler-zscloud-provisioning-tutorial/groupattributemappings.png) + ![Attribute Mappings](./media/zscaler-zscloud-provisioning-tutorial/groupattributemappings.png) -14. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](./../active-directory-saas-scoping-filters.md). +14. To configure scoping filters, refer to the instructions in the [Scoping filter tutorial](./../active-directory-saas-scoping-filters.md). -15. To enable the Azure AD provisioning service for Zscaler ZSCloud, change the **Provisioning Status** to **On** in the **Settings** section. +15. 
To enable the Azure AD provisioning service for Zscaler ZSCloud, change the **Provisioning Status** to **On** in the **Settings** section: - ![Zscaler ZSCloud Provisioning](./media/zscaler-zscloud-provisioning-tutorial/provisioningstatus.png) + ![Provisioning Status](./media/zscaler-zscloud-provisioning-tutorial/provisioningstatus.png) -16. Define the users and/or groups that you would like to provision to Zscaler ZSCloud by choosing the desired values in **Scope** in the **Settings** section. +16. Define the users and/or groups that you want to provision to Zscaler ZSCloud by choosing the values you want under **Scope** in the **Settings** section: - ![Zscaler ZSCloud Provisioning](./media/zscaler-zscloud-provisioning-tutorial/scoping.png) + ![Scope values](./media/zscaler-zscloud-provisioning-tutorial/scoping.png) -17. When you are ready to provision, click **Save**. +17. When you're ready to provision, select **Save**: - ![Zscaler ZSCloud Provisioning](./media/zscaler-zscloud-provisioning-tutorial/saveprovisioning.png) + ![Select Save](./media/zscaler-zscloud-provisioning-tutorial/saveprovisioning.png) -This operation starts the initial synchronization of all users and/or groups defined in **Scope** in the **Settings** section. The initial sync takes longer to perform than subsequent syncs, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running. You can use the **Synchronization Details** section to monitor progress and follow links to provisioning activity report, which describes all actions performed by the Azure AD provisioning service on Zscaler ZSCloud. +This operation starts the initial synchronization of all users and groups defined under **Scope** in the **Settings** section. The initial sync takes longer than subsequent syncs, which occur about every 40 minutes, as long as the Azure AD provisioning service is running. You can monitor progress in the **Synchronization Details** section. 
You can also follow links to a provisioning activity report, which describes all actions performed by the Azure AD provisioning service on Zscaler ZSCloud. -For more information on how to read the Azure AD provisioning logs, see [Reporting on automatic user account provisioning](../active-directory-saas-provisioning-reporting.md). +For information on how to read the Azure AD provisioning logs, see [Reporting on automatic user account provisioning](../active-directory-saas-provisioning-reporting.md). ## Additional resources -* [Managing user account provisioning for Enterprise Apps](../manage-apps/configure-automatic-user-provisioning-portal.md) +* [Managing user account provisioning for enterprise apps](../manage-apps/configure-automatic-user-provisioning-portal.md) * [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md) ## Next steps diff --git a/articles/active-directory/users-groups-roles/TOC.yml b/articles/active-directory/users-groups-roles/TOC.yml index 153e75dd340d5..222d70ac9dd90 100644 --- a/articles/active-directory/users-groups-roles/TOC.yml +++ b/articles/active-directory/users-groups-roles/TOC.yml @@ -120,10 +120,12 @@ href: licensing-service-plan-reference.md - name: Azure AD administrator roles items: - - name: Roles and permissions + - name: Roles and permissions href: directory-assign-admin-roles.md - name: View and assign roles href: directory-manage-roles-portal.md + - name: Assign roles with PowerShell + href: roles-assign-powershell.md - name: Delegate app admin roles href: roles-delegate-app-roles.md - name: Least-privileged roles by task diff --git a/articles/active-directory/users-groups-roles/roles-assign-powershell.md b/articles/active-directory/users-groups-roles/roles-assign-powershell.md new file mode 100644 index 0000000000000..7995ca84a4287 --- /dev/null +++ b/articles/active-directory/users-groups-roles/roles-assign-powershell.md @@ -0,0 +1,164 @@ +--- +title: Assign 
and remove administrator role assignments with Azure PowerShell - Azure Active Directory | Microsoft Docs +description: If you frequently manage role assignments, you can manage members of an Azure AD administrator role with Azure PowerShell. +services: active-directory +author: curtand +manager: mtillman + +ms.service: active-directory +ms.workload: identity +ms.subservice: users-groups-roles +ms.topic: article +ms.date: 04/15/2019 +ms.author: curtand +ms.reviewer: vincesm +ms.custom: it-pro + +ms.collection: M365-identity-device-management +--- +# Assign Azure Active Directory admin roles using PowerShell + +You can automate how you assign roles to user accounts using Azure PowerShell. This article uses the [Azure Active Directory PowerShell Version 2](https://docs.microsoft.com/powershell/module/azuread/?view=azureadps-2.0#directory_roles) module. + +## Prepare PowerShell + +First, you must [download the Azure AD PowerShell module](https://www.powershellgallery.com/packages/AzureAD/). + +## Install the Azure AD PowerShell module + +To install the Azure AD PowerShell module, use the following commands: + +```powershell +Install-Module AzureAD +Import-Module AzureAD +``` + +To verify that the module is ready to use, use the following command: + +```powershell +Get-Module AzureAD + ModuleType Version Name ExportedCommands + ---------- --------- ---- ---------------- + Binary 2.0.0.115 azuread {Add-AzureADAdministrati...} +``` + +Now you can start using the cmdlets in the module. For a full description of the cmdlets in the Azure AD module, see the online reference documentation for [Azure Active Directory PowerShell Version 2](https://docs.microsoft.com/powershell/module/azuread/?view=azureadps-2.0#directory_roles). + +## Permissions required + +Connect to your Azure AD tenant using a global administrator account to assign or remove roles.
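+The examples in the following sections assume an authenticated session. A minimal sketch of connecting (the tenant ID shown is a placeholder):
+
+```powershell
+# Sign in interactively with a global administrator account
+Connect-AzureAD
+
+# Optionally, target a specific tenant (placeholder GUID)
+Connect-AzureAD -TenantId "00000000-0000-0000-0000-000000000000"
+```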
+ +## Assign a single role + +To assign a role, you must first obtain the display name of the user account and the name of the role you're assigning. When you have both, use the following cmdlets to assign the role to the user. + +```powershell +# Fetch user to assign to role +$roleMember = Get-AzureADUser -ObjectId "username@contoso.com" + +# Fetch User Account Administrator role instance +$role = Get-AzureADDirectoryRole | Where-Object {$_.displayName -eq 'User Account Administrator'} + +# If role instance does not exist, instantiate it based on the role template +if ($role -eq $null) { + # Instantiate an instance of the role template + $roleTemplate = Get-AzureADDirectoryRoleTemplate | Where-Object {$_.displayName -eq 'User Account Administrator'} + Enable-AzureADDirectoryRole -RoleTemplateId $roleTemplate.ObjectId + + # Fetch User Account Administrator role instance again + $role = Get-AzureADDirectoryRole | Where-Object {$_.displayName -eq 'User Account Administrator'} +} + +# Add user to role +Add-AzureADDirectoryRoleMember -ObjectId $role.ObjectId -RefObjectId $roleMember.ObjectId + +# Fetch role membership for role to confirm +Get-AzureADDirectoryRoleMember -ObjectId $role.ObjectId | Get-AzureADUser +``` + +## Assign a role to a service principal + +The following example assigns a service principal to a role.
+ +```powershell +# Fetch a service principal to assign to role +$roleMember = Get-AzureADServicePrincipal -ObjectId "00221b6f-4387-4f3f-aa85-34316ad7f956" + +#Fetch list of all directory roles with object ID +Get-AzureADDirectoryRole + +# Fetch a directory role by ID +$role = Get-AzureADDirectoryRole -ObjectId "5b3fe201-fa8b-4144-b6f1-875829ff7543" + +# Add service principal to role +Add-AzureADDirectoryRoleMember -ObjectId $role.ObjectId -RefObjectId $roleMember.ObjectId + +# Fetch the assignment for the role to confirm +Get-AzureADDirectoryRoleMember -ObjectId $role.ObjectId | Get-AzureADServicePrincipal +``` + +## Multiple role assignments + +The following examples assign and remove multiple roles at once. + +```powershell +#File name +$fileName="" + +$input_Excel = New-Object -ComObject Excel.Application +$input_Workbook = $input_Excel.Workbooks.Open($fileName) +$input_Worksheet = $input_Workbook.Sheets.Item(1) + +#Count number of users who have to be assigned to the role +$count = $input_Worksheet.UsedRange.Rows.Count + +#Loop through each row of the worksheet starting from row 2 (assuming the first row is a header) +for ($i=2; $i -le $count; $i++) +{ + #Fetch user display name + $displayName = $input_Worksheet.Cells.Item($i,1).Text + + #Fetch role name + $roleName = $input_Worksheet.Cells.Item($i,2).Text + + #Assign role + Add-AzureADDirectoryRoleMember -ObjectId (Get-AzureADDirectoryRole | Where-Object DisplayName -eq $roleName).ObjectId -RefObjectId (Get-AzureADUser | Where-Object DisplayName -eq $displayName).ObjectId +} + +#Remove multiple role assignments +for ($i=2; $i -le $count; $i++) +{ + $displayName = $input_Worksheet.Cells.Item($i,1).Text + $roleName = $input_Worksheet.Cells.Item($i,2).Text + + Remove-AzureADDirectoryRoleMember -ObjectId (Get-AzureADDirectoryRole | Where-Object DisplayName -eq $roleName).ObjectId -MemberId (Get-AzureADUser | Where-Object DisplayName -eq $displayName).ObjectId +} +``` + +## Remove a role assignment + +This example removes a role assignment
for the specified user. + +```powershell +# Fetch user to remove from role +$roleMember = Get-AzureADUser -ObjectId "username@contoso.com" + +#Fetch list of all directory roles with object ID +Get-AzureADDirectoryRole + +# Fetch a directory role by ID +$role = Get-AzureADDirectoryRole -ObjectId "5b3fe201-fa8b-4144-b6f1-875829ff7543" + +# Remove user from role +Remove-AzureADDirectoryRoleMember -ObjectId $role.ObjectId -MemberId $roleMember.ObjectId + +# Fetch role membership for role to confirm +Get-AzureADDirectoryRoleMember -ObjectId $role.ObjectId | Get-AzureADUser +``` + +## Next steps + +* Feel free to share with us on the [Azure AD administrative roles forum](https://feedback.azure.com/forums/169401-azure-active-directory?category_id=166032). +* For more about roles and administrator role assignments, see [Assign administrator roles](directory-assign-admin-roles.md). +* For default user permissions, see a [comparison of default guest and member user permissions](../fundamentals/users-default-permissions.md). diff --git a/articles/asc-for-iot/how-to-deploy-windows-cs.md b/articles/asc-for-iot/how-to-deploy-windows-cs.md index 3c7bce0764ca7..55ddbba98e10d 100644 --- a/articles/asc-for-iot/how-to-deploy-windows-cs.md +++ b/articles/asc-for-iot/how-to-deploy-windows-cs.md @@ -86,7 +86,7 @@ For additional help, use the Get-Help command in PowerShell
Get-Help example ### Verify deployment status - Check the agent deployment status by running:
- ```sc.exe query "ASC IoT Agent" ``` + ```sc.exe query "ASC IoT Agent"``` ### Uninstall the agent diff --git a/articles/automation/automation-dsc-getting-started.md b/articles/automation/automation-dsc-getting-started.md index 79c3993c4725c..a332137817a9c 100644 --- a/articles/automation/automation-dsc-getting-started.md +++ b/articles/automation/automation-dsc-getting-started.md @@ -6,7 +6,7 @@ ms.service: automation ms.subservice: dsc author: bobbytreed ms.author: robreed -ms.date: 08/08/2018 +ms.date: 04/15/2019 ms.topic: conceptual manager: carmonm --- @@ -25,7 +25,7 @@ Automation State Configuration. To complete the examples in this article, the following are required: - An Azure Automation account. For instructions on creating an Azure Automation Run As account, see [Azure Run As Account](automation-sec-configure-azure-runas-account.md). -- An Azure Resource Manager VM (not Classic) running Windows Server 2008 R2 or later. For instructions on creating a VM, see [Create your first Windows virtual machine in the Azure portal](../virtual-machines/virtual-machines-windows-hero-tutorial.md) +- An Azure Resource Manager VM (not Classic) running a [supported operating system](automation-dsc-overview.md#operating-system-requirements). For instructions on creating a VM, see [Create your first Windows virtual machine in the Azure portal](../virtual-machines/virtual-machines-windows-hero-tutorial.md) ## Creating a DSC configuration @@ -165,9 +165,9 @@ State Configuration](automation-dsc-onboarding.md). 1. On the **Virtual machine** detail page, click **+ Connect**. > [!IMPORTANT] - > This must be an Azure Resource Manager VM running Windows Server 2008 R2 or later. + > This must be an Azure Resource Manager VM running a [supported operating system](automation-dsc-overview.md#operating-system-requirements). -1. In the **Registration** page, select the name of the node configuration you want to apply to the VM in the **Node configuration name** box. 
Providing a name at this point is optional. You can change the assigned node configuration after onboarding the node. +2. In the **Registration** page, select the name of the node configuration you want to apply to the VM in the **Node configuration name** box. Providing a name at this point is optional. You can change the assigned node configuration after onboarding the node. Check **Reboot Node if Needed**, then click **OK**. ![Screenshot of the Registration blade](./media/automation-dsc-getting-started/RegisterVM.png) diff --git a/articles/automation/automation-onboard-solutions-from-automation-account.md b/articles/automation/automation-onboard-solutions-from-automation-account.md index 092bd1cc240a2..ba70b214695c8 100644 --- a/articles/automation/automation-onboard-solutions-from-automation-account.md +++ b/articles/automation/automation-onboard-solutions-from-automation-account.md @@ -5,7 +5,7 @@ services: automation ms.service: automation author: georgewallace ms.author: gwallace -ms.date: 10/16/2018 +ms.date: 4/11/2019 ms.topic: conceptual manager: carmonm ms.custom: mvc @@ -38,7 +38,7 @@ The following table shows the supported mappings: |EastUS1|EastUS2| |JapanEast|JapanEast| |SoutheastAsia|SoutheastAsia| -|WestCentralUS|WestCentralUS| +|WestCentralUS2|WestCentralUS2| |WestEurope|WestEurope| |UKSouth|UKSouth| |USGovVirginia|USGovVirginia| @@ -46,8 +46,7 @@ The following table shows the supported mappings: 1 EastUS2EUAP and EastUS mappings for Log Analytics workspaces to Automation Accounts are not an exact region to region mapping but is the correct mapping. -> [!NOTE] -> Due to demand, a region may not be available when creating your Automation Account or Log Analytics workspace. If that is the case, ensure you are using a region in the preceding table that you can create resources in. +2 Due to capacity constraints, the region is not available when creating new resources. This includes Automation Accounts and Log Analytics workspaces.
However, preexisting linked resources in the region should continue to work. The Change Tracking and Inventory solution provides the ability to [track changes](automation-vm-change-tracking.md) and [inventory](automation-vm-inventory.md) on your virtual machines. In this step, you enable the solution on a virtual machine. diff --git a/articles/automation/automation-onboard-solutions-from-browse.md b/articles/automation/automation-onboard-solutions-from-browse.md index 5097c51dbb503..ddc7bdf05af1d 100644 --- a/articles/automation/automation-onboard-solutions-from-browse.md +++ b/articles/automation/automation-onboard-solutions-from-browse.md @@ -5,7 +5,7 @@ services: automation ms.service: automation author: georgewallace ms.author: gwallace -ms.date: 06/06/2018 +ms.date: 04/11/2019 ms.topic: article manager: carmonm ms.custom: mvc @@ -65,7 +65,7 @@ The following table shows the supported mappings: |EastUS1|EastUS2| |JapanEast|JapanEast| |SoutheastAsia|SoutheastAsia| -|WestCentralUS|WestCentralUS| +|WestCentralUS2|WestCentralUS2| |WestEurope|WestEurope| |UKSouth|UKSouth| |USGovVirginia|USGovVirginia| @@ -73,8 +73,7 @@ The following table shows the supported mappings: 1 EastUS2EUAP and EastUS mappings for Log Analytics workspaces to Automation Accounts are not an exact region to region mapping but is the correct mapping. -> [!NOTE] -> Due to demand, a region may not be available when creating your Automation Account or Log Analytics workspace. If that is the case, ensure you are using a region in the preceding table that you can create resources in. +2 Due to capacity constraints, the region is not available when creating new resources. This includes Automation Accounts and Log Analytics workspaces. However, preexisting linked resources in the region should continue to work. Deselect the checkbox next to any virtual machine that you don't want to enable. Virtual machines that can't be enabled are already deselected. 
diff --git a/articles/azure-maps/data-driven-style-expressions-web-sdk.md b/articles/azure-maps/data-driven-style-expressions-web-sdk.md index 3ea69f69ab2b7..342e31b1b4882 100644 --- a/articles/azure-maps/data-driven-style-expressions-web-sdk.md +++ b/articles/azure-maps/data-driven-style-expressions-web-sdk.md @@ -291,7 +291,7 @@ var layer = new atlas.layer.BubbleLayer(datasource, null, { A `coalesce` expression steps through a set of expressions until the first non-null value is obtained and returns that value. -The following pseudocode defines the structure of the ` coalesce` expression. +The following pseudocode defines the structure of the `coalesce` expression. ```javascript [ diff --git a/articles/azure-monitor/learn/media/tutorial-alert/add-alert-01.png b/articles/azure-monitor/learn/media/tutorial-alert/add-alert-01.png deleted file mode 100644 index 5db2eb7f666a9..0000000000000 Binary files a/articles/azure-monitor/learn/media/tutorial-alert/add-alert-01.png and /dev/null differ diff --git a/articles/azure-monitor/learn/media/tutorial-alert/add-alert-02.png b/articles/azure-monitor/learn/media/tutorial-alert/add-alert-02.png deleted file mode 100644 index 0e8550bc1a0df..0000000000000 Binary files a/articles/azure-monitor/learn/media/tutorial-alert/add-alert-02.png and /dev/null differ diff --git a/articles/azure-monitor/learn/media/tutorial-alert/add-metric-alert.png b/articles/azure-monitor/learn/media/tutorial-alert/add-metric-alert.png deleted file mode 100644 index 7edff353b8e71..0000000000000 Binary files a/articles/azure-monitor/learn/media/tutorial-alert/add-metric-alert.png and /dev/null differ diff --git a/articles/azure-monitor/learn/media/tutorial-alert/add-test-001.png b/articles/azure-monitor/learn/media/tutorial-alert/add-test-001.png new file mode 100644 index 0000000000000..07ad681cecd4e Binary files /dev/null and b/articles/azure-monitor/learn/media/tutorial-alert/add-test-001.png differ diff --git 
a/articles/azure-monitor/learn/media/tutorial-alert/add-test.png b/articles/azure-monitor/learn/media/tutorial-alert/add-test.png deleted file mode 100644 index f028f889b3f1c..0000000000000 Binary files a/articles/azure-monitor/learn/media/tutorial-alert/add-test.png and /dev/null differ diff --git a/articles/azure-monitor/learn/media/tutorial-alert/alert-mail.png b/articles/azure-monitor/learn/media/tutorial-alert/alert-mail.png deleted file mode 100644 index 6f1171a8a2188..0000000000000 Binary files a/articles/azure-monitor/learn/media/tutorial-alert/alert-mail.png and /dev/null differ diff --git a/articles/azure-monitor/learn/media/tutorial-alert/create-test-001.png b/articles/azure-monitor/learn/media/tutorial-alert/create-test-001.png new file mode 100644 index 0000000000000..909b076e172ac Binary files /dev/null and b/articles/azure-monitor/learn/media/tutorial-alert/create-test-001.png differ diff --git a/articles/azure-monitor/learn/media/tutorial-alert/create-test.png b/articles/azure-monitor/learn/media/tutorial-alert/create-test.png deleted file mode 100644 index 96f443f67920d..0000000000000 Binary files a/articles/azure-monitor/learn/media/tutorial-alert/create-test.png and /dev/null differ diff --git a/articles/azure-monitor/learn/media/tutorial-alert/edit-alert-001.png b/articles/azure-monitor/learn/media/tutorial-alert/edit-alert-001.png new file mode 100644 index 0000000000000..30b342eafa55b Binary files /dev/null and b/articles/azure-monitor/learn/media/tutorial-alert/edit-alert-001.png differ diff --git a/articles/azure-monitor/learn/media/tutorial-alert/save-alert-001.png b/articles/azure-monitor/learn/media/tutorial-alert/save-alert-001.png new file mode 100644 index 0000000000000..3fb15b7f1ff27 Binary files /dev/null and b/articles/azure-monitor/learn/media/tutorial-alert/save-alert-001.png differ diff --git a/articles/azure-monitor/learn/media/tutorial-alert/test-details-001.png 
b/articles/azure-monitor/learn/media/tutorial-alert/test-details-001.png new file mode 100644 index 0000000000000..fb98654751484 Binary files /dev/null and b/articles/azure-monitor/learn/media/tutorial-alert/test-details-001.png differ diff --git a/articles/azure-monitor/learn/media/tutorial-alert/test-details.png b/articles/azure-monitor/learn/media/tutorial-alert/test-details.png deleted file mode 100644 index 8f97d951f395a..0000000000000 Binary files a/articles/azure-monitor/learn/media/tutorial-alert/test-details.png and /dev/null differ diff --git a/articles/azure-monitor/learn/media/tutorial-alert/test-result-001.png b/articles/azure-monitor/learn/media/tutorial-alert/test-result-001.png new file mode 100644 index 0000000000000..b973874deb6fa Binary files /dev/null and b/articles/azure-monitor/learn/media/tutorial-alert/test-result-001.png differ diff --git a/articles/azure-monitor/learn/media/tutorial-alert/test-result.png b/articles/azure-monitor/learn/media/tutorial-alert/test-result.png deleted file mode 100644 index e948752459103..0000000000000 Binary files a/articles/azure-monitor/learn/media/tutorial-alert/test-result.png and /dev/null differ diff --git a/articles/azure-monitor/learn/tutorial-alert.md b/articles/azure-monitor/learn/tutorial-alert.md index ecb59c4792212..2a3128588fb97 100644 --- a/articles/azure-monitor/learn/tutorial-alert.md +++ b/articles/azure-monitor/learn/tutorial-alert.md @@ -4,7 +4,7 @@ description: Tutorial to send alerts in response to errors in your application u keywords: author: mrbullwinkle ms.author: mbullwin -ms.date: 09/20/2017 +ms.date: 04/10/2019 ms.service: application-insights ms.custom: mvc ms.topic: tutorial @@ -13,79 +13,60 @@ manager: carmonm # Monitor and alert on application health with Azure Application Insights -Azure Application Insights allows you to monitor your application and send you alerts when it is either unavailable, experiencing failures, or suffering from performance issues. 
This tutorial takes you through the process of creating tests to continuously check the availability of your application and to send different kinds of alerts in response to detected issues. You learn how to: +Azure Application Insights allows you to monitor your application and sends you alerts when it is unavailable, experiencing failures, or suffering from performance issues. This tutorial takes you through the process of creating tests to continuously check the availability of your application. + +You learn how to: > [!div class="checklist"] > * Create availability test to continuously check the response of the application > * Send mail to administrators when a problem occurs -> * Create alerts based on performance metrics -> * Use a Logic App to send summarized telemetry on a schedule. - ## Prerequisites To complete this tutorial: -- Install [Visual Studio 2017](https://www.visualstudio.com/downloads/) with the following workloads: - - ASP.NET and web development - - Azure development - - Deploy a .NET application to Azure and [enable the Application Insights SDK](../../azure-monitor/app/asp-net.md). +Create an [Application Insights resource](https://docs.microsoft.com/azure/azure-monitor/learn/dotnetcore-quick-start#enable-application-insights). +## Sign in to Azure -## Log in to Azure -Log in to the Azure portal at [https://portal.azure.com](https://portal.azure.com). +Sign in to the Azure portal at [https://portal.azure.com](https://portal.azure.com). ## Create availability test -Availability tests in Application Insights allow you to automatically test your application from various locations around the world. In this tutorial, you will perform a simple test to ensure that the application is available. You could also create a complete walkthrough to test its detailed operation. - -1. Select **Application Insights** and then select your subscription. -1. Select **Availability** under the **Investigate** menu and then click **Add test**.
- - ![Add availability test](media/tutorial-alert/add-test.png) - -2. Type in a name for the test and leave the other defaults. This requests the home page of the application every 5 minutes from 5 different geographic locations. -3. Select **Alerts** to open the **Alerts** panel where you can define details for how to respond if the test fails. Type in an email address to send when the alert criteria is met. You could optionally type in the address of a webhook to call when the alert criteria is met. - ![Create test](media/tutorial-alert/create-test.png) - -4. Return to the test panel, and after a few minutes you should start seeing results from the availability test. Click on the test name to view details from each location. The scatter chart shows the success and duration of each test. +Availability tests in Application Insights allow you to automatically test your application from various locations around the world. In this tutorial, you will perform a URL test to ensure that your web application is available. You could also create a complete walkthrough to test its detailed operation. - ![Test details](media/tutorial-alert/test-details.png) - -5. You can drill down in to the details of any particular test by clicking on its dot in the scatter chart. The example below shows the details for a failed request. +1. Select **Application Insights** and then select your subscription. - ![Test result](media/tutorial-alert/test-result.png) - -6. If the alert criteria is met, a mail similar to the one below is sent to the address that you specified. +2. Select **Availability** under the **Investigate** menu and then click **Create test**. - ![Alert mail](media/tutorial-alert/alert-mail.png) + ![Add availability test](media/tutorial-alert/add-test-001.png) +3. Type in a name for the test and leave the other defaults. This selection will trigger requests for the application URL every 5 minutes from five different geographic locations.
-## Create an alert from metrics -In addition to sending alerts from an availability test, you can create an alert from any performance metrics that are being collected for your application. +4. Select **Alerts** to open the **Alerts** dropdown where you can define details for how to respond if the test fails. Choose **Near-realtime** and set the status to **Enabled**. -1. Select **Alerts** from the **Configure** menu. This opens the Azure Alerts panel. There may be other alert rules configured here for other services. -1. Click **Add metric alert**. This opens the panel to create a new alert rule. + Type in an email address that notifications are sent to when the alert criteria are met. You could optionally type in the address of a webhook to call when the alert criteria are met. - ![Add metric alert](media/tutorial-alert/add-metric-alert.png) + ![Create test](media/tutorial-alert/create-test-001.png) -1. Type in a **Name** for the alert rule, and select your application in the dropdown for **Resource**. -1. Select a **Metric** to sample. A graph is displayed to indicate the value of this request over the past 24 hours. This assists you in setting the condition for the metric. +5. Return to the test panel, select the ellipsis (**...**), and then select **Edit alert** to enter the configuration for your near-realtime alert. - ![Add alert rule](media/tutorial-alert/add-alert-01.png) + ![Edit alert](media/tutorial-alert/edit-alert-001.png) -1. Specify a **Condition** and **Threshold** for the alert. This is the number of times that the metric must be exceeded for an alert to be created. -1. Under **Notify via** check the **Email owners, contributors, and readers** box to send a mail to these users when the alert condition is met and add the email address of any additional recipients. You can also specify a webhook or a logic app here that runs when the condition is met. These could be used to attempt to mitigate the detected issue or +6. Set failed locations to greater than or equal to 3.
Create an [action group](https://docs.microsoft.com/azure/azure-monitor/platform/action-groups) to configure who gets notified when your alert threshold is breached. - ![Add alert rule](media/tutorial-alert/add-alert-02.png) + ![Save alert UI](media/tutorial-alert/save-alert-001.png) +7. Once you have configured your alert, click on the test name to view details from each location. Tests can be viewed in both line graph and scatter plot format to visualize the success/failures for a given time range. -## Proactively send information -Alerts are created in reaction to a particular set of issues identified in your application, and you typically reserve alerts for critical conditions requiring immediate attention. You can proactively receive information about your application with a Logic App that runs automatically on a schedule. For example, you could have a mail sent to administrators daily with summary information that requires further evaluation. + ![Test details](media/tutorial-alert/test-details-001.png) -For details on creating a Logic App with Application Insights, see [Automate Application Insights processes by using Logic Apps](../../azure-monitor/app/automate-with-logic-apps.md) +8. You can drill down into the details of any test by clicking on its dot in the scatter chart. This will launch the end-to-end transaction details view. The example below shows the details for a failed request. + ![Test result](media/tutorial-alert/test-result-001.png) + ## Next steps + Now that you've learned how to alert on issues, advance to the next tutorial to learn how to analyze how users are interacting with your application. 
> [!div class="nextstepaction"] diff --git a/articles/azure-monitor/platform/alert-log-troubleshoot.md b/articles/azure-monitor/platform/alert-log-troubleshoot.md index 1292b25f47bc9..1a96621834c44 100644 --- a/articles/azure-monitor/platform/alert-log-troubleshoot.md +++ b/articles/azure-monitor/platform/alert-log-troubleshoot.md @@ -1,6 +1,6 @@ --- title: Troubleshooting log alerts in Azure Monitor | Microsoft Docs -description: Common issues, errors and resolution for log alert rules in Azure. +description: Common issues, errors, and resolution for log alert rules in Azure. author: msvijayn services: azure-monitor ms.service: azure-monitor @@ -20,7 +20,6 @@ The term **Log Alerts** describes alerts that fire based on a log query in a [Lo > [!NOTE] > This article doesn't consider cases when the Azure portal shows and alert rule triggered and a notification performed by an associated Action Group(s). For such cases, please refer to details in the article on [Action Groups](../platform/action-groups.md). - ## Log alert didn't fire Here are some common reasons why a configured [log alert rule in Azure Monitor](../platform/alerts-log.md) state doesn't show [as *fired* when expected](../platform/alerts-managing-alert-states.md). @@ -86,11 +85,96 @@ For example, if the log alert rule is configured to trigger when number of resul ### Alert query output misunderstood -You provide the logic for log alerts in an analytics query. The analytics query may use various big data and mathematical functions. The alerting service executes your query at intervals specified with data for time period specified. The alerting service makes subtle changes to query provided based on the alert type chosen. This can be seen in the "Query to be executed" section in *Configure signal logic* screen, as shown below: +You provide the logic for log alerts in an analytics query. The analytics query may use various big data and mathematical functions. 
The alerting service executes your query at the specified intervals, with data for the specified time period. The alerting service makes subtle changes to the query provided, based on the alert type chosen. This change can be viewed in the "Query to be executed" section in the *Configure signal logic* screen, as shown below: ![Query to be executed](media/alert-log-troubleshoot/LogAlertPreview.png) What is shown in the **query to be executed** box is what the log alert service runs. You can run the stated query as well as timespan via [Analytics portal](../log-query/portals.md) or the [Analytics API](https://docs.microsoft.com/rest/api/loganalytics/) if you want to understand what the alert query output may be before you actually create the alert. +## Log alert was disabled + +Listed below are some reasons why a [log alert rule in Azure Monitor](../platform/alerts-log.md) may be disabled by Azure Monitor. + +### Resource on which alert was created no longer exists + +Log alert rules created in Azure Monitor target a specific resource, like an Azure Log Analytics workspace, an Azure Application Insights app, or another Azure resource. The log alert service then runs the analytics query provided in the rule against the specified target. But after rule creation, users often go on to delete the target of the alert rule from Azure, or move it inside Azure. Because the target of the log alert rule is no longer valid, execution of the rule fails. + +In such cases, Azure Monitor disables the log alert to ensure that customers are not billed unnecessarily when the rule itself is not able to execute continually for a sizeable period, like a week. Users can find the exact time at which the log alert rule was disabled by Azure Monitor via the [Azure Activity Log](../../azure-resource-manager/resource-group-audit.md). When the log alert rule is disabled by Azure, an event is added to the Azure Activity Log.
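When an alert rule is disabled this way, the resulting activity-log record can be picked out programmatically after exporting the log. A minimal sketch (not part of the article's guidance), assuming the exported records are available as a JavaScript array of objects shaped like the sample event in this section; the helper name and trimmed-down record are illustrative:

```javascript
// Find the activity-log events that record Azure Monitor disabling a log
// alert rule, and summarize the rule, time, and reason for each one.
// The operation name is the one emitted for scheduled-query-rule disable events.
const DISABLE_OP = "Microsoft.Insights/ScheduledQueryRules/disable/action";

function findDisabledAlertEvents(events) {
  return events
    .filter(e => e.operationName && e.operationName.value === DISABLE_OP)
    .map(e => ({
      rule: e.resourceId,
      disabledAt: e.eventTimestamp,
      reason: (e.properties && e.properties.reason) || e.description
    }));
}

// Illustrative record, trimmed to the fields the helper reads:
const events = [{
  operationName: { value: DISABLE_OP },
  resourceId: "/SUBSCRIPTIONS//RESOURCEGROUPS//PROVIDERS/MICROSOFT.INSIGHTS/SCHEDULEDQUERYRULES/TEST-BAD-ALERTS",
  eventTimestamp: "2019-03-22T04:18:22.8569543Z",
  properties: { reason: "Alert has been failing consistently with the same exception for the past week" }
}];

console.log(findDisabledAlertEvents(events));
```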
+ +A sample Azure Activity Log event for an alert rule that was disabled because of continual failure is shown below. + +```json +{ + "caller": "Microsoft.Insights/ScheduledQueryRules", + "channels": "Operation", + "claims": { + "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/spn": "Microsoft.Insights/ScheduledQueryRules" + }, + "correlationId": "abcdefg-4d12-1234-4256-21233554aff", + "description": "Alert: test-bad-alerts is disabled by the System due to : Alert has been failing consistently with the same exception for the past week", + "eventDataId": "f123e07-bf45-1234-4565-123a123455b", + "eventName": { + "value": "", + "localizedValue": "" + }, + "category": { + "value": "Administrative", + "localizedValue": "Administrative" + }, + "eventTimestamp": "2019-03-22T04:18:22.8569543Z", + "id": "/SUBSCRIPTIONS//RESOURCEGROUPS//PROVIDERS/MICROSOFT.INSIGHTS/SCHEDULEDQUERYRULES/TEST-BAD-ALERTS", + "level": "Informational", + "operationId": "", + "operationName": { + "value": "Microsoft.Insights/ScheduledQueryRules/disable/action", + "localizedValue": "Microsoft.Insights/ScheduledQueryRules/disable/action" + }, + "resourceGroupName": "", + "resourceProviderName": { + "value": "MICROSOFT.INSIGHTS", + "localizedValue": "Microsoft Insights" + }, + "resourceType": { + "value": "MICROSOFT.INSIGHTS/scheduledqueryrules", + "localizedValue": "MICROSOFT.INSIGHTS/scheduledqueryrules" + }, + "resourceId": "/SUBSCRIPTIONS//RESOURCEGROUPS//PROVIDERS/MICROSOFT.INSIGHTS/SCHEDULEDQUERYRULES/TEST-BAD-ALERTS", + "status": { + "value": "Succeeded", + "localizedValue": "Succeeded" + }, + "subStatus": { + "value": "", + "localizedValue": "" + }, + "submissionTimestamp": "2019-03-22T04:18:22.8569543Z", + "subscriptionId": "", + "properties": { + "resourceId": "/SUBSCRIPTIONS//RESOURCEGROUPS//PROVIDERS/MICROSOFT.INSIGHTS/SCHEDULEDQUERYRULES/TEST-BAD-ALERTS", + "subscriptionId": "", + "resourceGroup": "", + "eventDataId": "12e12345-12dd-1234-8e3e-12345b7a1234", + "eventTimeStamp": "03/22/2019 
04:18:22", + "issueStartTime": "03/22/2019 04:18:22", + "operationName": "Microsoft.Insights/ScheduledQueryRules/disable/action", + "status": "Succeeded", + "reason": "Alert has been failing consistently with the same exception for the past week" + }, + "relatedEvents": [] +} +``` + +### Query used in Log Alert is not valid + +Each log alert rule created in Azure Monitor must, as part of its configuration, specify an analytics query to be executed periodically by the alert service. While the analytics query may have correct syntax at the time of rule creation or update, over time the query provided in the log alert rule can develop syntax issues and cause the rule execution to start failing. Some common reasons why the analytics query provided in a log alert rule can develop errors are: + +- The query is written to [run across multiple resources](../log-query/cross-workspace-query.md) and one or more of the specified resources no longer exist. +- There has been no data flow to the analytics platform, so [query execution gives an error](https://dev.loganalytics.io/documentation/Using-the-API/Errors) because there is no data for the provided query. +- Changes in the [Query Language](https://docs.microsoft.com/azure/kusto/query/) have given commands and functions a revised format, so the query provided earlier in the alert rule is no longer valid. + +The user is first warned of this behavior via [Azure Advisor](../../advisor/advisor-overview.md). A recommendation is added for the specific log alert rule on Azure Advisor, under the category of High Availability, with medium impact and the description "Repair your log alert rule to ensure monitoring". If the alert query in the specified log alert rule is not rectified within seven days of the recommendation appearing on Azure Advisor, then Azure Monitor disables the log alert to ensure that customers are not billed unnecessarily when the rule itself is not able to execute continually for a sizeable period, like a week. + +Users can find the exact time at which the log alert rule was disabled by Azure Monitor via the [Azure Activity Log](../../azure-resource-manager/resource-group-audit.md). When the log alert rule is disabled by Azure, an event is added to the Azure Activity Log. + ## Next steps - Learn about [Log Alerts in Azure Alerts](../platform/alerts-unified-log.md) diff --git a/articles/azure-monitor/platform/personal-data-mgmt.md b/articles/azure-monitor/platform/personal-data-mgmt.md index 4973861c98913..85c18d9225bdf 100644 --- a/articles/azure-monitor/platform/personal-data-mgmt.md +++ b/articles/azure-monitor/platform/personal-data-mgmt.md @@ -82,6 +82,9 @@ As mentioned in the [strategy for personal data handling](#strategy-for-personal For both view and export data requests, the [Log Analytics query API](https://dev.loganalytics.io/) or the [Application Insights query API](https://dev.applicationinsights.io/quickstart) should be used. Logic to convert the shape of the data to an appropriate one to deliver to your users will be up to you to implement. [Azure Functions](https://azure.microsoft.com/services/functions/) makes a great place to host such logic. +> [!IMPORTANT] +> While the vast majority of purge operations may complete much quicker than the SLA, **the formal SLA for the completion of purge operations is set at 30 days** due to their heavy impact on the data platform used. This is an automated process; there is no way to request that an operation be handled faster.
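As a concrete sketch of the "convert the shape of the data" step mentioned above for view and export requests: the query APIs return results as tables with separate column and row arrays, which you may want to flatten into plain records before delivering them to a user. The function name and sample payload below are illustrative, and the payload is trimmed to two columns:

```javascript
// Convert one table from a query API response (columns + rows arrays)
// into an array of plain objects keyed by column name.
function tableToObjects(table) {
  const names = table.columns.map(c => c.name);
  return table.rows.map(row => {
    const record = {};
    row.forEach((value, i) => { record[names[i]] = value; });
    return record;
  });
}

// Illustrative response in the tables/columns/rows shape:
const response = {
  tables: [{
    name: "PrimaryResult",
    columns: [
      { name: "TimeGenerated", type: "datetime" },
      { name: "UserId", type: "string" }
    ],
    rows: [["2019-04-10T00:00:00Z", "user@contoso.com"]]
  }]
};

console.log(tableToObjects(response.tables[0]));
```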
+ ### Delete > [!WARNING] diff --git a/articles/azure-signalr/signalr-concept-performance.md b/articles/azure-signalr/signalr-concept-performance.md index 236ab4472fe70..eed5b1d5ab34d 100644 --- a/articles/azure-signalr/signalr-concept-performance.md +++ b/articles/azure-signalr/signalr-concept-performance.md @@ -1,6 +1,6 @@ --- title: Performance guide for Azure SignalR Service -description: An overview of Azure SignalR Service's performance. +description: An overview of the performance of Azure SignalR Service. author: sffamily ms.service: signalr ms.topic: conceptual @@ -9,201 +9,211 @@ ms.author: zhshang --- # Performance guide for Azure SignalR Service -One of the key benefits for using Azure SignalR Service is the ease to scale SignalR applications. In a large-scale scenario, performance becomes an important factor. In this guide we will introduce the factors that have impacts on the SignalR application performance, and under different use case scenarios, what is the typical performance? In the end, we will also introduce the environment and tools used to generate performance report. +One of the key benefits of using Azure SignalR Service is the ease of scaling SignalR applications. In a large-scale scenario, performance is an important factor. -## Terms definition +In this guide, we'll introduce the factors that affect SignalR application performance. We'll describe typical performance in different use-case scenarios. In the end, we'll introduce the environment and tools that you can use to generate a performance report. -*ASRS*: Azure SignalR Service +## Term definitions -*Inbound*: the incoming message to Azure SignalR Service +*Inbound*: The incoming message to Azure SignalR Service. -*Outbound*: the outgoing message from Azure SignalR Service +*Outbound*: The outgoing message from Azure SignalR Service. -*Bandwidth*: total size of all messages in 1 second +*Bandwidth*: The total size of all messages in 1 second. 
-*Default mode*: ASRS expects the app server to establish connection with it before accepting any client connections. It is the default working mode when an ASRS was created. +*Default mode*: The default working mode when an Azure SignalR Service instance is created. Azure SignalR Service expects the app server to establish a connection with it before it accepts any client connections. -*Serverless mode*: ASRS only accepts client connections. No server connection is allowed. +*Serverless mode*: A mode in which Azure SignalR Service accepts only client connections. No server connection is allowed. ## Overview -ASRS defines seven Standard tiers for different performance capacities, and this -guide intends to answer the following questions: +Azure SignalR Service defines seven Standard tiers for different performance capacities. This +guide answers the following questions: -- What is the typical ASRS performance for each tier? +- What is the typical Azure SignalR Service performance for each tier? -- Does ASRS meet my requirement of message throughput, for example, sending 100,000 messages per second? +- Does Azure SignalR Service meet my requirements for message throughput (for example, sending 100,000 messages per second)? - For my specific scenario, which tier is suitable for me? Or how can I select the proper tier? -- What kind of app server (VM size) is suitable for me and how many of them shall I deploy? +- What kind of app server (VM size) is suitable for me? How many of them should I deploy? -To answer these questions, this performance guide first gives a high-level explanation about the factors that have impacts on performance, then illustrates the maximum inbound and outbound messages for every tier for typical use cases: **echo**, **broadcast**, **send-to-group**, and **send-to-connection** (peer to peer chatting). +To answer these questions, this guide first gives a high-level explanation of the factors that affect performance.
It then illustrates the maximum inbound and outbound messages for every tier for typical use cases: **echo**, **broadcast**, **send to group**, and **send to connection** (peer-to-peer chatting). -It is impossible for this document to cover all scenarios (and different use case, different message size, or message sending pattern etc.). However, it provides some evaluation methods to help users to approximately evaluate their requirement of the inbound or outbound messages, then find the proper tiers by checking the performance table. +This guide can't cover all scenarios (and different use cases, message sizes, message sending patterns, and so on). But it provides some methods to help you: + +- Evaluate your approximate requirement for the inbound or outbound messages. +- Find the proper tiers by checking the performance table. ## Performance insight -This section describes the performance evaluation methodologies, then lists all factors that have impacts on performance. In the end, it provides methods to help evaluate the performance requirements. +This section describes the performance evaluation methodologies, and then lists all factors that affect performance. In the end, it provides methods to help you evaluate performance requirements. ### Methodology -**Throughput** and **latency** are two typical aspects of performance checking. For ASRS, different SKU tier has different throughput throttling policy. This document defines **the maximum allowed throughput (inbound and outbound bandwidth)** as the max achieved throughput when 99% of messages have latency less than 1 second. - -The latency is the time span from the connection sending message to receiving the response message from ASRS. Let's take **echo** as an example, every client connection adds a timestamp in the message. App server's hub sends the original message back to the client. So the propagation delay is easily calculated by every client connection. 
The timestamp is attached for every message in **broadcast**, **send-to-group**, and **send-to-connection**. +*Throughput* and *latency* are two typical aspects of performance checking. For Azure SignalR Service, each SKU tier has its own throughput throttling policy. The policy defines *the maximum allowed throughput (inbound and outbound bandwidth)* as the maximum achieved throughput when 99 percent of messages have latency that's less than 1 second. -To simulate thousands of concurrent clients connections, multiple VMs are created in a virtual private network in Azure. All of these VMs connect to the same ASRS instance. +Latency is the time span from the connection sending the message to receiving the response message from Azure SignalR Service. Let's take **echo** as an example. Every client connection adds a time stamp in the message. The app server's hub sends the original message back to the client. So the propagation delay is easily calculated by every client connection. The time stamp is attached for every message in **broadcast**, **send to group**, and **send to connection**. -In ASRS default mode, app server VMs are also deployed in the same virtual private network as client VMs. +To simulate thousands of concurrent client connections, multiple VMs are created in a virtual private network in Azure. All of these VMs connect to the same Azure SignalR Service instance. -All client VMs and app server VMs are deployed in the same network of the same region to avoid cross region latency. +In the default mode of Azure SignalR Service, app server VMs are deployed in the same virtual private network as client VMs. All client VMs and app server VMs are deployed in the same network of the same region to avoid cross-region latency. ### Performance factors -Theoretically, ASRS capacity is limited by computation resources: CPU, Memory, and Network. For example, more connections to ASRS, more memory ASRS consumed. 
For larger message traffic, for example, every message is larger than 2048 bytes, it requires ASRS to spend more CPU cycles to process as well. Meanwhile, Azure network bandwidth also imposes a limit for maximum traffic. +Theoretically, Azure SignalR Service capacity is limited by computation resources: CPU, memory, and network. For example, more connections to Azure SignalR Service cause the service to use more memory. For larger message traffic (for example, every message is larger than 2,048 bytes), Azure SignalR Service needs to spend more CPU cycles to process traffic. Meanwhile, Azure network bandwidth also imposes a limit for maximum traffic. + +The transport type is another factor that affects performance. The three types are [WebSocket](https://en.wikipedia.org/wiki/WebSocket), [Server-Sent-Event](https://en.wikipedia.org/wiki/Server-sent_events), and [Long-Polling](https://en.wikipedia.org/wiki/Push_technology). + +WebSocket is a bidirectional and full-duplex communication protocol over a single TCP connection. Server-Sent-Event is a unidirectional protocol to push messages from server to client. Long-Polling requires the clients to periodically poll information from the server through an HTTP request. For the same API under the same conditions, WebSocket has the best performance, Server-Sent-Event is slower, and Long-Polling is the slowest. Azure SignalR Service recommends WebSocket by default. -The transport type, [WebSocket](https://en.wikipedia.org/wiki/WebSocket), [Sever-Sent-Event](https://en.wikipedia.org/wiki/Server-sent_events), or [Long-Polling](https://en.wikipedia.org/wiki/Push_technology), is another factor affects performance. WebSocket is a bi-directional and full-duplex communication protocol over a single TCP connection. However, Sever-Sent-Event is uni-directional protocol to push message from server to client. Long-Polling requires the clients to periodically poll information from server through HTTP request. 
For the same API under the same condition, WebSocket has the best performance, Sever-Sent-Event is slower, and Long-Polling is the slowest. ASRS recommends WebSocket by default. +The message routing cost also limits performance. Azure SignalR Service plays a role as a message router, which routes the message from a set of clients or servers to other clients or servers. A different scenario or API requires a different routing policy. -In addition, the message routing cost also limits the performance. ASRS plays a role as a message router, which routes the message from a set of clients or servers to other clients or servers. Different scenario or API requires different routing policy. For **echo**, the client sends a message to itself, and the routing destination is also itself. This pattern has the lowest routing cost. But for **broadcast**, **send-to-group**, **send-to-connection**, ASRS needs to look up the target connections through the internal distributed data structure, which consumes more CPU, Memory and even network bandwidth. As a result, performance is slower than **echo**. +For **echo**, the client sends a message to itself, and the routing destination is also itself. This pattern has the lowest routing cost. But for **broadcast**, **send to group**, and **send to connection**, Azure SignalR Service needs to look up the target connections through the internal distributed data structure. This extra processing uses more CPU, memory, and network bandwidth. As a result, performance is slower. -In the default mode, the app server may also become a bottleneck for certain scenarios, because Azure SignalR SDK has to invoke the Hub, meanwhile it maintains the live connection with every client through heart-beat signals. +In the default mode, the app server might also become a bottleneck for certain scenarios. The Azure SignalR SDK has to invoke the hub, while it maintains a live connection with every client through heartbeat signals. 
-In serverless mode, the client sends message by HTTP post, which is not as efficient as WebSocket. +In serverless mode, the client sends a message by HTTP post, which is not as efficient as WebSocket. -Another factor is protocol: JSON and [MessagePack](https://msgpack.org/index.html). MessagePack is smaller in size and delivered faster than JSON. Intuitively, MessagePack would benefit performance, but ASRS performance is not sensitive with protocols since it does not decode the message payload during message forwarding from clients to servers or vice versa. +Another factor is protocol: JSON and [MessagePack](https://msgpack.org/index.html). MessagePack is smaller in size and delivered faster than JSON. MessagePack might not improve performance, though. The performance of Azure SignalR Service is not sensitive to protocols because it doesn't decode the message payload during message forwarding from clients to servers or vice versa. -In summary, the following factors have impacts on the inbound and outbound capacity: +In summary, the following factors affect the inbound and outbound capacity: -- SKU tier (CPU/Memory) +- SKU tier (CPU/memory) -- number of connections +- Number of connections -- message size +- Message size -- message send rate +- Message send rate -- transport type (WebSocket/Sever-Sent-Event/Long-Polling) +- Transport type (WebSocket, Server-Sent-Event, or Long-Polling) -- use case scenario (routing cost) +- Use-case scenario (routing cost) -- app server and service connections (in server mode) +- App server and service connections (in server mode) -### Find a proper SKU +### Finding a proper SKU -How to evaluate the inbound/outbound capacity or how to find which tier is suitable for a specific use case? +How can you evaluate the inbound/outbound capacity or find which tier is suitable for a specific use case? -We assume the app server is powerful enough and is not the performance bottleneck. 
Then we can check the maximum inbound and outbound bandwidth for every tier.
+Assume that the app server is powerful enough and is not the performance bottleneck. Then, check the maximum inbound and outbound bandwidth for every tier.
#### Quick evaluation
-Let's simplify the evaluation first by assuming some default settings: WebSocket is used, message size is 2048 bytes, sending message every 1 second, and it is in default mode.
+Let's simplify the evaluation first by assuming some default settings:
-Every tier has its own maximum inbound bandwidth and outbound bandwidth. Smooth user experience is not guaranteed once the inbound or outbound exceeds the limit.
+- The transport type is WebSocket.
+- The message size is 2,048 bytes.
+- A message is sent every 1 second.
+- Azure SignalR Service is in the default mode.
+
+Every tier has its own maximum inbound bandwidth and outbound bandwidth. A smooth user experience is not guaranteed after the inbound or outbound traffic exceeds the limit.
**Echo** gives the maximum inbound bandwidth because it has the lowest routing cost. **Broadcast** defines the maximum outbound message bandwidth.
-Do **NOT** exceed the highlighted values in the following two tables.
+Do *not* exceed the highlighted values in the following two tables.
| Echo | Unit1 | Unit2 | Unit5 | Unit10 | Unit20 | Unit50 | Unit100 | |-----------------------------------|-------|-------|-------|--------|--------|--------|---------| | Connections | 1,000 | 2,000 | 5,000 | 10,000 | 20,000 | 50,000 | 100,000 | -| **Inbound bandwidth (byte/s)** | **2M** | **4M** | **10M** | **20M** | **40M** | **100M** | **200M** | -| Outbound bandwidth (byte/s) | 2M | 4M | 10M | 20M | 40M | 100M | 200M | +| **Inbound bandwidth** | **2 MBps** | **4 MBps** | **10 MBps** | **20 MBps** | **40 MBps** | **100 MBps** | **200 MBps** | +| Outbound bandwidth | 2 MBps | 4 MBps | 10 MBps | 20 MBps | 40 MBps | 100 MBps | 200 MBps | | Broadcast | Unit1 | Unit2 | Unit5 | Unit10 | Unit20 | Unit50 | Unit100 | |---------------------------|-------|-------|--------|--------|--------|---------|---------| | Connections | 1,000 | 2,000 | 5,000 | 10,000 | 20,000 | 50,000 | 100,000 | -| Inbound bandwidth (byte/s) | 4K | 4K | 4K | 4K | 4K | 4K | 4K | -| **Outbound Bandwidth (byte/s)** | **4M** | **8M** | **20M** | **40M** | **80M** | **200M** | **400M** | +| Inbound bandwidth | 4 KBps | 4 KBps | 4 KBps | 4 KBps | 4 KBps | 4 KBps | 4 KBps | +| **Outbound bandwidth** | **4 MBps** | **8 MBps** | **20 MBps** | **40 MBps** | **80 MBps** | **200 MBps** | **400 MBps** | -The inbound bandwidth and outbound bandwidth formulas: +*Inbound bandwidth* and *outbound bandwidth* are the total message size per second. Here are the formulas for them: ``` inboundBandwidth = inboundConnections * messageSize / sendInterval outboundBandwidth = outboundConnections * messageSize / sendInterval ``` -*inboundConnections*: the number of connections sending message - -*outboundConnections*: the number of connections receiving message +- *inboundConnections*: The number of connections sending the message. -*messageSize*: the size of a single message (average value). For small message whose size is less than 1024 bytes, it has the similar performance impact as 1024-byte message. 
+- *outboundConnections*: The number of connections receiving the message.
-*sendInterval*: the time of sending one message, typically it is 1 second per message, which means sending one message every second. Smaller sendInterval means sending more message in given time period. For example, 0.5 second per message means sending two messages every second.
+- *messageSize*: The size of a single message (average value). A small message that's less than 1,024 bytes has a performance impact that's similar to a 1,024-byte message.
-*Connections* is the ASRS committed maximum threshold for every tier. If the connection number is increased further, it will suffer from connection throttling.
+- *sendInterval*: The time of sending one message. Typically it's 1 second per message, which means sending one message every second. A smaller interval means sending more messages in a given time period. For example, 0.5 seconds per message means sending two messages every second.
-*Inbound bandwidth* and *Outbound bandwidth* are the total message size per second. Here 'M' means megabyte for simplicity.
+- *Connections*: The committed maximum threshold for Azure SignalR Service for every tier. If the connection number is increased further, connections will be throttled.
#### Evaluation for complex use cases
##### Bigger message size or different sending rate
-The real use case is more complicated. It may send message larger than 2048 bytes, or sending message rate is not one message per second. Let's take unit100's broadcast as an example to find how to evaluate its performance.
+The real use case is more complicated. It might send a message larger than 2,048 bytes, or the sending message rate is not one message per second. Let's take Unit100's broadcast as an example to find how to evaluate its performance.
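As a quick sanity check, the bandwidth formulas above can be turned into a small script. This is only an illustration of the arithmetic; the function names are not part of any SDK:

```python
def bandwidth_bps(connections, message_size, send_interval):
    """inbound/outbound bandwidth = connections * messageSize / sendInterval (bytes per second)."""
    return connections * message_size / send_interval

def max_outbound_connections(outbound_bandwidth, send_interval, message_size):
    """The same formula rearranged to solve for the connection count."""
    return int(outbound_bandwidth * send_interval / message_size)

# Quick-evaluation defaults for Unit1 echo: 1,000 connections, 2,048-byte
# messages, one message per second, which gives about 2 MBps inbound.
unit1_inbound = bandwidth_bps(1_000, 2_048, 1.0)

# Unit100 broadcast: a 400-MBps outbound limit, 20-KB messages, one message
# every 5 seconds, which gives 100,000 connections.
unit100_connections = max_outbound_connections(400 * 10**6, 5, 20 * 10**3)
```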
-The following table shows a real case of **broadcast**, but the message size, connection count, and message sending rate are different from what we assumed in the previous section. The question is how we can deduce any of those items (message size, connection count, or message sending rate) if we only know 2 of them.
+The following table shows a real use case of **broadcast**. But the message size, connection count, and message sending rate are different from what we assumed in the previous section. The question is how we can deduce any of those items (message size, connection count, or message sending rate) if we know only two of them.
-| Broadcast | Message size (byte) | Inbound (message/s) | Connections | Send intervals (second) |
+| Broadcast | Message size | Inbound messages per second | Connections | Send intervals |
|---|---------------------|--------------------------|-------------|-------------------------|
-| 1 | 20 K | 1 | 100,000 | 5 |
-| 2 | 256 K | 1 | 8,000 | 5 |
+| 1 | 20 KB | 1 | 100,000 | 5 sec |
+| 2 | 256 KB | 1 | 8,000 | 5 sec |
-The following formula is easily to be inferred based on the previous existing formula:
+The following formula is easy to infer based on the previous formula:
```
outboundConnections = outboundBandwidth * sendInterval / messageSize
```
-For unit100, we know the max outbound bandwidth is 400M from previous table,
-then for 20-K message size, the max outbound connections should be 400M \* 5 / 20 K =
+For Unit100, the maximum outbound bandwidth is 400 MBps from the previous table. For a 20-KB message size, the maximum outbound connections should be 400 MBps \* 5 / 20 KB =
100,000, which matches the real value.
##### Mixed use cases
-The real use case typically mixes the four basic use cases together: **echo**, **broadcast**, **send to group**, or **send to connection**.
The methodology used to evaluate the capacity is to divide the mixed use cases into four basic use cases, **calculate the maximum inbound and outbound message bandwidth** using the above formulas separately, and sum them to get the total maximum inbound/outbound bandwidth. Then pick up the proper tier from the maximum inbound/outbound bandwidth tables.
+The real use case typically mixes the four basic use cases together: **echo**, **broadcast**, **send to group**, and **send to connection**. The methodology that you use to evaluate the capacity is to:
+
+1. Divide the mixed use cases into four basic use cases.
+1. Calculate the maximum inbound and outbound message bandwidth by using the preceding formulas separately.
+1. Sum the bandwidth calculations to get the total maximum inbound/outbound bandwidth.
-Meanwhile, for sending message to hundreds or thousands of small groups, or thousands of clients sending message to each other, the routing cost will become dominant. This impact should be taken into account. More details are covered in the following "Case study" sections.
+Then pick the proper tier from the maximum inbound/outbound bandwidth tables.
-For the use case of sending message to clients, make sure the app server is **NOT** the bottleneck. "Case study" section gives the guideline about how many app servers you need and how many server connections should be configured.
+> [!NOTE]
+> For sending a message to hundreds or thousands of small groups, or for thousands of clients sending a message to each other, the routing cost will become dominant. Take this impact into account.
+
+For the use case of sending a message to clients, make sure that the app server is *not* the bottleneck. The following "Case study" section gives guidelines about how many app servers you need and how many server connections you should configure.
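The evaluation methodology above can be sketched as a short script. The tier limits come from the broadcast table earlier; the helper function is illustrative, not part of any SDK:

```python
# Maximum outbound bandwidth per tier, in bytes per second (broadcast table).
TIER_MAX_OUTBOUND = {
    "Unit1": 4_000_000, "Unit2": 8_000_000, "Unit5": 20_000_000,
    "Unit10": 40_000_000, "Unit20": 80_000_000,
    "Unit50": 200_000_000, "Unit100": 400_000_000,
}

def pick_tier(scenario_bandwidths):
    """Sum per-scenario outbound bandwidth, then pick the smallest sufficient tier."""
    total = sum(scenario_bandwidths)
    for tier, limit in TIER_MAX_OUTBOUND.items():
        if total <= limit:
            return tier
    raise ValueError("The demand exceeds the largest tier.")

# Example: an app that mixes broadcast (30 MBps) and send to group (20 MBps)
# needs 50 MBps in total, so the smallest sufficient tier is Unit20.
tier = pick_tier([30_000_000, 20_000_000])
```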
## Case study
-The following sections go through four typical use cases for WebSocket transport: **echo**, **broadcast**, **send-to-group**, and **send-to-connection**. For each scenario, it lists the current ASRS inbound and outbound capacity, meanwhile explains what is the main factors on performance.
+The following sections go through four typical use cases for WebSocket transport: **echo**, **broadcast**, **send to group**, and **send to connection**. For each scenario, the section lists the current inbound and outbound capacity for Azure SignalR Service. It also explains the main factors that affect performance.
-In default mode, App server, through Azure SignalR Service SDK by default, creates five server connections with ASRS. In the performance test result below, server connections are
-increased to 15 (or more for broadcast and sending message to big group).
+In the default mode, the app server creates five server connections with Azure SignalR Service. The app server uses the Azure SignalR Service SDK by default. In the following performance test results, server connections are increased to 15 (or more for broadcasting and sending a message to a big group).
-Different use cases have different requirement on app servers. **Broadcast** needs small number of app servers. **Echo** or **send-to-connection** needs many app servers.
+Different use cases have different requirements for app servers. **Broadcast** needs a small number of app servers. **Echo** or **send to connection** needs many app servers.
-In all use cases, the default message size is 2048 bytes, and message send
-interval is 1 second.
+In all use cases, the default message size is 2,048 bytes, and the message send interval is 1 second.
-## Default mode
+### Default mode
-Clients, web app servers, and ASRS are involved in this mode. Every client stands for a single connection.
+Clients, web app servers, and Azure SignalR Service are involved in the default mode.
Every client stands for a single connection.
-### Echo
+#### Echo
-Firstly, web apps connect to ASRS. Secondly, many clients connect to web app, which redirect the clients to ASRS with the access token and endpoint. Then, clients establish WebSocket connections with ASRS.
+First, a web app connects to Azure SignalR Service. Second, many clients connect to the web app, which redirects the clients to Azure SignalR Service with the access token and endpoint. Then, the clients establish WebSocket connections with Azure SignalR Service.
-After all clients establish connections, they start sending message, which contains a timestamp to the specific Hub every second. The Hub echoes the message back to its original client. Every client calculates the latency when it receives the echo message back.
+After all clients establish connections, they start sending a message that contains a time stamp to the specific hub every second. The hub echoes the message back to its original client. Every client calculates the latency when it receives the echo message back.
-The steps 5\~8 (red highlighted traffic) are in a loop, which will run for a
-default duration (5 minutes) and get the statistic of all message latency.
-The performance guide shows the maximum client connection number.
+In the following diagram, steps 5 through 8 (red highlighted traffic) are in a loop. The loop runs for a default duration (5 minutes) and gets the latency statistics of all messages.
-![Echo](./media/signalr-concept-performance/echo.png)
+![Traffic for the echo use case](./media/signalr-concept-performance/echo.png)
-**Echo**'s behavior determines that the maximum inbound bandwidth is equal to maximum outbound bandwidth. See the following table.
+The behavior of **echo** determines that the maximum inbound bandwidth is equal to the maximum outbound bandwidth. For details, see the following table.
| Echo | Unit1 | Unit2 | Unit5 | Unit10 | Unit20 | Unit50 | Unit100 | |-----------------------------------|-------|-------|-------|--------|--------|--------|---------| | Connections | 1,000 | 2,000 | 5,000 | 10,000 | 20,000 | 50,000 | 100,000 | -| Inbound/Outbound (message/s) | 1,000 | 2,000 | 5,000 | 10,000 | 20,000 | 50,000 | 100,000 | -| Inbound/Outbound bandwidth (byte/s) | 2M | 4M | 10M | 20M | 40M | 100M | 200M | +| Inbound/outbound messages per second | 1,000 | 2,000 | 5,000 | 10,000 | 20,000 | 50,000 | 100,000 | +| Inbound/outbound bandwidth | 2 MBps | 4 MBps | 10 MBps | 20 MBps | 40 MBps | 100 MBps | 200 MBps | -In this use case, every client invokes the hub defined in the app server. The hub just calls the method defined in the original client side. This hub is the most light weighed hub for **echo**. +In this use case, every client invokes the hub defined in the app server. The hub just calls the method defined in the original client side. This hub is the most lightweight hub for **echo**. ``` public void Echo(IDictionary data) @@ -212,7 +222,7 @@ In this use case, every client invokes the hub defined in the app server. The hu } ``` -Even for this simple hub, the traffic pressure on app server is also prominent as the **echo** inbound message increases. Therefore, it requires many app servers for large SKU tiers. The following table lists the app server count for every tier. +Even for this simple hub, the traffic pressure on the app server is prominent as the **echo** inbound message load increases. This traffic pressure requires many app servers for large SKU tiers. The following table lists the app server count for every tier. 
| Echo | Unit1 | Unit2 | Unit5 | Unit10 | Unit20 | Unit50 | Unit100 | @@ -221,28 +231,27 @@ Even for this simple hub, the traffic pressure on app server is also prominent a | App server count | 2 | 2 | 2 | 3 | 3 | 10 | 20 | > [!NOTE] -> -> The client connection number, message size, message sending rate, SKU tier and app server's CPU/Memory have impact on overall performance of **echo**. +> The client connection number, message size, message sending rate, SKU tier, and CPU/memory of the app server affect the overall performance of **echo**. -### Broadcast +#### Broadcast -For **broadcast**, when web app receives the message, it broadcasts to all clients. The more clients to broadcast, the more message traffic to all clients. See the following diagram. +For **broadcast**, when the web app receives the message, it broadcasts to all clients. The more clients there are to broadcast, the more message traffic there is to all clients. See the following diagram. -![Broadcast](./media/signalr-concept-performance/broadcast.png) +![Traffic for the broadcast use case](./media/signalr-concept-performance/broadcast.png) -The characteristic of broadcast is that there are a small number of clients broadcasting, which means the inbound message bandwidth is small, but the outbound bandwidth is huge. The outbound message bandwidth increases as the client connection or broadcast rate increases. +A small number of clients are broadcasting. The inbound message bandwidth is small, but the outbound bandwidth is huge. The outbound message bandwidth increases as the client connection or broadcast rate increases. -The maximum client connections, inbound/outbound message count, and bandwidth are summarized in the following table. +The following table summarizes maximum client connections, inbound/outbound message count, and bandwidth. 
| Broadcast | Unit1 | Unit2 | Unit5 | Unit10 | Unit20 | Unit50 | Unit100 |
|---------------------------|-------|-------|--------|--------|--------|---------|---------|
| Connections | 1,000 | 2,000 | 5,000 | 10,000 | 20,000 | 50,000 | 100,000 |
-| Inbound (message/s) | 2 | 2 | 2 | 2 | 2 | 2 | 2 |
-| Outbound (message/s) | 2,000 | 4,000 | 10,000 | 20,000 | 40,000 | 100,000 | 200,000 |
-| Inbound bandwidth (byte/s) | 4K | 4K | 4K | 4K | 4K | 4K | 4K |
-| Outbound bandwidth (byte/s) | 4M | 8M | 20M | 40M | 80M | 200M | 400M |
+| Inbound messages per second | 2 | 2 | 2 | 2 | 2 | 2 | 2 |
+| Outbound messages per second | 2,000 | 4,000 | 10,000 | 20,000 | 40,000 | 100,000 | 200,000 |
+| Inbound bandwidth | 4 KBps | 4 KBps | 4 KBps | 4 KBps | 4 KBps | 4 KBps | 4 KBps |
+| Outbound bandwidth | 4 MBps | 8 MBps | 20 MBps | 40 MBps | 80 MBps | 200 MBps | 400 MBps |
-The broadcasting clients that post messages are no more than 4, thus requires fewer app servers compared with **echo** since the inbound message amount is small. Two app servers are enough for both SLA and performance consideration. But the default server connections should be increased to avoid unbalanced issue especially for Unit50 and Unit100.
+No more than four broadcasting clients post messages, so they need fewer app servers compared with **echo** because the inbound message amount is small. Two app servers are enough for both SLA and performance considerations. But you should increase the default server connections to avoid imbalance, especially for Unit50 and Unit100.
| Broadcast | Unit1 | Unit2 | Unit5 | Unit10 | Unit20 | Unit50 | Unit100 |
|------------------|-------|-------|-------|--------|--------|--------|---------|
@@ -250,44 +259,42 @@ The broadcasting clients that post messages are no more than fe
| App server count | 2 | 2 | 2 | 2 | 2 | 2 | 2 |
> [!NOTE]
+> Increase the default server connections from 5 to 40 on every app server to avoid possible unbalanced server connections to Azure SignalR Service.
>
-> Increase the default server connections from 5 to 40 on every app server to avoid possible unbalanced server connections to ASRS.
->
-> The client connection number, message size, message sending rate, and SKU tier have impact on overall performance for **broadcast**
+> The client connection number, message size, message sending rate, and SKU tier affect the overall performance for **broadcast**.
-### Send to group
+#### Send to group
-**Send-to-group** has similar traffic pattern except that after clients establishing WebSocket connections with ASRS, they must join groups before they can send message to a specific group. The traffic flow is illustrated by the following diagram.
+The **send to group** use case has a similar traffic pattern to **broadcast**. The difference is that after clients establish WebSocket connections with Azure SignalR Service, they must join groups before they can send a message to a specific group. The following diagram illustrates the traffic flow.
-![Send To Group](./media/signalr-concept-performance/sendtogroup.png)
+![Traffic for the send-to-group use case](./media/signalr-concept-performance/sendtogroup.png)
-Group member and group count are two factors with impact on the performance. To
-simplify the analysis, we define two kinds of groups: small group and big
-group.
+Group member count and group count are two factors that affect performance. To
+simplify the analysis, we define two kinds of groups:
-- `small group`: 10 connections in every group.
The group number is equal to (max
-connection count) / 10. For example, for Unit 1, if there are 1000 connection counts, then we have 1000 / 10 = 100 groups.
+- **Small group**: Every group has 10 connections. The group number is equal to (max
+connection count) / 10. For example, for Unit1, if there are 1,000 connections, then we have 1000 / 10 = 100 groups.
-- `Big group`: Group number is always 10. The group member count is equal to (max
-connection count) / 10. For example, for Unit 1, if there are 1000 connection counts, then every group has 1000 / 10 = 100 members.
+- **Big group**: The group number is always 10. The group member count is equal to (max
+connection count) / 10. For example, for Unit1, if there are 1,000 connections, then every group has 1000 / 10 = 100 members.
-**Send-to-group** brings routing cost to ASRS because it has to find the target connections through a distributed data structure. As the sending connections increase, the cost increases as well.
+**Send to group** brings a routing cost to Azure SignalR Service because it has to find the target connections through a distributed data structure. As the sending connections increase, the cost increases.
-#### Small group
+##### Small group
-The routing cost is significant for sending message to many small groups. Currently, the ASRS implementation hits routing cost limit at unit50. Adding more CPU and memory does not help, so unit100 cannot improve further by design. If you demand more inbound bandwidth, contact customer support for customization.
+The routing cost is significant for sending messages to many small groups. Currently, the Azure SignalR Service implementation hits the routing cost limit at Unit50. Adding more CPU and memory doesn't help, so Unit100 can't improve further by design. If you need more inbound bandwidth, contact customer support.
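The two group definitions above reduce to simple arithmetic. Here's a minimal sketch; the function names are illustrative only:

```python
def small_groups(max_connections):
    """Small group: 10 connections per group; group count = connections / 10."""
    return {"group_count": max_connections // 10, "members_per_group": 10}

def big_groups(max_connections):
    """Big group: always 10 groups; member count = connections / 10."""
    return {"group_count": 10, "members_per_group": max_connections // 10}

# Unit1 has 1,000 connections: 100 small groups of 10, or 10 big groups of 100.
```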
| Send to small group | Unit1 | Unit2 | Unit5 | Unit10 | Unit20 | Unit50 | Unit100 | |---------------------------|-------|-------|--------|--------|--------|--------|---------| | Connections | 1,000 | 2,000 | 5,000 | 10,000 | 20,000 | 50,000 | 100,000 | Group member count | 10 | 10 | 10 | 10 | 10 | 10 | 10 | Group count | 100 | 200 | 500 | 1,000 | 2,000 | 5,000 | 10,000 -| Inbound (message/s) | 200 | 400 | 1,000 | 2,500 | 4,000 | 7,000 | 7,000 | -| Inbound bandwidth (byte/s) | 400 K | 800 K | 2M | 5M | 8M | 14M | 14M | -| Outbound (message/s) | 2,000 | 4,000 | 10,000 | 25,000 | 40,000 | 70,000 | 70,000 | -| Outbound bandwidth (byte/s) | 4M | 8M | 20M | 50M | 80M | 140M | 140M | +| Inbound messages per second | 200 | 400 | 1,000 | 2,500 | 4,000 | 7,000 | 7,000 | +| Inbound bandwidth | 400 KBps | 800 KBps | 2 MBps | 5 MBps | 8 MBps | 14 MBps | 14 MBps | +| Outbound messages per second | 2,000 | 4,000 | 10,000 | 25,000 | 40,000 | 70,000 | 70,000 | +| Outbound bandwidth | 4 MBps | 8 MBps | 20 MBps | 50 MBps | 80 MBps | 140 MBps | 140 MBps | -There are many client connections calling the hub, therefore, app server number is also critical for performance. The suggested app server count is listed in the following table. +Many client connections are calling the hub, so the app server number is also critical for performance. The following table lists the suggested app server counts. | Send to small group | Unit1 | Unit2 | Unit5 | Unit10 | Unit20 | Unit50 | Unit100 | |------------------|-------|-------|-------|--------|--------|--------|---------| @@ -295,24 +302,23 @@ There are many client connections calling the hub, therefore, app server number | App server count | 2 | 2 | 2 | 3 | 3 | 10 | 20 | > [!NOTE] -> -> The client connection number, message size, message sending rate, routing cost, SKU tier and app server's CPU/Memory have impact on overall performance of **send-to-small-group**. 
+> The client connection number, message size, message sending rate, routing cost, SKU tier, and CPU/memory of the app server affect the overall performance of **send to small group**. -#### Big group +##### Big group -For **send-to-big-group**, the outbound bandwidth becomes the bottleneck before hitting the routing cost limit. The following table lists the maximum outbound bandwidth, which is almost the same as **broadcast**. +For **send to big group**, the outbound bandwidth becomes the bottleneck before hitting the routing cost limit. The following table lists the maximum outbound bandwidth, which is almost the same as that for **broadcast**. | Send to big group | Unit1 | Unit2 | Unit5 | Unit10 | Unit20 | Unit50 | Unit100 | |---------------------------|-------|-------|--------|--------|--------|---------|---------| | Connections | 1,000 | 2,000 | 5,000 | 10,000 | 20,000 | 50,000 | 100,000 | Group member count | 100 | 200 | 500 | 1,000 | 2,000 | 5,000 | 10,000 | Group count | 10 | 10 | 10 | 10 | 10 | 10 | 10 -| Inbound (message/s) | 20 | 20 | 20 | 20 | 20 | 20 | 20 | -| Inbound bandwidth (byte/s) | 80 K | 40 K | 40 K | 20 K | 40 K | 40 K | 40 K | -| Outbound (message/s) | 2,000 | 4,000 | 10,000 | 20,000 | 40,000 | 100,000 | 200,000 | -| Outbound bandwidth (byte/s) | 8M | 8M | 20M | 40M | 80M | 200M | 400M | +| Inbound messages per second | 20 | 20 | 20 | 20 | 20 | 20 | 20 | +| Inbound bandwidth | 80 KBps | 40 KBps | 40 KBps | 20 KBps | 40 KBps | 40 KBps | 40 KBps | +| Outbound messages per second | 2,000 | 4,000 | 10,000 | 20,000 | 40,000 | 100,000 | 200,000 | +| Outbound bandwidth | 8 MBps | 8 MBps | 20 MBps | 40 MBps | 80 MBps | 200 MBps | 400 MBps | -The sending connection count is no more than 40, the burden on app server is small, thus the suggested web app number is also small. +The sending connection count is no more than 40. The burden on the app server is small, so the suggested number of web apps is small. 
| Send to big group | Unit1 | Unit2 | Unit5 | Unit10 | Unit20 | Unit50 | Unit100 |
|------------------|-------|-------|-------|--------|--------|--------|---------|
@@ -320,31 +326,29 @@ The sending connection count is no more than 40, the burden on app server is sma
| App server count | 2 | 2 | 2 | 2 | 2 | 2 | 2 |
> [!NOTE]
->
-> Increase the default server connections from 5 to 40 on every app server to
-> avoid possible unbalanced server connections to ASRS.
+> Increase the default server connections from 5 to 40 on every app server to avoid possible unbalanced server connections to Azure SignalR Service.
>
-> The client connection number, message size, message sending rate, routing cost, and SKU tier have impact on overall performance of **send-to-big-group**.
+> The client connection number, message size, message sending rate, routing cost, and SKU tier affect the overall performance of **send to big group**.
-### Send to connection
+#### Send to connection
-In this use case, when clients establish the connections to ASRS, every client calls a special hub to get their own connection ID. The performance benchmark is responsible to collect all connection IDs, shuffle them and reassign them to all clients as a sending target. The clients keep sending message to the target connection until the performance test finishes.
+In the **send to connection** use case, when clients establish the connections to Azure SignalR Service, every client calls a special hub to get its own connection ID. The performance benchmark collects all connection IDs, shuffles them, and reassigns them to all clients as a sending target. The clients keep sending messages to the target connection until the performance test finishes.
-![Send to client](./media/signalr-concept-performance/sendtoclient.png)
+![Traffic for the send-to-client use case](./media/signalr-concept-performance/sendtoclient.png)
-The routing cost for **Send-to-connection** is similar as **send-to-small-group**.
+The routing cost for **send to connection** is similar to the cost for **send to small group**. -As the connection count increases, the overall performance is limited by routing cost. Unit 50 has reached the limit. As a result, unit 100 cannot improve further. +As the connection count increases, the routing cost limits overall performance. Unit50 has reached the limit. As a result, Unit100 can't improve further. -The following table is a statistic summary after many rounds of running **send-to-connection** benchmark +The following table is a statistical summary after many rounds of running the **send to connection** benchmark. | Send to connection | Unit1 | Unit2 | Unit5 | Unit10 | Unit20 | Unit50 | Unit100 | |------------------------------------|-------|-------|-------|--------|--------|-----------------|-----------------| | Connections | 1,000 | 2,000 | 5,000 | 10,000 | 20,000 | 50,000 | 100,000 | -| Inbound/ Outbound (message/s) | 1,000 | 2,000 | 5,000 | 8,000 | 9,000 | 20,000 | 20,000 | -| Inbound/ Outbound bandwidth (byte/s) | 2M | 4M | 10M | 16M | 18M | 40M | 40M | +| Inbound/outbound messages per second | 1,000 | 2,000 | 5,000 | 8,000 | 9,000 | 20,000 | 20,000 | +| Inbound/outbound bandwidth | 2 MBps | 4 MBps | 10 MBps | 16 MBps | 18 MBps | 40 MBps | 40 MBps | -This use case requires high load on app server side. See the suggested app server count in the following table. +This use case requires high load on the app server side. See the suggested app server count in the following table. | Send to connection | Unit1 | Unit2 | Unit5 | Unit10 | Unit20 | Unit50 | Unit100 | |------------------|-------|-------|-------|--------|--------|--------|---------| @@ -352,84 +356,82 @@ This use case requires high load on app server side. 
See the suggested app serve | App server count | 2 | 2 | 2 | 3 | 3 | 10 | 20 | > [!NOTE] -> -> The client connection number, message size, message sending rate, routing cost, SKU tier and app server's CPU/Memory have impact on overall performance of **send-to-connection**. +> The client connection number, message size, message sending rate, routing cost, SKU tier, and CPU/memory for the app server affect the overall performance of **send to connection**. -### ASP.NET SignalR echo/broadcast/send-to-connection +#### ASP.NET SignalR echo, broadcast, and send to small group -ASRS provides the same performance capacity for ASP.NET SignalR. This section gives the suggested web app count for ASP.NET SignalR **echo**, **broadcast**, and **send-to-small-group**. +Azure SignalR Service provides the same performance capacity for ASP.NET SignalR. -The performance test uses Azure Web App of [Standard Service Plan S3](https://azure.microsoft.com/pricing/details/app-service/windows/) for ASP.NET SignalR. +The performance test uses Azure Web Apps from [Standard Service Plan S3](https://azure.microsoft.com/pricing/details/app-service/windows/) for ASP.NET SignalR. -- `echo` +The following table gives the suggested web app count for ASP.NET SignalR **echo**. | Echo | Unit1 | Unit2 | Unit5 | Unit10 | Unit20 | Unit50 | Unit100 | |------------------|-------|-------|-------|--------|--------|--------|---------| | Connections | 1,000 | 2,000 | 5,000 | 10,000 | 20,000 | 50,000 | 100,000 | | App server count | 2 | 2 | 4 | 4 | 8 | 32 | 40 | -- `broadcast` +The following table gives the suggested web app count for ASP.NET SignalR **broadcast**. 
| Broadcast | Unit1 | Unit2 | Unit5 | Unit10 | Unit20 | Unit50 | Unit100 |
|------------------|-------|-------|-------|--------|--------|--------|---------|
| Connections | 1,000 | 2,000 | 5,000 | 10,000 | 20,000 | 50,000 | 100,000 |
| App server count | 2 | 2 | 2 | 2 | 2 | 2 | 2 |
-- `Send-to-small-group`
+The following table gives the suggested web app count for ASP.NET SignalR **send to small group**.
| Send to small group | Unit1 | Unit2 | Unit5 | Unit10 | Unit20 | Unit50 | Unit100 |
|------------------|-------|-------|-------|--------|--------|--------|---------|
| Connections | 1,000 | 2,000 | 5,000 | 10,000 | 20,000 | 50,000 | 100,000 |
| App server count | 2 | 2 | 4 | 4 | 8 | 32 | 40 |
-## Serverless mode
+### Serverless mode
-Clients and ASRS are involved in this mode. Every client stands for a single connection. The client sends messages through REST API to another client or broadcast messages to all.
+Clients and Azure SignalR Service are involved in serverless mode. Every client stands for a single connection. The client sends messages through the REST API to another client or broadcasts messages to all clients.
-Sending high-density messages through REST API is not as efficient as WebSocket, because it requires to build a new HTTP connection every time - an extra cost in serverless mode.
+Sending high-density messages through the REST API is not as efficient as using WebSocket. It requires you to build a new HTTP connection every time, and that's an extra cost in serverless mode.
-### Broadcast through REST API
-All clients establish WebSocket connections with ASRS. Then some clients start broadcasting through REST API. The message sending (inbound) are all through HTTP Post, which is not efficient compared with WebSocket.
+#### Broadcast through REST API
+All clients establish WebSocket connections with Azure SignalR Service. Then some clients start broadcasting through the REST API.
The message sending (inbound) is all through HTTP Post, which is not efficient compared with WebSocket. | Broadcast through REST API | Unit1 | Unit2 | Unit5 | Unit10 | Unit20 | Unit50 | Unit100 | |---------------------------|-------|-------|--------|--------|--------|---------|---------| | Connections | 1,000 | 2,000 | 5,000 | 10,000 | 20,000 | 50,000 | 100,000 | -| Inbound (message/s) | 2 | 2 | 2 | 2 | 2 | 2 | 2 | -| Outbound (message/s) | 2,000 | 4,000 | 10,000 | 20,000 | 40,000 | 100,000 | 200,000 | -| Inbound bandwidth (byte/s) | 4K | 4K | 4K | 4K | 4K | 4K | 4K | -| Outbound bandwidth (byte/s) | 4M | 8M | 20M | 40M | 80M | 200M | 400M | +| Inbound messages per second | 2 | 2 | 2 | 2 | 2 | 2 | 2 | +| Outbound messages per second | 2,000 | 4,000 | 10,000 | 20,000 | 40,000 | 100,000 | 200,000 | +| Inbound bandwidth | 4 KBps | 4 KBps | 4 KBps | 4 KBps | 4 KBps | 4 KBps | 4 KBps | +| Outbound bandwidth | 4 MBps | 8 MBps | 20 MBps | 40 MBps | 80 MBps | 200 MBps | 400 MBps | -### Send to user through REST API -The benchmark assigns user names to all of the clients before they start connecting to ASRS. After the clients established WebSocket connections with ASRS, they start sending messages to others through HTTP Post. +#### Send to user through REST API +The benchmark assigns usernames to all of the clients before they start connecting to Azure SignalR Service. After the clients establish WebSocket connections with Azure SignalR Service, they start sending messages to others through HTTP Post. 
| Send to user through REST API | Unit1 | Unit2 | Unit5 | Unit10 | Unit20 | Unit50 | Unit100 | |---------------------------|-------|-------|--------|--------|--------|---------|---------| | Connections | 1,000 | 2,000 | 5,000 | 10,000 | 20,000 | 50,000 | 100,000 | -| Inbound (message/s) | 300 | 600 | 900 | 1,300 | 2,000 | 10,000 | 18,000 | -| Outbound (message/s) | 300 | 600 | 900 | 1,300 | 2,000 | 10,000 | 18,000 | -| Inbound bandwidth (byte/s) | 600 K | 1.2M | 1.8M | 2.6M | 4M | 10M | 36M | -| Outbound bandwidth (byte/s) | 600 K | 1.2M | 1.8M | 2.6M | 4M | 10M | 36M | +| Inbound messages per second | 300 | 600 | 900 | 1,300 | 2,000 | 10,000 | 18,000 | +| Outbound messages per second | 300 | 600 | 900 | 1,300 | 2,000 | 10,000 | 18,000 | +| Inbound bandwidth | 600 KBps | 1.2 MBps | 1.8 MBps | 2.6 MBps | 4 MBps | 10 MBps | 36 MBps | +| Outbound bandwidth | 600 KBps | 1.2 MBps | 1.8 MBps | 2.6 MBps | 4 MBps | 10 MBps | 36 MBps | ## Performance test environments -The performance test for all use cases listed above were conducted in Azure -environment. At most 50 client VMs, and 20 app server VMs are used. +For all use cases listed earlier, we conducted the performance tests in an Azure environment. At most, we used 50 client VMs and 20 app server VMs. Here are some details: -Client VM size: StandardDS2V2 (2 vCPU, 7G memory) +- Client VM size: StandardDS2V2 (2 vCPU, 7G memory) -App server VM size: StandardF4sV2 (4 vCPU, 8G memory) +- App server VM size: StandardF4sV2 (4 vCPU, 8G memory) -Azure SignalR SDK server connections: 15 +- Azure SignalR SDK server connections: 15 ## Performance tools -https://github.com/Azure/azure-signalr-bench/tree/master/SignalRServiceBenchmarkPlugin +You can find performance tools for Azure SignalR Service on [GitHub](https://github.com/Azure/azure-signalr-bench/tree/master/SignalRServiceBenchmarkPlugin). ## Next steps -In this article, you get an overview of SignalR Service performance in typical use case scenarios. 
+In this article, you got an overview of Azure SignalR Service performance in typical use-case scenarios. -To get more details on the internals of SignalR Service and scaling for SignalR Service, read the following guide. +To get details on the internals of the service and scaling for it, read the following guides: -* [Azure SignalR Service Internals](signalr-concept-internals.md) -* [Azure SignalR Service Scaling](signalr-howto-scale-multi-instances.md) \ No newline at end of file +* [Azure SignalR Service internals](signalr-concept-internals.md) +* [Azure SignalR Service scaling](signalr-howto-scale-multi-instances.md) diff --git a/articles/azure-stack/azure-stack-download-azure-marketplace-item.md b/articles/azure-stack/azure-stack-download-azure-marketplace-item.md index 828ef705f4e77..565b8ca0077a0 100644 --- a/articles/azure-stack/azure-stack-download-azure-marketplace-item.md +++ b/articles/azure-stack/azure-stack-download-azure-marketplace-item.md @@ -128,6 +128,8 @@ There are two parts to this scenario: Export-AzSOfflineMarketplaceItem -Destination "Destination folder path in quotes" ``` + Note that `Export-AzSOfflineMarketplaceItem` has an additional `-cloud` flag that specifies the cloud environment. By default, it is **azurecloud**. + 6. 
When the tool runs, you should see a screen similar to the following image, with the list of available marketplace items: [![Azure Marketplace items popup](media/azure-stack-download-azure-marketplace-item/image05.png "Azure Marketplace items")](media/azure-stack-download-azure-marketplace-item/image05.png#lightbox) diff --git a/articles/azure-stack/azure-stack-marketplace-azure-items.md b/articles/azure-stack/azure-stack-marketplace-azure-items.md index eaa9907ff91d2..a87eb52fb706b 100644 --- a/articles/azure-stack/azure-stack-marketplace-azure-items.md +++ b/articles/azure-stack/azure-stack-marketplace-azure-items.md @@ -13,7 +13,7 @@ ms.workload: na ms.tgt_pltfrm: na ms.devlang: na ms.topic: article -ms.date: 03/28/2019 +ms.date: 04/12/2019 ms.author: sethm ms.reviewer: unknown ms.lastreviewed: 01/29/2019 @@ -35,7 +35,7 @@ Whenever there are updates to virtual machine (VM) extensions you use, you shoul | ![Microsoft Antimalware Extension](media/azure-stack-marketplace-azure-items/cse.png) | [Microsoft Antimalware Extension](https://docs.microsoft.com/azure/security/azure-security-antimalware)| Microsoft Antimalware for Azure is a single-agent solution for applications and tenant environments, designed to run in the background without human intervention. **Download this update to the in-box version of the Antimalware Extension.** | Microsoft | Windows | | ![Microsoft Azure Diagnostic Extension](media/azure-stack-marketplace-azure-items/cse.png) | [Microsoft Azure Diagnostic Extension](https://docs.microsoft.com/azure/virtual-machines/extensions/diagnostics-windows)| Microsoft Azure Diagnostics is the capability within Azure that enables the collection of diagnostic data on a deployed application. 
**Download this update to the in-box version of the Diagnostic Extension for Windows.** | Microsoft | Windows | | ![Microsoft Monitoring Extension](media/azure-stack-marketplace-azure-items/cse.png) | [Microsoft Monitoring Agent Extension](https://docs.microsoft.com/azure/virtual-machines/extensions/oms-windows)| Microsoft Monitoring Agent Extension is used with OMS to provide virtual machine monitoring capability. **Download this update to the in-box version of the Monitoring Agent Extension for Windows.** | Microsoft | Windows | -|![Custom Script Extension](media/azure-stack-marketplace-azure-items/cse.png) | [Custom Script Extension](https://docs.microsoft.com/azure/virtual-machines/windows/extensions-customscript)|**Download this update to the in-box version of the Custom Script Extension for Linux. There are multiple versions of this extension and you should download both 1.5.2.1 and 2.0.x.** | Microsoft | Linux | +|![Custom Script Extension](media/azure-stack-marketplace-azure-items/cse.png) | - [Custom Script Extension (version 1, deprecated)](https://docs.microsoft.com/azure/virtual-machines/extensions/custom-script-linuxostc) - [Custom Script Extension (version 2)](https://docs.microsoft.com/azure/virtual-machines/extensions/custom-script-linux) |**Download this update to the in-box version of the Custom Script Extension for Linux. There are multiple versions of this extension and you should download both 1.5.2.1 and 2.0.x.** | Microsoft | Linux | | ![VM Access for Linux](media/azure-stack-marketplace-azure-items/cse.png) | [VM Access for Linux](https://azure.microsoft.com/blog/using-vmaccess-extension-to-reset-login-credentials-for-linux-vm/)| **Download this update to the in-box version of the VMAccess for Linux Extension. 
This update is important if you plan to use Debian Linux VMs.** | Microsoft | Linux | | ![Acronis Backup Extension for Linux](media/azure-stack-marketplace-azure-items/acronis.png) | [Acronis Backup Extension for Linux](https://azuremarketplace.microsoft.com/marketplace/apps/Acronis.acronis-backup-lin-arm) | The Acronis Backup Extension for Microsoft Azure is part of the Acronis Backup family of data protection products. | Acronis International GmbH. | Linux | | ![Acronis Backup Extension for Windows](media/azure-stack-marketplace-azure-items/acronis.png) | [Acronis Backup Extension for Windows](https://azuremarketplace.microsoft.com/marketplace/apps/Acronis.acronis-backup-win-arm) | The Acronis Backup Extension for Microsoft Azure is part of the Acronis Backup family of data protection products. | Acronis International GmbH. | Windows | diff --git a/articles/backup/backup-sql-server-database-azure-vms.md b/articles/backup/backup-sql-server-database-azure-vms.md index 18c7041c2b056..6ad3a5d113589 100644 --- a/articles/backup/backup-sql-server-database-azure-vms.md +++ b/articles/backup/backup-sql-server-database-azure-vms.md @@ -109,7 +109,7 @@ Discover databases running on the VM. - Azure Backup creates the service account **NT Service\AzureWLBackupPluginSvc** on the VM. - All backup and restore operations use the service account. - **NT Service\AzureWLBackupPluginSvc** needs SQL sysadmin permissions. All SQL Server VMs created in the Azure Marketplace come with the **SqlIaaSExtension** installed. The **AzureBackupWindowsWorkload** extension uses the **SQLIaaSExtension** to automatically get the required permissions. - - If you didn't create the VM from the marketplace, then the VM doesn't have the **SqlIaaSExtension** installed, and the discovery operation fails with the error message **UserErrorSQLNoSysAdminMembership**. Follow the instructions in [#fix-sql-sysadmin-permissions] to fix this issue. 
+ - If you didn't create the VM from the marketplace, then the VM doesn't have the **SqlIaaSExtension** installed, and the discovery operation fails with the error message **UserErrorSQLNoSysAdminMembership**. Follow the [instructions](backup-azure-sql-database.md#fix-sql-sysadmin-permissions) to fix this issue. ![Select the VM and database](./media/backup-azure-sql-database/registration-errors.png) diff --git a/articles/blockchain/workbench/configuration.md b/articles/blockchain/workbench/configuration.md index 5bb1ff6aacdb0..1860e6b46297b 100644 --- a/articles/blockchain/workbench/configuration.md +++ b/articles/blockchain/workbench/configuration.md @@ -5,7 +5,7 @@ services: azure-blockchain keywords: author: PatAltimore ms.author: patricka -ms.date: 01/08/2019 +ms.date: 04/15/2019 ms.topic: article ms.service: azure-blockchain ms.reviewer: brendal @@ -13,7 +13,7 @@ manager: femila --- # Azure Blockchain Workbench configuration reference - Azure Blockchain Workbench applications are multi-party workflows defined by configuration metadata and smart contract code. Configuration metadata defines the high-level workflows and interaction model of the blockchain application. Smart contracts define the business logic of the blockchain application. Workbench uses configuration and smart contract code to generate blockchain application user experiences. +Azure Blockchain Workbench applications are multi-party workflows defined by configuration metadata and smart contract code. Configuration metadata defines the high-level workflows and interaction model of the blockchain application. Smart contracts define the business logic of the blockchain application. Workbench uses configuration and smart contract code to generate blockchain application user experiences. 
Configuration metadata specifies the following information for each blockchain application: diff --git a/articles/blockchain/workbench/create-app.md b/articles/blockchain/workbench/create-app.md index 8b03da9492bbf..6a1eb2f919612 100644 --- a/articles/blockchain/workbench/create-app.md +++ b/articles/blockchain/workbench/create-app.md @@ -5,7 +5,7 @@ services: azure-blockchain keywords: author: PatAltimore ms.author: patricka -ms.date: 01/08/2019 +ms.date: 04/15/2019 ms.topic: tutorial ms.service: azure-blockchain ms.reviewer: brendal diff --git a/articles/blockchain/workbench/deploy.md b/articles/blockchain/workbench/deploy.md index 4d348c8ea1829..d57f7af798ae7 100644 --- a/articles/blockchain/workbench/deploy.md +++ b/articles/blockchain/workbench/deploy.md @@ -5,7 +5,7 @@ services: azure-blockchain keywords: author: PatAltimore ms.author: patricka -ms.date: 1/8/2019 +ms.date: 04/15/2019 ms.topic: article ms.service: azure-blockchain ms.reviewer: brendal diff --git a/articles/blockchain/workbench/use-api.md b/articles/blockchain/workbench/use-api.md index d0e8ef724811f..ffd2cd56e3bb5 100644 --- a/articles/blockchain/workbench/use-api.md +++ b/articles/blockchain/workbench/use-api.md @@ -5,7 +5,7 @@ services: azure-blockchain keywords: author: PatAltimore ms.author: patricka -ms.date: 02/21/2019 +ms.date: 04/15/2019 ms.topic: article ms.service: azure-blockchain ms.reviewer: zeyadr diff --git a/articles/blockchain/workbench/use.md b/articles/blockchain/workbench/use.md index 35ae72b86dab7..74a9b0221a1a6 100644 --- a/articles/blockchain/workbench/use.md +++ b/articles/blockchain/workbench/use.md @@ -5,7 +5,7 @@ services: azure-blockchain keywords: author: PatAltimore ms.author: patricka -ms.date: 01/08/2019 +ms.date: 04/15/2019 ms.topic: tutorial ms.service: azure-blockchain ms.reviewer: brendal @@ -36,9 +36,9 @@ You'll learn how to: You need to sign in as a member of the Blockchain Workbench. 
If there are no applications listed, you are a member of Blockchain Workbench but not a member of any applications. The Blockchain Workbench administrator can assign members to applications. -## Create new contract +## Create new contract -To create a new contract, you need to be a member specified as an contract **initiator**. For information defining application roles and initiators for the contract, see [workflows in the configuration overview](configuration.md#workflows). For information on assigning members to application roles, see [add a member to application](manage-users.md#add-member-to-application). +To create a new contract, you need to be a member specified as a contract **initiator**. For information defining application roles and initiators for the contract, see [workflows in the configuration overview](configuration.md#workflows). For information on assigning members to application roles, see [add a member to application](manage-users.md#add-member-to-application). 1. In Blockchain Workbench application section, select the application tile that contains the contract you want to create. A list of active contracts is displayed. 
diff --git a/articles/blockchain/workbench/version-app.md b/articles/blockchain/workbench/version-app.md index f8a5cd3417c29..e2b56264b8d5e 100644 --- a/articles/blockchain/workbench/version-app.md +++ b/articles/blockchain/workbench/version-app.md @@ -5,7 +5,7 @@ services: azure-blockchain keywords: author: PatAltimore ms.author: patricka -ms.date: 1/8/2019 +ms.date: 04/15/2019 ms.topic: article ms.service: azure-blockchain ms.reviewer: brendal diff --git a/articles/cdn/cdn-map-content-to-custom-domain.md b/articles/cdn/cdn-map-content-to-custom-domain.md index 333655a268ce6..73e1cf3873ba8 100644 --- a/articles/cdn/cdn-map-content-to-custom-domain.md +++ b/articles/cdn/cdn-map-content-to-custom-domain.md @@ -47,7 +47,7 @@ Before you can use a custom domain with an Azure CDN endpoint, you must first cr A custom domain and its subdomain can be associated with only a single endpoint at a time. However, you can use different subdomains from the same custom domain for different Azure service endpoints by using multiple CNAME records. You can also map a custom domain with different subdomains to the same CDN endpoint. > [!NOTE] -> Any alias record type can be used for Custom domains if you are using Azure DNS as your domain provider. This walkthrough uses the CNAME record type. If you are using A or AAAA record types just follow the same steps below while replacing CNAME with the record type of your choice. If you're using an alias record to add a root domain as a custom domain and you want to enable SSL, you must use manual validation as described [here](https://docs.microsoft.com/azure/cdn/cdn-custom-ssl?tabs=option-1-default-enable-https-with-a-cdn-managed-certificate#custom-domain-is-not-mapped-to-your-cdn-endpoint) +> Any alias record type can be used for Custom domains if you're using Azure DNS as your domain provider. This walkthrough uses the CNAME record type. 
If you're using A or AAAA record types, follow the same steps below and replace CNAME with the record type of your choice. If you're using an alias record to add a root domain as a custom domain and you want to enable SSL, you must use manual validation as described in [this article](https://docs.microsoft.com/azure/cdn/cdn-custom-ssl?tabs=option-1-default-enable-https-with-a-cdn-managed-certificate#custom-domain-is-not-mapped-to-your-cdn-endpoint). For more information, see [Point zone apex to Azure CDN endpoints](https://docs.microsoft.com/azure/dns/dns-alias#point-zone-apex-to-azure-cdn-endpoints). ## Map the temporary cdnverify subdomain diff --git a/articles/cognitive-services/QnAMaker/How-To/metadata-generateanswer-usage.md b/articles/cognitive-services/QnAMaker/How-To/metadata-generateanswer-usage.md index aa952c9ef7e53..1522dbb7342a6 100644 --- a/articles/cognitive-services/QnAMaker/How-To/metadata-generateanswer-usage.md +++ b/articles/cognitive-services/QnAMaker/How-To/metadata-generateanswer-usage.md @@ -69,7 +69,7 @@ https://{QnA-Maker-endpoint}/knowledgebases/{knowledge-base-ID}/generateAnswer?i |Header|Content-Type|string|The media type of the body sent to the API. Default value is: ``| |Header|Authorization|string|Your endpoint key (EndpointKey xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx).| |Post Body|JSON object|JSON|The question with settings| -|Query string parameter (optional)|`isTest`|boolean|If set to true, returns results from `testkb` Search index instead of published index.| + The JSON body has several settings: @@ -78,6 +78,7 @@ The JSON body has several settings: |`question`|required|string|A user question to be sent to your knowledge base.| |`top`|optional|integer|The number of ranked results to include in the output. The default value is 1.| |`userId`|optional|string|A unique ID to identify the user. 
This ID will be recorded in the chat logs.| +|`isTest`|optional|boolean|If set to true, returns results from `testkb` Search index instead of published index.| |`strictFilters`|optional|string|If specified, tells QnA Maker to return only answers that have the specified metadata.| An example JSON body looks like: @@ -86,6 +87,7 @@ { "question": "qna maker and luis", "top": 6, + "isTest": true, "strictFilters": [ { "name": "category", diff --git a/articles/cognitive-services/Speech-Service/batch-transcription.md b/articles/cognitive-services/Speech-Service/batch-transcription.md index 31ddfadfbead3..b12422feae232 100644 --- a/articles/cognitive-services/Speech-Service/batch-transcription.md +++ b/articles/cognitive-services/Speech-Service/batch-transcription.md @@ -83,6 +83,16 @@ Configuration parameters are provided as JSON: | `PunctuationMode` | Specifies how to handle punctuation in recognition results. Accepted values are `none` which disables punctuation, `dictated` which implies explicit punctuation, `automatic` which lets the decoder deal with punctuation, or `dictatedandautomatic` which implies dictated punctuation marks or automatic. | Optional | | `AddWordLevelTimestamps` | Specifies if word level timestamps should be added to the output. Accepted values are `true` which enables word level timestamps and `false` (the default value) to disable it. | Optional | +### Storage + +Batch transcription supports [Azure Blob storage](https://docs.microsoft.com/azure/storage/blobs/storage-blobs-overview) for reading audio and writing transcriptions to storage. + +## Webhooks + +Polling for transcription status may not be the most performant approach, or provide the best user experience. Instead of polling for status, you can register callbacks, which will notify the client when long-running transcription tasks have completed. + +For more details, see [Webhooks](webhooks.md).
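To see why callbacks help, here's a minimal sketch of the polling loop that a webhook replaces. The `get_status` function and the terminal status values (`Succeeded`/`Failed`) stand in for whatever your transcription API actually returns; this is an illustration, not an official sample.

```python
import time

def poll_until_done(get_status, interval_seconds=30, max_attempts=120):
    """Repeatedly fetch a transcription's status until it reaches a terminal
    state ('Succeeded' or 'Failed' here; assumed values). Each iteration is a
    full HTTP round trip -- the recurring overhead a webhook callback avoids."""
    for _ in range(max_attempts):
        status = get_status()
        if status in ("Succeeded", "Failed"):
            return status
        time.sleep(interval_seconds)
    raise TimeoutError("transcription did not finish in time")

# Example with a stubbed status source instead of a real HTTP call:
states = iter(["NotStarted", "Running", "Succeeded"])
result = poll_until_done(lambda: next(states), interval_seconds=0)
# result == "Succeeded"
```

With a webhook registered, the whole loop disappears: the service calls your endpoint once when the task reaches a terminal state.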
+ ## Sample code The complete sample is available in the [GitHub sample repository](https://aka.ms/csspeech/samples) inside the `samples/batch` subdirectory. @@ -104,10 +114,6 @@ The current sample code doesn't specify a custom model. The service uses the bas > [!NOTE] > For baseline transcriptions, you don't need to declare the ID for the baseline models. If you only specify a language model ID (and no acoustic model ID), a matching acoustic model is automatically selected. If you only specify an acoustic model ID, a matching language model is automatically selected. -### Supported storage - -Currently, only Azure Blob storage is supported. - ## Download the sample You can find the sample in the `samples/batch` directory in the [GitHub sample repository](https://aka.ms/csspeech/samples). diff --git a/articles/cognitive-services/Speech-Service/regions.md b/articles/cognitive-services/Speech-Service/regions.md index 6c0030d4cc5f8..d3aba52082a49 100644 --- a/articles/cognitive-services/Speech-Service/regions.md +++ b/articles/cognitive-services/Speech-Service/regions.md @@ -34,10 +34,10 @@ The Speech SDK is available in these regions for **speech recognition** and **tr West US2 | `westus2` | https://westus2.cris.ai East US | `eastus` | https://eastus.cris.ai East US2 | `eastus2` | https://eastus2.cris.ai - Central US | 'centralus' | https://centralus.cris.ai - North Central US | 'northcentralus' | https://northcentralus.cris.ai - South Central US | 'southcentralus' | https://southcentralus.cris.ai - Central India | 'centralindia' | https://centralindia.cris.ai + Central US | `centralus` | https://centralus.cris.ai + North Central US | `northcentralus` | https://northcentralus.cris.ai + South Central US | `southcentralus` | https://southcentralus.cris.ai + Central India | `centralindia` | https://centralindia.cris.ai East Asia | `eastasia` | https://eastasia.cris.ai South East Asia | `southeastasia` | https://southeastasia.cris.ai Japan East | `japaneast` | 
https://japaneast.cris.ai diff --git a/articles/cognitive-services/Speech-Service/swagger-documentation.md b/articles/cognitive-services/Speech-Service/swagger-documentation.md new file mode 100644 index 0000000000000..a957bfc492e63 --- /dev/null +++ b/articles/cognitive-services/Speech-Service/swagger-documentation.md @@ -0,0 +1,45 @@ +--- +title: Swagger documentation - Speech Services +titleSuffix: Azure Cognitive Services +description: The Swagger documentation can be used to auto-generate SDKs for a number of programming languages. All operations in our service are supported by Swagger. +services: cognitive-services +author: PanosPeriorellis +manager: nitinme +ms.service: cognitive-services +ms.subservice: speech-service +ms.topic: overview +ms.date: 04/12/2019 +ms.author: erhopf +--- + +# Swagger documentation + +The Speech Services offer a Swagger specification to interact with a handful of REST APIs used to import data, create models, test model accuracy, create custom endpoints, queue up batch transcriptions, and manage subscriptions. Most operations available through the Custom Speech portal can be completed programmatically using these APIs. + +> [!NOTE] +> Both Speech-to-Text and Text-to-Speech operations are available as REST APIs, which are in turn documented in the Swagger specification. + +## Generating code from the Swagger specification + +The [Swagger specification](https://cris.ai/swagger/ui/index) has options that allow you to quickly test for various paths. However, sometimes it's desirable to generate code for all paths, creating a single library of calls that you can base future solutions on. Let's take a look at the process to generate a Python library. + +You'll need to set Swagger to the same region as your Speech Service subscription. You can confirm your region in the Azure portal under your Speech Services resource. For a complete list of supported regions, see [Regions](regions.md). + +1. Go to https://editor.swagger.io +2.
Click **File**, then click **Import** +3. Enter the Swagger URL including the region for your Speech Services subscription `https://<your-region>.cris.ai/docs/v2.0/swagger` +4. Click **Generate Client** and select Python +5. Save the client library + +You can use the Python library that you generated with the [Speech Services samples on GitHub](https://aka.ms/csspeech/samples). + +## Reference docs + +* [REST (Swagger): Batch transcription and customization](https://westus.cris.ai/swagger/ui/index) +* [REST API: Speech-to-text](rest-speech-to-text.md) +* [REST API: Text-to-speech](rest-text-to-speech.md) + +## Next steps + +* [Speech Services samples on GitHub](https://aka.ms/csspeech/samples). +* [Get a Speech Services subscription key for free](get-started.md) diff --git a/articles/cognitive-services/Speech-Service/text-to-speech.md b/articles/cognitive-services/Speech-Service/text-to-speech.md index 8b9cba9932235..e20ae74da20b8 100644 --- a/articles/cognitive-services/Speech-Service/text-to-speech.md +++ b/articles/cognitive-services/Speech-Service/text-to-speech.md @@ -48,10 +48,7 @@ This table lists the core features for text-to-speech: | Create and manage voice font tests. | No | Yes\* | | Manage subscriptions. | No | Yes\* | -\* *These services are available using the cris.ai endpoint. See [Swagger reference](https://westus.cris.ai/swagger/ui/index).* - -> [!NOTE] -> The custom voice endpoints implement throttling that limits requests to 25 per 5 seconds. When throttling occurs, you'll be notified via message headers. +\* *These services are available using the cris.ai endpoint. See [Swagger reference](https://westus.cris.ai/swagger/ui/index). These custom voice training and management APIs implement throttling that limits requests to 25 per 5 seconds, while the speech synthesis API itself implements throttling that allows a maximum of 200 requests per second.
When throttling occurs, you'll be notified via message headers.* ## Get started with text to speech diff --git a/articles/cognitive-services/Speech-Service/toc.yml b/articles/cognitive-services/Speech-Service/toc.yml index 1ad5b8759abdb..b741cb61c10a3 100644 --- a/articles/cognitive-services/Speech-Service/toc.yml +++ b/articles/cognitive-services/Speech-Service/toc.yml @@ -207,6 +207,11 @@ - name: Batch transcription & customization href: https://westus.cris.ai/swagger/ui/index displayName: Batch transcription, transcription, custom models, customization, model management + - name: Webhooks + href: webhooks.md + displayName: Webhooks + - name: Swagger documentation + href: swagger-documentation.md - name: Speech Synthesis Markup Language (SSML) href: speech-synthesis-markup.md - name: PowerShell diff --git a/articles/cognitive-services/Speech-Service/webhooks.md b/articles/cognitive-services/Speech-Service/webhooks.md new file mode 100644 index 0000000000000..2accde6f8534c --- /dev/null +++ b/articles/cognitive-services/Speech-Service/webhooks.md @@ -0,0 +1,139 @@ +--- +title: Webhooks - Speech Services +titlesuffix: Azure Cognitive Services +description: Webhooks are HTTP callbacks ideal for optimizing your solution when dealing with long-running processes like imports, adaptation, accuracy tests, or transcriptions of long-running files. +services: cognitive-services +author: PanosPeriorellis +manager: nitinme +ms.service: cognitive-services +ms.subservice: speech-service +ms.topic: conceptual +ms.date: 04/11/2019 +ms.author: panosper +ms.custom: seodec18 +--- + +# Webhooks for Speech Services + +Webhooks are HTTP callbacks that allow your application to accept data from the Speech Services when it becomes available. Using webhooks, you can optimize your use of our REST APIs by eliminating the need to continuously poll for a response. In the next few sections, you'll learn how to use webhooks with the Speech Services.
+ +## Supported operations + +The Speech Services support webhooks for all long-running operations. Each of the operations listed below can trigger an HTTP callback upon completion. + +* DataImportCompletion +* ModelAdaptationCompletion +* AccuracyTestCompletion +* TranscriptionCompletion +* EndpointDeploymentCompletion +* EndpointDataCollectionCompletion + +Next, let's create a webhook. + +## Create a webhook + +Let's create a webhook for an offline transcription. The scenario: a user has a long-running audio file that they would like to transcribe asynchronously with the Batch Transcription API. + +Configuration parameters for the request are provided as JSON: + +```json +{ + "configuration": { + "url": "https://your.callback.url/goes/here", + "secret": "" + }, + "events": [ + "TranscriptionCompletion" + ], + "active": true, + "name": "TranscriptionCompletionWebHook", + "description": "This is a Webhook created to trigger an HTTP POST request when my audio file transcription is completed.", + "properties": { + "Active" : "True" + } + +} +``` +All POST requests to the Batch Transcription API require a `name`. The `description` and `properties` parameters are optional. + +The `Active` property is used to turn callbacks to your URL on and off without having to delete and re-create the webhook registration. If you only need to call back once after the process has completed, delete the webhook or set the `Active` property to false. + +The event type `TranscriptionCompletion` is provided in the events array. It will call back to your endpoint when a transcription gets into a terminal state (`Succeeded` or `Failed`). When calling back to the registered URL, the request will contain an `X-MicrosoftSpeechServices-Event` header containing one of the registered event types. There is one request per registered event type. + +There is one event type that you cannot subscribe to. It is the `Ping` event type.
A request with this type is sent to the URL when you use the ping URL after creating a webhook (see below). + +In the configuration, the `url` property is required. POST requests are sent to this URL. The `secret` is used to create a SHA256 hash of the payload, with the secret as an HMAC key. The hash is set as the `X-MicrosoftSpeechServices-Signature` header when calling back to the registered URL. This header is Base64 encoded. + +This sample illustrates how to validate a payload using C#: + +```csharp + +private const string EventTypeHeaderName = "X-MicrosoftSpeechServices-Event"; +private const string SignatureHeaderName = "X-MicrosoftSpeechServices-Signature"; + +[HttpPost] +public async Task<IActionResult> PostAsync([FromHeader(Name = EventTypeHeaderName)]WebHookEventType eventTypeHeader, [FromHeader(Name = SignatureHeaderName)]string signature) +{ + string body = string.Empty; + using (var streamReader = new StreamReader(this.Request.Body)) + { + body = await streamReader.ReadToEndAsync().ConfigureAwait(false); + var secretBytes = Encoding.UTF8.GetBytes("my_secret"); + using (var hmacsha256 = new HMACSHA256(secretBytes)) + { + var contentBytes = Encoding.UTF8.GetBytes(body); + var contentHash = hmacsha256.ComputeHash(contentBytes); + var storedHash = Convert.FromBase64String(signature); + var validated = contentHash.SequenceEqual(storedHash); + } + } + + switch (eventTypeHeader) + { + case WebHookEventType.Ping: + // Do your ping event related stuff here (or ignore this event) + break; + case WebHookEventType.TranscriptionCompletion: + // Do your subscription related stuff here. + break; + default: + break; + } + + return this.Ok(); +} + +``` +In this code snippet, the `secret` is decoded and validated. You'll also notice that the webhook event type has been switched. Currently, there is one event per completed transcription. The code retries five times for each event (with a one-second delay) before giving up.
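For services not written in .NET, the same signature check can be sketched in Python. This is a minimal illustration of the hashing described above (HMAC-SHA256 of the raw body, compared against the Base64-decoded header value), not an official sample.

```python
import base64
import hashlib
import hmac

def is_valid_payload(body: bytes, signature_header: str, secret: str) -> bool:
    """Recompute the HMAC-SHA256 of the raw request body with the registered
    secret and compare it, in constant time, to the Base64-encoded value of
    the X-MicrosoftSpeechServices-Signature header."""
    expected = hmac.new(secret.encode("utf-8"), body, hashlib.sha256).digest()
    received = base64.b64decode(signature_header)
    return hmac.compare_digest(expected, received)
```

`hmac.compare_digest` is used instead of `==` so that comparison time doesn't leak how many leading bytes of the signature matched.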
+ +### Other webhook operations + +To get all registered webhooks: +GET https://westus.cris.ai/api/speechtotext/v2.1/transcriptions/hooks + +To get one specific webhook: +GET https://westus.cris.ai/api/speechtotext/v2.1/transcriptions/hooks/:id + +To remove one specific webhook: +DELETE https://westus.cris.ai/api/speechtotext/v2.1/transcriptions/hooks/:id + +> [!Note] +> In the example above, the region is 'westus'. Replace it with the region where you created your Speech Services resource in the Azure portal. + +POST https://westus.cris.ai/api/speechtotext/v2.1/transcriptions/hooks/:id/ping +Body: empty + +Sends a POST request to the registered URL. The request contains an `X-MicrosoftSpeechServices-Event` header with the value `ping`. If the webhook was registered with a secret, the request also contains an `X-MicrosoftSpeechServices-Signature` header with a SHA256 hash of the payload, with the secret as the HMAC key. The hash is Base64 encoded. + +POST https://westus.cris.ai/api/speechtotext/v2.1/transcriptions/hooks/:id/test +Body: empty + +Sends a POST request to the registered URL if an entity for the subscribed event type (transcription) is present in the system and is in the appropriate state. The payload is generated from the last entity that would have invoked the webhook. If no entity is present, the POST responds with 204. If a test request can be made, it responds with 200. The request body has the same shape as in the GET request for a specific entity the webhook subscribed to (for instance, a transcription). The request has the `X-MicrosoftSpeechServices-Event` and `X-MicrosoftSpeechServices-Signature` headers as described before. + +### Run a test + +A quick test can be done using the website https://bin.webhookrelay.com. From there, you can obtain callback URLs to pass as a parameter to the HTTP POST that creates a webhook, as described earlier in this document.
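The per-region endpoints above all follow one pattern, so a small helper can compose them. This is an illustrative sketch only (not part of any Speech SDK); the region and hook ID values are placeholders:

```python
from typing import Optional

# Assumed base URL pattern, matching the v2.1 endpoints listed above.
BASE = "https://{region}.cris.ai/api/speechtotext/v2.1/transcriptions/hooks"


def hooks_url(region: str, hook_id: Optional[str] = None,
              action: Optional[str] = None) -> str:
    """Build a webhook management URL for a region; optionally target a
    specific hook and one of the 'ping' or 'test' actions."""
    url = BASE.format(region=region)
    if hook_id is not None:
        url += "/" + hook_id
        if action is not None:
            if action not in ("ping", "test"):
                raise ValueError("action must be 'ping' or 'test'")
            url += "/" + action
    return url
```

For example, `hooks_url("westus", "1234", "ping")` yields the ping URL for hook `1234` in the West US region.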
+ +## Next steps + +* [Get your Speech trial subscription](https://azure.microsoft.com/try/cognitive-services/) diff --git a/articles/cognitive-services/Translator/language-support.md b/articles/cognitive-services/Translator/language-support.md index eff648f080926..53bbf5211c041 100644 --- a/articles/cognitive-services/Translator/language-support.md +++ b/articles/cognitive-services/Translator/language-support.md @@ -17,6 +17,8 @@ The Translator Text API supports the following languages for text to text transl [Learn more about how machine translation works](https://www.microsoft.com/translator/mt.aspx) +## Translation + **V2 Translator API** > [!NOTE] @@ -186,77 +188,8 @@ The dictionary supports the following languages to or from English using the Loo ## Detect -The following languages are supported by the Detect method. Detect may identify languages that the Microsoft Translator can't translate. +Translator Text API detects all languages available for translation and transliteration. -| Language | -|:----------- | -| Afrikaans | -| Albanian | -| Arabic | -| Basque | -| Belarusian | -| Bulgarian | -| Catalan | -| Chinese | -| Chinese (Simplified) | -| Chinese (Traditional) | -| Croatian | -| Czech | -| Danish | -| Dutch | -| English | -| Esperanto | -| Estonian | -| Finnish | -| French | -| Galician | -| German | -| Greek | -| Haitian Creole | -| Hebrew | -| Hindi | -| Hungarian | -| Icelandic | -| Indonesian | -| Irish | -| Italian | -| Japanese | -| Korean | -| Kurdish (Arabic) | -| Kurdish (Latin) | -| Latin | -| Latvian | -| Lithuanian | -| Macedonian | -| Malay | -| Maltese | -| Norwegian | -| Norwegian (Nynorsk) | -| Pashto | -| Persian | -| Polish | -| Portuguese | -| Romanian | -| Russian | -| Serbian (Cyrillic) | -| Serbian (Latin) | -| Slovak | -| Slovenian | -| Somali | -| Spanish | -| Swahili | -| Swedish | -| Tagalog | -| Telugu | -| Thai | -| Turkish | -| Ukrainian | -| Urdu | -| Uzbek (Cyrillic) | -| Uzbek (Latin) | -| Vietnamese | -| Welsh | -| 
Yiddish | ## Access the Translator Text API language list programmatically @@ -289,6 +222,7 @@ The following languages are available for customization to or from English using | Hindi | `hi` | | Hungarian | `hu` | | Icelandic | `is` | +| Indonesian| `id` | | Italian | `it` | | Japanese | `ja` | | Korean | `ko` | diff --git a/articles/cognitive-services/Translator/reference/v3-0-break-sentence.md b/articles/cognitive-services/Translator/reference/v3-0-break-sentence.md index 439b6a885684f..c128a6c02a795 100644 --- a/articles/cognitive-services/Translator/reference/v3-0-break-sentence.md +++ b/articles/cognitive-services/Translator/reference/v3-0-break-sentence.md @@ -52,8 +52,8 @@ Request headers include: Headers Description - _One authorization_
_header_ - *Required request header*.
See [available options for authentication](./v3-0-reference.md#authentication). + Authentication header(s) + Required request header.
See available options for authentication. Content-Type diff --git a/articles/cognitive-services/Translator/reference/v3-0-detect.md b/articles/cognitive-services/Translator/reference/v3-0-detect.md index 236829cc928c3..f2fc63185f27e 100644 --- a/articles/cognitive-services/Translator/reference/v3-0-detect.md +++ b/articles/cognitive-services/Translator/reference/v3-0-detect.md @@ -44,8 +44,8 @@ Request headers include: Headers Description - _One authorization_
_header_ - *Required request header*.
See [available options for authentication](./v3-0-reference.md#authentication). + Authentication header(s) + Required request header.
See available options for authentication. Content-Type diff --git a/articles/cognitive-services/Translator/reference/v3-0-dictionary-examples.md b/articles/cognitive-services/Translator/reference/v3-0-dictionary-examples.md index b4892244c8845..06e14bfd10a34 100644 --- a/articles/cognitive-services/Translator/reference/v3-0-dictionary-examples.md +++ b/articles/cognitive-services/Translator/reference/v3-0-dictionary-examples.md @@ -52,8 +52,8 @@ Request headers include: Headers Description - _One authorization_
_header_ - *Required request header*.
See [available options for authentication](./v3-0-reference.md#authentication). + Authentication header(s) + Required request header.
See available options for authentication. Content-Type diff --git a/articles/cognitive-services/Translator/reference/v3-0-dictionary-lookup.md b/articles/cognitive-services/Translator/reference/v3-0-dictionary-lookup.md index eff05ea2f423b..08b2790a82854 100644 --- a/articles/cognitive-services/Translator/reference/v3-0-dictionary-lookup.md +++ b/articles/cognitive-services/Translator/reference/v3-0-dictionary-lookup.md @@ -52,8 +52,8 @@ Request headers include: Headers Description - _One authorization_
_header_ - *Required request header*.
See [available options for authentication](./v3-0-reference.md#authentication). + Authentication header(s) + Required request header.
See available options for authentication. Content-Type diff --git a/articles/cognitive-services/Translator/reference/v3-0-translate.md b/articles/cognitive-services/Translator/reference/v3-0-translate.md index 7305a94490cc9..3d126e9500770 100644 --- a/articles/cognitive-services/Translator/reference/v3-0-translate.md +++ b/articles/cognitive-services/Translator/reference/v3-0-translate.md @@ -93,8 +93,8 @@ Request headers include: Headers Description - _One authorization_
_header_ - Required request header.
See [available options for authentication](./v3-0-reference.md#authentication). + Authentication header(s) + Required request header.
See available options for authentication. Content-Type diff --git a/articles/cognitive-services/Translator/reference/v3-0-transliterate.md b/articles/cognitive-services/Translator/reference/v3-0-transliterate.md index 3a43c81bf7481..405b8970b300b 100644 --- a/articles/cognitive-services/Translator/reference/v3-0-transliterate.md +++ b/articles/cognitive-services/Translator/reference/v3-0-transliterate.md @@ -56,8 +56,8 @@ Request headers include: Headers Description - _One authorization_
_header_ - *Required request header*.
See [available options for authentication](./v3-0-reference.md#authentication). + Authentication header(s) + Required request header.
See available options for authentication. Content-Type diff --git a/articles/cosmos-db/how-to-sql-query.md b/articles/cosmos-db/how-to-sql-query.md index dd089b6a855a3..66b5ed4f50b3f 100644 --- a/articles/cosmos-db/how-to-sql-query.md +++ b/articles/cosmos-db/how-to-sql-query.md @@ -1709,7 +1709,7 @@ The next example shows joins, expressed through LINQ `SelectMany`. The .NET client automatically iterates through all the pages of query results in the `foreach` blocks, as shown in the preceding example. The query options introduced in the [REST API](#RestAPI) section are also available in the .NET SDK, using the `FeedOptions` and `FeedResponse` classes in the `CreateDocumentQuery` method. You can control the number of pages by using the `MaxItemCount` setting. -You can also explicitly control paging by creating `IDocumentQueryable` using the `IQueryable` object, then by reading the` ResponseContinuationToken` values and passing them back as `RequestContinuationToken` in `FeedOptions`. You can set `EnableScanInQuery` to enable scans when the query isn't supported by the configured indexing policy. For partitioned containers, you can use `PartitionKey` to run the query against a single partition, although Azure Cosmos DB can automatically extract this from the query text. You can use `EnableCrossPartitionQuery` to run queries against multiple partitions. +You can also explicitly control paging by creating `IDocumentQueryable` using the `IQueryable` object, then by reading the `ResponseContinuationToken` values and passing them back as `RequestContinuationToken` in `FeedOptions`. You can set `EnableScanInQuery` to enable scans when the query isn't supported by the configured indexing policy. For partitioned containers, you can use `PartitionKey` to run the query against a single partition, although Azure Cosmos DB can automatically extract this from the query text. You can use `EnableCrossPartitionQuery` to run queries against multiple partitions. 
For more .NET samples with queries, see the [Azure Cosmos DB .NET samples](https://github.com/Azure/azure-cosmosdb-dotnet) in GitHub. diff --git a/articles/data-catalog/data-catalog-dsr.md b/articles/data-catalog/data-catalog-dsr.md index 188c54745e103..110cce42d95dc 100644 --- a/articles/data-catalog/data-catalog-dsr.md +++ b/articles/data-catalog/data-catalog-dsr.md @@ -4,17 +4,15 @@ description: This article lists specifications of the currently supported data s services: data-catalog author: markingmyname ms.author: maghan -ms.assetid: fd4345ca-2ed8-4c5e-9c4b-f954be2fc9f9 ms.service: data-catalog ms.topic: conceptual -ms.date: 01/18/2018 +ms.date: 04/15/2019 --- # Supported data sources in Azure Data Catalog You can publish metadata by using a public API or a click-once registration tool, or by manually entering information directly to the Azure Data Catalog web portal. The following table summarizes all data sources that are supported by the catalog today, and the publishing capabilities for each. Also listed are the external data tools that each data source can launch from our portal "open-in" experience. The second table contains a more technical specification of each data-source connection property. - ## List of supported data sources @@ -27,7 +25,7 @@ You can publish metadata by using a public API or a click-once registration tool - + @@ -35,7 +33,7 @@ You can publish metadata by using a public API or a click-once registration tool - + diff --git a/articles/data-factory/data-flow-expression-functions.md b/articles/data-factory/data-flow-expression-functions.md index 1e85fd1a5dece..0b9ee8ead056b 100644 --- a/articles/data-factory/data-flow-expression-functions.md +++ b/articles/data-factory/data-flow-expression-functions.md @@ -147,7 +147,7 @@ Concatenates a variable number of strings together. Same as the + operator with concatWS(<separator> : string, <this> : string, <that> : string, ...) => string

Concatenates a variable number of strings together with a separator. The first parameter is the separator * ``concatWS(' ', 'Awesome', 'Cool', 'Product') -> 'Awesome Cool Product'`` -* ``concatWS(' ' , addrLine1, addrLine2, city, state, zip) -> `` +* ``concatWS(' ' , addrLine1, addrLine2, city, state, zip) ->`` * ``concatWS(',' , toString(order_total), toString(order_discount))`` ********************************* cos @@ -967,14 +967,14 @@ Gets the aggregate sum of distinct values of a numeric column sumDistinctIf(<value1> : boolean, <value2> : number) => number

Based on criteria gets the aggregate sum of a numeric column. The condition can be based on any column * ``sumDistinctIf(state == 'CA' && commission < 10000, sales) -> value`` -* ``sumDistinctIf(true, sales) -> SUM(sales) `` +* ``sumDistinctIf(true, sales) -> SUM(sales)`` ********************************* sumIf ============================== sumIf(<value1> : boolean, <value2> : number) => number

Based on criteria gets the aggregate sum of a numeric column. The condition can be based on any column * ``sumIf(state == 'CA' && commission < 10000, sales) -> value`` -* ``sumIf(true, sales) -> SUM(sales) `` +* ``sumIf(true, sales) -> SUM(sales)`` ********************************* tan ============================== diff --git a/articles/databox-online/TOC.yml b/articles/databox-online/TOC.yml index f9706181cb7df..f2d32f77918ab 100644 --- a/articles/databox-online/TOC.yml +++ b/articles/databox-online/TOC.yml @@ -41,6 +41,8 @@ href: data-box-gateway-manage-access-power-connectivity-mode.md - name: Via PowerShell href: data-box-gateway-connect-powershell-interface.md + - name: Monitor + href: data-box-edge-monitor.md - name: Troubleshoot href: data-box-gateway-troubleshoot.md - name: Release notes diff --git a/articles/databox-online/data-box-edge-monitor.md b/articles/databox-online/data-box-edge-monitor.md new file mode 100644 index 0000000000000..dbea632dbd34f --- /dev/null +++ b/articles/databox-online/data-box-edge-monitor.md @@ -0,0 +1,106 @@ +--- +title: Monitor your Azure Data Box Edge device | Microsoft Docs +description: Describes how to use the Azure portal and local web UI to monitor your Azure Data Box Edge. +services: databox +author: alkohli + +ms.service: databox +ms.subservice: edge +ms.topic: overview +ms.date: 04/15/2019 +ms.author: alkohli +--- +# Monitor your Azure Data Box Edge + +This article describes how to monitor your Azure Data Box Edge. To monitor your device, you can use Azure portal or the local web UI. Use the Azure portal to view device events, configure and manage alerts, and view metrics. Use the local web UI on your physical device to view the hardware status of the various device components. 
+ +In this article, you learn how to: + +> [!div class="checklist"] +> * View device events and the corresponding alerts +> * View hardware status of device components +> * View capacity and transaction metrics for your device +> * Configure and manage alerts + +## View device events + +Take the following steps in the Azure portal to view a device event. + +1. In the Azure portal, go to your Data Box Edge or Data Box Gateway resource and then go to **Monitoring > Device events**. +2. Select an event and view the alert details. Take appropriate action to resolve the alert condition. + + ![Select event and view details](media/data-box-edge-monitor/view-device-events.png) + +## View hardware status + +Take the following steps in the local web UI to view the hardware status of your device components. This information is only available for a Data Box Edge device. + +1. Connect to the local web UI of your device. +2. Go to **Maintenance > Hardware status**. You can view the health of the various device components. + + ![View hardware status](media/data-box-edge-monitor/view-hardware-status.png) + +## View metrics + +You can also view the metrics to monitor the performance of the device and, in some instances, to troubleshoot device issues. + +Take the following steps in the Azure portal to create a chart for selected device metrics. + +1. For your resource in the Azure portal, go to **Monitoring > Metrics** and select **Add metric**. + + ![Add metric](media/data-box-edge-monitor/view-metrics-1.png) + +2. The resource is automatically populated. + + ![Current resource](media/data-box-edge-monitor/view-metrics-2.png) + + To specify another resource, select it. On the **Select a resource** blade, select the subscription, resource group, resource type, and the specific resource for which you want to show the metrics, and then select **Apply**. + + ![Choose another resource](media/data-box-edge-monitor/view-metrics-3.png) + +3. 
From the dropdown list, select a metric to monitor your device. The metrics can be **Capacity metrics** or **Transaction metrics**. The capacity metrics are related to the capacity of the device. The transaction metrics are related to the read and write operations to Azure Storage. + + |Capacity metrics |Description | + |-------------------------------------|-------------| + |**Available capacity** | Refers to the size of the data that can be written to the device. In other words, this is the capacity that can be made available on the device.

You can free up the device capacity by deleting the local copy of files that exist on both the device and in the cloud. | +|**Total capacity** | Refers to the total number of bytes available on the device to write data to. This is also referred to as the total size of the local cache.

You can now increase the capacity of an existing virtual device by adding a data disk. Add a data disk through the hypervisor management for the VM and then restart your VM. The local storage pool of the Gateway device will expand to accommodate the newly added data disk.

For more information, go to [Add a hard drive for Hyper-V virtual machine](https://www.youtube.com/watch?v=EWdqUw9tTe4). | + + |Transaction metrics | Description | + |-------------------------------------|---------| + |**Cloud bytes uploaded (device)** | Sum of all the bytes uploaded across all the shares on your device | + |**Cloud bytes uploaded (share)** | Bytes uploaded per share. This can be:

Avg, which is the (Sum of all the bytes uploaded per share / Number of shares),

Max, which is the maximum number of bytes uploaded from a share

Min, which is the minimum number of bytes uploaded from a share | + |**Cloud download throughput (share)**| Bytes downloaded per share. This can be:

Avg, which is the (Sum of all bytes read or downloaded to a share / Number of shares)

Max, which is the maximum number of bytes downloaded from a share

and Min, which is the minimum number of bytes downloaded from a share | + |**Cloud read throughput** | Sum of all the bytes read from the cloud across all the shares on your device | + |**Cloud upload throughput** | Sum of all the bytes written to the cloud across all the shares on your device | + |**Cloud upload throughput (share)** | Sum of all bytes written to the cloud from a share / # of shares is average, max, and min per share | + |**Read throughput (network)** | Includes the system network throughput for all the bytes read from the cloud. This view can include data that is not restricted to shares.

Splitting will show the traffic over all the network adapters on the device. This includes adapters that are not connected or enabled. | + |**Write throughput (network)** | Includes the system network throughput for all the bytes written to the cloud. This view can include data that is not restricted to shares.

Splitting will show the traffic over all the network adapters on the device. This includes adapters that are not connected or enabled. | + |**Edge compute - memory usage** | Memory usage for the IoT Edge device for your Data Box Edge. If you see a high usage and if your device performance is affected by the current workloads that you have deployed, contact Microsoft Support to determine next steps.

This metric is not populated for Data Box Gateway. | + |**Edge compute - percentage CPU** | CPU usage for IoT Edge device for your Data Box Edge. If you see a high usage and if your device performance is affected by the current workloads that you have deployed, contact Microsoft Support to determine next steps.

This metric is not populated for Data Box Gateway. | +4. When a metric is selected from the dropdown list, you can also define the aggregation. Aggregation refers to the actual value aggregated over a specified time span. The aggregated value can be the average, minimum, or maximum. Select the aggregation from **Avg**, **Max**, or **Min**. + + ![View chart](media/data-box-edge-monitor/view-metrics-4.png) + +5. If the metric you selected has multiple instances, the splitting option is available. Select **Apply splitting**, and then select the value by which you want to see the breakdown. + + ![Apply splitting](media/data-box-edge-monitor/view-metrics-5.png) + +6. If you want to see the breakdown only for a few instances, you can filter the data. For example, to see the network throughput only for the two connected network interfaces on your device, you can filter on those interfaces. Select **Add filter** and specify the network interface name for filtering. + + ![Add filter](media/data-box-edge-monitor/view-metrics-6.png) + +7. You can also pin the chart to the dashboard for easy access. + + ![Pin to dashboard](media/data-box-edge-monitor/view-metrics-7.png) + +8. To export chart data to an Excel spreadsheet or get a link to the chart that you can share, select the share option from the command bar. + + ![Export data](media/data-box-edge-monitor/view-metrics-8.png) + +## Manage alerts + +You can configure alert rules to monitor your device for alert conditions related to the consumption of resources on the device. For more detailed information on alerts, go to [Create, view, and manage metric alerts in Azure Monitor](../azure-monitor/platform/alerts-metric.md). + +## Next steps + +Learn how to [Manage bandwidth](data-box-edge-manage-bandwidth-schedules.md).
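The Avg, Max, and Min aggregation options described in the transaction-metric tables above can be illustrated with a short sketch; the share names and byte counts here are made-up sample values, not output from a real device:

```python
def aggregate_per_share(bytes_per_share: dict) -> dict:
    """Summarize a per-share metric (for example, cloud bytes uploaded)
    the way the Avg/Max/Min aggregations combine values across shares."""
    values = list(bytes_per_share.values())
    return {
        "Avg": sum(values) / len(values),  # sum across shares / number of shares
        "Max": max(values),                # largest per-share value
        "Min": min(values),                # smallest per-share value
    }
```

For two hypothetical shares uploading 100 and 300 bytes, this yields an average of 200, a maximum of 300, and a minimum of 100.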
\ No newline at end of file diff --git a/articles/databox-online/data-box-edge-security.md b/articles/databox-online/data-box-edge-security.md index 6612310a6c4df..941ee7705feb4 100644 --- a/articles/databox-online/data-box-edge-security.md +++ b/articles/databox-online/data-box-edge-security.md @@ -7,7 +7,7 @@ author: alkohli ms.service: databox ms.subservice: edge ms.topic: article -ms.date: 04/02/2019 +ms.date: 04/15/2019 ms.author: alkohli --- # Data Box Edge security and data protection diff --git a/articles/databox-online/media/data-box-edge-monitor/add-alert-rule-1.png b/articles/databox-online/media/data-box-edge-monitor/add-alert-rule-1.png new file mode 100644 index 0000000000000..893035b25d6a3 Binary files /dev/null and b/articles/databox-online/media/data-box-edge-monitor/add-alert-rule-1.png differ diff --git a/articles/databox-online/media/data-box-edge-monitor/view-device-events.png b/articles/databox-online/media/data-box-edge-monitor/view-device-events.png new file mode 100644 index 0000000000000..49030888d8cfe Binary files /dev/null and b/articles/databox-online/media/data-box-edge-monitor/view-device-events.png differ diff --git a/articles/databox-online/media/data-box-edge-monitor/view-hardware-status.png b/articles/databox-online/media/data-box-edge-monitor/view-hardware-status.png new file mode 100644 index 0000000000000..d684c1dd9f895 Binary files /dev/null and b/articles/databox-online/media/data-box-edge-monitor/view-hardware-status.png differ diff --git a/articles/databox-online/media/data-box-edge-monitor/view-metrics-1.png b/articles/databox-online/media/data-box-edge-monitor/view-metrics-1.png new file mode 100644 index 0000000000000..f74b159e59907 Binary files /dev/null and b/articles/databox-online/media/data-box-edge-monitor/view-metrics-1.png differ diff --git a/articles/databox-online/media/data-box-edge-monitor/view-metrics-2.png b/articles/databox-online/media/data-box-edge-monitor/view-metrics-2.png new file mode 100644 index 
0000000000000..caf33fcda6089 Binary files /dev/null and b/articles/databox-online/media/data-box-edge-monitor/view-metrics-2.png differ diff --git a/articles/databox-online/media/data-box-edge-monitor/view-metrics-3.png b/articles/databox-online/media/data-box-edge-monitor/view-metrics-3.png new file mode 100644 index 0000000000000..19c72b0acd4ef Binary files /dev/null and b/articles/databox-online/media/data-box-edge-monitor/view-metrics-3.png differ diff --git a/articles/databox-online/media/data-box-edge-monitor/view-metrics-4.png b/articles/databox-online/media/data-box-edge-monitor/view-metrics-4.png new file mode 100644 index 0000000000000..2c0a6ce1bb951 Binary files /dev/null and b/articles/databox-online/media/data-box-edge-monitor/view-metrics-4.png differ diff --git a/articles/databox-online/media/data-box-edge-monitor/view-metrics-5.png b/articles/databox-online/media/data-box-edge-monitor/view-metrics-5.png new file mode 100644 index 0000000000000..9e702d1d5ab51 Binary files /dev/null and b/articles/databox-online/media/data-box-edge-monitor/view-metrics-5.png differ diff --git a/articles/databox-online/media/data-box-edge-monitor/view-metrics-6.png b/articles/databox-online/media/data-box-edge-monitor/view-metrics-6.png new file mode 100644 index 0000000000000..7af7b9f8f653d Binary files /dev/null and b/articles/databox-online/media/data-box-edge-monitor/view-metrics-6.png differ diff --git a/articles/databox-online/media/data-box-edge-monitor/view-metrics-7.png b/articles/databox-online/media/data-box-edge-monitor/view-metrics-7.png new file mode 100644 index 0000000000000..b5865ee2b3ee9 Binary files /dev/null and b/articles/databox-online/media/data-box-edge-monitor/view-metrics-7.png differ diff --git a/articles/databox-online/media/data-box-edge-monitor/view-metrics-8.png b/articles/databox-online/media/data-box-edge-monitor/view-metrics-8.png new file mode 100644 index 0000000000000..84e0753d5f79f Binary files /dev/null and 
b/articles/databox-online/media/data-box-edge-monitor/view-metrics-8.png differ diff --git a/articles/event-grid/overview.md b/articles/event-grid/overview.md index 169aaf04351c5..5a980f50288ea 100644 --- a/articles/event-grid/overview.md +++ b/articles/event-grid/overview.md @@ -18,7 +18,7 @@ Azure Event Grid allows you to easily build applications with event-based archit You can use filters to route specific events to different endpoints, multicast to multiple endpoints, and make sure your events are reliably delivered. -Currently, Azure Event Grid is available in all public regions. It isn't yet available in the Azure Germany, Azure China, or Azure Government clouds. +Currently, Azure Event Grid is available in all public regions. It isn't yet available in the Azure Germany, Azure China 21Vianet, or Azure Government clouds. This article provides an overview of Azure Event Grid. If you want to get started with Event Grid, see [Create and route custom events with Azure Event Grid](custom-event-quickstart.md). @@ -30,30 +30,29 @@ This image shows how Event Grid connects sources and handlers, and isn't a compr For full details on the capabilities of each source as well as related articles, see [event sources](event-sources.md). 
Currently, the following Azure services support sending events to Event Grid: -* Azure Subscriptions (management operations) -* Container Registry -* Custom Topics -* Event Hubs -* IoT Hub -* Media Services -* Resource Groups (management operations) -* Service Bus -* Storage Blob -* Storage General-purpose v2 (GPv2) -* Azure Maps +* [Azure Subscriptions (management operations)](event-sources.md#azure-subscriptions) +* [Container Registry](event-sources.md#container-registry) +* [Custom Topics](event-sources.md#custom-topics) +* [Event Hubs](event-sources.md#event-hubs) +* [IoT Hub](event-sources.md#iot-hub) +* [Media Services](event-sources.md#media-services) +* [Resource Groups (management operations)](event-sources.md#resource-groups) +* [Service Bus](event-sources.md#service-bus) +* [Storage Blob](event-sources.md#storage) +* [Azure Maps](event-sources.md#maps) ## Event handlers For full details on the capabilities of each handler as well as related articles, see [event handlers](event-handlers.md). 
Currently, the following Azure services support handling events from Event Grid: -* Azure Automation -* Azure Functions -* Event Hubs -* Hybrid Connections -* Logic Apps -* Microsoft Flow -* Queue Storage -* WebHooks +* [Azure Automation](event-handlers.md#azure-automation) +* [Azure Functions](event-handlers.md#azure-functions) +* [Event Hubs](event-handlers.md#event-hubs) +* [Hybrid Connections](event-handlers.md#hybrid-connections) +* [Logic Apps](event-handlers.md#logic-apps) +* [Microsoft Flow](https://preview.flow.microsoft.com/connectors/shared_azureeventgrid/azure-event-grid/) +* [Queue Storage](event-handlers.md#queue-storage) +* [WebHooks](event-handlers.md#webhooks) ## Concepts diff --git a/articles/hdinsight/hadoop/TOC.yml b/articles/hdinsight/hadoop/TOC.yml index 922135f436181..f91cf8ecd12d6 100644 --- a/articles/hdinsight/hadoop/TOC.yml +++ b/articles/hdinsight/hadoop/TOC.yml @@ -13,7 +13,7 @@ - name: HDInsight 4.0 href: ../hdinsight-version-release.md maintainContext: true -- name: Quickstart +- name: Quickstarts items: - name: Create Apache Hadoop cluster - Portal href: apache-hadoop-linux-create-cluster-get-started-portal.md @@ -29,7 +29,18 @@ href: ../hdinsight-hadoop-create-linux-clusters-adf.md maintainContext: true - name: Monitor cluster availability with Ambari and Azure Monitor logs - href: ../hdinsight-cluster-availability.md + href: ../hdinsight-cluster-availability.md + maintainContext: true +- name: Samples + items: + - name: .NET samples + href: ../hdinsight-sdk-dotnet-samples.md + maintainContext: true + - name: Java samples + href: ../hdinsight-sdk-java-samples.md + maintainContext: true + - name: Python samples + href: ../hdinsight-sdk-python-samples.md maintainContext: true - name: Concepts items: @@ -78,7 +89,7 @@ href: apache-hadoop-use-hive-curl.md - name: Use Azure PowerShell href: apache-hadoop-use-hive-powershell.md - - name: Use .NET SDK + - name: Use SDK for .NET href: apache-hadoop-use-hive-dotnet-sdk.md - name: Use a Java 
UDF with Hive href: apache-hadoop-hive-java-udf.md @@ -91,7 +102,7 @@ href: apache-hadoop-use-mapreduce-curl.md - name: Use Azure PowerShell href: apache-hadoop-use-mapreduce-powershell.md - - name: Use .NET SDK + - name: Use SDK for .NET href: apache-hadoop-use-mapreduce-dotnet-sdk.md - name: Run the MapReduce samples href: apache-hadoop-run-samples-linux.md @@ -102,7 +113,7 @@ href: apache-hadoop-use-pig-ssh.md - name: Use Azure PowerShell href: apache-hadoop-use-pig-powershell.md - - name: Use the .NET SDK + - name: Use the SDK for .NET href: apache-hadoop-use-pig-dotnet-sdk.md - name: Use cURL href: apache-hadoop-use-pig-curl.md @@ -251,22 +262,10 @@ maintainContext: true - name: Manage items: - - name: Create Linux clusters + - name: Create HDInsight clusters href: ../hdinsight-hadoop-provision-linux-clusters.md maintainContext: true items: - - name: Use Azure PowerShell - href: ../hdinsight-hadoop-create-linux-clusters-azure-powershell.md - maintainContext: true - - name: Use cURL and the Azure REST API - href: ../hdinsight-hadoop-create-linux-clusters-curl-rest.md - maintainContext: true - - name: Use the .NET SDK - href: ../hdinsight-hadoop-create-linux-clusters-dotnet-sdk.md - maintainContext: true - - name: Use the Azure Classic CLI - href: ../hdinsight-hadoop-create-linux-clusters-azure-cli.md - maintainContext: true - name: Use the Azure portal href: ../hdinsight-hadoop-create-linux-clusters-portal.md maintainContext: true @@ -274,13 +273,33 @@ displayName: resource manager template, arm template, resource manager group href: ../hdinsight-hadoop-create-linux-clusters-arm-templates.md maintainContext: true - - name: Manage Apache Hadoop clusters + - name: Use SDK for .NET + href: https://docs.microsoft.com/dotnet/api/overview/azure/hdinsight?view=azure-dotnet#create-a-cluster + - name: Use SDK for Python + href: https://docs.microsoft.com/python/api/overview/azure/hdinsight?view=azure-python#create-a-cluster + - name: Use SDK for Java + href: 
https://docs.microsoft.com/java/api/overview/azure/hdinsight?view=azure-java-stable#create-a-cluster + - name: Use Azure PowerShell + href: ../hdinsight-hadoop-create-linux-clusters-azure-powershell.md + maintainContext: true + - name: Use cURL and the Azure REST API + href: ../hdinsight-hadoop-create-linux-clusters-curl-rest.md + maintainContext: true + - name: Use the Azure Classic CLI + href: ../hdinsight-hadoop-create-linux-clusters-azure-cli.md + maintainContext: true + - name: Manage HDInsight clusters href: ../hdinsight-administer-use-portal-linux.md maintainContext: true items: - - name: Use .NET SDK - href: ../hdinsight-administer-use-dotnet-sdk.md - maintainContext: true + - name: Use SDK for .NET + href: https://docs.microsoft.com/dotnet/api/overview/azure/hdinsight?view=azure-dotnet#management + - name: Use SDK for Python + href: https://docs.microsoft.com/python/api/overview/azure/hdinsight?view=azure-python + - name: Use SDK for Java + href: https://docs.microsoft.com/java/api/overview/azure/hdinsight?view=azure-java-stable + - name: Use SDK for Go (Preview) + href: https://docs.microsoft.com/azure/hdinsight/hdinsight-go-sdk-overview - name: Use Azure PowerShell href: ../hdinsight-administer-use-powershell.md maintainContext: true @@ -325,7 +344,7 @@ href: apache-hadoop-use-sqoop-mac-linux.md - name: Run using cURL href: apache-hadoop-use-sqoop-curl.md - - name: Run using .NET SDK + - name: Run using SDK for .NET href: apache-hadoop-use-sqoop-dotnet-sdk.md - name: Run using Azure PowerShell href: apache-hadoop-use-sqoop-powershell.md @@ -441,13 +460,13 @@ href: https://azure.microsoft.com/resources/samples/?service=hdinsight - name: Azure PowerShell href: /powershell/module/az.hdinsight - - name: .NET SDK + - name: SDK for .NET href: https://docs.microsoft.com/dotnet/api/overview/azure/hdinsight?view=azure-dotnet - - name: Python SDK + - name: SDK for Python href: https://docs.microsoft.com/python/api/overview/azure/hdinsight?view=azure-python - - 
name: Java SDK - href: https://docs.microsoft.com/java/api/overview/azure/hdinsight?view=azure-java-preview - - name: Go SDK + - name: SDK for Java + href: https://docs.microsoft.com/java/api/overview/azure/hdinsight?view=azure-java-stable + - name: SDK for Go href: ../hdinsight-go-sdk-overview.md - name: .NET (Apache HBase) href: https://www.nuget.org/packages/Microsoft.HBase.Client/ @@ -480,11 +499,11 @@ - name: Azure Roadmap href: https://azure.microsoft.com/roadmap/?category=intelligence-analytics - name: Get help on the forum - href: https://social.msdn.microsoft.com/forums/azure/en-US/home?forum=hdinsight + href: https://social.msdn.microsoft.com/forums/azure/home?forum=hdinsight - name: Learning path href: https://azure.microsoft.com/documentation/learning-paths/hdinsight-self-guided-hadoop-training/ - name: Microsoft Professional Program for Big Data - href: https://academy.microsoft.com/en-us/professional-program/big-data/ + href: https://academy.microsoft.com/en-us/professional-program/tracks/big-data/ - name: Pricing calculator href: https://azure.microsoft.com/pricing/calculator/ - name: Windows tools for HDInsight diff --git a/articles/hdinsight/hbase/TOC.yml b/articles/hdinsight/hbase/TOC.yml index 901cfc25ed5e3..866877008a12c 100644 --- a/articles/hdinsight/hbase/TOC.yml +++ b/articles/hdinsight/hbase/TOC.yml @@ -35,7 +35,7 @@ maintainContext: true - name: Develop items: - - name: Use the Apache HBase .NET SDK + - name: Use the Apache HBase SDK for .NET href: apache-hbase-rest-sdk.md - name: Develop Java applications href: apache-hbase-build-java-maven-linux.md @@ -88,39 +88,50 @@ maintainContext: true - name: Manage items: - - name: Create Linux clusters + - name: Create HDInsight clusters href: ../hdinsight-hadoop-provision-linux-clusters.md maintainContext: true items: + - name: Use the Azure portal + href: ../hdinsight-hadoop-create-linux-clusters-portal.md + maintainContext: true + - name: Use Azure Resource Manager templates + displayName: 
resource manager template, arm template, resource manager group + href: ../hdinsight-hadoop-create-linux-clusters-arm-templates.md + maintainContext: true + - name: Use SDK for .NET + href: https://docs.microsoft.com/dotnet/api/overview/azure/hdinsight?view=azure-dotnet#create-a-cluster + - name: Use SDK for Python + href: https://docs.microsoft.com/python/api/overview/azure/hdinsight?view=azure-python#create-a-cluster + - name: Use SDK for Java + href: https://docs.microsoft.com/java/api/overview/azure/hdinsight?view=azure-java-stable#create-a-cluster - name: Use Azure PowerShell href: ../hdinsight-hadoop-create-linux-clusters-azure-powershell.md maintainContext: true - name: Use cURL and the Azure REST API href: ../hdinsight-hadoop-create-linux-clusters-curl-rest.md - maintainContext: true - - name: Use the .NET SDK - href: ../hdinsight-hadoop-create-linux-clusters-dotnet-sdk.md - maintainContext: true + maintainContext: true - name: Use the Azure Classic CLI href: ../hdinsight-hadoop-create-linux-clusters-azure-cli.md maintainContext: true - - name: Use the Azure portal - href: ../hdinsight-hadoop-create-linux-clusters-portal.md - maintainContext: true - - name: Use Azure Resource Manager templates - displayName: resource manager template, arm template, resource manager group - href: ../hdinsight-hadoop-create-linux-clusters-arm-templates.md - maintainContext: true - name: Create on-demand clusters href: ../hdinsight-hadoop-create-linux-clusters-adf.md maintainContext: true - - name: Manage Apache Hadoop clusters + - name: Manage HDInsight clusters href: ../hdinsight-administer-use-portal-linux.md maintainContext: true items: - - name: Use .NET SDK - href: ../hdinsight-administer-use-dotnet-sdk.md - maintainContext: true + - name: Use SDK for .NET + href: https://docs.microsoft.com/dotnet/api/overview/azure/hdinsight?view=azure-dotnet#management + - name: Use SDK for
Python + href: https://docs.microsoft.com/python/api/overview/azure/hdinsight?view=azure-python + - name: Use SDK for Java + href: https://docs.microsoft.com/java/api/overview/azure/hdinsight?view=azure-java-stable + - name: Use SDK for Go (Preview) + href: https://docs.microsoft.com/azure/hdinsight/hdinsight-go-sdk-overview - name: Use Azure PowerShell href: ../hdinsight-administer-use-powershell.md maintainContext: true @@ -162,7 +173,7 @@ href: ../hadoop/apache-hadoop-use-sqoop-mac-linux.md - name: Run using cURL href: ../hadoop/apache-hadoop-use-sqoop-curl.md - - name: Run using .NET SDK + - name: Run using SDK for .NET href: ../hadoop/apache-hadoop-use-sqoop-dotnet-sdk.md - name: Run using Azure PowerShell href: ../hadoop/apache-hadoop-use-sqoop-powershell.md @@ -257,13 +268,13 @@ href: https://azure.microsoft.com/resources/samples/?service=hdinsight - name: Azure PowerShell href: /powershell/module/az.hdinsight - - name: .NET SDK + - name: SDK for .NET href: https://docs.microsoft.com/dotnet/api/overview/azure/hdinsight?view=azure-dotnet - - name: Python SDK + - name: SDK for Python href: https://docs.microsoft.com/python/api/overview/azure/hdinsight?view=azure-python - - name: Java SDK - href: https://docs.microsoft.com/java/api/overview/azure/hdinsight?view=azure-java-preview - - name: Go SDK + - name: SDK for Java + href: https://docs.microsoft.com/java/api/overview/azure/hdinsight?view=azure-java-stable + - name: SDK for Go href: ../hdinsight-go-sdk-overview.md - name: .NET (Apache HBase) href: https://www.nuget.org/packages/Microsoft.HBase.Client/ diff --git a/articles/hdinsight/hdinsight-sdk-dotnet-samples.md b/articles/hdinsight/hdinsight-sdk-dotnet-samples.md new file mode 100644 index 0000000000000..e7c914d86360c --- /dev/null +++ b/articles/hdinsight/hdinsight-sdk-dotnet-samples.md @@ -0,0 +1,43 @@ +--- +title: 'Azure HDInsight: .NET samples' +description: Find C# .NET examples on GitHub for common tasks using the HDInsight SDK for .NET. 
+author: hrasheed-msft +ms.service: hdinsight +ms.topic: sample +ms.date: 04/15/2019 +ms.author: hrasheed + +--- +# Azure HDInsight: .NET samples + +> [!div class="op_single_selector"] +> * [.NET Examples](hdinsight-sdk-dotnet-samples.md) +> * [Python Examples](hdinsight-sdk-python-samples.md) +> * [Java Examples](hdinsight-sdk-java-samples.md) + + +This article provides: + +* Links to samples for cluster creation tasks. +* Links to reference content for other management tasks. + +You can [activate Visual Studio subscriber benefits](https://azure.microsoft.com/pricing/member-offers/msdn-benefits-details/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio): Your Visual Studio subscription gives you credits every month that you can use for paid Azure services. + +## Prerequisites + +[!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)] + +- [Azure HDInsight SDK for .NET](https://docs.microsoft.com/dotnet/api/overview/azure/hdinsight#sdk-installation) + +## Cluster management - creation + +* [Create a Kafka cluster](https://github.com/Azure-Samples/hdinsight-dotnet-sdk-samples/blob/master/Management/Microsoft.Azure.Management.HDInsight.Samples/Microsoft.Azure.Management.HDInsight.Samples/CreateKafkaClusterSample.cs) +* [Create a Spark cluster](https://github.com/Azure-Samples/hdinsight-dotnet-sdk-samples/blob/master/Management/Microsoft.Azure.Management.HDInsight.Samples/Microsoft.Azure.Management.HDInsight.Samples/CreateSparkClusterSample.cs) +* [Create a Spark cluster with Azure Data Lake Storage Gen2](https://github.com/Azure-Samples/hdinsight-dotnet-sdk-samples/blob/master/Management/Microsoft.Azure.Management.HDInsight.Samples/Microsoft.Azure.Management.HDInsight.Samples/CreateHadoopClusterWithAdlsGen2Sample.cs) +* [Create a Spark cluster with Enterprise Security Package 
+ (ESP)](https://github.com/Azure-Samples/hdinsight-dotnet-sdk-samples/blob/master/Management/Microsoft.Azure.Management.HDInsight.Samples/Microsoft.Azure.Management.HDInsight.Samples/CreateEspClusterSample.cs) + +You can get these samples for .NET by cloning the [hdinsight-dotnet-sdk-samples](https://github.com/Azure-Samples/hdinsight-dotnet-sdk-samples) GitHub repository. + +[!INCLUDE [hdinsight-sdk-additional-functionality](../../includes/hdinsight-sdk-additional-functionality.md)] + +Code snippets for this additional SDK functionality can be found in the [HDInsight SDK for .NET reference documentation](https://docs.microsoft.com/dotnet/api/overview/azure/hdinsight?view=azure-dotnet). \ No newline at end of file diff --git a/articles/hdinsight/hdinsight-sdk-java-samples.md b/articles/hdinsight/hdinsight-sdk-java-samples.md new file mode 100644 index 0000000000000..050a7f208382f --- /dev/null +++ b/articles/hdinsight/hdinsight-sdk-java-samples.md @@ -0,0 +1,41 @@ +--- +title: 'Azure HDInsight: Java samples' +description: Find Java examples on GitHub for common tasks using the HDInsight SDK for Java. +author: hrasheed-msft +ms.service: hdinsight +ms.topic: sample +ms.date: 04/15/2019 +ms.author: hrasheed + +--- +# Azure HDInsight: Java samples + +> [!div class="op_single_selector"] +> * [Java Examples](hdinsight-sdk-java-samples.md) +> * [.NET Examples](hdinsight-sdk-dotnet-samples.md) +> * [Python Examples](hdinsight-sdk-python-samples.md) + + +This article provides: + +* Links to samples for cluster creation tasks. +* Links to reference content for other management tasks.
+ +## Prerequisites + +[!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)] + +- [Azure HDInsight SDK for Java](https://docs.microsoft.com/java/api/overview/azure/hdinsight#sdk-installation) + +## Cluster management - creation + +* [Create a Kafka cluster](https://github.com/Azure-Samples/hdinsight-java-sdk-samples/blob/master/management/src/main/java/com/microsoft/azure/hdinsight/samples/CreateKafkaClusterSample.java) +* [Create a Spark cluster](https://github.com/Azure-Samples/hdinsight-java-sdk-samples/blob/master/management/src/main/java/com/microsoft/azure/hdinsight/samples/CreateSparkClusterSample.java) +* [Create a Spark cluster with Azure Data Lake Storage Gen2](https://github.com/Azure-Samples/hdinsight-java-sdk-samples/blob/master/management/src/main/java/com/microsoft/azure/hdinsight/samples/CreateHadoopClusterWithAdlsGen2Sample.java) +* [Create a Spark cluster with Enterprise Security Package (ESP)](https://github.com/Azure-Samples/hdinsight-java-sdk-samples/blob/master/management/src/main/java/com/microsoft/azure/hdinsight/samples/CreateEspClusterSample.java) + +You can get these samples for Java by cloning the [hdinsight-java-sdk-samples](https://github.com/Azure-Samples/hdinsight-java-sdk-samples) GitHub repository. + +[!INCLUDE [hdinsight-sdk-additional-functionality](../../includes/hdinsight-sdk-additional-functionality.md)] + +Code snippets for this additional SDK functionality can be found in the [HDInsight SDK for Java reference documentation](https://docs.microsoft.com/java/api/overview/azure/hdinsight?view=azure-java-stable).
\ No newline at end of file diff --git a/articles/hdinsight/hdinsight-sdk-python-samples.md b/articles/hdinsight/hdinsight-sdk-python-samples.md new file mode 100644 index 0000000000000..65fc41d084647 --- /dev/null +++ b/articles/hdinsight/hdinsight-sdk-python-samples.md @@ -0,0 +1,42 @@ +--- +title: 'Azure HDInsight: Python samples' +description: Find Python examples on GitHub for common tasks using the HDInsight SDK for Python. +author: hrasheed-msft +ms.service: hdinsight +ms.topic: sample +ms.date: 04/15/2019 +ms.author: hrasheed + +--- +# Azure HDInsight: Python samples + +> [!div class="op_single_selector"] +> * [Python Examples](hdinsight-sdk-python-samples.md) +> * [.NET Examples](hdinsight-sdk-dotnet-samples.md) +> * [Java Examples](hdinsight-sdk-java-samples.md) + + + +This article provides: + +* Links to samples for cluster creation tasks. +* Links to reference content for other management tasks. + +## Prerequisites + +[!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)] + +- [Azure HDInsight SDK for Python](https://docs.microsoft.com/python/api/overview/azure/hdinsight#sdk-installation) + +## Cluster management - creation + +* [Create a Kafka cluster](https://github.com/Azure-Samples/hdinsight-python-sdk-samples/blob/master/samples/create_kafka_cluster_sample.py) +* [Create a Spark cluster](https://github.com/Azure-Samples/hdinsight-python-sdk-samples/blob/master/samples/create_spark_cluster_sample.py) +* [Create a Spark cluster with Azure Data Lake Storage Gen2](https://github.com/Azure-Samples/hdinsight-python-sdk-samples/blob/master/samples/create_hadoop_cluster_with_adls_gen2_sample.py) +* [Create a Spark cluster with Enterprise Security Package (ESP)](https://github.com/Azure-Samples/hdinsight-python-sdk-samples/blob/master/samples/create_esp_cluster_sample.py) + +You can get these samples for Python by cloning the [hdinsight-python-sdk-samples](https://github.com/Azure-Samples/hdinsight-python-sdk-samples) GitHub 
repository. + +[!INCLUDE [hdinsight-sdk-additional-functionality](../../includes/hdinsight-sdk-additional-functionality.md)] + +Code snippets for this additional SDK functionality can be found in the [HDInsight SDK for Python reference documentation](https://docs.microsoft.com/python/api/overview/azure/hdinsight?view=azure-python). \ No newline at end of file diff --git a/articles/hdinsight/interactive-query/TOC.yml b/articles/hdinsight/interactive-query/TOC.yml index 3eca0f184daad..3e04ad09de6af 100644 --- a/articles/hdinsight/interactive-query/TOC.yml +++ b/articles/hdinsight/interactive-query/TOC.yml @@ -58,7 +58,7 @@ - name: Use Azure PowerShell href: ../hadoop/apache-hadoop-use-hive-powershell.md maintainContext: true - - name: Use .NET SDK + - name: Use SDK for .NET href: ../hadoop/apache-hadoop-use-hive-dotnet-sdk.md maintainContext: true - name: Use the HDInsight tools for Visual Studio @@ -154,39 +154,50 @@ maintainContext: true - name: Manage items: - - name: Create clusters + - name: Create HDInsight clusters href: ../hdinsight-hadoop-provision-linux-clusters.md maintainContext: true items: + - name: Use the Azure portal + href: ../hdinsight-hadoop-create-linux-clusters-portal.md + maintainContext: true + - name: Use Azure Resource Manager templates + displayName: resource manager template, arm template, resource manager group + href: ../hdinsight-hadoop-create-linux-clusters-arm-templates.md + maintainContext: true + - name: Use SDK for .NET + href: https://docs.microsoft.com/dotnet/api/overview/azure/hdinsight?view=azure-dotnet#create-a-cluster + - name: Use SDK for Python + href: https://docs.microsoft.com/python/api/overview/azure/hdinsight?view=azure-python#create-a-cluster + - name: Use SDK for Java + href: https://docs.microsoft.com/java/api/overview/azure/hdinsight?view=azure-java-stable#create-a-cluster - name: Use Azure PowerShell href: ../hdinsight-hadoop-create-linux-clusters-azure-powershell.md maintainContext: true - name: Use cURL and the 
Azure REST API href: ../hdinsight-hadoop-create-linux-clusters-curl-rest.md - maintainContext: true - - name: Use the .NET SDK - href: ../hdinsight-hadoop-create-linux-clusters-dotnet-sdk.md - maintainContext: true + maintainContext: true - name: Use the Azure Classic CLI href: ../hdinsight-hadoop-create-linux-clusters-azure-cli.md maintainContext: true - - name: Use the Azure portal - href: ../hdinsight-hadoop-create-linux-clusters-portal.md - maintainContext: true - - name: Use Azure Resource Manager templates - displayName: resource manager template, arm template, resource manager group - href: ../hdinsight-hadoop-create-linux-clusters-arm-templates.md - maintainContext: true - name: Create on-demand clusters href: ../hdinsight-hadoop-create-linux-clusters-adf.md maintainContext: true - - name: Manage Apache Hadoop clusters + - name: Manage HDInsight clusters href: ../hdinsight-administer-use-portal-linux.md maintainContext: true items: - - name: Use .NET SDK - href: ../hdinsight-administer-use-dotnet-sdk.md - maintainContext: true + - name: Use SDK for .NET + href: https://docs.microsoft.com/dotnet/api/overview/azure/hdinsight?view=azure-dotnet#management + - name: Use SDK for Python + href: https://docs.microsoft.com/python/api/overview/azure/hdinsight?view=azure-python + - name: Use SDK for Java + href: https://docs.microsoft.com/java/api/overview/azure/hdinsight?view=azure-java-stable + - name: Use SDK for Go (Preview) + href: https://docs.microsoft.com/azure/hdinsight/hdinsight-go-sdk-overview - name: Use Azure PowerShell href: ../hdinsight-administer-use-powershell.md maintainContext: true @@ -325,13 +336,13 @@ href: https://azure.microsoft.com/resources/samples/?service=hdinsight - name: Azure PowerShell href: /powershell/module/az.hdinsight - - name: .NET SDK + - name: SDK for .NET href:
https://docs.microsoft.com/dotnet/api/overview/azure/hdinsight?view=azure-dotnet - - name: Python SDK + - name: SDK for Python href: https://docs.microsoft.com/python/api/overview/azure/hdinsight?view=azure-python - - name: Java SDK - href: https://docs.microsoft.com/java/api/overview/azure/hdinsight?view=azure-java-preview - - name: Go SDK + - name: SDK for Java + href: https://docs.microsoft.com/java/api/overview/azure/hdinsight?view=azure-java-stable + - name: SDK for Go href: ../hdinsight-go-sdk-overview.md - name: .NET (Apache HBase) href: https://www.nuget.org/packages/Microsoft.HBase.Client/ diff --git a/articles/hdinsight/kafka/TOC.yml b/articles/hdinsight/kafka/TOC.yml index 94e791a00dee8..38e4e163a22b6 100644 --- a/articles/hdinsight/kafka/TOC.yml +++ b/articles/hdinsight/kafka/TOC.yml @@ -110,22 +110,10 @@ maintainContext: true - name: Manage items: - - name: Create Linux clusters + - name: Create HDInsight clusters href: ../hdinsight-hadoop-provision-linux-clusters.md maintainContext: true items: - - name: Use Azure PowerShell - href: ../hdinsight-hadoop-create-linux-clusters-azure-powershell.md - maintainContext: true - - name: Use cURL and the Azure REST API - href: ../hdinsight-hadoop-create-linux-clusters-curl-rest.md - maintainContext: true - - name: Use the .NET SDK - href: ../hdinsight-hadoop-create-linux-clusters-dotnet-sdk.md - maintainContext: true - - name: Use the Azure Classic CLI - href: ../hdinsight-hadoop-create-linux-clusters-azure-cli.md - maintainContext: true - name: Use the Azure portal href: ../hdinsight-hadoop-create-linux-clusters-portal.md maintainContext: true @@ -133,19 +121,42 @@ displayName: resource manager template, arm template, resource manager group href: ../hdinsight-hadoop-create-linux-clusters-arm-templates.md maintainContext: true - - name: Manage Apache Hadoop clusters - href: ../hdinsight-administer-use-portal-linux.md - maintainContext: true - items: - - name: Use .NET SDK - href: 
../hdinsight-administer-use-dotnet-sdk.md - maintainContext: true + - name: Use SDK for .NET + href: https://docs.microsoft.com/dotnet/api/overview/azure/hdinsight?view=azure-dotnet#create-a-cluster + - name: Use SDK for Python + href: https://docs.microsoft.com/python/api/overview/azure/hdinsight?view=azure-python#create-a-cluster + - name: Use SDK for Java + href: https://docs.microsoft.com/java/api/overview/azure/hdinsight?view=azure-java-stable#create-a-cluster - name: Use Azure PowerShell - href: ../hdinsight-administer-use-powershell.md + href: ../hdinsight-hadoop-create-linux-clusters-azure-powershell.md maintainContext: true + - name: Use cURL and the Azure REST API + href: ../hdinsight-hadoop-create-linux-clusters-curl-rest.md + maintainContext: true - name: Use the Azure Classic CLI - href: ../hdinsight-administer-use-command-line.md + href: ../hdinsight-hadoop-create-linux-clusters-azure-cli.md maintainContext: true + - name: Create on-demand clusters + href: ../hdinsight-hadoop-create-linux-clusters-adf.md + maintainContext: true + - name: Manage HDInsight clusters + href: ../hdinsight-administer-use-portal-linux.md + maintainContext: true + items: + - name: Use SDK for .NET + href: https://docs.microsoft.com/dotnet/api/overview/azure/hdinsight?view=azure-dotnet#management + - name: Use SDK for Python + href: https://docs.microsoft.com/python/api/overview/azure/hdinsight?view=azure-python + - name: Use SDK for Java + href: https://docs.microsoft.com/java/api/overview/azure/hdinsight?view=azure-java-stable + - name: Use SDK for Go (Preview) + href: https://docs.microsoft.com/azure/hdinsight/hdinsight-go-sdk-overview + - name: Use Azure PowerShell + href: ../hdinsight-administer-use-powershell.md + maintainContext: true + - name: Use the Azure Classic CLI + href: ../hdinsight-administer-use-command-line.md + maintainContext: true - name: Manage clusters using the Apache Ambari web UI href: ../hdinsight-hadoop-manage-ambari.md maintainContext: true @@ 
-240,13 +251,13 @@ href: https://azure.microsoft.com/resources/samples/?service=hdinsight - name: Azure PowerShell href: /powershell/module/az.hdinsight - - name: .NET SDK + - name: SDK for .NET href: https://docs.microsoft.com/dotnet/api/overview/azure/hdinsight?view=azure-dotnet - - name: Python SDK + - name: SDK for Python href: https://docs.microsoft.com/python/api/overview/azure/hdinsight?view=azure-python - - name: Java SDK - href: https://docs.microsoft.com/java/api/overview/azure/hdinsight?view=azure-java-preview - - name: Go SDK + - name: SDK for Java + href: https://docs.microsoft.com/java/api/overview/azure/hdinsight?view=azure-java-stable + - name: SDK for Go href: ../hdinsight-go-sdk-overview.md - name: REST href: /rest/api/hdinsight/ diff --git a/articles/hdinsight/r-server/TOC.yml b/articles/hdinsight/r-server/TOC.yml index e00c814d36e62..610188c03165d 100644 --- a/articles/hdinsight/r-server/TOC.yml +++ b/articles/hdinsight/r-server/TOC.yml @@ -115,28 +115,34 @@ maintainContext: true - name: Manage items: - - name: Create Linux clusters + - name: Create HDInsight clusters href: ../hdinsight-hadoop-provision-linux-clusters.md maintainContext: true items: + - name: Use the Azure portal + href: ../hdinsight-hadoop-create-linux-clusters-portal.md + maintainContext: true + - name: Use Azure Resource Manager templates + displayName: resource manager template, arm template, resource manager group + href: ../hdinsight-hadoop-create-linux-clusters-arm-templates.md + maintainContext: true + - name: Use SDK for .NET + href: https://docs.microsoft.com/dotnet/api/overview/azure/hdinsight?view=azure-dotnet#create-a-cluster + - name: Use SDK for Python + href: https://docs.microsoft.com/python/api/overview/azure/hdinsight?view=azure-python#create-a-cluster + - name: Use SDK for Java + href: https://docs.microsoft.com/java/api/overview/azure/hdinsight?view=azure-java-stable#create-a-cluster - name: Use Azure PowerShell href: 
../hdinsight-hadoop-create-linux-clusters-azure-powershell.md maintainContext: true - name: Use cURL and the Azure REST API href: ../hdinsight-hadoop-create-linux-clusters-curl-rest.md - maintainContext: true - - name: Use the .NET SDK - href: ../hdinsight-hadoop-create-linux-clusters-dotnet-sdk.md - maintainContext: true + maintainContext: true - name: Use the Azure Classic CLI href: ../hdinsight-hadoop-create-linux-clusters-azure-cli.md maintainContext: true - - name: Use the Azure portal - href: ../hdinsight-hadoop-create-linux-clusters-portal.md - maintainContext: true - - name: Use Azure Resource Manager templates - displayName: resource manager template, arm template, resource manager group - href: ../hdinsight-hadoop-create-linux-clusters-arm-templates.md - maintainContext: true - name: Create on-demand clusters href: ../hdinsight-hadoop-create-linux-clusters-adf.md @@ -145,9 +151,14 @@ href: ../hdinsight-administer-use-portal-linux.md maintainContext: true items: - - name: Use .NET SDK - href: ../hdinsight-administer-use-dotnet-sdk.md - maintainContext: true + - name: Use SDK for .NET + href: https://docs.microsoft.com/dotnet/api/overview/azure/hdinsight?view=azure-dotnet#management + - name: Use SDK for Python + href: https://docs.microsoft.com/python/api/overview/azure/hdinsight?view=azure-python + - name: Use SDK for Java + href: https://docs.microsoft.com/java/api/overview/azure/hdinsight?view=azure-java-stable + - name: Use SDK for Go (Preview) + href: https://docs.microsoft.com/azure/hdinsight/hdinsight-go-sdk-overview - name: Use Azure PowerShell href: ../hdinsight-administer-use-powershell.md maintainContext: true @@ -183,7 +194,7 @@ href: ../hadoop/apache-hadoop-use-sqoop-mac-linux.md - name: Run using cURL href: ../hadoop/apache-hadoop-use-sqoop-curl.md - - name: Run using .NET SDK + - name: Run using SDK for .NET href:
../hadoop/apache-hadoop-use-sqoop-dotnet-sdk.md - name: Run using Azure PowerShell href: ../hadoop/apache-hadoop-use-sqoop-powershell.md @@ -274,13 +285,13 @@ href: https://azure.microsoft.com/resources/samples/?service=hdinsight - name: Azure PowerShell href: /powershell/module/az.hdinsight - - name: .NET SDK + - name: SDK for .NET href: https://docs.microsoft.com/dotnet/api/overview/azure/hdinsight?view=azure-dotnet - - name: Python SDK + - name: SDK for Python href: https://docs.microsoft.com/python/api/overview/azure/hdinsight?view=azure-python - - name: Java SDK - href: https://docs.microsoft.com/java/api/overview/azure/hdinsight?view=azure-java-preview - - name: Go SDK + - name: SDK for Java + href: https://docs.microsoft.com/java/api/overview/azure/hdinsight?view=azure-java-stable + - name: SDK for Go href: ../hdinsight-go-sdk-overview.md - name: .NET (Apache HBase) href: https://www.nuget.org/packages/Microsoft.HBase.Client/ @@ -305,11 +316,11 @@ - name: Azure Roadmap href: https://azure.microsoft.com/roadmap/?category=intelligence-analytics - name: Get help on the forum - href: https://social.msdn.microsoft.com/forums/azure/en-US/home?forum=hdinsight + href: https://social.msdn.microsoft.com/forums/azure/home?forum=hdinsight - name: Learning path href: https://azure.microsoft.com/documentation/learning-paths/hdinsight-self-guided-hadoop-training/ - name: Microsoft Professional Program for Big Data - href: https://academy.microsoft.com/en-us/professional-program/big-data/ + href: https://academy.microsoft.com/en-us/professional-program/tracks/big-data/ - name: Pricing calculator href: https://azure.microsoft.com/pricing/calculator/ - name: Windows tools for HDInsight diff --git a/articles/hdinsight/spark/TOC.yml b/articles/hdinsight/spark/TOC.yml index c9f933c548e40..e6a62db52db51 100644 --- a/articles/hdinsight/spark/TOC.yml +++ b/articles/hdinsight/spark/TOC.yml @@ -1,443 +1,462 @@ -- name: Azure HDInsight Documentation - href: /azure/hdinsight -- name: 
Overview - items: - - name: About Apache Spark on HDInsight - href: apache-spark-overview.md - - name: Apache Hadoop components on HDInsight - href: ../hdinsight-component-versioning.md - maintainContext: true - - name: HDInsight 4.0 - href: ../hdinsight-version-release.md - maintainContext: true -- name: Quickstarts - items: - - name: Create an Apache Spark cluster - Portal - href: apache-spark-jupyter-spark-sql-use-portal.md - - name: Create an Apache Spark cluster - PowerShell - href: apache-spark-jupyter-spark-sql-use-powershell.md - - name: Create an Apache Spark cluster - Resource Manager Template - displayName: resource manager template, arm template, resource manager group - href: apache-spark-jupyter-spark-sql.md -- name: Tutorials - items: - - name: Run queries on an Apache Spark cluster - href: apache-spark-load-data-run-query.md - - name: Use VSCode to run Apache Spark queries - href: ../hdinsight-for-vscode.md - maintainContext: true - - name: Use IntelliJ to run Apache Spark queries - href: apache-spark-intellij-tool-plugin.md - - name: Analyze data using BI tools - href: apache-spark-use-bi-tools.md - - name: Run a streaming job - href: apache-spark-eventhub-streaming.md - - name: Create a machine learning app - href: apache-spark-ipython-notebook-machine-learning.md - - name: Create an Apache Spark app in IntelliJ - href: apache-spark-create-standalone-application.md - - name: Monitor cluster availability with Ambari and Azure Monitor logs - href: ../hdinsight-cluster-availability.md - maintainContext: true -- name: How to - items: - - name: Use cluster storage - items: - - name: Using Azure Storage - href: ../hdinsight-hadoop-use-blob-storage.md - maintainContext: true - - name: Using Azure Data Lake Storage Gen2 - href: ../hdinsight-hadoop-use-data-lake-storage-gen2.md - maintainContext: true - - name: Using Azure Data Lake Storage Gen1 - href: ../hdinsight-hadoop-use-data-lake-store.md - maintainContext: true - - name: Develop - items: - - name: 
Use an interactive Apache Spark Shell
-      href: apache-spark-shell.md
-    - name: Remote jobs with Apache Livy
-      href: apache-spark-livy-rest-interface.md
-    - name: Create apps using Eclipse
-      href: apache-spark-eclipse-tool-plugin.md
-    - name: Create apps using IntelliJ
-      href: apache-spark-intellij-tool-plugin.md
-    - name: Debug Apache Spark jobs remotely with IntelliJ through SSH
-      href: apache-spark-intellij-tool-debug-remotely-through-ssh.md
-    - name: Debug Apache Spark jobs remotely with IntelliJ through VPN
-      href: apache-spark-intellij-tool-plugin-debug-jobs-remotely.md
-    - name: Create non-interactive authentication .NET HDInsight applications
-      href: ../hdinsight-create-non-interactive-authentication-dotnet-applications.md
-      maintainContext: true
-  - name: Use HDInsight tools
-    items:
-    - name: Tools for Visual Studio
-      href: ../hadoop/apache-hadoop-visual-studio-tools-get-started.md
-      maintainContext: true
-    - name: Tools for VSCode
-      href: ../hdinsight-for-vscode.md
-      maintainContext: true
-    - name: Tools for IntelliJ
-      href: apache-spark-intellij-tool-plugin.md
-    - name: Tools for Eclipse
-      href: apache-spark-eclipse-tool-plugin.md
-    - name: PySpark for VSCode
-      href: ../set-up-pyspark-interactive-environment.md
-      maintainContext: true
-    - name: Debug Apache Spark in IntelliJ
-      href: apache-spark-intellij-tool-debug-remotely-through-ssh.md
-  - name: Use notebooks with Apache Spark
-    items:
-    - name: Use a local Jupyter notebook
-      href: apache-spark-jupyter-notebook-install-locally.md
-    - name: Jupyter notebook kernels
-      href: apache-spark-jupyter-notebook-kernels.md
-    - name: Use external packages with Jupyter using cell magic
-      href: apache-spark-jupyter-notebook-use-external-packages.md
-    - name: Use external packages with Jupyter using script action
-      href: apache-spark-python-package-installation.md
-    - name: Use Apache Zeppelin notebooks
-      href: apache-spark-zeppelin-notebook.md
-  - name: Use with other Azure services
-    items:
-    - name: Use with Data Lake Storage
-      href: apache-spark-use-with-data-lake-store.md
-    - name: Connect to Azure SQL database
-      href: apache-spark-connect-to-sql-database.md
-    - name: Implement a lambda architecture using Apache Spark and Cosmos DB
-      href: ../../cosmos-db/lambda-architecture.md
-      maintainContext: true
-    - name: Run Azure ML workloads with AutoML
-      href: apache-spark-run-machine-learning-automl.md
-      maintainContext: true
-  - name: Apache Spark streaming
-    items:
-    - name: Streaming overview
-      href: apache-spark-streaming-overview.md
-    - name: Structured Streaming overview
-      href: apache-spark-structured-streaming-overview.md
-    - name: Create highly available Apache Spark streaming jobs in Apache Hadoop YARN
-      href: apache-spark-streaming-high-availability.md
-    - name: Structured streaming with Apache Kafka
-      href: ../hdinsight-apache-kafka-spark-structured-streaming.md
-      maintainContext: true
-    - name: Structured streaming from Apache Kafka to CosmosDB
-      href: ../apache-kafka-spark-structured-streaming-cosmosdb.md
-      maintainContext: true
-    - name: Spark streaming (DStream) with Apache Kafka
-      href: ../hdinsight-apache-spark-with-kafka.md
-      maintainContext: true
-    - name: Create Apache Spark streaming jobs with exactly-once event processing
-      href: apache-spark-streaming-exactly-once.md
-  - name: Apache Spark and Machine Learning
-    items:
-    - name: Predict food inspection results
-      href: apache-spark-machine-learning-mllib-ipython.md
-    - name: Analyze website logs
-      href: apache-spark-custom-library-website-log-analysis.md
-    - name: Use Caffe for deep learning
-      href: apache-spark-deep-learning-caffe.md
-    - name: Use with Microsoft Cognitive Toolkit
-      href: apache-spark-microsoft-cognitive-toolkit.md
-    - name: Create an Apache Spark machine learning pipeline
-      href: apache-spark-creating-ml-pipelines.md
-  - name: Analyze big data
-    items:
-    - name: Use Apache Spark to read and write Apache HBase data
-      href: ../hdinsight-using-spark-query-hbase.md
-      maintainContext: true
-    - name: Analyze Application Insights telemetry logs
-      href: apache-spark-analyze-application-insight-logs.md
-  - name: Extend clusters
-    items:
-    - name: Customize clusters using Bootstrap
-      href: ../hdinsight-hadoop-customize-cluster-bootstrap.md
-      maintainContext: true
-    - name: Customize clusters using Script Action
-      href: ../hdinsight-hadoop-customize-cluster-linux.md
-      maintainContext: true
-    - name: Develop script actions
-      href: ../hdinsight-hadoop-script-actions-linux.md
-      maintainContext: true
-    - name: Install and use Presto
-      href: ../hdinsight-hadoop-install-presto.md
-      maintainContext: true
-    - name: Add Apache Hive libraries
-      href: ../hdinsight-hadoop-add-hive-libraries.md
-      maintainContext: true
-    - name: Use Apache Giraph
-      href: ../hdinsight-hadoop-giraph-install-linux.md
-      maintainContext: true
-    - name: Use Hue
-      href: ../hdinsight-hadoop-hue-linux.md
-      maintainContext: true
-    - name: Use empty edge nodes
-      href: ../hdinsight-apps-use-edge-node.md
-      maintainContext: true
-  - name: Build HDInsight applications
-    items:
-    - name: Install custom apps
-      href: ../hdinsight-apps-install-custom-applications.md
-      maintainContext: true
-    - name: Use REST to install apps
-      href: https://msdn.microsoft.com/library/mt706515.aspx
-    - name: Publish HDInsight apps to Azure Marketplace
-      href: ../hdinsight-apps-publish-applications.md
-      maintainContext: true
-  - name: Secure
-    items:
-    - name: Use SSH with HDInsight
-      href: ../hdinsight-hadoop-linux-use-ssh-unix.md
-      maintainContext: true
-    - name: Use SSH tunneling
-      href: ../hdinsight-linux-ambari-ssh-tunnel.md
-      maintainContext: true
-    - name: Restrict access to data
-      href: ../hdinsight-storage-sharedaccesssignature-permissions.md
-      maintainContext: true
-    - name: Authorize users for Apache Ambari Views
-      href: ../hdinsight-authorize-users-to-ambari.md
-      maintainContext: true
-  - name: Manage
-    items:
-    - name: Manage cluster resources
-      href: apache-spark-resource-manager.md
-    - name: Manage Apache Spark applications using extended History Server
-      href: apache-azure-spark-history-server.md
-    - name: Configure Apache Spark settings
-      href: apache-spark-settings.md
-    - name: Optimize Apache Spark jobs
-      href: apache-spark-perf.md
-    - name: Enable caching with IO Cache
-      href: apache-spark-improve-performance-iocache.md
-    - name: Cluster capacity planning
-      href: ../hdinsight-capacity-planning.md
-      maintainContext: true
-    - name: Create Linux clusters
-      href: ../hdinsight-hadoop-provision-linux-clusters.md
-      maintainContext: true
-      items:
-      - name: Use Azure PowerShell
-        href: ../hdinsight-hadoop-create-linux-clusters-azure-powershell.md
-        maintainContext: true
-      - name: Use cURL and the Azure REST API
-        href: ../hdinsight-hadoop-create-linux-clusters-curl-rest.md
-        maintainContext: true
-      - name: Use the .NET SDK
-        href: ../hdinsight-hadoop-create-linux-clusters-dotnet-sdk.md
-        maintainContext: true
-      - name: Use the Azure Classic CLI
-        href: ../hdinsight-hadoop-create-linux-clusters-azure-cli.md
-        maintainContext: true
-      - name: Use the Azure portal
-        href: ../hdinsight-hadoop-create-linux-clusters-portal.md
-        maintainContext: true
-      - name: Use Azure Resource Manager templates
-        displayName: resource manager template, arm template, resource manager group
-        href: ../hdinsight-hadoop-create-linux-clusters-arm-templates.md
-    - name: Create on-demand clusters
-      href: ../hdinsight-hadoop-create-linux-clusters-adf.md
-      maintainContext: true
-    - name: Manage HDInsight clusters
-      items:
-      - name: Use Portal
-        href: ../hdinsight-administer-use-portal-linux.md
-        maintainContext: true
-      - name: Use .NET SDK
-        href: ../hdinsight-administer-use-dotnet-sdk.md
-        maintainContext: true
-      - name: Use Azure PowerShell
-        href: ../hdinsight-administer-use-powershell.md
-        maintainContext: true
-      - name: Use the Azure Classic CLI
-        href: ../hdinsight-administer-use-command-line.md
-        maintainContext: true
-    - name: Manage clusters using the Apache Ambari web UI
-      href: ../hdinsight-hadoop-manage-ambari.md
-      maintainContext: true
-      items:
-      - name: Optimize clusters with the Apache Ambari web UI
-        href: ../hdinsight-changing-configs-via-ambari.md
-        maintainContext: true
-      - name: Use Apache Ambari REST API
-        href: ../hdinsight-hadoop-manage-ambari-rest-api.md
-        maintainContext: true
-    - name: Manage logs for an HDInsight cluster
-      href: ../hdinsight-log-management.md
-      maintainContext: true
-    - name: Add storage accounts
-      href: ../hdinsight-hadoop-add-storage.md
-      maintainContext: true
-    - name: Upload data for Apache Hadoop jobs
-      href: ../hdinsight-upload-data.md
-      maintainContext: true
-    - name: Multiple HDInsight clusters with Data Lake Storage
-      href: ../hdinsight-multiple-clusters-data-lake-store.md
-      maintainContext: true
-    - name: Import and export data with Apache Sqoop
-      href: ../hadoop/hdinsight-use-sqoop.md
-      maintainContext: true
-      items:
-      - name: Connect with SSH
-        href: ../hadoop/apache-hadoop-use-sqoop-mac-linux.md
-        maintainContext: true
-      - name: Run using cURL
-        href: ../hadoop/apache-hadoop-use-sqoop-curl.md
-        maintainContext: true
-      - name: Run using .NET SDK
-        href: ../hadoop/apache-hadoop-use-sqoop-dotnet-sdk.md
-        maintainContext: true
-      - name: Run using Azure PowerShell
-        href: ../hadoop/apache-hadoop-use-sqoop-powershell.md
-        maintainContext: true
-    - name: Use Apache Oozie for workflows
-      href: ../hdinsight-use-oozie-linux-mac.md
-      maintainContext: true
-    - name: Cluster and service ports and URIs
-      href: ../hdinsight-hadoop-port-settings-for-services.md
-      maintainContext: true
-    - name: Migrate to Resource Manager development tools
-      href: ../hdinsight-hadoop-development-using-azure-resource-manager.md
-      maintainContext: true
-    - name: Availability and reliability
-      href: ../hdinsight-high-availability-linux.md
-      maintainContext: true
-    - name: Upgrade HDInsight cluster to newer version
-      href: ../hdinsight-upgrade-cluster.md
-      maintainContext: true
-    - name: OS patching for HDInsight cluster
-      href: ../hdinsight-os-patching.md
-      maintainContext: true
-  - name: Troubleshoot
-    items:
-    - name: Debug Apache Spark jobs
-      href: apache-spark-job-debugging.md
-    - name: Debug Apache Spark Jobs through Job Graph
-      href: apache-azure-spark-history-server.md
-    - name: Use IntelliJ to debug Apache Spark job
-      href: apache-spark-intellij-tool-debug-remotely-through-ssh.md
-    - name: Troubleshoot a slow or failing HDInsight cluster
-      href: ../hdinsight-troubleshoot-failed-cluster.md
-      maintainContext: true
-    - name: Apache Spark troubleshooting
-      href: apache-troubleshoot-spark.md
-    - name: Apache Hadoop HDFS troubleshooting
-      href: ../hdinsight-troubleshoot-hdfs.md
-      maintainContext: true
-    - name: Apache Hadoop YARN troubleshooting
-      href: ../hdinsight-troubleshoot-yarn.md
-      maintainContext: true
-    - name: Known issues
-      href: apache-spark-known-issues.md
-  - name: Resources
-    items:
-    - name: Information about using HDInsight on Linux
-      href: ../hdinsight-hadoop-linux-information.md
-      maintainContext: true
-    - name: Apache Hadoop memory and performance
-      href: ../hdinsight-hadoop-stack-trace-error-messages.md
-      maintainContext: true
-    - name: Access Apache Hadoop YARN application logs on Linux
-      href: ../hdinsight-hadoop-access-yarn-app-logs-linux.md
-      maintainContext: true
-    - name: Enable heap dumps for Apache Hadoop services
-      href: ../hdinsight-hadoop-collect-debug-heap-dump-linux.md
-      maintainContext: true
-    - name: Understand and resolve WebHCat errors
-      href: ../hdinsight-hadoop-templeton-webhcat-debug-errors.md
-      maintainContext: true
-    - name: Apache Hive settings fix Out of Memory error
-      href: ../hdinsight-hadoop-hive-out-of-memory-error-oom.md
-      maintainContext: true
-    - name: Optimize Apache Hive queries
-      href: ../hdinsight-hadoop-optimize-hive-query.md
-      maintainContext: true
-- name: Enterprise readiness
-  items:
-  - name: Enterprise security
-    items:
-    - name: Overview
-      href: ../domain-joined/apache-domain-joined-introduction.md
-      maintainContext: true
-    - name: Plan for enterprise security
-      href: ../domain-joined/apache-domain-joined-architecture.md
-      maintainContext: true
-    - name: Configure enterprise security in HDInsight
-      href: ../domain-joined/apache-domain-joined-configure.md
-      maintainContext: true
-    - name: Configure enterprise security in HDInsight using Azure AD DS
-      href: ../domain-joined/apache-domain-joined-configure-using-azure-adds.md
-      maintainContext: true
-    - name: Configure Apache Hive policies
-      href: ../domain-joined/apache-domain-joined-run-hive.md
-      maintainContext: true
-    - name: Manage clusters with enterprise security
-      href: ../domain-joined/apache-domain-joined-manage.md
-      maintainContext: true
-  - name: Securing data
-    href: ../hdinsight-hadoop-create-linux-clusters-with-secure-transfer-storage.md
-    maintainContext: true
-  - name: Network access
-    items:
-    - name: HDInsight in Azure VNet
-      href: ../hdinsight-extend-hadoop-virtual-network.md
-      maintainContext: true
-    - name: Connect HDInsight with on-premises network
-      href: ../connect-on-premises-network.md
-      maintainContext: true
-  - name: Monitoring using Azure Monitor logs
-    items:
-    - name: Use Azure Monitor logs
-      href: ../hdinsight-hadoop-oms-log-analytics-tutorial.md
-      maintainContext: true
-    - name: Use queries with Azure Monitor logs
-      href: ../hdinsight-hadoop-oms-log-analytics-use-queries.md
-      maintainContext: true
-    - name: Monitor cluster performance
-      href: ../hdinsight-key-scenarios-to-monitor.md
-      maintainContext: true
-- name: Reference
-  items:
-  - name: Code samples
-    href: https://azure.microsoft.com/resources/samples/?service=hdinsight
-  - name: Azure PowerShell
-    href: /powershell/module/az.hdinsight
-  - name: .NET SDK
-    href: https://docs.microsoft.com/dotnet/api/overview/azure/hdinsight?view=azure-dotnet
-  - name: Python SDK
-    href: https://docs.microsoft.com/python/api/overview/azure/hdinsight?view=azure-python
-  - name: Java SDK
-    href: https://docs.microsoft.com/java/api/overview/azure/hdinsight?view=azure-java-preview
-  - name: Go SDK
-    href: ../hdinsight-go-sdk-overview.md
-  - name: .NET (Apache HBase)
-    href: https://www.nuget.org/packages/Microsoft.HBase.Client/
-  - name: REST API
-    href: /rest/api/hdinsight/
-  - name: REST API (Apache Spark)
-    href: /rest/api/hdinsightspark/
-  - name: Resource Manager template
-    displayName: resource manager template, arm template, resource manager group
-    href: /azure/templates/microsoft.hdinsight/allversions
-- name: Resources
-  items:
-  - name: Release notes
-    href: ../hdinsight-release-notes.md
-    maintainContext: true
-    items:
-    - name: Archived release notes
-      href: ../hdinsight-release-notes-archive.md
-      maintainContext: true
-  - name: Azure Roadmap
-    href: https://azure.microsoft.com/roadmap/?category=intelligence-analytics
-  - name: Get help on the forum
-    href: https://social.msdn.microsoft.com/forums/azure/en-US/home?forum=hdinsight
-  - name: Learning path
-    href: https://azure.microsoft.com/documentation/learning-paths/hdinsight-self-guided-hadoop-training/
-  - name: Microsoft Professional Program for Big Data
-    href: https://academy.microsoft.com/en-us/professional-program/big-data/
-  - name: Pricing calculator
-    href: https://azure.microsoft.com/pricing/calculator/
-  - name: Windows tools for HDInsight
-    href: ../hdinsight-hadoop-windows-tools.md
-    maintainContext: true
+- name: Azure HDInsight Documentation
+  href: /azure/hdinsight
+- name: Overview
+  items:
+  - name: About Apache Spark on HDInsight
+    href: apache-spark-overview.md
+  - name: Apache Hadoop components on HDInsight
+    href: ../hdinsight-component-versioning.md
+    maintainContext: true
+  - name: HDInsight 4.0
+    href: ../hdinsight-version-release.md
+    maintainContext: true
+- name: Quickstarts
+  items:
+  - name: Create an Apache Spark cluster - Portal
+    href: apache-spark-jupyter-spark-sql-use-portal.md
+  - name: Create an Apache Spark cluster - PowerShell
+    href: apache-spark-jupyter-spark-sql-use-powershell.md
+  - name: Create an Apache Spark cluster - Resource Manager template
+    displayName: resource manager template, arm template, resource manager group
+    href: apache-spark-jupyter-spark-sql.md
+- name: Tutorials
+  items:
+  - name: Run queries on an Apache Spark cluster
+    href: apache-spark-load-data-run-query.md
+  - name: Use VSCode to run Apache Spark queries
+    href: ../hdinsight-for-vscode.md
+    maintainContext: true
+  - name: Use IntelliJ to run Apache Spark queries
+    href: apache-spark-intellij-tool-plugin.md
+  - name: Analyze data using BI tools
+    href: apache-spark-use-bi-tools.md
+  - name: Run a streaming job
+    href: apache-spark-eventhub-streaming.md
+  - name: Create a machine learning app
+    href: apache-spark-ipython-notebook-machine-learning.md
+  - name: Create an Apache Spark app in IntelliJ
+    href: apache-spark-create-standalone-application.md
+  - name: Monitor cluster availability with Ambari and Azure Monitor logs
+    href: ../hdinsight-cluster-availability.md
+    maintainContext: true
+- name: Samples
+  items:
+  - name: .NET samples
+    href: ../hdinsight-sdk-dotnet-samples.md
+    maintainContext: true
+  - name: Java samples
+    href: ../hdinsight-sdk-java-samples.md
+    maintainContext: true
+  - name: Python samples
+    href: ../hdinsight-sdk-python-samples.md
+    maintainContext: true
+- name: How to
+  items:
+  - name: Use cluster storage
+    items:
+    - name: Using Azure Storage
+      href: ../hdinsight-hadoop-use-blob-storage.md
+      maintainContext: true
+    - name: Using Azure Data Lake Storage Gen2
+      href: ../hdinsight-hadoop-use-data-lake-storage-gen2.md
+      maintainContext: true
+    - name: Using Azure Data Lake Storage Gen1
+      href: ../hdinsight-hadoop-use-data-lake-store.md
+      maintainContext: true
+  - name: Develop
+    items:
+    - name: Use an interactive Apache Spark Shell
+      href: apache-spark-shell.md
+    - name: Remote jobs with Apache Livy
+      href: apache-spark-livy-rest-interface.md
+    - name: Create apps using Eclipse
+      href: apache-spark-eclipse-tool-plugin.md
+    - name: Create apps using IntelliJ
+      href: apache-spark-intellij-tool-plugin.md
+    - name: Debug Apache Spark jobs remotely with IntelliJ through SSH
+      href: apache-spark-intellij-tool-debug-remotely-through-ssh.md
+    - name: Debug Apache Spark jobs remotely with IntelliJ through VPN
+      href: apache-spark-intellij-tool-plugin-debug-jobs-remotely.md
+    - name: Create non-interactive authentication .NET HDInsight applications
+      href: ../hdinsight-create-non-interactive-authentication-dotnet-applications.md
+      maintainContext: true
+  - name: Use HDInsight tools
+    items:
+    - name: Tools for Visual Studio
+      href: ../hadoop/apache-hadoop-visual-studio-tools-get-started.md
+      maintainContext: true
+    - name: Tools for VSCode
+      href: ../hdinsight-for-vscode.md
+      maintainContext: true
+    - name: Tools for IntelliJ
+      href: apache-spark-intellij-tool-plugin.md
+    - name: Tools for Eclipse
+      href: apache-spark-eclipse-tool-plugin.md
+    - name: PySpark for VSCode
+      href: ../set-up-pyspark-interactive-environment.md
+      maintainContext: true
+    - name: Debug Apache Spark in IntelliJ
+      href: apache-spark-intellij-tool-debug-remotely-through-ssh.md
+  - name: Use notebooks with Apache Spark
+    items:
+    - name: Use a local Jupyter notebook
+      href: apache-spark-jupyter-notebook-install-locally.md
+    - name: Jupyter notebook kernels
+      href: apache-spark-jupyter-notebook-kernels.md
+    - name: Use external packages with Jupyter using cell magic
+      href: apache-spark-jupyter-notebook-use-external-packages.md
+    - name: Use external packages with Jupyter using script action
+      href: apache-spark-python-package-installation.md
+    - name: Use Apache Zeppelin notebooks
+      href: apache-spark-zeppelin-notebook.md
+  - name: Use with other Azure services
+    items:
+    - name: Use with Data Lake Storage
+      href: apache-spark-use-with-data-lake-store.md
+    - name: Connect to Azure SQL database
+      href: apache-spark-connect-to-sql-database.md
+    - name: Implement a lambda architecture using Apache Spark and Cosmos DB
+      href: ../../cosmos-db/lambda-architecture.md
+      maintainContext: true
+    - name: Run Azure ML workloads with AutoML
+      href: apache-spark-run-machine-learning-automl.md
+      maintainContext: true
+  - name: Apache Spark streaming
+    items:
+    - name: Streaming overview
+      href: apache-spark-streaming-overview.md
+    - name: Structured Streaming overview
+      href: apache-spark-structured-streaming-overview.md
+    - name: Create highly available Apache Spark streaming jobs in Apache Hadoop YARN
+      href: apache-spark-streaming-high-availability.md
+    - name: Structured streaming with Apache Kafka
+      href: ../hdinsight-apache-kafka-spark-structured-streaming.md
+      maintainContext: true
+    - name: Structured streaming from Apache Kafka to CosmosDB
+      href: ../apache-kafka-spark-structured-streaming-cosmosdb.md
+      maintainContext: true
+    - name: Spark streaming (DStream) with Apache Kafka
+      href: ../hdinsight-apache-spark-with-kafka.md
+      maintainContext: true
+    - name: Create Apache Spark streaming jobs with exactly-once event processing
+      href: apache-spark-streaming-exactly-once.md
+  - name: Apache Spark and Machine Learning
+    items:
+    - name: Predict food inspection results
+      href: apache-spark-machine-learning-mllib-ipython.md
+    - name: Analyze website logs
+      href: apache-spark-custom-library-website-log-analysis.md
+    - name: Use Caffe for deep learning
+      href: apache-spark-deep-learning-caffe.md
+    - name: Use with Microsoft Cognitive Toolkit
+      href: apache-spark-microsoft-cognitive-toolkit.md
+    - name: Create an Apache Spark machine learning pipeline
+      href: apache-spark-creating-ml-pipelines.md
+  - name: Analyze big data
+    items:
+    - name: Use Apache Spark to read and write Apache HBase data
+      href: ../hdinsight-using-spark-query-hbase.md
+      maintainContext: true
+    - name: Analyze Application Insights telemetry logs
+      href: apache-spark-analyze-application-insight-logs.md
+  - name: Extend clusters
+    items:
+    - name: Customize clusters using Bootstrap
+      href: ../hdinsight-hadoop-customize-cluster-bootstrap.md
+      maintainContext: true
+    - name: Customize clusters using Script Action
+      href: ../hdinsight-hadoop-customize-cluster-linux.md
+      maintainContext: true
+    - name: Develop script actions
+      href: ../hdinsight-hadoop-script-actions-linux.md
+      maintainContext: true
+    - name: Install and use Presto
+      href: ../hdinsight-hadoop-install-presto.md
+      maintainContext: true
+    - name: Add Apache Hive libraries
+      href: ../hdinsight-hadoop-add-hive-libraries.md
+      maintainContext: true
+    - name: Use Apache Giraph
+      href: ../hdinsight-hadoop-giraph-install-linux.md
+      maintainContext: true
+    - name: Use Hue
+      href: ../hdinsight-hadoop-hue-linux.md
+      maintainContext: true
+    - name: Use empty edge nodes
+      href: ../hdinsight-apps-use-edge-node.md
+      maintainContext: true
+  - name: Build HDInsight applications
+    items:
+    - name: Install custom apps
+      href: ../hdinsight-apps-install-custom-applications.md
+      maintainContext: true
+    - name: Use REST to install apps
+      href: https://msdn.microsoft.com/library/mt706515.aspx
+    - name: Publish HDInsight apps to Azure Marketplace
+      href: ../hdinsight-apps-publish-applications.md
+      maintainContext: true
+  - name: Secure
+    items:
+    - name: Use SSH with HDInsight
+      href: ../hdinsight-hadoop-linux-use-ssh-unix.md
+      maintainContext: true
+    - name: Use SSH tunneling
+      href: ../hdinsight-linux-ambari-ssh-tunnel.md
+      maintainContext: true
+    - name: Restrict access to data
+      href: ../hdinsight-storage-sharedaccesssignature-permissions.md
+      maintainContext: true
+    - name: Authorize users for Apache Ambari Views
+      href: ../hdinsight-authorize-users-to-ambari.md
+      maintainContext: true
+  - name: Manage
+    items:
+    - name: Manage cluster resources
+      href: apache-spark-resource-manager.md
+    - name: Manage Apache Spark applications using extended History Server
+      href: apache-azure-spark-history-server.md
+    - name: Configure Apache Spark settings
+      href: apache-spark-settings.md
+    - name: Optimize Apache Spark jobs
+      href: apache-spark-perf.md
+    - name: Enable caching with IO Cache
+      href: apache-spark-improve-performance-iocache.md
+    - name: Cluster capacity planning
+      href: ../hdinsight-capacity-planning.md
+      maintainContext: true
+    - name: Create HDInsight clusters
+      href: ../hdinsight-hadoop-provision-linux-clusters.md
+      maintainContext: true
+      items:
+      - name: Use the Azure portal
+        href: ../hdinsight-hadoop-create-linux-clusters-portal.md
+        maintainContext: true
+      - name: Use Azure Resource Manager templates
+        displayName: resource manager template, arm template, resource manager group
+        href: ../hdinsight-hadoop-create-linux-clusters-arm-templates.md
+        maintainContext: true
+      - name: Use SDK for .NET
+        href: https://docs.microsoft.com/dotnet/api/overview/azure/hdinsight?view=azure-dotnet#create-a-cluster
+      - name: Use SDK for Python
+        href: https://docs.microsoft.com/python/api/overview/azure/hdinsight?view=azure-python#create-a-cluster
+      - name: Use SDK for Java
+        href: https://docs.microsoft.com/java/api/overview/azure/hdinsight?view=azure-java-stable#create-a-cluster
+      - name: Use Azure PowerShell
+        href: ../hdinsight-hadoop-create-linux-clusters-azure-powershell.md
+        maintainContext: true
+      - name: Use cURL and the Azure REST API
+        href: ../hdinsight-hadoop-create-linux-clusters-curl-rest.md
+        maintainContext: true
+      - name: Use the Azure Classic CLI
+        href: ../hdinsight-hadoop-create-linux-clusters-azure-cli.md
+        maintainContext: true
+      - name: Create on-demand clusters
+        href: ../hdinsight-hadoop-create-linux-clusters-adf.md
+        maintainContext: true
+    - name: Manage HDInsight clusters
+      href: ../hdinsight-administer-use-portal-linux.md
+      maintainContext: true
+      items:
+      - name: Use SDK for .NET
+        href: https://docs.microsoft.com/dotnet/api/overview/azure/hdinsight?view=azure-dotnet#management
+      - name: Use SDK for Python
+        href: https://docs.microsoft.com/python/api/overview/azure/hdinsight?view=azure-python
+      - name: Use SDK for Java
+        href: https://docs.microsoft.com/java/api/overview/azure/hdinsight?view=azure-java-stable
+      - name: Use SDK for Go (Preview)
+        href: https://docs.microsoft.com/azure/hdinsight/hdinsight-go-sdk-overview
+      - name: Use Azure PowerShell
+        href: ../hdinsight-administer-use-powershell.md
+        maintainContext: true
+      - name: Use the Azure Classic CLI
+        href: ../hdinsight-administer-use-command-line.md
+        maintainContext: true
+    - name: Manage clusters using the Apache Ambari web UI
+      href: ../hdinsight-hadoop-manage-ambari.md
+      maintainContext: true
+      items:
+      - name: Optimize clusters with the Apache Ambari web UI
+        href: ../hdinsight-changing-configs-via-ambari.md
+        maintainContext: true
+      - name: Use Apache Ambari REST API
+        href: ../hdinsight-hadoop-manage-ambari-rest-api.md
+        maintainContext: true
+    - name: Manage logs for an HDInsight cluster
+      href: ../hdinsight-log-management.md
+      maintainContext: true
+    - name: Add storage accounts
+      href: ../hdinsight-hadoop-add-storage.md
+      maintainContext: true
+    - name: Upload data for Apache Hadoop jobs
+      href: ../hdinsight-upload-data.md
+      maintainContext: true
+    - name: Multiple HDInsight clusters with Data Lake Storage
+      href: ../hdinsight-multiple-clusters-data-lake-store.md
+      maintainContext: true
+    - name: Import and export data with Apache Sqoop
+      href: ../hadoop/hdinsight-use-sqoop.md
+      maintainContext: true
+      items:
+      - name: Connect with SSH
+        href: ../hadoop/apache-hadoop-use-sqoop-mac-linux.md
+        maintainContext: true
+      - name: Run using cURL
+        href: ../hadoop/apache-hadoop-use-sqoop-curl.md
+        maintainContext: true
+      - name: Run using SDK for .NET
+        href: ../hadoop/apache-hadoop-use-sqoop-dotnet-sdk.md
+        maintainContext: true
+      - name: Run using Azure PowerShell
+        href: ../hadoop/apache-hadoop-use-sqoop-powershell.md
+        maintainContext: true
+    - name: Use Apache Oozie for workflows
+      href: ../hdinsight-use-oozie-linux-mac.md
+      maintainContext: true
+    - name: Cluster and service ports and URIs
+      href: ../hdinsight-hadoop-port-settings-for-services.md
+      maintainContext: true
+    - name: Migrate to Resource Manager development tools
+      href: ../hdinsight-hadoop-development-using-azure-resource-manager.md
+      maintainContext: true
+    - name: Availability and reliability
+      href: ../hdinsight-high-availability-linux.md
+      maintainContext: true
+    - name: Upgrade HDInsight cluster to newer version
+      href: ../hdinsight-upgrade-cluster.md
+      maintainContext: true
+    - name: OS patching for HDInsight cluster
+      href: ../hdinsight-os-patching.md
+      maintainContext: true
+  - name: Troubleshoot
+    items:
+    - name: Debug Apache Spark jobs
+      href: apache-spark-job-debugging.md
+    - name: Debug Apache Spark Jobs through Job Graph
+      href: apache-azure-spark-history-server.md
+    - name: Use IntelliJ to debug Apache Spark job
+      href: apache-spark-intellij-tool-debug-remotely-through-ssh.md
+    - name: Troubleshoot a slow or failing HDInsight cluster
+      href: ../hdinsight-troubleshoot-failed-cluster.md
+      maintainContext: true
+    - name: Apache Spark troubleshooting
+      href: apache-troubleshoot-spark.md
+    - name: Apache Hadoop HDFS troubleshooting
+      href: ../hdinsight-troubleshoot-hdfs.md
+      maintainContext: true
+    - name: Apache Hadoop YARN troubleshooting
+      href: ../hdinsight-troubleshoot-yarn.md
+      maintainContext: true
+    - name: Known issues
+      href: apache-spark-known-issues.md
+  - name: Resources
+    items:
+    - name: Information about using HDInsight on Linux
+      href: ../hdinsight-hadoop-linux-information.md
+      maintainContext: true
+    - name: Apache Hadoop memory and performance
+      href: ../hdinsight-hadoop-stack-trace-error-messages.md
+      maintainContext: true
+    - name: Access Apache Hadoop YARN application logs on Linux
+      href: ../hdinsight-hadoop-access-yarn-app-logs-linux.md
+      maintainContext: true
+    - name: Enable heap dumps for Apache Hadoop services
+      href: ../hdinsight-hadoop-collect-debug-heap-dump-linux.md
+      maintainContext: true
+    - name: Understand and resolve WebHCat errors
+      href: ../hdinsight-hadoop-templeton-webhcat-debug-errors.md
+      maintainContext: true
+    - name: Apache Hive settings fix Out of Memory error
+      href: ../hdinsight-hadoop-hive-out-of-memory-error-oom.md
+      maintainContext: true
+    - name: Optimize Apache Hive queries
+      href: ../hdinsight-hadoop-optimize-hive-query.md
+      maintainContext: true
+- name: Enterprise readiness
+  items:
+  - name: Enterprise security
+    items:
+    - name: Overview
+      href: ../domain-joined/apache-domain-joined-introduction.md
+      maintainContext: true
+    - name: Plan for enterprise security
+      href: ../domain-joined/apache-domain-joined-architecture.md
+      maintainContext: true
+    - name: Configure enterprise security in HDInsight
+      href: ../domain-joined/apache-domain-joined-configure.md
+      maintainContext: true
+    - name: Configure enterprise security in HDInsight using Azure AD DS
+      href: ../domain-joined/apache-domain-joined-configure-using-azure-adds.md
+      maintainContext: true
+    - name: Configure Apache Hive policies
+      href: ../domain-joined/apache-domain-joined-run-hive.md
+      maintainContext: true
+    - name: Manage clusters with enterprise security
+      href: ../domain-joined/apache-domain-joined-manage.md
+      maintainContext: true
+  - name: Securing data
+    href: ../hdinsight-hadoop-create-linux-clusters-with-secure-transfer-storage.md
+    maintainContext: true
+  - name: Network access
+    items:
+    - name: HDInsight in Azure VNet
+      href: ../hdinsight-extend-hadoop-virtual-network.md
+      maintainContext: true
+    - name: Connect HDInsight with on-premises network
+      href: ../connect-on-premises-network.md
+      maintainContext: true
+  - name: Monitoring using Azure Monitor logs
+    items:
+    - name: Use Azure Monitor logs
+      href: ../hdinsight-hadoop-oms-log-analytics-tutorial.md
+      maintainContext: true
+    - name: Use queries with Azure Monitor logs
+      href: ../hdinsight-hadoop-oms-log-analytics-use-queries.md
+      maintainContext: true
+    - name: Monitor cluster performance
+      href: ../hdinsight-key-scenarios-to-monitor.md
+      maintainContext: true
+- name: Reference
+  items:
+  - name: Code samples
+    href: https://azure.microsoft.com/resources/samples/?service=hdinsight
+  - name: Azure PowerShell
+    href: /powershell/module/az.hdinsight
+  - name: SDK for .NET
+    href: https://docs.microsoft.com/dotnet/api/overview/azure/hdinsight?view=azure-dotnet
+  - name: SDK for Python
+    href: https://docs.microsoft.com/python/api/overview/azure/hdinsight?view=azure-python
+  - name: SDK for Java
+    href: https://docs.microsoft.com/java/api/overview/azure/hdinsight?view=azure-java-stable
+  - name: SDK for Go
+    href: ../hdinsight-go-sdk-overview.md
+  - name: .NET (Apache HBase)
+    href: https://www.nuget.org/packages/Microsoft.HBase.Client/
+  - name: REST API
+    href: /rest/api/hdinsight/
+  - name: REST API (Apache Spark)
+    href: /rest/api/hdinsightspark/
+  - name: Resource Manager template
+    displayName: resource manager template, arm template, resource manager group
+    href: /azure/templates/microsoft.hdinsight/allversions
+- name: Resources
+  items:
+  - name: Release notes
+    href: ../hdinsight-release-notes.md
+    maintainContext: true
+    items:
+    - name: Archived release notes
+      href: ../hdinsight-release-notes-archive.md
+      maintainContext: true
+  - name: Azure Roadmap
+    href: https://azure.microsoft.com/roadmap/?category=intelligence-analytics
+  - name: Get help on the forum
+    href: https://social.msdn.microsoft.com/forums/azure/home?forum=hdinsight
+  - name: Learning path
+    href: https://azure.microsoft.com/documentation/learning-paths/hdinsight-self-guided-hadoop-training/
+  - name: Microsoft Professional Program for Big Data
+    href: https://academy.microsoft.com/en-us/professional-program/tracks/big-data/
+  - name: Pricing calculator
+    href: https://azure.microsoft.com/pricing/calculator/
+  - name: Windows tools for HDInsight
+    href: ../hdinsight-hadoop-windows-tools.md
+    maintainContext: true
diff --git a/articles/hdinsight/storm/TOC.yml b/articles/hdinsight/storm/TOC.yml
index 1b3a88f363663..af228d2c908f4 100644
--- a/articles/hdinsight/storm/TOC.yml
+++ b/articles/hdinsight/storm/TOC.yml
@@ -83,36 +83,47 @@
     maintainContext: true
   - name: Manage
     items:
-    - name: Create Linux clusters
+    - name: Create HDInsight clusters
       href: ../hdinsight-hadoop-provision-linux-clusters.md
       maintainContext: true
       items:
+      - name: Use the Azure portal
+        href: ../hdinsight-hadoop-create-linux-clusters-portal.md
+        maintainContext: true
+      - name: Use Azure Resource Manager templates
+        displayName: resource manager template, arm template, resource manager group
+        href: ../hdinsight-hadoop-create-linux-clusters-arm-templates.md
+        maintainContext: true
+      - name: Use SDK for .NET
+        href: https://docs.microsoft.com/dotnet/api/overview/azure/hdinsight?view=azure-dotnet#create-a-cluster
+      - name: Use SDK for Python
+        href: https://docs.microsoft.com/python/api/overview/azure/hdinsight?view=azure-python#create-a-cluster
+      - name: Use SDK for Java
+        href: https://docs.microsoft.com/java/api/overview/azure/hdinsight?view=azure-java-stable#create-a-cluster
       - name: Use Azure PowerShell
         href: ../hdinsight-hadoop-create-linux-clusters-azure-powershell.md
         maintainContext: true
       - name: Use cURL and the Azure REST API
         href: ../hdinsight-hadoop-create-linux-clusters-curl-rest.md
-        maintainContext: true
-      - name: Use the .NET SDK
-        href: ../hdinsight-hadoop-create-linux-clusters-dotnet-sdk.md
-        maintainContext: true
+        maintainContext: true
       - name: Use the Azure Classic CLI
         href: ../hdinsight-hadoop-create-linux-clusters-azure-cli.md
         maintainContext: true
-      - name: Use the Azure portal
-        href: ../hdinsight-hadoop-create-linux-clusters-portal.md
-        maintainContext: true
-      - name: Use Azure Resource Manager templates
-        displayName: resource manager template, arm template, resource manager group
-        href: ../hdinsight-hadoop-create-linux-clusters-arm-templates.md
+      - name: Create on-demand clusters
+        href: ../hdinsight-hadoop-create-linux-clusters-adf.md
         maintainContext: true
-    - name: Manage Apache Hadoop clusters
+    - name: Manage HDInsight clusters
       href: ../hdinsight-administer-use-portal-linux.md
       maintainContext: true
      items:
-      - name: Use .NET SDK
-        href: ../hdinsight-administer-use-dotnet-sdk.md
-        maintainContext: true
+      - name: Use SDK for .NET
+        href: https://docs.microsoft.com/dotnet/api/overview/azure/hdinsight?view=azure-dotnet#management
+      - name: Use SDK for Python
+        href: https://docs.microsoft.com/python/api/overview/azure/hdinsight?view=azure-python
+      - name: Use SDK for Java
+        href: https://docs.microsoft.com/java/api/overview/azure/hdinsight
+      - name: Use SDK for Go (Preview)
+        href: https://docs.microsoft.com/azure/hdinsight/hdinsight-go-sdk-overview
       - name: Use Azure PowerShell
         href: ../hdinsight-administer-use-powershell.md
         maintainContext: true
@@ -151,7 +162,7 @@
       - name: Run using cURL
         href: ../hdinsight-hadoop-use-sqoop-curl.md
         maintainContext: true
-      - name: Run using .NET SDK
+      - name: Run using SDK for .NET
         href: ../hdinsight-hadoop-use-sqoop-dotnet-sdk.md
         maintainContext: true
       - name: Run using Azure PowerShell
@@ -234,13 +245,13 @@
     href: https://azure.microsoft.com/resources/samples/?service=hdinsight
   - name: Azure PowerShell
     href: /powershell/module/az.hdinsight
-  - name: .NET SDK
+  - name: SDK for .NET
     href: https://docs.microsoft.com/dotnet/api/overview/azure/hdinsight?view=azure-dotnet
-  - name: Python SDK
+  - name: SDK for Python
     href: https://docs.microsoft.com/python/api/overview/azure/hdinsight?view=azure-python
-  - name: Java SDK
-    href: https://docs.microsoft.com/java/api/overview/azure/hdinsight?view=azure-java-preview
-  - name: Go SDK
+  - name: SDK for Java
+    href: https://docs.microsoft.com/java/api/overview/azure/hdinsight?view=azure-java-stable
+  - name: SDK for Go
     href: ../hdinsight-go-sdk-overview.md
   - name: .NET (Apache HBase)
     href: https://www.nuget.org/packages/Microsoft.HBase.Client/
@@ -266,11 +277,11 @@
   - name: Azure Roadmap
     href: https://azure.microsoft.com/roadmap/?category=intelligence-analytics
   - name: Get help on the forum
-    href: https://social.msdn.microsoft.com/forums/azure/en-US/home?forum=hdinsight
+    href: https://social.msdn.microsoft.com/forums/azure/home?forum=hdinsight
   - name: Learning path
     href: https://azure.microsoft.com/documentation/learning-paths/hdinsight-self-guided-hadoop-training/
   - name: Microsoft Professional Program for Big Data
-    href: https://academy.microsoft.com/en-us/professional-program/big-data/
+    href: https://academy.microsoft.com/en-us/professional-program/tracks/big-data/
   - name: Pricing calculator
     href: https://azure.microsoft.com/pricing/calculator/
   - name: Windows tools for HDInsight
diff --git a/articles/machine-learning/service/azure-machine-learning-release-notes.md b/articles/machine-learning/service/azure-machine-learning-release-notes.md
index 911a314234240..a3b2af80b5cc9 100644
--- a/articles/machine-learning/service/azure-machine-learning-release-notes.md
+++ b/articles/machine-learning/service/azure-machine-learning-release-notes.md
@@ -18,6 +18,13 @@ In this article, learn about the Azure Machine Learning service releases. For a
 + The Azure Machine Learning's [**main SDK for Python**](https://aka.ms/aml-sdk)
 + The Azure Machine Learning [**Data Prep SDK**](https://aka.ms/data-prep-sdk)
 
+## 2019-04-15
+
+### Azure Portal
+
++ You can now resubmit an existing Script run on an existing remote compute cluster.
++ You can now run a published pipeline with new parameters on the Pipelines tab.
++ Run details now supports a new Snapshot file viewer. You can view a snapshot of the directory when you submitted a specific run. You can also download the notebook that was submitted to start the run.
+
 ## 2019-04-08
 
 ### Azure Machine Learning SDK for Python v1.0.23
@@ -47,6 +54,7 @@ In this article, learn about the Azure Machine Learning service releases. For a
 + Column type detection now supports columns of type Long.
+ Fixed a bug where some date values were being displayed as timestamps instead of Python datetime objects. + Fixed a bug where some type counts were being displayed as doubles instead of integers. + ## 2019-03-25 diff --git a/articles/migrate/resources-faq.md b/articles/migrate/resources-faq.md index c7b6be2e8d600..0b67dae5ebec2 100644 --- a/articles/migrate/resources-faq.md +++ b/articles/migrate/resources-faq.md @@ -56,6 +56,10 @@ Unites States | East US or West Central US The connection can be over the internet or use ExpressRoute with public peering. +### What network connectivity requirements are needed for Azure Migrate? + +For the URLs and ports needed for Azure Migrate to communicate with Azure, see [URLs for connectivity](https://docs.microsoft.com/azure/migrate/concepts-collector#urls-for-connectivity). + ### Can I harden the VM set up with the OVA template? Additional components (for example anti-virus) can be added into the OVA template as long as the communication and firewall rules required for the Azure Migrate appliance to work are left as is. diff --git a/articles/sql-database/sql-database-managed-instance-transact-sql-information.md b/articles/sql-database/sql-database-managed-instance-transact-sql-information.md index e3ade43ab0964..eeef5f23f62f0 100644 --- a/articles/sql-database/sql-database-managed-instance-transact-sql-information.md +++ b/articles/sql-database/sql-database-managed-instance-transact-sql-information.md @@ -475,7 +475,7 @@ Managed Instance cannot restore [contained databases](https://docs.microsoft.com ### Exceeding storage space with small database files -`CREATE DATABASE `, `ALTER DATABASE ADD FILE`, and `RESTORE DATABASE` statements might fail because the instance can reach the Azure Storage limit. +`CREATE DATABASE`, `ALTER DATABASE ADD FILE`, and `RESTORE DATABASE` statements might fail because the instance can reach the Azure Storage limit. 
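The arithmetic behind that failure can be made concrete. A rough sketch of the allocation rule (the 128 GB–4 TB Premium Disk sizes and the 35 TB cap come from this article; the workload of nine 1.1 TB database files is invented for illustration):

```python
# Premium disk sizes (GB) a General Purpose Managed Instance can allocate,
# smallest to largest. Each database file is placed on the smallest disk
# that fits it; unused disk space is not charged, but the *sum of disk
# sizes* is what counts against the 35 TB cap.
DISK_SIZES_GB = [128, 256, 512, 1024, 4096]
STORAGE_CAP_GB = 35 * 1024  # 35 TB of Azure Premium Disk space

def allocated_disk_gb(file_size_gb):
    """Return the size of the smallest premium disk that holds the file."""
    for size in DISK_SIZES_GB:
        if file_size_gb <= size:
            return size
    raise ValueError("file larger than the biggest premium disk")

# Hypothetical workload: nine files just over 1 TB each. Every file lands
# on a 4 TB disk, so under 10 TB of data reserves almost 37 TB of disk.
files_gb = [1100] * 9
data_gb = sum(files_gb)
disks_gb = sum(allocated_disk_gb(f) for f in files_gb)

print(data_gb, disks_gb, disks_gb > STORAGE_CAP_GB)
# prints: 9900 36864 True
```

Internal fragmentation inside each disk only widens this gap, which is why an instance can trip the 35 TB limit while holding far less actual data.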
Each General Purpose Managed Instance has up to 35 TB storage reserved for Azure Premium Disk space, and each database file is placed on a separate physical disk. Disk sizes can be 128 GB, 256 GB, 512 GB, 1 TB, or 4 TB. Unused space on disk is not charged, but the total sum of Azure Premium Disk sizes cannot exceed 35 TB. In some cases, a Managed Instance that does not need 8 TB in total might exceed the 35 TB Azure limit on storage size, due to internal fragmentation. diff --git a/articles/storage/blobs/storage-https-custom-domain-cdn.md b/articles/storage/blobs/storage-https-custom-domain-cdn.md index ab8ef62254bab..81ae170931fb8 100644 --- a/articles/storage/blobs/storage-https-custom-domain-cdn.md +++ b/articles/storage/blobs/storage-https-custom-domain-cdn.md @@ -63,4 +63,4 @@ On the [Azure CDN pricing page](https://azure.microsoft.com/pricing/details/cdn/ ## Next steps * [Configure a custom domain name for your Blob storage endpoint](storage-custom-domain-name.md) -* [Static website hosting in Azure Storage (preview)](storage-blob-static-website.md) +* [Static website hosting in Azure Storage](storage-blob-static-website.md) diff --git a/articles/storage/common/storage-metrics-in-azure-monitor.md b/articles/storage/common/storage-metrics-in-azure-monitor.md index 55637cee6a0b7..1309eef179bcb 100644 --- a/articles/storage/common/storage-metrics-in-azure-monitor.md +++ b/articles/storage/common/storage-metrics-in-azure-monitor.md @@ -388,7 +388,7 @@ Azure Storage supports following dimensions for metrics in Azure Monitor. | Dimension Name | Description | | ------------------- | ----------------- | | BlobType | The type of blob for Blob metrics only. The supported values are **BlockBlob** and **PageBlob**. Append Blob is included in BlockBlob. | -| ResponseType | Transaction response type. The available values include:

  • ServerOtherError: All other server-side errors except described ones
  • ServerBusyError: Authenticated request that returned an HTTP 503 status code.
  • ServerTimeoutError: Timed-out authenticated request that returned an HTTP 500 status code. The timeout occurred due to a server error.
  • AuthorizationError: Authenticated request that failed due to unauthorized access of data or an authorization failure.
  • NetworkError: Authenticated request that failed due to network errors. Most commonly occurs when a client prematurely closes a connection before timeout expiration.
  • ClientThrottlingError: Client-side throttling error.
  • ClientTimeoutError: Timed-out authenticated request that returned an HTTP 500 status code. If the client's network timeout or the request timeout is set to a lower value than expected by the storage service, it is an expected timeout. Otherwise, it is reported as a ServerTimeoutError.
  • ClientOtherError: All other client-side errors except described ones.
  • Success: Successful request.
  • SuccessWithThrottling: Successful request when a SMB client gets throttled in the first attempt(s) but succeeds after retries.| +| ResponseType | Transaction response type. The available values include:

  • ServerOtherError: All other server-side errors except described ones
  • ServerBusyError: Authenticated request that returned an HTTP 503 status code.
  • ServerTimeoutError: Timed-out authenticated request that returned an HTTP 500 status code. The timeout occurred due to a server error.
  • AuthorizationError: Authenticated request that failed due to unauthorized access of data or an authorization failure.
  • NetworkError: Authenticated request that failed due to network errors. Most commonly occurs when a client prematurely closes a connection before timeout expiration.
  • ClientThrottlingError: Client-side throttling error.
  • ClientTimeoutError: Timed-out authenticated request that returned an HTTP 500 status code. If the client's network timeout or the request timeout is set to a lower value than expected by the storage service, it is an expected timeout. Otherwise, it is reported as a ServerTimeoutError.
  • ClientOtherError: All other client-side errors except described ones.
  • Success: Successful request.| | GeoType | Transaction from Primary or Secondary cluster. The available values include Primary and Secondary. It applies to Read Access Geo Redundant Storage(RA-GRS) when reading objects from secondary tenant. | | ApiName | The name of operation. For example:
  • CreateContainer
  • DeleteBlob
  • GetBlob
  • For all operation names, see [document](/rest/api/storageservices/storage-analytics-logged-operations-and-status-messages). | | Authentication | Authentication type used in transactions. The available values include:
  • AccountKey: The transaction is authenticated with storage account key.
  • SAS: The transaction is authenticated with shared access signatures.
  • OAuth: The transaction is authenticated with OAuth access tokens.
  • Anonymous: The transaction is requested anonymously. It doesn’t include preflight requests.
  • AnonymousPreflight: The transaction is preflight request.
  • | diff --git a/articles/virtual-machines/windows/sql/virtual-machines-windows-sql-ahb.md b/articles/virtual-machines/windows/sql/virtual-machines-windows-sql-ahb.md index b1e452784dff3..df5e3078000ac 100644 --- a/articles/virtual-machines/windows/sql/virtual-machines-windows-sql-ahb.md +++ b/articles/virtual-machines/windows/sql/virtual-machines-windows-sql-ahb.md @@ -40,7 +40,7 @@ Switching between the two license models incurs **no downtime**, does not restar - Changing the licensing model is only supported for virtual machines deployed using the Resource Manager model. VMs deployed using the classic model are not supported. - Changing the licensing model is only enabled for Public Cloud installations. - Changing the licensing model is supported only on virtual machines that have a single NIC (network interface). On virtual machines that have more than one NIC, you should first remove one of the NICs (by using the Azure portal) before you attempt the procedure. Otherwise, you will run into an error similar to the following: - ` The virtual machine '\' has more than one NIC associated.` Although you might be able to add the NIC back to the VM after you change the licensing mode, operations done through the SQL configuration blade, like automatic patching and backup, will no longer be considered supported. + `The virtual machine '\' has more than one NIC associated.` Although you might be able to add the NIC back to the VM after you change the licensing mode, operations done through the SQL configuration blade, like automatic patching and backup, will no longer be considered supported. 
## Prerequisites diff --git a/articles/virtual-machines/workloads/mainframe-rehosting/tmaxsoft/install-openframe-azure.md b/articles/virtual-machines/workloads/mainframe-rehosting/tmaxsoft/install-openframe-azure.md index 662ad38afa02b..f1466e99efcd5 100644 --- a/articles/virtual-machines/workloads/mainframe-rehosting/tmaxsoft/install-openframe-azure.md +++ b/articles/virtual-machines/workloads/mainframe-rehosting/tmaxsoft/install-openframe-azure.md @@ -942,7 +942,7 @@ ProSort is a utility used in batch transactions for sorting data. export PATH ``` -6. To execute the bash profile, at the command prompt, type: ` . .bash_profile` +6. To execute the bash profile, at the command prompt, type: `. .bash_profile` 7. Create the configuration file. For example: @@ -1052,7 +1052,7 @@ OFCOBOL is the OpenFrame compiler that interprets the mainframe’s COBOL progra 0 NonFatalErrors 0 FatalError ``` -10. Use the `ofcob --version ` command and review the version number to verify the installation. For example: +10. Use the `ofcob --version` command and review the version number to verify the installation. For example: ``` [oframe7@ofdemo ~]$ ofcob --version diff --git a/articles/virtual-machines/workloads/sap/high-availability-guide-suse-pacemaker.md b/articles/virtual-machines/workloads/sap/high-availability-guide-suse-pacemaker.md index e1d3475c1a5f7..76edf1c48c747 100644 --- a/articles/virtual-machines/workloads/sap/high-availability-guide-suse-pacemaker.md +++ b/articles/virtual-machines/workloads/sap/high-availability-guide-suse-pacemaker.md @@ -81,7 +81,7 @@ Run the following commands on all **iSCSI target virtual machines**. Run the following commands on all **iSCSI target virtual machines** to create the iSCSI disks for the clusters used by your SAP systems. In the following example, SBD devices for multiple clusters are created. It shows you how you would use one iSCSI target server for multiple clusters. The SBD devices are placed on the OS disk. 
Make sure that you have enough space. -**` nfs`** is used to identify the NFS cluster, **ascsnw1** is used to identify the ASCS cluster of **NW1**, **dbnw1** is used to identify the database cluster of **NW1**, **nfs-0** and **nfs-1** are the hostnames of the NFS cluster nodes, **nw1-xscs-0** and **nw1-xscs-1** are the hostnames of the **NW1** ASCS cluster nodes, and **nw1-db-0** and **nw1-db-1** are the hostnames of the database cluster nodes. Replace them with the hostnames of your cluster nodes and the SID of your SAP system. +**`nfs`** is used to identify the NFS cluster, **ascsnw1** is used to identify the ASCS cluster of **NW1**, **dbnw1** is used to identify the database cluster of **NW1**, **nfs-0** and **nfs-1** are the hostnames of the NFS cluster nodes, **nw1-xscs-0** and **nw1-xscs-1** are the hostnames of the **NW1** ASCS cluster nodes, and **nw1-db-0** and **nw1-db-1** are the hostnames of the database cluster nodes. Replace them with the hostnames of your cluster nodes and the SID of your SAP system.
    # Create the root folder for all SBD devices
     sudo mkdir /sbd
    @@ -299,7 +299,7 @@ The following items are prefixed with either **[A]** - applicable to all nodes,
        SBD_WATCHDOG="yes"
        
    - Create the ` softdog` configuration file + Create the `softdog` configuration file
    echo softdog | sudo tee /etc/modules-load.d/softdog.conf
        
    diff --git a/articles/virtual-machines/workloads/sap/high-availability-guide-suse.md b/articles/virtual-machines/workloads/sap/high-availability-guide-suse.md index 9df586b077f7c..8ff2f26fb61dd 100644 --- a/articles/virtual-machines/workloads/sap/high-availability-guide-suse.md +++ b/articles/virtual-machines/workloads/sap/high-availability-guide-suse.md @@ -93,7 +93,7 @@ The NFS server, SAP NetWeaver ASCS, SAP NetWeaver SCS, SAP NetWeaver ERS, and th * Connected to primary network interfaces of all virtual machines that should be part of the (A)SCS/ERS cluster * Probe Port * Port 620<nr> -* Loadbalancing rules +* Load balancing rules * 32<nr> TCP * 36<nr> TCP * 39<nr> TCP @@ -110,7 +110,7 @@ The NFS server, SAP NetWeaver ASCS, SAP NetWeaver SCS, SAP NetWeaver ERS, and th * Connected to primary network interfaces of all virtual machines that should be part of the (A)SCS/ERS cluster * Probe Port * Port 621<nr> -* Loadbalancing rules +* Load balancing rules * 33<nr> TCP * 5<nr>13 TCP * 5<nr>14 TCP @@ -131,7 +131,7 @@ The Azure Marketplace contains an image for SUSE Linux Enterprise Server for SAP You can use one of the quickstart templates on GitHub to deploy all required resources. The template deploys the virtual machines, the load balancer, availability set etc. Follow these steps to deploy the template: -1. Open the [ASCS/SCS Multi SID template][template-multisid-xscs] or the [converged template][template-converged] on the Azure portal +1. Open the [ASCS/SCS Multi SID template][template-multisid-xscs] or the [converged template][template-converged] on the Azure portal. The ASCS/SCS template only creates the load-balancing rules for the SAP NetWeaver ASCS/SCS and ERS (Linux only) instances whereas the converged template also creates the load-balancing rules for a database (for example Microsoft SQL Server or SAP HANA).
If you plan to install an SAP NetWeaver based system and you also want to install the database on the same machines, use the [converged template][template-converged]. 1. Enter the following parameters 1. Resource Prefix (ASCS/SCS Multi SID template only) @@ -144,7 +145,7 @@ Follow these steps to deploy the template: Select one of the Linux distributions. For this example, select SLES 12 BYOS 6. Db Type Select HANA - 7. Sap System Size + 7. Sap System Size. The amount of SAPS the new system provides. If you are not sure how many SAPS the system requires, ask your SAP Technology Partner or System Integrator 8. System Availability Select HA @@ -200,7 +201,7 @@ You first need to create the virtual machines for this NFS cluster. Afterwards, 1. Click OK 1. Port 621**02** for ASCS ERS * Repeat the steps above to create a health probe for the ERS (for example 621**02** and **nw1-aers-hp**) - 1. Loadbalancing rules + 1. Load balancing rules 1. 32**00** TCP for ASCS 1. Open the load balancer, select load balancing rules and click Add 1. Enter the name of the new load balancer rule (for example **nw1-lb-3200**) @@ -532,6 +533,8 @@ The following items are prefixed with either **[A]** - applicable to all nodes, 1. **[1]** Create the SAP cluster resources +If using enqueue server 1 architecture (ENSA1), define the resources as follows: +
    sudo crm configure property maintenance-mode="true"
        
        sudo crm configure primitive rsc_sap_NW1_ASCS00 SAPInstance \
    @@ -558,8 +561,38 @@ The following items are prefixed with either **[A]** - applicable to all nodes,
        sudo crm configure property maintenance-mode="false"
        
    + SAP introduced support for enqueue server 2, including replication, as of SAP NW 7.52. Starting with ABAP Platform 1809, enqueue server 2 is installed by default. See SAP note [2630416](https://launchpad.support.sap.com/#/notes/2630416) for enqueue server 2 support. + If using enqueue server 2 architecture ([ENSA2](https://help.sap.com/viewer/cff8531bc1d9416d91bb6781e628d4e0/1709%20001/en-US/6d655c383abf4c129b0e5c8683e7ecd8.html)), define the resources as follows: + +
    sudo crm configure property maintenance-mode="true"
    +   
    +   sudo crm configure primitive rsc_sap_NW1_ASCS00 SAPInstance \
    +    operations \$id=rsc_sap_NW1_ASCS00-operations \
    +    op monitor interval=11 timeout=60 on_fail=restart \
    +    params InstanceName=NW1_ASCS00_nw1-ascs START_PROFILE="/sapmnt/NW1/profile/NW1_ASCS00_nw1-ascs" \
    +    AUTOMATIC_RECOVER=false \
    +    meta resource-stickiness=5000
    +   
    +   sudo crm configure primitive rsc_sap_NW1_ERS02 SAPInstance \
    +    operations \$id=rsc_sap_NW1_ERS02-operations \
    +    op monitor interval=11 timeout=60 on_fail=restart \
    +    params InstanceName=NW1_ERS02_nw1-aers START_PROFILE="/sapmnt/NW1/profile/NW1_ERS02_nw1-aers" AUTOMATIC_RECOVER=false IS_ERS=true 
    +   
    +   sudo crm configure modgroup g-NW1_ASCS add rsc_sap_NW1_ASCS00
    +   sudo crm configure modgroup g-NW1_ERS add rsc_sap_NW1_ERS02
    +   
    +   sudo crm configure colocation col_sap_NW1_no_both -5000: g-NW1_ERS g-NW1_ASCS
    +   sudo crm configure order ord_sap_NW1_first_start_ascs Optional: rsc_sap_NW1_ASCS00:start rsc_sap_NW1_ERS02:stop symmetrical=false
    +   
    +   sudo crm node online nw1-cl-0
    +   sudo crm configure property maintenance-mode="false"
    +   
    + + If you are upgrading from an older version and switching to enqueue server 2, see SAP note [2641019](https://launchpad.support.sap.com/#/notes/2641019). + Make sure that the cluster status is OK and that all resources are started. It is not important on which node the resources are running. +
    sudo crm_mon -r
        
        # Online: [ nw1-cl-0 nw1-cl-1 ]
    @@ -960,7 +993,7 @@ The following tests are a copy of the test cases in the best practices guides of
             rsc_sap_NW1_ERS02  (ocf::heartbeat:SAPInstance):   Started nw1-cl-0
        
    - Create an enqueue lock by, for example edit a user in transaction su01. Run the following commands as \adm on the node where the ASCS instance is running. The commands will stop the ASCS instance and start it again. The enqueue lock is expected to be lost in this test. + Create an enqueue lock by, for example, editing a user in transaction su01. Run the following commands as \adm on the node where the ASCS instance is running. The commands will stop the ASCS instance and start it again. If using enqueue server 1 architecture, the enqueue lock is expected to be lost in this test. If using enqueue server 2 architecture, the enqueue lock will be retained.
    nw1-cl-1:nw1adm 54> sapcontrol -nr 00 -function StopWait 600 2
        
    diff --git a/articles/virtual-network/container-networking-overview.md b/articles/virtual-network/container-networking-overview.md index dd4e770f0704f..bd18c3003f7b7 100644 --- a/articles/virtual-network/container-networking-overview.md +++ b/articles/virtual-network/container-networking-overview.md @@ -57,10 +57,10 @@ The plug-in supports up to 250 Pods per virtual machine and up to 16,000 Pods in The plug-in can be used in the following ways, to provide basic virtual network attach for Pods or Docker containers: - **Azure Kubernetes Service**: The plug-in is integrated into the Azure Kubernetes Service (AKS), and can be used by choosing the *Advanced Networking* option. Advanced Networking lets you deploy a Kubernetes cluster in an existing, or a new, virtual network. To learn more about Advanced Networking and the steps to set it up, see [Network configuration in AKS](../aks/networking-overview.md?toc=%2fazure%2fvirtual-network%2ftoc.json). -- **ACS-Engine**: ACS-Engine is a tool that generates an Azure Resource Manager template for the deployment of a Kubernetes cluster in Azure. For detailed instructions, see [Deploy the plug-in for ACS-Engine Kubernetes clusters](deploy-container-networking.md#deploy-plug-in-for-acs-engine-kubernetes-cluster). -- **Creating your own Kubernetes cluster in Azure**: The plug-in can be used to provide basic networking for Pods in Kubernetes clusters that you deploy yourself, without relying on AKS, or tools like the ACS-Engine. In this case, the plug-in is installed and enabled on every virtual machine in a cluster. For detailed instructions, see [Deploy the plug-in for a Kubernetes cluster that you deploy yourself](deploy-container-networking.md#deploy-plug-in-for-a-kubernetes-cluster). +- **AKS-Engine**: AKS-Engine is a tool that generates an Azure Resource Manager template for the deployment of a Kubernetes cluster in Azure. 
For detailed instructions, see [Deploy the plug-in for AKS-Engine Kubernetes clusters](deploy-container-networking.md#deploy-the-azure-virtual-network-container-network-interface-plug-in). +- **Creating your own Kubernetes cluster in Azure**: The plug-in can be used to provide basic networking for Pods in Kubernetes clusters that you deploy yourself, without relying on AKS, or tools like the AKS-Engine. In this case, the plug-in is installed and enabled on every virtual machine in a cluster. For detailed instructions, see [Deploy the plug-in for a Kubernetes cluster that you deploy yourself](deploy-container-networking.md#deploy-plug-in-for-a-kubernetes-cluster). - **Virtual network attach for Docker containers in Azure**: The plug-in can be used in cases where you don’t want to create a Kubernetes cluster, and would like to create Docker containers with virtual network attach, in virtual machines. For detailed instructions, see [Deploy the plug-in for Docker](deploy-container-networking.md#deploy-plug-in-for-docker-containers). 
## Next steps -[Deploy the plug-in](deploy-container-networking.md) for Kubernetes clusters or Docker containers \ No newline at end of file +[Deploy the plug-in](deploy-container-networking.md) for Kubernetes clusters or Docker containers diff --git a/includes/cognitive-services-speech-service-endpoints-text-to-speech.md b/includes/cognitive-services-speech-service-endpoints-text-to-speech.md index 89d20dd1c2938..8697c40241f70 100644 --- a/includes/cognitive-services-speech-service-endpoints-text-to-speech.md +++ b/includes/cognitive-services-speech-service-endpoints-text-to-speech.md @@ -26,7 +26,6 @@ Standard voices are available in these regions: | Region | Endpoint | |--------|----------| | Australia East | https://australiaeast.tts.speech.microsoft.com/cognitiveservices/v1 | -| Brazil South | https://brazilsouth.tts.speech.microsoft.com/cognitiveservices/v1 | | Canada Central | https://canadacentral.tts.speech.microsoft.com/cognitiveservices/v1 | | Central US | https://centralus.tts.speech.microsoft.com/cognitiveservices/v1 | | East Asia | https://eastasia.tts.speech.microsoft.com/cognitiveservices/v1 | @@ -52,7 +51,6 @@ If you've created a custom voice font, use the endpoint that you've created, not | Region | Endpoint | |--------|----------| | Australia East | https://australiaeast.voice.speech.microsoft.com | -| Brazil South | https://brazilsouth.voice.speech.microsoft.com | | Canada Central | https://canadacentral.voice.speech.microsoft.com | | Central US | https://centralus.voice.speech.microsoft.com | | East Asia | https://eastasia.voice.speech.microsoft.com | diff --git a/includes/configure-deployment-user-no-h.md b/includes/configure-deployment-user-no-h.md index dafc604631df4..362964bfb0952 100644 --- a/includes/configure-deployment-user-no-h.md +++ b/includes/configure-deployment-user-no-h.md @@ -18,7 +18,7 @@ In the following example, replace *\* and *\*, including the az webapp deployment user set --user-name --password ``` -You get a JSON output 
with the password shown as `null`. If you get a `'Conflict'. Details: 409` error, change the username. If you get a ` 'Bad Request'. Details: 400` error, use a stronger password. The deployment username must not contain ‘@’ symbol for local Git pushes. +You get a JSON output with the password shown as `null`. If you get a `'Conflict'. Details: 409` error, change the username. If you get a `'Bad Request'. Details: 400` error, use a stronger password. The deployment username must not contain the ‘@’ symbol for local Git pushes. You configure this deployment user only once. You can use it for all your Azure deployments. diff --git a/includes/hdinsight-sdk-additional-functionality.md b/includes/hdinsight-sdk-additional-functionality.md new file mode 100644 index 0000000000000..1a2492d73eb46 --- /dev/null +++ b/includes/hdinsight-sdk-additional-functionality.md @@ -0,0 +1,14 @@ +--- +author: tylerfox +ms.service: hdinsight +ms.topic: include +ms.date: 04/15/2019 +ms.author: tyfox +--- +## Additional SDK functionality + +* List clusters +* Delete clusters +* Resize clusters +* Monitoring +* Script Actions \ No newline at end of file