' has more than one NIC associated.` Although you might be able to add the NIC back to the VM after you change the licensing mode, operations done through the SQL configuration blade, like automatic patching and backup, will no longer be considered supported.
## Prerequisites
diff --git a/articles/virtual-machines/workloads/mainframe-rehosting/tmaxsoft/install-openframe-azure.md b/articles/virtual-machines/workloads/mainframe-rehosting/tmaxsoft/install-openframe-azure.md
index 662ad38afa02b..f1466e99efcd5 100644
--- a/articles/virtual-machines/workloads/mainframe-rehosting/tmaxsoft/install-openframe-azure.md
+++ b/articles/virtual-machines/workloads/mainframe-rehosting/tmaxsoft/install-openframe-azure.md
@@ -942,7 +942,7 @@ ProSort is a utility used in batch transactions for sorting data.
export PATH
```
-6. To execute the bash profile, at the command prompt, type: ` . .bash_profile`
+6. To execute the bash profile, at the command prompt, type: `. .bash_profile`
7. Create the configuration file. For example:
@@ -1052,7 +1052,7 @@ OFCOBOL is the OpenFrame compiler that interprets the mainframe’s COBOL progra
0 NonFatalErrors
0 FatalError
```
-10. Use the `ofcob --version ` command and review the version number to verify the installation. For example:
+10. Use the `ofcob --version` command and review the version number to verify the installation. For example:
```
[oframe7@ofdemo ~]$ ofcob --version
diff --git a/articles/virtual-machines/workloads/sap/high-availability-guide-suse-pacemaker.md b/articles/virtual-machines/workloads/sap/high-availability-guide-suse-pacemaker.md
index e1d3475c1a5f7..76edf1c48c747 100644
--- a/articles/virtual-machines/workloads/sap/high-availability-guide-suse-pacemaker.md
+++ b/articles/virtual-machines/workloads/sap/high-availability-guide-suse-pacemaker.md
@@ -81,7 +81,7 @@ Run the following commands on all **iSCSI target virtual machines**.
Run the following commands on all **iSCSI target virtual machines** to create the iSCSI disks for the clusters used by your SAP systems. In the following example, SBD devices for multiple clusters are created. It shows you how you would use one iSCSI target server for multiple clusters. The SBD devices are placed on the OS disk. Make sure that you have enough space.
-**` nfs`** is used to identify the NFS cluster, **ascsnw1** is used to identify the ASCS cluster of **NW1**, **dbnw1** is used to identify the database cluster of **NW1**, **nfs-0** and **nfs-1** are the hostnames of the NFS cluster nodes, **nw1-xscs-0** and **nw1-xscs-1** are the hostnames of the **NW1** ASCS cluster nodes, and **nw1-db-0** and **nw1-db-1** are the hostnames of the database cluster nodes. Replace them with the hostnames of your cluster nodes and the SID of your SAP system.
+**`nfs`** is used to identify the NFS cluster, **ascsnw1** is used to identify the ASCS cluster of **NW1**, **dbnw1** is used to identify the database cluster of **NW1**, **nfs-0** and **nfs-1** are the hostnames of the NFS cluster nodes, **nw1-xscs-0** and **nw1-xscs-1** are the hostnames of the **NW1** ASCS cluster nodes, and **nw1-db-0** and **nw1-db-1** are the hostnames of the database cluster nodes. Replace them with the hostnames of your cluster nodes and the SID of your SAP system.
# Create the root folder for all SBD devices
sudo mkdir /sbd
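The note above about having enough space on the OS disk can be checked up front. A minimal illustrative sketch (the 200 MB threshold is an assumption for the example, not a value from the article):

```shell
# Sketch only: check free space under / before creating SBD device files there.
# required_mb is an illustrative threshold, not a documented requirement.
required_mb=200
avail_mb=$(df -Pm / | awk 'NR==2 {print $4}')
if [ "$avail_mb" -ge "$required_mb" ]; then
  echo "enough space for SBD devices"
else
  echo "not enough space: only ${avail_mb} MB free"
fi
```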
@@ -299,7 +299,7 @@ The following items are prefixed with either **[A]** - applicable to all nodes,
SBD_WATCHDOG="yes"
- Create the ` softdog` configuration file
+ Create the `softdog` configuration file
echo softdog | sudo tee /etc/modules-load.d/softdog.conf
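The `tee` pattern above writes the module name into a `modules-load.d` file so that `softdog` is loaded at boot. A quick sanity check of that pattern, sketched here against a temporary file rather than the real `/etc/modules-load.d/softdog.conf`:

```shell
# Sketch only: a temporary file stands in for /etc/modules-load.d/softdog.conf
conf=$(mktemp)
echo softdog | tee "$conf" >/dev/null
# grep -x matches the whole line, so trailing garbage in the file would be caught
grep -qx softdog "$conf" && echo "softdog configured"
rm -f "$conf"
```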
diff --git a/articles/virtual-machines/workloads/sap/high-availability-guide-suse.md b/articles/virtual-machines/workloads/sap/high-availability-guide-suse.md
index 9df586b077f7c..8ff2f26fb61dd 100644
--- a/articles/virtual-machines/workloads/sap/high-availability-guide-suse.md
+++ b/articles/virtual-machines/workloads/sap/high-availability-guide-suse.md
@@ -93,7 +93,8 @@ The NFS server, SAP NetWeaver ASCS, SAP NetWeaver SCS, SAP NetWeaver ERS, and th
* Connected to primary network interfaces of all virtual machines that should be part of the (A)SCS/ERS cluster
* Probe Port
* Port 620<nr>
-* Loadbalancing rules
+* Load balancing rules
* 32<nr> TCP
* 36<nr> TCP
* 39<nr> TCP
@@ -110,7 +111,7 @@ The NFS server, SAP NetWeaver ASCS, SAP NetWeaver SCS, SAP NetWeaver ERS, and th
* Connected to primary network interfaces of all virtual machines that should be part of the (A)SCS/ERS cluster
* Probe Port
* Port 621<nr>
-* Loadbalancing rules
+* Load balancing rules
* 33<nr> TCP
* 5<nr>13 TCP
* 5<nr>14 TCP
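The `<nr>` placeholder in the port lists above is the two-digit SAP instance number, and each port is formed by substituting it into a fixed pattern. A small illustrative expansion (the instance number `02` is an assumption for the example, not from the article):

```shell
# Sketch: expand the <nr> placeholder for an example ERS instance number
nr=02
echo "probe port: 621${nr}"                      # 621<nr> -> 62102
echo "rule ports: 33${nr} 5${nr}13 5${nr}14 5${nr}16"
```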
@@ -131,7 +132,7 @@ The Azure Marketplace contains an image for SUSE Linux Enterprise Server for SAP
You can use one of the quickstart templates on GitHub to deploy all required resources. The template deploys the virtual machines, the load balancer, the availability set, and so on.
Follow these steps to deploy the template:
-1. Open the [ASCS/SCS Multi SID template][template-multisid-xscs] or the [converged template][template-converged] on the Azure portal
+1. Open the [ASCS/SCS Multi SID template][template-multisid-xscs] or the [converged template][template-converged] on the Azure portal.
The ASCS/SCS template only creates the load-balancing rules for the SAP NetWeaver ASCS/SCS and ERS (Linux only) instances, whereas the converged template also creates the load-balancing rules for a database (for example, Microsoft SQL Server or SAP HANA). If you plan to install an SAP NetWeaver-based system and you also want to install the database on the same machines, use the [converged template][template-converged].
1. Enter the following parameters
1. Resource Prefix (ASCS/SCS Multi SID template only)
@@ -144,7 +145,7 @@ Follow these steps to deploy the template:
Select one of the Linux distributions. For this example, select SLES 12 BYOS
6. Db Type
Select HANA
- 7. Sap System Size
+ 7. Sap System Size.
The amount of SAPS the new system provides. If you are not sure how many SAPS the system requires, ask your SAP Technology Partner or System Integrator
8. System Availability
Select HA
@@ -200,7 +201,7 @@ You first need to create the virtual machines for this NFS cluster. Afterwards,
1. Click OK
1. Port 621**02** for ASCS ERS
* Repeat the steps above to create a health probe for the ERS (for example 621**02** and **nw1-aers-hp**)
- 1. Loadbalancing rules
+ 1. Load balancing rules
1. 32**00** TCP for ASCS
1. Open the load balancer, select load balancing rules and click Add
1. Enter the name of the new load balancer rule (for example **nw1-lb-3200**)
@@ -532,6 +533,8 @@ The following items are prefixed with either **[A]** - applicable to all nodes,
1. **[1]** Create the SAP cluster resources
+If using enqueue server 1 architecture (ENSA1), define the resources as follows:
+
sudo crm configure property maintenance-mode="true"
sudo crm configure primitive rsc_sap_NW1_ASCS00 SAPInstance \
@@ -558,8 +561,38 @@ The following items are prefixed with either **[A]** - applicable to all nodes,
sudo crm configure property maintenance-mode="false"
+ SAP introduced support for enqueue server 2, including replication, as of SAP NW 7.52. Starting with ABAP Platform 1809, enqueue server 2 is installed by default. See SAP note [2630416](https://launchpad.support.sap.com/#/notes/2630416) for enqueue server 2 support.
+ If using enqueue server 2 architecture ([ENSA2](https://help.sap.com/viewer/cff8531bc1d9416d91bb6781e628d4e0/1709%20001/en-US/6d655c383abf4c129b0e5c8683e7ecd8.html)), define the resources as follows:
+
+ sudo crm configure property maintenance-mode="true"
+
+ sudo crm configure primitive rsc_sap_NW1_ASCS00 SAPInstance \
+ operations \$id=rsc_sap_NW1_ASCS00-operations \
+ op monitor interval=11 timeout=60 on_fail=restart \
+ params InstanceName=NW1_ASCS00_nw1-ascs START_PROFILE="/sapmnt/NW1/profile/NW1_ASCS00_nw1-ascs" \
+ AUTOMATIC_RECOVER=false \
+ meta resource-stickiness=5000
+
+ sudo crm configure primitive rsc_sap_NW1_ERS02 SAPInstance \
+ operations \$id=rsc_sap_NW1_ERS02-operations \
+ op monitor interval=11 timeout=60 on_fail=restart \
+ params InstanceName=NW1_ERS02_nw1-aers START_PROFILE="/sapmnt/NW1/profile/NW1_ERS02_nw1-aers" AUTOMATIC_RECOVER=false IS_ERS=true
+
+ sudo crm configure modgroup g-NW1_ASCS add rsc_sap_NW1_ASCS00
+ sudo crm configure modgroup g-NW1_ERS add rsc_sap_NW1_ERS02
+
+ sudo crm configure colocation col_sap_NW1_no_both -5000: g-NW1_ERS g-NW1_ASCS
+ sudo crm configure order ord_sap_NW1_first_start_ascs Optional: rsc_sap_NW1_ASCS00:start rsc_sap_NW1_ERS02:stop symmetrical=false
+
+ sudo crm node online nw1-cl-0
+ sudo crm configure property maintenance-mode="false"
+
+
+ If you are upgrading from an older version and switching to enqueue server 2, see SAP note [2641019](https://launchpad.support.sap.com/#/notes/2641019).
+
Make sure that the cluster status is OK and that all resources are started. It does not matter which node the resources are running on.
+
sudo crm_mon -r
# Online: [ nw1-cl-0 nw1-cl-1 ]
@@ -960,7 +993,7 @@ The following tests are a copy of the test cases in the best practices guides of
rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started nw1-cl-0
- Create an enqueue lock by, for example edit a user in transaction su01. Run the following commands as \adm on the node where the ASCS instance is running. The commands will stop the ASCS instance and start it again. The enqueue lock is expected to be lost in this test.
+ Create an enqueue lock by, for example, editing a user in transaction su01. Run the following commands as \adm on the node where the ASCS instance is running. The commands stop the ASCS instance and start it again. If you use the enqueue server 1 architecture, the enqueue lock is expected to be lost in this test. If you use the enqueue server 2 architecture, the enqueue lock is retained.
nw1-cl-1:nw1adm 54> sapcontrol -nr 00 -function StopWait 600 2
diff --git a/articles/virtual-network/container-networking-overview.md b/articles/virtual-network/container-networking-overview.md
index dd4e770f0704f..bd18c3003f7b7 100644
--- a/articles/virtual-network/container-networking-overview.md
+++ b/articles/virtual-network/container-networking-overview.md
@@ -57,10 +57,10 @@ The plug-in supports up to 250 Pods per virtual machine and up to 16,000 Pods in
The plug-in can be used in the following ways, to provide basic virtual network attach for Pods or Docker containers:
- **Azure Kubernetes Service**: The plug-in is integrated into the Azure Kubernetes Service (AKS), and can be used by choosing the *Advanced Networking* option. Advanced Networking lets you deploy a Kubernetes cluster in an existing, or a new, virtual network. To learn more about Advanced Networking and the steps to set it up, see [Network configuration in AKS](../aks/networking-overview.md?toc=%2fazure%2fvirtual-network%2ftoc.json).
-- **ACS-Engine**: ACS-Engine is a tool that generates an Azure Resource Manager template for the deployment of a Kubernetes cluster in Azure. For detailed instructions, see [Deploy the plug-in for ACS-Engine Kubernetes clusters](deploy-container-networking.md#deploy-plug-in-for-acs-engine-kubernetes-cluster).
-- **Creating your own Kubernetes cluster in Azure**: The plug-in can be used to provide basic networking for Pods in Kubernetes clusters that you deploy yourself, without relying on AKS, or tools like the ACS-Engine. In this case, the plug-in is installed and enabled on every virtual machine in a cluster. For detailed instructions, see [Deploy the plug-in for a Kubernetes cluster that you deploy yourself](deploy-container-networking.md#deploy-plug-in-for-a-kubernetes-cluster).
+- **AKS-Engine**: AKS-Engine is a tool that generates an Azure Resource Manager template for the deployment of a Kubernetes cluster in Azure. For detailed instructions, see [Deploy the plug-in for AKS-Engine Kubernetes clusters](deploy-container-networking.md#deploy-the-azure-virtual-network-container-network-interface-plug-in).
+- **Creating your own Kubernetes cluster in Azure**: The plug-in can be used to provide basic networking for Pods in Kubernetes clusters that you deploy yourself, without relying on AKS, or tools like the AKS-Engine. In this case, the plug-in is installed and enabled on every virtual machine in a cluster. For detailed instructions, see [Deploy the plug-in for a Kubernetes cluster that you deploy yourself](deploy-container-networking.md#deploy-plug-in-for-a-kubernetes-cluster).
- **Virtual network attach for Docker containers in Azure**: The plug-in can be used in cases where you don’t want to create a Kubernetes cluster, and would like to create Docker containers with virtual network attach, in virtual machines. For detailed instructions, see [Deploy the plug-in for Docker](deploy-container-networking.md#deploy-plug-in-for-docker-containers).
## Next steps
-[Deploy the plug-in](deploy-container-networking.md) for Kubernetes clusters or Docker containers
\ No newline at end of file
+[Deploy the plug-in](deploy-container-networking.md) for Kubernetes clusters or Docker containers
diff --git a/includes/cognitive-services-speech-service-endpoints-text-to-speech.md b/includes/cognitive-services-speech-service-endpoints-text-to-speech.md
index 89d20dd1c2938..8697c40241f70 100644
--- a/includes/cognitive-services-speech-service-endpoints-text-to-speech.md
+++ b/includes/cognitive-services-speech-service-endpoints-text-to-speech.md
@@ -26,7 +26,6 @@ Standard voices are available in these regions:
| Region | Endpoint |
|--------|----------|
| Australia East | https://australiaeast.tts.speech.microsoft.com/cognitiveservices/v1 |
-| Brazil South | https://brazilsouth.tts.speech.microsoft.com/cognitiveservices/v1 |
| Canada Central | https://canadacentral.tts.speech.microsoft.com/cognitiveservices/v1 |
| Central US | https://centralus.tts.speech.microsoft.com/cognitiveservices/v1 |
| East Asia | https://eastasia.tts.speech.microsoft.com/cognitiveservices/v1 |
@@ -52,7 +51,6 @@ If you've created a custom voice font, use the endpoint that you've created, not
| Region | Endpoint |
|--------|----------|
| Australia East | https://australiaeast.voice.speech.microsoft.com |
-| Brazil South | https://brazilsouth.voice.speech.microsoft.com |
| Canada Central | https://canadacentral.voice.speech.microsoft.com |
| Central US | https://centralus.voice.speech.microsoft.com |
| East Asia | https://eastasia.voice.speech.microsoft.com |
diff --git a/includes/configure-deployment-user-no-h.md b/includes/configure-deployment-user-no-h.md
index dafc604631df4..362964bfb0952 100644
--- a/includes/configure-deployment-user-no-h.md
+++ b/includes/configure-deployment-user-no-h.md
@@ -18,7 +18,7 @@ In the following example, replace *\* and *\*, including the
az webapp deployment user set --user-name --password
```
-You get a JSON output with the password shown as `null`. If you get a `'Conflict'. Details: 409` error, change the username. If you get a ` 'Bad Request'. Details: 400` error, use a stronger password. The deployment username must not contain ‘@’ symbol for local Git pushes.
+You get a JSON output with the password shown as `null`. If you get a `'Conflict'. Details: 409` error, change the username. If you get a `'Bad Request'. Details: 400` error, use a stronger password. The deployment username must not contain the ‘@’ symbol for local Git pushes.
You configure this deployment user only once. You can use it for all your Azure deployments.
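Because the deployment username must not contain ‘@’, a local pre-check before calling `az webapp deployment user set` can catch the problem early. A minimal sketch (the sample usernames are illustrative):

```shell
# Sketch: reject usernames containing '@' before sending them to the CLI
user="deployer"
case "$user" in
  *@*) echo "invalid: deployment username must not contain @" ;;
  *)   echo "username ok" ;;
esac
```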
diff --git a/includes/hdinsight-sdk-additional-functionality.md b/includes/hdinsight-sdk-additional-functionality.md
new file mode 100644
index 0000000000000..1a2492d73eb46
--- /dev/null
+++ b/includes/hdinsight-sdk-additional-functionality.md
@@ -0,0 +1,14 @@
+---
+author: tylerfox
+ms.service: hdinsight
+ms.topic: include
+ms.date: 04/15/2019
+ms.author: tyfox
+---
+## Additional SDK functionality
+
+* List clusters
+* Delete clusters
+* Resize clusters
+* Monitoring
+* Script Actions
\ No newline at end of file