AZ-120 Microsoft Azure SAP – Operationalize Azure SAP Architecture


1. Introduction

Hi, and welcome back to the Microsoft Azure for SAP Workloads course. In this last section, we will be looking at how we take our design and deployment into operation by looking at optimizations, maintenance, and some housekeeping tasks. So let’s get started. I’m your host, Nikolay Capralesco, founder of Reteam Labs. This course is brought to you in partnership with Sam Khanjar, senior Azure cloud solution architect at Microsoft. There are six sections in total, covering all of the AZ-120 exam objectives.

The final section of this course will take you through a checklist of tasks to perform to ensure that your estate is running efficiently, sizing the VMs appropriately so that you only pay for what you use. It’s important for you to understand which steps are necessary to keep your SAP estate clean and healthy, from running OS and SAP updates to scheduling downtime for maintenance operations. We’ll start by looking at how to optimize our landscape and ensure it’s running at optimum performance, carrying out common housekeeping tasks, and managing OS and infrastructure updates and changes.

2. SAP Performance Tools and Azure VM Resizing

It is true that we have gone through various optimizations throughout the course. This particular part will solidify and cement your understanding of such optimizations to ensure that you will be able to answer the exam questions. SAP comes with several out-of-the-box performance and tracing tools. ABAPMeter is part of the ST13 performance tools and is a very useful tool to test performance. However, you must use it carefully; we actually recommend that only an expert user should be using it. Latency between the SAP application server and the DBMS server can be tested using TCP ping or the SAP ABAP report /SSA/CAT (ABAPMeter). HANA customers running on Azure are advised to load and install the scripts, which I will highlight below, in the DBA Cockpit.
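
As a quick illustration of this kind of latency check, here is a minimal sketch using SAP’s niping utility between a hypothetical application server and database host; the host name, packet size, and loop count are placeholders, and the exact flags may vary by niping version.

```
# On the database server: start a niping echo server (host names are hypothetical).
niping -s -I 0

# On the SAP application server: measure round-trip latency to the DB host.
# -B sets the packet size in bytes, -L the number of loops; small packets
# approximate pure network latency rather than throughput.
niping -c -H dbhost01 -B 100 -L 1000
```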

SAP support engineers have created some very useful scripts for the detailed analysis of performance and configuration issues. These scripts can be loaded and saved so that they can be used at any time prior to going live. It is highly recommended to run the following scripts, review the HANA mini-check results, and evaluate any differences between the reference values and the actual values. You can scale compute vertically: after creating a virtual machine, you can scale the VM up or down by changing the VM size. Resizing the VM might require deallocating it first. This can happen if the new size is not available on the hardware cluster that is currently hosting the VM.

Similarly, when resizing VMs in the same availability set, if the new size for a VM in an availability set is not available on the hardware cluster currently hosting the VM, then all VMs in the availability set will need to be deallocated to resize the VM. You might also need to update the size of other VMs in the availability set after one VM has been resized. Resizing of Azure VMs can be performed from the Azure Portal, via PowerShell, the Azure CLI, or ARM templates, or programmatically via REST APIs. For storage, the size of the virtual machine controls how many data disks you can attach. Attaching disks is an online operation, and you can detach a data disk without stopping the Azure VM by using PowerShell or the Azure CLI, but you should first make sure that the disk is not being used.
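
As a minimal sketch of the resize workflow just described, the Azure CLI commands below check which sizes are available on the current hardware cluster, resize the VM (deallocating first if the target size isn’t available), and detach a data disk while the VM is running; the resource group, VM, and disk names are placeholders.

```
# List the sizes available for the VM on its current hardware cluster.
az vm list-vm-resize-options --resource-group SAP-RG --name sapapp01 --output table

# Resize in place if the target size is available on the current cluster.
az vm resize --resource-group SAP-RG --name sapapp01 --size Standard_E32s_v3

# If the size is not available, deallocate first, resize, then start the VM again.
az vm deallocate --resource-group SAP-RG --name sapapp01
az vm resize --resource-group SAP-RG --name sapapp01 --size Standard_E32s_v3
az vm start --resource-group SAP-RG --name sapapp01

# Detach a data disk online (first make sure nothing in the OS is using it).
az vm disk detach --resource-group SAP-RG --vm-name sapapp01 --name sapapp01-data02
```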

3. Scaling SAP HANA on HLI

To scale HLI compute up or down, you can choose from the many server sizes that are available for HANA Large Instances. They are categorized as Type I and Type II and are tailored for different workloads. Choose a size that can grow with your workload for the next three years; one-year commitments are also available. S/4HANA and SAP Business Suite on HANA on a single blade can be scaled up to 20 terabytes with a single HANA Large Instance unit. A multi-host scale-out deployment is generally used for BW/4HANA deployments as a kind of database partitioning strategy. To scale out, plan the placement of HANA tables prior to installation. From an infrastructure standpoint, multiple hosts are connected to a shared storage volume, enabling quick takeover by standby hosts in case one of the compute worker nodes in the HANA system fails.

For greenfield scenarios, the SAP Quick Sizer is available to calculate memory requirements for the implementation of SAP software on top of HANA. If you already have SAP deployments, SAP provides reports you can use to check the data used by existing systems and calculate memory requirements for a HANA instance. The list on screen describes these reports. For scaling storage on HLI, you can add storage by purchasing additional storage in 1-TB increments. Additional storage can be added as an additional volume, or it can be used to extend one or more of the existing volumes. The storage volumes are attached to the HANA Large Instance unit as NFSv4 volumes. It isn’t possible to decrease the size of the volumes that were originally deployed. It also isn’t possible to change the names of the volumes or the mount names.

4. Maintenance and Planned Outages

Next we will look at how to manage scheduled maintenance for SAP systems on Azure. Please make sure you protect your SAP landscape at every tier: database, ASCS, and application server. The infrastructure will experience outages, as Microsoft is constantly updating the platform; if you build your system with a single point of failure, then your whole SAP system will be down. As mentioned earlier in this course, you can mitigate against planned infrastructure maintenance and unplanned outages by leveraging Availability Sets and Availability Zones. When designing a recovery strategy, you first need to know the business Recovery Point Objective (RPO), which is how much data the business can tolerate losing, versus the Recovery Time Objective (RTO), which is the time it takes to recover your system after a failure. RPO and RTO will dictate your business continuity and disaster recovery (DR) design. For example, some designs only factor in single-site availability by building a high availability system across multiple Availability Sets or Availability Zones while pushing backups to a secondary region. This can also be extended to building the system across two paired regions, which gives you higher availability against regional failures; this could be either an active-active or active-passive deployment. It’s highly advisable to keep testing your DR failover scenarios, make sure you can bring the system up within your RTO objective, and not rely primarily on daily backups for your DR.
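
To make the zonal option above a bit more tangible, here is a minimal Azure CLI sketch that places two application servers into different Availability Zones of the same region. The resource group, VM names, image URN, and sizes are placeholders for illustration; verify the image URN for your region with az vm image list, and layer your load balancer and cluster configuration on top.

```
# Deploy two SAP application servers into different zones of the same region
# (placeholder names; the SLES image URN below is an example and should be verified).
az vm create --resource-group SAP-RG --name sapapp01 --zone 1 \
  --image "SUSE:sles-sap-15-sp4:gen2:latest" --size Standard_E16s_v3 \
  --vnet-name sap-vnet --subnet app-subnet \
  --admin-username azureuser --generate-ssh-keys

az vm create --resource-group SAP-RG --name sapapp02 --zone 2 \
  --image "SUSE:sles-sap-15-sp4:gen2:latest" --size Standard_E16s_v3 \
  --vnet-name sap-vnet --subnet app-subnet \
  --admin-username azureuser --generate-ssh-keys
```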

5. Remote Management

Let’s now look at some housekeeping steps as documented in the Azure Virtual Machines planning guide. There are two basic methods for connecting into Azure VMs: connect through public endpoints on a jumpbox VM, or connect through a VPN or Azure ExpressRoute. Site-to-site connectivity via VPN or ExpressRoute is necessary for production scenarios. This type of connection is also needed for non-production scenarios that feed into production scenarios where SAP software is being used. Azure Automation offers desired state configuration functionality via a cloud-based managed DSC pull server in the Azure cloud. It provides rich reports that inform you of important events, such as when nodes have deviated from their assigned configuration.

You can monitor and automatically update machine configuration across physical and virtual machines, Windows or Linux, in the cloud or on-premises. Azure Automation also includes a built-in solution that starts and stops Azure VMs based on user-defined schedules. SAP LaMa (Landscape Management) is used by many customers to operate, monitor, and refresh their SAP landscape. Beginning with SAP LaMa 3.0 SP05, it ships with a connector to Azure. By default, you can use this connector to deallocate and start virtual machines, copy and relocate managed disks, and delete managed disks. With these basic operations, you can relocate, copy, clone, and refresh SAP systems using SAP LaMa. Access management for our cloud resources is a critical function for any organization that is using the cloud.
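
As a rough sketch of what such a start/stop schedule does under the hood, these are the Azure CLI equivalents of the operations involved, with placeholder names; in a real setup the Azure Automation runbooks issue the corresponding calls on your defined schedule.

```
# Stop and deallocate a non-production SAP VM outside business hours so that
# compute charges stop (a stopped-but-allocated VM still incurs compute cost).
az vm deallocate --resource-group SAP-RG --name sapdev01

# Start it again at the beginning of the working day.
az vm start --resource-group SAP-RG --name sapdev01

# Check the current power state of the VM.
az vm get-instance-view --resource-group SAP-RG --name sapdev01 \
  --query "instanceView.statuses[?starts_with(code, 'PowerState')].displayStatus" --output tsv
```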

RBAC helps you manage who has access to Azure resources, what they can do with those resources, and what areas they have access to. RBAC is an authorization system built on Azure Resource Manager that provides fine-grained access management of Azure resources. The way you control access to resources using RBAC is to create role assignments. This is actually a key concept to understand: it’s how permissions are enforced. A role assignment consists of three elements: security principal, role definition, and scope. A security principal is an object that represents a user, group, service principal, or managed identity that is requesting access to Azure resources.

A role definition is a collection of permissions; it’s sometimes just called a role. A role definition lists the operations that can be performed, such as read, write, and delete. Roles can be high-level, like Owner, or specific, like Virtual Machine Reader. Azure includes several built-in roles that you can use. The scope is the set of resources that the access applies to. When you assign a role, you can further limit the actions allowed by defining a scope. In Azure, you can specify a scope at multiple levels: management group, subscription, resource group, or resource. All scopes are structured in a parent-child relationship.
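
To make the three elements concrete, here is a small Azure CLI sketch that assigns the built-in Virtual Machine Contributor role to a hypothetical principal at resource group scope; the object ID, subscription ID, and resource group name are placeholders.

```
# Assign the built-in "Virtual Machine Contributor" role (the role definition) to an
# Azure AD group or user (the security principal) at resource group level (the scope).
az role assignment create \
  --assignee "00000000-0000-0000-0000-000000000000" \
  --role "Virtual Machine Contributor" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/SAP-RG"

# List the assignments that apply at that scope to verify the result.
az role assignment list --scope "/subscriptions/<subscription-id>/resourceGroups/SAP-RG" --output table
```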

6. Remote Management for HLI

When connecting to HLI from an Azure VM or from on-premises, note that transitive routing between HANA Large Instance units and on-premises networks, as well as between HANA Large Instance units deployed in two different regions, does not work by default. You have the option of connecting to HANA Large Instances from the same virtual network that hosts the application-layer Azure VMs. Alternatively, you can enable transitive routing by using any of the following methods: ExpressRoute Global Reach, which we have described in detail earlier in this course; a reverse proxy to route data to and from SAP HANA on Azure Large Instances, for example NGINX with Traffic Manager deployed in the Azure virtual network; or iptables rules in a Linux VM to enable routing between on-premises locations and HANA Large Instance units, or between HANA Large Instance units in different regions.

The VM running iptables needs to be deployed in the Azure virtual network that connects HANA Large Instances to on-premises networks, and it needs to be sized so that its network throughput is sufficient for the expected traffic. Another method is Azure Firewall to direct traffic between on-premises networks and HANA Large Instance units. When using a reverse proxy, iptables, or Azure Firewall, traffic routed through an Azure virtual network can be additionally filtered by Azure network security groups, so that certain IP addresses or IP address ranges from on-premises can be blocked or explicitly allowed to access HANA Large Instances. Please be aware that implementation and support for custom solutions involving third-party network appliances or iptables isn’t provided by Microsoft; support must be provided by the vendor of the component used or by the integrator.
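
As a rough sketch of the iptables approach, assuming a hypothetical interface name and HANA Large Instance address range, the routing VM could be configured along these lines; a production setup would persist the rules across reboots and tighten them further with network security groups.

```
# Enable IPv4 forwarding on the routing VM (persist the setting in /etc/sysctl.conf).
sudo sysctl -w net.ipv4.ip_forward=1

# Masquerade traffic leaving towards the HANA Large Instance range
# (hypothetical interface eth0 and address range 10.250.0.0/24).
sudo iptables -t nat -A POSTROUTING -o eth0 -d 10.250.0.0/24 -j MASQUERADE

# Allow forwarding in both directions between on-premises and the HLI range.
sudo iptables -A FORWARD -d 10.250.0.0/24 -j ACCEPT
sudo iptables -A FORWARD -s 10.250.0.0/24 -j ACCEPT
```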

7. Remote Management for HLI cont’d.

You can also view some basic information from the Azure Portal. You will find basic unit information such as the name of the unit, its OS, its IP address, and the unit type with the number of CPU threads and memory. Another important piece of information available in the Azure Portal is the ExpressRoute circuit ID, which you will need to provide when raising a support request. Also shown is the IP address of the NFS endpoint that provides storage for the unit; you need to reference it when configuring storage snapshots. The Portal also displays information about the power state.

One of the main activities recorded is restarts. The captured data includes the status of the activity, the timestamp of its trigger, the subscription ID, and the Azure Active Directory identity that initiated the trigger. Another type of recorded activity represents changes to metadata associated with individual units, such as adding or deleting a tag. This activity is recorded as "Write HANAInstances" and has no impact on the operational state of the HANA Large Instance unit. By default, the HANA Large Instance units have no tags assigned.
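
If you prefer the command line over the Portal, the same recorded activities can also be pulled from the Azure Activity Log; the resource group name and time window below are placeholders.

```
# List activity log entries for the resource group that holds the HANA Large
# Instance resources over the last seven days (placeholder names).
az monitor activity-log list --resource-group SAP-HLI-RG --offset 7d --output table
```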

8. OS Updates, Express Route, and IPtables

With Azure VMs, you can use the Update Management solution, part of Azure Automation, to manage updates and patches for your virtual machines. You can also update the monitoring configuration for SAP. You should update the SAP monitoring configuration in any of the following scenarios: the joint Microsoft/SAP team extends the monitoring capabilities and requests more or fewer counters; Microsoft introduces a new version of the Azure infrastructure that delivers the monitoring data, and the Azure Enhanced Monitoring extension for SAP needs to be adapted to those changes; you add or remove data disks attached to your Azure VMs (in this scenario, update the collection of storage-related data; changing your configuration by adding or deleting endpoints, or by assigning IP addresses to a VM, does not affect the monitoring configuration); you change the size of your Azure VM; or you add new network interfaces to your Azure VM. To update monitoring settings, simply redeploy the Azure Enhanced Monitoring extension for SAP.

For SAP and SAP HANA, a minimum 1 Gbps connection to Azure is required, and you can always request an increase of a single circuit’s bandwidth up to a maximum of 10 Gbps. With revision 3 of HLI, the network latency experienced between VMs and HLI can be higher than typical VM-to-VM network round-trip latency; it varies between Azure regions, but the value can exceed 0.7 ms round trip and can actually reach up to 2 ms, so make sure ExpressRoute FastPath is enabled. With revision 4 of HLI, the network latency between Azure VMs deployed in proximity to the HLI stamp meets the average or better-than-average classification as documented by SAP. A final thing we need to touch on before ending this section: in some scenarios, you need to bridge two security or routing domains with an intermediary system configured to forward, tunnel, or masquerade traffic between source and destination, because there is no direct route between the two. This is most commonly used when connecting to HLI instances via internal routing and security rules, and it can be set up using a SUSE Linux Enterprise Server VM with iptables masquerading, placing the masquerading VM instance in the Azure cloud to route that traffic.
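
Coming back to the Azure Enhanced Monitoring extension for SAP mentioned above: redeploying it can also be done from the command line. The sketch below assumes the aem extension for the Azure CLI is installed and available in your CLI version, and the resource group and VM names are placeholders; treat it as illustrative rather than definitive, since the command set can differ between CLI versions.

```
# Install the aem extension for the Azure CLI (one-time step, assuming the
# extension is available in your CLI version).
az extension add --name aem

# (Re)deploy the Azure Enhanced Monitoring extension for SAP on a VM, for example
# after adding data disks or resizing the VM (placeholder names).
az vm aem set --resource-group SAP-RG --name sapapp01

# Verify the monitoring configuration afterwards.
az vm aem verify --resource-group SAP-RG --name sapapp01
```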
