AZ-120 Microsoft Azure SAP – Migrate SAP Workloads to Azure Part 3
11. Planning and Deployment Checklist: Pilot (Part Two)
You also need to test and evaluate your virtual network infrastructure and the distribution of your SAP applications across or within the Azure virtual networks. You need to evaluate the approach of a hub-and-spoke virtual network configuration versus microsegmentation within a single Azure virtual network based on the following criteria: costs due to data exchange between peered Azure VNets; a comparison between the ability to terminate peering between Azure virtual networks and the use of NSGs to isolate subnets within a virtual network, for cases where applications or VMs hosted in a subnet of the virtual network become a security risk; and central logging and auditing of network traffic between on-premises, the Internet, and the Azure virtual datacenter. Evaluate and test the data path between the SAP application layer and the SAP DBMS layer.
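To make the microsegmentation option concrete, the following is a minimal sketch, using the Azure SDK for Python, of an NSG rule that admits only the SAP application subnet into the DBMS subnet. The resource group, NSG name, address ranges, and port are hypothetical placeholders, not values from this course:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import SecurityRule

subscription_id = "<subscription-id>"
client = NetworkManagementClient(DefaultAzureCredential(), subscription_id)

# Allow only the SAP application subnet to reach the DBMS subnet on an
# example HANA SQL port. A companion deny rule at a lower priority (higher
# number) would be needed to override the default AllowVnetInBound rule.
allow_rule = SecurityRule(
    protocol="Tcp",
    access="Allow",
    direction="Inbound",
    priority=100,
    source_address_prefix="10.0.1.0/24",       # SAP application subnet
    source_port_range="*",
    destination_address_prefix="10.0.2.0/24",  # SAP DBMS subnet
    destination_port_range="30015",            # example HANA port
)

poller = client.security_rules.begin_create_or_update(
    "sap-rg", "sap-dbms-nsg", "allow-sap-app-to-dbms", allow_rule
)
poller.result()
```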
As part of your evaluation, you need to consider the following: placing network virtual appliances in the communication path between the SAP application layer and the DBMS layer of SAP NetWeaver, Hybris, or S/4HANA-based SAP systems is not supported. Placing the SAP application layer and the SAP DBMS layer in different Azure virtual networks that are not peered is also not supported. It is, however, supported to use Azure application security groups (ASGs) and network security groups (NSGs) to control traffic flow between the SAP application layer and the SAP DBMS layer. You need to make sure that Azure Accelerated Networking is enabled on the VMs used in the SAP application layer and the SAP DBMS layer; a quick audit sketch follows.
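As a hedged illustration of that check, the sketch below lists every NIC in an assumed resource group named sap-rg and flags any NIC without Accelerated Networking; the names are placeholders:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

# List every NIC in the (placeholder) SAP resource group and flag any that
# do not have Accelerated Networking turned on.
for nic in client.network_interfaces.list("sap-rg"):
    status = "enabled" if nic.enable_accelerated_networking else "MISSING"
    print(f"{nic.name}: accelerated networking {status}")
```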
Keep in mind the OS requirements for support of Accelerated Networking in Azure: Windows Server 2012 R2 or newer; SUSE Linux 12 SP3 or newer; Red Hat 7.4 or newer; and finally, Oracle Linux 7.5. Please make sure that Azure internal load balancer deployments are set up to use Direct Server Return. This particular setting will reduce latency in cases where ILBs are used for high-availability configurations on the DBMS layer. If you are using Azure Load Balancer in conjunction with a Linux guest operating system, please also check that the Linux network parameter net.ipv4.tcp_timestamps is set to zero. Now, on the topic of high availability and disaster recovery deployments: if you deploy the SAP application layer without targeting specific Azure Availability Zones, please make sure that all VMs running SAP dialog instances or middleware instances of the same SAP system are deployed in the same availability set. In case you do not require high availability for the SAP Central Services and DBMS, these VMs can be deployed into the same availability set as the SAP application layer. If you need to protect the SAP Central Services and DBMS layer for high availability with passive replicas, you need to deploy the two nodes for SAP Central Services in one availability set and the two DBMS nodes in another availability set.
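The following is a minimal sketch of auditing that placement, assuming a hypothetical sap-layer tag identifies the application-layer VMs; it verifies they all share one availability set:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

client = ComputeManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Collect the availability-set ID of every application-layer VM; the
# 'sap-layer' tag used to select the VMs is an assumed convention.
seen = set()
for vm in client.virtual_machines.list("sap-rg"):
    if (vm.tags or {}).get("sap-layer") == "application":
        seen.add(vm.availability_set.id if vm.availability_set else None)

if len(seen) != 1 or None in seen:
    print("WARNING: application-layer VMs do not share a single availability set")
```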
If you deploy into Azure Availability Zones, you cannot leverage availability sets. Instead, you should make sure that you deploy the active and passive Central Services nodes into two different availability zones that provide the smallest latency between zones. You need to use the Azure Standard Load Balancer when creating Windows Server or Pacemaker-based failover clusters for the DBMS and SAP Central Services layer across Availability Zones, because the Basic Load Balancer does not support zonal deployments (a SKU audit sketch follows this paragraph). For timeout settings, you need to check the SAP NetWeaver developer traces of the SAP instances and make sure there are no connection breaks between the enqueue server and the SAP work processes. If you use Windows failover clustering, please make sure that the parameters determining failover triggered by nonresponsive nodes are set correctly.
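Returning to the load balancer requirement above, here is a hedged sketch that flags any non-Standard SKU; the resource group name is a placeholder:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Flag any load balancer in the (placeholder) resource group that is not
# Standard SKU; Basic cannot front zone-spanning failover clusters.
for lb in client.load_balancers.list("sap-rg"):
    sku = lb.sku.name if lb.sku else "Basic"
    if sku != "Standard":
        print(f"{lb.name}: SKU is {sku}; zonal SAP clusters require Standard")
```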
Test high availability and simulate DR procedures by shutting down Azure VMs (for Windows guest OSes) or putting the operating system into panic mode (for Linux guest OSes). Please measure the time it takes to complete a failover. If the times are too long, consider the following options: for SUSE Linux, use SBD devices instead of the Azure fence agent to speed up the failover; for SAP HANA, if the reload of data takes too long, consider improving storage performance. You also need to test the backup/restore sequence and timing, and tune it if necessary. Make sure that you measure not only backup times; also test restores and record the timing of restore activities. Make sure that the restore times are within your RTO SLAs wherever your RTO relies on a database or VM restore process.
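One illustrative way to time such a failover, shown below, is to poll the clustered endpoint, for example the ILB frontend of the DBMS layer, and record how long it stays unreachable; the host and port are placeholders, and this helper is not part of any SAP or Azure tooling:

```python
import socket
import time

def measure_failover(host: str, port: int, timeout: float = 2.0) -> float:
    """Return the number of seconds the endpoint was unreachable."""
    start = None
    while True:
        try:
            # While the service answers, keep polling; once it stops
            # answering, note the time; when it answers again, report.
            with socket.create_connection((host, port), timeout=timeout):
                if start is not None:
                    return time.monotonic() - start
        except OSError:
            if start is None:
                start = time.monotonic()
        time.sleep(1)

# Placeholder ILB frontend IP and example HANA port.
print(f"Failover took {measure_failover('10.0.2.10', 30015):.1f} s")
```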
12. Planning and Deployment Checklist: Pilot (Part Three)
Task number four of the pilot phase is to perform security checks. You need to test the validity of the Azure RBAC approach you implemented. The goal is to separate and limit access and permissions delegated to different teams. As an example, the SAP Basis team members should be able to deploy Azure VMs into a given Azure virtual network and to assign disks to these Azure VMs. However, the SAP Basis team should not be able to create new virtual networks or change the settings of existing virtual networks. Conversely, members of the network team should not be able to deploy Azure VMs into virtual networks where SAP application and DBMS VMs are running, nor should they be able to change attributes of VMs or delete VMs and their disks.
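As a sketch of how such an RBAC boundary might be granted programmatically, assuming the azure-mgmt-authorization package, the following assigns the built-in Virtual Machine Contributor role to a hypothetical SAP Basis group at resource group scope only, so its members can manage VMs and disks but not virtual networks:

```python
import uuid
from azure.identity import DefaultAzureCredential
from azure.mgmt.authorization import AuthorizationManagementClient
from azure.mgmt.authorization.models import RoleAssignmentCreateParameters

subscription_id = "<subscription-id>"
client = AuthorizationManagementClient(DefaultAzureCredential(), subscription_id)

# Scope the assignment to the SAP resource group only.
scope = f"/subscriptions/{subscription_id}/resourceGroups/sap-rg"

# Well-known ID of the built-in 'Virtual Machine Contributor' role.
role_definition_id = (
    f"/subscriptions/{subscription_id}/providers/Microsoft.Authorization/"
    "roleDefinitions/9980e02c-c2be-4d73-94e8-173b1dc7cf3c"
)

client.role_assignments.create(
    scope,
    str(uuid.uuid4()),  # the role assignment name must be a new GUID
    RoleAssignmentCreateParameters(
        role_definition_id=role_definition_id,
        principal_id="<sap-basis-group-object-id>",  # placeholder object ID
    ),
)
```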
You need to verify that NSG rules are working as expected and shield the protected resources. You also need to verify encryption at rest and in transit. You need to define and implement processes to back up, store, and access certificates, as well as validate the restore process of encrypted entities. You need to use Azure Disk Encryption for OS disks (an encryption audit sketch appears at the end of this section). Finally, consider a pragmatic approach when deciding whether to implement an encryption mechanism. For example, please evaluate whether it is necessary to apply both Azure Disk Encryption and the DBMS Transparent Data Encryption. Task number five of the pilot phase is to test performance in migration scenarios. Leverage SAP tracing and measurements to compare the pilot with the current implementation based on the following: the top ten online reports and the top ten batch jobs.
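To close the section, here is the encryption audit sketch referenced above. Note that it reports the server-side encryption setting of each managed disk; verifying Azure Disk Encryption inside the guest OS would additionally require inspecting the VM instance view, which is beyond this sketch. Names are placeholders:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

client = ComputeManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Print the encryption setting reported for each managed disk in the
# (placeholder) SAP resource group.
for disk in client.disks.list_by_resource_group("sap-rg"):
    enc = disk.encryption.type if disk.encryption else "none reported"
    print(f"{disk.name}: encryption = {enc}")
```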
13. Planning and Deployment Checklist: Non-Prod, Prod Prep, Go Live, and Post Prod
We are now entering the non-production phase of the SAP workload planning and deployment checklist. In this phase, you start to deploy non-production SAP systems into Azure, following on from a successful pilot that leveraged all the testing and validation tasks we covered; all the criteria and steps applicable to the pilot apply in this phase as well. The non-production environment typically includes development, unit test, and business regression test systems. It is recommended that at least one of them implements the HA configuration that will be used for the future production system. We are now entering the production preparation phase of the SAP workload planning and deployment checklist.
In this phase, you should leverage all the knowledge and experience you accumulated in the prior phases and apply it in preparation for the production deployments. In addition, in migration scenarios, you should prepare for data transfer between your current hosting location and Azure. We are now entering the go-live phase of the SAP workload planning and deployment checklist. During the go-live phase, please make sure to follow the playbooks you developed in earlier phases. Execute the steps that you tested and trained for.
Don't accept last-minute changes in configurations and processes. In addition, please apply the following measures. Measure number one: verify that monitoring is operational. Recommended monitoring approaches include the Azure portal and Azure Monitor, as well as Perfmon for Windows and SAR for Linux. Please monitor the following counters: CPU, memory, disk, and network (a metrics-query sketch follows this list of measures). Measure number two: after the migration of the data, perform all the validation tests you agreed upon with the business owners. Accept validation test results only if you have results for the original source systems. Measure number three: verify that all interfaces are functioning and that applications can communicate with the newly deployed production systems. Measure number four: validate the transport and correction system through SAP transaction STMS. Measure number five: perform database backups once the system is released for production.
Measure number six: perform VM backups for the SAP application layer VMs once the system is released for production. Measure number seven: for SAP systems that were not part of the current go-live phase but communicate with the SAP systems that you moved into Azure in the current go-live phase, you need to reset the host name buffer in SM51. This step will purge the cached IP addresses associated with the names of the application instances you moved into Azure.
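To support measure number one, the following is a hedged sketch of pulling the CPU counter for one production VM through Azure Monitor, using the azure-monitor-query package; the subscription, resource group, and VM name are placeholders:

```python
from datetime import timedelta
from azure.identity import DefaultAzureCredential
from azure.monitor.query import MetricsQueryClient

client = MetricsQueryClient(DefaultAzureCredential())
resource_id = ("/subscriptions/<subscription-id>/resourceGroups/sap-rg/"
               "providers/Microsoft.Compute/virtualMachines/sap-app-01")

# Query the last hour of the 'Percentage CPU' platform metric at
# five-minute granularity.
response = client.query_resource(
    resource_id,
    metric_names=["Percentage CPU"],
    timespan=timedelta(hours=1),
    granularity=timedelta(minutes=5),
)

for metric in response.metrics:
    for series in metric.timeseries:
        for point in series.data:
            print(point.timestamp, point.average)
```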
We are now entering the post-production phase of the SAP workload planning and deployment checklist. This phase comprises monitoring, operating, and administering the system in migration scenarios. From the SAP perspective, this involves the same tasks that were part of the operational model in the source environment. Azure-specific tasks include monitoring and analyzing Azure resource billing, optimizing the price-performance ratio of Azure compute and storage resources, and minimizing cost by stopping or deallocating Azure VMs that are not actively used.
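As one possible automation of that last cost measure, the sketch below deallocates, rather than merely stops, VMs in an assumed non-production resource group, since only deallocation releases the compute billing; the auto-shutdown tag convention is an assumption for illustration:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

client = ComputeManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Deallocate every VM in the (placeholder) non-production resource group
# that carries the assumed 'auto-shutdown' tag.
for vm in client.virtual_machines.list("sap-nonprod-rg"):
    if (vm.tags or {}).get("auto-shutdown") == "true":
        print(f"Deallocating {vm.name} ...")
        client.virtual_machines.begin_deallocate("sap-nonprod-rg", vm.name).result()
```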
14. Azure Data Box
For an offline data transfer, we have the following options. The Microsoft Azure Data Box Disk solution lets you send terabytes of on-premises data to Azure in a quick, inexpensive, and reliable way. The secure data transfer is accelerated by shipping you one to five SSDs. These 8-terabyte encrypted disks are sent to your datacenter through a regional carrier. The Microsoft Azure Data Box solution lets you send terabytes of data in and out of Azure in a way that is also quick, inexpensive, and reliable. The secure data transfer is accelerated by shipping you a proprietary Data Box storage device. Each storage device has a maximum usable storage capacity of 80 terabytes and is transported to your datacenter through a regional carrier.
The device has a rugged casing to protect and secure data during transit. Azure Data Box Heavy allows you to send hundreds of terabytes of data to Azure in a quick, inexpensive, and reliable way. The data is transferred to Azure by shipping you a Data Box Heavy device with a storage capacity of one petabyte. You fill the device with your data and send it back to Microsoft. This device, too, has a rugged casing to protect and secure data during transit. Here are the various scenarios where Data Box can be used for data transfer. One-time migration: when a large amount of on-premises data is moved to Azure. Initial bulk transfer: when an initial bulk transfer is done using Data Box Heavy, followed by incremental transfers over the network.
Periodic uploads: when a large amount of data is generated periodically and needs to be moved to Azure, for example, in the energy exploration industry, where video content is generated on oil rigs and windmill farms. Now let's look at an online method for transferring data to the cloud. Azure Data Box Gateway is a storage solution that enables you to seamlessly send data to Azure. Data Box Gateway is a virtual device based on a virtual machine provisioned in your virtualized environment or hypervisor. The virtual device resides on your premises, and you write data to it using the NFS and SMB protocols. The device then transfers your data to Azure block blobs, page blobs, or Azure Files. Data Box Gateway can be leveraged for transferring data to the cloud in scenarios such as cloud archival and disaster recovery, or when there is a need to process your data at cloud scale.
Here are some specific scenarios where Data Box Gateway can be used for data transfer. Cloud archival: you can copy hundreds of terabytes of data to Azure Storage using Data Box Gateway in a secure and efficient manner. The data can be ingested one time or on an ongoing basis for archival scenarios. Continuous data ingestion: you can continuously ingest data into the device to copy to the cloud, regardless of the data size; as the data is written to the gateway device, the device uploads the data to Azure Storage. Initial bulk transfer followed by incremental transfer: use Data Box for the bulk transfer in an offline mode as the initial seed, and Data Box Gateway for incremental transfers as an ongoing feed over the network.
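For the incremental-transfer leg of that last scenario, data can also be sent over the network straight into the same Azure Storage account. A minimal sketch with the azure-storage-blob package follows; the account URL, container, and file names are placeholders:

```python
from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobServiceClient

# Connect to the (placeholder) target storage account; the caller must hold
# a blob data role such as Storage Blob Data Contributor.
service = BlobServiceClient(
    account_url="https://<storage-account>.blob.core.windows.net",
    credential=DefaultAzureCredential(),
)
container = service.get_container_client("sap-archive")

# Upload one incremental archive as a block blob.
with open("export/delta-2024-01.tar", "rb") as data:
    container.upload_blob(name="delta-2024-01.tar", data=data, overwrite=True)
```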
15. SAP HANA System Replication
Azure Site Recovery (ASR) has been tested and integrated with SAP applications. With ASR, you can do the following: enable protection of SAP NetWeaver and non-NetWeaver production applications that run on-premises by replicating components to Azure; enable protection of SAP NetWeaver and non-NetWeaver production applications that run on Azure by replicating components to another Azure datacenter; and simplify cloud migration by using ASR to migrate your SAP deployment to Azure.
You can also simplify SAP project upgrades, testing, and prototyping by creating a production clone on demand for testing SAP applications. The following mechanisms should be implemented to protect the individual tiers of the SAP deployment. SAP Web Dispatcher pool: the Web Dispatcher component is used as a load balancer for SAP traffic among the SAP application servers. To achieve high availability for the Web Dispatcher component, Azure Load Balancer is used to implement the parallel Web Dispatcher setup.
This is done in a round-robin configuration for HTTP(S) traffic distribution among the available Web Dispatchers in the balancer's back-end pool. Effectively, the VMs will be replicated using Azure Site Recovery, and automation scripts will be used to configure the load balancer in the DR region. SAP application server pool: the SMLG transaction is used to manage logon groups for ABAP application servers. It uses the load-balancing function within the message server of the Central Services to distribute workload among the SAP application server pool for SAPGUI and RFC traffic. Effectively, the VMs will be replicated using ASR without the need to provision a load balancer. SAP Central Services cluster: Central Services
runs on VMs in the application tier and is a potential single point of failure when deployed to a single VM. To implement a high-availability solution, either a shared-disk cluster or a file-share cluster should be used. To configure VMs for a shared-disk cluster, use Windows Server Failover Clustering. Cloud Witness is recommended as the quorum witness.
ASR does not replicate the Cloud Witness; therefore, it is recommended to deploy the Cloud Witness in the DR region to support the failover cluster environment. SIOS DataKeeper Cluster Edition performs the cluster shared volume function by replicating independent disks owned by the cluster nodes. Azure does not natively support shared disks and therefore requires solutions provided by SIOS. Another way to handle clustering is to implement a file-share cluster. SAP recently modified the Central Services deployment pattern to access the /sapmnt global directories via a UNC path. However, it is still recommended to ensure that the /sapmnt UNC share is highly available.
This can be done on the Central Services instance by using Windows Server Failover Clustering with Scale-Out File Server (SOFS) and the Storage Spaces Direct feature in Windows Server 2016. Currently, ASR supports only crash-consistent point replication of VMs that use Storage Spaces Direct, as well as of the passive node of SIOS DataKeeper. For Active Directory, use native AD DS replication technology to extend your AD to Azure and provide availability and DR. For SQL- and HANA-based VMs, use SQL Server Always On or HSR, respectively, to replicate between different nodes to provide HA and DR capabilities.