AZ-120 Microsoft Azure SAP – Migrate SAP Workloads to Azure Part 4

January 25, 2023

16. Azure Site Recovery (ASR)

SAP HANA offers different modes for the replication of the redo log: synchronous in-memory, synchronous with full sync, synchronous, and asynchronous. Let’s explore how each of these modes works in detail. Synchronous in-memory (the default): the secondary system sends an acknowledgment back to the primary system as soon as the data is received in memory.

The disk I/O speed on the secondary system doesn’t influence the primary’s performance. When the connection to the secondary system is lost, the primary system continues transaction processing and writes the changes only to the local disk. Synchronous with full sync: in this replication mode, the log write is successful only when the log buffer has been written to the log file of both the primary and the secondary system. When the secondary system is disconnected, for example because of a network failure, the primary system suspends transaction processing until the connection to the secondary system is reestablished. No data loss occurs in this scenario.

Synchronous: the secondary system sends an acknowledgment back to the primary system as soon as the data is received and persisted to the log volumes on disk. When the connection to the secondary system is lost, the primary system continues transaction processing and writes the changes only to the local disk. No data loss occurs as long as the secondary system is connected; data loss can occur when a failover is executed while the secondary system is disconnected.

Asynchronous: the primary system sends redo log buffers to the secondary system asynchronously. The primary system commits a transaction when the log has been written to the log file of the primary system and sent to the secondary system through the network; it does not wait for confirmation from the secondary system. This option provides better performance because it is not necessary to wait for log I/O on the secondary system, and database consistency across all services on the secondary system is still guaranteed. However, this mode is more vulnerable to data loss: data changes may be lost on failover.
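
To keep these trade-offs straight, here is a minimal Python sketch that models the behavioral differences described above. It is purely illustrative and not an SAP API; the mode labels and the helper function are our own names.

```python
# Illustrative model of the four HANA redo log replication modes described
# above. This is not an SAP API; it simply encodes the behavioral differences.
MODES = {
    "SYNCMEM":          "ack when log is received in secondary memory",
    "SYNC (full sync)": "log write succeeds only when persisted on both systems",
    "SYNC":             "ack when log is persisted to secondary log volumes",
    "ASYNC":            "no ack; commit after local write and network send",
}

def failover_data_loss_possible(mode: str, secondary_connected: bool) -> bool:
    """Rough rule of thumb for each mode, per the descriptions above."""
    if mode == "SYNC (full sync)":
        return False                 # primary suspends on disconnect, so no loss
    if mode == "ASYNC":
        return True                  # commits never wait for the secondary
    return not secondary_connected   # SYNC/SYNCMEM: loss if failover happens
                                     # while the secondary is disconnected

for mode, behavior in MODES.items():
    print(f"{mode}: {behavior}")
print(failover_data_loss_possible("SYNC", secondary_connected=False))  # True
```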

17. SAP Database Migration Option (DMO)

With the classical migration option, SAP’s Software Provisioning Manager (SWPM) is used as the software logistics (SL) tool for the database migration. SWPM exports data from a source system and imports it into a target system, where the target can be any database: SQL Server, Oracle, DB2, and so on. This method uses a file-based approach. DMO facilitates both an SAP upgrade and a database migration to the SAP HANA database with one tool. Because both steps are handled at once, the DMO process is often referred to as a one-step migration. In comparison, classical migration uses a heterogeneous system copy approach, thus gaining the title of a two-step migration, with the first step being the migration, followed by a second step facilitating the SAP upgrade. For a wider comparison of DMO processing, you can visit this URL.

The Software Update Manager (SUM) creates the shadow repository on the traditional database until the downtime phase. The target database is built up in parallel, the shadow repository is subsequently copied, the SAP database connection is switched to the target database, and the downtime processing starts. Following the migration of the application data, which includes data conversion, the upgrade is finalized and the SAP system is running on the target database. The source database retains the unmodified application data, and therefore a fallback is always possible.

When migrating an existing SAP system running on anyDB to an SAP HANA database, the following steps are generally required: a dual-stack split, a Unicode conversion (for versions prior to SAP NetWeaver 7.5), a database upgrade of the anyDB, and an upgrade of the SAP software. Database Migration Option with System Move, as the name states, involves enabling the migration with a system move, which is available from SUM 1.0 SP21, where the application server driving the migration can be changed as part of the process; that is, SUM is started on an on-premises application server and switched to an application server running in Azure. SUM runs on the source system and will stop at the Execution phase.

Subsequently, the complete SUM directory is copied to Azure, where the import process continues on the new target application server. To compare classical DMO against DMO with the System Move option, let’s use the following parameters.

Purpose or use case: classical DMO can perform an in-place upgrade and migration; DMO with System Move targets cloud (Azure-based) migrations.
Downtime optimization flexibility: high for classical DMO; medium for DMO with System Move.
Cloud migration: with classical DMO it is technically possible, but not officially supported by SAP; with DMO with System Move, yes.

Target servers: with classical DMO, the same application server can be used to connect to SAP HANA after the migration; with DMO with System Move, new servers need to be built in Microsoft Azure.
Options for data transfer: classical DMO offers memory pipes and file system dump; DMO with System Move offers file system dump and can use sequential or parallel load options.
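
To ground the System Move handover described above, here is a minimal sketch of copying the complete SUM directory from the on-premises application server to the Azure target, assuming Linux hosts with rsync over SSH; the SUM path and target hostname are placeholders.

```python
# Hypothetical illustration of the System Move handover: after SUM stops at
# the Execution phase, the complete SUM directory is copied from the
# on-premises application server to the Azure target application server.
import subprocess

SUM_DIR = "/usr/sap/SID/SUM"   # placeholder SUM directory path
AZURE_TARGET = "azure-app01"   # placeholder Azure target app server (SSH reachable)

# rsync preserves permissions and symlinks and can resume an interrupted
# transfer, which matters for a directory tree of this size.
subprocess.run(
    ["rsync", "-az", "--partial", f"{SUM_DIR}/", f"{AZURE_TARGET}:{SUM_DIR}/"],
    check=True,
)
```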

18. One-Step Migrations vs. Two-Step Migrations

To use the one-step migration with DMO and System Move, the following steps need to happen:
• Ensure connectivity to Azure is available via ExpressRoute (highly recommended, for higher-speed connectivity) or VPN into Azure.
• Provision the target infrastructure in Azure, which includes the SAP NetWeaver and SAP HANA database servers. The Azure infrastructure can be rapidly deployed using predefined ARM templates (see the sketch after this list).
• Ensure SUM is started on the on-premises source SAP application server.
• Ensure uptime activities are executed from the on-premises SAP application server and the shadow repository is created.
• As part of the downtime phase, export files are generated on the source system. These files are then transferred to Azure via ExpressRoute or VPN; file transfers can occur in sequential data transfer or parallel data transfer mode.
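
The ARM template deployment mentioned above can be scripted. The following is a minimal sketch using the Azure SDK for Python (azure-identity and azure-mgmt-resource); the resource group, deployment name, template URI, and parameter values are all placeholders, and the template itself would be whichever predefined SAP template you use.

```python
# Minimal sketch: deploying a predefined ARM template for the target SAP
# infrastructure with the Azure SDK for Python. All names, URIs, and
# parameters below are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

credential = DefaultAzureCredential()
client = ResourceManagementClient(credential, "<subscription-id>")

poller = client.deployments.begin_create_or_update(
    "sap-target-rg",                      # placeholder resource group
    "sap-netweaver-hana",                 # placeholder deployment name
    {
        "properties": {
            "mode": "Incremental",
            # URI of the predefined template (placeholder).
            "template_link": {"uri": "https://example.com/sap-hana-template.json"},
            "parameters": {"vmSize": {"value": "Standard_M64s"}},
        }
    },
)
deployment = poller.result()              # block until provisioning finishes
print(deployment.properties.provisioning_state)
```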

Let’s look at each of the data transfer modes available as part of the one-step migration. In the sequential data transfer mode, all tables are exported to the file system of the on-premises server. Once the export is finalized, the complete SUM directory is transferred to the target application server in Azure. The SUM directory is resynchronized during the host change phase of the DMO, SUM is initiated on the target Azure application server, the import is started, and post-processing is completed.

In the parallel data transfer mode, data is transferred to the Azure target immediately after the export is completed for each file, via the DMO-to-cloud script. This mode can be used to minimize migration downtime. For the two-step migration, you need to consider the following: ensure connectivity to Azure is available via ExpressRoute (again recommended) or VPN, and provision the target infrastructure on Azure, which includes the clone system and the target SAP NetWeaver and SAP HANA database servers.

The Azure infrastructure can again be deployed using predefined ARM templates. The clone system can be built with a homogeneous system copy (backup and restore) or via DBMS replication tools, for example Oracle Data Guard or SQL Server Always On. Business and technical testing should be initiated, with functional, integration, and acceptance testing, to ensure the move of data has been successful. Following the business and technical validation, the traditional DMO process can be followed to migrate and upgrade to SAP HANA. DMO can be leveraged with the in-memory pipe method, meaning that export and import occur within the same application server and memory segment, for accelerated migrations. Following the migration to SAP HANA, business and technical validation should again be initiated. In this approach, two downtimes and two testing cycles are required.

19. Optimizations

In this last part of the section, we will look at how we can optimize our infrastructure for the migration. The following guidance should be followed for the source export of very large database (VLDB) systems:
• Purge technical tables and unnecessary data. For full details, you may review SAP Note 2388483 (How-To: Data Management for Technical Tables).
• Separating the R3load processes from the DBMS server is an essential step to maximize export performance.
• R3load should run on fast, recent Intel CPUs; do not run R3load on UNIX servers, as the performance is very poor. Two-socket commodity Intel servers with 128 GB of RAM cost little and will save days or weeks of tuning, optimization, or consulting time.
• Use a high-speed network, ideally 10 Gb, with minimal network hops between the source DB server and the Intel R3load servers.
• It is recommended to use physical servers for the R3load export servers: virtualized R3load servers at some customer sites did not demonstrate good performance or reliability at extremely high network throughput.
• Sequence larger tables to the start of orderby.txt (see the sketch after this list).
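
As an illustration of the last point, here is a small Python sketch that writes an orderby.txt with the largest tables first, assuming table sizes have already been extracted from the source database; the table names, sizes, and file location are placeholders.

```python
# Hypothetical helper: write orderby.txt with the largest tables first, so
# the longest-running exports start at the beginning of the export run.
# Table names and sizes are placeholders taken from DBMS statistics.
table_sizes_mb = {
    "CDCLS": 950_000,
    "EDI40": 420_000,
    "BALDAT": 310_000,
    "SOFFCONT1": 120_000,
}

largest_first = sorted(table_sizes_mb, key=table_sizes_mb.get, reverse=True)

with open("orderby.txt", "w") as f:
    f.write("\n".join(largest_first) + "\n")
```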

Additional recommendations:
• Configure semi-parallel export/import using signal files.
• Large exports will benefit from unsorted export on larger tables. It is important to review the net impact of unsorted exports, since importing unsorted exports into databases that have a clustered index on the primary key will be slower.
• Configure jumbo frames between the source DB server and the Intel R3load servers.
• Adjust memory settings on the source database server to optimize for sequential read and export tasks; see SAP Note 936441 (Oracle settings for R3load-based system copy).

Jumbo frames are Ethernet frames larger than the default of 1,500 bytes; typical jumbo frame sizes are 9,000 bytes. The frame size must be identical on all devices, otherwise resource-intensive conversion will occur.
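
Since mismatched frame sizes are costly, it can be worth verifying the configured MTU on each server before starting the export. Below is a minimal sketch assuming a Linux host exposing sysfs; the interface name and expected MTU are placeholders.

```python
# Hypothetical pre-flight check (Linux): confirm an interface runs the
# intended jumbo-frame MTU before the export starts. Interface name and
# expected MTU are placeholders.
from pathlib import Path

EXPECTED_MTU = 9000   # typical jumbo-frame size
IFACE = "eth0"        # placeholder interface name

mtu = int(Path(f"/sys/class/net/{IFACE}/mtu").read_text())
if mtu != EXPECTED_MTU:
    raise SystemExit(
        f"{IFACE}: MTU is {mtu}, expected {EXPECTED_MTU}; mismatched frame "
        "sizes force resource-intensive conversion"
    )
print(f"{IFACE}: jumbo frames active (MTU {mtu})")
```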

Increasing the frame size on the source DB server, on all intermediate network devices such as switches, and on the Intel R3load servers reduces CPU consumption and increases network throughput. Additional networking features such as Receive Side Scaling (RSS) can be switched on or configured to distribute network processing across multiple processors on the R3load servers. VMware has proven to make network tuning for jumbo frames and RSS more complex, and it is not recommended unless a very expert skill level is available.

R3load exports data from DBMS tables and compresses this raw, format-independent data into dump files. These dump files need to be uploaded into Azure and imported into the target SQL Server database. The performance of the copy and upload of these dump files to Azure is a critical component of the overall migration process, so the network upload must be optimized. There are two basic approaches to optimizing the network transfer. The first is to copy from the on-premises export servers to Azure Blob storage over the public Internet with AzCopy. In general, AzCopy will perform best with a larger number of small files and /NC values between 24 and 48; /NC determines how many parallel sessions are used to transfer a file. If a customer has a powerful server and a very fast Internet connection, this value can be increased, but if it is increased too far, the connection to the R3load export server will be lost due to network saturation.
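
To make that first approach concrete, here is a minimal sketch that drives AzCopy from Python. It assumes the classic Windows AzCopy syntax with the /NC flag referenced above; the source directory, container URL, and key are placeholders.

```python
# Hypothetical wrapper around the classic Windows AzCopy syntax referenced
# above (/NC controls parallel sessions). Source path, container URL, and
# key are placeholders.
import subprocess

cmd = [
    "AzCopy",
    r"/Source:D:\export\dump",                           # R3load dump directory
    "/Dest:https://mystore.blob.core.windows.net/dmo",   # placeholder container
    "/DestKey:<storage-account-key>",
    "/NC:32",   # parallel sessions; 24-48 is the suggested starting range
    "/S",       # recurse into subdirectories
]
subprocess.run(cmd, check=True)
```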

Because of that saturation risk, you need to be careful: monitor the network throughput in Windows Task Manager. Copy throughput of over one gigabit per second per R3load export server can easily be achieved. The second approach is to copy from the on-premises R3load export servers to an Azure VM or Blob storage via a dedicated ExpressRoute connection, using AzCopy, Robocopy, or a similar tool. In the background, Robocopy starts uploading dump files; when entire split tables and packages are completed, the SGN (signal) file is copied, either manually or via a script.

When the SGN file for a package arrives on the import R3load server, the import for this package is triggered automatically. Please note that copying files over NFS or Windows SMB protocols is not as fast or robust as mechanisms such as AzCopy, so it is recommended to test the performance of both file upload techniques. Finally, it is also recommended to notify Microsoft support of VLDB migration projects, because very high throughput network operations might be misidentified as denial-of-service attacks.
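
As an illustration of the signal-file trigger described above, here is a small Python sketch that watches the landing directory for arriving .SGN files and starts the import for the corresponding package. In a real migration the migration tooling provides this trigger; the directory layout, file naming, and import command here are placeholders.

```python
# Hypothetical watcher on the import R3load server: when a package's .SGN
# signal file arrives, start the import for that package. In practice the
# migration tooling provides this trigger; the sketch mirrors the mechanism.
import subprocess
import time
from pathlib import Path

DUMP_DIR = Path(r"D:\import\dump")   # placeholder landing directory
started = set()

while True:
    for sgn in DUMP_DIR.glob("*.SGN"):
        package = sgn.stem           # e.g. "CDCLS-1" for CDCLS-1.SGN
        if package not in started:
            started.add(package)
            # Placeholder import command for this package.
            subprocess.Popen(["import_package.cmd", package])
    time.sleep(10)                   # poll every 10 seconds
```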

