DP-203 Data Engineering on Microsoft Azure – Monitor and optimize data storage and data processing Part 10
26. Azure Databricks – Sending logs to Azure Monitor
In this chapter, I am going to give just a quick note on sending application logs from Azure Databricks onto Azure Monitor. I'm also going to include this link as a resource in this chapter. This link gives you the way in which you can send the application logs for Azure Databricks onto a Log Analytics workspace. We have seen the Log Analytics workspace before: you have a workspace, and you can stream the logs from various resources onto the workspace itself. If you go on to Logs, you can see that I've already been sending logs from other resources.
So here you will see Azure diagnostics and SQL security audit events. These are logs that are already being streamed onto this Log Analytics workspace. Now, let's say you want to ensure that the detailed logs of whatever applications or jobs you run on Azure Databricks are also sent on to a Log Analytics workspace. You can implement the steps which are present in this particular documentation link. Now, from the perspective of the exam, what is important? If you want to implement this, you need to have a Log Analytics workspace in place. And the other step is to configure your Databricks cluster to use the monitoring library.
So if you actually open up this GitHub README, it gives you the list of steps on how you can integrate Azure Databricks with this monitoring library. Now, the reason I'm not going through this is that it is quite an extensive process to integrate Azure Databricks when it comes to monitoring. I'm hoping that in the future they make this much simpler, wherein they'll have direct monitoring, or direct sending of logs, from Azure Databricks onto the Log Analytics workspace.
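Just to make the idea concrete, here is a minimal sketch of what the application side might look like in a Databricks notebook once the cluster has been configured with the monitoring library as per the README. The forwarding to Log Analytics is done by the library's Log4j appender, not by this code, and the logger name here is just an illustrative choice.

```python
# Minimal sketch: assumes the cluster is already configured with the
# monitoring library from the GitHub README, so Log4j output is
# forwarded to the Log Analytics workspace.
# In a Databricks notebook, "sc" (the SparkContext) is predefined.

# Get a Log4j logger through the JVM bridge; "MyDatabricksJob" is an
# illustrative logger name, not something defined by the library.
log4j = sc._jvm.org.apache.log4j
logger = log4j.LogManager.getLogger("MyDatabricksJob")

logger.info("Job started")           # forwarded as an info-level record
logger.warn("Input file was empty")  # forwarded as a warning-level record
```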
As I said, there is quite a lot that you need to implement in order to make this possible. And in addition to logs, if you also want to send application metrics from your Azure Databricks application code onto Azure Monitor, you need to build a JAR file, and then you create Dropwizard gauges and counters in your application code. So, as I said, there's quite a lot that you need to do in order to ensure that you can send both your logs and your metrics onto the Azure Monitor service. But from the perspective of the exam, I want to ensure that I give you information about this particular feature and its implementation.
27. Azure Event Hubs – High Availability
Now in this chapter, I want to go through the high availability options that are available when it comes to Azure Event Hubs. By default, Azure Event Hubs can withstand individual machines in the underlying data center going down. You can also enable availability zone support for Azure Event Hubs. This allows the Event Hubs to be available even in the event of a data center failure. An availability zone is a group of data centers, and by ensuring that your Event Hubs are spread across multiple availability zones, it helps to keep the Event Hub up and running even if a data center failure occurs.
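As a rough illustration, zone redundancy is a property you set when the namespace is created. A sketch using the azure-mgmt-eventhub Python SDK might look like this; the resource group, namespace name, and subscription ID are placeholders, and the exact model fields can vary between SDK versions.

```python
# Sketch: creating a zone-redundant Event Hubs namespace with the
# azure-mgmt-eventhub SDK. All names and IDs below are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.eventhub import EventHubManagementClient
from azure.mgmt.eventhub.models import EHNamespace, Sku

client = EventHubManagementClient(DefaultAzureCredential(), "<subscription-id>")

poller = client.namespaces.begin_create_or_update(
    "my-resource-group",
    "my-eventhub-namespace",
    EHNamespace(
        location="northeurope",
        sku=Sku(name="Standard", tier="Standard"),
        zone_redundant=True,  # spread the namespace across availability zones
    ),
)
namespace = poller.result()
print(namespace.zone_redundant)
```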
But what happens if the entire region goes down? So let's say you have an Event Hub namespace in the North Europe location, and let's say this location goes down. In Azure Event Hubs, you can actually create a pairing between this namespace and a namespace that is located in another region. Your application can then start sending messages onto the Event Hub namespace in the different location. And if you are sending your events onto a storage account via the Capture feature, then in order to make this entire setup highly available, you can also consider using geo-replicated storage accounts.
So remember the section wherein we looked at the replication techniques that are available for storage accounts: in addition to making your Azure Event Hubs highly available, you can also protect your storage accounts against a region-level failure by implementing geo-replicated storage accounts.
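For reference, here is a short sketch of creating a geo-replicated (RA-GRS) storage account for the Capture destination with the azure-mgmt-storage Python SDK. The account and resource group names are placeholders.

```python
# Sketch: creating a read-access geo-redundant (RA-GRS) storage account
# as the Capture destination, using azure-mgmt-storage.
# All names and IDs below are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient
from azure.mgmt.storage.models import StorageAccountCreateParameters, Sku, Kind

client = StorageManagementClient(DefaultAzureCredential(), "<subscription-id>")

poller = client.storage_accounts.begin_create(
    "my-resource-group",
    "mycapturestorage",
    StorageAccountCreateParameters(
        sku=Sku(name="Standard_RAGRS"),  # geo-replicated with read access
        kind=Kind.STORAGE_V2,
        location="northeurope",
    ),
)
account = poller.result()
```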
Now I'll just go over onto Azure Event Hubs and I'll show you where you can implement this geo-recovery feature. So if I go on to my Event Hub namespace, which is in the North Europe location, and I go on to Geo-Recovery, I can click on the option to initiate pairing. Here I can choose my resource group, and I can choose the location that I want to pair with, let's say West Europe. And here we can now create a secondary Event Hub namespace. Then I'll scroll down, and we have to give something known as an alias.
So now, instead of your application first connecting onto the App namespace, which is our primary, and then connecting onto the secondary namespace if there is a failure in the App namespace, it can just connect to the alias. I'll explain this once we have the pairing in place. So I'll give a name and hit Create. Now, this might take a couple of minutes, so let's wait till it's complete. Once the pairing is complete, you can see that you have your primary and your secondary namespace in place.
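Incidentally, the same pairing can also be scripted rather than created in the portal. Here is a rough sketch using the azure-mgmt-eventhub Python SDK; the alias, namespace names, and IDs are all placeholders.

```python
# Sketch: creating the Geo-DR pairing (alias) between a primary and a
# secondary Event Hubs namespace using azure-mgmt-eventhub.
# All resource names and the subscription ID are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.eventhub import EventHubManagementClient
from azure.mgmt.eventhub.models import ArmDisasterRecovery

client = EventHubManagementClient(DefaultAzureCredential(), "<subscription-id>")

# The partner (secondary) namespace is referenced by its full resource ID.
secondary_id = (
    "/subscriptions/<subscription-id>/resourceGroups/my-resource-group"
    "/providers/Microsoft.EventHub/namespaces/my-secondary-namespace"
)

client.disaster_recovery_configs.create_or_update(
    "my-resource-group",
    "my-primary-namespace",
    "my-eventhub-alias",  # the alias your application will connect to
    ArmDisasterRecovery(partner_namespace=secondary_id),
)
```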
Now, if the region that is hosting your primary namespace goes down, you can fail over onto the secondary namespace; this failover can be initiated at any point in time. In your application, in order to ensure that there is no disruption to sending events, instead of using the shared access policy of each namespace separately, you can use the shared access policy that this alias has of its own. What do I mean by this? Remember, in one of our earlier programs, when we saw a .NET application for sending messages onto Azure Event Hubs, we had to take the connection string. Now, instead of going onto the App namespace and taking its shared access policy, or taking the shared access policies of the Event Hubs individually, what you can do is go onto the alias, go onto Shared access policies, and you will see your shared access policies from there itself.
So now, if you go on to one of the existing shared access policies, you will see something known as the alias primary connection string. You can copy this into your program. What this does is: if there is a failure in the primary Event Hub namespace, you don't need to change the connection string in your program, because it is now connecting to the alias. The alias will ensure that if there is a problem with the Event Hub in the primary namespace, your program can connect to the Event Hub in the secondary namespace.
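The course demo uses a .NET application, but the same idea in Python, as a minimal sketch with the azure-eventhub package, looks like this. The connection string and event hub name are placeholders; the point is that the program only ever references the alias.

```python
# Sketch: sending events through the alias connection string using the
# azure-eventhub package. The connection string and event hub name are
# placeholders; copy the "alias primary connection string" shown in the
# portal under the alias's shared access policies.
from azure.eventhub import EventHubProducerClient, EventData

ALIAS_CONN_STR = "<alias-primary-connection-string>"

producer = EventHubProducerClient.from_connection_string(
    ALIAS_CONN_STR,
    eventhub_name="my-event-hub",
)

with producer:
    batch = producer.create_batch()
    batch.add(EventData("sample event"))
    # Because the connection targets the alias, a failover to the
    # secondary namespace requires no change to this connection string.
    producer.send_batch(batch)
```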
The cloud landscape is evolving at a breakneck pace, and with it, the certifications that validate an IT professional’s skills. One such certification is the Microsoft Certified: DevOps Engineer Expert, which is validated through the AZ-400 exam. This exam has undergone significant changes to reflect the latest trends, tools, and methodologies in the DevOps world.… Read More »