DP-300 Microsoft Azure Database – Monitor activity and performance Part 2
3. 77. create event notifications for Azure resources
So now you’ve got this operational baseline. How can you be notified when things change? Well, we can create an alert. So let’s say we’ve got this data space used metric and I want to create an alert from it. I can go, in Metrics, to New alert rule, and that gets me into the alert page. Now, I should point out that if I go back to my database and into Alerts, which is also under Monitoring, I can click New alert rule there too. Because I’ve started from my current metric, I don’t really have to create a condition from scratch: it already says whenever the maximum data space used is greater than, and I just have to say, okay, it’s greater than something. If I hadn’t started from the metric, then I would have to start from scratch and choose a particular signal type.
So that’s metrics or activity log. The activity log, you can see, is useful when you want notification of things like restore points being created. For the metrics route, I can go onto the platform and say I want to have a look at data space used, and that gets me back to where I was. So we’ve got this metric, data space used, in this case. Okay, but when do I want to be notified? Well, I’ve got this threshold, and it can be static or dynamic. If it’s static, then it’s based on a single figure, so you give the figure that you want. For instance, the maximum could be greater than, well, how many bytes, kilobytes, megabytes, gigabytes or terabytes? It’s currently at 28.31 megabytes, so let’s say if it goes above 40 megabytes. For some reason I can’t actually click on megabytes, so I’ll just have to type the value in.
So you can see that is my threshold up here, and that is where I am currently. And you can see it in English: whenever the maximum data space used is greater than. I also need to select an aggregation granularity. So, how often is this maximum going to be checked? Or if it was an average, how often is it going to be averaged: every minute, every five minutes, every 15 minutes? In other words, how frequently are measurements grouped together? So let’s say we group them together every 30 minutes. And then, how frequently is the check going to run? Let’s check every hour, and you can see that doesn’t work.
If measurements are grouped every 30 minutes, then we need to check every 30 minutes or so as well. So that is an example of a static threshold, where I put the actual value in. Now let’s change that to dynamic. Dynamic lets the computer decide: the threshold is calculated using historical data and can take account of things like seasonality, because maybe you need more space during a particular sales period. You can select the operator: it’s either going to be higher than the normal range or lower than the normal range. So we’re not talking about a single figure, we’re talking about a range.
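Before we look at the dynamic settings in more detail, here is roughly what that static rule looks like if you script it with the Azure CLI rather than clicking through the portal. This is a sketch only: the resource IDs are placeholders, I’m assuming the underlying metric name is storage (shown in the portal as Data space used), and the 40 MB threshold is given in bytes.

```
# Sketch: static threshold alert - maximum data space used > 40 MB,
# aggregated over 30 minutes and evaluated every 30 minutes.
az monitor metrics alert create \
  --name data-space-used-static \
  --resource-group my-rg \
  --scopes "/subscriptions/<sub-id>/resourceGroups/my-rg/providers/Microsoft.Sql/servers/my-server/databases/my-db" \
  --condition "max storage > 41943040" \
  --window-size 30m \
  --evaluation-frequency 30m \
  --description "Whenever the maximum data space used is greater than 40 MB"
```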
We can change the aggregation, so average, max or min, and then we can change the threshold sensitivity. This determines how often you’re going to be emailed, because it controls how big the normal range is. The range is narrowed if the threshold sensitivity is high, so you will get more alerts; if it’s low, the range gets expanded, so you’ll get fewer alerts. Medium is the default. So if you’re getting too many alerts, change it from high to medium or from medium to low. Now, there are also advanced settings. You can say how many violations are needed to trigger the alert. So maybe you’ve got a metric for your DTU and it reaches 100%. Is that something you want to be notified of every single time? Or do you only want to be notified if it happens four times over 2 hours?
In other words, it happens in every 30-minute period for 2 hours. If you’re using the maximum, it just needs to hit that level at one point in each 30 minutes; if you’re using the average, the average over the 30 minutes needs to be at 100%. And if you’re talking about dynamic thresholds, then the average, min or max needs to be outside of the range in each 30-minute period, and that needs to happen every 30 minutes for 2 hours, because that makes four violations. I could also relax it and say, well, you’ve got four time periods, let me know if it happens in two of those time periods. And I can also say ignore data before a certain point. So there is my condition, and you can see it’s going to cost around 20 US cents per month.
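The dynamic settings we’ve just chosen, the sensitivity and the number of violations out of a number of evaluation periods, can be sketched in the same CLI condition grammar. Again this is an illustration with placeholder names, not the exact demo rule.

```
# Sketch: dynamic threshold, medium sensitivity, firing only if the maximum
# is outside the learned range in 2 of the last 4 evaluation periods.
az monitor metrics alert create \
  --name data-space-used-dynamic \
  --resource-group my-rg \
  --scopes "/subscriptions/<sub-id>/resourceGroups/my-rg/providers/Microsoft.Sql/servers/my-server/databases/my-db" \
  --condition "max storage > dynamic medium 2 of 4" \
  --window-size 30m \
  --evaluation-frequency 30m
```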
So, at around 20 cents a month, it’s not free, but it’s not too expensive either. It automates something away from you, and you get notified if something happens, so you don’t have to check every single day. Now, turning to the Actions settings: you can use an existing action group or you can click to create a new action group. So, a new action group: here we go, Create action group. I put in the action group name, so my action group, and a display name, which is limited to just twelve characters. And here is the main bit of this, the notifications. You can select email, SMS (that’s a text message), push and voice, and here you can see the configuration. I need to provide a unique name, so send me a text message, and you give a country code, so I’ll choose 34 and put in my phone number. Or I could put in an email address, metrics at something. Now, in addition to this,
you’ve also got the option to email an Azure Resource Manager role; I’ll just delete that, I don’t want it. And we’ve also got actions. This is where you can integrate the alert with other parts of Azure: you can select an Automation runbook, an Azure Function, an ITSM ticket, a Logic App, or a webhook, whether secure or not. So I will create this, and if you don’t know what any of those mean, don’t worry. It’s only if you’re using those bits of Azure that you might go, oh, I want to put this into my Logic App, or I want to have it as a webhook. And then finally you provide some alert rule details: a name, description, subscription, resource group and the severity.
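As a rough CLI equivalent of those portal steps, creating an action group with an email and an SMS receiver and wiring it up to the rule might look like the sketch below. The names, country code and phone number are just examples, and the rule shown is the same static one from earlier, now with the action group, severity and auto-resolve settings attached.

```
# Sketch: action group with email and SMS notifications.
# The short name is the 12-character display name mentioned above.
az monitor action-group create \
  --name my-action-group \
  --resource-group my-rg \
  --short-name myag \
  --action email send-me-an-email metrics@example.com \
  --action sms send-me-a-text 34 600111222

# Sketch: attach the action group to the alert rule, with a severity
# (0 = critical down to 4 = verbose) and automatic resolution enabled.
az monitor metrics alert create \
  --name data-space-used-static \
  --resource-group my-rg \
  --scopes "/subscriptions/<sub-id>/resourceGroups/my-rg/providers/Microsoft.Sql/servers/my-server/databases/my-db" \
  --condition "max storage > 41943040" \
  --action my-action-group \
  --severity 3 \
  --auto-mitigate true \
  --description "Data space used alert"
```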
The severity goes from critical all the way down to verbose, so from 0 to 4, with lower numbers meaning more critical. You also choose whether to enable the alert rule upon creation, and whether to automatically resolve alerts. So what does that mean? Well, here is an example of a demo alert. This is the actual metric, and based on the history, this wider light blue band is the range it could go in. If it goes outside, and here you can see it’s fractionally outside, then it becomes an active alert, and then it gets resolved.
The alert period is shown in a different color: while it is unresolved, the line changes from blue to a red dotted line and the background turns light red as well. So that’s what resolved and unresolved mean. Now, there are other places that you can create an alert rule from. For instance, if I’m looking at Logs, at a particular query, then I can also add a new alert rule from there. So you can create alert rules from Alerts, obviously, from Metrics, and also from Logs.
So when you do that, you need to specify the scope, that’s your target, what you want to monitor; the condition, so what specifically you are monitoring and when it should fire, and remember that you can use a dynamic as well as a static threshold; what to do, which is an action; and also some details about the alert rule. So this is how you can create event notifications for Azure resources.
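One final aside: we saw at the start that the signal type can also be the activity log rather than a metric, for notifications about things like restore points being created. As a hedged sketch, an activity log alert in the CLI could look like this; the names are placeholders, and the operationName value is just an illustrative administrative operation (you would read the exact operation to watch off the activity log entry itself).

```
# Sketch: fire the action group whenever a matching administrative
# operation is written to the activity log for this resource group.
az monitor activity-log alert create \
  --name db-admin-changes \
  --resource-group my-rg \
  --scope "/subscriptions/<sub-id>/resourceGroups/my-rg" \
  --condition "category=Administrative and operationName=Microsoft.Sql/servers/databases/write" \
  --action-group "/subscriptions/<sub-id>/resourceGroups/my-rg/providers/microsoft.insights/actionGroups/my-action-group"
```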
4. 39. determine sources for performance metrics
In this video we’re going to have a look at what sources there are for metrics, specifically performance metrics, but also going a bit wider as well. First of all, you’ve got the Azure tenant. There are some services which are tenant-wide, such as Azure Active Directory. Then you’ve got the subscriptions. There we’ve got, for instance, the Azure activity log, which includes service health records and records of configuration changes. And we’ve also got Azure Service Health, which has, as the name suggests, information about the health of your Azure services.
Then we have got your resources. Most Azure resources submit platform metrics to the metrics database, and resource logs are created internally regarding the internal operation of an Azure resource, a SQL database for instance. So the way that we have all of these metrics is that they are being generated and sent, and we’ve also got all of these logs; a diagnostic setting is what routes the logs somewhere you can actually query them, as sketched below.
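As a minimal sketch of that routing, assuming you already have a Log Analytics workspace, and using the SQLInsights log category and the Basic metric category (check which categories your own resource actually exposes):

```
# Sketch: send one resource log category plus the platform metrics of an
# Azure SQL database to a Log Analytics workspace. IDs are placeholders.
az monitor diagnostic-settings create \
  --name send-to-log-analytics \
  --resource "/subscriptions/<sub-id>/resourceGroups/my-rg/providers/Microsoft.Sql/servers/my-server/databases/my-db" \
  --workspace "/subscriptions/<sub-id>/resourceGroups/my-rg/providers/Microsoft.OperationalInsights/workspaces/my-workspace" \
  --logs '[{"category": "SQLInsights", "enabled": true}]' \
  --metrics '[{"category": "Basic", "enabled": true}]'
```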
Now, if you have got a guest operating system, whether in Azure, in another cloud or on-premises, then again there are sources there. You have the Azure Diagnostics extension for Azure virtual machines; when enabled, that submits logs and metrics. We’ve also got the Log Analytics agent, which can be installed on your Windows or Linux virtual machines, whether they’re running in Azure, in another cloud, or on-premises. And there’s also something called VM insights, which is in preview at the moment and provides additional Azure Monitor functionality for Windows and Linux VMs. Other sources: in application code, you can enable Application Insights to collect metrics and logs relating to the performance and operation of the app.
You could also have monitoring solutions and insights, which provide additional insight into a particular service or app. If you have got containers, then Container insights provides data about Azure Kubernetes Service, AKS. As I’ve already said, VM insights allows for customized monitoring of VMs, and inside VMs themselves you can also have a look at Windows Performance Monitor, also called Perfmon, where there are specific counters available for SQL Server. Now, in the previous video we had a look at the metrics. For a managed instance, the available metrics are: average CPU percentage in a selected time period, I/O bytes read or written, I/O request counts, storage space reserved and used, and the virtual core count, which can be anything from 4 up to 80 vCores.
Now, the metrics available for Azure SQL Database are: blocked by firewall; deadlocks; CPU percentage; data I/O percentage and log I/O percentage; data space used, data space allocated and data space used percentage; DTU limit, DTUs used and DTU percentage; failed connections and successful connections; and In-Memory OLTP storage percentage, which shows what is being stored for online transaction processing in memory, if you’ve got that enabled. We’ve also got sessions percentage and workers percentage.
So that’s the number of requests. We’ve also got SQL Server process core and memory percentage, and then tempdb data and log file size in kilobytes, and also the percentage of the tempdb log used. So those are your metrics for Azure SQL Database. For a managed instance, you’ve got a few different ones, like storage space reserved and used, and average CPU percentage over a particular time period, which we’ve also got for Azure SQL Database. And then there are other methods of getting performance metrics from virtual machines, including the Log Analytics agent and VM insights.
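If you want to see the exact metric names behind these display names, you can ask each resource for its metric definitions and then pull a specific metric. A sketch with placeholder resource IDs; cpu_percent is the underlying name of the CPU percentage metric for an Azure SQL database.

```
# Sketch: discover which platform metrics a database exposes...
az monitor metrics list-definitions \
  --resource "/subscriptions/<sub-id>/resourceGroups/my-rg/providers/Microsoft.Sql/servers/my-server/databases/my-db" \
  --output table

# ...then retrieve one of them, here average CPU percentage in 5-minute grains.
az monitor metrics list \
  --resource "/subscriptions/<sub-id>/resourceGroups/my-rg/providers/Microsoft.Sql/servers/my-server/databases/my-db" \
  --metric cpu_percent \
  --interval 5m \
  --aggregation Average \
  --output table
```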