DP-300 Microsoft Azure Database – Monitor activity and performance

July 17, 2023

1. Prepare an operational performance baseline – Metrics Explorer

Hello, and in this section we’re going to be monitoring activity and performance, and we’re going to start with preparing an operational performance baseline. One way we can do this is by using metrics. Metrics are numerical values that are collected at regular intervals; each one has a timestamp, a name and a value. They’re stored in a time-series database, which is suitable for alerting and fast detection of issues. It’s lightweight and allows for near-real-time alerting; by lightweight I mean it doesn’t use up a huge number of resources. So let’s have a look at metrics in Metrics Explorer. I can go down to the Monitoring section and have a look at Metrics.

So this is in a particular SQL database. Here I can select the scope; in this particular case I’m selecting a database, but you can select other things as well. You can select the metric namespace (for an Azure SQL database there’s only the one) and you can select the metric. I’m going to have a look at “Data space used”, but notice there are quite a number of other ones. Then I can select the aggregation: Min, Max or Avg; for other metrics it may be Sum or Count. In a couple of videos we’re going to have a look and see what other metrics are available. So here we can see the metric “Data space used”. At the moment it covers the last 24 hours.

We can change that, anywhere from the last 30 minutes all the way up to the last 30 days, and there’s also a custom period. We can also show the time in local time or in GMT/UTC (Coordinated Universal Time). So let’s say 30 days. Here I’ve got the last 30 days, and 30 days is the maximum window you can view at once. However, we can use these arrows to go backwards and forwards, and we can go up to 93 days into the past if we so choose. I’m going to change this back so that it’s the last 12 hours. Notice what’s happened: the start date and end date have now become custom.

Now, this time granularity: how often does it do the aggregation, the aggregation in this case being Max? Does it do it every five minutes, every 30 minutes, every 12 hours, every month? I’m just going to leave that on Automatic, but if you’ve got a particular reason for changing it, say you want to view a whole day but see the max per hour or per 6 hours, you can do that. What else can you do here? Well, you can change the color of the line. You don’t do that by clicking on the line itself, but by looking at the legend, where there is a small line next to the label. If I click on that, I can change the color of the line, and I can edit the chart title.

So here I’m just going to put in the word “database”. I can split or filter a metric if it has a dimension, although that’s not applicable to Azure SQL Database. We can add a second metric onto the same chart; for example, I’ll add not data space used but “Data space allocated”, so you can see this is the amount of allocation, and again, if I wanted to, I could change the color. I could change the chart type: at the moment it’s a standard line chart, but I could do an area chart, a bar chart, a scatter chart, or a grid if I so wished. There are a few more settings over here in the “…” menu: I can move the chart up or down if I’ve got more than one chart, and I can add a new chart here.

So maybe I’ll add a new chart, which is “Data space used percent”, and then I could say, no, I want to move this bottom one up, and I can do that. I can delete it, and I can clone it, which is handy if I want to make changes, especially on a temporary basis. I can also have a look at the chart settings, which give me a fair number of the things that we’ve previously looked at, but in a pane, where we can also change the Y axis. And I can share this by downloading it to Excel or copying it as a link. So hopefully you can see from that how you can use this to create an operational performance baseline. In the next video, we’ll have a look at some other ways we can do this.

2. Prepare an operational performance baseline – Other ways

Now, another way we can create an operational performance baseline is by using logs. Logs are events in the system which may contain non-numeric data, may be structured or freeform, and have a timestamp. For instance, I can monitor the data loading in the last hour. Now, this uses a different language called Kusto (the Kusto Query Language, or KQL). We’re not going to go into the details of it, but you can see it has some resemblance to other languages such as R, and with your SQL training you can recognize parts of it, such as where and summarize. So there are ways we can learn it, but we don’t need to for this particular course.
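As a rough sketch only: the query below assumes your database’s metrics have been routed to a Log Analytics workspace (so the AzureMetrics table exists), and the metric name shown is an assumption — yours may differ depending on the purchasing model. It bins average DTU consumption over the last hour:

```kusto
// Assumption: diagnostic settings route metrics to a Log Analytics workspace,
// which populates the AzureMetrics table.
AzureMetrics
| where ResourceProvider == "MICROSOFT.SQL"
| where MetricName == "dtu_consumption_percent"   // metric name is an assumption
| where TimeGenerated > ago(1h)
| summarize avg_dtu = avg(Average) by bin(TimeGenerated, 5m)
| order by TimeGenerated asc
```

Note the where and summarize operators, which are the parts a SQL background helps you recognize.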

Now, other areas which could affect SQL Server performance include the hardware and compute tier — you can see we’re on Basic, so that will change things — and, if you’ve got a virtual machine, the operating system; database applications and client applications can also affect SQL Server performance. We’ve seen that we can use Azure Monitor to have a look at these things, through metrics and logs. It could be that you need more CPU or input/output resources if you’ve got a high requirement, for instance a high DTU percentage. You can see here we’ve got DTU percentage (it might instead be a processor percentage if you’re using a different purchasing model) or a high I/O percentage. Alternatively, your queries may need to be optimized.

Now, you can also use T-SQL. We’ve already had a look in a previous video at sys.dm_db_resource_stats, which reports on CPU, I/O and memory. We can also use sys.resource_stats, without the dm_db prefix. The difference is that this one reports on multiple databases, so it reports essentially on the server: when I created a new database, it started to report on that too, and you can see the SKU there is Standard, whereas sys.dm_db_resource_stats just reports on the one particular database. We’ve also got sys.dm_user_db_resource_governance and sys.resource_usage as well. So let’s have a look at those in detail, starting with sys.dm_db_resource_stats.

It covers CPU, I/O and memory, and you get a row for every 15 seconds for about the past hour. So it’s not good if you want to go back days, but if you want something detailed — four rows every minute for the past hour — then that’s the one for you. sys.resource_stats gives you a similar thing, but for the entirety of a server. And we’ve got related DMVs for Azure SQL Managed Instance and for elastic pools: sys.server_resource_stats and sys.elastic_pool_resource_stats. Moving on to sys.dm_user_db_resource_governance: this shows the actual configuration and capacity settings in the current database or elastic pool. You’ll notice it returns a single row for the database, not a row every 15 seconds; if I show you it in T-SQL, it’s just returning the one row, with things like minimum and maximum CPU, memory and log rates, and the DTU and CPU limits. And then we’ve got sys.resource_usage. This provides an hourly summary of resource usage data for user databases in the current server. If I run that here, you can see we’ve got our DP-300 database, and if I scroll down, we’ve got the metadata database that we generated and also the standard database that we used. And you can see it’s every hour or so: a single row per hour while the database is in use.
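To make the differences concrete, here is a hedged T-SQL sketch of the four views discussed above. The column lists are trimmed, the database name in the filter is a hypothetical placeholder, and the granularity and retention figures in the comments are from the current documentation, so check them against your environment:

```sql
-- Current database only: one row per 15 seconds, covering roughly the past hour.
SELECT end_time, avg_cpu_percent, avg_data_io_percent, avg_memory_usage_percent
FROM sys.dm_db_resource_stats
ORDER BY end_time DESC;

-- Server-wide history (query from the master database): coarser granularity,
-- but retained for about two weeks.
SELECT database_name, sku, start_time, avg_cpu_percent, avg_data_io_percent
FROM sys.resource_stats
WHERE database_name = 'DP-300'   -- hypothetical database name
ORDER BY start_time DESC;

-- Actual configuration and capacity settings for the current database
-- or elastic pool: a single row, not a time series.
SELECT *
FROM sys.dm_user_db_resource_governance;

-- Hourly summary per user database on the server (preview view,
-- so columns may change; SELECT * avoids guessing at names).
SELECT *
FROM sys.resource_usage;
```

Note that sys.dm_db_resource_stats runs in the user database itself, whereas the server-wide views are typically queried from master.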

Even if the database is idle, as long as it’s accessible there will still be a single row, and its usage will be shown as zero. Do bear in mind that this view is actually in preview, and since it was introduced one of its columns, usage_in_seconds, has been discontinued and is no longer there, so the view now just shows you the storage in megabytes. So, to recap the ways to get operational performance baselines: you can use something like Metrics Explorer, creating your charts from there. You can also use sys.resource_stats, which lets you see your CPU percentage, data I/O percentage, that sort of thing, as well as what storage is being allocated. And you can see not just your current storage but your historic storage, going back around 90 days, using sys.resource_usage.
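Putting that together, one way a baseline could be sketched in T-SQL is to aggregate the server-wide history by day, so you can see typical versus peak load. This is only a sketch — the database name is a placeholder, and it assumes sys.resource_stats has accumulated some history:

```sql
-- Daily CPU and data I/O baseline from the server-wide history (run in master).
SELECT database_name,
       CONVERT(date, start_time)  AS [day],
       AVG(avg_cpu_percent)       AS avg_cpu_percent,
       MAX(avg_cpu_percent)       AS peak_cpu_percent,
       AVG(avg_data_io_percent)   AS avg_data_io_percent
FROM sys.resource_stats
WHERE database_name = 'DP-300'   -- hypothetical database name
GROUP BY database_name, CONVERT(date, start_time)
ORDER BY [day];
```

Comparing a day’s peak against its long-run average is the essence of a baseline: you only know a value is abnormal once you know what normal looks like.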

