DP-300 Microsoft Azure Database – Monitor activity and performance Part 3
5. 40. interpret performance metrics
In this video we’re going to say, okay, we’ve got all of these performance metrics, but what can we do with them? So what is the reason for getting all of these metrics? Well, it could be that you have got too much happening on your database for your particular specification. So if you’re on Basic you might need to upgrade to Standard, and so forth. Conversely, you might have more capacity than you need, in which case you might want to consider a downgrade.
So we’re going to have a look at some of these particular metrics, and I’m going to do that by looking at sys.dm_db_resource_stats. So here is an example of that. If any of the following seems to be on the high side, say close to 100%, then you might want to consider upgrading to the next service tier. So first of all, let’s have a look at the resources used. We have got things like CPU, memory and I/O for vCores.
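As a sketch, the resource statistics being discussed can be queried directly from the view (one row is recorded roughly every 15 seconds, and about an hour of history is retained):

```sql
-- Recent resource usage for the current Azure SQL database.
-- sys.dm_db_resource_stats records a row roughly every 15 seconds
-- and keeps about an hour of history.
SELECT end_time,
       avg_cpu_percent,
       avg_data_io_percent,
       avg_log_write_percent,
       avg_memory_usage_percent,
       xtp_storage_percent,
       max_worker_percent,
       max_session_percent,
       dtu_limit            -- NULL for databases using the vCore model
FROM   sys.dm_db_resource_stats
ORDER  BY end_time DESC;
```

Each of these columns corresponds to one of the percentages discussed in this video.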
Now this can be seen in something like the DTU percentage. If this is high you might want to upgrade; if this is low then you might save money by downgrading. Note that you won’t be able to use this DTU percentage if it’s a database which uses the vCore model. Additionally, we’ve also got things like the CPU percentage, which is avg_cpu_percent.
When this is high you might find that query latency increases and queries might time out, so maybe you need to increase the compute size, or maybe you need to optimize queries to reduce the CPU requirements. Now you’ve also got the data I/O percentage, that is avg_data_io_percent, and you’ve also got the log I/O percentage as well.
That is avg_log_write_percent. Now we’ve also got the in-memory OLTP storage percent, which is xtp_storage_percent. This returns zero if In-Memory OLTP — we’re talking memory-optimized tables, indexes and table variables — is not used. If you are using them and this hits 100%, then INSERT, UPDATE, ALTER and CREATE operations will fail; fortunately, SELECT and DELETE will be fine. Now, data space used percent: if this is getting high then you might want to consider upgrading to the next service tier, shrinking the database, or scaling out using sharding.
Now if it’s in an elastic pool then you might want to consider moving it out of the pool and giving it its own resources. So that is data space used percent. If we have a look at average memory usage percent (avg_memory_usage_percent), this is memory used for caching. So if you get out-of-memory errors then you might want to increase the service tier or compute size, or optimize the queries. Next we’ll look at the connections requested and used: we have got the sessions percentage and the workers percentage, that is max_session_percent and max_worker_percent.
Max_session_percent is the maximum concurrent sessions divided by the service tier limit, and max_worker_percent is the maximum concurrent requests divided by the service tier limit. If either gets towards 100, then you might want to increase the service tier or compute size, or optimize queries. So what queries might need to be optimized? Well, if we go into the Intelligent Performance section here and Query Performance Insight, what we can do is have a look at the top five queries by CPU, data I/O and log I/O. And if I scroll down to the bottom, you’ll see these are the top queries. So here if I click on this query, you can see what it actually is. This is some sort of system query.
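Before moving on to individual queries, the “close to 100%” check above can be sketched as a single aggregate over the retained history:

```sql
-- Peak utilization over the retained history (about one hour).
-- Any value approaching 100 suggests the next service tier,
-- a larger compute size, or query optimization may be needed.
SELECT MAX(avg_cpu_percent)       AS peak_cpu,
       MAX(avg_data_io_percent)   AS peak_data_io,
       MAX(avg_log_write_percent) AS peak_log_write,
       MAX(max_worker_percent)    AS peak_workers,
       MAX(max_session_percent)   AS peak_sessions
FROM   sys.dm_db_resource_stats;
```

Conversely, if all of these peaks are consistently low, that is the signal that a downgrade might save money.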
Going back, we can also have a look at long-running queries, so which ones run the longest. We can do a custom view, so we can have a metric for CPU, data I/O, log I/O, duration, or execution count. We can select any of those within a particular time period, we can set the number of queries and the aggregation, and we can also have some recommendations as well. Now, there may be icons showing performance recommendations; they weren’t actually on my version of the page, but you may see them in your version. So if there are individual queries which are using a huge number of resources, then this is a good place to find out where they are.
There are other places, and we’ll have a look at the Query Store in later videos. So this is how you can interpret performance metrics. We’ve got all of these various metrics such as DTU percentage, CPU percentage, data I/O percentage, log I/O percentage, in-memory OLTP storage percentage, average memory usage percentage, sessions percentage, and workers percentage. If any of those get towards 100%, then you might want to consider upgrading. If they are all consistently low, then you might have too high a specification and be able to save some money by downgrading. So this is how we can interpret performance metrics.
6. 41. assess database performance by using Intelligent Insights
In this video we’re going to assess database performance by using Intelligent Insights for Azure SQL Database and managed instance. What this does is compare the current database workload over the last hour with the baseline workload over the last seven days — for example, the most repeated and largest queries. It uses data from the Query Store, which we’ll be talking about in later videos, and which is enabled by default in Azure SQL Database; you need to enable it for Azure SQL Managed Instance. It monitors operational thresholds using artificial intelligence and detects issues with high wait times, critical exceptions and query parameterizations. The impacted metrics are things like increased query duration, excessive waiting, and timed-out or errored-out requests.
Just to note, it’s not available in some regions: it’s not available in West Europe, North Europe, West US 1 and East US 1. That doesn’t mean it’s not available in the US — there are other regions there, and similarly for Europe. It’s also not available for virtual machines. Now, I’m going to scroll down on the left-hand side to Intelligent Performance, where we can have a look at some of the Intelligent Insights by looking at Query Performance Insight, which we had a look at in the previous video. We can also, in diagnostic settings, add additional diagnostic settings that we might want to look at. So we might want to look for SQLInsights, automatic tuning (which we’ll have a look at in later videos) and timeouts. And we can send them to a Log Analytics workspace, we can archive them to a storage account, and we can stream them to an event hub.
So those are the three destinations that we had previously, and we can also send them to a partner solution. So this is my diagnostic setting, and I’m going to send the logs to the Log Analytics workspace that I created earlier, and save that. So, Intelligent Insights looks for things that could affect database performance, such as resourcing limits. If you reach your resource limits — like CPU reaching resource limits for managed instances, or DTUs, worker threads or login sessions reaching resource limits for Azure SQL Database — then you might have a performance recommendation based on that.
There is also workload increase; memory pressure, that is workers (requests) waiting for memory allocations; data locking; and whether you need to increase the maximum degree of parallelism (MAXDOP), if there are more parallel workers than there should be. There is also whether you’ve got missing indexes, new queries affecting performance, increased wait statistics, multiple threads using the same tempdb resource, whether you have a new plan or a change to an existing plan, or a downgrade of your pricing tier.
And for the DTU model, if you’re in an elastic pool, then there may be a shortage of available eDTUs. If you have any of those, then you may have a recommendation in Performance Recommendations. So Intelligent Insights uses artificial intelligence to compare what’s currently happening with the last seven days, looks at operational thresholds, and detects issues with higher wait times, critical exceptions and query parameterizations.
7. 42. configure and monitor activity and performance
In this video, we’re just going to wrap up this section by looking at how to configure and monitor activity and performance at the infrastructure, server, service and database levels. We’ve already had a really good look at how we can do it in SQL Database: we can have a metric, we can create an alert on the metric, we can have a look at logs and we can query the logs using a language called Kusto. We can have a look at performance overview and performance recommendations, and we can also have a look at Query Performance Insight. And we can also monitor using the dynamic management views that we had a look at earlier. But I just want to complete this section by mentioning a few other things that you can do.
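As an illustrative sketch of querying those logs with Kusto — assuming the Timeouts category has been routed to the workspace via diagnostic settings, and using the default AzureDiagnostics destination table — a query might look like:

```kusto
// Count SQL timeout events per hour from the routed diagnostic logs.
// The "Timeouts" category name assumes that log was enabled
// in the database's diagnostic settings.
AzureDiagnostics
| where Category == "Timeouts"
| summarize TimeoutCount = count() by bin(TimeGenerated, 1h)
| order by TimeGenerated desc
```

This is the kind of query you would run in the Logs blade of the Log Analytics workspace the diagnostics were sent to.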
So first of all, we’ve got extended events. Now, this is a lightweight monitoring system, so it uses very little in terms of performance and allows you to create, modify, display and analyze your session data. You can use it for trend analysis and ad hoc monitoring; there’s a graphical interface, and you can use it within your own custom applications. Next, SQL Server Profiler. This tracks process events such as the start of a batch or transaction, so it allows you to monitor server and database activity. And you can also save the data to a SQL Server table or file for later analysis.
And you can also replay the events step by step to see exactly what happened. You can do that from a single computer. If you are doing this from multiple computers, then something called Distributed Replay should be used instead. This allows you to use multiple computers to replay trace data, so it simulates a mission-critical workload. Next, System Monitor.
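The extended events sessions mentioned above are created with T-SQL. As a minimal sketch — the session name and file name here are hypothetical, and on Azure SQL Database you would use ON DATABASE rather than ON SERVER:

```sql
-- Create a lightweight extended events session capturing completed
-- SQL statements that took longer than one second.
-- Session name and target file name are illustrative.
CREATE EVENT SESSION [LongStatements] ON SERVER
ADD EVENT sqlserver.sql_statement_completed
    (WHERE duration > 1000000)    -- duration is measured in microseconds
ADD TARGET package0.event_file
    (SET filename = N'LongStatements.xel');

-- Start collecting.
ALTER EVENT SESSION [LongStatements] ON SERVER STATE = START;
```

Because the session only writes matching events to a file target, the overhead is small, which is what makes extended events suitable for ongoing monitoring.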
This primarily tracks resource usage, such as the number of buffer manager page requests in use, so it enables you to monitor server performance and activity using objects and counters. Now, System Monitor, which is also called Performance Monitor, collects counts and rates rather than data about the events. So the key difference between SQL Server Profiler and System Monitor is that the Profiler looks at database engine events, while System Monitor monitors resource usage. The Activity Monitor in SSMS is useful for ad hoc views of current activity, and graphically displays information about processes running on an instance of SQL Server. You can also look at blocked processes, locks and user activity.
We’ve also got some T-SQL for monitoring; we’ve had a fair look at T-SQL already, but after this we’ll just have a look at a few more things. We’ve got error logs: the Windows application event log provides a picture of events on your Windows operating system, so this is for VMs. And we’ve also got the performance dashboard in SSMS, which can help you to identify whether there’s any current performance bottleneck in SQL Server. So if you encounter any of these terms, then you know roughly what they’re for. Replaying captured events uses SQL Server Profiler with one computer and Distributed Replay with multiple computers.
For trend analysis, for instance, you can use extended events, SQL Server Profiler and System Monitor. So here are four commands in T-SQL that might be of interest. sp_who provides snapshot information about current SQL Server users and processes, including whether a process is blocked. sp_lock has a look at information about locks, including the object ID, the index ID, the type of lock, and so forth. sp_spaceused displays an estimate of the current amount of disk space used by a table or the entire database.
And then this last one, sp_monitor, doesn’t actually work in Azure SQL Database, but it can display statistics including CPU usage, I/O usage, and the amount of idle time since it was last executed. So if I go into my on-prem database and run it there, you can see the output. So in this video, we’ve had a look at some terminology that you might encounter when you configure and monitor activity and performance at the infrastructure, server, service and database levels.
And these include things like extended events, SQL Server Profiler and Distributed Replay, System Monitor (or Performance Monitor), Activity Monitor, T-SQL, error logs and the performance dashboard in SSMS. Now, you might want to know that none of these are asked for by name in the DP-300, so that’s why we’re not spending a number of videos looking at how to configure each one of them. But it’s just useful to know, if you come across that particular terminology, what it can do.
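The four system procedures mentioned above can be run like this (the table name in the commented line is hypothetical):

```sql
EXEC sp_who;        -- snapshot of current users and processes, including blocking
EXEC sp_lock;       -- current locks: object ID, index ID, lock type, and so forth
EXEC sp_spaceused;  -- estimated disk space used by the entire database
-- EXEC sp_spaceused 'dbo.SomeTable';  -- or by a single table
EXEC sp_monitor;    -- CPU, I/O and idle time since last run
                    -- (works on SQL Server / VMs, not Azure SQL Database)
```

Run these in SSMS against an on-premises or VM-hosted instance; on Azure SQL Database, sp_monitor is the one that will fail.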