DP-300 Microsoft Azure Database – Identify performance-related issues
1. Query Store – an introduction
Hello, and in this section we're going to be identifying performance-related issues. First of all, we're going to be using the Query Store to help us. What is the Query Store? Well, it contains three different stores: a plan store for execution plan data, a runtime stats store for execution statistics, and a wait stats store. Now, when would you use the Query Store? Well, you could fix queries which have regressed due to changes in the execution plan. Let's have a look at that. We'll go into a particular database. The Query Store is enabled by default for Azure SQL Database, so we don't have to turn it on, but we'll see how to do that in the next video. And we go down here to Query Store.
And if I expand it, you can see that we've got all of these different built-in views, and if I right-click, we've got the same things. So firstly, the regressed queries. Have your query speeds got worse? You can have a look at duration, CPU time, logical reads, physical reads and many more. So, for instance, you can click on a particular query and you can see what the query is here, and if I click on the button you get it exported into a different text window. You can also see when this particular query was run. So it's been run over several days, and the number of milliseconds that it's taken, which in this popup is shown as Total Duration (ms) right near the middle, varies quite a lot. So this is why the computer thinks this may have regressed.
Alternatively, it could be because there are other things happening at the same time. You can see that there is a plan here, and I can force the query to use a particular plan ID. So if there are multiple plan IDs here, I could click on one particular ID and say that any future runs of this query should use that particular plan. So I can click on each one of these in turn and see which are the most regressed queries. Then we've got overall resource consumption. So are the resources being used more on particular days, or between the day and the night? We've got a standard text grid, we've got a chart, and we can also configure this. So I've got the chart being shown by default for execution count, duration, CPU time and logical reads. I could add an extra one.
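Forcing a plan from the UI runs a stored procedure under the covers, so you can do the same thing directly in T-SQL. A minimal sketch (the query and plan IDs here are hypothetical examples):

```sql
-- Force a specific query to always use a specific captured plan
EXEC sp_query_store_force_plan @query_id = 276, @plan_id = 3;

-- Undo the forcing later if that plan stops being the best choice
EXEC sp_query_store_unforce_plan @query_id = 276, @plan_id = 3;
```

Forced plans are worth revisiting periodically, because a plan that was best for yesterday's data distribution may not stay best.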
So here we've got logical writes. Or I could say I don't want the last month, because that doesn't really show me much; I want just the last week, and I can really go into this. And then if I click on any of these, we can see the top resource consumers for this particular database. So what is being used the most? Next is top resource consuming queries. That's exactly what we've just had, except that this is for all time, or at least as far back as the Query Store goes, and not just for one particular day. So again, I can click on any of these, see what the query is, and go, okay, maybe we need to refine this particular query and use a particular plan.
Then we've got queries with forced plans. We don't have any here, but if there were queries with forced plans, we'd have a list of them. Next, queries with high variation. So this particular query may have had a varying duration, CPU time, IO or memory, and you can change what you're looking at again up here. By the way, this shows the variation as opposed to the standard deviation. If you don't want to see this as a graph, you can also see it as a table or in a grid. Next, we've got query wait statistics.
So what is it waiting for? Is it waiting for CPU? Is it waiting for memory? We can have a look at the total wait time, or the average, min or max. And then if I click on any one of those, you can see what the queries are waiting on, so you can get detail on any particular query. And then I might say, okay, there's a particular query I really want to track. And we've got a little symbol here: track the selected query in a new Tracked Queries window. So this is query 276.
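The same wait information the UI shows is exposed through the sys.query_store_wait_stats catalog view. A minimal sketch of pulling it out per plan and wait category:

```sql
-- Total and average wait time per plan, grouped by wait category,
-- with the biggest waiters first
SELECT ws.plan_id,
       ws.wait_category_desc,
       SUM(ws.total_query_wait_time_ms) AS total_wait_ms,
       AVG(ws.avg_query_wait_time_ms)   AS avg_wait_ms
FROM sys.query_store_wait_stats AS ws
GROUP BY ws.plan_id, ws.wait_category_desc
ORDER BY total_wait_ms DESC;
```

Note that this view reports wait categories (CPU, Memory, Lock and so on) rather than individual wait types.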
So if I was to close all of these down and go off to Tracked Queries, I could type in 276 and see the data relating to that. And then I could say, okay, that's the query that's causing the most problems; I will force the plan. Incidentally, we've got these little shapes here: a circle means the query completed, a square means it was canceled by the client, and a triangle means it failed because of an exception; it was aborted. And don't forget to have a look in here for any missing indexes in the query view. So this is how we can use the Query Store in terms of these built-in analyses. In the next video, we'll look at how we can configure the Query Store and how we can extract query plans from it.
2. Configure Query Store to collect performance data, extract query plans
Now, in this video we're going to look at how we can configure the Query Store to collect performance data, and how to extract the query plans. The Query Store is disabled by default on new SQL Server databases, on-premises or on virtual machines, but it's enabled by default for new Azure SQL databases. So if I right-click on my database, go to Properties and have a look on the left-hand side, we've got Query Store, and you can see, if I scroll down to the bottom, that Wait Statistics Capture Mode is set to On. So it is capturing these statistics. Additionally, if I look at the top, the operation mode can be Read Write, Read Only or Off. You've got to have Read Write if you want the Query Store to expand with new queries, and you've got to have the statistics being captured for this to be on.
We can also do this with ALTER DATABASE: ALTER DATABASE, the name of the database, SET QUERY_STORE = ON, and then in brackets, OPERATION_MODE = READ_WRITE or READ_ONLY. Either of those will get it on, but READ_WRITE is the usual one. Now, is it actually collecting runtime statistics? Well, you can find that out by typing SELECT * FROM sys.database_query_store_options. Here we can see, for instance, the desired state and the actual state.
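Written out as runnable T-SQL (substitute your own database name for the hypothetical one here), the enabling statement and the state check look like this:

```sql
-- Turn the Query Store on in read-write mode
ALTER DATABASE [MyDatabase]
SET QUERY_STORE = ON (OPERATION_MODE = READ_WRITE);

-- Compare the desired state with the actual state;
-- readonly_reason explains any mismatch
SELECT desired_state_desc,
       actual_state_desc,
       readonly_reason
FROM sys.database_query_store_options;
```

If desired_state_desc says READ_WRITE but actual_state_desc says READ_ONLY, the readonly_reason column tells you why the two differ.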
So if the actual state is Read Write, great. If it's Read Only, then it's not collecting the runtime statistics, but it is able to use them. Now, I should point out that when you first switch it on (if you have to switch it on; you don't have to for Azure SQL Database), it can take up to a day to collect sufficient data to represent your particular workload. Now, let's have a look at some of the options. How often is it going to be collecting statistics? That is the statistics collection interval, and we can see it over here in this bracketed section: INTERVAL_LENGTH_MINUTES. That can equal 1, 5, 10, 15, 30, 60 or 1440 minutes, the last one representing a day. So what does this mean? It means that for each of these time periods, a query will have a maximum of one row collected.
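The collection interval is set with the same ALTER DATABASE syntax; for example, to aggregate at most one statistics row per query per hour (the database name is a placeholder):

```sql
-- Valid values are 1, 5, 10, 15, 30, 60 or 1440 minutes
ALTER DATABASE [MyDatabase]
SET QUERY_STORE (INTERVAL_LENGTH_MINUTES = 60);
```

A shorter interval gives finer-grained history at the cost of more rows in the Query Store.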
Now, we can have a look at Max Storage Size in megabytes. So that is here. This is the space allocated to the Query Store. The default is 100 megabytes in SQL Server 2016 and 2017, and one gigabyte, or 1000 megabytes, in SQL Server 2019. If it reaches this limit, the Query Store will no longer collect new data and will therefore go into a Read Only state. That will reduce performance accuracy, because the Query Store will become stale. So in the query that we've just run, there is a readonly_reason column.
If that is equal to 65,536, which is two to the power of 16, then the Query Store has reached this maximum storage size. To prevent it from reaching the size, obviously increase it. And if you can't allocate extra space, then you might want to consider decreasing the data flush interval. The data flush interval is the number of seconds (or minutes, depending on which part of the dialog you're looking at) that data is retained in memory before being saved. Having a higher value means that you won't have the data saved as often, and that's fine if you haven't got a large number of queries being generated.
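Both settings go through the same ALTER DATABASE syntax; a sketch with illustrative values and a hypothetical database name:

```sql
-- Grow the Query Store to 1 GB and flush to disk every 15 minutes
ALTER DATABASE [MyDatabase]
SET QUERY_STORE (MAX_STORAGE_SIZE_MB = 1024,
                 DATA_FLUSH_INTERVAL_SECONDS = 900);

-- readonly_reason = 65536 means the max storage size was reached
SELECT actual_state_desc,
       readonly_reason,
       current_storage_size_mb,
       max_storage_size_mb
FROM sys.database_query_store_options;
```

Comparing current_storage_size_mb with max_storage_size_mb tells you how close the store is to going read-only.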
However, if SQL Server crashes or restarts and you haven't got all of those saved, then any unsaved data is lost. Having a lower value means it saves more often, and that could have a negative impact on performance. Next up is the Size Based Cleanup Mode. As you can see here, it's where the automatic data cleanup occurs when the size limit is approached. When it reaches about 90% of the maximum storage size, a cleanup can start. It will remove the oldest and least expensive query data, and stops when the size goes from 90% of the maximum size down to 80%. Now, if you need that to be checked more quickly, then have a look at the data flush interval again. So we've got the operation mode, which is Read Write or Read Only, and the Query Store Capture Mode, also known as Query Capture Mode, over here. This can be All, None, Custom or Auto.
Now, the default used to be All, but that means it was capturing infrequent queries and queries with small compile and execution times. So Auto is the new default in SQL Server 2019 and Azure SQL, which means it doesn't capture infrequent queries or queries with small execution or compile times. After all, what use would the Query Store be for such things? Then there's the maximum number of plans per query, Max Plans Per Query, which is fairly self-evident. So these are some of the things that you can configure, and you can purge the query data by pressing this button down here, or by going to ALTER DATABASE, the name of the database, SET QUERY_STORE CLEAR.
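These options, and the purge, can all be issued in T-SQL as well; a sketch with illustrative values (the database name is a placeholder):

```sql
-- Skip infrequent and cheap queries, cap the plans kept per query,
-- and let size-based cleanup run automatically
ALTER DATABASE [MyDatabase]
SET QUERY_STORE (QUERY_CAPTURE_MODE = AUTO,
                 MAX_PLANS_PER_QUERY = 200,
                 SIZE_BASED_CLEANUP_MODE = AUTO);

-- Purge everything the Query Store has collected so far
ALTER DATABASE [MyDatabase] SET QUERY_STORE CLEAR;
```

CLEAR removes all captured plans and statistics, so use it deliberately, for instance before benchmarking a fresh workload.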
Now, if you want to extract the query plans from the Query Store, then you can use sys.query_store_plan. Here we have a list of all of the Query Store plans, and you can see the engine version and compatibility level that they were used on, the query plan in XML format, and lots of other information. So if I was to just copy that into a new text window, you can see the sort of thing that is in there. There's your statement and so forth. If you want some runtime statistics, then you can get those using sys.query_store_runtime_stats.
So the sort of statistics that we've got: the last execution time, how many executions, the average duration, the last, min and max duration, the average CPU time, the last, min and max, and so forth. Now, another thing that could be useful is to combine sys.query_store_query with sys.query_store_query_text. If I combine the two, then we can see the text in here, with things like the query ID and various statistics like the last execution time, average durations and that sort of thing. Now, quite often I just want to see a particular plan for a particular query, say query 887. So I can use this and have a look at all of the XML for 887, but for me that's not as helpful as actually seeing it graphically. So there's 887, and I could have a look at this in XML format, but I want to see it in my more usual format.
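Joining the catalog views together gives the query text, the plan XML and the runtime statistics in one result set. A minimal sketch, with 887 as a hypothetical query ID:

```sql
-- Text, plan XML and runtime statistics for one captured query
SELECT q.query_id,
       qt.query_sql_text,
       p.plan_id,
       CAST(p.query_plan AS XML) AS plan_xml,
       rs.avg_duration,
       rs.count_executions
FROM sys.query_store_query AS q
JOIN sys.query_store_query_text AS qt
     ON q.query_text_id = qt.query_text_id
JOIN sys.query_store_plan AS p
     ON p.query_id = q.query_id
JOIN sys.query_store_runtime_stats AS rs
     ON rs.plan_id = p.plan_id
WHERE q.query_id = 887;  -- hypothetical example ID
```

The query_plan column is stored as text, so casting it to XML lets SSMS render it as a clickable graphical plan.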
So I go to Tracked Queries, double-click on that, enter 887, and I can look at it in a graphical interface there. So this is my cross join that I've often used in this particular course. So, we can configure the Query Store by right-clicking on the database, going to Properties and then, on the left-hand side, to Query Store, where we have the operation mode, the Wait Statistics Capture Mode and more. And then I can extract information about these plans using sys.query_store_plan, sys.query_store_runtime_stats, sys.query_store_query and sys.query_store_query_text.