DP-300 Microsoft Azure Database – Configure Azure SQL Database resource for scale and performance Part 2


3. 9, 53, 56. DTU-based purchasing model, Server vs Serverless

So if that is the vCore-based purchasing model, what is the DTU-based purchasing model? Well, DTUs are database transaction units: packages bundling a maximum amount of compute, memory and input/output (read and write) resources for each class. So whereas previously, with the vCore model, you could say that you were really more interested in the storage than in the compute, that separation doesn't exist here.

With the DTU model you have these packages, and you simply say: I want this package. So if I go to Basic, this is for less demanding workloads. You can see that my maximum data size goes from 0.1 GB to 2 GB, while the DTU figure is just five DTUs, nothing more, nothing less. And I'm not charged based on my maximum data size.
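If you want to check from T-SQL, rather than the portal, which edition, service objective and maximum data size a database is currently on, here is a minimal sketch (the database name is just a placeholder):

```sql
-- Sketch: inspect the current tier of an Azure SQL database.
-- 'MyDatabase' is a placeholder; substitute your own database name.
SELECT
    DATABASEPROPERTYEX('MyDatabase', 'Edition')          AS edition,            -- e.g. Basic, Standard, Premium
    DATABASEPROPERTYEX('MyDatabase', 'ServiceObjective') AS service_objective,  -- e.g. Basic, S0, S3, P1
    DATABASEPROPERTYEX('MyDatabase', 'MaxSizeInBytes')   AS max_size_in_bytes;  -- the maximum data size you chose
```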

You can see the price remains at about $7 per month, and that is actually quite a bargain if you just want to have your own database online, while still being able to use the rest of the Azure functionality and features as well. Now, going up to Standard, this is for more typical performance requirements, and here you say: I want this particular model.

And you notice that the models have a standard number of DTUs, but they also have a name next to it: S0, S1, S2. Now, there are some features which aren't available in the lower models. For instance, in the Basic tier that we've just seen, and in S0 and S1, database files are stored on Azure Standard Storage hard disk drives (HDDs), which perform much worse than SSDs, the solid-state drives.

So consider if you’re doing standard S Zero, S One or Basic, then this is for development testing and infrequently accessed workloads. Now, later on we’ll be using something called Change Data Capture or CDC. And this cannot be used where you’ve got less than the equivalent of one Vcore. And therefore you can’t use it in basic s zero, s one or even s two. You have to go up to the S Three, which is 100 DTUs before you can actually use it. Now you can see you can go all the way up to 3000, but consider changing to V cores. If you’re using say, more than about 300, consider it, it might reduce costs and there’s no downtime when you’re converting. So that’s the standard.

So that's typical performance. If you've got more input/output-intensive workloads, then you might want to have a look at the Premium model. The Premium model, as you can see, starts at about $650, whereas Standard started at $20, and the DTUs go all the way up to 4,000, at a cost of about $18,000-$19,000 US per month. Again, you can see there's no separate charge for the maximum data size; it is included in the DTUs, because the DTUs are a bundle of compute, memory and input/output (read and write) resources. So this page is a good calculator for how much such a thing will cost. And if you want to know how many DTUs you might need, there is the DTU Calculator website, dtucalculator.azurewebsites.net; for that, you run a few traces on your existing on-premises server, and it will give you a calculation, or recommendation.
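Once a database is already running in Azure, another way to judge whether the current tier fits, alongside the DTU Calculator, is the sys.dm_db_resource_stats DMV, which reports recent resource use as a percentage of the current service objective's limits. A minimal sketch:

```sql
-- Sketch: check how close this database is to the limits of its current tier.
-- sys.dm_db_resource_stats holds roughly the last hour of 15-second samples,
-- each expressed as a percentage of the current service objective's limits.
SELECT TOP (20)
    end_time,
    avg_cpu_percent,
    avg_data_io_percent,
    avg_log_write_percent
FROM sys.dm_db_resource_stats
ORDER BY end_time DESC;
-- Values sitting near 100% suggest the tier is too small;
-- values staying in single digits suggest you may be over-provisioned.
```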

So if you are happy with just the standard bundles, simple and pre-configured, then go for the DTU model. And, as I say, they go all the way down to Basic, and remember that General Purpose in the vCore model starts at around $450, so if you're below that on the DTU model, then you've got something cheaper, as long as it's actually usable for what you want. Now, you can change service tier on demand. If I go to my current Azure SQL server, which has a database on it, you can't see anything at the server level about the pricing tier; it's a per-database setting. On the database, you can see the pricing tier.

This is in the overview section. If I click on it, I can change this to any of the vCore or DTU options, except I won't necessarily be able to go to Hyperscale. It used to be that you couldn't move into Hyperscale from another tier; it looks like that has now been solved, but you still can't move out of Hyperscale. Now, just one word of warning: I wouldn't change this while you have a long job running. I would do it at some point when the database is less likely to be in use. And if you rely on DMVs, the dynamic management views that we had a look at earlier, having accurate figures, you may need to flush what's called the Query Store, which we'll be looking at in future sections, before you rescale.
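Putting those two steps together, here is a minimal sketch of flushing the Query Store and then rescaling a database from T-SQL; the database name and the target service objective are just examples, and you can equally make the change in the portal as described:

```sql
-- Run in the database you're about to rescale: persist the in-memory part
-- of the Query Store to disk so its figures aren't lost during the scale operation.
EXEC sp_query_store_flush_db;

-- Then change the service objective on demand, here up to S3 (100 DTUs).
-- This is an online operation, but best done when no long job is running.
ALTER DATABASE [MyDatabase] MODIFY (SERVICE_OBJECTIVE = 'S3');

-- The change completes asynchronously; the DATABASEPROPERTYEX check shown
-- earlier will tell you when the new service objective is in effect.
```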

To flush it, you issue the stored procedure sp_query_store_flush_db, as in the sketch above. So this is the range of options that you have got: General Purpose, Business Critical and Hyperscale in the vCore model, and then Basic, Standard and Premium in the DTU-based purchasing model. In terms of tempdb size: for the Basic service level you've got one tempdb file of 13.9 GB, and that's the same at the start of Standard, S1 and S2, while you've got less than the equivalent of one vCore. When we get to S3, we start having 32 GB file sizes.

And once you progress all the way through to S12, we have twelve tempdb files at about 384 GB in total, that is, 32 GB times twelve. So you can see the number of tempdb files is roughly half the service-level number: S6 has three, S4 has two, until you get to S7, where it really jumps. Now, we'll be looking at pools later; there is a similar sort of relationship between the DTUs you can have in a pool, which are called eDTUs, and the number of tempdb files. In each case, the maximum data size per file is 32 GB. And then finally, in the Premium model, we have a 13.9 GB maximum size for the tempdb files, and we have twelve of them, which gives a maximum tempdb data size of 166.7 GB.

4. 9, 53, 56. Serverless/provisioned and elastic pools

Now, if we go back into Compute and Storage, you can see currently we are at Basic, and our Basic configuration offers very little; we can't really change anything. As we go up through the DTU-based purchasing model, we can change more and more things. Now, when we get to General Purpose, and only General Purpose, we can actually change the compute tier from Provisioned to Serverless. The advantage of Serverless is that you're billed by the second. After, say, one hour of inactivity, the billing stops and the ability to use the database stops; it restarts when there is any database activity, possibly with a small delay, not much, to be honest, but there may be a little bit. You can set the auto-pause delay from one hour all the way up to seven days, but no less than one hour. Now, if you're using Provisioned, then you do have the option of saving money by using Azure Hybrid Benefit.
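As a rough sketch, and assuming a Gen5-based serverless objective is offered in your region, switching a General Purpose database from Provisioned to Serverless can also be done in T-SQL; the objective name used here (GP_S_Gen5_1, meaning serverless on Gen5 hardware with a one-vCore maximum) should be checked against what the portal actually offers for your server:

```sql
-- Sketch: move a General Purpose database to the Serverless compute tier.
-- 'MyDatabase' is a placeholder; 'GP_S_Gen5_1' = General Purpose, Serverless,
-- Gen5 hardware, 1 vCore maximum. Verify the objective names available to you.
ALTER DATABASE [MyDatabase] MODIFY (SERVICE_OBJECTIVE = 'GP_S_Gen5_1');

-- The auto-pause delay itself (one hour up to seven days, or disabled) is set
-- through the portal, Azure CLI or PowerShell rather than through T-SQL.
```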

So if you've already got a SQL Server license for on-premises, then you can use that license, though you can't use it everywhere. This example is in the Central US region, and you can see it says savings of up to 55%, but here we're only saving about 35%; and note what it says at the bottom: your actual savings may vary. However, it may well reduce the cost. So, Provisioned or Serverless in General Purpose: the choice is yours. When you get to Business Critical you can't do that, and the same goes for Hyperscale. Serverless only really makes sense for General Purpose, because if it's business critical, you need to be able to have access to it all the time.

Now, the next question is: do you want to use a SQL elastic pool? And the question is, well, what is that? Let's suppose that you have more than one database. Here we have an example of a database, and here you can see the compute requirements for this particular database: it peaks at 12:00 to 1:00 and then peaks again at 4:00 to 5:00. So we need to have sufficient DTUs, or sufficient vCores, et cetera, so that we can meet these compute requirements.

These figures are completely made up, by the way. Now, suppose we have a second database with exactly the same requirements, except the timing is different: it peaks between 1:00 and 2:00, and we still have this peak from 4:00 till 5:00. Let's add a third database. Again, the timings differ: it peaks between 2:00 and 3:00 and between 5:00 and 6:00. And then a fourth database.

As you can see, it again peaks between 3:00 and 4:00 and between 5:00 and 6:00. Now, notice that none of these compute requirements goes above this figure of 20, which, as I say, is a purely fictitious figure. So what we could do is have four databases, each with a maximum compute allocation of 20, but that would not be that good. When we look at an individual database, you can see we are not using much compute at all in this time period, or in this one. If each database had this limit of 20, which we would need in order to accommodate the peaks, we'd be wasting a lot of money. So here are these compute requirements again.

Here's database one, and let's now put database two on top of it: you can see we peak at near 40, and the total peak across all of them is around 52 or 53. So we were previously talking about four databases, each with an allocation of 20. Here we've got four databases with a total peak of 52. If we commissioned each database separately, we'd have to provision a peak of 20 times 4, which is 80. Here we can provision a peak of around 53 or so, and that will be fine for all of the databases. That is what an elastic pool is: it is a pool of resources, and we can create a new pool.

So this is my elastic pool here, and we can configure this elastic pool with whatever purchasing model we want, but we can't choose Hyperscale. You'll notice that when we get to the DTU-based purchasing model, there's a small difference in terminology: it's not DTUs, it's eDTUs, the 'e' standing for elastic. So I could have, say, 500 eDTUs, and allow each individual database up to 75 DTUs; or I could choose not to cap the individual databases and allow every database access to the full 500 eDTUs as necessary. So that is what an elastic pool is all about: it's the ability to provision better for all of these bumps in demand, as long as the bumps happen at different times.
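Creating the pool itself is done through the portal (or the Azure CLI / PowerShell), but once a pool exists you can move an existing database into it from T-SQL. A sketch, where both names are placeholders:

```sql
-- Sketch: move an existing database into an existing elastic pool.
-- 'MyDatabase' and 'MyPool' are placeholder names; the per-database
-- minimum and maximum eDTUs (or vCores) are configured on the pool itself.
ALTER DATABASE [MyDatabase] MODIFY (SERVICE_OBJECTIVE = ELASTIC_POOL(name = MyPool));
```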

If they all happen at the same time, well, I would have four lots of 20 all at once, and I wouldn't actually save money by having an elastic pool. In fact, it might well cost me money: not at the vCore level, where the unit price is the same, but in the DTU-based purchasing model the unit price for eDTUs in a pool is an extra 50%. So in this video we've had a look at Provisioned and Serverless, and we've had a look at elastic pools. In the next video, we're going to look at some of the other things we need to consider while provisioning an Azure SQL database.

