Amazon AWS Certified SysOps Administrator Associate – Databases for SysOps Part 6


11. RDS Performance Insights

So let’s talk about RDS Performance Insights, and that’s the last one around monitoring for RDS, but I think you need to know it and it is quite a cool tool overall. So Performance Insights allows you to visualize your database performance and analyze why there are issues affecting your database. You can visualize the database load and you can filter that load by four different dimensions. The number one is called Waits. And Waits sounds a bit cryptic, but basically it shows you the resource that is the bottleneck. It could be the CPU, it could be I/O, it could be some locks.

So if you needed to know how to upgrade your database to a different type of instance, and whether you wanted to optimize the CPU or optimize the I/O, waits will give you a good idea of what your database waits on the most, whether it’s the CPU, the I/O, et cetera. Now, you can also filter by SQL statements. So if an SQL statement is, for example, blocking your database or making it crawl for whatever reason, then you can identify that SQL statement and maybe reach out to the team or the application that runs it and try to understand how you can optimize that statement.

You can also filter by host. So basically, by filtering and grouping by host, you can find the server or the application server that is maybe hammering our database and take action, maybe blocking access, or maybe talking to them and understanding why they’re using so much of our database, maybe they need a Read Replica. And then finally, by users.

So this is to look at the connections by username and to find maybe the users that are using our database the most. So the idea is that Performance Insights allows us to understand who, or from where, or with what statements our waits are being consumed, whether that’s CPU, I/O, locks, et cetera. Now, database load is evaluated as the number of active sessions for the database engine. And the whole thing basically allows you to troubleshoot, including identifying the SQL queries that are putting load on your database. Now, our database is not interesting because it’s not running anything special, there is no application.
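
If you prefer the API over the console, the Performance Insights data is also exposed through the pi service in the AWS SDK. Here is a minimal sketch with boto3 that pulls the average database load (active sessions) for the last hour, sliced by wait event; the "db-ABCDEFGHIJKL" identifier is a placeholder you would replace with your instance’s DbiResourceId.

```python
# Minimal sketch: query Performance Insights database load, sliced by wait event.
# Assumes AWS credentials are configured and Performance Insights is enabled on
# the instance; "db-ABCDEFGHIJKL" is a placeholder DbiResourceId.
from datetime import datetime, timedelta, timezone

import boto3

pi = boto3.client("pi", region_name="us-east-1")

end = datetime.now(timezone.utc)
start = end - timedelta(hours=1)

response = pi.get_resource_metrics(
    ServiceType="RDS",
    Identifier="db-ABCDEFGHIJKL",               # the instance's DbiResourceId (placeholder)
    StartTime=start,
    EndTime=end,
    PeriodInSeconds=60,
    MetricQueries=[{
        "Metric": "db.load.avg",                 # average active sessions, i.e. DB load
        "GroupBy": {"Group": "db.wait_event"},   # slice the load by wait event
    }],
)

# Each MetricList entry is one wait event (CPU, IO, lock, ...) with its data points.
for metric in response["MetricList"]:
    print(metric["Key"], len(metric["DataPoints"]), "data points")
```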

So I was just going to show you screenshots of what you can expect. So this is Performance Insights, and this is from the Amazon blog. And as you can see, you get a line right here and it shows you the max vCPU. So as long as you’re under it, that means that you’re doing fine. But if you’re over it, that means that your database is running at capacity. And if you slice that graph by waits on the right-hand side, it shows you what’s using your database. So CPU is only at 0.32 and the I/O wait (io/table/sql/handler) is at 1.82. So it seems like I/O is going to be something that you may want to optimize or improve, maybe using a better disk or whatever.

But this is the kind of idea: slicing by waits gives you this list of everything that’s happening so you can understand what’s blocking your database. Now, you can also analyze the SQL queries that are running. So as you can see, this one at 11.38 is an SQL query that is taking a long time, maybe an update on schema3.table1 that sets s1 to an MD5 of a random value. Basically, that does a lot of work and takes a lot of resources. And maybe there’s a team running that SQL statement; you should go and talk to them and understand why they’re doing it and if they can do anything better. So this is quite a cool way of troubleshooting which SQL queries are taking the most time. And then you can also view by users. So you can see on the right-hand side that the RDS user has four connections and jeremiah has two. Again, that’s from the blog.
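
That same slicing by SQL statement, user, or host can be reproduced programmatically: the describe_dimension_keys call in the Performance Insights API ranks the top dimension values by how much load they contribute. A rough sketch, again with a placeholder resource ID:

```python
# Rough sketch: list the top SQL statements contributing to DB load over the last hour.
# "db-ABCDEFGHIJKL" is a placeholder DbiResourceId; Performance Insights must be enabled.
from datetime import datetime, timedelta, timezone

import boto3

pi = boto3.client("pi", region_name="us-east-1")
end = datetime.now(timezone.utc)

top_sql = pi.describe_dimension_keys(
    ServiceType="RDS",
    Identifier="db-ABCDEFGHIJKL",
    StartTime=end - timedelta(hours=1),
    EndTime=end,
    Metric="db.load.avg",
    GroupBy={"Group": "db.sql", "Limit": 10},  # swap in db.user or db.host to slice differently
)

# Each key is one SQL statement with the load (active sessions) it contributed.
for key in top_sql["Keys"]:
    print(round(key["Total"], 2), key["Dimensions"].get("db.sql.statement"))
```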

But you get a lot more information around who is doing what on your database, which could be quite helpful as well. Maybe some application is opening a thousand connections and you need to know about it right away. And for by host, I don’t have a graph for it, but it will basically tell you which application server is connected to your database. So overall, all these things really help; I think Performance Insights is a great tool to help you troubleshoot. Let me just show you where it is in the AWS console. To use Performance Insights, you would go to the left-hand side and you would need to actually enable it. So to enable it, you would have to go to your database and modify it. So you click on your database instance and click on Modify.
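
If you prefer to script this rather than clicking through the console, the same change can be made with the ModifyDBInstance API. A hedged sketch with boto3, assuming an instance class that supports Performance Insights and a placeholder identifier "my-database":

```python
# Sketch: enable Performance Insights on an existing RDS instance.
# "my-database" is a placeholder; the instance class must support Performance Insights.
import boto3

rds = boto3.client("rds", region_name="us-east-1")

rds.modify_db_instance(
    DBInstanceIdentifier="my-database",
    EnablePerformanceInsights=True,
    PerformanceInsightsRetentionPeriod=7,  # days of history to keep (7 is the free tier)
    ApplyImmediately=True,                 # apply now rather than at the next maintenance window
)
```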

But it turns out that because we’re using a t2 type of instance, Performance Insights is not supported right now on db.t2 instance classes, unfortunately. So we can’t have it, and this is why I showed you screenshots of it, but from the screenshots you should get a great idea of how it works. If you had a non-t2.micro type of instance, then you should be able to modify your DB instance and enable Performance Insights. And as you can see, if I click on it and maybe choose m4.large and scroll all the way down, then I start seeing Performance Insights and I can enable it and say how many days of data retention I want, et cetera, but I won’t do it right now. But you get the idea, and now you’re supposed to be a monitoring expert for RDS. So congratulations, and I will see you in the next lecture.

12. [SAA/DVA] Aurora Overview

So let’s talk about Amazon Aurora, because the exam is starting to ask you a lot of questions about it. Now, you don’t need deep knowledge of it, but you need enough of a high-level overview to understand exactly how it works. So this is what I’m going to give you in this lecture. Aurora is a proprietary technology from AWS. It’s not open source, but they made it compatible with Postgres and MySQL. And basically, your Aurora database will have compatible drivers. That means that if you connect as if you were connecting to a Postgres or a MySQL database, then it will work.
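
Because the drivers are compatible, connecting to an Aurora MySQL cluster looks exactly like connecting to any MySQL server. A minimal sketch using the PyMySQL driver, with a placeholder cluster endpoint and placeholder credentials:

```python
# Minimal sketch: connect to Aurora MySQL with a regular MySQL driver (PyMySQL).
# The endpoint, user, password, and database name are placeholders for illustration.
import pymysql

conn = pymysql.connect(
    host="mycluster.cluster-abc123.us-east-1.rds.amazonaws.com",  # Aurora cluster (writer) endpoint
    port=3306,
    user="admin",
    password="my-secret-password",
    database="mydb",
)

with conn.cursor() as cur:
    cur.execute("SELECT VERSION()")  # Aurora answers with a MySQL-compatible version string
    print(cur.fetchone())

conn.close()
```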

Aurora is very special and I won’t go too deep into the internals, but they made it cloud optimized, and by doing a lot of optimization and smart stuff, basically they get a 5x performance improvement over MySQL on RDS, or 3x the performance of Postgres on RDS. Not just that, but they get these performance improvements in many different ways; it’s really, really smart, but I won’t go into the details of it. Now, Aurora storage automatically grows, and I think this is one of the main features that is quite awesome. You start at 10 GB, but as you put more data into your database, it grows automatically up to 64 TB. Again, this has to do with how they designed it. But the awesome thing is that now, as a DBA or a SysOps, you don’t need to worry about monitoring your disk. You just know it will grow automatically with time.

Also, for the Read Replicas, you can have up to 15 Replicas, while MySQL only has five. And the replication, the way they made it, is much faster. So overall, it’s a win. Now, if you do a failover in Aurora, it’s going to be near-instantaneous, so it’s going to be much faster than a failover from Multi-AZ on MySQL RDS. And because it’s cloud native, by default you get high availability. Now, although the cost is a little bit more than RDS, about 20% more, it is so much more efficient that at scale it makes a lot more sense for savings. So let’s talk about the aspects that are super important, which are high availability and read scaling. Aurora is special because it’s going to store six copies of your data, across three AZs, anytime you write anything. And Aurora is made so that it’s highly available.

It only needs four copies out of six for writes. So that means that if one AZ is down, you’re fine for writes, and it only needs three copies out of six for reads. So again, that means that it’s highly available for reads. There is also some kind of self-healing process that happens, which is quite cool: if some data is corrupted or bad, then it does self-healing with peer-to-peer replication in the back end. And you don’t rely on just one volume, you rely on hundreds of volumes. Again, not something for you to manage, it happens in the back end, but that means that you’ve reduced the risk by a lot.

So if you look at it from a diagram perspective, you have three AZs and you have a shared storage volume, but it’s a logical volume, and it does replication, self-healing, and auto-expanding, which is a lot of features. So if you were to write some data, maybe blue data, you’ll see six copies of it in three different AZs. Then if you write some orange data, again, six copies of it in different AZs. And as you write more and more data, it’s basically going to go the same way, six copies of it in three different AZs. The cool thing is that it goes on different volumes, it’s striped, and it works really, really well. Now, you need to know about the storage, and that’s it. But you don’t actually interface with the storage.

It’s just a design that Amazon made, and I want to give it to you as well so you understand what Aurora does. Now, Aurora is like Multi-AZ for RDS in the sense that, basically, there’s only one instance that takes writes. So there is a master in Aurora, and that’s what takes the writes. And then if the master doesn’t work, the failover will happen in less than 30 seconds on average. So it’s a really, really quick failover. And on top of the master, you can have up to 15 Read Replicas, all serving reads. So you can have a lot of them, and this is how you scale your read workload. And any of these Read Replicas can become the master in case the master fails.

So it’s quite different from how RDS works, but by default, you only have one master. The cool thing about these Read Replicas is that they support cross-region replication. So if you look at Aurora on the right-hand side of the diagram, this is what you should remember: one master, multiple Read Replicas, and the storage is replicated, self-healing, and auto-expanding, little block by little block. Now, let’s have a look at how Aurora works as a cluster. This is more around how Aurora works when you have clients: how do you interface with all these instances? So, as we said, we have a shared storage volume and it’s auto-expanding from 10 GB to 64 TB.

Really cool feature. Your master is the only thing that will write to your storage. And because the master can change and fail over, what Aurora provides you is what’s called a Writer Endpoint. So it’s a DNS name, a Writer Endpoint, and it’s always pointing to the master. So even if the master fails over, your client still talks to the Writer Endpoint and is automatically redirected to the right instance. Now, as I said before, you also have a lot of Read Replicas. What I didn’t say is that you can have auto scaling on top of these Read Replicas. So you can have from one up to 15 Read Replicas, and you can set up auto scaling such that you always have the right number of Read Replicas. Now, because you have auto scaling, it can be really, really hard for your applications to keep track of where your Read Replicas are.

What’s the URL? How do I connect to them? So for this, and you absolutely have to remember this for the exam, there is something called a Reader Endpoint. And a Reader Endpoint has the same kind of feature as a Writer Endpoint: it helps with connection load balancing, and it connects automatically to all the Read Replicas. So anytime a client connects to the Reader Endpoint, it will get connected to one of the Read Replicas, and load balancing is done this way. Just notice that the load balancing happens at the connection level, not the statement level. So this is how it works for Aurora. Remember the Writer Endpoint and the Reader Endpoint.
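
You never have to hard-code the instance addresses yourself: both endpoints are attributes of the cluster, and you can read them back from the API. A small sketch, assuming a placeholder cluster named "mycluster":

```python
# Sketch: look up the Writer and Reader endpoints of an Aurora cluster.
# "mycluster" is a placeholder cluster identifier.
import boto3

rds = boto3.client("rds", region_name="us-east-1")

cluster = rds.describe_db_clusters(DBClusterIdentifier="mycluster")["DBClusters"][0]

print("Writer endpoint:", cluster["Endpoint"])        # always points at the current master
print("Reader endpoint:", cluster["ReaderEndpoint"])  # load-balances connections across the replicas
```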

Remember auto scaling, remember the shared storage volume that auto-expands, and remember this diagram, because once you get it, you’ll understand how Aurora works. Now, if we go deeper into the features, you get a lot of things I already told you: automatic failover, backup and recovery, isolation and security, industry compliance, push-button scaling through auto scaling, and automated patching with zero downtime. So it’s kind of cool, the magic they do in the back end. Advanced monitoring, routine maintenance, all these things are handled for you. And you also get this feature called Backtrack, which gives you the ability to restore data at any point in time. It actually doesn’t rely on backups, it relies on something different. But you can always say, I want to go back to yesterday at 4:00 p.m., and then say, oh no, actually I wanted yesterday at 5:00 p.m., and it will work as well, which is super, super neat.
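
Backtrack, by the way, is a single API call on the cluster (it’s an Aurora MySQL feature, and the backtrack window has to be enabled when the cluster is created). A hedged sketch with boto3, using a placeholder cluster name and an example timestamp:

```python
# Sketch: backtrack an Aurora MySQL cluster to an earlier point in time.
# "mycluster" is a placeholder; Backtrack must have been enabled on the cluster.
from datetime import datetime, timezone

import boto3

rds = boto3.client("rds", region_name="us-east-1")

rds.backtrack_db_cluster(
    DBClusterIdentifier="mycluster",
    BacktrackTo=datetime(2023, 5, 25, 16, 0, tzinfo=timezone.utc),  # e.g. yesterday at 4:00 p.m. UTC
    UseEarliestTimeOnPointInTimeUnavailable=True,                   # fall back to the earliest point available
)
```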

For security, it is similar to RDS because it uses the same engines, Postgres and MySQL. So we get encryption at rest using KMS. Automated backups, snapshots, and Replicas are also encrypted. We have encryption in flight using SSL, and this is the exact same process we have for MySQL and Postgres if you want to enforce it. And we also have authentication using IAM tokens, which is the exact same method we have seen for RDS, thanks to the integration with MySQL and Postgres on RDS. You are still responsible for protecting the instance with security groups, and you cannot SSH into your instance. So Aurora security, all in all, is the exact same as RDS security. That’s it for Aurora, and I will see you in the next lecture.
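
For the IAM authentication piece, the SDK can generate the short-lived token that you then pass in place of the password over SSL. A minimal sketch, with a placeholder endpoint and username, and assuming the database user was created with IAM authentication enabled:

```python
# Sketch: authenticate to Aurora MySQL with an IAM token instead of a password.
# Endpoint and username are placeholders; the DB user must be set up for IAM auth,
# and the connection must use SSL.
import boto3
import pymysql

HOST = "mycluster.cluster-abc123.us-east-1.rds.amazonaws.com"  # placeholder endpoint
USER = "iam_db_user"                                           # DB user created with IAM auth enabled

rds = boto3.client("rds", region_name="us-east-1")

# Short-lived (15 minute) token signed with your IAM credentials; used in place of a password.
token = rds.generate_db_auth_token(
    DBHostname=HOST, Port=3306, DBUsername=USER, Region="us-east-1"
)

conn = pymysql.connect(
    host=HOST,
    port=3306,
    user=USER,
    password=token,
    database="mydb",
    ssl={"ca": "rds-combined-ca-bundle.pem"},  # RDS CA bundle; IAM auth requires SSL
)
```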
