Amazon AWS Certified SysOps Administrator Associate – Monitoring, Auditing and Performance
1. CloudWatch Metrics
Okay, so we’ve seen CloudWatch throughout the course, but let’s just do a quick summary on it. So first, CloudWatch Metrics. CloudWatch provides metrics for every service in AWS, and you need to understand what each metric means. Usually the name of the metric gives you a pretty good indication of what it measures, for example CPUUtilization or the networking metrics. Then, based on how the metric is behaving, you get an idea of how the service is behaving, and you can do some troubleshooting based on this.
So metrics belong to namespaces, and a dimension is an attribute of a metric, for example instance ID, environment, etc. You can choose up to ten dimensions per metric. Metrics have timestamps, and you can create CloudWatch Dashboards of metrics. In this course we’ve seen EC2 metrics, for example, and we’ve also seen EC2 detailed monitoring. So we know that by default, EC2 instances have metrics every five minutes, but if you enable detailed monitoring (for an additional cost), you’re going to get metric data every 1 minute.
And if you enable this, then for example you’re going to be able to react faster to changing metrics for your EC2 instances, and it gives you some benefits for your ASG if you want to scale out and in faster. Note that the AWS Free Tier allows you to have ten detailed monitoring metrics. The other thing to know is that EC2 memory usage (your RAM) is not pushed by default and needs to be pushed from within the instance as a custom metric, and we’ll have a look at how to push custom metrics very soon. So when you are in the CloudWatch console, on the left-hand side there is Metrics, and you can find all the metrics there. As you can see, we see all the namespaces in here for our metrics.
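If you wanted to enable detailed monitoring from the command line rather than the console, a minimal sketch with the AWS CLI would look like this (the instance ID below is a placeholder):

    # Enable detailed (1-minute) monitoring on an existing instance
    aws ec2 monitor-instances --instance-ids i-1234567890abcdef0

    # Revert to basic (5-minute) monitoring
    aws ec2 unmonitor-instances --instance-ids i-1234567890abcdef0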
So if you have a look, the namespaces are organized by service, for example ELB, Auto Scaling, EBS, EC2, EFS, and so on. So a lot of information is given to you here. We can click on EC2 and look at the per-instance metrics to see a single metric. I’m going to type “credit” to see the CPUCreditBalance metric. For example, I will take this instance, which was created a long time ago, and then I’m going to choose a custom range of one month to find some data in here. Okay, so we have the data here, and the cool thing with CloudWatch metrics is that you can just click and select the time span you want.
And here we go, we’re getting some information around our metric. As you can see, we get a data point every five minutes here, because detailed monitoring was not enabled for this instance. Okay, but if I did enable detailed monitoring, I would get data every 1 minute. So this is just the basics of CloudWatch metrics, nothing too fancy, but we can definitely filter by time, we can view the metric as a line, a stacked area, a number, or a pie chart, we can add it to a dashboard, download it to CSV, or share it.
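The same data points can also be retrieved from the CLI with get-metric-statistics. Here is a rough sketch, assuming a placeholder instance ID and a one-day date range, with a 300-second period to match the 5-minute basic monitoring interval:

    aws cloudwatch get-metric-statistics \
      --namespace AWS/EC2 \
      --metric-name CPUCreditBalance \
      --dimensions Name=InstanceId,Value=i-1234567890abcdef0 \
      --start-time 2023-05-01T00:00:00Z \
      --end-time 2023-05-02T00:00:00Z \
      --period 300 \
      --statistics Average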
Okay, so CloudWatch metrics are very, very handy, and you can have a look at all the metrics based on the region you want, the dimension you want, the resource ID you want, so you can filter on everything. So that’s it for CloudWatch metrics. I hope you liked it and I will see you in the next lecture.
2. CloudWatch Custom Metrics
So all the metrics we’ve seen so far in this course are metrics taken directly from the AWS services we have used, enabled by default. But there is a way for you to get custom metrics into CloudWatch: you can define your own custom metrics. For example, you may want to push the memory (RAM) usage, the disk space, or the number of logged-in users of your application to CloudWatch.
For this you would use an API call named PutMetricData. You can add dimensions (attributes) to your custom metrics, for example instance ID, environment name, whatever you want; it’s really up to you to name them however you want. Then you can specify the metric resolution with the StorageResolution API parameter, which has two possible values: either it’s a standard resolution custom metric, and you push a data point every 1 minute (60 seconds), or you enable high resolution, in which case you can push metrics every 1, 5, 10, or 30 seconds.
Okay? Something good to know is that with custom metrics, pushing a metric in the past or in the future works as well, and this is a very important exam point. If you push a metric with a timestamp up to two weeks in the past or 2 hours in the future, you’re not going to get an error from CloudWatch; it is going to accept your metric as is. That means you need to make sure that your EC2 instance time is correctly configured if you want the metrics to be synchronized with the actual time from AWS. So let’s push a custom metric.
And for this I went to the documentation for the CloudWatch put-metric-data command. This is the CLI documentation, and it shows you how to push a metric into CloudWatch. I’m not going to read the whole documentation; you can have a look at all the parameters in here. But very importantly, a timestamp can be specified, and you can specify a timestamp up to two weeks in the past and 2 hours in the future. So very, very important.
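To illustrate, here is a hedged sketch of a put-metric-data call that sets both an explicit timestamp and a 1-second storage resolution. The namespace, metric name, value, and timestamp below are made up for illustration; the timestamp just needs to fall within the two-weeks-past / two-hours-future window relative to when you run it:

    aws cloudwatch put-metric-data \
      --namespace MyApp \
      --metric-name LoggedInUsers \
      --value 42 \
      --unit Count \
      --storage-resolution 1 \
      --timestamp 2023-05-30T12:00:00Z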
Then you can specify the metric data: the metric name, the value, the unit, and so on, as well as dimensions and the storage resolution if you want a high-resolution metric instead of a standard-resolution one. Okay, so what I’m going to do is just push a very simple example.
So at the end of the documentation there are examples. You can use a JSON file to push a metric like this if you wanted to, and then pass it to the API call. Or, if you want to go even quicker, you can use a single command that specifies the value of your metric, the unit (bytes), as well as the instance ID, the instance type, and so on. So let me take this command right here, and we’re going to open the CloudShell utility to push that metric. Okay, so CloudShell is launched, and now I’m going to paste the command in and press Enter. This is going to push a custom metric into CloudWatch.
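For reference, the pasted command looks roughly like the example in the CLI documentation. This is a reconstruction rather than a verbatim copy, so treat the exact names and values as placeholders:

    aws cloudwatch put-metric-data \
      --namespace MyNameSpace \
      --metric-name Buffers \
      --unit Bytes \
      --value 231434333 \
      --dimensions InstanceId=i-123456789,InstanceType=m1.small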
Now you have to imagine that if this is done from an EC2 instance with a script, for example, you can push any metric regularly. Right now I’m just pushing one data point using the CLI into CloudWatch, which is quite handy already. And if you know the CloudWatch Unified Agent, what it does is use this PutMetricData API call to push metrics into CloudWatch regularly. So when this is pushed, we have created a new namespace named MyNameSpace. That means that if I go back to my CloudWatch metrics and refresh, I need to clear my graph, so I’m just going to leave the service and come back to it (it’s going to be easier), and then go to All metrics.
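As a quick aside, you could also confirm from CloudShell that the custom namespace now exists, with a simple list-metrics call (assuming the namespace name used in the command above):

    # List all metrics pushed under the custom namespace
    aws cloudwatch list-metrics --namespace MyNameSpace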
And as you can see, we have a custom namespace that has been created right here. All of the namespaces before were created by AWS, but now we have a namespace created by us. Inside it we have two dimensions, instance ID and instance type, and these represent the same instance ID and instance type dimensions that were specified in the command.
It is up to you to define these dimensions, obviously. Then you click on it and you can see the instance ID, the instance type, and the metric name Buffers. If I click on it now, we don’t see much, because, well, we don’t have much, but there is one data point in here that has been created, and this is my custom metric. So that’s it, it’s quite handy: you’ve seen how to push custom metrics very easily using an API call. So I hope you liked it and I will see you in the next lecture.