Amazon AWS Certified SysOps Administrator Associate – Monitoring, Auditing and Performance Part 2
3. CloudWatch Dashboards
Now that we have metrics, it’s possible for us to present them in a CloudWatch dashboard. So it is very easy: you can access your key metrics and your alarms, and the dashboards are going to be global, so they’re going to work across many regions, and you can include graphs from different AWS accounts or different regions. And this is very important from an exam perspective.
You can have metrics from us-east-1, eu-west-1 and ap-southeast-2 in the same CloudWatch dashboard. You can change the time zone and the time range of the dashboard, you can set up automatic refresh, and you can share the dashboard. Now in terms of pricing, the pricing is very easy.
You get three dashboards with up to 50 metrics for free, and then you’re going to pay $3 per dashboard per month afterwards. So let’s go ahead and see how we can create a dashboard. So currently we have zero custom dashboards, but we can create an automatic dashboard based on a specific service.
So you could create, for example, an Auto Scaling dashboard, and this would create an already pre-configured dashboard for you, which you can filter by resource group, for example the dev group or the finance group. We had created these groups before, and this would show you all the metrics that are relevant for your Auto Scaling group.
Okay? But these are your automatic dashboards. What you can do instead is create a custom dashboard. So if you go to Dashboards and then Create Dashboard, you can create a demo dashboard. Now in here you can add many different things: a line, a stacked area, a number, some text, some logs in a table, an alarm status and so on. So there are lots of different widgets that you can use within your dashboard. For now, I’ll use a number, and I’ll use a number because we pushed a custom metric into my namespace. I will choose this number, and it is going to show you the size of my buffer. So this is quite handy, and I will create this widget. So remember, this number was pushed using the PutMetricData API call that we used before, from the documentation, in the CLI.
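To make the PutMetricData call mentioned above concrete, here is a minimal sketch of the request it sends. The namespace `MyNameSpace` and metric name `BufferSize` are placeholders standing in for whatever you pushed in the earlier lecture:

```python
import json

# Illustrative PutMetricData payload -- namespace, metric name and
# value are placeholders, not real values from the lecture's account.
put_metric_data_params = {
    "Namespace": "MyNameSpace",
    "MetricData": [
        {
            "MetricName": "BufferSize",
            "Value": 1000000.0,   # the buffer size we want to report
            "Unit": "Bytes",
        }
    ],
}

# The equivalent AWS CLI call would look like:
#   aws cloudwatch put-metric-data --namespace MyNameSpace \
#       --metric-data MetricName=BufferSize,Value=1000000,Unit=Bytes
print(json.dumps(put_metric_data_params, indent=2))
```

Once pushed, the metric shows up under Custom Namespaces in the console, which is where the number widget picks it up.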
So here we have this number, and it’s pretty cool. But say you wanted to add another widget: you can add, for example, a line and a metric, and then you need to choose a metric, for example again EC2, and you’ve got your per-instance metrics and your CPU credit balance, and you can add it. Now, we don’t have any information right now in this dashboard because we don’t have any EC2 instances running, and therefore the credit balance for that EC2 instance isn’t being populated, okay? But you could go back in time and display one month of data, and this would show you a little bit of information. And you can zoom in.
As you can see, as I zoom in, the buffer did not show any data, because during the time frame I’m zooming into, the buffer, which was a custom metric, did not have any data. So you can definitely have a good look around your dashboards. And the cool thing is that all the metrics and all the widgets that you are showing side by side are going to be shown over the exact same time frame, so you can get a more complete picture of the story of your metrics as you explore your dashboard. Okay? So right now I can display the last hour, or say, hey, show me the last 180 minutes, and then I can just save this dashboard and we’ll be good to go. Okay, so now you can also do something else with your dashboard.
I told you dashboards are global, so you can add a widget and add a line again, okay, a metric. But this time, as you can see, the metric can be chosen from any specific region. So right now we’re in Frankfurt, but if you go to us-east-1, you could find some metrics around us-east-1, for example some storage metrics of a bucket, and you can get the BucketSizeBytes and plot that, or the NumberOfObjects, for example, and add this directly.
So this metric right here comes from us-east-1, even though there’s no data available. So not very, very interesting. But as you can see, metrics can come from different regions in the same CloudWatch dashboard, which is quite handy. Okay? And then when you’re done, make sure you save your dashboard, and then all the things you’ve done are going to be permanent. Okay? So that’s it for CloudWatch dashboards. Just remember: multiple accounts, multiple regions, that’s very important. And you can show multiple widgets just by clicking on the add widget button, including for your custom metrics. Okay, so I hope that was helpful and I will see you in the next lecture.
4. CloudWatch Logs
Now let’s talk about CloudWatch Logs. So when you want to store your logs in AWS, the best place is CloudWatch Logs. The idea is that you’re going to group these logs into log groups. A log group is a name that you choose, but usually it represents an application. And within each log group you have log streams, and they represent instances within the application, or different log file names, or different containers and so on.
Then you define a log expiration policy. For example, you may never want the logs to expire, or you may want them to be deleted after 30 days and so on, because you are paying for storage in CloudWatch Logs. Then, from CloudWatch Logs, you can export the logs to multiple places, such as Amazon S3, Kinesis Data Streams, Kinesis Data Firehose, Lambda and ElasticSearch.
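The expiration policy mentioned above maps to a single API call, PutRetentionPolicy. Note that retention must be one of a fixed set of allowed values, in days; the log group name below is a placeholder:

```python
# Setting a log expiration (retention) policy on a log group.
# The log group name is a placeholder.
retention_params = {
    "logGroupName": "/my-app/prod",
    "retentionInDays": 30,
}

# Equivalent CLI call:
#   aws logs put-retention-policy --log-group-name /my-app/prod \
#       --retention-in-days 30

# Retention only accepts specific values (days):
ALLOWED_RETENTION_DAYS = {
    1, 3, 5, 7, 14, 30, 60, 90, 120, 150, 180,
    365, 400, 545, 731, 1827, 3653,
}
assert retention_params["retentionInDays"] in ALLOWED_RETENTION_DAYS
print("retention policy OK:", retention_params)
```

Deleting the policy (DeleteRetentionPolicy) puts the group back to "never expire", which is the default.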
Now, what types of logs can go into CloudWatch Logs? Well, we can send the logs using the SDK, or the CloudWatch Logs agent, or the CloudWatch Unified agent. Now, the CloudWatch Unified agent sends logs to CloudWatch, and so the CloudWatch Logs agent is now sort of deprecated. You have Elastic Beanstalk, which is used to collect logs from the application directly into CloudWatch.
ECS will send the logs directly from the containers into CloudWatch. Lambda will send logs from the functions themselves. VPC Flow Logs will send logs specific to your VPC metadata network traffic. API Gateway will send all the requests made to the API Gateway into CloudWatch Logs. CloudTrail can send logs directly based on a filter, and Route 53 will log all the DNS queries made to its service.
So another thing you can do that’s very important is to define metric filters and use Insights. The idea is that you have your CloudWatch Logs and you can use filter expressions, for example, to find a specific IP within the logs, or find the log lines where that IP appears, or find every log line in your logs that contains the word ERROR. Then, thanks to this metric filter, you can start counting these occurrences, and this becomes a metric. Okay? And this metric can be linked to a CloudWatch alarm. Then the other feature that’s really cool to discuss is CloudWatch Logs Insights. The idea is that with CloudWatch Logs Insights you can query logs and add these queries into CloudWatch dashboards directly. And some common queries are provided directly by AWS.
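The counting behaviour of a metric filter is easy to simulate locally. The sketch below mimics what a filter pattern of "ERROR" would count over a batch of log events, and shows the shape of the real CLI call and of an equivalent Logs Insights query; the log lines, log group and namespace are all made up:

```python
# Local simulation of what a metric filter with pattern "ERROR" counts.
# The log lines below are invented for illustration.
log_events = [
    "2023-01-01 12:00:00 INFO  request handled",
    "2023-01-01 12:00:01 ERROR connection refused",
    "2023-01-01 12:00:02 ERROR timeout talking to db",
    "2023-01-01 12:00:03 INFO  request handled",
]

# Each matching line becomes one data point; the sum is the metric value.
error_count = sum(1 for line in log_events if "ERROR" in line)
print(error_count)  # -> 2

# The real metric filter would be created with something like:
#   aws logs put-metric-filter --log-group-name /my-app/prod \
#       --filter-name errors --filter-pattern "ERROR" \
#       --metric-transformations \
#       metricName=ErrorCount,metricNamespace=MyApp,metricValue=1

# A CloudWatch Logs Insights query over the same lines might read:
insights_query = (
    "fields @timestamp, @message "
    "| filter @message like /ERROR/ "
    "| sort @timestamp desc"
)
```

The resulting `ErrorCount` metric is an ordinary CloudWatch metric, so it can be graphed on a dashboard or wired to an alarm like any other.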
And this is a very easy language to use. So first let’s talk about the S3 export. You export from CloudWatch Logs into Amazon S3, and this can take up to 12 hours for the data to become available for export. The API call is CreateExportTask. But this is going to be done in its own time, and so it’s not near real time or real time. Instead, if you wanted to stream logs from CloudWatch Logs, you would need to use subscriptions. Now, what are subscriptions? Well, subscriptions are a filter that you apply on top of your CloudWatch Logs, and then you can send the logs to a destination.
So it could be a Lambda function, for example, one that you define yourself, or a managed one provided by AWS if you want to send data into Amazon ElasticSearch directly. Or it could be Kinesis Data Firehose, if you wanted to send the logs, for example, into Amazon S3 in near real time. So this is a faster alternative to the one I just showed you before, the export from CloudWatch Logs to S3.
Or you could use a Kinesis data stream, for example, to send the data into Kinesis Data Firehose, Kinesis Data Analytics, Amazon EC2, or Lambda and so on. Finally, with CloudWatch Logs, you can do some log aggregation across accounts and across regions. So you may have multiple accounts: for example, account A in region one has a subscription filter that sends into a Kinesis data stream in a common account; same for account B in region two, same architecture and so on. And so you can centralize all these logs together into a Kinesis data stream, then Kinesis Data Firehose, and then, for example, Amazon S3. Okay.
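The per-account piece of that aggregation architecture is a PutSubscriptionFilter call pointing at the Kinesis stream in the central account. Here is a sketch of its parameters; the log group name, account IDs, stream and role ARNs are all placeholders:

```python
import json

# Sketch of PutSubscriptionFilter parameters streaming a log group into
# a Kinesis data stream in a central logging account. All ARNs, names
# and account IDs are placeholders.
subscription_params = {
    "logGroupName": "/my-app/prod",
    "filterName": "ship-to-central",
    "filterPattern": "",  # empty pattern = forward every log event
    "destinationArn": (
        "arn:aws:kinesis:eu-central-1:111122223333:stream/central-logs"
    ),
    # IAM role that lets CloudWatch Logs write into the stream:
    "roleArn": "arn:aws:iam::111122223333:role/CWLtoKinesisRole",
}

# Equivalent CLI call:
#   aws logs put-subscription-filter --log-group-name /my-app/prod \
#       --filter-name ship-to-central --filter-pattern "" \
#       --destination-arn arn:aws:kinesis:... --role-arn arn:aws:iam:...
print(json.dumps(subscription_params, indent=2))
```

Each source account repeats this same configuration, which is how all the regional logs end up funnelled into the one central stream, then Firehose, then S3.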