Amazon AWS Certified SysOps Administrator Associate – S3 Storage and Data Management – For SysOps Part 5


12. [SAA/DVA] S3 Storage Classes + Glacier – Hands On

So let’s have a look at storage classes. I’m going to create a bucket named demo-tiffan-2020-storage-classes, scroll all the way down, and click on Create bucket. So this is a whole new bucket. And where is it? Here it is. And I’m going to upload my first object in here, an object named coffee.jpg. Okay, so if we expand the additional upload options, we can have a look at the storage class option. And here we get access to all the storage classes we’ve learned about in the previous lecture: Standard, Standard-IA, One Zone-IA, Reduced Redundancy (which is deprecated, so usually not used), Intelligent-Tiering, Glacier, and Glacier Deep Archive.

So we have all the options here, and there is a summary of what each storage class means: for example, how many AZs the data is stored in, the minimum storage duration, the minimum billable object size, and fees such as auto-tiering and retrieval fees. You can also click on Learn more, and Amazon provides links to the documentation as well as the pricing pages in case you have any doubts.

But for example, let’s choose to upload this file in Standard-IA, which is infrequent access, and let’s see if that works. We click on Upload and here we go, our file has been uploaded. So I can happily upload another file, for example my beach.jpg, and if I choose to upload beach.jpg into Glacier, then it will not allow me to see this file immediately unless I retrieve it. So let’s have a look.
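For reference, the same uploads can also be scripted. Here is a minimal boto3 sketch (not part of the lecture), assuming the demo bucket name from above and two local files named coffee.jpg and beach.jpg:

import boto3

s3 = boto3.client("s3")
bucket = "demo-tiffan-2020-storage-classes"  # demo bucket from above (name illustrative)

# Upload coffee.jpg directly into the Standard-IA storage class
s3.upload_file("coffee.jpg", bucket, "coffee.jpg",
               ExtraArgs={"StorageClass": "STANDARD_IA"})

# Upload beach.jpg directly into the Glacier storage class
s3.upload_file("beach.jpg", bucket, "beach.jpg",
               ExtraArgs={"StorageClass": "GLACIER"})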

Back in the console, I have my beach.jpg, and as you can see its storage class is Glacier, whereas for coffee.jpg the storage class is Standard-IA. So if I click on beach.jpg, it says this object is stored in the Glacier storage class; in order to access it, you must first restore it. That makes sense. Whereas if I go to coffee.jpg, I’m easily able to do Object actions, Open, and view it. For this beach.jpg, I cannot open it. I first need to initiate a restore, which is going to take some time: between 5 and 12 hours if I do it with bulk retrieval, between 3 and 5 hours with standard, and between 1 and 5 minutes with expedited, which is going to be a lot more expensive.
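As a sketch (not shown in the lecture), the same restore could be initiated from code with boto3; the number of days and the retrieval tier below are just example values:

import boto3

s3 = boto3.client("s3")

# Ask S3 to keep a temporary restored copy of the Glacier object for 5 days,
# using the Standard retrieval tier (3-5 hours); the other tiers are
# "Bulk" (5-12 hours) and "Expedited" (1-5 minutes).
s3.restore_object(
    Bucket="demo-tiffan-2020-storage-classes",
    Key="beach.jpg",
    RestoreRequest={
        "Days": 5,
        "GlacierJobParameters": {"Tier": "Standard"},
    },
)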

When initiating the restore, I also choose how many days I want the restored copy to be available. It’s going to take too long for me to show you the result here, but you get the idea: because the object is in Glacier, it takes some time to restore it. And finally, it is possible for you to edit the storage class of an object. So this object right here is in Standard-IA, but I can edit it and change the storage class to Standard, One Zone-IA, Reduced Redundancy, or Intelligent-Tiering. For example, let’s move this to Intelligent-Tiering and save the changes. And here we go, the storage class of that object has been changed; under the hood this is a copy of the object onto itself with a new storage class, as in the sketch below. So that’s it for storage classes. I hope you liked it and I will see you in the next lecture.
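For reference, here is a minimal boto3 sketch (not part of the lecture) of that storage class change, assuming the same bucket and object key:

import boto3

s3 = boto3.client("s3")
bucket = "demo-tiffan-2020-storage-classes"

# Changing the storage class is a copy of the object onto itself
# with a new StorageClass value
s3.copy_object(
    Bucket=bucket,
    Key="coffee.jpg",
    CopySource={"Bucket": bucket, "Key": "coffee.jpg"},
    StorageClass="INTELLIGENT_TIERING",
)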

13. [SAA/DVA] S3 Lifecycle Rules

So you can transition objects between storage classes, as we’ve seen in the previous hands-on. So how do we do it? Well, there is a giant graph on the AWS website that describes the possible transitions, and it’s pretty complicated. But as you can see, from Standard-IA you can go into Intelligent-Tiering, One Zone-IA, and then Glacier and Glacier Deep Archive; it just shows the possible transitions. As you can see, from Glacier you cannot go back to Standard-IA. You have to restore the object and then copy that restored copy into Standard-IA if you want to. So for infrequently accessed objects, move them to Standard-IA. For archived objects that you don’t need in real time, the general rule is to move them to Glacier or Deep Archive. And moving all these objects around between these classes can be done manually, but it can also be done automatically using something called a lifecycle configuration.

And configuring those is something you are expected to know going into the exam. So, lifecycle rules, what are they? You can define transition actions, which are helpful when you want to transition your objects from one storage class to another. For example, you’re saying: move objects to the Standard-IA class 60 days after creation, and move them to Glacier for archiving six months later. So, fairly easy and fairly natural. Then there are expiration actions, which are used to delete an object after some time. For example, for your access log files, maybe you don’t need them after a year. So after a year you would say: hey, all my files that are over a year old, please delete them, please expire them. And it can also be used to delete old versions of a file.

So if you have versioning enabled and you keep on overwriting a file, and you know you won’t need the previous versions after maybe 60 days, then you can configure an expiration action to expire old versions of a file after 60 days. It can also be used to clean up incomplete multipart uploads: in case some parts have been hanging around for 30 days and you know they will never be completed, you would set up an action to remove these parts. And rules can be applied to a specific prefix. So if you have all your MP3 files within the mp3/ “folder” (prefix), then you can set a lifecycle rule just for that specific prefix, and you can have many lifecycle rules based on many prefixes in your bucket. That makes sense. And you can also have rules created for certain object tags: if you want a rule that applies just to the objects tagged Department: Finance, then you can do so.
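To make this concrete, here is a minimal boto3 sketch (not from the lecture) of a lifecycle configuration combining a prefix-scoped rule and a tag-scoped rule; the bucket name, prefix, tag, and day values are illustrative:

import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="my-example-bucket",  # illustrative name
    LifecycleConfiguration={
        "Rules": [
            {
                # Rule scoped to the mp3/ prefix: transition, expire,
                # and clean up incomplete multipart uploads
                "ID": "mp3-archive",
                "Filter": {"Prefix": "mp3/"},
                "Status": "Enabled",
                "Transitions": [
                    {"Days": 60, "StorageClass": "STANDARD_IA"},
                    {"Days": 180, "StorageClass": "GLACIER"},
                ],
                "Expiration": {"Days": 365},
                "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 30},
            },
            {
                # Rule scoped to objects tagged Department=Finance:
                # delete old versions 60 days after they become non-current
                "ID": "finance-old-versions",
                "Filter": {"Tag": {"Key": "Department", "Value": "Finance"}},
                "Status": "Enabled",
                "NoncurrentVersionExpiration": {"NoncurrentDays": 60},
            },
        ]
    },
)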

So the exam will ask you some scenario questions, and here is one, and you need to think about it with me. Your application on EC2 creates image thumbnails after profile photos are uploaded to Amazon S3. These thumbnails can be easily recreated and only need to be kept for 45 days.

The source images should be immediately retrievable during these 45 days, and afterwards the user can wait up to 6 hours. How would you design this solution? I’ll let you think for a second; please pause the video and then we’ll get to the solution. So the S3 source images can be in the Standard class, and you can set up a lifecycle configuration to transition them to Glacier after 45 days. Why? Because they need to be archived afterwards and we can wait up to 6 hours to retrieve them. And then the thumbnails can be in One Zone-IA. Why? Because we can recreate them. And we can also set up a lifecycle configuration to expire them, that is delete them, after 45 days. So that makes sense, right? We don’t need the thumbnails after 45 days, so let’s just delete them; let’s move the source images to Glacier; and the thumbnails are going to be in One Zone-IA because it’s going to be cheaper, and in case we lose an entire AZ in AWS, we can easily recreate all the thumbnails from the source images. So this is going to provide you the most cost-effective rules for your S3 bucket.
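As a sketch (not part of the lecture), the lifecycle side of this design could look like the following with boto3; the bucket name and the images/ and thumbnails/ prefixes are assumptions, not given in the question:

import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="profile-photos-bucket",  # illustrative name
    LifecycleConfiguration={
        "Rules": [
            {
                # Source images: kept in Standard, archived to Glacier after 45 days
                "ID": "source-images-to-glacier",
                "Filter": {"Prefix": "images/"},
                "Status": "Enabled",
                "Transitions": [{"Days": 45, "StorageClass": "GLACIER"}],
            },
            {
                # Thumbnails: uploaded by the application into One Zone-IA,
                # then deleted after 45 days since they can be recreated
                "ID": "expire-thumbnails",
                "Filter": {"Prefix": "thumbnails/"},
                "Status": "Enabled",
                "Expiration": {"Days": 45},
            },
        ]
    },
)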

Now, a second scenario: there is a rule in your company that states that you should be able to recover your deleted S3 objects immediately for 15 days, although this may happen rarely. After this time, and for up to one year, deleted objects should be recoverable within 48 hours. So how would you design this to make it cost-effective? Okay, let’s do it. You need to enable S3 versioning, right? Because we want to delete files, but we want to be able to recover them. And with S3 versioning, we’re going to have object versions, and the deleted objects are going to be hidden by a delete marker, so they can be easily recovered.

But we’re also going to have non-current versions, basically the object versions from before. And these non-current versions, we want to transition them into Standard-IA, because it’s very unlikely that these old object versions are going to be accessed, but if they are accessed, you need to be able to recover them immediately. And then, after this 15-day grace period to recover the non-current versions, you can transition them into Glacier Deep Archive, where they can stay archived for the rest of the 365 days and be recoverable within 48 hours. Why don’t we just use Glacier? Well, because Glacier would cost us a little bit more money; since we have a timeline of 48 hours, we can go all the way down to Deep Archive and any of its retrieval tiers to get our files back, and get even more savings.
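Here is a partial boto3 sketch of this design (not from the lecture), covering the archival part: non-current versions stay immediately recoverable for the first 15 days, then move to Glacier Deep Archive, and are permanently removed after about a year. Versioning is assumed to already be enabled on the bucket, and the bucket name is illustrative:

import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="company-records-bucket",  # illustrative name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-noncurrent-versions",
                "Filter": {"Prefix": ""},  # apply to all objects
                "Status": "Enabled",
                # 15 days after a version becomes non-current (e.g. after a delete),
                # move it to Glacier Deep Archive (recoverable within 48 hours)
                "NoncurrentVersionTransitions": [
                    {"NoncurrentDays": 15, "StorageClass": "DEEP_ARCHIVE"}
                ],
                # Permanently remove non-current versions about a year later
                "NoncurrentVersionExpiration": {"NoncurrentDays": 380},
            },
        ]
    },
)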

So this is the kind of exam question you would get, and it’s really important for you to understand exactly what the question is asking, which storage class corresponds best to it, and which lifecycle rule corresponds best to it. So let’s go into the hands-on just to set up a lifecycle rule.

14. [SAA/DVA] S3 Lifecycle Rules – Hands On

Okay, so now let’s have a look at lifecycle rules. For this, I’m going to go into Management, where I can define a lifecycle rule. So let me create one. I will call this one DemoRule, and we can either apply it to a specific scope within the bucket, or to all objects. I will apply it to all objects just for simplicity’s sake in this video. So we have five different kinds of lifecycle rule actions: we can transition current versions of objects between storage classes, or previous versions.

So what do we mean by current and previous? Well, by current we mean the most recent version of the object, if we have enabled versioning, and by previous we mean all the other versions of an object. Then we can expire current versions of objects, permanently delete previous versions of objects, and finally delete expired delete markers or incomplete multipart uploads. So, lots of different options. But let’s just do this one: we’re going to transition current versions of objects between storage classes, and we can say we transition objects into the Standard-IA storage class after 30 days.

Then we can move them into Intelligent-Tiering after 70 days, then into Glacier after 180 days, and into Glacier Deep Archive after, let’s say, 365 days. Okay? And I get a small warning here, probably because I’m not optimizing my costs correctly: if you transition small objects into Glacier or Glacier Deep Archive, this will increase costs. So obviously, if you do transition objects there, make sure they’re big enough. Okay, great. Next, I can also expire current versions of objects. So we’re saying: after an object is created, maybe after 700 days, please delete it. And we can also add a previous-version transition just for fun, and say: move previous versions into Glacier after 60 days, because if it’s a previous version, maybe we don’t want to access it.

We don’t plan on accessing it. Okay, so I acknowledge this as well. And now we can look at the timeline summary. For current version actions: day zero, the objects are uploaded; day 30, they are transitioned automatically into Standard-IA; day 70, into Intelligent-Tiering; day 180, into Glacier; and finally, day 365, into Glacier Deep Archive.

On day 700, they will expire. And then for previous version actions: day zero, they become non-current, and day 60, they are transitioned into Glacier. Just a simple, fun lifecycle rule to set up, but it shows you the whole power of lifecycle rules. You can set up multiple ones per bucket, on different filters, with different rules, and this allows you to really optimize your costs for Amazon S3 in AWS. The same rule can also be defined programmatically, as in the sketch below. So I hope you liked it and I will see you in the next lecture.
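For reference, the same DemoRule could be defined with boto3 instead of the console. A minimal sketch, using the day values from this hands-on and applying to all objects in the demo bucket (bucket name illustrative):

import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="demo-tiffan-2020-storage-classes",  # illustrative name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "DemoRule",
                "Filter": {"Prefix": ""},  # all objects in the bucket
                "Status": "Enabled",
                # Current versions: Standard-IA at day 30, Intelligent-Tiering
                # at day 70, Glacier at day 180, Glacier Deep Archive at day 365
                "Transitions": [
                    {"Days": 30, "StorageClass": "STANDARD_IA"},
                    {"Days": 70, "StorageClass": "INTELLIGENT_TIERING"},
                    {"Days": 180, "StorageClass": "GLACIER"},
                    {"Days": 365, "StorageClass": "DEEP_ARCHIVE"},
                ],
                # Current versions expire (are deleted) after 700 days
                "Expiration": {"Days": 700},
                # Previous (non-current) versions move to Glacier after 60 days
                "NoncurrentVersionTransitions": [
                    {"NoncurrentDays": 60, "StorageClass": "GLACIER"}
                ],
            },
        ]
    },
)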
