SAP-C02 Amazon AWS Certified Solutions Architect Professional – New Domain 5 – Continuous Improvement for Existing Solutions Part 8
47. EBS Volume Types
Hey everyone, and welcome back to the Knowledge Portal video series. Today we will be discussing EBS volume types. Before we go ahead with the lecture, let me show you exactly what I mean by this. If you go to Volumes inside the Elastic Block Store and click on Create Volume, the first option that you get is the volume type. There are a few volume types available, such as General Purpose, Provisioned IOPS, and others. As a solutions architect, you should know what these volume types are and the use cases in which each can be used, and this is what we will be discussing in today's lecture. So, let's start. Generally, when we talk about performance metrics for storage devices, there are a few metrics that are very important to understand. Let me give you an example. Around five to six months back, I was getting my own Scootie; I don't prefer bikes.
So I asked my friend what things we need to look at while choosing an ideal bike or a Scootie, and he said there are two important things. Again, it varies. The first is power, which basically means the engine CC: whether it's a 100 CC, a 125 CC, or a 150 CC engine. The second is how fast it can go; can it reach 100? Those are two of the metrics people generally use to choose a bike. Similarly, when you plan to buy a storage device, there are certain metrics you want to look at before you buy. The first important metric is input/output operations per second, also referred to as IOPS. The second is throughput, and the third is response time.
Now, throughput was earlier generally expressed in megabytes per second, so we had kilobytes, megabytes, gigabytes, and terabytes. The conventions are getting more standardized, and today you will often see binary units instead. So, talking about standardization, there are new standardized units which various operating systems and cloud vendors now follow for storage devices: the kibibyte (KiB), the mebibyte (MiB), the gibibyte (GiB), and the tebibyte (TiB). These are the standard conventions which vendors are now starting to follow, and this is the reason why, when you look at cloud providers, you will often not find MB per second; you will find MiB per second instead. This is something important to remember.
Now, many people find this hard to get used to, because for the past twenty-odd years we were used to the older units, and suddenly this standardization has occurred. So this is something new that we have to adapt to. Now, when we talk about EBS volume types, we already looked into the console, where there were multiple volume types, and each of these volume types differs in price and performance. Depending upon our use case, we can select one among these types, and the price-to-performance ratio will vary depending on the volume type that you choose. As a generic scenario, when we talk about EBS, there are two major categories of EBS volumes: one is the standard hard disk drive, which is something you generally see in desktops, and the second, which is newer and faster, is the SSD. Now, if you look into a standard hard disk drive, it generally has some kind of a rotating platter.
And this platter spins quite fast in order to read or write data. As this mechanism is mechanical, it generally cannot reach very high speeds, and this is the reason why SSDs were introduced, which are significantly faster but more expensive as well. So, the EBS volume types are divided based upon whether an SSD or an HDD is being used. When we talk about SSDs, or solid state drives, there are two major volume types available: one is General Purpose SSD, and the second is Provisioned IOPS SSD. In very simple terms, General Purpose, as the name signifies, is quite good: not very slow, but not very fast either. These types of SSDs can be used in test or dev environments, and they are generally also used in production environments for web servers or application servers.
However, when we need very high performance for mission-critical applications, that is when Provisioned IOPS volumes are generally used. If you look into the first point: highest-performance SSD volume designed for mission-critical application workloads. These are among the fastest volumes available right now in Elastic Block Store. Now, in order to understand the major difference between these two, let me open up the console and show you. When you look at General Purpose SSD, you see a baseline performance of 3 IOPS per GiB, with a minimum of 100 IOPS. So what really happens is, since you have a minimum of 100 IOPS and a baseline of 3 IOPS per GiB, let me select 30 GiB, and here, if you see, the IOPS has not increased.
If you multiply 30 by 3, it becomes 90, so up to 33 GiB the IOPS will not really change, because the minimum is 100. Now, if you select 35, this is where the IOPS really starts to change. Or you can select 34: up to 33 it stays at the minimum, because 33 × 3 is 99; select 34 and the IOPS starts to change, so it is 102 IOPS. If I move one more, it should be 105, you see, as it is 3 IOPS per GiB. Now, there is one problem with this kind of approach. Let's assume you have 100 GiB here and you are getting 300 IOPS of performance. Assume this is a database server and you need higher IOPS; you need higher performance because there is a very heavy workload present.
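The baseline arithmetic being walked through here can be sketched as a tiny Python helper. The 3 IOPS per GiB baseline, the 100 IOPS floor, and the 10,000 IOPS cap are the gp2 figures quoted in this lecture; current AWS limits differ, and the function name is my own.

```python
def gp2_baseline_iops(size_gib: int) -> int:
    """Baseline IOPS of a gp2 volume, per the lecture-era figures:
    3 IOPS per GiB, with a floor of 100 IOPS and a cap of 10,000."""
    return max(100, min(3 * size_gib, 10_000))

# 30 GiB and 33 GiB both sit at the 100 IOPS floor (3 x 33 = 99 < 100);
# at 34 GiB the 3-IOPS-per-GiB baseline takes over.
print(gp2_baseline_iops(30))   # 100
print(gp2_baseline_iops(34))   # 102
print(gp2_baseline_iops(100))  # 300
```

Playing with the sizes reproduces exactly the console behaviour shown in the demo: the displayed IOPS only starts moving once the volume crosses 33 GiB.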
However, if you choose a General Purpose SSD, the IOPS you get for that size is 300; by default you cannot go higher. So, in order to improve the performance, you generally select Provisioned IOPS. What really happens in Provisioned IOPS is that even for 100 GiB you can select what kind of performance you want. Let me select 2000, and you can see that for 100 GiB I can select 2000 IOPS. So I can select whatever IOPS I need even though my volume size stays the same. In Provisioned IOPS, let's say you get 2000 IOPS; it can go much higher as well. You can select 3000 too, you see. Generally, the maximum ratio is 50:1 (IOPS to GiB). So in Provisioned IOPS, let's assume for 100 GiB you are getting 3000 IOPS; as soon as you move to General Purpose SSD, the IOPS drops to 300. From 3000 down to 300 is quite a big difference. This is the difference between General Purpose and Provisioned IOPS based volumes.
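The 50:1 figure mentioned above can likewise be turned into a quick check of how many IOPS a given io1 size allows. The ratio and the 20,000 IOPS service cap are the values stated in this lecture (both have been raised since), and the helper name is my own.

```python
def max_io1_iops(size_gib: int, ratio: int = 50, cap: int = 20_000) -> int:
    """Highest IOPS you could provision on an io1 volume of this size,
    given the lecture-era 50:1 IOPS-to-GiB ratio and 20,000 IOPS cap."""
    return min(ratio * size_gib, cap)

print(max_io1_iops(100))   # 5000 - so requesting 2000 or 3000 IOPS is fine
print(max_io1_iops(1000))  # ratio would allow 50,000, but the cap limits it to 20,000
```

This is why, in the demo, both 2000 and 3000 IOPS were accepted for a 100 GiB volume: both sit comfortably under the 50:1 limit.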
Now, the volume sizes are also different. General Purpose starts from 1 GiB and goes up to 16 TiB, whereas for Provisioned IOPS the volume size is 4 GiB to 16 TiB. On performance, General Purpose has a maximum of 10,000 IOPS and 160 MiB per second of throughput, whereas Provisioned IOPS has a maximum of 20,000 IOPS, double the IOPS, and 320 MiB per second instead of 160 MiB per second, so double the throughput as well. General Purpose volumes are referred to as gp2 and Provisioned IOPS volumes are referred to as io1. By this I mean, if you look into the console, let me go to Volumes, you see the volume type is shown as gp2 over here. So this is the point I'm referring to: General Purpose is gp2, and Provisioned IOPS is io1.
So the console will not tell you it is a Provisioned IOPS volume; it will just tell you it is io1, and by io1 you should understand it is Provisioned IOPS. These are the high-level differences between General Purpose SSD and Provisioned IOPS SSD as far as SSD-based volumes are concerned. Now we will move to hard-disk-based volumes. They are generally classified into Throughput Optimized HDD and Cold HDD, and the third is Magnetic. When we talk about Cold HDDs, these are the drives which cost the least: the lowest-cost HDD volumes, designed for less frequently accessed workloads. They cost the lowest, but the performance is also lower. These are not designed for frequently accessed data; they are slow. So, in a scenario where you just want to store data and not access it much, these are the drives you can use.
Similarly, Throughput Optimized HDDs are low in cost, but they are designed for frequently accessed, throughput-intensive workloads. The difference between the two is that Cold HDD is designed for less frequently accessed workloads and Throughput Optimized is designed for frequently accessed workloads; however, both are designed to be low cost. This is the reason why Throughput Optimized is generally used in applications like data warehouses, whereas Cold HDD cannot really be used for a data warehouse; the performance would really degrade. The volume size range for both of them is the same. On the performance metrics, Throughput Optimized has a maximum of 500 IOPS, while the maximum for Cold HDD is 250 IOPS, so half the performance.
So Cold HDD has half the performance of Throughput Optimized. Throughput Optimized is referred to as st1 and Cold HDD is referred to as sc1. Now, one very important thing to remember is that neither of these drives can be used as a boot volume; you cannot install an operating system on top of them. That is a very important point to remember. The last volume type is Magnetic, which is referred to as previous generation. The performance is quite low as far as Magnetic is concerned, and the volume size is also restricted: 1 GiB to 1 TiB, with a maximum of 40 to 200 IOPS and 40 to 90 MiB per second. Now, when you look at the pricing factor, you might have guessed that Provisioned IOPS costs the most, because these are the highest-performance devices. So let's look into the pricing section, starting with Provisioned IOPS.
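To keep the five types straight, the figures quoted in this section can be collected into a small lookup table. These are the lecture-era limits (AWS has raised them since), and the boot-volume flags reflect the point just made about st1 and sc1.

```python
# Lecture-era EBS volume type limits, keyed by API name.
EBS_VOLUME_TYPES = {
    "gp2":      {"media": "SSD",      "max_iops": 10_000, "max_mib_s": 160, "bootable": True},
    "io1":      {"media": "SSD",      "max_iops": 20_000, "max_mib_s": 320, "bootable": True},
    "st1":      {"media": "HDD",      "max_iops": 500,    "max_mib_s": 500, "bootable": False},
    "sc1":      {"media": "HDD",      "max_iops": 250,    "max_mib_s": 250, "bootable": False},
    "standard": {"media": "Magnetic", "max_iops": 200,    "max_mib_s": 90,  "bootable": True},
}

# Cold HDD (sc1) really is half of Throughput Optimized (st1):
print(EBS_VOLUME_TYPES["st1"]["max_iops"] // EBS_VOLUME_TYPES["sc1"]["max_iops"])  # 2
```

A table like this is also a handy revision aid for the exam scenarios discussed at the end of the lecture.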
You see that io1 volumes cost $0.125 per GB-month of provisioned storage. However, for General Purpose SSD, the cost is significantly lower at $0.10 per GB-month. Do remember that when you select Provisioned IOPS, you also have to pay for the IOPS that you have provisioned, which is additional; so for io1 you have to calculate the cost of both factors, the storage as well as the provisioned IOPS. Next come the HDD types: on the pricing page, the first two entries are the SSD volumes, and the lower ones are the HDD volumes.
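The two-part io1 bill described above can be sketched as follows. The $0.125 per GB-month storage rate is the one quoted in the lecture; the $0.065 per provisioned-IOPS-month rate is an assumed illustrative figure, so check the current pricing page before relying on either number.

```python
def io1_monthly_cost(size_gb: int, provisioned_iops: int,
                     storage_rate: float = 0.125,  # $/GB-month, from the lecture
                     iops_rate: float = 0.065      # $/provisioned-IOPS-month, assumed
                     ) -> float:
    """io1 billing has two components: the provisioned storage
    AND the provisioned IOPS, charged separately."""
    return size_gb * storage_rate + provisioned_iops * iops_rate

# A 100 GB volume provisioned at 3000 IOPS:
print(io1_monthly_cost(100, 3000))  # 12.5 storage + 195.0 IOPS = 207.5
```

Note how, at these rates, the IOPS charge dwarfs the storage charge, which is why over-provisioning IOPS gets expensive quickly.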
Throughput Optimized HDD costs the higher of the two HDD types, at $0.045 per GB-month, and the cost of Cold HDD is almost half of that, at $0.025 per GB-month. You generally don't find Magnetic listed directly in the pricing, because it is previous generation and is fading out right now. So these are the basics of the volume types. In exams you need to understand which scenarios call for which volume types, so I have included a sample scenario to show what kind of question you might be asked.
Medium Corp is an e-commerce organization, and you have been assigned the responsibility of optimizing the performance of its servers. The use case here is that they have a critical database server which is receiving a lot of connections. The DevOps team there tried to increase the RAM and CPU, but the database performance was still low. So the question is: what EBS volume type would you suggest to them? The basic scenario is that they need higher performance, so the most appropriate answer is Provisioned IOPS, because Provisioned IOPS is the fastest volume type available in EBS. You might get very similar questions, and in each use case a different volume type might be needed. So this is the basics of EBS volumes.
48. Overview of AWS X-Ray
Hey everyone, and welcome back. In today's video we will be discussing one of the newer services which AWS has launched recently, which is X-Ray. X-Ray is a service which is basically used for debugging and for monitoring applications. Generally, if you go to mid-size or even enterprise organizations, you will see that they traditionally use some kind of application performance monitoring (APM) system; New Relic is one of the most famous ones. These APM systems monitor the application and give you exact details on which query, or which part of the application, is slowing things down. So when it comes to X-Ray, I would say, at a high-level overview, it is trying to be something similar to this.
Whenever you have a large microservices-based architecture, it is genuinely difficult to pinpoint the exact problem when it occurs. This is the basis of X-Ray: it allows you to quickly find the issues within your applications. On the definition side, X-Ray basically allows us to debug our application through the functionality of request tracing, so that we can find the root cause of a performance issue. So let's look into how exactly it works. You have your application code, and you integrate it with the X-Ray SDK which AWS officially provides; depending upon what programming language your application is written in, there are various SDKs available. These SDKs integrate with your application and send the data, the metric-specific information, to the X-Ray daemon which would be running alongside. This X-Ray daemon in turn sends the data to the X-Ray API of AWS, and from there you will be able to visualize, in your web browser, your application's connectivity and where things might be slowing down.
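To make the SDK-to-daemon hop above concrete, here is a rough Python sketch of the JSON "segment document" that ends up at the daemon (which listens on UDP port 2000 by default). In a real application the official aws-xray-sdk builds and sends these for you; this hand-rolled version only illustrates the shape of the data.

```python
import json
import os
import time

def make_segment(name: str) -> dict:
    """A minimal X-Ray segment document: the unit of trace data
    emitted for each instrumented request."""
    return {
        "name": name,                      # service name shown in the service map
        "id": os.urandom(8).hex(),         # 16-hex-digit segment id
        "trace_id": "1-%08x-%s" % (int(time.time()), os.urandom(12).hex()),
        "start_time": time.time(),
        "in_progress": True,               # end_time is filled in when the request completes
    }

# The daemon expects a small JSON header, a newline, then the segment document:
header = json.dumps({"format": "json", "version": 1})
payload = header + "\n" + json.dumps(make_segment("my-app"))
print(header)
```

The trace ID format (`1-<epoch in hex>-<24 random hex digits>`) is what ties together all the segments you later see grouped under one trace in the console.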
Before we discuss further, let me quickly show you a demo of how it might look. I have a small application running, a simple tic-tac-toe (X-and-zero) based application, and I have X-Ray enabled for it. If you see over here, on the left-hand side you have a client; the client is connecting to the tic-tac-toe application, which is called Scorekeep. This is running in an Elastic Beanstalk environment, and in turn it is connecting to various other services, including DynamoDB and even SNS. X-Ray is able to quickly map this connectivity. Now, in this scenario, which is a good one, everything is green, so nothing seems to have gone bad, but typically you would see various errors or various things slowing down.
So let me increase the window to 6 hours, and now you see that at a certain point in time there was an error: an error rate of 33%. It even gives you a graph, and if you go to View Traces, you can select the error. Traces is where you actually find the place where the error occurred; here it is the SNS call that has been giving the error. If you click here, it gives you more details related to the overall trace ID. Now, if you remove the SNS filter from here, you actually get a huge amount of information related to the various traces, which you can dig into further.
If I open a sample one, you get a nice little graph of the response time, and even the raw data which X-Ray is capable of fetching. So this is what X-Ray is all about. Again, we will not go into too much detail on how X-Ray works internally. From my experience, X-Ray definitely has a huge amount of potential, but for the time being it is not as mature as traditional APM systems like New Relic; over time it definitely will be.
But you can definitely play around with it, as it does give you a good amount of visibility into connectivity as well as response times and various other things. When it comes to integrations, AWS X-Ray provides integration with various AWS services: EC2, Lambda, Elastic Load Balancing, API Gateway, and even Elastic Beanstalk. As for supported platforms, X-Ray does not support every platform as of now; the ones it supports are Java, Go, .NET, Ruby, Python, and Node.js. This is something you should remember; knowing which platforms X-Ray supports is quite important for the exam as well.