SAP-C02 Amazon AWS Certified Solutions Architect Professional – New Domain 5 – Continuous Improvement for Existing Solutions Part 4
37. Link Aggregation Groups (LAG)
Hey everyone and welcome back. In today’s video we will be discussing the Link Aggregation Group. Now, one important part to remember is that the term Link Aggregation Group is not just limited to Direct Connect. In fact, this is a generic networking concept; the same concept is simply applied at the Direct Connect level as well. In short, link aggregation allows us to group multiple Ethernet interfaces to form a single logical interface, which is known as a Link Aggregation Group, also referred to as a LAG or a bundle. This can be understood with a diagram. Say you have a switch and a server, and this server has multiple network interfaces. You can combine all of them to form a single logical interface, which is referred to as the link aggregation. There are a lot of advantages to this type of approach. One of the advantages is that it can increase the network throughput beyond what a single connection could sustain.
So let’s say you have NIC one, which can sustain, let’s assume, 10 Mbps, and you have NIC two, which can also sustain 10 Mbps. If you make use of only a single NIC, you can have a maximum throughput of 10 Mbps. With link aggregation you combine both of these links, so in combination you can have up to around 20 Mbps of throughput. The second important advantage of link aggregation is redundancy.
So if you have a link aggregation over here and one of the NICs goes down, you still have the second NIC through which the traffic can flow. These are some of the advantages of link aggregation, and the same advantages apply when you do link aggregation with Direct Connect connections. Now, from the perspective of Direct Connect, a Link Aggregation Group makes use of the Link Aggregation Control Protocol, referred to as LACP, to aggregate multiple 1 Gbps or 10 Gbps connections at a single Direct Connect location, allowing customers to treat them as a single managed connection.
This is important; we’ll understand it in detail as part of the important pointers on the last slide. Now, once we create a Link Aggregation Group, we can associate existing connections with our LAG. Let’s understand some of the important pointers that you need to remember for the exam with respect to Direct Connect and LAGs. First, all the connections in the Link Aggregation Group must use the same bandwidth. Second, bandwidths of 1 Gbps and 10 Gbps are supported. Third, you can have a maximum of four connections in a single Link Aggregation Group. And the fourth point, which is quite important, is that all the connections in a LAG must terminate at the same Direct Connect location and on the same AWS device. So this is one important part to remember.
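To make these constraints concrete, here is a minimal boto3 sketch of creating a LAG. The location code and LAG name are placeholder assumptions; note that a single connectionsBandwidth value is supplied, reflecting the rule that every member connection must use the same bandwidth.

```python
import boto3

# A minimal sketch of creating a Direct Connect LAG with boto3.
# "EqDC2" and "prod-lag" below are placeholder assumptions.
dx = boto3.client("directconnect")

lag = dx.create_lag(
    numberOfConnections=4,          # a LAG supports a maximum of 4 connections
    location="EqDC2",               # hypothetical Direct Connect location code
    connectionsBandwidth="10Gbps",  # all connections must use this same bandwidth
    lagName="prod-lag",
)
print(lag["lagId"], lag["lagState"])
```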
Now, there is one more important concept that you need to remember as far as Direct Connect and LAGs are concerned, and that is that every LAG we create has an attribute which determines the minimum number of connections in the LAG that must be operational for the LAG itself to be operational. We can understand this with an example. Let’s say the total number of connections in our LAG is four, which is the maximum we can associate with a LAG, and we define the minimum number of connections to be two. That basically means that for our LAG to be in an operational condition, at least two connections must be up and running. Now, if two connections happen to fail, the overall status would still be up, because two of the four connections within that LAG are still operational, which satisfies the minimum. However, if a third connection fails, the overall status would be down.
Now, the reason it would be down is that we have defined the minimum number of connections to be two. Once the third connection fails, only one connection is still running, which is below that minimum, so the LAG as a whole is marked down even though a single connection is still up and running. So you need to make sure that whatever value you specify for this attribute is directly linked to what you consider necessary for an operational LAG. Now, this is important.
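As a hedged sketch of how this attribute is set on an existing LAG with boto3 (the LAG ID below is a placeholder):

```python
import boto3

# Setting the minimum-links attribute on an existing Direct Connect LAG.
# With minimumLinks=2, the LAG stays operational while at least two of its
# member connections are up; if a third connection fails, the whole LAG
# is marked down.
dx = boto3.client("directconnect")

dx.update_lag(
    lagId="dxlag-ffabc123",  # hypothetical LAG ID
    minimumLinks=2,
)
```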
Some might say that you can set the minimum number of connections to one. You can certainly do that, but the problem is that there are use cases where you created a LAG with four connections because you require a certain level of performance and bandwidth. If only one out of four connections is running, your overall performance, bandwidth, and throughput are impacted significantly. And this is the reason why a lot of organizations do not set the minimum number of connections to one.
38. AWS AppStream 2.0
Hey everyone and welcome back. In today’s video we’ll be discussing AppStream 2.0. Now, AppStream 2.0 basically allows us to centrally manage our desktop applications and securely deliver them to any computer. This can be understood in a better way with a simple use case related to a software vendor. A software vendor can use the AppStream 2.0 service to deliver trials, demos, and training for their application with no downloads or installations.
And this is very important. Let’s say that you are in a meeting and a specific software vendor wants to give you a demo, and you want to explore how that application might look. That application works only on Windows, and you are currently running a Mac, so you cannot really run that application on your machine without some kind of virtualization or something similar. AppStream 2.0 solves that use case in a very simple way: you can use that application directly from your browser. So let’s jump into the demo and look at how exactly this would work. I’m in my AppStream 2.0 console, and let’s click on Try it now.
The first time, you will have to accept the terms and conditions, so just click on Agree and continue. Here you see there are many applications: you have Eclipse, which many of you might have used, a Firefox browser, FreeCAD, et cetera. And even if you have some custom application which runs on Windows and you want it to be accessible via a browser so the user can use it, you can basically use the AppStream 2.0 service. Let’s look into Eclipse; I’ll just click on Eclipse here. Currently you can see the Eclipse IDE is loading, which is very similar to what you get when you double-click on Eclipse in Windows. I’ll just click on Allow over here and maximize it. So this is the Eclipse IDE. Now, you can do everything that you can typically do when Eclipse is installed locally.
So in the software vendor example that we were discussing, instead of the vendor giving the demo, they can publish the application to AppStream, and all the users can connect to the application from their individual browsers and use it just as they would if they had installed the application themselves. So let’s say I just click on “create a new Java project”: everything remains the same.
So I hope you now have a high-level overview of what the AppStream service is. Again, if you have an application on Windows, you can publish it to AppStream 2.0. One great thing about this is that you do not really have to manage the backend infrastructure; that is handled by AppStream. Along with that, do remember that in the exam you might get a use case where an organization wants to stream their application so that it is accessible from a browser. For those kinds of use cases, AppStream 2.0 is something that you should answer straight away. Again, you will not be asked technically how exactly to publish the application, but you should be aware of the use cases where AppStream 2.0 fits.
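For a rough sense of the API side, here is a hedged boto3 sketch of generating a streaming URL for an application, assuming a stack and fleet have already been set up; the stack, fleet, user, and application names are placeholder assumptions.

```python
import boto3

# Generate a time-limited streaming URL for an AppStream 2.0 application.
# All names below are hypothetical examples.
appstream = boto3.client("appstream")

resp = appstream.create_streaming_url(
    StackName="demo-stack",
    FleetName="demo-fleet",
    UserId="trial-user-01",
    ApplicationId="eclipse",  # hypothetical application identifier
    Validity=3600,            # URL validity in seconds
)
print(resp["StreamingURL"])  # open this in any browser to use the app
```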
39. Lambda@Edge
Hey everyone and welcome back. In today’s video we will be discussing Lambda@Edge. At a high-level overview, Lambda@Edge basically allows you to run Lambda functions to customize the content that CloudFront delivers to your end users. This can be explained most easily in a diagrammatic view, because if we jump directly into the theoretical points, it will just confuse things. So this is a diagrammatic representation. Let’s say this is a user; this can be a browser, a CLI, or anything. Then you have CloudFront here, this is the CloudFront cache, and this is the origin. The origin can be an S3 bucket or whatever website you are running behind the scenes. Now, what Lambda@Edge allows you to do is run Lambda functions at four major points. The first major point is before the request hits the CloudFront cache.
The second point is after the request misses the CloudFront cache and before it hits the origin. So this is the second point where you can run your Lambda function. The third point is as soon as you get the data back from your origin. And the fourth point is after the data traverses the CloudFront cache and before it goes back to the user. So there are four points where you can run your Lambda function. Now, these points are identified by name: the first one is the viewer request, the second one is the origin request, then you have the origin response, and the fourth is the viewer response. Because of this capability to run your Lambda function at these four points, AWS really allows a lot of possibilities and capabilities. So let’s go back to our first slide and understand each of these points.
So now, coming back to the second point of the first slide, it states that you can run your Lambda function to change the CloudFront requests and responses at the following points. We already discussed that there are four points, and these are the four you need to remember: first is the viewer request, second is the origin request, third is the origin response, and fourth is the viewer response. The viewer request is nothing but the point after CloudFront receives a request from a viewer. Second is the origin request, which is the point before CloudFront forwards the request to the origin. So you have the CloudFront cache over here; before the request is forwarded to the origin, there is a point, and this point is referred to as the origin request. The third one is the origin response, which is basically after CloudFront receives the response from the origin. And the fourth place is the viewer response, which is before CloudFront forwards the response back to the viewer, that is, before the response is sent back to the user agent. Do remember that the representation you see here is basically the CloudFront cache. So let’s do one thing: let’s understand each of these events in a detailed manner.
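Before going event by event, here is a hedged sketch of how these four trigger points are named when Lambda@Edge functions are attached to a CloudFront cache behavior (for example, inside the DistributionConfig passed to boto3’s update_distribution). The function ARNs are placeholders; Lambda@Edge requires a published function version in us-east-1, hence the ":1" suffix.

```python
# The four Lambda@Edge trigger points, expressed as a CloudFront
# LambdaFunctionAssociations structure. ARNs below are hypothetical.
lambda_function_associations = {
    "Quantity": 4,
    "Items": [
        {"EventType": "viewer-request",
         "LambdaFunctionARN": "arn:aws:lambda:us-east-1:111122223333:function:auth-check:1"},
        {"EventType": "origin-request",
         "LambdaFunctionARN": "arn:aws:lambda:us-east-1:111122223333:function:pick-origin:1"},
        {"EventType": "origin-response",
         "LambdaFunctionARN": "arn:aws:lambda:us-east-1:111122223333:function:fix-errors:1"},
        {"EventType": "viewer-response",
         "LambdaFunctionARN": "arn:aws:lambda:us-east-1:111122223333:function:add-headers:1"},
    ],
}
```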
Now, the viewer request is executed on every request, before the CloudFront cache is checked. So of the four locations where you can put your Lambda function, any function you put at this specific location will be executed every time, before the CloudFront cache is checked. There are a lot of benefits to running a function at the viewer request. One of the benefits is modifying the URL; you can do cookie or query-string related modifications; and one of the very famous use cases, which is also easy to explain, is performing authentication and authorization checks. This can be understood with the following diagram, where you have a user agent over here and you have CloudFront.
As soon as CloudFront receives the request, the first thing that happens, before the CloudFront cache is checked, is the viewer request event. The viewer request event can look into the request that the user agent has sent and check whether it contains the appropriate password which is required to open the website. If the request contains the appropriate password, the request will be allowed and sent onward to the origin.
If the request does not have the password, then your viewer request event can send an HTTP 403 back to the user agent. This password authentication is one of the functions which a lot of organizations put at the viewer request level itself, because let’s say a user agent is requesting a specific file and that file is cached over here. It is possible that the request would never reach the origin; it would be answered from the cache itself. So it is very important, specifically if you have a kind of password authentication, to put your password-related check in the viewer request, so that as soon as the request arrives at the CloudFront level, you run your Lambda function and that function can determine whether the request should be allowed to pass or not, as in the sketch below.
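A minimal sketch of such a viewer-request function in Python; the header name and expected value are assumptions for illustration only.

```python
# Viewer-request trigger: password check before the cache is consulted.
def lambda_handler(event, context):
    request = event["Records"][0]["cf"]["request"]
    headers = request.get("headers", {})

    # Lambda@Edge lowercases header names; values are lists of key/value pairs.
    supplied = headers.get("x-site-password", [{}])[0].get("value")

    if supplied == "open-sesame":  # hypothetical shared secret
        return request  # allowed: continue toward the cache / origin

    # Not authorized: respond with HTTP 403 without ever touching the cache.
    return {
        "status": "403",
        "statusDescription": "Forbidden",
        "headers": {
            "content-type": [{"key": "Content-Type", "value": "text/plain"}],
        },
        "body": "Access denied.",
    }
```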
Now, the second part is the origin request. The origin request is the location where your Lambda function executes whenever there is a cache miss, before the request is forwarded to the origin. There are a lot of things you can do at this stage, like dynamically setting the origin based on the request headers. Let’s understand this part. This is the diagram of the origin request; if you notice, I have put a star over here so that it is easy to know which location we are discussing. Here we are assuming that the request first went to CloudFront, then through the viewer request, then to the CloudFront cache, and there was a cache miss. One important part to remember here: if there is a cache hit, CloudFront can deliver the data directly to the user agent, and the origin request and origin response might not execute at all. So whenever a cache miss occurs, the request traverses to the origin request over here. At the origin request, you can control a lot of parameters. One of the use cases you can implement at this level is dynamically setting the origin based on the request headers. It might happen that there are multiple origins; let’s say you have an S3 bucket and you have an EC2 instance. From this Lambda function, you can decide whether the request should go to the S3 bucket or to the EC2 instance based on the request headers. All of that control you have at the origin request level itself. You can also send a response back directly from the origin request; it is not necessary that the request hits the origin at all, depending upon the logic that you want to define over here. A sketch of header-based origin selection follows.
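A hedged sketch of an origin-request function that routes to a different custom origin based on a request header; the header name and domain names are placeholder assumptions.

```python
# Origin-request trigger: pick the origin dynamically from a request header.
def lambda_handler(event, context):
    request = event["Records"][0]["cf"]["request"]
    headers = request.get("headers", {})

    tier = headers.get("x-client-tier", [{}])[0].get("value", "standard")
    domain = "premium.example.com" if tier == "premium" else "standard.example.com"

    # Point the request at the chosen custom origin; the Host header must
    # be updated to match the new origin's domain name.
    request["origin"] = {
        "custom": {
            "domainName": domain,
            "port": 443,
            "protocol": "https",
            "path": "",
            "sslProtocols": ["TLSv1.2"],
            "readTimeout": 30,
            "keepaliveTimeout": 5,
            "customHeaders": {},
        }
    }
    request["headers"]["host"] = [{"key": "Host", "value": domain}]
    return request
```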
The next part that we need to discuss is the origin response. The origin response is executed on a cache miss, after a response is received from the origin. If we look at the diagram associated with the origin response, the origin response comes after the origin: the data which the origin sends back goes through the origin response stage over here. Again, this stage is very important and you will be able to achieve a lot of use cases with it. One is that you can modify the response headers, and you can also intercept and replace various 4xx and 5xx errors from the origin. Let’s say you have an origin here, it can be an EC2 instance, and somehow your application is down and not working at all, so it gives a 500 error back. At the origin response, you can have logic so that whenever you get a 500 response from the origin, it can either replace that response with a specific page saying that the website may be under maintenance, or it can send a 301 redirect to a backup origin for the request to be served. So this is also one critical stage; a sketch is shown below.
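A rough sketch of an origin-response function that intercepts 5xx errors from the origin and replaces them with a friendly maintenance page; the page content is an assumption, and a 301/302 with a Location header pointing at a backup origin would be the redirect variant.

```python
# Origin-response trigger: replace 5xx errors with a maintenance page.
def lambda_handler(event, context):
    response = event["Records"][0]["cf"]["response"]

    if response["status"].startswith("5"):
        response["status"] = "503"
        response["statusDescription"] = "Service Unavailable"
        response["headers"]["content-type"] = [
            {"key": "Content-Type", "value": "text/html"}
        ]
        response["body"] = (
            "<html><body><h1>We are down for maintenance.</h1></body></html>"
        )

    return response
```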
Now, the last stage is the viewer response. The viewer response is the function which gets executed on all the responses, whether they are served from the origin or from the cache. This again is a very important stage, because it might happen that after the viewer request, the request hits the CloudFront cache, and from the CloudFront cache itself the data goes back to the user agent.
In that case, the right-side part of the diagram might not get executed at all. So if you want certain functions to run for all the responses being sent back to the user agent, putting your function at the viewer response level is extremely important. One of the use cases a lot of organizations use the viewer response for is modifying the response headers without the modification being cached: if you set certain headers at the origin response, they may get stored in the CloudFront cache along with the object, but if you set them at the viewer response level, the change is applied after the cache, directly on what goes back to the user agent.
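A minimal sketch of such a viewer-response function; the specific headers stamped on are assumptions for illustration.

```python
# Viewer-response trigger: stamp headers onto every response sent to the
# user agent, whether it came from the cache or the origin. Because this
# runs after the cache, the added headers are not stored in the cached copy.
def lambda_handler(event, context):
    response = event["Records"][0]["cf"]["response"]
    headers = response["headers"]

    headers["strict-transport-security"] = [
        {"key": "Strict-Transport-Security", "value": "max-age=63072000"}
    ]
    headers["x-frame-options"] = [{"key": "X-Frame-Options", "value": "DENY"}]

    return response
```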