SAP-C02 Amazon AWS Certified Solutions Architect Professional – New Domain 5 – Continuous Improvement for Existing Solutions Part 5
40. Practical Demo – Lambda@Edge
Hey everyone and welcome back. In the earlier video we discussed Lambda@Edge, but mostly from a theoretical perspective, and with theory alone it becomes a little difficult to see what exactly an origin request is or how it looks in practice. So I'll quickly give you a demo of how Lambda@Edge looks once you configure it within your AWS environment. Lambda@Edge integrates with CloudFront. Here I have a sample CloudFront distribution and a simple Lambda function; this is the function that we will use as the Lambda@Edge function. Now, within this CloudFront distribution, if you go to the behaviors tab, there is currently one behavior. Let me quickly edit it.
Here you have the viewer protocol policy and various other settings, but if you look down at the Lambda function associations, there is a CloudFront event field. There are four event types: viewer request, viewer response, origin request, and origin response. Depending upon the use case your Lambda@Edge function serves, you will have to place it at one of these locations. In my case, this specific function is placed at the origin request. Now, if you look into my Lambda function and scroll down a bit, this is the code which is used as the Lambda@Edge function. Along with that, let me also show you the S3 bucket. If you look into the CloudFront distribution's origin, it points to the test7-website S3 bucket, and within the S3 console I do have a bucket called test7-website.
Within this bucket I have a directory called experiment-group, and inside it there are two image files. Now, if I directly open up the CloudFront distribution's domain name, you see an image automatically loads. This redirect is done by the Lambda@Edge function: if that function were not present, you would have to specify the entire URI yourself, something like /experiment-group/control-pixel.jpg. But since we have the Lambda@Edge function listening at the origin request, we do not really have to specify this URI; the function takes care of that.
Now, if you look into this function, it says that if the experiment URI is not present, it logs that the experiment cookie has not been found and throws a dice using a random function: the experiment URI becomes either pathExperimentA or pathExperimentB. If you go a bit up, pathExperimentA is the control image, /experiment-group/control-pixel.jpg, and pathExperimentB is the treatment image, /experiment-group/treatment-pixel.jpg. So whenever the page loads and the cookie is not found, one of these images is automatically picked by the Lambda@Edge function, and the image loads accordingly.
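To make that flow concrete, here is a minimal Python sketch of what such an origin-request function might look like. This is an illustration under assumptions, not the exact code from the demo: the cookie name X-Experiment and the 50/50 split are placeholders, and the object paths are modeled on the bucket layout we just saw.

```python
import random

# Hypothetical object paths modeled on the demo bucket layout.
PATH_EXPERIMENT_A = '/experiment-group/control-pixel.jpg'    # control
PATH_EXPERIMENT_B = '/experiment-group/treatment-pixel.jpg'  # treatment

def lambda_handler(event, context):
    # CloudFront hands the request object to the origin-request trigger.
    request = event['Records'][0]['cf']['request']

    # Check whether an experiment cookie was sent with the request
    # (the cookie name here is an assumption).
    cookies = request.get('headers', {}).get('cookie', [])
    has_cookie = any('X-Experiment' in c.get('value', '') for c in cookies)

    if not has_cookie:
        # No cookie: throw the dice and pick one of the two variants.
        print('Experiment cookie has not been found. Throwing dice...')
        request['uri'] = PATH_EXPERIMENT_A if random.random() < 0.5 else PATH_EXPERIMENT_B
        print('Setting request uri to ' + request['uri'])

    # Returning the request forwards it, with the rewritten URI, to the S3 origin.
    return request
```

Because this runs at the origin-request stage, CloudFront checks its cache first, so the dice is only thrown on cache misses.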
Now, for this Lambda function, if you go into Actions you have the option Deploy to Lambda@Edge, where you can specify the distribution ID and the CloudFront event: origin request, origin response, viewer request, or viewer response. All right, that's one thing. The second thing I wanted to show you, which you will typically run into when you work with Lambda@Edge, is logging. This specific line here outputs a console log, and that output lands in the region of the edge location where the request came in. Currently I am in Mumbai, and Mumbai does have a CloudFront edge location, so the log associated with the request that we made would be stored in CloudWatch Logs in the Mumbai region. Let's look into that as well; let me open up CloudWatch.
Within the CloudWatch console I do have a log group named /aws/lambda/us-east-1.edge, where the suffix is the Lambda function's name. If you look into the log, it states that the experiment cookie has not been found, that it is throwing the dice, and that it is setting the request URI to /experiment-group/control-pixel.jpg. So this URI has been set by the Lambda@Edge function; if you remember, we never set this specific URI ourselves. All we did was make a request to the CloudFront distribution, and that's about it.
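If you prefer to pull those edge logs programmatically, a small boto3 sketch like the following would work. The function name edge and the Mumbai region are taken from this demo; substitute your own.

```python
import boto3

# Lambda@Edge writes its logs in the region of the edge location that
# served the request, but the log group name is prefixed with the
# function's home region (us-east-1).
logs = boto3.client('logs', region_name='ap-south-1')  # Mumbai

response = logs.filter_log_events(logGroupName='/aws/lambda/us-east-1.edge')
for log_event in response['events']:
    print(log_event['message'])
```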
So once the request goes out, it reaches the Lambda function, which checks whether the URI is present. If it is not, the function generates one of the URIs using the random function, sets it on the request, and you get the output according to the URI set by the Lambda@Edge function. So this is the high-level overview of how Lambda@Edge might look. I hope this video has been informative for you, and I look forward to seeing you in the next video.
41. AWS Batch
Hey everyone and welcome back. In today's video, we'll be discussing the AWS Batch service. Before we understand the Batch service itself, let's understand what batch jobs are. A batch job is basically a collection, or you can also refer to it as a list, of commands that are processed in sequence, often without requiring user input or intervention. Generally, batch jobs accumulate during working hours and are then executed in the evening or whenever the computer is idle. One use case that I can share is Udemy. Udemy has thousands of students, and a student might put up a review after watching around four or five videos. That review will not immediately appear within the Udemy course; it takes around two or even three days to appear. It mostly runs as a batch job at night, and only then do those reviews become available at the course level on the website.
Now, in order to understand this better, let me show you how it might look in Windows. In Windows we have a service called Task Scheduler; let's open it up. Within Task Scheduler there are a lot of jobs, and for each it says whether the job is ready or running. If you open up any one job, let's take the CCleaner one, it has various sub-tabs. One is the trigger, which controls when exactly the job gets fired: daily, at startup, at one time, and so on. Within the actions, it states that this job will start a program, which is CCUpdate.exe. You also have conditions: whenever you have a job, you can define the conditions under which it should run.
If I go to the properties of this specific job and open the conditions tab, I can specify that the task should only start when the computer has been idle for a given number of minutes, or that it should only start if the computer is on AC power. So there is a lot of flexibility available. Now, for this type of service, which runs your predefined jobs, there are a lot of components involved. One of the major components is the task scheduler itself, and there are various others like triggers and conditions. AWS has something similar in its Batch service. A lot of organizations make use of open source as well as various commercial tools for their batch job processing in a cloud environment, and with the AWS Batch service this becomes much easier. So let's go ahead and do a practical, which will make it much easier to understand. This is the AWS Batch console. The first thing we'll do is click on Get started. This takes you to the define job screen; we'll leave everything as default here. Now, if you see here, these are the container properties.
AWS Batch extensively uses ECS behind the scenes, so you will have to specify a container image. By default it is the busybox image, but you can specify your own container which has the required dependencies to run the job; let's leave it as the default. Within the command field you have a simple echo hello. Below that, you can specify the number of vCPUs and the amount of memory that is needed; I'll just put the memory as 256.
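For reference, the same job definition can also be registered from code. Here is a boto3 sketch mirroring the console defaults above; the definition name is a hypothetical placeholder.

```python
import boto3

batch = boto3.client('batch')

# Mirrors the console defaults: busybox image, an "echo hello" command,
# 1 vCPU and 256 MiB of memory. The name is a made-up placeholder.
batch.register_job_definition(
    jobDefinitionName='first-run-job-definition',
    type='container',
    containerProperties={
        'image': 'busybox',
        'vcpus': 1,
        'memory': 256,
        'command': ['echo', 'hello'],
    },
)
```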
Now let's go to the next screen. Here you have the option to specify whether the provisioning model is on-demand or spot instances, and also the allowed instance types. The allowed instance type is currently optimal, which basically means that depending upon the number of vCPUs and the amount of RAM that you define, AWS Batch will launch EC2 instances according to the requirement. Let's do one thing: for maximum vCPUs, I do not want to keep 256, so I'll keep it as two. One important point to remember about AWS Batch is that T2 instances are not supported; that means t2.micro is not supported, and it will typically launch from the M4, C4, and R4 families. In other words, you will get charged even if you are on the free tier, so this is an important part to remember. This is the situation as of January 2019; AWS does plan to add T2 instances, but the dates are not confirmed yet. Below, you can specify the VPC and the subnets where your EC2 instances would get launched.
You also have the job queue over here; the job queue is nothing but the queue where jobs can be scheduled. Once you have defined all of these, we can go ahead and click Create. Once you do, it sets up a job queue and a job definition, as well as the compute environment, which is nothing but the EC2 instances where your jobs will be running. Here it says that everything has been successful.
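If you wanted to script what the wizard just did, it would look roughly like the following boto3 sketch. The names are made up, and the subnet, security group, and IAM role values are placeholders you would substitute with your own.

```python
import boto3

batch = boto3.client('batch')

# A managed compute environment: with 'optimal', Batch picks instance
# types from the M4/C4/R4 families, up to 2 vCPUs in this case.
batch.create_compute_environment(
    computeEnvironmentName='demo-compute-env',
    type='MANAGED',
    state='ENABLED',
    computeResources={
        'type': 'EC2',
        'minvCpus': 0,
        'maxvCpus': 2,
        'desiredvCpus': 0,
        'instanceTypes': ['optimal'],
        'subnets': ['subnet-0123456789abcdef0'],       # placeholder
        'securityGroupIds': ['sg-0123456789abcdef0'],  # placeholder
        'instanceRole': 'ecsInstanceRole',             # placeholder
    },
    serviceRole='arn:aws:iam::123456789012:role/AWSBatchServiceRole',  # placeholder
)

# The job queue that feeds jobs into the compute environment above.
batch.create_job_queue(
    jobQueueName='demo-job-queue',
    state='ENABLED',
    priority=1,
    computeEnvironmentOrder=[
        {'order': 1, 'computeEnvironment': 'demo-compute-env'},
    ],
)
```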
Let's go to the dashboard; this is how it looks. Now, if you go to Services and click on EC2, you should see that there is one EC2 instance up and running, of type m4.large. Depending upon the vCPUs that you specify, the instance type will differ on your side. Back in the AWS Batch service there are a lot of tabs available. Compute environments we already saw: this is nothing but the compute, the EC2 instances, which are up and running. The next important thing is jobs; once the EC2 instance is running, you might want to send a job which the EC2 instance will go ahead and process. So let's click on Submit job. I'll name it demo-job, and here you have to specify the job definition; we'll discuss job definitions in a moment, so I'll just select the first-run job definition. The job type is single, and the command is echo hello, which will get executed on the busybox container. I'll leave everything else as default and click on Submit job. It says the job has been successfully submitted; it now goes to the queue and will then be processed inside the EC2 instance. This particular job does nothing but an echo. Currently you see it has moved out of the submitted state; let's go to the succeeded stage, and here you see our demo job has succeeded.
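The same submission can be done from code; here is a small boto3 sketch using the placeholder names from the earlier sketches.

```python
import boto3

batch = boto3.client('batch')

# Submit the job against the queue and job definition created earlier.
response = batch.submit_job(
    jobName='demo-job',
    jobQueue='demo-job-queue',
    jobDefinition='first-run-job-definition',
)
job_id = response['jobId']

# Poll the status as the job moves SUBMITTED -> RUNNABLE -> RUNNING -> SUCCEEDED.
status = batch.describe_jobs(jobs=[job_id])['jobs'][0]['status']
print(job_id, status)
```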
Once the job has succeeded, you should see its output within CloudWatch. Let's go to CloudWatch logs; you should see two log streams, because two jobs have run. If I click on one of them, it shows Hello World; this is the output that was executed. Now, the job definition is where you specify what exactly needs to be done. Here you can set the container image: earlier it was busybox and the command was echo hello, but if your application requires a certain container image, or you have your own custom image, you can specify it here, along with the command. Once you have created a job definition, you then create the job: from the jobs tab, click on Submit job and specify which job definition to use and which job queue to send it to. So this is the high-level overview of the AWS Batch service. Now, if you want to terminate everything, because this is not under the free tier, the first thing you need to do is disable the job queue. Once you do, you see the status changes to updating, and if you click refresh, it is now valid and the state is disabled.
Once it is disabled, you can go ahead and delete your job queue, and once your job queue is deleted, you can disable and then delete the compute environment. If you try too early, it says "Cannot delete: found existing job queue relationships", which means the job queue is still deleting; if you want to delete your compute environment, you need to make sure the job queue is deleted first. Great, our job queue is now deleted, so let's try to delete the compute environment again, and now the status has changed to deleting. Once the compute environment is deleted, your EC2 instance is terminated, and you see the instance is now terminated. Scripted, the same teardown order would look like the sketch below. So this is the high-level overview of the AWS Batch service. I hope this video has been informative for you, and I look forward to seeing you in the next video.
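As a reference, here is the teardown order from this demo as a boto3 sketch, again with the placeholder names used earlier.

```python
import boto3

batch = boto3.client('batch')

# Disable and delete the job queue first...
batch.update_job_queue(jobQueue='demo-job-queue', state='DISABLED')
# ...wait for the state to become DISABLED before deleting...
batch.delete_job_queue(jobQueue='demo-job-queue')

# ...then disable and delete the compute environment. Deleting it while
# the job queue still exists fails with "found existing job queue
# relationships", as we saw in the console.
batch.update_compute_environment(computeEnvironment='demo-compute-env', state='DISABLED')
batch.delete_compute_environment(computeEnvironment='demo-compute-env')
```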