100% Real Amazon AWS Certified Cloud Practitioner Certification Exams Questions & Answers, Accurate & Verified By IT Experts
Instant Download, Free Fast Updates, 99.6% Pass Rate.
Download Free AWS Certified Cloud Practitioner Practice Test Questions VCE Files
Exam | Title | Files
---|---|---
AWS Certified Cloud Practitioner CLF-C02 | AWS Certified Cloud Practitioner CLF-C02 | 1
Amazon AWS Certified Cloud Practitioner Certification Exam Dumps & Practice Test Questions
Prepare with top-notch Amazon AWS Certified Cloud Practitioner certification practice test questions and answers, vce exam dumps, study guide, video training course from ExamCollection. All Amazon AWS Certified Cloud Practitioner certification exam dumps & practice test questions and answers are uploaded by users who have passed the exam themselves and formatted them into vce file format.
Hey everyone, and welcome back to the Knowledge Portal video series. Continuing our journey through the Elastic Block Store section, today we'll be speaking about EBS portability. So let's look into what it is. To recap, AWS Elastic Block Store is built on network-attached storage. We have already looked at the two types of storage that are available: one is the instance store, and the second is EBS. The instance store is directly connected to the storage devices that are part of the host on which the virtual machines reside. EBS, however, generally lives in a storage cluster that is separate from EC2, and these devices are network-attached to the EC2 instances. One more thing we have already looked into is that all of these EBS devices are replicated to ensure durability and availability. Now, since the storage device is attached over the network, it can be easily detached as well, which provides the functionality of portability. Because the storage devices are connected to the EC2 instance via the network, we can easily disconnect them and reconnect them to a different EC2 instance. So let's assume this is the EBS volume, and it is connected to EC2 instance one. Since it is network-attached, we can easily disconnect it from there and connect it to instance two. This is a very basic feature; it's not very difficult, and the fact that it is network-connected simplifies things even more. I generally use the analogy of a portable hard disk drive when we talk about the portability feature. If I connect this hard disk drive to laptop one, I can work with it, and once my work is done, I can easily disconnect it and reconnect it to laptop two. So whenever I need it, I can connect it to laptop one, disconnect it, and then connect it to laptop two.
So that feature is called the "portability feature", and the same concept applies to EBS volumes as well. Let us go to a practical scenario so that this concept becomes much clearer. This is something that really helps during troubleshooting, specifically when your SSH access goes down due to some misconfiguration. So let me go to the Volumes section and create a new volume. I'll give it a size of 5 GB. And if you see over here, we have to select an availability zone. There are three availability zones in the Oregon region. Now, one important thing to remember is that the EC2 instance and the EBS volume have to be in the same availability zone. So let's verify: we will be connecting to our first instance over here, which is running, and if you look into its availability zone, it is us-west-2b. Perfect. So let's create the volume in us-west-2b; if you create it in a different availability zone, attaching it will not work. I'll click on "Create Volume" with a size of 5 GB. Okay, it's creating. Perfect. As soon as it gets created, you will see the status "available". Available means it is ready to be attached to an instance. Just to make things simple, I'll name this volume "portable". Perfect. Now let me connect to the EC2 instance over SSH. Once connected, switch to root, and if you run lsblk over here, you will see that there is only one disk attached, which is xvda. This can also be seen in the console: there is only one root device, xvda, and if you look into block devices, there is only one block device, xvda.
So what happens if we connect this particular disk to the EC2 instance? Let me click on Attach Volume and attach it to my first instance. If you see the device name, it will be given as /dev/sdf; within the operating system it generally shows up with the naming convention xvdf. If I refresh, there should be one more block device listed, sdf. Just remember the last letter, "f" — that is important. Now, if I run lsblk again, ideally there should be one more disk called xvdf. Okay, so let me do this, and you will see I have one more disk called xvdf, and the size is 5 GB. Now, whenever you have a new hard disk — as many of you might remember from buying a new pen drive — it will generally ask you to format it with some kind of file system, like NTFS or FAT32 as far as Windows is concerned. Formatting basically creates a layout on your hard disk drive that defines how exactly your data will be stored. We will not discuss partitioning in more detail right now, but just remember: whenever you buy a new hard drive, you have to format it with a specific file system. Now, you don't really have file systems like NTFS in Linux; we generally use file systems like ext4 on a Linux box. So what we will be doing is formatting this particular disk with an ext4 file system. The command for that is mkfs.ext4 followed by the device name, which is /dev/xvdf. I press Enter, and you see it is creating a file system on this particular disk. The next step after creating a file system is to mount it to a specific directory. So I'll create a directory called /kplabs. What I'll do is mount this particular disk onto the folder that we have created, and we can do that with the mount command: mount /dev/xvdf /kplabs. Now it is mounted. If you want to verify, you can run df -h.
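The format-and-mount steps just narrated can be collected into a short script. This is a sketch of the lecture's commands, not something to run blindly: the device name /dev/xvdf and the mount point /kplabs are the demo's values (your device may differ), and the script prints the commands instead of executing them unless you set DRY_RUN=0 on a real instance.

```shell
# Sketch of the lecture's format-and-mount flow. Dry run by default:
# prints each command; set DRY_RUN=0 on a real EC2 instance to execute.
DEVICE=/dev/xvdf     # device name from the demo; may differ on your instance
MOUNTPOINT=/kplabs   # mount point used in the demo

run() {
  if [ "${DRY_RUN:-1}" = "1" ]; then
    echo "+ $*"      # dry run: just show the command
  else
    "$@"
  fi
}

run mkfs.ext4 "$DEVICE"            # create an ext4 file system (first attach only)
run mkdir -p "$MOUNTPOINT"         # directory to mount onto
run mount "$DEVICE" "$MOUNTPOINT"  # attach the disk to the directory
run df -h                          # verify the mount
```

Remember the lecture's warning: run mkfs.ext4 only the first time — reformatting an already-used volume destroys its data.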
And if you see, /dev/xvdf is mounted on the /kplabs directory that we created. I'll just clear the screen. Now, if you go into /kplabs, you'll see a "lost+found" folder; this is the default folder that comes with an ext4 file system. Now, let's do one thing: let's make a text file called kplabs.txt. I'll echo "ebs-portability-lecture" and save it to kplabs.txt. If you do a cat on it, you can see the sentence that we have created. Now, since this device is attached to the first instance, we will disconnect it from the first instance and connect the same disk to my third instance. This is very similar to the pen drive or hard disk that we discussed, where you disconnect a pen drive from one computer, attach it to a second computer, and find that the data on the pen drive remains the same. And this is exactly what we are trying to do. So let's come out of the directory — we have to be outside this particular directory for a successful unmount. Then I'll run umount /kplabs. Okay? When you run df -h, you'll notice that xvdf has been removed; that means the disk is disconnected from the directory. The second step is that, since this disk is still attached to the EC2 instance, we have to detach it. So I'll click on Detach, and the state should become "available". Currently it is "in use", so let's wait a few seconds. Now that the state is available, we can attach it to a different EC2 instance. This time I'll attach this particular EBS volume to my third instance, and I'll click on Attach. Remember that my third instance is also in the us-west-2b availability zone, and this is the reason why I'm able to attach it.
So let me copy the IP address, disconnect, and connect to the third EC2 instance. First, let me verify the attachment in the console — and this is the first thing you should be doing. It says it is in use, and if you refresh, you now see that /dev/sdf is attached to my third instance, while on my first instance the EBS volume is now disconnected. Perfect. (Someone is cooking a bit early today; even I'm hungry. Anyway, let's complete the lecture first so that I can go and have some food.) So if you run lsblk now, you will see that xvdf is attached; you see the disk, and the size is 5 GB. We'll follow the same procedure, but since this device is already formatted, we don't really have to format it again. Remember, formatting generally has to happen only the first time. So I'll create a directory called /kplabs2 and run the mount command: mount /dev/xvdf /kplabs2. Perfect. Let's verify: you can see that /dev/xvdf is mounted on /kplabs2. If I go into /kplabs2, you will find that the kplabs.txt file and the lost+found folder are already there, and if you do a cat on the file, you get the same sentence we wrote earlier. So I hope you got the basic concept of the portability feature of Elastic Block Store volumes. Just think of it as a pen drive that you can attach to one computer and, whenever you need, detach and attach to a second computer. So this is one of the features of Elastic Block Store. I hope this lecture has been informative for you, and I look forward to seeing you in the next lecture.
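The whole console workflow from this lecture can also be driven from the AWS CLI. The sketch below uses made-up placeholder volume and instance IDs, and the `aws` function is a print-only stub so the sketch is safe to run as-is; delete the stub to execute the calls for real with configured credentials.

```shell
# Stub: print each AWS CLI call instead of executing it. Remove this
# function to run the real AWS CLI (requires credentials).
aws() { echo "+ aws $*"; }

VOL=vol-0aaa1111bbb2222cc        # hypothetical volume ID
INSTANCE_1=i-0123456789aaaaaaa   # hypothetical first instance
INSTANCE_2=i-0123456789bbbbbbb   # hypothetical third instance

# Create a 5 GB volume in the same AZ as the instances (us-west-2b in the demo).
aws ec2 create-volume --size 5 --availability-zone us-west-2b

# Attach to the first instance as /dev/sdf (shows up as xvdf inside the OS).
aws ec2 attach-volume --volume-id "$VOL" --instance-id "$INSTANCE_1" --device /dev/sdf

# ...format, mount, write data, then umount on the instance...

# Detach, wait until the state is "available", then attach elsewhere.
aws ec2 detach-volume --volume-id "$VOL"
aws ec2 attach-volume --volume-id "$VOL" --instance-id "$INSTANCE_2" --device /dev/sdf
```

The same-AZ constraint from the lecture applies here too: attach-volume fails if the volume and instance are in different availability zones.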
Hey everyone, and welcome back. In today's video, we will be discussing the Elastic Load Balancer service that AWS offers. Typically, when you host a website on a single server and that server goes down, your website also goes down, and when your website is driving your business, you do not want a single point of failure. This is the reason why, for high availability, a lot of organisations have multiple servers, so that if one server goes down, the traffic can still be redirected to the other servers, which are up and running. Load balancers are an important component of that architecture, so let's go ahead and discuss more about that. As we were discussing, a single point of failure should always be avoided. In this type of architecture, you have two servers and the traffic is split 50/50: the first request goes to the first server, and the second request goes to the second server. In case the first server goes down, all of the requests can still go to the second server, and your website will still be up and running. However, if you look into this architecture carefully, there is still one single point of failure, which is the load balancer — the load balancer is itself just another server. There are numerous software load balancers that engineers can use; HAProxy is one of the famous ones. If you install a software load balancer on an EC2 instance and that load balancer goes down, then typically your entire website goes down. This is why many organisations use an architecture with two load balancer servers, both redirecting traffic to the underlying servers you may have. In this case, even if the first load balancer goes down, traffic can still be redirected through the second load balancer.
However, since AWS introduced the Elastic Load Balancing service, this type of architecture is no longer seen as frequently as it was a few years ago. Instead of running a software load balancer on two EC2 instances, you now use an Elastic Load Balancer or one of the other load balancer types provided by AWS. One of the great things about these load balancers is that they are managed services, so you don't really need to worry about the load balancers themselves going down. They can scale automatically depending upon the load — they can even scale down — and all of those configurations are managed by AWS. So this, in our case, is the Elastic Load Balancer. There are other offerings that AWS provides, like Application Load Balancers and even Network Load Balancers. So let's spend some time understanding what Amazon's ELB offering is. ELB stands for Elastic Load Balancer, and basically it allows us to distribute incoming traffic across multiple EC2 instances that we might have in our environment. One of the important features of ELB is that it is capable of handling rapid changes in network traffic patterns: Amazon, behind the scenes, handles the scalability of the ELB depending upon the traffic load that you have. And since ELB is a managed service, you do not have to worry about the high-availability aspect as far as the load balancer itself is concerned. So that's the theoretical aspect behind ELB; let's go ahead and understand it in a practical way through a demo. For the demo setup, what I have done is create two EC2 instances, and I have installed NGINX on both of them. So if we quickly open this up, let me copy the first IP into my browser, and it says "Welcome to Host One" on the Amazon Linux AMI. Similarly, I copy the IP address of the demo-2 server into my browser, and it says "Welcome to Host Two". So the first one is "Welcome to Host One", and the second one is "Welcome to Host Two".
Now, in order to verify whether ELB actually works: we have seen that ELB should redirect requests across multiple instances so that even if one instance goes down, your application can still be served from the second instance. If you go into the Load Balancers section on the left-hand side, you'll find that I've created a load balancer called kplabs-elb, and within this load balancer, under the Instances tab, I've attached both of the instances; as you can see from the status, both of these instances are "InService". So, if I go to the description, let me copy my load balancer's DNS name and paste it into the browser. The first thing it says is "Welcome to Host Two", so it has redirected the traffic to that host. Let me refresh, and this time you see "Welcome to Host One" — the second request went to host one. If I refresh once more, it goes to host two again, and once more, back to host one. So what the load balancer is doing is sending requests to both of the servers that are part of the load balancer's Instances tab. Now, as we were already considering, it might happen that one of the servers goes down due to some issue with the application, or a network issue, or a problem with the server itself. So let's do one thing: let me stop the demo-1 server, and let's see whether the load balancer still sends requests to this failed server. Our instance state is still "stopping", so let's go back to our load balancer. Let me copy the DNS name again and paste it in a different tab, and now you see the request has been redirected to host two, which is the demo-2 server. If I refresh once again, the request again goes to host two. Let me refresh a few more times, and every time the request goes to the demo-2 server; it never goes to the demo-1 server.
So this indicates that even if one server goes down, the load balancer will redirect all the requests to the second server, which is up and running. For our testing purposes we created that visible distinction between host one and host two; typically, in production, you will have the same application deployed across both servers. Now, if you quickly go to the load balancer after this host goes down — this is just for demonstration purposes — you see it is saying that the demo-1 server is "OutOfService". This basically means that requests will not be redirected to the demo-1 server, and all the requests will be sent to the server that has the status "InService".
Hey everyone, and welcome back. In today's video, we will be looking into the practical aspect of how you can create a load balancer in AWS that can redirect traffic across multiple servers. So, coming back to the EC2 console: in the earlier video we were discussing the kplabs-demo load balancer, which was redirecting traffic between the demo-1 and demo-2 servers. In order to have a similar demonstration, let's create our first load balancer practically. So I'll click Create Load Balancer, and basically there are various types of load balancers available. Currently we are more interested in a simple load balancer, which is the Classic Load Balancer, so I'll go ahead and do a create. Here you have to give the load balancer a name; let me name it kplabs-demo. Within the load balancer protocol you have port 80 — the load balancer needs to listen on a certain port, and that port will be 80 — and the instance port is where your NGINX is listening, which is also port 80. So this would be the default. If your NGINX is listening on, let's say, port 8080, then you have to change it over here. The next part is to assign security groups; we'll just select the default security group. Next, we'll do the health-check configuration, and the health check path currently is /index.html, so this file needs to be present. In the earlier video, we had seen that when we stopped the server, it went out of service. The reason it went out of service was that the load balancer was not able to fetch the index.html file from the server that went down. So whichever server you attach to the load balancer, you need to make sure that it has an index.html file, or if you have a file with a different name, make sure you change this specific setting. Once you have done this, you can click Next, and basically it asks you whether you want to add instances or not.
So currently, these are the instances that we had previously. Let's do one thing: let's go ahead and create two more instances for today's demo. I'll do a launch instance, and I'll be launching from the Amazon Linux AMI. We'll be picking the t2.micro size, and we'll be selecting the default VPC. We'll give it the default storage (8 GB is fine), do a review and launch, and I'll click on Launch, select the key pair through which I'll be doing the SSH to the server, and we'll launch the instances. Perfect. So now you see that there are two instances getting created; if I go back, you will see two instances in the initialization stage. Let's name these two instances: I'll call the first one Demo 1 and the second one Demo 2, and do a save. Let's just wait a moment for both instances to be running. Great — both of our instances are now in a running state. So let's connect to both of the instances, and we will be installing NGINX for our sample demo. Let's do a quick SSH with my key file, putting in the IP address of the Demo 1 server. So currently I'm logged in. Let's go ahead and quickly install NGINX with yum -y install nginx. Once NGINX is installed, we'll go to a directory called /usr/share/nginx/html, and within this you see there is a file called index.html over here. So let's do one thing: I'll do an echo on this file — I'll say echo "Demo 1" — and put it into the index.html file, and then I'll do a service nginx start. Perfect. Once you have done this, you can copy the IP address of the server and paste it into the browser. And it seems that the website has not loaded. The quick reason is the security group: if you look into the inbound rules, we only have SSH from 0.0.0.0/0 allowed. We'll add one more rule to allow port 80 as well, and I'll do a quick save.
Once you do that, let's quickly refresh, and after refreshing you see "Demo 1" appearing over here. So we need to do a similar setup for our Demo 2 server. Let me copy its IP address — we'll log in from the Demo 1 server, pasting the IP address of the Demo 2 server here. Perfect. The steps are basically the same: I'll run yum -y install nginx, then go to /usr/share/nginx/html, echo "Demo 2" into index.html, and start the nginx service. Perfect. Once you have done this, let's quickly verify: I'll put the IP address in my browser, and this time you see the Demo 2 instance. So now we have both of these instances ready. This was one of the prerequisites: before you have a working load balancer, you need to make sure that both of the instances are up and running, and whatever applications you have within those instances are also up and running. Once we have this, we can go ahead and configure the load balancer. So, coming back to the load balancer screen, if we go to "Add EC2 Instances", you see that the Demo 1 and Demo 2 instances are appearing. In case they do not appear for you, make sure you refresh your page — primarily because of the cache, you might see older entries. Once you have done that, you can click on Next, Review and Create, and Create. This will create a load balancer called kplabs-demo for you. And if you click within the Instances tab, you will see that there are two instances, and the status of both of them is "OutOfService", which basically means that the registration is still ongoing. As a result, both of them may take a few minutes to transition from "OutOfService" to "InService". So let's just quickly wait for a minute. Great — now both of the instances are saying that they are "InService", which means that the index.html health checks that we had set are working perfectly.
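The per-server setup repeated above boils down to three commands. The sketch below only prints them (the `run` stub) since they need root on an Amazon Linux instance; the document root is the Amazon Linux NGINX default, and the host label is the only thing that differs between the two servers.

```shell
# Bootstrap one demo backend. Print-only stub: shows each command
# instead of executing it (the real commands need root on the instance).
run() { echo "+ $*"; }

HOSTLABEL="Demo 1"   # use "Demo 2" on the second server

run yum -y install nginx                                        # install NGINX
run sh -c "echo $HOSTLABEL > /usr/share/nginx/html/index.html"  # distinct page per host
run service nginx start                                         # start the web server
# Also allow inbound port 80 in the instance's security group,
# or the page will not load in the browser (as seen in the demo).
```

Keeping a distinct page per host is only for demonstrating which backend answered; in production both servers would serve the same application.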
So once you have done that, let's copy the DNS name and put it in the browser. I'll paste the DNS here, and currently it is showing Demo 2 — that means the first request was sent to the Demo 2 server. Now let's do a refresh, and it went to Demo 1 this time. Let's refresh once more: again, it went to Demo 2. One more refresh, and it went to Demo 1. The load balancing is working perfectly, and this is how you can create an Elastic Load Balancer in AWS. So with this, we will conclude this video. I hope this has been informative for you, and I look forward to seeing you in the next video.
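The console wizard steps from this video map onto three classic-ELB CLI calls. As before, the `aws` function below is a print-only stub and the instance IDs are hypothetical placeholders; remove the stub to run the calls for real.

```shell
# Stub: print each AWS CLI call instead of executing it.
aws() { echo "+ aws $*"; }

# Classic load balancer matching the console steps; name from the demo.
aws elb create-load-balancer --load-balancer-name kplabs-demo \
  --listeners Protocol=HTTP,LoadBalancerPort=80,InstanceProtocol=HTTP,InstancePort=80 \
  --availability-zones us-west-2b

# Health check against /index.html, as configured in the wizard
# (threshold/interval values here are illustrative).
aws elb configure-health-check --load-balancer-name kplabs-demo \
  --health-check Target=HTTP:80/index.html,Interval=30,Timeout=5,UnhealthyThreshold=2,HealthyThreshold=2

# Register the two demo backends (hypothetical instance IDs).
aws elb register-instances-with-load-balancer --load-balancer-name kplabs-demo \
  --instances i-0aaa111122223333a i-0bbb444455556666b
```

Just as in the console, newly registered instances start "OutOfService" and flip to "InService" once the health checks pass.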
Hey everyone, and welcome back to the Knowledge Portal video series. In today's lecture, we will be speaking more about tagging strategies. Generally, whenever you create a resource — an EC2 instance, for instance — you will see that it asks you to add tags with related information. So I have my four instances, which are running currently, and if I go into the tags, I currently only have a single tag. However, in an enterprise organisation, having a good tagging strategy is extremely important. Let me give you an example: your AWS bill is skyrocketing, and the CFO has come to you and asked which team is using how much money in the AWS account. There can be multiple teams, like the Payments team and the Recon team, and the CFO wants to know how much resource each team is using as far as the monetary value is concerned. And you will not be able to answer that specific question unless and until you have a good tagging strategy. So what do I mean by this? Let me show you. Generally, in the organisations where I do consulting, we create tags for each and every resource that we create. The tag is called "Team", followed by the team name, and here you can have various different team names. So each and every resource that we create gets created with a specific tag; if the EC2 instance belongs to the Payments team, we add a tag of Team = Payments. So let's try this out. Within Add Tag, we can create a new tag called "Team", and here I'll mention Payments, and I'll click on Save. Within the second instance I'll add one more tag called "Team", and I'll say DevOps. So at the end of the month, if you have a proper billing setup, you can look into how much money the resources belonging to a specific tag have cost.
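The same tags can be applied from the AWS CLI instead of clicking through the console. The instance IDs below are hypothetical placeholders, and the `aws` function is again a print-only stub so the sketch is safe to run as-is.

```shell
# Stub: print each AWS CLI call instead of executing it.
aws() { echo "+ aws $*"; }

# Tag each instance with its owning team (hypothetical instance IDs).
aws ec2 create-tags --resources i-0aaa111122223333a --tags Key=Team,Value=Payments
aws ec2 create-tags --resources i-0bbb444455556666b --tags Key=Team,Value=DevOps
```

In practice you would script this over every resource a team owns — the whole point of the lecture is that only consistently applied tags make the bill attributable.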
So, if you want to know how much money you paid for the resources with the tag "Team" and the value "Payments", this becomes much easier to look into. Now, once we have tagged all of our resources, you have to enable billing to include tagging-related information as well. So how can we do that? If I go to my billing dashboard and then to Preferences, here you have the option of receiving billing reports. Let me enable this, and here you put the S3 bucket name. So let's try this out: I'll go to S3 and click on Create a new bucket. I'll name it kplabs-billing and click on Save. Along with that, you need to apply a bucket policy. So just click over here, and this will generate a sample bucket policy for you; copy this bucket policy and paste it into the S3 bucket's policy editor. So this is the bucket policy that got generated; I'll paste it here, and back in the billing preferences I'll click on Verify, and it verifies successfully. Now, whatever report gets generated will be uploaded to this S3 bucket. What should it contain? Here there is one very important option, which is the detailed billing report with resources and tags. So now, whatever billing report AWS saves to the S3 bucket will contain a wealth of information about which tags are costing us how much money. Next, click on "Manage report tags" here — you need to activate certain tags. We had created a tag for the team, so I'll select "Team" and click on Activate. So let me just activate this tag — and it has been activated. Perfect. So this is something that you need to do. Now, since we have not yet saved the preferences, let me go back, set the bucket to kplabs-billing, quickly verify that our tags are activated, and click on Save Preferences.
So after this, once your billing cycle has completed and AWS uploads your data to S3, it will contain information related to the tags as well. So this is it for this lecture. In the upcoming lecture, we will look into what exactly that report looks like. Since I don't have the report right now — I just activated our first tag for our AWS account — I'll upload the video once the file has been uploaded to the S3 bucket. Now, a few important things to remember: depending on your organisation, you can have multiple tags. Here I have a tag of "Team", which helps us understand which team is using more resources. This used to help us a lot in one of the organisations that I was working in, because the developers there used to blindly tell DevOps that they needed ten instances, or fifteen instances, and at the end of the month the bill for the resources that DevOps had provisioned for that team came to more than $5,000 to $6,000. So when the CFO came to the DevOps team and asked, "Why so much money?", the DevOps team could clearly demonstrate that $5,000 or $6,000 of the total AWS bill belonged to only one team. So this is how it really helps you, and that is why you should have an effective tagging strategy. A tagging strategy will also help you if you have just a single account: you can have tags for prod, stage, and dev, so that at the end of the month you know how much money each of these environments is costing you. So this is it for this lecture. I hope this has been informative for you, and I look forward to seeing you in the next lecture.
Hey everyone, and welcome back to the Knowledge Portal video series. In the earlier lecture, we were looking into how we can activate the detailed billing reports, which include the tags that we generate for our EC2 resources, or for our AWS resources in general. So if you look here in the report settings, we have activated the specific option of a detailed billing report with resources and tags, so whatever report gets generated at the end of the month will also carry the tags that are associated with the AWS resources. We are saving this detailed billing report to a bucket called kplabs-billing; this is the S3 bucket where the report will be stored. So let's go to S3 and look into the bucket. Here is kplabs-billing, and you will see that AWS has dumped the CSV files for the reports of various months: December, January, as well as February. Let's download the January billing report and look into what exactly the contents of this detailed billing report are. Perfect — the billing report has been downloaded. It's in zip format, so I'll just extract it, and we get the CSV file. I'll open this up, and you'll see this is the detailed billing report, which gives you a lot of information. It is quite wide, so we won't be able to see everything at 100% zoom; let me reduce the zoom to a certain extent so that more becomes visible. Perfect. Now, the last column that you see over here — let me widen the important columns — is specific to the tags that we created. Let me widen it again so it becomes much clearer.
So we have created tags for a lot of AWS resources, including EBS volumes, S3 buckets, and EC2 instances, and we divided them into teams: Team = Payments and Team = DevOps. At the end of the month, if you see over here, I have certain resources that carry the Payments tag; these resources belong to the Payments team. If you go a bit down, again, this is a resource that belongs to the Payments team, and if you see on the left-hand side, this is the resource that is owned by this team — in this case an EBS volume. If you go a little lower, you now have certain resources where Team = DevOps. There can be multiple teams in an organisation; for our testing purposes we formed two, DevOps and Payments. So at the end of the month we can actually know: this specific resource associated with the tag Team = DevOps is an EC2 instance ID, and you also have the cost that is associated with each specific resource. This becomes quite a valuable asset for the finance team, because they come to know exactly which resources belong to which team and how much cost is being incurred for the organisation. And this is something that a solutions architect should definitely know. So this is it for the detailed billing report with tags. I hope this has been informative for you, and I look forward to seeing you in the next lecture.
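Once the detailed billing CSV is downloaded, summing the cost per team tag is a one-liner. The sample below uses a made-up, simplified three-column layout — the real report has many more columns, with the tag appearing in a `user:Team` column — so treat the column positions here as assumptions to adapt to the actual file.

```shell
# Simplified, made-up sample of a detailed billing report
# (the real AWS report has many more columns).
cat > /tmp/billing_sample.csv <<'EOF'
ResourceId,Cost,user:Team
i-111,10.50,Payments
i-222,4.25,DevOps
vol-333,2.00,Payments
EOF

# Sum column 2 (cost) grouped by column 3 (the Team tag), skipping the header.
awk -F, 'NR > 1 { sum[$3] += $2 }
         END { for (t in sum) printf "%s %.2f\n", t, sum[t] }' \
    /tmp/billing_sample.csv | sort
```

For this sample the output is one total per team: `DevOps 4.25` and `Payments 12.50` — exactly the per-team breakdown the finance team asks for.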