Cisco CCNA 200-301 – Cloud Computing Part 2
4. Cloud Computing Case Study
In this lecture we’re going to work through a case study where you can see cloud computing in action with a live demo. The case study is what you can see on the slide here. Let’s say we’re a startup and we want to deploy a new three-tier e-commerce application, so we’re going to be selling something online. Three-tier means the first tier is our front-end web servers, where customers connect in over the internet. Traffic from there goes back to the application middleware servers, which process the orders. And at the back end, the last tier, we’ve got our database servers, which hold the inventory, customer information, et cetera. So it’s a pretty standard three-tier e-commerce application.
Now we’ve got choices of how we’re going to deploy this. Let’s say we want a bulletproof data center to house our application in. We could build our own data center, buy all of our own equipment, and provision and configure it all ourselves. Obviously, that would take months, if not years: to find the site, to have the building built, to make sure there’s redundant power and cooling in there, then to buy all of the network equipment, the storage equipment and the servers, get everything onto the site, have it all physically connected together, configured, and tested.
It would take literally months to do that. And it’s not just new companies like a startup; an existing company deploying a new application faces the same choices. If it’s a mission-critical application, they’ll want it in a highly available data center. Maybe they could put it into their existing facility, but that facility would have to be a really hardened data center to house this application. If not, they’d be building their own facility, with all of the same problems. Another thing is that a new startup is going to have to find staff that know how to do this, and they’ll want to be confident those staff can do it in a best-practice manner. So that’s one way you can do it, which is going to take months, be hugely expensive, and be mostly a capital expenditure. The other way is to deploy it in a cloud solution. When you do that, you’re deploying it in somebody else’s data center. The data center is already built, and you know it’s been built to best practice: everything in there is redundant and highly available. And you can get it provisioned and up and running in literally hours instead of months or years. So let’s have a look at how we’re going to do that.
For the example, I’ll use AWS, Amazon Web Services, because they are by far the most popular cloud provider for this type of service. I’ve already opened up the AWS console, and for this we want Infrastructure as a Service, IaaS. I’ll explain the different service models later in this lecture. IaaS is probably what Amazon is best known for; they’re known for their other products as well, but it’s their most popular. Their solution for this is EC2, which stands for Elastic Compute Cloud. You’ll see that Amazon provides all the other types of cloud services too: you can provision storage in their cloud, run database applications, do application development there, et cetera. But for this example we’re going to use EC2, because we’re going to be provisioning servers in the Amazon cloud, and we also want to have our firewalls and our load balancers there as well.
But let me just go back to the diagram to explain while this page is loading. I talked about the three-tier application with the front-end web servers, the application middleware and the back-end database servers. In front of that, obviously, we want a firewall facing out to the Internet. And we’ve got a farm of web servers here, all basically identical and running the same application, so when traffic comes in we want to load balance across those different servers. That’s why we’ve got load balancers in front there.
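As a rough illustration of what the load balancers are doing, here’s a minimal round-robin sketch, a common default algorithm. The server names are hypothetical, and real load balancers also do things like health checks and session persistence:

```python
from itertools import cycle

# A hypothetical farm of identical front-end web servers.
web_servers = ["web-1", "web-2", "web-3", "web-4"]

# Round-robin: each incoming request is handed to the
# next server in the rotation, spreading the load evenly.
rotation = cycle(web_servers)

def route_request(request_id):
    server = next(rotation)
    return f"request {request_id} -> {server}"

for i in range(6):
    print(route_request(i))
```

With four servers, request 4 wraps back around to web-1, which is the whole point: no single server takes all the traffic.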
We’ll see that for this particular application, sending traffic back from the front-end web servers is handled within the application itself. Also notice that we’ve got no single points of failure here; we’ve got redundant components for everything. So let’s go and have a look at an example of how we would configure this. The servers are going to run as virtual machines, as instances in AWS. This uses virtualization, which I’m going to talk about in another lecture later in this section. Let’s see how to do it. I’ll click on Launch Instance and configure my web servers first. The first thing it asks me is what operating system I want to run on those servers, and it gives me the option of lots of different flavors of Linux.
I could also run them on Windows if I wanted to. I’m just doing a demo here, so I’ll take the cheap option, which is the Amazon Linux AMI. When I say it’s cheap, I mean you don’t have to pay for licensing like you do with Windows; there’s nothing wrong with it, it’s a good operating system. The next thing is I need to specify the CPU and the amount of memory in those servers. Again, I’ll just take the lowest one here because it’s just a demo, with one virtual CPU and one gig of RAM. If we scroll down, you can see that you can get really powerful servers, for example one with 40 virtual CPUs and 160 gig of RAM. Obviously, the more powerful a server you want, the more it’s going to cost you. I then click Next to configure the instance details, and in here I can specify how many instances I want. In our example we were starting off with four web servers, but you could specify however many you want.
This is just for the web servers, because these are all going to be identical; I would provision the middleware servers and the database servers separately, because they’re going to have different specifications. So I would put four in here, though I’ll just leave it as one for the demo. Also notice it’s asking if I want to configure an auto scaling group. AWS can monitor how busy the servers are, and if a metric goes above a certain threshold, for example the number of connections coming in or how busy the CPU is, it can automatically spin up additional servers. That’s the elasticity we were talking about before. You can manually add servers on demand any time you want, or you can have it done automatically for you as well.
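The auto scaling behavior described above can be sketched as a simple threshold check. The thresholds and server bounds here are illustrative numbers, not AWS defaults:

```python
def scaling_decision(cpu_percent, current_count,
                     scale_up_at=70, scale_down_at=30,
                     min_servers=2, max_servers=10):
    """Threshold-based auto scaling sketch: add a server when average
    CPU is high, remove one when it is low, staying within bounds."""
    if cpu_percent > scale_up_at and current_count < max_servers:
        return current_count + 1      # demand is high: scale out
    if cpu_percent < scale_down_at and current_count > min_servers:
        return current_count - 1      # demand is low: scale in
    return current_count              # within range: no change

print(scaling_decision(85, 4))  # busy -> 5 servers
print(scaling_decision(20, 4))  # idle -> 3 servers
print(scaling_decision(50, 4))  # in range -> stays at 4
```

A real auto scaling group evaluates metrics like this continuously and launches or terminates instances for you, which is the elasticity being described.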
So based on the current demand, additional servers can be spun up; and if the demand drops below a threshold, servers can be shut down again. It’s very elastic, and you can automate it. The next thing is the network configuration. Looking at our example, we’re going to need three different networks here. I was running out of space in the diagram, but I’d probably put firewalls in between each of these tiers as well. The web servers, the middleware servers and the back-end database servers are all going to be in different VLANs and different IP subnets.
So I would configure that with the network settings. Then I can click Next and add storage. Here I say how much storage I want for the different servers, whether I want to use local disks inside the servers or an external SAN, and I can also get performance guarantees for my storage. I’ll just accept the default. On the next page you can add tags, which is useful for keeping things organized if you’ve got a lot of servers. Then on the next page is the security group, and this is where I configure my firewall rules: I specify the type of traffic that I want to allow in to those different servers. I then click Review and Launch, where I can view a summary of the specifications of my server, and then click Launch. In around ten or fifteen minutes, those servers will be up and ready. So you can see the difference here from a traditional on-premises or colo model, where it takes over a week to get new servers online. When you’re using cloud, you can have new servers up and running in literally fifteen minutes. It’s super fast, and you can spin up additional ones quickly whenever you need to as well.
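The per-tier subnet plan can be sketched with Python’s standard `ipaddress` module. The 10.0.0.0/16 parent range and the /24 subnet size are hypothetical choices for illustration:

```python
import ipaddress

# Carve one subnet per tier out of a hypothetical 10.0.0.0/16 cloud
# network range, so the web, middleware and database tiers each sit
# in their own IP subnet (and, on premises, their own VLAN).
vpc = ipaddress.ip_network("10.0.0.0/16")
tiers = ["web", "middleware", "database"]
subnets = dict(zip(tiers, vpc.subnets(new_prefix=24)))

for tier, net in subnets.items():
    # Subtract network and broadcast addresses for usable host count.
    print(f"{tier:<10} {net}   ({net.num_addresses - 2} usable hosts)")
```

Separating the tiers like this is what lets you put a firewall policy between each pair of subnets, as mentioned for the diagram.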
Let’s check that against the characteristics of cloud that we covered in the last lecture; I’ve got them on the next slide. The first one is on-demand self-service: I did not need to raise a ticket and have somebody manually configure everything for me. It was all automated, on demand, and through a self-service web-based portal, which is what you’ll usually see for cloud services. The next one is rapid elasticity: you saw that I could provision servers very quickly, and capacity can automatically scale up and down in line with demand. The next one is broad network access: when that server is finished being built, I’ll be able to access it from inside my own office, or from out on the Internet. The next one is resource pooling, which I’m going to cover in more depth as we go through the rest of this section.
Those virtual servers are not running on their own separate, dedicated server hardware; they’re shared with other customers as well, and the benefit I get from that is lower cost. And then finally there’s measured service: based on how much virtual CPU power I wanted, the amount of memory, the amount of storage and other characteristics, I’m going to be billed for what I provisioned, and I’ll get a monthly bill for that. So it’s an ongoing opex cost rather than a large upfront capex spend at the start. Okay, let’s look at some other things while I’m here, just to point this out, because when we get into software defined networking I’m going to talk about this some more. If I go back to how we do it traditionally, there are different teams that are going to be involved in getting this set up.
You’ve got the server team, who are going to install the operating system, install any patches, and install the applications. You’ve got the networking team, who are going to configure the VLANs, the routing, the firewalls and the load balancers. And you’ve got the storage team, who are going to provision the storage; in a large data center it’s probably going to be external storage, so they’re going to have to configure the storage for this particular server and secure it as well. That’s all very manual, and it makes things more time consuming. When we used cloud, I specified everything I wanted through a really easy-to-use self-service portal. While we’re in here, let’s have a quick look again: I specified the amount of CPU and memory I wanted.
And on the page before that, I specified the operating system I wanted. After I click Review and Launch, it does not go back to somebody who is going to configure everything manually; it’s all automated. So we’ve got this front-end software where I specify the operating system, the actual specs of the server, the networking details and the storage details, and actually spinning up the server is all automated.
That front-end software talks to software behind it that can in turn talk to the different server, storage and networking components and automatically provision everything. That’s how we can be up and running in around fifteen minutes. I could also configure other security settings in here as well. So this just gives you an idea of the standard kind of services that we use with cloud. Okay, that will do for this lecture. See you in the next one.
5. Server Virtualization
In this lecture you’ll learn about virtualization, which is one of the main enablers of cloud computing. It allows for resource pooling, where multiple customers or internal departments share the same underlying hardware. Virtualization has actually been around a lot longer than cloud computing has. This lecture focuses on server virtualization, because that was the first type available, but the same principles can be applied to virtualized network infrastructure devices as well, and I’ll give you some examples of those in the next lecture. Looking at our diagram here, this is the three-tier e-commerce application that we worked through as a case study in the previous lecture. You can see that we’ve got the firewalls at the front connecting to the Internet.
Behind them we’ve got load balancers that balance the incoming connections across our front-end web servers. Behind those we’ve got the application middleware servers, and at the back end we’ve got the database servers. Everything is tied together with our network infrastructure devices, our routers and switches. When we build this solution in the cloud, this is how it’s going to look and behave logically, but it’s not actually running on dedicated physical hardware devices; the underlying devices are going to be shared with other customers or departments. The cloud provider does not provision separate, dedicated physical hardware for every customer. A customer can sometimes deploy selected dedicated hardware at additional cost, depending on the cloud provider you go with: some will give you that option, some will not.
So let’s look at what we had before virtualization. Say we’re looking at a small company, and we go down to their server room and find the rack. In the rack they’ve got, for example, a web server, a database server and an email server. For consistent power there will be UPSs, uninterruptible power supplies, and for network connectivity there will be switches, a router and a firewall in the rack. Notice that everything here is running on separate, dedicated hardware. If we look at the architecture of this, the blue box represents a physical server. Down at the bottom of the server we’ve got the physical components, like the processor, the RAM, the network card, the hard drive, et cetera. The operating system runs on top of that, and on top of the operating system we install the application.
In our example, the first server is our email server, so that’s how the email server is going to look from an architectural point of view. We’re going to have the same for the database server and the same for the web server, and all three of them are running on their own separate physical boxes. A problem with this is that if you look at the server utilization, it’s normally going to be down around 15% to 20%. By utilization I mean how busy the CPU is, how much memory we’re using, how much of the potential of the hard drive, the network card and so on we’re using. So we’re really not getting good utilization at all; a lot of the potential power of each server is being wasted. You have to buy each separate server, and they’re all using power, space and cooling, which is costing money as well.
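To put rough numbers on that waste, here is a small calculation using the lecture’s illustrative 15% figure for each dedicated box:

```python
# Three dedicated servers, each running at roughly 15% utilization
# (the lecture's ballpark figure for typical dedicated hardware).
servers = {"email": 15, "database": 15, "web": 15}

# Capacity sitting idle across the three dedicated boxes.
dedicated_idle = sum(100 - u for u in servers.values())

# If the same three workloads ran on one shared host, its
# utilization would be roughly the sum of the individual loads.
consolidated = sum(servers.values())

print(f"Idle capacity across 3 dedicated boxes: {dedicated_idle}% worth")
print(f"One consolidated host would run at about {consolidated}%")
```

So consolidation takes one host from around 15% busy to around 45% busy, which is exactly the efficiency argument the next paragraphs make.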
So it’s really not cost efficient. How could we get better efficiency from our servers? The first way you might think of, looking at the box here again, a single server: we’ve got the CPU, the RAM and the NIC at the bottom, and we install the operating system on the server. We could then run our email server application, our database server application and our web server application all on top of the same operating system on the same box. Putting multiple applications on the same server like this would give us better utilization: if each was running at 15% before, maybe we’re up at 45% utilization now. But it’s terrible practice, because if we have a problem with any one of those applications, it’s liable to crash all of them. So you never do this. Is there a better way to do it? The answer is yes.
We can use server virtualization. This is going to look a little similar to the previous example, but you’ll see there’s a key difference. The blue box is our single physical server again, with the physical components down at the bottom: the CPU, the RAM, the NIC, et cetera. Rather than installing a normal operating system like Windows or Linux on there, we install a hypervisor on top of the physical hardware. I’ll talk a bit more about what the hypervisor does in a second; for now, you can think of it as acting as the operating system. Then on top of the hypervisor we install a virtual machine, and that virtual machine has got its own operating system, which is Windows in this example, and also the application, which was our email server. We then install a second virtual machine, which has got its own separate operating system, Windows again in this example, but a separate instance of Windows.
On top of that operating system we install our second application, the database server. Then we install our third virtual machine, with its own operating system, which is Linux for this example, and on top of that we install the web server. You can see the difference from the previous example: there we had one operating system and ran all of our applications on top of it, whereas here we’ve got three separate instances of the operating system, and each of these separate virtual machines acts and behaves as if it was a separate physical server. So if there’s a problem in one of the applications, it only affects that virtual machine, not the others. We don’t have the bad-practice issue from the previous example, but we get all the benefits of running multiple virtual machines on top of one piece of physical hardware.
If you look at the utilization, you’re going to get much better utilization, which gives you much better cost efficiency. Okay, that example was a type 1, bare-metal hypervisor; type 1 hypervisors run directly on the system hardware. Let’s just go back again: you can see the hypervisor running on top of the hardware, acting as the operating system. The other thing it does is give the virtual machines running on top of it access to that underlying hardware; it’s in charge of making sure they all get their fair share of it. Popular type 1 hypervisor software includes VMware ESXi, which runs on the physical box itself. The physical server is known as the host, and ESXi is the operating system on the host; it’s part of the vSphere suite from VMware. Other popular type 1 hypervisors are Microsoft Hyper-V, Red Hat KVM, Oracle VM Server and Citrix XenServer. There are also type 2 hypervisors, which are not used in the data center, but I’m going to cover them here to save any confusion. Type 2 hypervisors run on top of a host operating system. Examples are VMware Workstation for Windows, VMware Player, VMware Fusion for Mac, VirtualBox, QEMU and Parallels. So let’s look at the different architecture with a type 2 hypervisor. Here we’ve got the physical box, which is probably going to be your workstation or your laptop, with the underlying physical resources. On top of that, we don’t install the hypervisor directly onto the hardware; we’ve got our normal desktop operating system, like Windows, Linux or macOS. On that desktop operating system we’ve got our normal applications, like Microsoft Office and our web browser, and also on top of the desktop operating system we install the hypervisor.
Within that hypervisor we can run our different virtual machines, which have their own separate operating systems and applications. The laptop that I’m recording this on has actually got a type 2 hypervisor installed: I’ve got VMware Workstation Player on here, and I’ve already started up a virtual machine, so I’ll just open that for you. You can see I’m running Windows, but my virtual machine is actually a Linux box. As for what this is useful for: well, I’m an instructor. Say I go to a customer site to do some training on Linux this week. I don’t want to carry a Windows laptop with my presentation on it and a separate Linux laptop for the demonstrations. By running a type 2 hypervisor, I take just one laptop: I can use the Windows operating system to show my PowerPoint presentations, and within VMware I can open up my Linux virtual machines to demonstrate on as well. So a type 2 hypervisor is really useful for lab tests and lab demos, that kind of thing. Let’s compare the two. With the type 1 hypervisor, the hypervisor runs directly on top of the hardware, and our virtual machines run on top of it. With the type 2 hypervisor, we’ve got a normal desktop operating system on top of the hardware, our other normal applications, and a hypervisor installed in the operating system as well.
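The hypervisor’s job of making sure each VM gets its fair share of the underlying hardware, mentioned earlier, can be sketched as a proportional-share allocator. The VM names, weights and core count here are hypothetical, and real hypervisor schedulers are far more sophisticated:

```python
def fair_share(total_vcpus, vm_weights):
    """Proportional-share sketch: divide the host's CPU capacity
    among VMs in proportion to each VM's configured weight."""
    total_weight = sum(vm_weights.values())
    return {vm: total_vcpus * w / total_weight
            for vm, w in vm_weights.items()}

# Three VMs sharing a hypothetical 8-core host; the database VM
# is weighted double because it is the busiest workload.
shares = fair_share(8, {"email-vm": 1, "db-vm": 2, "web-vm": 1})
print(shares)
```

The key idea is that no VM can starve the others: each one’s slice is bounded by its weight relative to the total.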
So where would you use one or the other? You would use a type 1 hypervisor in a data center, because there are fewer layers between the virtual machines and the hardware. It’s really built for data center environments, where you’re dedicating that physical host to running data center servers, and because there’s no extra layer, you’re going to get better performance. With a type 2 hypervisor there is an extra layer between the virtual machines and the underlying hardware: we’ve got both the hypervisor and a normal desktop operating system. So that would be used when you’ve got a normal laptop; that’s what you’re normally going to be using it for.
And you also want to do some lab testing or lab demos as well, with the convenience of being able to run them on your laptop. Okay, that was server virtualization. Let me just skip back a whole bunch of slides to the case study we covered earlier. It must be just about there... okay, here we go. Going back to our case study, you see the servers: our front-end web servers, our application middleware servers and our back-end database servers.
Those are not dedicated to us and us alone. As a customer of the cloud provider, our servers are going to be running as virtual machines, and other customers’ virtual machines will be running on the same underlying hardware as well. That obviously cuts costs, and the cloud provider can pass those savings on to us. Some cloud providers do allow you to have server hardware dedicated just to you, but it’s a much more expensive option. That’s everything we needed to cover here. See you in the next lecture, where I’ll talk about virtualizing network devices.