Cisco CCNA 200-301 – Cloud Computing Part 3
6. Virtualizing Network Devices
You'll learn about virtualizing network devices. The first thing I want to cover here is how networking works on Type 1 hypervisors like VMware vSphere ESXi. Those are the Type 1 hypervisors we were talking about in the last lecture. Before I show you how that works, here's a quick recap of how switched networking works in traditional networks. In the example here, we've got a switch with two servers plugged into it. The servers here are not virtualized; these are bare metal servers. A bare metal server means a server where the operating system is running directly on the hardware, not on a hypervisor. So we've got Server 1 in VLAN 20 with IP address 10.10.20.10, and we've got Server 2 running another application in VLAN 30 with IP address 10.10.30.10. On the switch, we configure the physical port connected to Server 1 as an access port in VLAN 20, and the physical port connected to Server 2 as an access port in VLAN 30.
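As a minimal sketch, that access port configuration on the switch could look something like this (the interface numbers are just assumed for illustration, the lecture doesn't name the switch ports):

! Port to Server 1, access port in VLAN 20
interface GigabitEthernet0/1
 switchport mode access
 switchport access vlan 20
!
! Port to Server 2, access port in VLAN 30
interface GigabitEthernet0/2
 switchport mode access
 switchport access vlan 30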
So right now we've got the Layer 2 setup done. But if we wanted those two servers to be able to communicate with each other, they're in different Layer 3 subnets, so we would need a router for that. Let's add a router. This is a simple router on a stick configuration. We've got sub-interface GigabitEthernet0/1.20 with IP address 10.10.20.1, which is the default gateway for Server 1, and the other sub-interface, GigabitEthernet0/1.30 with IP address 10.10.30.1, is the default gateway for Server 2. Now those two servers are able to communicate with each other. So you already know how that works on traditional switched networks. Let's see how things work when we've got multiple virtual machines running on top of a virtualized host. The blue box here is a single physical host, and we've got two virtual machines running on it. They're the same as the servers we had in the last slide, but now they're running as virtual machines.
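Before we look at the virtual machines in detail, here's a minimal sketch of that router on a stick configuration we just described, assuming /24 subnet masks and that the router connects to the switch on GigabitEthernet0/1:

! Physical interface facing the switch, no IP address needed here
interface GigabitEthernet0/1
 no shutdown
!
! Sub-interface for VLAN 20, default gateway for Server 1
interface GigabitEthernet0/1.20
 encapsulation dot1Q 20
 ip address 10.10.20.1 255.255.255.0
!
! Sub-interface for VLAN 30, default gateway for Server 2
interface GigabitEthernet0/1.30
 encapsulation dot1Q 30
 ip address 10.10.30.1 255.255.255.0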
So we've got Virtual Machine 1, the same as before, in VLAN 20 with IP address 10.10.20.10, and Virtual Machine 2 in VLAN 30 with IP address 10.10.30.10. We're also going to need an IP address on the host, on the underlying server hardware itself, to be able to manage it with our hypervisor software. For that we've got a management IP address of 10.10.10.10 on there, and that's in VLAN 10.
Now the problem we have in this example is that the one physical box is connected up to the physical switch with a single cable. So when traffic goes down to that host, how does the host know whether to send it to the management IP address, to Virtual Machine 1 or to Virtual Machine 2, which are all in different VLANs? How are the VLANs going to work? Well, the way it works is it uses a virtual switch. You see the switch up at the top; that's an actual physical switch outside the box, which the box is connected to with a physical cable. The switch that is highlighted in red is a switch that is running in software.
So it's not an actual physical thing. We connect the physical port on the host up to the physical switch, and we configure that as a trunk port. Before, we had access ports connected to our individual servers. Now traffic is going to multiple different virtual servers that are in different VLANs, so we need to configure that as a trunk port. Whenever traffic is being sent out to Virtual Machine 1, it will be tagged as VLAN 20. When it goes down to Virtual Machine 2, it'll be tagged as VLAN 30. When it's for management, it will be tagged as VLAN 10.
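On the physical switch side, a minimal sketch of that trunk port configuration, again with an assumed interface number, could look like this:

! Uplink to the virtualized host, carrying the management and VM VLANs
interface GigabitEthernet0/1
 switchport trunk encapsulation dot1q
 switchport mode trunk
 switchport trunk allowed vlan 10,20,30
!
! (The encapsulation command is only needed on switch platforms
!  that also support ISL; many switches only do 802.1Q and omit it.)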
When the traffic comes into the virtual host, the host looks at the tag, and based on the tag it knows where to send the traffic. Right now that is all Layer 2 information, so traffic can get down to our virtual machines. We don't have any Layer 3 configuration here yet, so the virtual machines won't be able to communicate with each other. If we wanted to do that, we could do the same as we did in the previous slide: upstream, we've got a router, and it's acting as the default gateway for the three different IP subnets. Now, the virtual switch that you see here highlighted in red is running in software. If it's in VMware, they're going to be using their own native software for that.
There used to be support for a Cisco software product, the Nexus 1000V, which was a software switch. Again, you couldn't buy this as a physical thing; it was software that you installed in your VMware environment, and it replaced their native switch. The Nexus 1000V is still supported in Microsoft Hyper-V, but support for it has gone in VMware now. So in the example here, you see how the networking works for our Type 1 hypervisor. The next thing we're going to look at, if I go back a slide, is this: it's fine if you're running this in your own data center, but let's say these virtual machines are now being run in a cloud environment and you want to have your own router to control the routing between them.
Normally you don't need to do that, because when you deploy this in a cloud environment, the routing can be taken care of for you by the cloud service provider. But maybe you don't want that; maybe you want to implement some advanced routing features and you need to use your own router. Well, they're not going to let you put a physical router in their facility. What you can do instead is use a virtual router. The virtual router runs as a virtual machine itself. So again, the big blue box that you see here is one physical box, and on that physical host we are running Virtual Machine 1 with IP address 10.10.20.10 and Virtual Machine 2 with IP address 10.10.30.10.
And then we run another virtual machine which is not running Windows or Linux; it's running routing software, like the Cisco CSR 1000V, and it can route between those different virtual machines. If you're looking at this, by the way, and thinking, what about if I've got virtual machines running on another physical host, in another box? Yes, you can do that as well. You can still have your Layer 2 and your Layer 3 connectivity between different physical boxes and still run all of your devices as virtual machines.
Okay, so that was Layer 2 and Layer 3 and our different options on a Type 1 hypervisor. Next, let's look at some other types of virtualization we can do for network devices. The first one is virtualizing our firewalls. Cisco have a firewall called the ASA, the Adaptive Security Appliance, and it supports being virtualized. The big blue box that you see here is a single physical box, and to virtualize it, what we can do is create separate security contexts. The admin context has got the global administrative configuration.
We configure a Customer 1 context which has got Customer 1's configuration, and we configure a Customer 2 context which has got Customer 2's configuration. Customer 1's traffic is going through interfaces GigabitEthernet0/1 and GigabitEthernet0/2, and Customer 2's traffic is going through interfaces GigabitEthernet0/3 and GigabitEthernet0/4. The different interfaces are dedicated to the different contexts. Now, when you do it like this, those two contexts act and behave as if they're two completely separate physical firewalls. And you could also give the customers access to manage their own devices, because we've got separate configurations.
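As a rough sketch, the setup in the ASA's system configuration could look something like this (the context names and config file locations are just assumed for illustration):

! Enable multiple context mode (this requires a reload)
mode multiple
!
! In the system execution space, define each context,
! allocate its dedicated interfaces and point it at its own config file
context customer1
 allocate-interface GigabitEthernet0/1
 allocate-interface GigabitEthernet0/2
 config-url disk0:/customer1.cfg
!
context customer2
 allocate-interface GigabitEthernet0/3
 allocate-interface GigabitEthernet0/4
 config-url disk0:/customer2.cfg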
Administrators for Customer 1 could SSH into the Customer 1 context and configure it, and administrators for Customer 2 could SSH into the Customer 2 context and do their own configuration. The two sets of administrators would not even know that another context existed on that same physical box. The contexts appear to be separate physical firewalls, and that's how they act. The benefit we get from doing this is that it can save money, because we buy one firewall rather than two physical firewalls. And if you wanted redundancy, well, without virtualization you would need two firewalls for Customer 1 and two firewalls for Customer 2, but using virtualization we can buy just two firewalls and that gives us redundancy for both Customer 1 and Customer 2 as well. Okay, so that was our firewall virtualization. You can do this type of virtualization on routers as well, but only on the really high-end service provider routers. What you'll find support for on your normal enterprise-level routers is VRF. VRF stands for Virtual Routing and Forwarding.
And you can see here we've got a single physical router, and we can have separate routing tables on there for different customers or different departments. So in the example here, we've got Customer 1, and whenever routes come in on interface GigabitEthernet0/1, we know they go into the Customer 1 routing table. Whenever routes come in on interface GigabitEthernet0/3, they go into the Customer 2 routing table. Now, with the example here, you couldn't give the customers' own administrators access to the router to configure it, because there's a single configuration. When you're in that configuration, you can see information for both Customer 1 and Customer 2.
So really it has to be just the service provider, or the higher level of the hierarchy, that has access to this. In the previous example, the contexts had their own separate configurations, so you could give the customers access. With VRFs there's just one configuration, so you can't give them access to it. Where you'll most often see VRFs being used is for MPLS Layer 3 VPNs. Going back to our MPLS example from the earlier section, we've got the service provider network here, with PEs in New York and Boston. We provision a Layer 3 VPN for Customer A, and you see the interfaces that Customer A is connected in on; we assign those to VRF Customer A. So whenever a customer route is received on that interface, we know it's for Customer A and we can send it to the Customer A router.
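As a minimal sketch using the classic ip vrf syntax, and assuming made-up route distinguisher, route target and addressing values, that could look something like this on the PE router:

ip vrf CUSTOMER-A
 rd 65000:1
 route-target export 65000:1
 route-target import 65000:1
!
! The PE interface facing Customer A is placed into the VRF,
! so routes learned on it go into the Customer A routing table
interface GigabitEthernet0/1
 ip vrf forwarding CUSTOMER-A
 ip address 192.168.1.1 255.255.255.252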
On the other side, we also have Customer B, who are connected into the same physical routers. We assign those interfaces to VRF Customer B, so whenever a Customer B route comes in, it goes into the Customer B routing table and it will be advertised to the Customer B router on the other side. By having separate routing tables, separate VRFs, for Customer A and Customer B, that keeps them separate and makes sure we never have routes being mixed with each other, which would obviously be a security issue. Okay, so to summarize virtualization: it supports running multiple virtual systems on a single physical system. This provides flexibility and it reduces costs. If you are wondering about redundancy, because that could be a concern, you may be thinking, well, wait a minute, I've got multiple virtual machines running on one single physical box.
What happens if that physical box blows up? I've lost all my virtual machines. Well, you can still have redundancy. In fact, redundancy is often easier to implement in a virtualized environment than when you're using dedicated physical appliances. Redundancy is supported by adding multiple physical systems which each have virtual systems running on them. And typically it's very easy to move the virtual machines from one underlying physical box to another. So if you have a physical failure, you just move your virtual machines to another box. You can automate that and be up and running again very quickly. Okay, so that's virtualization done. I just want to mention one other thing while we're here as well, which is clustering. Clustering is kind of like the opposite of virtualization. Look back at virtualization.
Virtualization supports running multiple virtual systems on a single physical system. Clustering is the other way around: clustering supports combining multiple physical systems into a single virtual system. You can see here we're doing that with our ASA firewalls again, where we've got four physical ASA firewalls, but we can configure them so that they operate like one single firewall. The reason you would do that is for redundancy. If any one of the four fails, the other three keep operating and traffic keeps going through. It also increases performance, because now we've got four times the throughput that we would have with just the one firewall. Okay, that is it for virtualization. See you in the next lecture, where we'll get more into cloud computing again.
7. Cloud Service Models
You'll learn about the cloud service models. NIST defines three service models for how cloud services can be offered, in the same document I was talking about earlier when we covered the cloud characteristics. The three service models are IaaS, which is Infrastructure as a Service, PaaS, which is Platform as a Service, and SaaS, which is Software as a Service. I'll cover what the three of them are and how they differ from each other in this lecture. Large cloud service providers will offer multiple models, not just one service. Small providers might specialize in just one and offer only that, but large providers like Amazon Web Services, IBM, et cetera, will offer all three. The three models define where the customer and provider areas of responsibility are and at what level the customer gains access, and the three models build on top of one another.
You'll see how that works as we go through the slides. To figure out where the customer and provider responsibility lies, we need to understand what the different levels are first. So let's look at the data center stack. Down at the bottom we've got the actual facility, which is the building, the power, the cooling, the security staff, et cetera. On top of that we've got the network infrastructure hardware, then the storage infrastructure hardware, then the compute hardware, which is the physical servers. Then we've got the hypervisor, which is the software that runs on top of those physical servers. On top of the hypervisor we've got the virtual machine operating systems, then the applications, and then finally the data.
So where is the line between what is the provider's responsibility and what is the customer's responsibility? Well, if you've got your own on-premises solution, obviously there is no provider; it's just you. Looking at it from the customer's point of view, you're going to be managing everything with an on-premises solution, and that's not a cloud solution. With a colo solution, the provider is going to provide the facility, and there's going to be a bit of a mix in whose responsibility the network layer is. The provider is going to provide incoming network connections, but you'll have your own network infrastructure equipment in there, your own firewalls, et cetera. So some of that is the provider's and some of it is yours.
Everything on top of the facility and the network is going to be your responsibility. So with on-premises and colo solutions, you buy, own and maintain all of your infrastructure hardware, and you buy it as an upfront CapEx cost. Again, on-premises and colo are not cloud solutions. Let's look at the cloud solutions, starting with an Infrastructure as a Service offering. With this, the provider is providing the facility, and they are providing all of the underlying hardware infrastructure as well. The network infrastructure, all of the routers and all of the switches, is provided, owned and managed by the provider, and the same goes for the storage infrastructure and the compute hardware. The hypervisor will also be owned and maintained by the service provider. Now, there can be a bit of a gray area here: it is possible with some providers that you can install and manage your own hypervisor.
As far as the CCNA exam is concerned, if you get a question about this, the hypervisor is the provider's responsibility. Everything on top of the hypervisor, so the operating system, the applications and the data, is installed and managed by the customer, and the customer gets access at the operating system level. So let's see how this works. I'll go back onto Amazon Web Services. You remember from the earlier lecture in this section where I showed you how to provision a new virtual machine. Now let's look at how I would actually access that virtual machine once it is up and running.
So I can see my instance here. I can click on the Connect button, and from in there I can get the connection information, including a remote desktop file. I've already downloaded that to my Downloads folder, so I'll go there now, open up this RDP connection to my virtual machine, and put my password in. It's going to open up the RDP connection, and you'll see that I get into the desktop on this Windows Server 2012 virtual machine that I provisioned earlier. What I can do now is go and install my own applications, save my own data on here, and configure the server however I want it to be configured.
So with IaaS, the provider is providing the underlying infrastructure; that's why it's called Infrastructure as a Service. I get in at the desktop level in the operating system and I can configure and manage everything upwards from there. Okay, let's go back onto the slides. The next one to cover is Platform as a Service, which goes up one level. If we go back a slide, with Infrastructure as a Service the customer gets in at the operating system level, and it's the operating system level and above which is managed by the customer. With Platform as a Service, the operating system is managed by the provider, and it's the applications and data that are managed by the customer. This doesn't really tell the whole story, though. So this slide here, if you get asked anything on the exam, this is how Platform as a Service works, but it's easier if we look at it like this.
Platform as a Service is typically used for developing software, and the provider will provide a custom environment that makes it easy for your developers to develop software in there. All of the underlying infrastructure is managed by the provider; as a customer you don't see that at all. The operating system is also managed by the provider. You get in at a custom environment level, which is purpose built for building applications. You can build your applications on there, and then you manage your own data as the customer. So let's have a look at an example of that. I'll go back to my web browser and look at another tab, at IBM here, and you can see that IBM also offer Infrastructure as a Service offerings. One way they differentiate from AWS is that they don't just offer virtual servers; they offer bare metal dedicated servers as well.
If you look further down at the Bluemix platform offerings, these are Platform as a Service. If you look at the Cloud Foundry apps, you see that you can get into this purpose-built environment for Python, for Ruby, et cetera, and develop your own applications in there. A benefit you can get from that: if we look back at the ecommerce application we've been using throughout this section, let's say you need to develop that ecommerce application. Well, if you use Platform as a Service, it will usually have plugins that you can pull into your own application, like plugins for ecommerce, such as payment systems and things like that. So rather than having to write the entire code from scratch, you get into this custom environment and you can pull pieces of code in to build your own applications, and that accelerates the process.
So that is Platform as a Service. Of the three types, Platform as a Service is probably the least well known and the least used. So we've covered IaaS and PaaS. That leaves the last one, which is Software as a Service. Software as a Service is really the opposite of on-premises: the provider manages everything, all the way from the facility up to the data. An example of Software as a Service would be something like Microsoft Office 365. You can install Office programs like Word and Excel on your own laptop and run them from there.
You can also run it in the cloud. Rather than installing anything on your laptop, you can connect over your Internet connection through a web browser and access all of your Office applications from there. Other examples of Software as a Service would be Salesforce.com and things like that. It's where you access the software directly from a web portal rather than having to install and manage it yourself. Okay, so those were the different cloud offerings, and you get access at the application level with Software as a Service. That was the last slide for this lecture, so see you in the next one.