Cisco CCNA 200-301 – QoS (Quality of Service)
1. Introduction
The next three sections cover technologies that are new on the CCNA exam: QoS (quality of service), which we’re going to cover in this section, then cloud computing, and finally SDN, software-defined networking. For these three new sections, Cisco doesn’t expect you to know how to configure the technologies. What they do expect is that you know what they are and have a foundational-level knowledge of how they work. So that’s what I’m going to cover in the course. Now, the original driver for QoS was voice and video, and I used to specialize in voice and video back in the day.
So I love QoS, and I’m actually going to give you more information here than is really required for the exam: the foundational-level knowledge and a bit more as well. I’ll also show you the configuration, so that you really understand how QoS works. That way you’ll be able to breeze through any QoS questions on the exam, and take this forward when you do any further studies and into the real world as well. So let’s get started with quality of service.
2. QoS Overview
I’m going to give you an overview of QoS, quality of service. When people talk about QoS in general, they’re usually referring to queuing, so that’s going to be the main focus of this lecture. The original driver for QoS was Voice over IP, and the first thing I’ll do here is explain why there was that need for quality of service. Way back in the day (and you’ll still see some networks doing this today), there were dedicated voice, video and data networks. Maybe the company didn’t have video, but for sure they would have data and voice. For their data network, they would have a standard IP WAN, like we covered in the WAN section earlier. For their phone network, they would be connected to the local telephone provider, the local phone company; they would have phones on their desks in the offices, and there would usually be a PBX in the office as well, which is used to control the phones.
The PBX would be connected up to the telephone company. For video calls, if they had video endpoints, typically those would be connected over an ISDN network (Integrated Services Digital Network), and ISDN would be provided by the phone company as well. You can see from the diagram that for voice, video and data, the three networks are completely physically separate from each other. The data network is dedicated to data, the ISDN network is dedicated to video, and the phone network, the public switched telephone network with the telco, is dedicated to voice calls. Because of this, if there’s a problem with any one of them, it’s not going to affect the other two.
Now, what you’ll find most often in modern networks is converged networks, where the company is running voice, video and data all over the same underlying physical network infrastructure. You can see in the diagram that we’ve now got IP phones on the desks, connected to an IP PBX like Cisco Unified Communications Manager; we’ve got IP video endpoints as well; and we’ve got our standard IP servers and workstations. Everything is connected to the same underlying network, which is running over an IP infrastructure. Now, you may notice something missing here. In the last slide, we were able to call external people like customers and suppliers through the phone company as well. In this example, everything is on the company’s corporate IP WAN, so they can send data, voice and video and make phone calls between offices, but right now they wouldn’t be able to call customers or suppliers. So we still need a connection to the local phone company for that. That could be a traditional phone connection you would see in an office, like a voice E1 or T1 connection. A lot of modern phone providers will actually offer this over IP as well, using a SIP connection, which stands for Session Initiation Protocol. That’s really not important here. The thing that is important is that the voice, the video and the data are all running over that same shared network infrastructure between the different offices.
So, the effects of this: on old traditional networks, data, voice and video had their own physically separate network infrastructure and did not impact each other. If there was a problem on the data network, that was not going to affect the voice network; people could still make phone calls as normal. On modern networks, however, data, voice and video run over the same shared physical infrastructure. The reason companies do that now is that it enables cost savings: rather than having three separate networks, you can run everything over the same shared network. It also enables advanced features for voice and video. For example, modern video endpoints can integrate with other collaboration software, they can do things like shared presentations over WebEx across the network, and they can integrate with your call center if you have one. So: lowered costs and increased features.
But a potential problem with this is that data, voice and video are now all fighting for the same shared bandwidth on that same shared physical network, and voice and video have quality requirements. For voice and traditional standard-definition video packets, the recommended requirements for an acceptable-quality call are: latency (which is another word for delay) of no more than 150 milliseconds; jitter (which is variation in delay, and I’ll explain that in a bit more detail in a second) of no more than 30 milliseconds; and no more than 1% packet loss. Those are one-way requirements, meaning that a packet sent from a phone in the HQ has 150 milliseconds to reach the phone in the branch, and vice versa. So if we go back and look at the diagram, you see we’ve got our phones in the HQ, and we’re going to make a phone call from the HQ to the phone in the branch.
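To make those three thresholds concrete, here is a minimal sketch in Python. The function name and structure are my own illustration, not anything from Cisco; the limit values are the ones given above:

```python
# Recommended one-way limits for acceptable voice / SD video quality
# (from the section above: latency <= 150 ms, jitter <= 30 ms, loss <= 1%).
MAX_LATENCY_MS = 150
MAX_JITTER_MS = 30
MAX_LOSS_PCT = 1.0

def call_quality_ok(latency_ms, jitter_ms, loss_pct):
    """Return True only if all three one-way requirements are met."""
    return (latency_ms <= MAX_LATENCY_MS
            and jitter_ms <= MAX_JITTER_MS
            and loss_pct <= MAX_LOSS_PCT)

print(call_quality_ok(120, 20, 0.5))   # True  - within all limits
print(call_quality_ok(120, 45, 0.5))   # False - jitter over 30 ms
```

Note that breaching any single limit is enough to degrade the call; all three must hold at once.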
Those packets coming from the phone in the HQ are where your spoken voice is carried. They’ve got 150 milliseconds to make it to the phone in the branch. Also, the jitter should be no more than 30 milliseconds. Jitter is variation in delay. To give you an example: we’ve got multiple packets going from the HQ phone to the branch phone, and they’re arriving now... now... now. If the delay between the first, the second and the third packet varies, that variation is your jitter.
Your IP phones have a built-in jitter buffer. They don’t immediately play the packets out to your ear, because there’s always going to be some jitter, and it would literally sound jittery if they did that. So the IP phone smooths out the rate of the packets being received to make it sound natural. But if jitter goes above 30 milliseconds, it’s going to overrun that built-in jitter buffer and you’ll get a bad-quality call. You’ve all heard bad-quality calls. If you watch a news report coming from a war zone or somewhere like that, usually they’ll be using a satellite phone, and satellite is famously a high-latency connection. That’s why, when you see the news report, they’ll make the apology at the start about the quality of the call, and the audio will be choppy and bad quality. That’s the kind of thing that happens if you don’t meet those requirements for your voice and your video. So that was for standard IP telephony voice and standard-definition video. If you’re using high-definition video, it’s got stricter requirements: it can tolerate even less delay, jitter and loss.
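A simple way to see jitter in numbers is to compare the gaps between consecutive packet arrivals. This is my own simplified illustration: real RTP jitter is a smoothed running estimate (RFC 3550), while this sketch just measures the worst deviation from the expected packet spacing (VoIP typically sends one packet every 20 ms):

```python
def inter_arrival_gaps(arrival_times_ms):
    """Gaps between consecutive packet arrivals, in ms."""
    return [b - a for a, b in zip(arrival_times_ms, arrival_times_ms[1:])]

def max_jitter_ms(arrival_times_ms, expected_gap_ms=20):
    """Worst deviation from the expected packet spacing."""
    gaps = inter_arrival_gaps(arrival_times_ms)
    return max(abs(g - expected_gap_ms) for g in gaps)

# Packets sent every 20 ms, but arriving with variable delay:
arrivals = [0, 20, 45, 60, 95]        # ms
print(inter_arrival_gaps(arrivals))   # [20, 25, 15, 35]
print(max_jitter_ms(arrivals))        # 15 -> worst gap was 35 ms, 15 ms off target
```

Here the jitter buffer would smooth those uneven gaps back to 20 ms playout, as long as the variation stays under its capacity.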
High-definition video uses very high compression, so if you lose any packets at all, that will be noticeable in the video. Okay, so how does this cause a problem? The first thing to tell you about is the default queuing mechanism on routers, and on switches as well, which is first in, first out. Whenever congestion is experienced on a router or a switch, packets are sent out in a first-in, first-out (FIFO) manner by default. Congestion can occur whenever it’s possible for packets to come in quicker than they can be sent out. An example would be your WAN edge router, where you’ve got a fast interface on the inside and a slower interface on the outside, like you see in the example here. On the router, it’s got Fast Ethernet on the inside: 100 megabits per second.
The outside interface in our example is an E1, so its speed is 2 megabits per second. It’s possible for traffic to be coming in at a rate of up to 100 megabits per second, but the router can only physically send traffic out at a rate of up to 2 megabits per second. So if traffic is coming in faster than 2 megabits per second, the router can’t send it out as quickly as it comes in, and it’s going to have to queue those packets up. The most likely place you’ll see congestion is on your WAN edge routers, because typically they have faster interfaces on the inside than on the outside.
You can also see congestion in your campus, on your switches, because you’ve got more workstations connected at the access layer than you’ve got uplinks going up to the top. In the LAN in your campus there will typically be less congestion, because there are high-speed interfaces there, but there can still be some. In our example, we’re going to use our WAN edge router, because that’s where you’ll usually see the main effect of congestion and also where QoS can usually help the most. This is the same example where we have the HQ on the left in the diagram and the branch office on the right; this is the HQ router, and we’re going to be sending traffic from left to right to the branch.
Let’s look first at what happens when we don’t have congestion. We’ve got traffic coming in on the inside interface of that router, going over to our external offices. Because we’re running a converged network, both voice and video and data packets are going to be coming in. In the example, the data packets are blue, and they tend to be bigger than your voice packets, which are always small; our voice packets are the small green ones. In the first example, traffic is coming in at a rate of less than 2 megabits per second, which will happen when the office isn’t very busy. When that happens, the router can send traffic out immediately as it is received, so there is no congestion at all: traffic passes very quickly through the router and we’re not going to have any problems. But we get a problem when traffic comes in at a rate higher than 2 megabits per second. You can see in the example here, we’ve got that larger data packet at the front (the blue one), a little green voice packet behind it, then a couple of data packets, and then another green voice packet coming in.
What order are packets going to come in? Well, it depends on what people are doing. If somebody’s making a phone call, voice packets are coming in; if somebody’s sending data, data packets are coming in; and they’re going to arrive in whatever order users take those actions. Because traffic is coming in at a rate faster than 2 megabits per second, as you see in the diagram, the router can’t send it out quickly enough; it can’t keep up. When that happens, the router will buffer traffic, meaning it will queue it up, and packets wait in the queue to go out. The default queuing mechanism is that traffic gets sent out in the same order it comes in: first in, first out. So in the queue here, the data packet that came in first is at the front of the queue, going out right now, and then we’ve got the voice packet, the two data packets, and the other voice packet behind it.
This is congestion in our router. Whenever you’ve got packets being queued up, that is congestion, and it causes delay to the packets as they wait in the queue. Also, as the size of the queue changes, it causes jitter: when the queue is long, it takes packets longer to get to the front, and when the queue is short, it takes less time. So having packets in the queue, and the queue varying in size, causes the jitter to go up and down as well. And there’s a limit to the size of the queue; there’s only so much memory in the router. If a packet arrives when the queue is full, the router is going to drop it.
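The FIFO behaviour above can be sketched with a toy simulation (my own model, not how IOS implements buffering): if a burst of packets arrives at once, each packet’s delay is simply the time to serialize everything queued ahead of it, plus itself, onto the 2 Mb/s E1:

```python
E1_BPS = 2_000_000  # E1 link: 2 Mb/s

def fifo_delays(packet_sizes_bytes, link_bps=E1_BPS):
    """Per-packet delay in ms, assuming all packets arrive at once
    and are transmitted first-in, first-out."""
    delays, queued_bits = [], 0
    for size in packet_sizes_bytes:
        queued_bits += size * 8
        delays.append(queued_bits / link_bps * 1000)  # ms until fully sent
    return delays

# Three blue data packets (1500 B) queued ahead of a small green
# voice packet (60 B): the voice packet waits ~18 ms just in this queue.
print(fifo_delays([1500, 1500, 1500, 60]))
```

Each 1500-byte data packet takes 6 ms to serialize on an E1, so the little voice packet at the back inherits all of that wait, eating into its 150 ms one-way budget.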
So congestion causes delay as packets wait in the queue; it causes jitter as the queue changes size; and it causes loss when the queue is full and packets arriving at the back get dropped. Our voice and video calls, and our applications too, will be of unacceptable quality if they do not meet their delay, jitter and loss requirements, and queuing in the router can cause our voice and video packets to miss those requirements, which means bad-quality calls. When you’re working in IT, this is going to give you a big issue, because way back in the day, when voice and video were on their own separate dedicated networks, users got used to always being able to pick up the phone, make a call, and get good quality.
So really, you have to provide them that same quality on modern networks as well. How can we mitigate congestion? The first and easiest way is to add more bandwidth. If we had a 100-megabit interface on the outside as well as on the inside, then whenever traffic came in, we could send it out immediately; there wouldn’t be any congestion. So the best way to fix the congestion problem is by adding more bandwidth. The problem is that that costs money: the outside interface is connected through your service provider, and the more bandwidth you want, the more they’re going to charge you for it. Another way we can help mitigate congestion is by using quality of service techniques. What quality of service does is give better service to the traffic that needs it. So what we’re going to do now is configure queuing on our router and give better service to our voice packets. It’s the same scenario as before, where we’ve got traffic coming in at a rate higher than 2 megabits per second.
The data packet comes in first, then a little voice packet, then two data packets, and then the other voice packet. So we’ve got congestion, and that traffic goes in the queue. If I go back one slide, you see it was blue, green, blue, blue, green. The difference now, with queuing configured, is that we put our voice packets straight to the front of the queue whenever there is a queue in the router. The voice packets jump in front of the data packets: the router recognizes those voice packets and moves them to the front of the queue, and that minimizes their delay. They’re not further back in the queue, they’re straight at the front, so they spend less time in the queue and get out of the router quicker. That minimizes their delay, jitter and loss. So what are the effects of doing this? Well, like I just said, it reduces the latency, the jitter and the loss for particular traffic.
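The “voice jumps the queue” behaviour can be sketched with two queues and a strict-priority scheduler. This is a toy model of the idea only; on Cisco routers this is what Low Latency Queuing (LLQ) provides, configured with the `priority` command in a policy map:

```python
from collections import deque

class PriorityQueuing:
    """Toy two-queue scheduler: voice is always dequeued before data."""
    def __init__(self):
        self.voice = deque()
        self.data = deque()

    def enqueue(self, packet, is_voice):
        (self.voice if is_voice else self.data).append(packet)

    def dequeue(self):
        # Strict priority: drain the voice queue first, FIFO within each queue.
        if self.voice:
            return self.voice.popleft()
        if self.data:
            return self.data.popleft()
        return None

q = PriorityQueuing()
# Arrival order from the example: data, voice, data, data, voice
for pkt, is_voice in [("D1", False), ("V1", True), ("D2", False),
                      ("D3", False), ("V2", True)]:
    q.enqueue(pkt, is_voice)

out = [q.dequeue() for _ in range(5)]
print(out)   # ['V1', 'V2', 'D1', 'D2', 'D3'] -> voice goes out first
```

Notice the flip side visible in the output: every data packet is pushed further back than it would have been under plain FIFO, which is exactly the trade-off described next.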
You’re going to give better service to your voice and video traffic, and maybe some mission-critical applications. The original driver for QoS was Voice over IP, but it can be used to give better service to important data applications as well. The thing is, if you’re giving one type of traffic better service on the same link, with the same bandwidth you had before, then the other traffic types must get worse service. We jumped that voice packet to the front of the queue, but that actually moved our data packets further back in the queue. So voice gets better service, but data gets worse service. The point is to give each type of traffic the service it requires. If a user is opening a web page on the internet and it takes one second rather than half a second, the user isn’t even going to notice the difference. It doesn’t matter.
But if you’re on a phone call and somebody’s voice starts sounding jerky, there are gaps in it and you can’t understand them, that’s a big deal: it means the phone call doesn’t work. So it’s important to give voice and video really good service so they get the quality they require. It doesn’t matter if data gets slightly worse service, because users won’t actually notice for those applications anyway. QoS is not a magic bullet; it’s designed to mitigate temporary periods of congestion. If a link is permanently congested, then you’re going to have bad-quality voice, video and applications on that link, and what you need to do is upgrade the link. What you’ll often see companies do is set a target utilization; 80% is not uncommon, so they’ll aim for an average utilization of 80% on the link.
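That sizing rule of thumb is just simple arithmetic. As a sketch (the 80% figure is the target mentioned above; the traffic samples are made up for illustration):

```python
LINK_MBPS = 2.0      # E1 capacity from our example
TARGET_UTIL = 0.80   # common average-utilization target

def avg_utilization(samples_mbps, link_mbps=LINK_MBPS):
    """Average utilization as a fraction of link capacity."""
    return sum(samples_mbps) / len(samples_mbps) / link_mbps

# Hypothetical samples across the day: quiet periods plus bursts to line rate
samples = [1.0, 2.0, 1.0, 2.0, 1.5]
print(avg_utilization(samples))   # 0.75 -> under the 80% target
```

The link still hits 100% during the bursts (the 2.0 samples), and those are exactly the temporary congestion periods QoS is there to handle; the average staying under target is what tells you the link doesn’t need upgrading yet.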
But you know that the network gets busier at some times of day: Monday morning at 9:00 a.m. is probably going to be busier than 3:00 p.m. on a Friday. You could put in enough bandwidth that the link never, ever gets congested, but that would be really expensive. Instead, you can balance the cost by having the link running at, for example, 80% utilization on average. You know that it will sometimes burst up to 100% and the link will be congested, and for those temporary periods of congestion you enable QoS, so your voice, video and data applications get the service they require. Okay, that was an overview of QoS. I’ll see you in the next lecture.