Cisco CCNA 200-301 – QoS Quality of Service Part 3

March 14, 2023

4. Congestion Management

In this lecture, you’ll learn about congestion management and how to configure queuing on your Cisco devices. When we talk about congestion management, we are talking about queuing. When there’s congestion on the router or the switch, there’s more traffic coming in than it is able to send out, so it has to buffer that traffic by putting it into a queue. Congestion management is manipulating the queue so that you give better service to the traffic that requires it. There are two types of queuing policies that are commonly used.

Those are CBWFQ, which is Class-Based Weighted Fair Queuing, and LLQ, which is Low Latency Queuing. CBWFQ gives bandwidth guarantees to specified traffic types. So when you’ve got congestion, you can say that a particular type of traffic is guaranteed so much of the available bandwidth. LLQ, Low Latency Queuing, is Class-Based Weighted Fair Queuing with a priority queue added. The configurations are almost exactly the same; LLQ just has an additional line where you give priority to a type of traffic. Traffic in the priority queue is sent before all other traffic.

So with LLQ you can have a priority queue, which you’ll typically put voice and video in, and for your data applications you can give them bandwidth guarantees. Let’s look at how this is actually configured. It uses the MQC, the Modular QoS Command Line Interface. The MQC is built with three main sections.

First up, we have the class map, which defines the traffic to take an action on. Then there is the policy map, which specifies the action to take on that traffic. And finally, there is the service policy, where you apply the policy map to an interface. When the MQC first came out, it was just used for QoS, but Cisco uses this framework for lots of other configurations as well. For example, if you’re configuring your security policies on a Cisco firewall, it uses the same framework as this, and lots of other things use it too.
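The three sections fit together like this. This is just a minimal skeleton to show the shape of the framework; the names MY-CLASS and MY-POLICY and the interface are placeholders, not part of the example that follows:

```
! 1. Class map: define the traffic to take an action on
class-map MY-CLASS
 match ip dscp ef
!
! 2. Policy map: specify the action to take on that traffic
policy-map MY-POLICY
 class MY-CLASS
  priority percent 33
!
! 3. Service policy: apply the policy map to an interface
interface Serial0/0
 service-policy out MY-POLICY
```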

So it’s good that when you learn it the first time, and then you go and learn a different type of technology that’s still from Cisco, very often it uses the same framework, which makes it easy to learn that new technology. Okay, let’s look at an actual example and then we’ll look at the configuration. In our example, we’ve got the HQ on the left and a branch on the right, with PCs and IP phones in both locations. We send data and voice between the two locations. We’ve looked at the calls we’re making between the locations, and we’ve seen that we need to support ten concurrent calls between the HQ and the branch over that one link. For our example, each call takes 25.6k.

I use this number to make the math really easy. If you do work with voice, you’ll see different codecs, which define how we convert the actual spoken voice into ones and zeros. Digital voice uses a codec, and different codecs can use different amounts of bandwidth. For our example here, let’s say that each call is 25.6k. We need to support ten concurrent calls between the sites, so 25.6k × 10 = 256k, and we provision 256k of bandwidth for our voice calls. We also analyze the data traffic between the two sites, and in this example we have determined that 512k is required for data on average. So 512k + 256k = 768k, and that is the bandwidth on the link that we provision from the service provider. I know I’m using really low numbers here, but big numbers make my head hurt. It’s easier to understand and explain using these low numbers; for a real-world deployment, just make the numbers bigger.

It works exactly the same way. Okay, so we’ve provisioned our bandwidth at 768k. We know that that is going to be enough bandwidth for normal operations, but we also know that data is sometimes going to burst above 512k during busy periods, and during those periods we could also have those ten concurrent calls. When that happens, the link is going to get congested. We don’t want to use first-in, first-out queuing, because if we do, the voice packets are going to get stuck behind the data packets and we’re going to have bad-quality phone calls.

We want to bump those voice packets straight to the front of the queue so they don’t get delayed and we get good-quality calls. So let’s look at the configuration. The first part of the MQC is the class map, where we specify the traffic that we’re interested in. We configure class-map and then give it a descriptive name, anything you want; I’ve called it VOICE-PAYLOAD. Then match ip dscp ef. And then class-map CALL-SIGNALING.

Then match ip dscp cs3. The IP phones are marking their own packets: whenever a phone generates a packet carrying spoken voice, it marks it as DSCP EF, and whenever it generates a signaling packet to set up or tear down the call, it marks it as CS3. So we’re looking for those particular packets coming from the IP phone, and we recognize them with our class maps. The next thing we’re going to do is specify what we’re going to do to that traffic in our policy map. So I’ve got policy-map, and I’ve called it WAN-EDGE. Again, call it anything you want.
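The two class maps described here look like this. VOICE-PAYLOAD and CALL-SIGNALING are just the descriptive names used in this example; you can call them anything:

```
! Match the voice packets the IP phones mark with DSCP EF
class-map VOICE-PAYLOAD
 match ip dscp ef
!
! Match the call setup/teardown packets marked with DSCP CS3
class-map CALL-SIGNALING
 match ip dscp cs3
```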

Give it a descriptive name, then class VOICE-PAYLOAD, which references the class map that we configured already, and then I say priority percent 33. Priority means put this traffic straight at the front of the queue whenever there is a queue. If there’s no congestion, this does not take effect, but when there is congestion, the queuing policy kicks in and any voice packet that comes into the queue is bumped straight to the front of the queue. We provisioned 256k for our voice calls, which is one third of the 768k link, which is why I’ve said priority percent 33.

So voice packets go straight to the front of the queue and are guaranteed 33% of the bandwidth. Next, I want to give a bandwidth guarantee to my call signaling traffic. This traffic is not as important, so it doesn’t need to go in the priority queue; if it gets delayed, the call might just take a fraction of a second longer to set up, but it will still work just fine. But I do want to make sure that those signaling packets get there.

So that’s why I’m giving them a bandwidth guarantee. These packets don’t require much bandwidth; in the real world you would figure out exactly how much bandwidth they do require. Here we’ve determined that they require bandwidth percent 5. So now, whenever there’s congestion on the interface, our voice payload packets go straight to the front of the queue, and they are guaranteed 33% of the bandwidth as well.

Our call signaling packets don’t go straight to the front of the queue, but they’re guaranteed 5% of the bandwidth, and if they require more, the call signaling packets can take more bandwidth than that as well, if it’s available. The priority queue is guaranteed 33% and it’s limited to 33% as well, because if it was able to burst up to 100%, it would take all of the bandwidth and nothing else would ever get out of the router, so it would break all of our other traffic. So the priority queue is guaranteed that much, and it’s also limited to that much.
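Putting those pieces together, the policy map from this example looks like this. WAN-EDGE is just the descriptive name used in the example, and class class-default with fair-queue is the catch-all for all other traffic:

```
policy-map WAN-EDGE
 ! Priority queue: guaranteed AND limited to 33% of the bandwidth
 class VOICE-PAYLOAD
  priority percent 33
 ! Bandwidth guarantee: guaranteed 5%, can use more if available
 class CALL-SIGNALING
  bandwidth percent 5
 ! Everything else not matched above
 class class-default
  fair-queue
```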

For the bandwidth statements, the traffic is guaranteed that much and it can use more if it’s available. Okay, so that is us giving the required service to the voice payload and the call signaling. Next is class class-default. Class-default means all other traffic that we haven’t matched higher up with a class map, so everything else, and then we say fair-queue. This is a best-practice command to put in. It’s a more fair queuing policy than first-in, first-out.

First-in, first-out tends to penalize small packets unfairly. Fair queue is a better queuing mechanism, so it’s best practice to put that command in. And then finally, we need to apply the policy to the interface. If you’re ever working with QoS in the real world, this is the bit that’s really easy to forget, because you do the class map where you specify the traffic you’re looking for, then you do the policy map where you specify what you’re going to do to it, and then you’re like, okay, I’m done.

And it’s easy to forget to put on the service policy. If you do that, nothing happens; you have to apply the service policy for this to take effect. This is done under the interface. In our example it’s interface serial 0/0. We say bandwidth 768. You have to do this if you’re using priority percent, so the router knows what the percentage is a percentage of: 33% of 768 is 256, and that’s where it gets the value from. So put the bandwidth statement on there, and then finally, to apply it, we say service-policy out WAN-EDGE, which references the name of the policy map.
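The final step under the interface looks like this, assuming the serial 0/0 interface from this example:

```
interface Serial0/0
 ! The bandwidth statement tells the router what "priority percent 33"
 ! is a percentage of: 33% of 768k = 256k
 bandwidth 768
 ! Apply the policy map to outbound traffic on this interface
 service-policy out WAN-EDGE
```

You can check your work with `show policy-map interface Serial0/0`, which shows the policy attached to the interface along with per-class packet counters.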

Okay, so that is the whole thing. In case you missed this earlier: for the CCNA exam, you do not need to know this configuration. I’m showing it here because I think it makes it a lot easier to understand what’s happening when you see the configuration, but you don’t need to memorize it for the exam. For the exam you need to understand all the theory, so you need to understand what QoS is.

You need to understand the different QoS mechanisms like classification and marking, congestion management (which is queuing), and also policing and shaping, which is coming up in the next lecture. So you need to know the theory, but you don’t need to know the configuration. You’ll need that when you move on to specializing in some of the other tracks or going on to the CCNP level. Okay, so that was congestion management and queuing, and I’ll see you in the next lecture for policing and shaping.
