
Pass Your Cisco 642-642 Exam Easily!

100% Real Cisco 642-642 Exam Questions & Answers, Accurate & Verified By IT Experts

Instant Download, Free Fast Updates, 99.6% Pass Rate

Archived VCE files

File                                                         Votes  Size     Date
Cisco.Certkey.642-642.v2011-11-22.by.Randolf.80q.vce         1      1.89 MB  Nov 22, 2011
Cisco.Pass4sure.642-642.v2011-02-02.by.VGN.79q.vce           1      3.08 MB  Feb 03, 2011
Cisco.SelfTestEngine.642-642.v2010-10-29.by.DoDo.77q.vce     1      1.72 MB  Oct 31, 2010
Cisco.SelfTestEngine.642-642.v2010-08-02.by.Vuyane.92q.vce   1      2 MB     Oct 06, 2010
Cisco.SelfTestEngine.642-642.vv2.35.by.Rene.88q.vce          1      1.84 MB  Feb 24, 2010

Cisco 642-642 Practice Test Questions, Exam Dumps

Cisco 642-642 Quality of Service (QoS) exam dumps, practice test questions, study guide, and video training course to help you study for and pass the exam quickly and easily. To open the Cisco 642-642 certification exam dumps and practice test questions in VCE format, you need the Avanset VCE Exam Simulator.

An Introduction to the Cisco 642-642 and VoIP Foundations

The Cisco 642-642 exam, formally titled "Quality of Service (QoS)," was a key professional-level examination for network engineers specializing in voice and video technologies. It was one of the required exams for the Cisco Certified Network Professional Voice (CCNP Voice) certification. This certification track validated a robust set of skills for implementing, operating, configuring, and troubleshooting a converged IP network. Passing the Cisco 642-642 exam demonstrated a candidate's mastery of ensuring high-quality, real-time voice and video traffic delivery across an enterprise network. While this specific exam code is now retired, its principles remain fundamental to modern collaboration engineering. The core focus of the Cisco 642-642 exam was on implementing Quality of Service.

This is a critical discipline in networking that involves managing network resources to provide preferential treatment to specific types of data traffic. For real-time applications like voice calls or video conferencing, consistent and predictable network performance is not just a luxury but a necessity. The exam curriculum covered the models, mechanisms, and tools required to classify, mark, police, and shape network traffic. Understanding these concepts was essential for preventing issues like jitter, latency, and packet loss, which can severely degrade the user experience. Although the CCNP Voice certification has since evolved into the CCNP Collaboration certification, the knowledge tested in the Cisco 642-642 exam is more relevant than ever. The explosion of unified communications, cloud-based collaboration tools, and video streaming has placed even greater demands on network infrastructure. The underlying principles of identifying critical traffic, marking it with appropriate priority levels, and configuring network devices to honor those priorities are foundational. 

Therefore, studying the topics of the original Cisco 642-642 syllabus provides a powerful knowledge base for any network professional working with modern real-time applications. The transition away from the Cisco 642-642 exam reflects the broader evolution of Cisco's certification programs. The newer CCNP Collaboration certification addresses a wider range of technologies, including Cisco Unified Communications Manager (CUCM), gateways, and collaboration endpoints, with QoS being an integrated part of the overall curriculum. However, the deep dive into QoS that the 642-642 exam demanded remains a valuable area of specialization. Engineers who master these skills are better equipped to design and maintain high-performance networks capable of supporting the most demanding applications, ensuring business-critical communications are always clear and reliable.

Understanding the Shift to Voice over IP

Voice over Internet Protocol, or VoIP, represents a fundamental paradigm shift in how voice communications are transmitted. In traditional telephony, known as the Public Switched Telephone Network (PSTN), a dedicated physical circuit is established and maintained for the entire duration of a call. This method, called circuit-switching, is highly reliable but also inefficient, as the circuit's bandwidth is reserved exclusively for that one call, even during moments of silence. This older system relied on a completely separate infrastructure built solely for voice. The concepts central to the Cisco 642-642 were developed to manage voice on a different kind of network. VoIP, in contrast, utilizes packet-switching technology, which is the same method used for all other data on the internet, such as emails and web pages. Voice signals are first digitized, compressed, and then broken down into small units called packets. Each packet is independently sent over the IP network towards its destination. 

This approach is far more efficient because network resources are shared among many users and conversations simultaneously. Bandwidth is consumed only when someone is actually speaking, allowing the same infrastructure to carry voice, video, and data traffic concurrently. This convergence is the cornerstone of modern communication systems. The primary advantage of VoIP is its ability to leverage existing data networks, significantly reducing the cost and complexity of maintaining separate infrastructures for voice and data. This consolidation simplifies management and enables a host of advanced features that are difficult or impossible to implement on traditional phone systems. Features like voicemail-to-email, integrated video conferencing, and presence information are all made possible by treating voice as just another application on the network. 

This convergence, however, introduces new challenges that the Cisco 642-642 exam topics aimed to solve, primarily related to ensuring voice quality on a shared network. The main challenge with VoIP is that data networks were not originally designed for real-time traffic. Unlike an email, which can tolerate some delay in packet arrival, a voice conversation is extremely sensitive to latency and jitter. Latency is the delay it takes for a packet to travel from the speaker to the listener, while jitter is the variation in that delay. Excessive amounts of either can lead to garbled audio and a frustrating user experience. This is precisely why the Quality of Service principles from the Cisco 642-642 curriculum are so critical. They provide the tools needed to prioritize voice packets over less time-sensitive data.

Core Components of a VoIP Network

A modern VoIP network is composed of several key components working in concert to deliver seamless communication. The central brain of the system is the IP Private Branch Exchange (IP PBX), often referred to as a call control agent. In the Cisco ecosystem, this role is filled by the Cisco Unified Communications Manager (CUCM). The IP PBX is responsible for all call processing functions, such as call setup, routing, and teardown. It manages all the endpoints on the network, enforces call policies, and provides access to advanced features like voicemail and conferencing. Endpoints are the devices that users interact with directly. These can include IP phones, which look like traditional office phones but connect to the network via an Ethernet port instead of a phone jack. They can also be software-based clients, or softphones, running on a user's computer or smartphone. These endpoints register with the IP PBX, which allows them to make and receive calls. 

When a user dials a number, the endpoint communicates with the IP PBX using a signaling protocol to initiate the call setup process. A solid grasp of these components was essential for the Cisco 642-642. To connect a VoIP network to the traditional Public Switched Telephone Network (PSTN), a device known as a voice gateway is required. The gateway acts as a translator, converting the signaling and media streams between the packet-switched IP network and the circuit-switched telephone network. This allows users on IP phones to make calls to and receive calls from traditional landlines and mobile phones. 

Gateways are critical components for any organization that needs to communicate with the outside world, and their proper configuration is a major aspect of voice engineering. They are also a key location for implementing QoS policies. Finally, in many larger deployments, a Multipoint Control Unit (MCU) or conference bridge is used to facilitate calls with multiple participants. While simple three-way calling can often be handled by the IP PBX or the endpoints themselves, MCUs are specialized devices that can mix audio and video streams from many participants. This allows for large-scale audio and video conferences. All these components—the IP PBX, endpoints, gateways, and MCUs—rely on a robust and well-managed underlying IP network. The quality of that network, managed with QoS, directly impacts the quality of every call.

The Role of Codecs in Voice Communication

A codec, which is short for coder-decoder, is a crucial algorithm used in VoIP to convert analog voice signals into digital packets for transmission and then back into analog signals at the receiving end. When a person speaks into a phone's microphone, their voice creates an analog sound wave. The codec's job is to sample this wave thousands of times per second, quantify each sample into a digital value, and then compress the resulting data to reduce the amount of bandwidth required to send it over the network. This process is fundamental to making VoIP practical. Different codecs offer various trade-offs between voice quality, bandwidth consumption, and computational demand. For example, the G.711 codec provides high-fidelity voice quality, equivalent to that of a traditional telephone call, because it uses minimal compression. However, this high quality comes at the cost of consuming more bandwidth, typically around 87 kilobits per second (kbps) per call. 

On the other hand, codecs like G.729 use more advanced compression algorithms to significantly reduce bandwidth usage to as low as 31 kbps per call. This efficiency, however, comes with a slight reduction in audio fidelity and requires more processing power. The choice of codec in a network design has significant implications for both call quality and network capacity. In environments where bandwidth is plentiful, such as on a local area network (LAN), using a high-quality codec like G.711 is often the preferred choice. However, for calls that traverse a wide area network (WAN) link with limited bandwidth, a low-bandwidth codec like G.729 is often more appropriate. The Cisco 642-642 exam required engineers to understand these trade-offs and to configure network devices to support the appropriate codecs for different parts of the network. 
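These per-call figures come from straightforward arithmetic: the codec payload carried in each packet, plus RTP/UDP/IP and link-layer headers, multiplied by the packet rate. A minimal Python sketch of that calculation (assuming 20 ms packetization and Ethernet framing; the function name and defaults are illustrative, not from any Cisco tool):

```python
def voip_call_bandwidth_kbps(codec_rate_kbps, packetization_ms,
                             l3_overhead_bytes=40, l2_overhead_bytes=18):
    """Approximate per-call bandwidth for a VoIP stream.

    codec_rate_kbps:   codec payload bit rate (G.711 = 64, G.729 = 8)
    packetization_ms:  milliseconds of audio per packet (commonly 20)
    l3_overhead_bytes: IP (20) + UDP (8) + RTP (12) headers
    l2_overhead_bytes: link-layer framing (Ethernet roughly 18 bytes)
    """
    payload_bytes = codec_rate_kbps * 1000 / 8 * packetization_ms / 1000
    packet_bytes = payload_bytes + l3_overhead_bytes + l2_overhead_bytes
    packets_per_second = 1000 / packetization_ms
    return packet_bytes * 8 * packets_per_second / 1000

# G.711 at 20 ms packetization over Ethernet: ~87.2 kbps
# G.729 at 20 ms packetization over Ethernet: ~31.2 kbps
```

Running the numbers this way shows why the headline figures are 87 and 31 kbps rather than the raw 64 and 8 kbps codec rates: at 50 packets per second, the fixed header overhead dominates for low-bandwidth codecs.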

Modern networks often use a mechanism called transcoding, which is the process of converting a media stream from one codec to another. This is typically performed by a device like a voice gateway or a Digital Signal Processor (DSP) resource. Transcoding is necessary when two endpoints that are trying to communicate do not share a common supported codec. For instance, if an internal user on a G.711 call needs to connect to a remote site that only uses G.729, a transcoding resource must be available to bridge the two call legs. This adds complexity and potential delay, highlighting the importance of careful codec planning.

Signaling Protocols: SIP and H.323

In a VoIP network, signaling protocols are used to manage the establishment, control, and termination of calls. They are the language that endpoints, gateways, and call control agents use to communicate with each other. They handle tasks such as locating the user being called, negotiating the parameters of the call (like which codec to use), and managing the call's status, such as putting a call on hold or transferring it. The two most prominent signaling protocols in the VoIP world are H.323 and the Session Initiation Protocol (SIP). Understanding their differences was important for Cisco 642-642 preparation. H.323 is one of the oldest and most established VoIP signaling standards, developed by the International Telecommunication Union (ITU).

It is not a single protocol but rather a suite of protocols that define how multimedia communications occur over a network. H.323 is known for its robustness and maturity, having been widely deployed in enterprise and carrier networks for many years. It has a relatively complex, binary-based structure. Key components in an H.323 network include terminals (endpoints), gateways, gatekeepers (for call control), and multipoint control units (for conferencing). The Session Initiation Protocol (SIP), developed by the Internet Engineering Task Force (IETF), has become the de facto standard for modern VoIP and unified communications. SIP is a text-based protocol with a syntax similar to HTTP, the protocol used for web browsing. 

This makes it more lightweight, flexible, and easier to troubleshoot than H.323. SIP's architecture is simpler, consisting of user agents (which act as clients and servers) and various server types, including proxy servers, redirect servers, and registrar servers. Its extensibility has allowed it to easily incorporate features like instant messaging, video, and presence. While SIP has largely superseded H.323 in new deployments, many established networks still use H.323 or operate in a mixed environment. 

Therefore, voice gateways must often be able to translate between the two protocols to ensure interoperability. This process is known as interworking. For a network engineer, it's crucial to understand the call setup process for both protocols. This includes knowing how devices register, how calls are routed, and how media sessions are established. This knowledge is essential for configuring and troubleshooting the complex call flows that can exist in a large enterprise environment.
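Because SIP is text-based, its messages can be inspected and parsed with simple string handling, which is a large part of why it is easier to troubleshoot than binary H.323. A small illustrative sketch, assuming the RFC 3261 message shape of a request line followed by `Name: Value` headers (the parser and the sample INVITE below are hypothetical, not taken from a real call flow):

```python
def parse_sip_request(message):
    """Parse the request line and headers of a text-based SIP message:
    'METHOD URI SIP/2.0' followed by 'Name: Value' header lines."""
    head = message.split("\r\n\r\n")[0]          # ignore any body (SDP)
    lines = head.split("\r\n")
    method, uri, version = lines[0].split(" ", 2)
    headers = {}
    for line in lines[1:]:
        name, _, value = line.partition(":")
        headers[name.strip()] = value.strip()
    return method, uri, headers

# A minimal, hypothetical INVITE for demonstration.
invite = ("INVITE sip:bob@example.com SIP/2.0\r\n"
          "Via: SIP/2.0/UDP pc33.example.com\r\n"
          "From: <sip:alice@example.com>;tag=1928\r\n"
          "To: <sip:bob@example.com>\r\n"
          "Call-ID: a84b4c76e66710\r\n"
          "CSeq: 314159 INVITE\r\n\r\n")
```

A real SIP stack handles far more (transactions, retransmissions, SDP negotiation), but the readability of the wire format is exactly what the sketch demonstrates.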

Transport Layer Protocols: TCP vs. UDP for Voice

At the transport layer of the OSI model, VoIP primarily uses two protocols for communication: the Transmission Control Protocol (TCP) and the User Datagram Protocol (UDP). The choice between them depends on the type of VoIP traffic being sent. Signaling traffic, which handles call setup and control, can use either TCP or UDP, depending on the specific signaling protocol. For example, SIP can operate over both. However, the actual voice media stream, which consists of the Real-time Transport Protocol (RTP) packets, almost exclusively uses UDP. The reasons for this were a key concept for the Cisco 642-642. TCP is a connection-oriented protocol that guarantees the reliable delivery of data. It achieves this through a system of acknowledgments and retransmissions. When a segment of data is sent, TCP waits for an acknowledgment from the receiver. If one isn't received within a certain time, it retransmits the data. It also ensures that packets are delivered in the correct order. 

While this reliability is excellent for applications like file transfers or email, it is detrimental to real-time voice conversations. The overhead of acknowledgments and the delay caused by retransmitting lost packets would introduce unacceptable latency and jitter into a call. UDP, on the other hand, is a connectionless, "best-effort" protocol. It sends packets without establishing a prior connection and without any mechanism to guarantee delivery or order. It simply sends the data and hopes it arrives. This may sound unreliable, but it is precisely what makes it ideal for voice traffic. UDP has very low overhead, which means it is fast and efficient. For a voice call, it is better to miss a small packet of audio, which might result in a barely perceptible audio glitch, than it is to delay the entire conversation waiting for a retransmission of that old packet. 

The actual voice media is carried within packets using the Real-time Transport Protocol (RTP). RTP itself runs on top of UDP. RTP adds a sequence number and a timestamp to each packet. These additions don't provide reliability in the TCP sense, but they allow the receiving device to reorder any packets that arrive out of sequence and to help mitigate the effects of network jitter. This combination of UDP for low-overhead transport and RTP for sequencing and timing information provides the best possible framework for delivering real-time media over an IP network, forming the basis for QoS strategies.
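The sequence number and timestamp mentioned above, together with a synchronization source (SSRC) identifier, occupy a fixed 12-byte RTP header. A rough Python sketch of packing and parsing that layout (field positions follow the RFC 3550 header; the helper names are illustrative):

```python
import struct

def build_rtp_header(seq, timestamp, ssrc, payload_type=0):
    """Pack a minimal 12-byte RTP header (RFC 3550 layout).

    Version 2; padding, extension, CSRC count, and marker all zero.
    payload_type 0 is PCMU (G.711 mu-law)."""
    byte0 = 2 << 6                       # version 2 in the top two bits
    byte1 = payload_type & 0x7F          # marker bit clear
    return struct.pack("!BBHII", byte0, byte1, seq, timestamp, ssrc)

def parse_rtp_header(data):
    """Unpack the fixed RTP header fields from the first 12 bytes."""
    byte0, byte1, seq, timestamp, ssrc = struct.unpack("!BBHII", data[:12])
    return {"version": byte0 >> 6, "payload_type": byte1 & 0x7F,
            "seq": seq, "timestamp": timestamp, "ssrc": ssrc}
```

The receiver uses `seq` to detect loss and reordering and `timestamp` to drive the de-jitter buffer; neither triggers a retransmission, which is the key contrast with TCP.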

Introduction to Quality of Service (QoS)

Quality of Service (QoS) is not a single feature but rather a collection of networking technologies and strategies designed to manage network resources and provide differentiated service levels to different types of traffic. In a standard, non-QoS network, all packets are treated equally in a "best-effort" or "first-in, first-out" manner. While this is acceptable for applications that are not time-sensitive, like email or web browsing, it is completely inadequate for real-time applications like VoIP and video conferencing. The Cisco 642-642 exam was entirely focused on mastering these technologies. 

The fundamental goal of QoS is to overcome the network limitations of bandwidth, delay, jitter, and packet loss. Bandwidth is the maximum data rate a network link can support. Delay, or latency, is the time it takes for a packet to travel from its source to its destination. Jitter is the variation in the packet arrival delay. Packet loss occurs when network congestion forces routers to drop packets. Any of these factors can severely degrade the quality of a real-time conversation, making QoS an essential component of any converged network. QoS works by enabling network administrators to define policies that classify and prioritize traffic. For example, a policy could be created to identify all voice traffic and give it the highest priority, ensuring it gets processed by routers and switches before any other data. 

This same policy could assign a lower priority to bulk file transfers and the lowest priority to recreational web browsing. By managing how bandwidth is allocated and how packets are queued during times of congestion, QoS ensures that the most critical applications always receive the resources they need to function properly. Implementing a successful QoS strategy involves a multi-step process. First, you must identify the different types of traffic on your network and define your business requirements for each. Second, you need to classify the traffic so network devices can distinguish between high-priority and low-priority packets. Third, you apply specific QoS tools and mechanisms, such as queuing, shaping, and policing, to enforce the policies you have defined. This end-to-end approach ensures that traffic is treated appropriately as it traverses the entire network path, which is a core philosophy behind the Cisco 642-642 curriculum.
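Policing, one of the enforcement tools named above, is commonly modeled as a token bucket: tokens accrue at the committed rate up to a burst depth, and a packet conforms only if enough tokens are available when it arrives. A deterministic Python sketch of that idea (class and parameter names are illustrative; real platforms add exceed/violate actions and dual-bucket variants):

```python
class TokenBucketPolicer:
    """Single-rate token-bucket policer sketch: conform or exceed.

    cir_bps:  committed information rate in bits per second
    bc_bytes: committed burst size (bucket depth) in bytes
    Timestamps are supplied by the caller, keeping the example deterministic.
    """
    def __init__(self, cir_bps, bc_bytes):
        self.rate = cir_bps / 8.0        # refill rate in bytes per second
        self.depth = bc_bytes
        self.tokens = bc_bytes           # bucket starts full
        self.last = 0.0

    def offer(self, packet_bytes, now):
        # Refill tokens for the elapsed time, capped at the bucket depth.
        self.tokens = min(self.depth,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if packet_bytes <= self.tokens:
            self.tokens -= packet_bytes
            return "conform"
        return "exceed"
```

A policer drops or re-marks exceeding traffic immediately, whereas a shaper would buffer it and release it later; both enforce the same rate contract.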

Why QoS is Critical for Voice Traffic

Voice traffic has very specific and stringent requirements for network performance, making it uniquely sensitive to network impairments. Unlike most data applications, VoIP cannot tolerate significant delay, jitter, or packet loss. A one-way delay of more than 150 milliseconds (ms) becomes noticeable to the human ear and can lead to people talking over each other, making conversation difficult and unnatural. The entire process of speaking, packetizing, transmitting, and de-packetizing must occur within this tight time window. QoS is the mechanism that ensures this happens reliably. Jitter, the variation in packet arrival times, is another major enemy of voice quality. Voice packets are generated at a constant rate, typically one packet every 20 or 30 milliseconds. 

They are meant to be played back at the receiving end at that same constant rate. If packets arrive with inconsistent timing due to network congestion, the receiving device's de-jitter buffer may either overflow or run empty. This results in dropped audio or glitches that make the speech sound choppy and distorted. QoS queuing mechanisms are designed to smooth out the flow of voice packets, minimizing jitter. Packet loss also has a much more immediate impact on voice than on data. If a packet is lost in a file transfer, TCP will simply retransmit it. For a voice call using UDP, a lost packet is gone forever. While modern codecs and endpoints have some basic error concealment mechanisms to guess what the missing audio might have been, anything more than a small amount of packet loss will be clearly audible. 

A packet loss rate of even one percent can significantly degrade call quality. QoS helps prevent packet loss for voice traffic by giving it priority access to bandwidth and buffer space. Ultimately, the goal of implementing QoS for voice, as detailed in the Cisco 642-642 syllabus, is to make the VoIP phone call sound as good as, or better than, a call on the traditional telephone network. This requires providing voice traffic with a "virtual private line" through the converged network. By prioritizing voice packets and protecting them from the bursty, high-volume nature of data traffic, QoS ensures that businesses can rely on their IP network for mission-critical communications. It transforms a best-effort data network into a predictable, high-performance multimedia network.
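The de-jitter buffer behavior described above can be illustrated with a toy simulation: packets are played out at a fixed interval starting one buffer depth after the first arrival, and any packet that arrives after its playout instant is effectively lost, producing an audible glitch. A simplified Python sketch (the function name and the 20 ms / 40 ms defaults are illustrative; real endpoints use adaptive buffers):

```python
def playout_with_jitter_buffer(arrival_times_ms, packet_interval_ms=20,
                               buffer_depth_ms=40):
    """Count late-loss packets under a fixed playout schedule.

    Packet i is scheduled for playout at
    first_arrival + buffer_depth + i * interval; a packet arriving
    after that instant is counted as lost (played as a glitch)."""
    base = arrival_times_ms[0] + buffer_depth_ms
    late = 0
    for i, arrival in enumerate(arrival_times_ms):
        playout = base + i * packet_interval_ms
        if arrival > playout:
            late += 1
    return late

# Packets sent every 20 ms; heavy jitter delays the third packet
# past its 80 ms playout slot, so it counts as a late loss:
#   playout_with_jitter_buffer([0, 20, 95, 60, 80])
```

A deeper buffer tolerates more jitter but adds fixed delay to every call, which is the trade-off against the roughly 150 ms one-way budget.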

Preparing for Modern Cisco Collaboration Certifications

While the Cisco 642-642 exam and the CCNP Voice certification are retired, the skills they represent are still in high demand. The modern equivalent is the CCNP Collaboration certification. This updated track reflects the current state of communication technology, placing a greater emphasis on solutions like Cisco Webex, video integration, and collaboration endpoint configuration. However, the principles of call control, gateway implementation, and, most importantly, Quality of Service remain at its core. 

A solid understanding of the old 642-642 material provides an excellent foundation for success. The current CCNP Collaboration certification requires passing a core exam, "Implementing and Operating Cisco Collaboration Core Technologies (CLCOR)," and one concentration exam of your choice. The CLCOR exam covers a broad range of topics, including infrastructure and design, protocols, codecs, endpoints, gateways, call control, and collaboration applications. It integrates QoS as a fundamental piece of the overall architecture rather than treating it as a separate, specialized exam. 

This reflects the reality that QoS is not an optional add-on but an essential part of any successful collaboration deployment. To prepare for the new certification, candidates should still build a deep understanding of QoS. This means studying the different QoS models like IntServ and DiffServ, mastering classification and marking techniques using tools like NBAR, and understanding how to configure queuing mechanisms like Low Latency Queuing (LLQ). Lab practice is essential. Building a home lab with virtualized Cisco routers, switches, and a CUCM instance allows you to experiment with different QoS configurations and see their impact on traffic flow. 

This hands-on experience is invaluable for both the exam and real-world scenarios. Beyond the technical knowledge, it is also important to understand the business case for collaboration technologies. The goal is not just to make phones ring but to enable more effective communication and teamwork within an organization. A successful collaboration engineer can translate business requirements into a technical design that is reliable, scalable, and secure. The disciplined approach to network performance required by the Cisco 642-642 QoS curriculum is a key part of that skill set, ensuring that the technology delivers a consistently high-quality user experience that supports business objectives.

Revisiting QoS in the Context of Cisco 642-642

The Cisco 642-642 exam placed significant emphasis on understanding the theoretical frameworks that underpin Quality of Service. Before an engineer can configure specific QoS tools, they must first understand the overarching models that define how QoS can be implemented on a network. These models provide a high-level approach to managing traffic and allocating resources. 

They are not mutually exclusive technologies but rather different philosophies for achieving service differentiation. The three primary models covered were the Best-Effort model, the Integrated Services (IntServ) model, and the Differentiated Services (DiffServ) model. Each has its own mechanisms, benefits, and drawbacks. The Best-Effort model is the simplest and is the default behavior on most networks. In this model, there is no QoS. The network treats all packets equally and forwards them on a first-in, first-out basis. There are no guarantees of delivery, timing, or bandwidth. While simple to implement, this model is unsuitable for networks that need to carry real-time traffic like voice or video alongside regular data traffic. 

During periods of congestion, critical voice packets would be delayed or dropped just as readily as non-critical file transfer packets, leading to poor application performance. The Integrated Services and Differentiated Services models were developed to address the shortcomings of the Best-Effort approach. They both aim to provide guarantees and predictability but do so in very different ways. IntServ focuses on reserving resources explicitly for each individual traffic flow, making it very granular but difficult to scale. DiffServ takes a more aggregated approach, classifying traffic into a small number of classes and providing different levels of service to each class. 

The Cisco 642-642 curriculum required a thorough understanding of the mechanics and suitability of each model for different enterprise network scenarios. Choosing the right QoS model, or often a combination of models, is a critical network design decision. The choice depends on factors such as the size and complexity of the network, the types of applications being supported, and the administrative overhead the organization can handle. While DiffServ has become the dominant model for its scalability and flexibility, understanding IntServ is still important as some of its concepts are relevant, and it provides a valuable contrast that clarifies why DiffServ is generally preferred in modern enterprise environments.

The Best-Effort Model

The Best-Effort model is the default mode of operation for IP networks. As the name implies, the network makes no promises about the delivery of packets. It simply does its best to forward them to their destination. There is no concept of traffic priority or resource reservation. All packets are placed into a single queue and are processed in the order they arrive. This model is perfectly adequate for many traditional data applications, such as email, file transfers (FTP), and general web browsing. These applications are resilient to variations in delay and can retransmit lost packets without a noticeable impact on the user. The primary advantage of the Best-Effort model is its simplicity. It requires no special configuration on routers or switches, making it easy to deploy and manage. 

The network hardware can focus on its core task of forwarding packets as quickly as possible without the additional processing overhead of inspecting, classifying, and prioritizing them. For small networks with ample bandwidth and no real-time applications, this model can be sufficient. However, its limitations become immediately apparent when applications with different performance requirements are introduced onto the same network infrastructure. The major drawback of the Best-Effort model is its complete lack of predictability. During times of network congestion, when the volume of traffic exceeds the capacity of a link, routers must begin to drop packets. In a Best-Effort model, the packets that get dropped are simply the ones that happen to arrive when the queue is full. 

This means a critical voice packet from a CEO's conference call has the same chance of being discarded as a packet from a large, non-urgent software download. This unpredictability makes it impossible to guarantee the performance of sensitive applications. This is why the study of QoS, as mandated by the Cisco 642-642 exam, begins with understanding the limitations of the Best-Effort model. It serves as the baseline from which all other QoS models are judged. By recognizing why this default behavior fails to meet the needs of a converged network, engineers can better appreciate the necessity and function of more advanced models like IntServ and DiffServ. The move away from Best-Effort is the first and most critical step in building a network capable of delivering a high-quality user experience for all applications.

The Integrated Services (IntServ) Model

The Integrated Services (IntServ) model was one of the first major attempts to provide true end-to-end QoS guarantees on an IP network. It is often described as a "hard QoS" model because it focuses on explicitly reserving network resources for specific traffic flows. Before an application sends its data, it must first request a particular level of service from the network. This request is handled by a signaling protocol called the Resource Reservation Protocol (RSVP). The application uses RSVP to signal its traffic characteristics and requirements, such as bandwidth and delay, to all the routers along the path. Each router in the path examines the RSVP request. 

If it has sufficient available resources (bandwidth and buffer space) to meet the application's needs without impacting existing reservations, it accepts the request and forwards it to the next router. If all routers along the path accept the request, the reservation is successfully established, and the network provides a firm guarantee that the application's traffic will receive the requested level of service. If any router in the path cannot meet the request, the reservation is denied, and the application is informed that it cannot send its data with the desired quality. 

The primary benefit of the IntServ model is its ability to provide very strong, quantifiable guarantees for application performance. Because resources are explicitly reserved, an application like a video conference can be assured that it will have the bandwidth it needs for the entire duration of the session. This per-flow granularity is a powerful feature. 

However, this is also its biggest weakness. Every router in the path must process RSVP messages and maintain state information for every single reserved flow. This creates significant processing and memory overhead on the network devices. Due to these scalability issues, the IntServ model is rarely deployed across an entire enterprise network or the internet. The state-keeping requirement makes it impractical for core routers that handle thousands or millions of traffic flows. Where IntServ and RSVP might still be found is in smaller, controlled environments or for specific applications where absolute guarantees are necessary. For the Cisco 642-642, understanding IntServ was crucial for comprehending the evolution of QoS and the reasons behind the development of the more scalable DiffServ model.

The Differentiated Services (DiffServ) Model

The Differentiated Services (DiffServ) model was developed to address the scalability limitations of IntServ. Instead of managing reservations for individual flows, DiffServ takes a more practical, class-based approach. It is considered a "soft QoS" model because it provides service differentiation rather than absolute guarantees. In the DiffServ model, packets are classified and marked into a small, predefined number of service classes at the edge of the network. The core network routers then use these markings to apply a specific per-hop behavior (PHB) to the packets, such as giving them priority queuing or a higher chance of being forwarded during congestion. 

The marking is done using the Differentiated Services Code Point (DSCP) field in the IP header. This six-bit field allows for up to 64 different traffic classes. This is a significant improvement over the older IP Precedence field, which only allowed for eight classes. When a packet enters the network, an edge router inspects it to determine its class (e.g., voice, video, transactional data) and then sets the DSCP value accordingly. From that point on, routers within the network do not need to perform deep packet inspection; they only need to look at the simple DSCP marking to know how to treat the packet. 
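The bit layout described above is easy to verify. In the following sketch, the helper names are invented for illustration; the arithmetic itself follows from the fact that DSCP occupies the upper six bits of the ToS/DS byte (the lower two bits are used by Explicit Congestion Notification).

```python
# Where DSCP lives: the upper six bits of the IP header's ToS/DS byte.

def dscp_from_tos(tos_byte):
    """Return the DSCP value carried in a ToS/DS byte."""
    return tos_byte >> 2

def tos_from_dscp(dscp):
    """Build a ToS/DS byte from a DSCP value (ECN bits left at zero)."""
    return dscp << 2

# EF (DSCP 46) appears on the wire as ToS byte 0xB8 (184).
print(tos_from_dscp(46))       # 184
print(dscp_from_tos(0xB8))     # 46
```

This is also why packet captures sometimes show a "ToS 0xB8" for voice traffic: it is simply DSCP 46 shifted two bits to the left.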

This approach is highly scalable because the core routers do not need to maintain any per-flow state information. They only need to be configured to handle a few classes of service. For example, a router might be configured to place all packets marked with DSCP value EF (Expedited Forwarding) into a priority queue, ensuring low latency for voice traffic. Packets marked with various AF (Assured Forwarding) values might be given different levels of bandwidth guarantees and drop probabilities, while packets with the default BE (Best-Effort) marking receive no special treatment. The DiffServ model is the most widely deployed QoS architecture in enterprise networks today and was a central focus of the Cisco 642-642 exam. 

It provides a flexible and scalable framework for implementing QoS policies. While it does not offer the absolute, end-to-end guarantees of IntServ, it provides a robust mechanism for ensuring that critical applications receive better service than non-critical ones, which is sufficient for the vast majority of business requirements. Its effectiveness relies on a consistent classification and marking strategy at the network edge and corresponding queuing and scheduling policies in the core.

Comparing IntServ and DiffServ for Enterprise Networks

When comparing the Integrated Services and Differentiated Services models, the primary trade-off is between the strength of the service guarantee and the scalability of the solution. IntServ, with its per-flow resource reservation using RSVP, offers strong, end-to-end quantitative guarantees. It essentially creates a circuit-like path over the packet-switched network for a specific application session. This is ideal for applications that cannot tolerate any degradation in service. 

However, the cost of this guarantee is high in terms of router overhead, as every router must process and maintain state for every single flow. DiffServ, in contrast, offers qualitative differentiation rather than quantitative guarantees. It doesn't reserve bandwidth for a specific call; instead, it ensures that voice traffic as a class of service is treated better than, for example, email traffic. This class-based approach is far more scalable. Core network devices only need to understand a handful of service classes, identified by DSCP markings, regardless of how many individual flows are traversing the network. 

This eliminates the per-flow state problem of IntServ, making DiffServ suitable for large, complex enterprise networks and the internet. In a typical enterprise, DiffServ is the pragmatic choice for implementing QoS. It provides the necessary tools to protect real-time voice and video traffic from being impacted by bulk data traffic without overburdening the network infrastructure. The configuration is also more manageable. Policies for classifying and marking traffic are typically applied only at the network edge or trust boundaries, while the core of the network simply acts on those markings. This aligns with the Cisco 642-642 philosophy of pushing complexity to the edge and keeping the core fast and simple. While DiffServ is the dominant model, it's not a complete replacement for IntServ in all scenarios.

In some specific use cases, such as managing bandwidth over a slow WAN link for a critical video conferencing system, RSVP from the IntServ model might be used in conjunction with a DiffServ architecture. For instance, RSVP could be used to signal the need for a reservation across the WAN, and the WAN edge routers could then map this request into the appropriate DiffServ class. This hybrid approach can offer the best of both worlds, though it increases configuration complexity.

Implementing DiffServ with Differentiated Services Code Point

The key mechanism for implementing the DiffServ model is the Differentiated Services Code Point, or DSCP. This is a six-bit field occupying the upper six bits of the IP header's Type of Service (ToS) byte, which DiffServ redefines as the Differentiated Services (DS) field. Because it is six bits long, it allows for 2^6, or 64, possible values (0-63). 

These values are used to "mark" packets, assigning them to a specific traffic class. Network devices like routers and multilayer switches can then read this DSCP value to make decisions about how to forward the packet, particularly during times of congestion. The Cisco 642-642 exam required detailed knowledge of these values. The 64 DSCP values are not just arbitrary numbers; they are organized into pools that correspond to standardized per-hop behaviors (PHBs). The most important PHB for voice traffic is Expedited Forwarding (EF).

The recommended DSCP value for EF is 46 (binary 101110). Packets marked with EF are intended for applications requiring low delay, low jitter, and low loss. Network devices are configured to give EF traffic strict priority queuing, meaning it always gets sent before any other traffic. This makes EF the ideal class for VoIP media streams. Another important group of PHBs is Assured Forwarding (AF). 

The AF PHB group provides a level of service assurance but, unlike EF, AF-marked packets may be dropped during congestion. The group defines four classes (AF1 through AF4), each with three drop precedences (low, medium, high). This is represented by a notation like AF31, which means Assured Forwarding class 3 with drop precedence 1 (low); within a class, packets with a higher drop precedence are discarded first when congestion occurs. This granularity makes the AF classes suitable for a wide range of applications, such as interactive video, transactional data, and bulk data transfers. 

Finally, there is the Best-Effort (BE) or Default Forwarding (DF) class, which has a DSCP value of 0. This is for all traffic that does not require any special treatment. There is also a Class Selector (CS) pool, which provides backward compatibility with the older IP Precedence scheme. For example, CS5 is equivalent to an IP Precedence value of 5. A successful DiffServ implementation relies on a well-planned policy that maps different applications to these DSCP values consistently across the entire network.

The QoS Baseline and Cisco's Recommendations

To simplify the implementation of DiffServ, Cisco has developed a set of best-practice recommendations often referred to as the QoS Baseline. This provides a standardized model for how many traffic classes to create and which applications should be placed into each class. Following a baseline model helps ensure consistent and predictable QoS behavior across the network. The Cisco 642-642 curriculum heavily referenced these models. While the specifics can be adapted to an organization's needs, the baseline provides an excellent starting point for any QoS design. 

The Cisco QoS Baseline typically defines between five and eleven classes of traffic. A common five-class model might include a Voice class, a Video class, a Critical Data class, a Best-Effort class, and a Scavenger class. The Voice class, for real-time VoIP media, would be marked with DSCP EF (46) and placed in a priority queue. The Video class, for interactive video conferencing, might be marked with AF41 and given a guaranteed bandwidth queue. Critical Data, such as database transactions, could be marked AF21 and also get a guaranteed amount of bandwidth. 

The Best-Effort class, marked with DSCP 0, is the default for all traffic not otherwise classified. It gets whatever bandwidth is left over after the higher-priority classes have been served. A key addition in many modern QoS designs is the Scavenger class, often marked with CS1. This class is intended for traffic that is considered "less than best-effort," such as peer-to-peer file sharing or large, non-business-critical downloads. 

During times of congestion, traffic in the Scavenger class is the very first to be dropped, protecting all other business applications. Implementing this model requires a systematic approach. First, network traffic must be analyzed to identify the different applications. Then, a policy must be created that classifies this traffic and marks it with the appropriate DSCP value as defined by the chosen model. This marking should happen as close to the source of the traffic as possible. Finally, queuing policies must be configured on network devices, particularly at WAN interfaces and other potential congestion points, to enforce the per-hop behaviors associated with those DSCP markings. This end-to-end strategy is key to a successful deployment.
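The five-class model described above can be summarized as a simple lookup table. This is a conceptual sketch only: the class labels and the `mark` helper are invented for illustration, and a real edge policy would classify on ports, addresses, or application recognition rather than a pre-attached label.

```python
# A sketch of the five-class baseline: application class -> DSCP marking.

FIVE_CLASS_MODEL = {
    "voice":         46,   # EF   - strict priority queue
    "video":         34,   # AF41 - guaranteed bandwidth queue
    "critical-data": 18,   # AF21 - guaranteed bandwidth queue
    "scavenger":      8,   # CS1  - first to be dropped under congestion
    "best-effort":    0,   # DF   - leftover bandwidth
}

def mark(app_class):
    """Return the DSCP to stamp on a packet; unknown traffic defaults to 0."""
    return FIVE_CLASS_MODEL.get(app_class, 0)

print(mark("voice"))        # 46
print(mark("video"))        # 34
print(mark("unknown-app"))  # 0 (falls into Best-Effort)
```

Defaulting unclassified traffic to DSCP 0 mirrors the design principle in the text: Best-Effort is the catch-all class, while every special treatment must be deliberately assigned at the edge.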

