Cisco Router QoS with Policy Map: Optimizing WAN Links for Better Performance

Quality of Service (QoS) is a critical tool for managing network traffic, especially on WAN links where bandwidth is often limited. By configuring a Cisco router with a policy map, network administrators can prioritize traffic, allocate bandwidth, and prevent congestion. In this post, we’ll dive into a sample QoS configuration using a policy map, explain key terms like priority, bandwidth, remaining bandwidth percentage, shape-average, and random early detection (random-detect), and explore why QoS is essential for WAN link optimization.

Sample QoS Configuration with Policy Map

Here’s an example of a QoS policy map configuration on a Cisco router:

class-map match-all VOICE
 match ip dscp ef
class-map match-all VIDEO
 match ip dscp af41
class-map match-all DATA
 match ip dscp default

policy-map WAN-QOS
 class VOICE
  priority percent 20
 class VIDEO
  bandwidth percent 30
  random-detect
 class DATA
  bandwidth remaining percent 50
  shape average 5000000
 class class-default
  fair-queue

interface GigabitEthernet0/0/0
 service-policy output WAN-QOS
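
Once the policy is attached, you can confirm the configuration and watch per-class counters (packets matched, drops, queue depth) with the standard show commands:

show policy-map WAN-QOS
show policy-map interface GigabitEthernet0/0/0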

Why Use QoS for WAN Link Optimization?

WAN links typically have lower bandwidth compared to LANs, and they’re prone to congestion when traffic exceeds capacity. Without QoS, critical applications like VoIP or video conferencing can suffer from latency, jitter, or packet loss. QoS ensures efficient use of limited bandwidth by:

  • Prioritizing latency-sensitive traffic (e.g., voice).
  • Guaranteeing bandwidth for specific applications.
  • Preventing congestion by shaping traffic and dropping low-priority packets when needed.

For example, on a 10 Mbps WAN link, QoS can ensure that voice traffic gets low latency, video gets adequate bandwidth, and non-critical data doesn’t overwhelm the link.

Key QoS Components Explained

  1. Priority
    • Command: priority percent 20
    • Explanation: The priority keyword places a class (e.g., VOICE) into the low-latency queue (LLQ). Here 20% of the interface bandwidth (2 Mbps on a 10 Mbps link) is reserved for voice traffic. During congestion this traffic is serviced first, minimizing the delay and jitter that real-time applications like VoIP cannot tolerate, but it is also policed to the configured rate so it cannot starve the other classes. (Absolute-rate equivalents of the priority and bandwidth commands are sketched after this list.)
  2. Bandwidth
    • Command: bandwidth percent 30
    • Explanation: The bandwidth keyword guarantees a minimum bandwidth allocation for a class (e.g., VIDEO gets 30%, or 3 Mbps on a 10 Mbps link). Unlike priority, it doesn’t provide low-latency queuing but ensures the class gets its share during congestion.
  3. Remaining Bandwidth Percentage
    • Command: bandwidth remaining percent 50
    • Explanation: This allocates a percentage of the remaining bandwidth after priority and bandwidth reservations. For example, if VOICE takes 20% (2 Mbps) and VIDEO takes 30% (3 Mbps), 5 Mbps remains. The DATA class gets 50% of that (2.5 Mbps). This ensures fair distribution of leftover capacity.
  4. Shape-Average
    • Command: shape average 5000000
    • Explanation: Traffic shaping limits the rate of traffic to a specified value (e.g., 5 Mbps). Shape-average smooths bursts by buffering excess traffic and sending it out at a steady rate. This prevents downstream congestion on WAN links where the provider might drop packets if traffic exceeds the contracted rate.
    • How the Buffer Works: Excess packets are held in a buffer and released according to a token bucket algorithm. Tokens are replenished at the configured rate (5 Mbps); if no tokens are available, packets wait in the buffer (or are dropped once it fills), so the average rate never exceeds the limit. (The Bc/Tc arithmetic behind this is sketched after this list.)
  5. Random Early Detection (RED)
    • Command: random-detect
    • Explanation: When congestion builds, random-detect proactively drops packets from the queue based on a probability curve, before the queue is completely full. This prevents many TCP sessions from backing off and retransmitting in lockstep (global synchronization), which improves overall throughput. It is the opposite of traditional tail drop, which discards every arriving packet once the queue is full. On Cisco routers the MQC random-detect command actually implements Weighted RED (WRED), so drop thresholds can be set per IP precedence or per DSCP value. (A DSCP-based sketch with explicit thresholds follows this list.)
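
If you prefer absolute rates to percentages, priority and bandwidth also accept values in kbps. A minimal sketch, assuming the same 10 Mbps link used above (the policy name WAN-QOS-ABS is just for illustration):

policy-map WAN-QOS-ABS
 ! low-latency queue; voice is policed to 2000 kbps (2 Mbps) during congestion
 class VOICE
  priority 2000
 ! minimum guarantee of 3000 kbps (3 Mbps) for video, with no latency guarantee
 class VIDEO
  bandwidth 3000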
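
The shaper’s token bucket can also be tuned by adding the committed burst (Bc, in bits) as a second argument to shape average; the interval Tc then works out to Bc / CIR. A sketch with an illustrative Bc of 50,000 bits (not a recommendation):

policy-map WAN-QOS
 class DATA
  ! CIR = 5,000,000 bps, Bc = 50,000 bits
  ! Tc = Bc / CIR = 50,000 / 5,000,000 = 10 ms,
  ! so the shaper releases at most 50,000 bits per 10 ms interval
  shape average 5000000 50000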
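
Because random-detect is really WRED on these platforms, the drop thresholds can also be set per DSCP value. A sketch for the VIDEO class; the thresholds of 32 and 40 packets and the 1-in-10 maximum drop probability are illustrative values, not recommendations:

policy-map WAN-QOS
 class VIDEO
  bandwidth percent 30
  ! base drop decisions on DSCP instead of IP precedence
  random-detect dscp-based
  ! af41: start random drops at a queue depth of 32 packets,
  ! drop everything above 40, with a maximum drop probability of 1 in 10
  random-detect dscp af41 32 40 10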

Why Apply Policy Maps to Outgoing Traffic Only?

Queuing and shaping policies are applied to outbound interfaces (e.g., service-policy output WAN-QOS) because a router can only schedule the traffic it sends. On a WAN link, inbound traffic has already crossed the link and been queued by the upstream device (e.g., the ISP router), so there is nothing left for your router to prioritize. An input service policy can still classify, mark, or police, but it cannot queue, prioritize, or shape (see the sketch below). Outbound QoS lets you prioritize, shape, and queue traffic before it hits the WAN, ensuring optimal performance.
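
By contrast, here is a minimal sketch of what an input policy can still do, assuming a LAN-facing interface GigabitEthernet0/0/1 and the usual RTP port range (both assumptions; adjust to your network): it classifies and marks voice packets as they enter, so the outbound WAN-QOS policy can match them by DSCP later.

ip access-list extended VOICE-RTP
 permit udp any any range 16384 32767

class-map match-all VOICE-UNMARKED
 match access-group name VOICE-RTP

policy-map MARK-LAN-IN
 ! stamp voice traffic with DSCP EF on ingress from the LAN
 class VOICE-UNMARKED
  set dscp ef

interface GigabitEthernet0/0/1
 service-policy input MARK-LAN-IN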

Conclusion

Configuring QoS with a policy map on a Cisco router is a powerful way to optimize WAN links. By using priority for real-time traffic, bandwidth and remaining bandwidth percentage for fair allocation, shape-average to control rates, and random-detect to manage congestion, you can ensure critical applications perform well even under heavy load. Whether you’re managing VoIP, video, or bulk data, QoS is your key to a stable, efficient network.
