My question is similar to: What is benefit of using CBWFQ with fair-queue statement

Given this policy-map (without class-maps) configuration:

policy-map test
 class DSCP30
  bandwidth percent 20
 class DSCP40
  bandwidth percent 30
 class DSCP50
  bandwidth percent 40
 class class-default
  fair-queue

This reserves 20% of interface bandwidth for traffic marked as DSCP30. But this queue is still handled as FIFO. I hope the following two statements are right:

  • If two clients send DSCP30 traffic and one of them is a heavy hitter, the second client will experience drops and delays in times of congestion because the queue is being FIFOed.
  • If two clients send DSCP0 traffic and one of them is a heavy hitter, the second client will experience fewer drops and delays in times of congestion because the queue is being fair-queued.

So the benefit of fair-queuing, namely that traffic is split into flows and treated equally (fairly), only applies to the class-default traffic. So why do I never see a configuration like this in tutorials or examples:

policy-map test
 class DSCP30
  bandwidth percent 20
  fair-queue
 class DSCP40
  bandwidth percent 30
  fair-queue
 class DSCP50
  bandwidth percent 40
  fair-queue
 class class-default
  fair-queue

It basically activates fair-queuing for every class. Is there a reason not to use a configuration like this?

Some background: We have mainly Citrix traffic, which uses very little bandwidth by default. But from time to time, some user uploads images from a camera and generates a lot of traffic. We don't have any way to differentiate this traffic from the rest, because it is carried in the same channel. So we would like to minimize the impact of these "image peaks" on other Citrix users.

I have read somewhere that this is called "using an FQ pre-sorter", but I am not sure if this is true. Thank you for having a look into this question.

Mario Jost

1 Answer


If two clients send DSCP30 traffic and one of them is a heavy hitter, the second client will experience drops and delays in times of congestion because the queue is being FIFOed.

Yes, this is correct. FIFO queuing performs no prioritization of user traffic; it has no concept of priority or classes of traffic. When FIFO is used, ill-behaved sources can consume all available bandwidth, bursty sources can cause delays in time-sensitive or important traffic, and important traffic may be dropped because less important traffic fills the queue.

If two clients send DSCP0 traffic and one of them is a heavy hitter, the second client will experience less drops and delays in times of congestion because the queue is being fair-queued.

It depends on the type of traffic. Weighted Fair Queuing (WFQ) is a flow-based queuing algorithm used in Quality of Service (QoS) applications that schedules low-volume traffic (telnet, for instance) first, while letting high-volume traffic (FTP, for instance) share the remaining bandwidth. This is handled by assigning a weight to each flow, where flows with lower weights are serviced first.
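For reference, legacy flow-based WFQ on IOS derives a flow's weight from the packet's IP Precedence. The commonly cited formula is sketched below; the exact constant and behavior can vary by platform and IOS release:

```
weight = 32384 / (IP Precedence + 1)

Precedence 0 (best effort):   32384 / 1 = 32384
Precedence 5 (e.g. voice):    32384 / 6 ≈ 5397
```

A lower weight means the flow is serviced earlier, so higher-precedence traffic gets a proportionally larger share within the fair-queue scheduler.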

CBWFQ allows you to specify the exact amount of bandwidth to be allocated for a specific class of traffic. Taking into account available bandwidth on the interface, you can configure up to 64 classes and control the distribution among them.

CBWFQ allows you to define what constitutes a class based on criteria that exceed the confines of flow. CBWFQ allows you to use ACLs and protocols or input interface names to define how traffic will be classified, thereby providing coarser granularity.
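As a sketch of that coarser classification, classes can match on a DSCP value (as the question's class names suggest) or on an ACL; the addresses and ACL number here are illustrative, not taken from the question:

```
class-map match-all DSCP30
 match ip dscp 30
!
access-list 101 permit ip 10.1.1.0 0.0.0.255 any
class-map match-all BRANCH-A
 match access-group 101
```

Each class-map then becomes a class in the policy-map, so a "class" can span many flows at once.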

It basically activates fair-queuing for every class. Is there a reason not to use a configuration like this?

No, I do not think there is. But administratively it can become quite hard to maintain.

CBWFQ allows the user to reserve a minimum bandwidth for a class during congestion. However, this scheme does not work well for voice traffic, which is intolerant of delay. Delay in voice traffic results in irregular transmission causing jitter in the heard conversation. For good voice quality, the one-way end-to-end delay should ideally be less than 150 milliseconds (ms). The new feature provided by LLQ is especially important for ensuring voice quality on slow-speed links. So you would definitely need to define your priority queue.
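A hedged sketch of how the question's policy could add such a priority queue; the VOICE class and the 128 kbit/s figure are assumptions for illustration, not part of the original configuration:

```
policy-map test
 class VOICE
  priority 128
 class DSCP30
  bandwidth percent 20
 class class-default
  fair-queue
```

With `priority`, the VOICE class is serviced ahead of the bandwidth classes but is policed to its configured rate during congestion, so it cannot starve them.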

Think about the fair-queue configuration. It basically creates mini queues. So the question is how many? What happens when we have a new conversation, but all of the queues are being used? The answer is that the number of queues is configurable, but when the queues are all in use, new sessions are dropped.
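In MQC the number of dynamic flow queues can be given as an argument to `fair-queue` (a power of two; the accepted range depends on the platform and IOS release). For example, assuming a platform that accepts 1024:

```
policy-map test
 class class-default
  fair-queue 1024
```

More queues reduce the chance that new conversations find every queue in use, at the cost of memory.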

FIFO, which is the fastest method of queuing, is effective for large links that have little delay and minimal congestion. If your link has very little congestion, FIFO queuing may be the only queuing you need to use.

  • Thanks for the detailed answer. So, you mentioned that if all fair-queue "mini queues" are in use, connections are dropped. So if I have enabled fair-queue on class-default with a hypothetical limit of 4 queues, 4 clients are sending 10 kbit/s of constant traffic each, and a 5th client wants to send traffic, is all of the 5th client's data dropped? Even if a Gigabit interface is utilized at 40 kbit/s only? Depending on the number of connections, this would be a reason not to use fair-queueing on all classes... – Mario Jost Apr 01 '19 at 12:25
    @MarioJost It's very hard to find any definitive explanation of this from Cisco, but theoretically, yes. The problem is just that I've not been able to recreate this in a test environment. –  Apr 01 '19 at 12:37