Why was the fair queuing algorithm proposed?
Analysis and Simulation of a Fair Queuing Algorithm

1) Definitions:
a) Per Flow Queuing:
b) Queue Management vs. Queue Scheduling:
c) Work Conserving:
d) Max-Min Fairness: The basic idea is to share the available bandwidth as fairly as possible; if there is a residue, we give it to someone who can use it. The following example illustrates the idea: the first flow needs a bandwidth of 3, the second one needs 1, and the bottleneck bandwidth is 3. We start by allocating 1.5 to each user. However, the second one can only use 1, so we give it the 1 it requires and add the remaining 0.5 to the first user.

2) Fair Queuing: In Fair Queuing, the max-min index is used. The most fair algorithm is bit-by-bit round robin, where one bit of each flow is serviced per round; however, it is not practical. Nagle suggested a packet-by-packet round robin solution. The main drawback of his algorithm is that it does not take the size of packets into account, so a flow with small packets receives less bandwidth than a flow with large packets. As a solution, the Fair Queuing algorithm was proposed. Its main objective is to emulate the bit-by-bit round robin algorithm on a packet-by-packet basis.

Na = the number of active flows
u = the available bandwidth
R(t) = the number of rounds that bit-by-bit round robin would have performed by time t (can be a fraction)
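As an aside, the max-min allocation walked through in the definitions above can be computed by progressive filling. This is a sketch, not code from the notes; the function name and the tolerance constant are my own:

```python
def max_min_fair(capacity, demands):
    """Max-min fair allocation by progressive filling: repeatedly split
    the remaining capacity equally among unsatisfied flows; a flow that
    needs less than its share keeps only what it needs, and the residue
    is redistributed to the others."""
    alloc = [0.0] * len(demands)
    unsatisfied = set(range(len(demands)))
    remaining = capacity
    while unsatisfied and remaining > 1e-12:
        share = remaining / len(unsatisfied)
        for i in list(unsatisfied):
            grant = min(share, demands[i] - alloc[i])
            alloc[i] += grant
            remaining -= grant
            if alloc[i] >= demands[i] - 1e-12:
                unsatisfied.discard(i)  # demand fully met
    return alloc

# The example from the notes: demands 3 and 1, bottleneck bandwidth 3.
# Each user first gets 1.5; user 2 keeps only 1, and the 0.5 residue
# goes to user 1, for a final allocation of [2.0, 1.0].
print(max_min_fair(3, [3, 1]))
```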
The idea is to use R(t) as the clock instead of real time; i.e., the arrival time of a packet, for example, would be recorded as R(t) rather than t. Also, notice the following relation: dR(t)/dt = u / Na. As the number of flows increases, the slope of R(t) decreases (in bit-by-bit round robin, the number of rounds performed in a given time interval decreases because the queue has to service more bits per round).

Case 1) Empty Queue: Suppose that a packet P arrives at t0 to an empty queue. R(t0) is its arrival time, so R(tf) = R(t0) + P. Consider the following example: P1 arrives at t0 with R(t0) = 12 and P1 = 10 bits, so R(tf) = 22. P2 arrives earlier at another queue. Since the finish time of the first packet is smaller, we service it first. It is clear that under bit-by-bit round robin, the first packet would have finished earlier.

Case 2) Full Queue: If P arrives at a queue that already holds packets, then the previous packet must finish before we can start to service P. Therefore the finish time of the previous packet must be taken into account.
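A minimal sketch of this bookkeeping (the function names are my own; the notes only fix R(t), u, Na, and the two arrival cases, which combine into taking the later of the previous finish round and the current round number):

```python
def advance_round_number(R, dt, u, Na):
    """Advance the bit-by-bit round number R(t) over an interval of
    length dt on which the set of active flows is constant:
    dR/dt = u / Na, so more active flows means a flatter R(t)."""
    return R + u * dt / Na if Na > 0 else R

def finish_round(prev_finish, R_now, size):
    """Finish round of an arriving packet of `size` bits.
    Empty queue (Case 1): start at R(t).
    Backlogged queue (Case 2): start when the previous packet finishes.
    Both cases: start = max(prev_finish, R(t))."""
    return max(prev_finish, R_now) + size

# Example from the notes: P1 (10 bits) arrives to an empty queue
# when R(t0) = 12, so its finish round is 22.
print(finish_round(0, 12, 10))
# With u = 1 and Na = 2, R advances only 2 rounds over 4 time units.
print(advance_round_number(12, 4, u=1, Na=2))
```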
General Case) In general, for a particular flow:
Fi = finish time of packet i
Si = max(Fi-1, R(t)) and Fi = Si + Pi
Example: R(t0) = 4, and 2 packets of size 2 arrive for C1 at t0. The following figure shows the calculated finish times for each packet. According to these times, we service the packets in increasing order (C1 is the flow in the top queue).

Another Goal: Another goal of the Fair Queuing algorithm is to give more promptness to users consuming less than their fair share. This is achieved by using another variable, the bid Bi, instead of the finish time when determining which packet to forward: Bi = Pi + max(Fi-1, R(t) - d). For users heavily utilizing their bandwidth, the Fi-1 term dominates; for idle users, however, the R(t) - d term gives preference to users who have been idle for up to d rounds.

Performance: The following figure shows the performance of Fair Queuing as opposed to FIFO queuing. It is clear that when the new flow is using less than its fair share, it experiences less delay under FQ.

Generalized Processor Sharing (GPS) Approach
WFQ + Leaky Bucket => worst-case guarantees (bandwidth, delay).

1) The Leaky Bucket Admission Control: The role of the leaky bucket is to ensure that traffic is shaped and controlled before entering the queue; it regulates the traffic entering the network. Tokens are generated into the bucket at a rate p, and the size of the bucket is s. Each incoming packet cannot proceed before using up a token. When the bucket is full, no more tokens are produced. This way, the maximum amount of traffic injected into the network at any one time is restricted to s, and burstiness is controlled. The following figure shows how the output is restricted between the two lines corresponding to an empty bucket and a full bucket. Also, the slope of the admission rate is less than the peak rate C.

What is the WFQ algorithm? Weighted fair queueing (WFQ) is a network scheduling algorithm. WFQ is both a packet-based implementation of the generalized processor sharing (GPS) policy and a natural extension of fair queuing (FQ).
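Stepping back to the leaky-bucket admission control described above, here is a minimal sketch. The class and method names, and the choice of counting tokens per bit, are assumptions; the notes only fix the rate p and bucket size s:

```python
class LeakyBucket:
    """Leaky-bucket admission control: tokens accrue at rate p up to a
    bucket of size s, so at most s + p*T bits can enter the network
    over any interval of length T (burstiness is capped at s)."""
    def __init__(self, p, s):
        self.p = p          # token generation rate (tokens per unit time)
        self.s = s          # bucket size (maximum burst)
        self.tokens = s     # start with a full bucket
        self.last = 0.0     # time of the last admission check

    def admit(self, t, size):
        # Accrue tokens for the elapsed time, capped at the bucket size.
        self.tokens = min(self.s, self.tokens + self.p * (t - self.last))
        self.last = t
        if size <= self.tokens:
            self.tokens -= size
            return True     # conforming packet: forward it
        return False        # over the burst budget: hold or drop

lb = LeakyBucket(p=1.0, s=5.0)
print(lb.admit(0.0, 4))  # a burst of 4 fits in the full bucket -> True
print(lb.admit(0.0, 4))  # only 1 token left -> False
print(lb.admit(3.0, 4))  # 3 more tokens have accrued (total 4) -> True
```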
What is WFQ in QoS? WFQ is a flow-based queuing algorithm used in Quality of Service (QoS) that does two things simultaneously: it schedules interactive traffic to the front of the queue to reduce response time, and it fairly shares the remaining bandwidth among high-bandwidth flows.
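In WFQ, a flow's finish tag advances in proportion to packet size divided by the flow's weight. A hypothetical sketch extending the FQ rule Si = max(Fi-1, R(t)), Fi = Si + Pi from the notes (function name and example values are my own):

```python
def wfq_finish(prev_finish, R_now, size, weight):
    """WFQ finish tag: like the FQ rule F = max(F_prev, R(t)) + P,
    but a flow of weight w is only charged P / w rounds per packet,
    so higher-weight flows earn earlier finish tags and more service."""
    return max(prev_finish, R_now) + size / weight

# Two flows with equal packet sizes but weights 2 and 1: the weight-2
# flow's packet finishes at round 50, the weight-1 flow's at round 100,
# so the weight-2 flow is serviced first (and twice as often overall).
print(wfq_finish(0, 0, 100, 2))
print(wfq_finish(0, 0, 100, 1))
```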
Which algorithm is based on a queuing technique? Fair Queuing (FQ). The algorithm ensures that a high-data-rate flow cannot use more than its fair share of the link capacity. Packets are first classified into flows by the system and then assigned to a queue dedicated to that flow.
What is start-time fair queueing? "Start-time fair queueing: a scheduling algorithm for integrated services packet switching networks" presents a start-time fair queueing (SFQ) algorithm that is computationally efficient and achieves fairness regardless of variation in server capacity.
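A hypothetical sketch of SFQ's tagging rule, as I understand it from the SFQ paper (scheduling by start tag rather than finish tag avoids having to track how fast the server actually runs; all names here are my own):

```python
import heapq

class SFQ:
    """Start-time fair queueing sketch: an arriving packet gets a start
    tag S = max(v, F_prev_of_flow) and a finish tag F = S + size/weight;
    packets are dispatched in increasing start-tag order, and the
    virtual time v tracks the start tag of the packet in service."""
    def __init__(self):
        self.v = 0.0        # virtual time
        self.finish = {}    # last finish tag per flow
        self.heap = []      # (start_tag, seq, flow, size)
        self.seq = 0        # insertion counter breaking start-tag ties

    def enqueue(self, flow, size, weight):
        start = max(self.v, self.finish.get(flow, 0.0))
        self.finish[flow] = start + size / weight
        heapq.heappush(self.heap, (start, self.seq, flow, size))
        self.seq += 1

    def dequeue(self):
        start, _, flow, size = heapq.heappop(self.heap)
        self.v = start      # v advances to the packet entering service
        return flow, size

q = SFQ()
q.enqueue("a", 100, weight=1)  # S = 0, F = 100
q.enqueue("a", 100, weight=1)  # S = 100, F = 200
q.enqueue("b", 100, weight=1)  # S = 0, F = 100
# Flow b's first packet is interleaved between flow a's two packets.
print(q.dequeue()[0], q.dequeue()[0], q.dequeue()[0])
```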