

GLBP

Apr 23

Gateway Load Balancing Protocol (GLBP) is a Cisco-proprietary protocol that attempts to overcome the limitations of existing router redundancy protocols by adding basic load-balancing functionality.

In addition to letting you set priorities on different gateway routers, GLBP allows a weighting parameter to be set. Based on this weighting (compared to the other members of the same virtual router group), ARP requests are answered with MAC addresses pointing to different routers. Load balancing is therefore based not on traffic load, but on the number of hosts that use each gateway router. By default GLBP load balances in round-robin fashion.

GLBP elects one AVG (Active Virtual Gateway) per group. The other group members act as backups in case the AVG fails. If there are more than two members, the second-best AVG is placed in the Standby state and all remaining members are placed in the Listen state. This is monitored using hello and hold timers, which default to 3 and 10 seconds. The elected AVG then assigns a virtual MAC address to each member of the GLBP group, including itself, making them AVFs (Active Virtual Forwarders). Each AVF assumes responsibility for forwarding packets sent to its virtual MAC address. There can be up to four AVFs active at the same time.
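As a minimal sketch (the priority and timer values below are illustrative, not part of this lab), the AVG election can be influenced with the priority, preempt, and timer knobs:

R1(config)# interface FastEthernet0/0
R1(config-if)# glbp 1 ip 10.0.0.254
R1(config-if)# glbp 1 priority 150    —>>Higher priority wins the AVG election (default 100).
R1(config-if)# glbp 1 preempt         —>>Allow a higher-priority router to take over as AVG (disabled by default).
R1(config-if)# glbp 1 timers 3 10     —>>Hello 3 sec and hold 10 sec (the defaults).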

By default, GLBP routers use the local multicast address 224.0.0.102 to send hello packets to their peers every 3 seconds over UDP 3222 (source and destination).
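If you ever need to match this control traffic (for example, in a packet counter or a transit filter), a hedged one-liner would be (the ACL number is illustrative):

access-list 101 permit udp any host 224.0.0.102 eq 3222    —>>Matches GLBP hello packets.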

Let's now start with our topology. On PC1, PC2, PC3 and PC4 routing is disabled with "no ip routing", and all of them point towards the GLBP virtual address 10.0.0.254, which is configured on the LAN interfaces of R1 and R2 facing the end hosts. Additionally, R1 (AS 12) is connected to ISP1 (AS 501) and R2 (AS 12) is connected to ISP2 (AS 502). I have not configured R1 and R2 as iBGP neighbors, although R1 and R2 both receive the same routes from ISP1 and ISP2 respectively.

Pre-configurations of all devices.

PC1, PC2, PC3 and PC4 have addresses in the 10.0.0.0/24 range, IP routing is disabled, and the default gateway is configured as 10.0.0.254 (the GLBP address) assigned on the FastEthernet0/0 interfaces of R1 and R2.

Sample config of PC1

PC1#sh run int fa0/0
interface FastEthernet0/0
ip address 10.0.0.1 255.255.255.0
end
PC1#sh run | i default
ip default-gateway 10.0.0.254
PC1#sh ip route
Default gateway is 10.0.0.254

Host               Gateway           Last Use    Total Uses  Interface
ICMP redirect cache is empty

R1

R1#sh run int fa0/0
interface FastEthernet0/0
ip address 10.0.0.101 255.255.255.0
glbp 1 ip 10.0.0.254
R1#sh run int S0/0
interface Serial0/0
ip address 11.11.11.1 255.255.255.0
R1#sh run | s bgp
router bgp 12
no synchronization
bgp log-neighbor-changes
neighbor 11.11.11.254 remote-as 501
no auto-summary
R1#sh ip bgp summ | b Nei
Neighbor        V    AS MsgRcvd MsgSent   TblVer  InQ OutQ Up/Down  State/PfxRcd
11.11.11.254    4   501      35      31        5    0    0 00:28:31        4
R1#sh ip bgp | b Network
Network          Next Hop            Metric LocPrf Weight Path
*> 33.33.33.1/32    11.11.11.254             0             0 501 i
*> 33.33.33.2/32    11.11.11.254             0             0 501 i
*> 33.33.33.3/32    11.11.11.254             0             0 501 i
*> 33.33.33.4/32    11.11.11.254             0             0 501 i

R2

R2#sh run int fa0/0
interface FastEthernet0/0
ip address 10.0.0.102 255.255.255.0
glbp 1 ip 10.0.0.254
R2#sh run int s0/0
interface Serial0/0
ip address 22.22.22.2 255.255.255.0
R2#sh run | s bgp
router bgp 12
no synchronization
bgp log-neighbor-changes
neighbor 22.22.22.254 remote-as 502
no auto-summary
R2#sh ip bgp summ | b Neighbor
Neighbor        V    AS MsgRcvd MsgSent   TblVer  InQ OutQ Up/Down  State/PfxRcd
22.22.22.254    4   502      38      34        5    0    0 00:31:42        4
R2#sh ip bgp  | b Network
Network          Next Hop            Metric LocPrf Weight Path
*> 33.33.33.1/32    22.22.22.254             0             0 502 i
*> 33.33.33.2/32    22.22.22.254             0             0 502 i
*> 33.33.33.3/32    22.22.22.254             0             0 502 i
*> 33.33.33.4/32    22.22.22.254             0             0 502 i

ISP1

ISP1#sh run int s0/0
interface Serial0/0
ip address 11.11.11.254 255.255.255.0
ISP1#sh run | s bgp
router bgp 501
no synchronization
bgp log-neighbor-changes
network 33.33.33.1 mask 255.255.255.255
network 33.33.33.2 mask 255.255.255.255
network 33.33.33.3 mask 255.255.255.255
network 33.33.33.4 mask 255.255.255.255
neighbor 11.11.11.1 remote-as 12
no auto-summary
ISP1#sh run | i ip route
ip route 0.0.0.0 0.0.0.0 11.11.11.1

ISP2

ISP2#sh run int s0/0
interface Serial0/0
ip address 22.22.22.254 255.255.255.0
ISP2#sh run | s bgp
router bgp 502
no synchronization
bgp log-neighbor-changes
network 33.33.33.1 mask 255.255.255.255
network 33.33.33.2 mask 255.255.255.255
network 33.33.33.3 mask 255.255.255.255
network 33.33.33.4 mask 255.255.255.255
neighbor 22.22.22.2 remote-as 12
no auto-summary
ISP2#sh run | i ip route
ip route 0.0.0.0 0.0.0.0 22.22.22.2

Now let's start with the verification of GLBP. As we have not configured anything fancy in GLBP so far, let's check the output of "show glbp" on R1 and R2 respectively.

R1#sh glbp
FastEthernet0/0 - Group 1
State is Standby                      —>>This is the standby AVG, as it has the lower IP address (the election works as in HSRP or VRRP).
1 state change, last state change 00:29:49
Virtual IP address is 10.0.0.254
Hello time 3 sec, hold time 10 sec
Next hello sent in 1.772 secs
Redirect time 600 sec, forwarder timeout 14400 sec
Preemption disabled     —>>Preemption is disabled by default, as in HSRP; only VRRP has preemption enabled by default.
Active is 10.0.0.102, priority 100 (expires in 7.136 sec) —>>This points to the active AVG, which is R2.
Standby is local                     —>>R1 is the standby AVG.
Priority 100 (default)              —>>Default priority.
Weighting 100 (default 100), thresholds: lower 1, upper 100
Load balancing: round-robin
Group members:
c200.1550.0000 (10.0.0.101) local       —>>AVFs and their respective MAC addresses (R1).
c201.1550.0000 (10.0.0.102)             —>>AVFs and their respective MAC addresses (R2).
There are 2 forwarders (1 active)
Forwarder 1
State is Listen
MAC address is 0007.b400.0101 (learnt)
Owner ID is c201.1550.0000       —>>The owner of this virtual MAC is R2; R1 is in Listen state for this forwarder because R2 is actively forwarding for it.
Time to live: 14397.124 sec (maximum 14400 sec)
Preemption enabled, min delay 30 sec
Active is 10.0.0.102 (primary), weighting 100 (expires in 8.780 sec)
Forwarder 2
State is Active
1 state change, last state change 00:29:58
MAC address is 0007.b400.0102 (default)
Owner ID is c200.1550.0000      —>>The owner of this virtual MAC is R1, which is locally Active, so R1 forwards traffic sent to this MAC.
Preemption enabled, min delay 30 sec
Active is local, weighting 100

R2#sh glbp
FastEthernet0/0 - Group 1
State is Active           —>>This is the AVG, as it has the higher IP address (the election works as in HSRP or VRRP).
2 state changes, last state change 00:30:42
Virtual IP address is 10.0.0.254
Hello time 3 sec, hold time 10 sec
Next hello sent in 0.372 secs
Redirect time 600 sec, forwarder timeout 14400 sec
Preemption disabled     —>>Preemption is disabled by default, as in HSRP; only VRRP has preemption enabled by default.
Active is local             —>>This indicates that this router is the AVG for the group.
Standby is 10.0.0.101, priority 100 (expires in 9.024 sec)
Priority 100 (default)     —>>The default priority of all FHRPs is 100.
Weighting 100 (default 100), thresholds: lower 1, upper 100        —>>Default config of GLBP.
Load balancing: round-robin                                        —>>Default load balancing of GLBP.
Group members:
c200.1550.0000 (10.0.0.101)             —>>AVFs and their respective MAC addresses (R1).
c201.1550.0000 (10.0.0.102) local       —>>AVFs and their respective MAC addresses (R2).
There are 2 forwarders (1 active)
Forwarder 1
State is Active
1 state change, last state change 00:30:32
MAC address is 0007.b400.0101 (default)
Owner ID is c201.1550.0000                                     —>>The owner of this virtual MAC is R2, which is locally Active, so R2 forwards traffic sent to this MAC.
Redirection enabled
Preemption enabled, min delay 30 sec
Active is local, weighting 100
Client selection count: 2
Forwarder 2
State is Listen
MAC address is 0007.b400.0102 (learnt)
Owner ID is c200.1550.0000   —>>The owner of this virtual MAC is R1; R2 is in Listen state for this forwarder because R1 is actively forwarding for it.
Redirection enabled, 597.904 sec remaining (maximum 600 sec)
Time to live: 14397.904 sec (maximum 14400 sec)
Preemption enabled, min delay 30 sec
Active is 10.0.0.101 (primary), weighting 100 (expires in 7.900 sec)
Client selection count: 2

So far so good; it looks like we are doing some load balancing here, but are we really, and how can we check? The easy way is to ping from our end hosts PC1 through PC4 and see whether we actually distribute the load, as the output above suggests.

Let's start.

Pc1

PC1#ping 33.33.33.1
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 33.33.33.1, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 1/241/1080 ms
PC1#sh arp | i 10.0.0.254
Protocol  Address          Age (min)  Hardware Addr   Type   Interface
Internet  10.0.0.254              0   0007.b400.0101  ARPA   FastEthernet0/0      —>>AVF is R2.

PC2

PC2#ping 33.33.33.2
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 33.33.33.2, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 24/241/1064 ms
PC2#sh arp  | i 10.0.0.254
Protocol  Address          Age (min)  Hardware Addr   Type   Interface
Internet  10.0.0.254              0   0007.b400.0102  ARPA   FastEthernet0/0       —>>AVF is R1.

PC3

PC3#ping 33.33.33.3
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 33.33.33.3, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 12/231/1060 ms
PC3#sh arp | i 10.0.0.254
Protocol  Address          Age (min)  Hardware Addr   Type   Interface
Internet  10.0.0.254              0   0007.b400.0101  ARPA   FastEthernet0/0     —>>AVF is R2.

PC4

PC4#ping 33.33.33.4
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 33.33.33.4, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 12/233/1056 ms
PC4#sh arp | i 10.0.0.254
Protocol  Address          Age (min)  Hardware Addr   Type   Interface
Internet  10.0.0.254              0   0007.b400.0102  ARPA   FastEthernet0/0     —>>AVF is R1.

So the output is quite promising: we sent twenty packets in total towards our destinations, and 10 were routed via R1 and 10 via R2. The default load balancing in GLBP is round-robin, as we saw in the output of "show glbp", meaning the AVG serves out the forwarders' virtual MACs one after another in round-robin fashion, exactly as the output shows.
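Round-robin is only one of three load-balancing methods GLBP supports; the method is set per group, and the weighted variant is demonstrated below:

R1(config-if)# glbp 1 load-balancing round-robin       —>>Default; virtual MACs are handed out in turn.
R1(config-if)# glbp 1 load-balancing weighted          —>>Proportional to each forwarder's weighting.
R1(config-if)# glbp 1 load-balancing host-dependent    —>>A given host MAC always gets the same forwarder.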

Now let's modify things a bit. Here our connections to ISP1 and ISP2 are both T1 (BW 1544 Kbit/s). Suppose the connection from R1 to ISP1 is upgraded to a DS3 channel (45 Mbps), and we now want packets to be more likely forwarded via R1 than via R2. Let's modify the weighting and watch the results.

To actually test the load balancing, we create an extended access list on R1 and R2 to use as a packet counter, ping from our end hosts PC1 through PC4, and change the configuration on R1 accordingly: since R1 has the faster link, I have increased its weighting to roughly double that of R2 and decreased the forwarder preempt delay from the default 30 seconds to 1 second. So the final config on R1 is:

R1

R1#sh run int fa0/0
interface FastEthernet0/0
ip address 10.0.0.101 255.255.255.0
ip access-group 100 in
duplex auto
speed auto
glbp 1 ip 10.0.0.254
glbp 1 weighting 210 lower 195 upper 205    —>>This side's weighting is roughly twice that of R2 (default 100).
glbp 1 load-balancing weighted              —>>Changes the default LB from round-robin to weighted.
glbp 1 forwarder preempt delay minimum 1    —>>Reduces the forwarder preemption delay to a minimum of 1 sec.
end

R2

R2#sh run int fa0/0
interface FastEthernet0/0
ip address 10.0.0.102 255.255.255.0
ip access-group 100 in
duplex auto
speed auto
glbp 1 ip 10.0.0.254
glbp 1 load-balancing weighted              —>>Changes the default LB from round-robin to weighted.
glbp 1 forwarder preempt delay minimum 1    —>>Reduces the forwarder preemption delay to a minimum of 1 sec.
end

Access list 100 is used as a packet counter and applied inbound on R1 and R2; the config is below.

access-list 100 permit icmp any any echo
access-list 100 permit ip any any
access-list 100 deny   ip any any log

Now let's start pinging from PC1 through PC4 with a repeat count of 1000 each.

PC1#ping 33.33.33.1 re 1000

PC2#ping 33.33.33.2 re 1000

PC3#ping 33.33.33.3 re 1000

PC4#ping 33.33.33.4 re 1000

Now let's see the results on R1 and R2 using the access list we created as a packet counter.

R1#sh access-l
Extended IP access list 100
10 permit icmp any any echo (3000 matches)
20 permit ip any any (195 matches)
30 deny ip any any log

R2#sh access-l
Extended IP access list 100
10 permit icmp any any echo (1000 matches)
20 permit ip any any (189 matches)
30 deny ip any any log

Voila, the results are quite good: of the 4000 packets sent in total, 3000 were forwarded by R1 and 1000 by R2. It is really hard to simulate actual load balancing in GLBP, and the load-balancing algorithm is not well documented even in the Cisco DOC-CD.

We can configure authentication and tracking just as in HSRP and VRRP; the configurations are almost identical, we only need to replace the keyword glbp (GLBP) with standby (HSRP) or vrrp (VRRP), the rest stays the same.
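As a hedged sketch (the key string MYKEY and track object 1 are illustrative, not part of this lab), authentication and interface tracking on R1 could look like this; if the weighting drops below the configured lower threshold, the router gives up its AVF role:

R1(config)# track 1 interface Serial0/0 line-protocol
R1(config)# interface FastEthernet0/0
R1(config-if)# glbp 1 authentication md5 key-string MYKEY    —>>MD5 authentication; key string is illustrative.
R1(config-if)# glbp 1 weighting track 1 decrement 50         —>>Weighting drops by 50 if Serial0/0 goes down.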

Good resources on GLBP:

INE BLOG  :-  GLBP Explained

Cisco Site apart from DOC-CD  :- GLBP

3560 Queuing and Scheduling

One of the most confusing topics I have come across in my CCIE studies is 3560 queuing and scheduling. On the 3550, queues are interface-based; on the 3560, they are chassis-based.

Because the total inbound bandwidth of all ports can exceed the bandwidth of the internal ring, ingress queues are located after the packet is classified, policed, and marked and before packets are forwarded into the switch fabric. Because multiple ingress ports can simultaneously send packets to an egress port and cause congestion, outbound queues are located after the internal ring.

On the 3560 we have 2 ingress queues and 4 egress queues per interface. Ingress queues are configured in global configuration mode with "mls qos srr-queue input [try ?]". For egress queues, only the mapping of DSCP or CoS values to a queue and threshold ID is done in global configuration mode, with "mls qos srr-queue output [try ?]"; the rest of the egress queue configuration is done in interface mode.

The 3560 has four hardware egress queues per interface, and each queue has 3 thresholds. The expedite queue can be configured on queue 1, resulting in 4Q1P3T. The default CoS-to-egress-queue mapping is:

CoS 5 -> Queue 1
CoS 0/1 -> Queue 2
CoS 2/3 -> Queue 3
CoS 4/6/7 -> Queue 4
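These defaults can be checked directly on the switch (a quick verification, assuming QoS has been enabled with "mls qos"):

Switch# show mls qos maps cos-output-q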

Weighted Tail Drop

Both the ingress and egress queues use an enhanced version of the tail-drop congestion-avoidance mechanism called weighted tail drop (WTD). WTD is implemented on queues to manage the queue lengths and to provide drop precedences for different traffic classifications.

Each queue has three threshold values. The QoS label determines which of the three thresholds applies to the frame. Of the three thresholds, two are configurable (explicit) and one is not (implicit; it is preset to the queue-full state).

Configuring Ingress Queue Characteristics:-

Mapping DSCP or CoS Values to an Ingress Queue and Setting WTD Thresholds

We can prioritize traffic by placing packets with particular DSCPs or CoSs into certain queues and adjusting the queue thresholds so that packets with lower priorities are dropped.

This example shows how to map DSCP values 0 to 6 to ingress queue 1 and to threshold 1 with a drop threshold of 50 percent. It maps DSCP values 20 to 26 to ingress queue 1 and to threshold 2 with a drop threshold of 70 percent:

Switch(config)# mls qos srr-queue input dscp-map queue 1 threshold 1 0 1 2 3 4 5 6
Switch(config)# mls qos srr-queue input dscp-map queue 1 threshold 2 20 21 22 23 24 25 26
Switch(config)# mls qos srr-queue input threshold 1 50 70

In this example, DSCP values 0 to 6 are assigned WTD threshold 1 of 50 percent, meaning packets start being dropped once queue 1 fills to 50 percent; they are therefore dropped sooner than DSCP values 20 to 26, which are assigned WTD threshold 2 of 70 percent and are not dropped until queue 1 reaches 70 percent.

Allocating Buffer Space Between the Ingress Queues

We define the ratio (allocate the amount of space) with which to divide the ingress buffers between the two queues. The buffer and the bandwidth allocation control how much data can be buffered before packets are dropped.

This example shows how to allocate 60 percent of the buffer space to ingress queue 1 and 40 percent of the buffer space to ingress queue 2:

Switch(config)# mls qos srr-queue input buffers 60 40

Allocating Bandwidth Between the Ingress Queues

We need to specify how much of the available bandwidth is allocated between the ingress queues. The ratio of the weights is the ratio of the frequency in which the SRR scheduler sends packets from each queue. The bandwidth and the buffer allocation control how much data can be buffered before packets are dropped. On ingress queues, SRR operates only in shared mode.

This example shows how to assign the ingress bandwidth to the queues. Priority queueing is disabled, and the shared bandwidth ratio allocated to queue 1 is 25/(25+75) and to queue 2 is 75/(25+75):

Switch(config)# mls qos srr-queue input priority-queue 2 bandwidth 0
Switch(config)# mls qos srr-queue input bandwidth 25 75

Configuring the Ingress Priority Queue

We should use the priority queue only for traffic that needs to be expedited (for example, voice traffic, which needs minimum delay and jitter).

This example shows how to assign the ingress bandwidths to the queues. Queue 1 is the priority queue with 10 percent of the bandwidth allocated to it. The bandwidth ratios allocated to queues 1 and 2 is 4/(4+4). SRR services queue 1 (the priority queue) first for its configured 10 percent bandwidth. Then SRR equally shares the remaining 90 percent of the bandwidth between queues 1 and 2 by allocating 45 percent to each queue:

Switch(config)# mls qos srr-queue input priority-queue 1 bandwidth 10
Switch(config)# mls qos srr-queue input bandwidth 4 4

Configuring Egress Queue Characteristics:-

Allocating Buffer Space to and Setting WTD Thresholds for an Egress Queue-Set

This example shows how to map a port to queue-set 2. It allocates 40 percent of the buffer space to egress queue 1 and 20 percent to egress queues 2, 3, and 4. It configures the drop thresholds for queue 2 to 40 and 60 percent of the allocated memory, guarantees (reserves) 100 percent of the allocated memory, and configures 200 percent as the maximum memory that this queue can have before packets are dropped:

Switch(config)# mls qos queue-set output 2 buffers 40 20 20 20
Switch(config)# mls qos queue-set output 2 threshold 2 40 60 100 200
Switch(config)# interface gigabitethernet0/1
Switch(config-if)# queue-set 2

Mapping DSCP or CoS Values to an Egress Queue and to a Threshold ID

We can prioritize traffic by placing packets with particular DSCP or CoS values into certain queues and adjusting the queue thresholds so that packets with lower priorities are dropped.

This example shows how to map DSCP values 10 and 11 to egress queue 1 and to threshold 2:

Switch(config)# mls qos srr-queue output dscp-map queue 1 threshold 2 10 11

Configuring SRR Shaped Weights on Egress Queues

We can specify how much of the available bandwidth is allocated to each queue. The ratio of the weights is the ratio of frequency in which the SRR scheduler sends packets from each queue.

This example shows how to configure bandwidth shaping on queue 1. Because the weight ratios for queues 2, 3, and 4 are set to 0, these queues operate in shared mode. The bandwidth weight for queue 1 is 1/8, which is 12.5 percent:

Switch(config)# interface gigabitethernet0/1
Switch(config-if)# srr-queue bandwidth shape 8 0 0 0

Configuring SRR Shared Weights on Egress Queues

In shared mode, the queues share the bandwidth among them according to the configured weights. The bandwidth is guaranteed at this level but not limited to it. For example, if a queue empties and does not require a share of the link, the remaining queues can expand into the unused bandwidth and share it among them. With sharing, the ratio of the weights controls the frequency of dequeuing; the absolute values are meaningless.

This example shows how to configure the weight ratio of the SRR scheduler running on an egress port. Four queues are used, and the bandwidth ratio allocated for each queue in shared mode is 1/(1+2+3+4), 2/(1+2+3+4), 3/(1+2+3+4), and 4/(1+2+3+4), which is 10 percent, 20 percent, 30 percent, and 40 percent for queues 1, 2, 3, and 4. This means that queue 4 has four times the bandwidth of queue 1, twice the bandwidth of queue 2, and one-and-a-third times the bandwidth of queue 3.

Switch(config)# interface gigabitethernet0/1
Switch(config-if)# srr-queue bandwidth share 1 2 3 4

Configuring the Egress Expedite Queue

We can ensure that certain packets have priority over all others by queuing them in the egress expedite queue. SRR services this queue until it is empty before servicing the other queues.

This example shows how to enable the egress expedite queue when the SRR weights are configured. The egress expedite queue overrides the configured SRR weights.

Switch(config)# interface gigabitethernet0/1
Switch(config-if)# srr-queue bandwidth shape 25 0 0 0 
Switch(config-if)# srr-queue bandwidth share 30 20 25 25 
Switch(config-if)# priority-queue out 
Switch(config-if)# end 

Limiting the Bandwidth on an Egress Interface

We can limit the bandwidth on an egress port. For example, if a customer pays only for a small percentage of a high-speed link, we can limit the bandwidth to that amount.

This example shows how to limit the bandwidth on a port to 80 percent:

Switch(config)# interface gigabitethernet0/1
Switch(config-if)# srr-queue bandwidth limit 80

Default Ingress Queue Configuration

Table 1 shows the default ingress queue configuration when QoS is enabled.

Table 1 Default Ingress Queue Configuration

Feature                        Queue 1       Queue 2
Buffer allocation              90 percent    10 percent
Bandwidth allocation (1)       4             4
Priority queue bandwidth (2)   0             10
WTD drop threshold 1           100 percent   100 percent
WTD drop threshold 2           100 percent   100 percent

(1) The bandwidth is equally shared between the queues. SRR sends packets in shared mode only.
(2) Queue 2 is the priority queue. SRR services the priority queue for its configured share before servicing the other queue.
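The resulting ingress queue settings can be verified on the switch (a quick check, assuming "mls qos" is enabled):

Switch# show mls qos input-queue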

Table 2 shows the default CoS input queue threshold map when QoS is enabled.

The cos-input-q map is used for L2-based (CoS) markings. It is shown with "sh mls qos map cos-input-q" and can be modified; for example, to put CoS 1 into Q1T2:

mls qos srr-queue input cos-map queue 1 threshold 2 1

Table 2 Default CoS Input Queue Threshold Map

CoS Value   Queue ID-Threshold ID
0-4         1-1
5           2-1
6, 7        1-1

Table 3 shows the default DSCP input queue threshold map when QoS is enabled.

The dscp-input-q map is used for L3-based (DSCP) markings. It is shown with "sh mls qos map dscp-input-q" and can be modified; for example, to put DSCP (decimal) 32 into Q2T3:

mls qos srr-queue input dscp-map queue 2 threshold 3 32

Table 3 Default DSCP Input Queue Threshold Map

DSCP Value   Queue ID-Threshold ID
0-39         1-1
40-47        2-1
48-63        1-1

Default Egress Queue Configuration

Table 4 shows the default egress queue configuration for each queue-set when QoS is enabled. All ports are mapped to queue-set 1. The port bandwidth limit is set to 100 percent and rate unlimited.

Table 4 Default Egress Queue Configuration

Feature                             Queue 1       Queue 2       Queue 3       Queue 4
Buffer allocation                   25 percent    25 percent    25 percent    25 percent
WTD drop threshold 1                100 percent   200 percent   100 percent   100 percent
WTD drop threshold 2                100 percent   200 percent   100 percent   100 percent
Reserved threshold                  50 percent    50 percent    50 percent    50 percent
Maximum threshold                   400 percent   400 percent   400 percent   400 percent
SRR shaped weights (absolute) (1)   25            0             0             0
SRR shared weights (2)              25            25            25            25

(1) A shaped weight of zero means that this queue is operating in shared mode.
(2) One quarter of the bandwidth is allocated to each queue.

Table 5 shows the default CoS output queue threshold map when QoS is enabled.

Table 5 Default CoS Output Queue Threshold Map

CoS Value   Queue ID-Threshold ID
0, 1        2-1
2, 3        3-1
4           4-1
5           1-1
6, 7        4-1

Table 6 shows the default DSCP output queue threshold map when QoS is enabled.

Table 6 Default DSCP Output Queue Threshold Map

DSCP Value   Queue ID-Threshold ID
0-15         2-1
16-31        3-1
32-39        4-1
40-47        1-1
48-63        4-1
As the defaults are already set for the ASICs' QoS queuing to function properly, we only need to alter the queue configuration when we want to change something in the 3560's hardware processing to suit our needs.
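A quick way to verify what is actually programmed on a given port (the interface name is illustrative):

Switch# show mls qos interface gigabitethernet0/1 queueing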