It has been two months since I passed my CCIE R&S written exam.
I booked my lab for October, hoping that eight months on three hours of sleep a
night would be more than enough to be well prepared for the lab exam. I created
a spreadsheet to go with the preparation; unfortunately, I had to drop it for
the last month due to health issues, and I still have not fully regained my
health. All my preparation plans were ruined. With seven months in hand to
prepare for the toughest Cisco certification, I asked myself whether I should
drop the lab date and postpone until next year. After thinking for hours, I
decided that quitting would be the easy way out; I will go ahead with the
scheduled date. I believe that if you need something desperately, you will find
solutions rather than excuses. Let's see whether my trust and faith lead me to
my destination or not. I think it's not only about passing or failing; it's
about the courage to face problems, whether in the CCIE lab exam or in life.
I'll update my status soon about the preparation. I have to change the whole
preparation plan.
Tuesday, 10 April 2012
Thursday, 13 October 2011
QoS - WFQ: Cisco's Intelligent Queuing Tool for Today's Networks
For situations
in which it is desirable to provide consistent response time to heavy and light
network users alike without adding excessive bandwidth, the solution is WFQ.
WFQ is one of Cisco's premier queuing techniques. It is a flow-based queuing
algorithm that does two things simultaneously: It schedules interactive traffic
to the front of the queue to reduce response time, and it fairly shares the
remaining bandwidth among high-bandwidth flows.
WFQ
ensures that queues do not starve for bandwidth, and that traffic gets
predictable service. Low-volume traffic streams---which comprise the majority
of traffic---receive preferential service, transmitting their entire offered
loads in a timely fashion. High-volume traffic streams share the remaining
capacity proportionally between them, as shown in the figure below.
WFQ is designed
to minimize configuration effort and automatically adapts to changing network
traffic conditions. In fact, WFQ does such a good job for most applications
that it has been made the default queuing mode on most serial interfaces
configured to run at or below E1 speeds (2.048 Mbps).
WFQ is
efficient in that it uses whatever bandwidth is available to forward traffic
from lower-priority flows if no traffic from higher-priority flows is present.
This is different from time-division multiplexing (TDM), which simply carves up
the bandwidth and lets it go unused if no traffic is present for a particular
traffic type. WFQ works with both of Cisco's primary QoS signaling
techniques---IP precedence and Resource Reservation Protocol (RSVP), both
described later---to help provide differentiated QoS as well as
guaranteed services.
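On Cisco IOS, WFQ is enabled per interface with the `fair-queue` command (it is already on by default on serial interfaces at E1 speed and below). A minimal sketch; the interface name and the threshold/queue values here are illustrative, not required settings:

```
interface Serial0/0
 bandwidth 128
 ! congestive-discard-threshold 64, 256 dynamic queues,
 ! 0 RSVP-reservable queues
 fair-queue 64 256 0
!
! Verify the queueing strategy and active conversations:
! show queueing fair
! show interfaces serial 0/0
```

The only mandatory keyword is `fair-queue` itself; the optional arguments tune how many packets a flow may hold in the queue before drops begin and how many dynamic flow queues are created.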
Figure: With WFQ, if multiple high-volume conversations are active, their
transfer rates and interarrival periods become much more predictable.
The
WFQ algorithm also addresses the problem of round-trip delay variability. If
multiple high-volume conversations are active, their transfer rates and
interarrival periods are made much more predictable. WFQ greatly enhances
algorithms such as the SNA Logical Link Control (LLC) and the Transmission
Control Protocol (TCP) congestion control and slow-start features. The result
is more predictable throughput and response time for each active flow,
as shown in the figure below.
This
diagram shows an example of interactive traffic delay (128-kbps Frame Relay WAN
link).
QoS: Congestion Management Tools
One way network
elements handle an overflow of arriving traffic is to use a queuing algorithm
to sort the traffic, and then determine some method of prioritizing it onto an
output link. Cisco IOS software includes the following queuing tools:
- First-in, first-out (FIFO) queuing
- Priority queuing (PQ)
- Custom queuing (CQ)
- Weighted fair queuing (WFQ)
Each
queuing algorithm was designed to solve a specific network traffic problem and
has a particular effect on network
performance, as described in the following sections.
In its simplest
form, FIFO queuing involves storing packets when the network is congested and
forwarding them in order of arrival when the network is no longer congested.
FIFO is the default queuing algorithm in some instances, thus requiring no
configuration, but it has several shortcomings. Most importantly, FIFO queuing
makes no decision about packet priority; the order of arrival determines
bandwidth, promptness, and buffer allocation. Nor does it provide protection
against ill-behaved applications (sources). Bursty sources can cause long
delays in delivering time-sensitive application traffic, and can potentially
delay network control and signaling messages as well. FIFO queuing was a necessary first step
in controlling network traffic, but today's intelligent networks need more
sophisticated algorithms. Cisco IOS software implements queuing algorithms that
avoid the shortcomings of FIFO queuing.
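Because WFQ is the default on slow serial interfaces, reverting to plain FIFO queuing is simply a matter of disabling it. A sketch (the interface name is illustrative):

```
interface Serial0/0
 no fair-queue
!
! "show interfaces serial 0/0" should now report
! "Queueing strategy: fifo"
```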
PQ ensures that important traffic gets the fastest handling at each point where it
is used. It was designed to give strict priority to important traffic. Priority
queuing can flexibly prioritize according to network protocol (for example IP,
IPX, or AppleTalk), incoming interface, packet size, source/destination
address, and so on. In PQ, each packet is placed in one of four queues---high,
medium, normal, or low---based on an assigned priority. Packets that are not
classified by this priority-list mechanism fall into the normal queue (see the
figure below). During transmission, the algorithm gives higher-priority queues absolute
preferential treatment over low-priority queues.
Figure below: Priority queuing places data into four levels of queues: high, medium, normal, and low.
PQ
is useful for making sure that mission-critical traffic traversing various WAN
links gets priority treatment. For example, Cisco uses PQ to ensure that
important Oracle-based sales reporting data gets to its destination ahead of
other, less-critical traffic. PQ currently uses static configuration and thus
does not automatically adapt to changing network
requirements.
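The Oracle example above could be configured with a priority list along these lines. The TCP port (1521, the standard SQL*Net listener port) and the interface name are illustrative assumptions:

```
! Classify traffic into the four PQ queues
priority-list 1 protocol ip high tcp 1521
priority-list 1 protocol ipx medium
priority-list 1 default normal
!
interface Serial0/0
 priority-group 1
```

Anything not matched by a `priority-list` statement falls into the queue named by the `default` line (normal here), matching the behavior described above.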
CQ was designed
to allow various applications or organizations to share the
network among applications with specific minimum bandwidth or latency
requirements. In these environments, bandwidth must be shared proportionally
between applications and users. You can use the Cisco CQ feature to provide
guaranteed bandwidth at a potential congestion point, ensuring the specified
traffic a fixed portion of available bandwidth and leaving the remaining
bandwidth to other traffic. Custom queuing handles traffic by assigning a
specified amount of queue space to each class of packets and then servicing the
queues in a round-robin fashion (see the figure below).
Figure: Custom queuing handles traffic by assigning a specified amount of queue space to each class of packets and then servicing up to 17 queues in a round-robin fashion.
As
an example, encapsulated Systems
Network Architecture (SNA) requires a guaranteed minimum level of service. You
could reserve half of available bandwidth for SNA data, and allow the remaining
half to be used by other protocols such as IP and Internetwork Packet Exchange
(IPX).
The
queuing algorithm places the messages in one of 17 queues (queue 0 holds system
messages such as keepalives, signaling, and so on), and is emptied with
weighted priority. The router services queues 1 through 16 in round-robin
order, dequeuing a configured byte count from each queue in each cycle. This
feature ensures that no application (or specified group of applications)
achieves more than a predetermined proportion of overall capacity when the line
is under stress. Like PQ, CQ is statically configured and does not
automatically adapt to changing
network conditions.
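The SNA example above, reserving roughly half the bandwidth for SNA, could be sketched as follows. The byte counts act as relative weights per round-robin cycle (3000 versus 1500 + 1500 gives SNA about half); matching SNA via DLSw and the exact byte counts are illustrative assumptions:

```
! Queue 1: DLSw-encapsulated SNA; queue 2: IP; queue 3: everything else
queue-list 1 protocol dlsw 1
queue-list 1 protocol ip 2
queue-list 1 default 3
queue-list 1 queue 1 byte-count 3000
queue-list 1 queue 2 byte-count 1500
queue-list 1 queue 3 byte-count 1500
!
interface Serial0/0
 custom-queue-list 1
```

Each cycle the router dequeues up to the configured byte count from each queue in turn, which is how the proportional sharing described above is enforced under congestion.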