Manufacturers design QoS strategies, size buffers, and build performance test plans around best-case scenarios, which means your uplinks across the network need to be as close to the same rate as possible or you don't get to leverage the benefits of cut-through switching. Pushing L3 as close to the edge as possible, to keep spurious L2 PDUs from traversing large portions of your fabric, has become far more important. It is also important to note (as was mentioned previously) that with queuing enabled, the same buffers are used for queuing different classes of traffic, which puts additional strain on buffers where traffic moves from high-rate interfaces to low-rate interfaces. Cut-through switching (Nexus) requires less buffer space for traffic traversing ports at the same rate.

The world of switching has drastically changed since the early 2000s, when the architecture for the legacy Catalyst ASICs was designed. Queuing is much smarter now, and the design and implementation of queuing in your network is much more important. This is in large part because latency and bus speed inside the switch have improved hundreds of fold, allowing a common buffer pool with distributed processing in the switch. In the latest Catalyst and Nexus 9K switches, all ports are managed by a single ASIC in some models (approximately 25-32 MB of total buffer in a 1U switch). We have moved from dumb, dedicated per-interface-group buffers (expensive) for both ingress and egress (Catalyst 6500/4500), to egress-only buffers (Catalyst 29/35/37/38xx), and now to a large shared buffer with smarter ASICs at the edge. (Read up on SERDES.)

Buffer sizes have largely not changed in 10-15 years, while interface rates have grown at a parabolic rate. The larger the disparity, the more likely the switch has to use the egress/ingress buffer to store packets as they trickle out the output interface at a slower serialization rate.
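To put rough numbers on that rate disparity, here is a minimal sketch (illustrative figures only, using the ~32 MB shared buffer mentioned above) of how quickly a shared buffer fills when a fast ingress feeds a slow egress:

```python
def time_to_fill(buffer_bytes: int, ingress_bps: float, egress_bps: float) -> float:
    """Seconds until the buffer overflows when ingress outpaces egress."""
    fill_rate_bps = ingress_bps - egress_bps  # net rate at which the buffer grows
    if fill_rate_bps <= 0:
        return float("inf")                   # egress keeps up; no buildup
    return (buffer_bytes * 8) / fill_rate_bps

# A 10G ingress burst draining into a 100M egress port, 32 MB shared buffer:
t = time_to_fill(32 * 1024 * 1024, 10e9, 100e6)
print(f"{t * 1000:.1f} ms")  # only a few tens of milliseconds of burst absorption
```

Even a generously sized shared buffer absorbs a sustained 10G-to-100M burst for well under a tenth of a second before it must drop, which is why matching uplink rates matters so much.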
Newer platforms have since moved to 10/25/40/100G uplinks, and the reality is that there are still a large number of devices that only support 10/100M. The 2960/3560/3750 are store-and-forward switches, designed in an era when 10/100M was the norm for access-port speed and uplinks were 1G; the gist is that on those platforms the buffers are extremely tight. I wrote a blog post documenting everything we found, so we were having similar issues back in 2014.
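A store-and-forward switch must receive the entire frame before it can begin forwarding, so each hop pays at least one full serialization delay at the ingress rate. A quick sketch of that delay across the rates discussed here (illustrative numbers, 1500-byte frame):

```python
def serialization_delay(frame_bytes: int, link_bps: float) -> float:
    """Time to clock one frame onto (or off of) the wire, in seconds."""
    return frame_bytes * 8 / link_bps

# Per-frame serialization time for a 1500-byte frame at common access/uplink rates.
# Store-and-forward latency per hop is bounded below by the slowest of these.
for rate, label in [(100e6, "100M"), (1e9, "1G"), (10e9, "10G")]:
    print(f"{label}: {serialization_delay(1500, rate) * 1e6:.1f} us")
```

The two orders of magnitude between 100M and 10G is the same disparity that forces frames to sit in the egress buffer while they trickle out the slow side.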