Choosing Your Core Switches

When designing a data center or campus LAN with Cisco products (see, I made the point clear right away), a no-brainer solution is to use Nexus 7000 switches in the core. There aren’t really many cases where you can go wrong with it operationally in a general-purpose data center or even a campus environment.

But if you think they cost too much, take too much power or space, or you are just scared of the big Nexus, you may want to consider other options as well. And here comes the hard part: Cisco has many options for you. (Thanks go to Mr. Packet Pusher Ethan Banks for inspiring this post with his tweet.) I’m not going to give you an absolute comparison list, because every planning situation is different, so the comparison list should be built according to the actual needs. I won’t give you direct price comparisons either, because browsing the price list is something that only poor people do. OK, I was kidding with that last one. But I will give you at least some overview. Go and find more information for your own specific case. Warning: this article leans towards having L3 features included as well (not an L2-only implementation).

Nexus 5500

I will take this option right at the beginning anyway. The Nexus 5500 (5548P, 5548UP, 5596UP) supports all 4094 VLANs, and all the ports are 10G with 1G capability as well. When equipped with the L3 forwarding module, it has features that are enough for many situations. The 5548 has 32 fixed ports (plus one module slot for a 16-port module) and the 5596 has 48 fixed ports (plus three slots for 16-port modules). With the Nexus 5500 you at least know in advance how many ports you get when you buy them (compared to modular switches, which have different port densities on different line cards at different oversubscription levels in different generations).

In the Nexus family, the important advantage is the FEX selection: remote line cards for top-of-rack implementations. That also brings one major limitation: with the current software (NX-OS 5.1(3)N1), only 8 FEXes per switch are supported when the L3 module is used. If you single-home your FEXes, you can have a total of 16 FEXes with each Nexus 5500 pair (you do implement core switches in pairs, right?). When dual-homing the FEXes, the maximum total is of course 8, because every FEX is seen by both Nexus 5500s.
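As a rough sketch, attaching a single-homed FEX to a Nexus 5500 looks something like this in NX-OS (the FEX chassis ID, description, interface numbers, and VLAN are made up for illustration):

```
feature fex

! Define the FEX; chassis ID 100 is arbitrary (100-199 range)
fex 100
  description Rack12-ToR

! Bind a fabric uplink port on the 5500 to that FEX
interface Ethernet1/1
  switchport mode fex-fabric
  fex associate 100

! Host-facing FEX ports then show up as Ethernet100/1/x
interface Ethernet100/1/1
  switchport access vlan 10
```

The FEX itself stays unconfigured; all its ports are managed from the parent 5500, which is exactly the "remote line card" idea.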

Nexus 5500 switches have only one supervisor, but Cisco still boasts that they support ISSU (In-Service Software Upgrade). However, ISSU is not supported with the L3 module installed. Depending on your environment (and your FEXing style [can you say that?]), that may or may not be an important factor for you. When dual-homing everything, it may not be such a big deal after all.
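For reference, an NX-OS upgrade on the 5500 is kicked off with the `install all` command, and you can ask the switch in advance whether the upgrade would be disruptive or hitless (the image file names here are placeholders for whatever release you are moving to):

```
n5k# show install all impact kickstart bootflash:n5000-kickstart.bin system bootflash:n5000-system.bin
n5k# install all kickstart bootflash:n5000-kickstart.bin system bootflash:n5000-system.bin
```

With the L3 module in the chassis, the impact check will simply report the upgrade as disruptive, which is the limitation described above.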

Also, when comparing Nexus 5500 L3 features with bigger core switches, make sure you know your route and MAC address table limitations, as always.

Catalyst 6500

You saw this coming… The Catalyst 6500 is the good old DC and campus core switch. With modern supervisors and line cards it can really kick the frames through the rich services it provides in the same box. There are plenty of chassis choices for different installations and requirements, as well as line cards and service modules. Do I need to say more? You can “dual-everything”, use VSS to combine two chassis together, and so on. The Cat6500 can do almost anything you can imagine. It may not be the absolute fastest, but hey, if you needed ultimate raw speed you would have selected the Nexus 7000 anyway, remember? By the way, 160 Gbps per slot was announced as coming for the Cat6500, so that gives some picture of the situation.
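For the curious, turning a pair of Cat6500s into a single VSS pair boils down to something like this on each chassis before the conversion reload (the domain number, port-channel number, and VSL interfaces are just examples; the second chassis is configured analogously with `switch 2` and its own port-channel):

```
! On chassis 1
switch virtual domain 10
 switch 1
 switch 1 priority 110

! The Virtual Switch Link (VSL) between the two chassis
interface port-channel 10
 switch virtual link 1
interface range tenGigabitEthernet 5/4 - 5
 channel-group 10 mode on
 no shutdown

! Then convert and reload into virtual mode
switch convert mode virtual
```

After the conversion, the two chassis run as one logical switch with a single control plane, so the rest of the network sees a single core device.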

Catalyst 4500

I don’t know the Catalyst 4500 very well in core use. My first experiences with the Catalyst 4000 were with the separate 4232-L3 module, and it was horrible to configure (CatOS on the supervisor, IOS on the L3 module, an internal GEC trunk between them). The Catalyst 4500 (or should I say 4500E?) is totally different: some seven generations of supervisors (running IOS or IOS-XE), almost as many generations of line cards, different chassis generations, and so on. The current maximum bandwidth seems to be 48 Gbps per slot with the Sup7E. The supervisor still does all the forwarding for the line cards. The Catalyst 4500 does not offer separate service modules, but it provides a solid set of IOS features. There are also various chassis sizes. In short: not a very exciting option for a LAN core, but it may work well for you.

Catalyst 4500-X

The newcomer in the Catalyst family is the Catalyst 4500-X. These are 1U switches with a small expansion module slot. The base ports (16 or 32) are 1G/10G, and an expansion module with 40G ports is promised for later. (But again, your DC apparently doesn’t need those.) The Cat4500-X runs IOS-XE and supports VSS for clustering two switches together. If your access layer is not very wide, you could run your core on Cat4500-Xs.

Catalyst 3750-X

Why am I continuing this list… I’m really entering territory I don’t handle: stacking switches. I just haven’t liked them. Too many dependencies between the switches, and horror stories everywhere. But maybe you could stack two Catalyst 3750-Xs together and run your small core on those? Note that the Cat3k family is limited in VLANs and MAC addresses compared to the options above.


And then there is more DC-grade stuff:

  • Nexus 3000: an L2/L3 10G switch, but oriented more towards low-latency implementations with no special feature requirements
  • Catalyst 4948, Catalyst 4900M, and so on: features similar to the Catalyst 4500 but in a smaller box with a limited number of interfaces available.

There is one buzzword that I didn’t mention above: FabricPath. That is something I’m really interested in at the moment, and especially FabricPath with Nexus 5500 switches. How cool would it be to easily implement L2 topologies with selected service and routing entities connected to the leaf nodes, meaning there wouldn’t necessarily need to be a separate core in every case? The truth is that I don’t exactly know. Yet.
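A minimal FabricPath sketch on a Nexus 5500 would be roughly the following (the switch-id, VLAN, and interface numbers are invented for illustration):

```
install feature-set fabricpath
feature-set fabricpath

! Each switch in the fabric gets a unique switch-id
fabricpath switch-id 11

! VLANs carried across the fabric must be in fabricpath mode
vlan 10
  mode fabricpath

! Fabric-facing links carry FabricPath instead of classic STP trunks
interface Ethernet1/1-2
  switchport mode fabricpath
```

The appeal is that the fabric links run an IS-IS-based control plane with multipathing instead of spanning tree, which is what makes the "no separate core" leaf-oriented topology thinkable in the first place.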


Comments
  1. As someone who inherited a design with a 3750 stack in the core of a small DC, all I can say is “just don’t”. You can’t do software upgrades without taking down the whole stack, and the inevitable IOS bugs also usually take down the whole stack. Stack master failover only seems to really work in a power-cut scenario.

    1. Dear Ryan Malayter!

      I’m now doing a design for a campus network migration.
      Currently my system uses two C4506 core switches linked by an EtherChannel.
      I want to recommend replacing the two C4506 core switches with C4506-Es (because we have over 1000 users), but my friend wants to use a stacking solution with two stacked 3750-X switches.
      I think that a switch stack will not meet the current network requirements because it is limited in performance and bandwidth.

      Could you comment to help me? I really want to know the pros & cons of these two solutions.


      1. I don’t think the bandwidth of the stack interconnect is likely to be your problem. Indeed, assuming you are buying new switches, the new 3850 (which replaces the 3750 series) has ample bandwidth on the stack interconnect for almost all scenarios (128 Gbps as I recall).

        The problem is that there is just one software image (management and control planes) actually running, on the master switch in the stack, and if it has bugs, or you need to upgrade it, you end up taking down the whole stack. As I mentioned, it seems stacking really only keeps the stack alive when the master fails completely, as in a power cut. We’ve been hit by several IOS bugs that caused the stack master to misbehave badly but not die entirely, which resulted in an outage of all or a good portion of the network.

        It’s a better option to use two independent switches, or something like vPC on the Nexus series, where each switch actually operates independently but synchronizes state with the other.

  2. I wouldn’t run a core with anything that requires downtime to upgrade.

    The 4500 series is adequate for smaller “I’ve got some servers and some users and maybe some VoIP at some point” installations.

    I’d go with Nexus if you’re planning on doing virtualisation in the current product cycle. The current and in-development feature sets around that are too cool to pass up.

    The 6500 is still a bloody nice switch, but it doesn’t seem that it’ll be developed for too many more years. I don’t expect to see 100G ports on that ol’ chassis.

  3. Another key piece missing when comparing the N5K vs the N7K for the core switch position is the absence of NetFlow on the N5K.


  4. overall nice article, thanks.

  5. Nice analysis. I have done a similar one for my company and added some other points to consider:
    1. Does the datacenter need in-depth flow visibility (for accounting and billing purposes)? If so, note that the N7K does not support Flexible NetFlow, and does not support NetFlow in hardware on F2 line cards…
    2. Does your datacenter require SLA monitoring? If so, you should choose an IOS platform (much more powerful than NX-OS for this).
    3. Does your datacenter use hot/cold aisles? If so, note that the N7K is not very well designed for that, except for the 7010, which is the only front-to-back N7K switch… All other N7Ks have side-to-side (a strange design for a DC switch) or side-to-back airflow.
    4. One more point against the N7K: I thought I could use it with F2 cards connected to N2K FEXes. And while it is possible, the port placement limitation is awful, because for each FEX you have to dedicate 4 ports on your F2 cards (not all of the ports need to be up), due to the notion of port “groups” implemented on F2 cards.

    For all these reasons, and because we are also “green focused” (space savings, cooling requirements, power savings…) in our datacenter, the only good Cisco option for our DC core L3 switch is the 4500-X (linked to N5Ks and other Catalyst switches for server access).

    I hesitated for a long time over the 6504 (we own many of them), but SFP+ connectors and high 10G port density (at least 30 ports) were mandatory options, and the 6500 couldn’t deliver those.

    NB: I really wonder why Cisco gave the N7K such a horrible form factor! It may be suitable for very big provider DCs but absolutely not for “small” enterprise DCs (with 500 to 1000 servers/VMs).
    And I still don’t understand why NX-OS (on the N7K) lacks NAT functionality, compared to the much older 6500 or the brand-new Nexus 3548.
    The only “killer feature” of the N7K is VDCs. But for the price, power, and height of an N7K, it is cheaper for us to have multiple physical PODs (instead of one box with virtual VDC PODs)!
