Dual-layer vPC on Nexus 5000

While doing some googling about the upcoming NX-OS 5.1(3)N1(1) release for the Nexus 5000, I found this:

Cisco NX-OS Software Release 5.1(3)N1(1) for Cisco Nexus 5000 Series Switches and 2000 Series Fabric Extenders: https://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9670/product_bulletin_c25-686744.pdf

Update: The same document in HTML: http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9670/product_bulletin_c25-686744.html

Yes, there are goodies like FabricPath, Adapter FEX and PTP, but who cares about those anyway 😉

For me, the best promised feature is dual-layer vPC (also known as enhanced vPC). It means that it will be possible to dual-home FEXes to two Nexus 5000s (= vPC between the FEX and the two Nexus 5000s) and still connect the end hosts to two different FEXes with vPC (= vPC between the end host and the two FEXes).

I should probably insert some picture of the topology here, but hey, all the fancy Nexus-blue topology diagrams eventually look the same; the port-channel indicators and various straight lines are just connected a bit differently in each topology, so…
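
So, in lieu of a picture, here is a rough sketch of the FEX-fabric side of a dual-homed FEX, configured identically on both Nexus 5000s (all the numbers below, the vPC domain, FEX ID and interfaces, are made up for illustration only):

    ! On both Nexus 5000 vPC peers (identical config, example numbering only)
    feature vpc
    feature fex

    vpc domain 10
      peer-keepalive destination 192.168.0.2    ! mgmt address of the vPC peer

    ! vPC peer-link
    interface port-channel 1
      switchport mode trunk
      vpc peer-link

    ! Dual-homed FEX 101: the fabric port-channel itself is a vPC
    interface port-channel 101
      switchport mode fex-fabric
      fex associate 101
      vpc 101

    interface ethernet 1/1
      switchport mode fex-fabric
      fex associate 101
      channel-group 101

Nothing new there; this is the existing dual-homed (active-active) FEX setup. The news is what will be allowed below the FEXes.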

There are still plenty of servers with only one NIC connected to, for example, a backup network, so dual-homing the FEXes is pretty much a must in some implementations. Currently (as of 5.0(3)N2(2a)), dual-homing a FEX prevents connecting end hosts with vPC, so you must use the traditional active-standby teaming on the servers’ teamed NICs.

Also, the number of FEXes is limited on the Nexus 5000 (see the configuration limits in http://www.cisco.com/en/US/docs/switches/datacenter/nexus5000/sw/configuration_limits/limits_503/nexus_5000_config_limits_503_n2_1.html), so you really must think carefully before deploying FEXes in both single-homed and dual-homed topologies on the same Nexus 5000 switches.

With this new dual-layer vPC feature you can presumably start replacing the active-standby teaming configurations with LACP teaming (port-channels), getting both NICs on the servers transmitting and receiving while still keeping the Nexus 5000 redundancy for the single-connected networks or servers.
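
Assuming the feature works as the bulletin suggests, the host-facing side would then be a plain LACP port-channel split across the two dual-homed FEXes and configured identically on both Nexus 5000s. A minimal sketch with made-up numbers (server NIC 1 on FEX 101, NIC 2 on FEX 102); whether an explicit vpc statement is still required under the host port-channel in this topology remains to be seen:

    ! On both Nexus 5000 vPC peers (both switches see both dual-homed FEXes)
    interface ethernet 101/1/1
      channel-group 20 mode active    ! LACP towards the server

    interface ethernet 102/1/1
      channel-group 20 mode active

    interface port-channel 20
      switchport access vlan 10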

Hopefully the sequential port-channel hashing operations will result in a balanced traffic distribution and not always lead to the same Nexus 5000 upstream of the servers 🙂
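
For what it’s worth, the hash inputs are configurable on the Nexus 5000, so at least that much can be tuned (the available methods vary by platform and software release):

    show port-channel load-balance                        ! current hashing method
    configure terminal
      port-channel load-balance ethernet source-dest-ip   ! hash on src/dst IP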

6 Comments

  1. I think the best part of this is being able to have one consistent configuration across your DC for most scenarios.

    Personally, I have been moving away from using vPC’d 2Ks for a few reasons:

    1) Yes, backup is often just one connection, but those hosts end up with the same redundancy as pre-Nexus. This does make code upgrades difficult, as you are probably doing upgrades at night when the backups happen. I have a good mix of vPC’d end hosts (blades) and active/standby hosts that will be around for a few more years. (I wish they weren’t.)

    2) FEX limits – I have mostly 5020s, and I reach the FEX limit before I run out of ports, but I have yet to cover the immediate racks around the 5Ks.

    3) FEX numbering simplicity / port configuration simplicity –
    Say you are configuring two ports for a new server. With vPC’d 2Ks, for example, you are configuring ports e100/1/1 and e101/1/1 on BOTH 5Ks. You have to note somewhere that FEX 100 and FEX 101 are a “pair” when it comes time to upgrade the FEX code and it needs a reboot. With “straight-through” FEXes, you know the FEXes are a pair because they share the same FEX number on each 5K. And when it comes to configuration, you can just configure e100/1/1 on each 5K. **You do want to use the “vpc orphan-port suspend” command, though.**
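
    (Concretely, with made-up port numbers, the two styles look roughly like this:)

      ! Dual-homed (vPC) FEXes: two FEX numbers, the same server configured on BOTH 5Ks
      !   5K-1 and 5K-2: interface eth100/1/1 + interface eth101/1/1
      ! Straight-through FEXes: one shared FEX number, one interface per 5K
      !   5K-1: interface eth100/1/1
      !   5K-2: interface eth100/1/1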

    thanks!
    Ian

    1. Thanks for your comments, Ian! You have very valid points there.

      On your first point, I have to say that times have changed somewhat when it comes to redundancy. Back in the Catalyst days we could run the same IOS for ages without causing any disturbance (due to upgrades) to the single-connected server NICs. Nowadays I have a feeling that the switches need to be upgraded all the time, or at least more often than in the old days. That’s why I want to reserve the right to ISSU the Nexus 5500s at any time 🙂

      About the port configuration: it really is a serious operational issue (or it may become one at some point later…), so the FEX numbering scheme is something that needs careful consideration.

      Could you elaborate on what you mean by suspending the orphan ports?

  2. I agree about code upgrades – I think best practice is moving towards staying at the latest code level. This is probably because the complexity is increasing rapidly.

    I find the FEX numbering issue becomes a problem as you transition your Catalyst-friendly tech staff to Nexus support.

    As far as orphan ports go: if you lose the vPC peer-link, the vPC secondary switch shuts down all of its vPCs, including the uplink. Now, with the uplink down and the crosslink down but the downstream port-channels and links up, you are black-holing your hosts. Your straight-through FEXes stay up, so a server doesn’t see a link down and continues to send traffic over that link.

    I have found that most NIC teaming software (HP, for instance) eventually does an Rx/Tx health check, notices it can’t get to the gateway, and shuts down use of that NIC.

    You can’t really vouch for every server’s NIC teaming method, so I find it’s best to apply the “vpc orphan-port suspend” command to make sure they actually see their links go down.

    You can do a “show vpc orphan-ports” to see the ports at risk.
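
    For reference, that looks roughly like this (the interface number is just an example):

      ! On the host ports that are NOT part of any vPC (single-attached / active-standby NICs)
      interface ethernet 100/1/5
        vpc orphan-port suspend

      ! List the ports at risk
      show vpc orphan-ports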

    1. Right, I read the context again and realized what you meant. Yes, I agree on the orphan port setting.

      I think that sometimes the NIC teaming implementations are pure voodoo… Maybe it’s just me, but nobody really seems to know how they actually work. “If you think it works, don’t touch it or investigate it; there are thousands of them…”

      1. Yeah – I tried a few different NIC teaming methods to see how they reacted to vPC peer-link loss… it’s such a mixed bag of methods.

        If I can only convince them all to go active-active now…

  3. I am in the process of updating our two data centres to a pair of Nexus 5548s in each, with a pair of 22xx FEXes in each rack, so your blog is timely. So a big “thank you”!

    Right from the start I thought that dual-homed FEXes were the way to go, and I was glad that they introduced enhanced vPC before I got too far through the migration. Now I can have a mix of server topologies (single-homed, active-standby, and LACP) in each rack.

    One thing I am finding absolutely indispensable, of course, is the switch-profile functionality, so thanks for your blog post on that too.

    A painful lesson I have learned is about the L3 module. It does not fit in well with the vPC architecture, and also limits you to 8 FEXes instead of 24. I don’t know whether Cisco will ever get round to addressing those issues, but for the moment I shall keep my Nexus purely layer-2 and get something else (maybe my old Catalysts on a stick) to do the layer-3 for me.

    I would be interested to know whether anyone has done studies of the different failure scenarios in the enhanced vPC topology, their effects on traffic, and their convergence times. For example:

    – loss of the vPC primary 5548
    – loss of the vPC secondary 5548
    – split-brain due to loss of the vPC peer-link
    – loss of the uplink to the primary
    – loss of the uplink to the secondary
    – total loss of one FEX (for various server topologies)
    – etc.
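
    (For reference, the usual NX-OS commands for watching vPC and FEX state during that kind of testing would be along these lines; the exact output varies by release:)

      show vpc                                  ! overall vPC and peer-link status
      show vpc role                             ! which peer is primary / secondary
      show vpc consistency-parameters global
      show vpc orphan-ports
      show fex                                  ! FEX state as seen from each 5K
      show port-channel summary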

    Again, thanks!
