New Nexus 9K Items

Author
Peter Welcher
Architect, Operations Technical Advisor

This blog offers brief coverage of some Nexus 9K and optics-related items that might be of interest.

Nexus 93240YC-FX2

The N93240YC-FX2 switch is the N93180YC switch’s new relative. The difference: 12 rather than 6 x 40/100 Gbps uplink-capable ports. Both have 48 x 1/10/25 Gbps access ports.

Whether you’re doing classic VPC or a spine-and-leaf topology, the extra uplinks give you room to handle a few more access switch pairs, which is quite useful when the data center consists of a small number of racks.

100 Gbps Optics

Have you read the Cisco 100 Gbps QSFP Modules document yet? And, while you’re at it, checked pricing?

Some of the 100 Gbps options are relatively inexpensive.

I was recently exploring 2-10 km optics options at 40 and 100 Gbps, as the next logical step beyond 10 Gbps.

Another thing: if you’ve been assuming twinax is only for 10 Gbps, think again. If you’re doing VPC cross-links, or leaf-to-spine (access-to-core) uplinks across a small number of racks, the 100 Gbps copper or active optical cable options cost not much more than 10 Gbps twinax used to.

Conclusion: Building a 100 Gbps fabric in your data center may be far more affordable than you thought, particularly for smaller fabrics!

VPC with Fabric Peering

I’d recently been noticing that VPC ties up a lot of the 40/100 Gbps ports on various Nexus 9K switches. Apparently, Cisco noticed too!

The new alternative is “VPC with Fabric Peering.” It provides MLAG in a VXLAN / EVPN fabric, with the fabric itself carrying the CFS messages in place of the VPC peer link!

And yes, you’ll want a high-priority DSCP marking so you don’t lose your VPC communications when there’s congestion.
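For a concrete feel, here is a minimal configuration sketch of fabric peering based on the documented NX-OS syntax. The domain ID, addresses, port-channel number, and DSCP value are placeholders of mine, so verify against the configuration guide for your platform and release:

    ! Fabric peering needs TCAM carved for flow redirection (reload required)
    hardware access-list tcam region ing-flow-redirect 512

    vpc domain 100
      peer-keepalive destination 192.168.1.2 source 192.168.1.1 vrf management
      ! Peer-link traffic rides the fabric between loopback addresses;
      ! DSCP 56 keeps the CFS messages in a high-priority queue under congestion
      virtual peer-link destination 10.1.1.2 source 10.1.1.1 dscp 56

    ! The peer link port-channel still exists, but has no physical member ports
    interface port-channel10
      switchport mode trunk
      vpc peer-link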

Follow the above link for more details, and to decide if this feature is for you.

VXLAN / EVPN Multi-Site

The VXLAN / EVPN Multi-Site Design / Deployment White Paper is useful and interesting reading! One key point of this approach is scalable VXLAN across multiple sites, with some degree of L2 control.

Among other things, that document presents some major use cases. It also has some useful configuration snippets and diagrams, if you’re thinking of building a lab model or deploying this functionality.
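To give a flavor of those snippets, here is a minimal border gateway (BGW) sketch using the NX-OS Multi-Site commands; the site ID, loopback numbers, addresses, and interface choices are placeholders of mine, not taken from the white paper:

    ! Declare this switch a border gateway for site 1
    evpn multisite border-gateway 1

    ! Shared virtual IP used by all BGWs at this site
    interface loopback100
      ip address 10.100.1.1/32

    interface nve1
      host-reachability protocol bgp
      source-interface loopback1
      multisite border-gateway interface loopback100

    ! Track the DCI-facing and fabric-facing links so the BGW
    ! can withdraw itself if it becomes isolated
    interface Ethernet1/1
      evpn multisite dci-tracking
    interface Ethernet1/49
      evpn multisite fabric-tracking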

One use case that recently caught my eye for two design projects was the pseudo-Border Gateway use case. It is described as providing support for tying legacy data center sites to VXLAN / EVPN sites. It looks like an OTV replacement but isn’t quite presented that way. Also worth noting: the new code supports traffic storm-control for VXLAN EVPN.

I do wish the storm-control documentation described whether the “unicast” option caps all unicast traffic or just unknown unicast. I suspect it’s the former, and why would anybody want that? The latter is what I’d like to have. Failure to document clearly means a feature might as well not have been implemented.
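For reference, here is what the interface-level storm-control syntax looks like; the thresholds are placeholder percentages of interface bandwidth, and the “unicast” ambiguity above applies, so test the behavior before depending on it:

    interface Ethernet1/1
      ! Suppress traffic above these percentages of port bandwidth
      storm-control broadcast level 2.0
      storm-control multicast level 5.0
      ! All unicast or only unknown unicast? The docs don't clearly say.
      storm-control unicast level 50.0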

One caution: VIRL’s virtual N9K NX-OS image does not yet support Multi-Site (as of April 2019).

Second caution: if you’re thinking of building a VIRL model for anything VXLAN, you’d best have a VM with 4+ cores and a lot of RAM, say 80-128 GB or more. The N9K images are large, and when you enable the VXLAN features, they get even larger.

Comments

Comments are welcome, both in agreement or constructive disagreement about the above. I enjoy hearing from readers and carrying on deeper discussion via comments. Thanks in advance!

—————-

Hashtags: #CiscoChampion #TechFieldDay #TheNetCraftsmenWay #Switches #DataCenter

Twitter: @pjwelcher

Disclosure Statement
Cisco Certified 20 Years

NetCraftsmen Services

Did you know that NetCraftsmen does network / data center / security / collaboration design / design review? Or that we have deep UC&C experts on staff, including @ucguerilla? For more information, contact us at info@ncm2020.ainsleystaging.com.
