
Thanks to Bob Harper for his follow-up emails concerning my CMUG presentation on vPC, FEX, and Datacenter Virtualization. If you haven’t read the presentation, it can be found at Follow-Up to vPC, FEX, and Datacenter Virtualization CMUG. (Thanks to Cisco for permission to use and even re-arrange a ton of their slides to convey my message.) I’m posting an edited version of our discussion thread with Bob’s permission, in the hopes it’ll be of interest and perhaps stir up some debate. Or at least provide amusement.

BH: I attended the February CMUG: Nexus 1000v/1010v and VM-FEX technologies. Thanks again! It was a great discussion about all the elusive FEX technologies that are coming. I totally agree with your points on HP’s 802.1Qbr. It does not look like it addresses, or really solves, virtualizing x86 network components. I am looking forward to Adapter-FEX technology and am eager to see how it plays out.
I had one question for clarification from the beginning of the CMUG presentation. Did you say that we should support the ‘any server anywhere’ network design model? Or should we have definitive ‘edge | DMZ | TestLab | DataCenter | Campus’ networks? In other words, inter-VRF routing across multiple physical routers in lieu of traditional layer 2 segmentation? There are of course multiple caveats for doing inter-VRF routing, like multi-tenant networks, etc.
PJW: I hate to say it, but “it depends”. I’m also not sure I fully understand your question.
I want L2 contained strictly within the datacenter, since I consider it a major risk. OK, OTV or something similar between datacenters; I expect that to turn out to just move the risk goalposts (i.e., lower risk but a bigger outage when it hits the fan?).
[Ed: to mix metaphors wildly.]
I think we need to be prepared for any server anywhere in the datacenter. In a large setting, I might want to partition the datacenter, but it depends on politics and how much one can control server placement, etc.
[Added later: That might be like King Canute and the tide, of course.]
I never, ever want L2 extended from the datacenter to closets, nor servers in closets. I prefer L3 to the closet; it costs more for L3 licensing (unless doing EIGRP stub), but it is well worth it in that closets are never a Spanning Tree (STP) problem again. It may not be possible in hospitals with apps that require L2 adjacency.
[Added later: you can do L3 to the closet cheaply (pseudo-L3 to the closet) by having one-closet VLANs that terminate at the distribution layer in L3 SVIs, and do NOT extend the VLAN between the two distribution switches. This is only a good idea if you have the discipline to keep each VLAN to a single closet; if it expands, you’ll be in danger of black-holing packets.]
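For concreteness, here is a minimal sketch of what that pseudo-L3 arrangement might look like on one distribution switch, in IOS-style syntax. The VLAN number, interface names, and addresses are invented for illustration, not taken from the presentation:

    ! Distribution switch A only; the closet-1 VLAN is deliberately NOT carried to switch B
    vlan 110
     name CLOSET-1-USERS
    !
    interface Vlan110
     description L3 termination (SVI) for the closet-1 VLAN
     ip address 10.1.110.2 255.255.255.0
    !
    interface GigabitEthernet1/0/1
     description Trunk down to the closet-1 access switch
     switchport mode trunk
     switchport trunk allowed vlan 110
    !
    interface TenGigabitEthernet1/1/1
     description Inter-distribution link (VLAN 110 intentionally not in the allowed list)
     switchport mode trunk
     switchport trunk allowed vlan 900

The whole value is in the restriction: with the VLAN confined to one closet and kept off the inter-distribution link, there is no L2 loop to manage, and routing takes over at the distribution layer.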
DMZ and Lab/Dev probably need isolation. L2 VLANs (everywhere in the datacenter) might do that. L3 VRF-Lite does it more strongly, but imposes constraints on VMotion etc. (but not on VXLAN!). Routing into/out of VRF-Lite requires good design or it gets messy fast. We think we’ve got a good VRF-Lite design for one major hospital, for example, which needs its firewall-protected Epic servers in datacenter A to be able to route to similar ones in B without passing through a firewall.
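To make the VRF-Lite option concrete, here is a purely illustrative sketch (IOS-style; the VRF name, VLAN, and addresses are invented, and this is not the hospital design referred to above):

    ! A Lab/Dev segment isolated from the global routing table with VRF-Lite
    vrf definition LAB
     address-family ipv4
     exit-address-family
    !
    interface Vlan200
     description Lab/Dev SVI, reachable only within the LAB VRF
     vrf forwarding LAB
     ip address 10.20.0.1 255.255.255.0

Nothing outside the LAB VRF can reach 10.20.0.0/24 until you deliberately build a path in or out (a firewall, a route leak, a shared-services VRF), which is exactly where the “routing into/out of VRF-Lite” design work comes in.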
If I have multiple edge “perimeter modules”, I have repeatedly considered VRF-Lite to isolate the many partner routes from main routing, particularly in OSPF environments. For some reason we rarely seem to end up doing it that way in practice. I’ve also considered running EIGRP for the perimeter as a simple way to keep the perimeter routes out of OSPF in the core, just originating a default into OSPF instead.
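As a rough sketch of that last idea, assume a hypothetical border router that speaks EIGRP toward the perimeter block and OSPF toward the core (the process numbers and prefixes below are invented):

    ! Partner/perimeter routes live only in EIGRP; the OSPF core sees just a default
    router eigrp 100
     network 192.0.2.0 0.0.0.255
    !
    router ospf 1
     network 10.0.0.0 0.255.255.255 area 0
     default-information originate always
     ! note: no "redistribute eigrp 100" here, which is the whole point

Whether that is genuinely simpler or just differently ugly is debatable, as the editor’s note that follows suggests.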
[Ed note: And for some reason, like complexity or ugliness, my NetCraftsmen peers didn’t like those approaches.]
[Added later: The recently (re-)announced Cisco Easy Virtual Network (EVN) might be an easier way to tackle this than VRF-Lite, in part because it addresses the routing into/out of issue more cleanly. From some MPLS designs I’ve seen, I’m going to guess that using it heavily could turn around and bite you, complexity- and security-wise.]
BH: Thanks for the prompt and thorough reply. It always depends. I feel that, when possible, KISS is a great network design philosophy. Network hardware is cheap compared to L2 outages and troubleshooting costs. Just because you can does not mean that you should. [Added later (PJW): Exactly, emphasis mine!]
PJW: That’s about where my thoughts are. OTOH, L2 is here; to me it’s a matter of where you should and where you CAN draw the line, and of trying to get a dialog started that accommodates everyone’s needs and concerns.

Follow-up thoughts:

That “just because you can” resonates for me. I repeatedly see VLANs all over creation, or tunnel spaghetti, sometimes both. I do see a lot of datacenter sprawl, and I chalk some of it up to entropy, and some of it up to lack of planning, and/or to server folks or management either not receiving the message or not understanding the risk trade-offs for the convenience. If networking were a card game, apparently convenience would be the trump suit every time!

What do you think? Is it even possible to exert some control over VLAN sprawl in the datacenter? Is it useful? Or is it just burning time being neat, tidy, and safe when those don’t buy you much in the real world? (Says those who have yet to be burned?)

While we’re at it, it seems like OTV solves some real-world problems for people. Are we at risk of extending our VLAN sprawl to multiple datacenters? What can or should be done about that, if anything?

Peter Welcher

Architect, Operations Technical Advisor

A principal consultant with broad knowledge and experience in high-end routing and network design, as well as data centers, Pete has provided design advice and done assessments of a wide variety of networks. CCIE #1773, CCDP, CCSI (#94014)
