Designing an OTV Deployment

Author: Carole Warner Reece, Architect

I have been looking at OTV (Overlay Transport Virtualization) designs to support an organization I will call Delta.
[Note: I’ve been using a mix of the old U.S. Navy Radio Alphabet and the NATO Phonetic Alphabet to generate company names…]

Delta wants to extend 4 or 5 VLANs from their data center to a disaster recovery site (DRS) that will support a few key applications. The design should support active/passive server teaming at the main data center. Delta mostly uses Layer 3 in the data center to connect server access switches to the distribution layer, but is starting to implement a few targeted applications that need Layer 2 connectivity. The disaster recovery site does not mirror the data center; it just supports a few applications.

This article summarizes some of my thoughts on helping Delta deploy OTV in their environment.

At a high level, the proposed design uses two OTV edge devices at the data center connecting across a dark fiber WAN to a single OTV edge device at the DRS.

Overview of the OTV Network

Delta is planning to migrate some applications to new servers (using VMware) in a new row in their data center that will support the 4 or 5 VLANs extended to the DRS. The new row will use a pair of Nexus 5548s with multiple pairs of Nexus 2232s in an end-of-row (N5K)/top-of-rack (N2K) topology. We are not yet sure of the details for the N5K/N2K topology, but one option would be to cross-connect the N2Ks with the N5Ks, and connect the migrated servers with the Active/Standby NIC teaming that Delta uses for most of their servers. This topology might look something like this:

[Figure: N7K/N5K/N2K topology]

The following diagram illustrates the cabling to support the logical VDCs in the N7Ks:

[Figure: OTV-on-a-stick cabling]

Design Notes on Separate Distr and OTV VDCs
The current Cisco N7K OTV implementation requires separation between SVI routing and OTV encapsulation for a given VLAN. In addition, the Join interface can only be defined as a physical interface (or subinterface) or as a logical one (i.e. Layer 3 port channel or Layer 3 port channel subinterface).

Note: Support for loopback interfaces as OTV Join interfaces is planned for a future NX-OS release.

To meet these constraints, two VDCs will be deployed at the Delta N7Ks:

  • an OTV VDC dedicated to performing the OTV functionality
  • a Distr VDC used to provide SVI routing support

We could either configure the OTV VDC ‘on-a-stick’ with regard to the Distr VDC, or directly connect the OTV VDC to the transport WAN. We chose to use the ‘on-a-stick’ design. One clear advantage to the ‘on-a-stick’ design is that when NX-OS no longer requires separation between SVI routing and OTV encapsulation for a given VLAN, Delta will easily be able to migrate to using just the Distr VDC for SVI routing and OTV encapsulation. The only migration steps needed will be to move the OTV configuration from the OTV VDC to the Distr VDC and deactivate the OTV VDC. This migration would be transparent to the rest of the data center network.

Note: There is a small cost: the ‘on-a-stick’ design uses at least one extra pair of interfaces on each N7K.
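
As a rough sketch of the VDC split (the VDC names and interface ranges are placeholders, not Delta's actual assignments), each N7K might be carved up from the default VDC along these lines:

  ! From the default/admin VDC on each N7K -- names and interface ranges are illustrative only
  vdc Distr
    allocate interface Ethernet1/1-16
  vdc OTV
    allocate interface Ethernet1/17-20

Keep in mind that interfaces on some N7K line cards must be allocated to a VDC in port groups, so the actual ranges depend on the modules in use.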

The N7K also requires that the routed SVI not be carried across the vPC peer link, so a separate Layer 3 link will be implemented between the vPC peers. This Layer 3 link can also be used for the vPC peer-keepalives.

Design Notes on Distr VDC
Based on the current Delta plans, the design shows Layer 3 connections from the existing data center network to the new N7Ks. The plan is to move the devices needing Layer 2 VLAN extension into a separate data center pod that connects to the N7Ks through Layer 2 connectivity. A maximum of 5 VLANs will be extended between the data center and the DRS in the initial phase. The Distr VDCs connect using a pair of 10GE links configured as a vPC peer link. In addition, a 1GE Layer 3 link is connected between the Distr VDCs.
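
A minimal sketch of that Distr VDC interconnect on one N7K, assuming illustrative interface numbers and addressing (none of the values below are Delta's), might look like this:

  ! Distr VDC on N7K-1 -- interface numbers and IPs are assumptions
  feature vpc
  vpc domain 10
    peer-keepalive destination 10.10.99.2 source 10.10.99.1 vrf default
  !
  interface port-channel1
    description vPC peer link to N7K-2 Distr VDC (2 x 10GE)
    switchport
    switchport mode trunk
    vpc peer-link
  !
  interface Ethernet1/25
    description 1GE Layer 3 link to N7K-2 Distr VDC (also carries the peer-keepalives)
    no switchport
    ip address 10.10.99.1/30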

Since Delta already supports multicast for other applications in the data center, we are recommending enabling multicast in the WAN transport network between the data center and DRS.
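
As an illustration only (the RP address and interface are placeholders, and the actual multicast design belongs with the rest of Delta's multicast deployment), enabling PIM on the transport-facing Layer 3 interfaces could look roughly like this:

  ! Transport-facing Layer 3 configuration -- RP address and interface are assumptions
  feature pim
  ip pim rp-address 10.254.254.1 group-list 239.0.0.0/8
  !
  interface Ethernet1/30
    ip pim sparse-mode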

Design Notes on OTV VDC
The OTV VDC provides dynamic encapsulation for Layer 2 flows that need to be sent to a remote location. Each Ethernet frame in one of the extended Layer 2 VLANs is individually encapsulated into an IP packet and delivered across the Layer 3 WAN, or transport network.

OTV Internal Interface
The OTV VDC connects through a Layer 2 port-channel to the pair of Distr VDCs, which terminate the links as a vPC. This port-channel will support the OTV Internal interface functionality.

Note: Although only one Layer 2 connection to one Distr VDC is required for OTV support, we recommend dual-homing the Layer 2 connection between the OTV VDC and the two Distr VDCs using a vPC to achieve higher resiliency.

When possible, the physical interfaces in the port-channel should be on different line cards. Since Delta is planning to implement two 10GE modules, module diversity of the port-channel interfaces will be possible when the second module is implemented.

The OTV Internal interface will be a Layer 2 port-channel configured as a trunk port. The trunk configuration will be able to concurrently extend more than one VLAN across the overlay. There is no need to apply OTV-specific configuration to an Internal interface. The use of a port-channel for the OTV Internal interface also optimizes traffic from the data center access switches destined for an OTV AED. When the OTV Internal interface is connected in a port-channel with the Distr vPC, server traffic to an AED will not need to flow across the vPC peer-link when there is no failure condition.
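
A sketch of the Internal interface configuration, assuming placeholder port-channel numbers, extended VLANs 100-104, and Site VLAN 99 (all assumptions, not Delta's final values), might look like this:

  ! OTV VDC -- Internal interface (plain Layer 2 trunk, no OTV-specific commands needed)
  interface port-channel10
    description OTV Internal interface toward Distr VDCs
    switchport
    switchport mode trunk
    switchport trunk allowed vlan 99,100-104
  !
  ! Distr VDCs -- matching port-channel bundled as a vPC toward the OTV VDC
  interface port-channel10
    switchport
    switchport mode trunk
    switchport trunk allowed vlan 99,100-104
    vpc 10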

OTV Join Interface
The Join interface is a Layer 3 interface used to source the OTV-encapsulated traffic and send it to the Layer 3 transport network. With the current NX-OS release, the Join interface can only be defined as a physical interface (or subinterface) or as a logical one (i.e. Layer 3 port channel or Layer 3 port channel subinterface).

We are recommending that the OTV VDC use one GE connection to the Distr VDC in the same N7K. This link will support the OTV Join interface functionality.

Note: Since Delta today has only a single module supporting 1GE in the N7Ks, a port-channel would only connect Ethernet ports on the same module and is not recommended.
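
A sketch of the Join interface, with placeholder addressing and an assumed jumbo MTU (the actual value depends on what the transport supports), might look like this:

  ! OTV VDC -- Join interface (interface, MTU, and addressing are assumptions)
  interface Ethernet1/17
    description OTV Join interface toward Distr VDC
    no switchport
    mtu 9216
    ip address 10.10.50.2/30
    ip igmp version 3
    no shutdown

The ip igmp version 3 statement allows the edge device to join the OTV control and data groups in the multicast transport.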

OTV Overlay Interface
The third type of OTV interface in the OTV VDC is the logical OTV Overlay interface. This interface is explicitly configured and is where the OTV configuration is applied.

Every time the OTV VDC receives a Layer 2 frame destined for a remote site, the frame is logically forwarded to the Overlay interface. The OTV VDC performs the dynamic OTV encapsulation on the Layer 2 frame and sends the encapsulated packet to the Join interface toward the routed domain.
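
With the multicast transport recommended earlier, a sketch of the Overlay interface configuration (the group addresses and VLAN range are placeholders) might look like this:

  ! OTV VDC -- Overlay interface (control/data groups and VLAN range are assumptions)
  feature otv
  interface Overlay1
    otv join-interface Ethernet1/17
    otv control-group 239.1.1.1
    otv data-group 239.1.2.0/28
    otv extend-vlan 100-104
    no shutdown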

OTV and Multi-Homing
The OTV protocol supports multi-homing where two OTV edge devices provide LAN extension services to a given site. For the Delta design, the data center site will have two OTV edge devices while the DRS site will use one OTV edge device.

OTV uses an Authoritative edge device (AED) to prevent loops when there is more than one OTV edge device. The AED forwards Layer 2 traffic and advertises MAC reachability to remote edge devices. The AED role is negotiated, on a per-VLAN basis, between all the OTV edge devices belonging to the same site (which have the same Site ID). OTV uses a VLAN called “Site VLAN” within a site to detect and establish a Site Adjacency with other OTV edge devices. For Delta, the Site VLAN will be carried on multiple Layer 2 paths across the vPC connection on the Distr VDCs to increase the resiliency of this internal adjacency.

With the NX-OS 5.2(1) release, which is planned for Delta, OTV devices also maintain a second adjacency, named the “Overlay Adjacency”, which is established through the Join interfaces across the Layer 3 network domain. OTV uses the site-identifier value for establishing this adjacency, and all edge devices in the same site must be configured with the same site-identifier.
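
In configuration terms, this is just the global site VLAN and site identifier on each OTV edge device; the values below are placeholders:

  ! Both data center OTV VDCs -- same site identifier (values are assumptions)
  otv site-vlan 99
  otv site-identifier 0x1
  !
  ! DRS OTV edge device -- different site identifier
  otv site-vlan 99
  otv site-identifier 0x2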

Traffic Flows
This section briefly discusses traffic flow at a high level from Host A at the main data center to Host B at the DRS.

  1. Traffic from Host A passes through a N2K, through a N5K, and then travels to a Distr VDC at the main data center. The Distr VDC forwards this traffic to the Internal interface on the OTV VDC.
  2. The OTV VDC encapsulates the Layer 2 frames into Layer 3 packets and forwards them out its Join interface toward the transport WAN.
  3. The Distr VDC receives the Layer 3 packets and forwards them to the WAN router, which forwards them across the WAN to DRS.
  4. The DRS Distr VDC forwards the Layer 3 traffic to the Join interface on the DRS OTV VDC. The DRS OTV VDC removes the Layer 3 encapsulation and forwards the Layer 2 traffic out its Internal interface.
  5. The DRS Distr VDC receives the Layer 2 traffic and forwards it toward the end host through the N5K and N2K.

This traffic flow is illustrated in the following diagram:

[Figure: Traffic flows with OTV on a stick]

Only the AED is allowed to forward unicast traffic between the Layer 2 domain and the Layer 3 domain at a given site. OTV splits the AED role for odd and even VLANs between the two OTV edge devices. To balance traffic forwarding at the N7Ks at the main data center, Delta should plan their odd and even extended VLAN assignments. Specifically, the HSRP design and the STP design should align the HSRP primaries and STP root with the AED for any given VLAN.
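
Once the overlay is up, the adjacencies and the per-VLAN AED role can be checked with commands such as:

  show otv overlay 1
  show otv site
  show otv adjacency
  show otv vlan

In particular, show otv vlan indicates which edge device is authoritative for each extended VLAN.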

I plan to post sample configs in a later article, so check back later, or subscribe to the NetCraftsmen blog feed!

— cwr

11 Responses to “Designing an OTV Deployment”

  1. Hi Carole,

    This article is very useful to me. I’m currently involved in delivering a new DC environment for a specific customer, and your article helps me improve my design suggestions to the customer, as well as my own knowledge.

    Very good.

  2. Thanks for sharing! Could you say a few words about what OTV adds over the alternative of just extending the VLANs directly across the dark fiber?

  3. Hi Bob –

    The main advantage of OTV is the ability to keep the Layer 2 domains at each location separate and isolated. While still transporting the Layer 2 applications between the sites, OTV provides a separate Layer 2 fault domain for each location.

    If you simply extended the VLANs across the dark fiber, you would connect the two locations into one Layer 2 fault domain.

    Carole

  4. Hi,
    We deployed OTV for one of our clients:
    (1) Make sure you increase the MTU (e.g., to 1600) along the path between the OTV edge devices across the two sites. The ping sweep utility within NX-OS is a good tool to check that.
    (2) If your host is not pointing to the N7K default gateway (e.g., your host points to an ASA firewall as its default gateway) and you extend that VLAN, you will need to adjust the ARP timer on the firewall to match the Nexus. Otherwise, you will have connectivity issues if your host is silent for 30 minutes. Nexus MAC ageing is 30 minutes, ARP ageing is 25 minutes. The command
    "sh system internal l2fm l2dbg macdb address xxxx.xxxx.xxxx" is useful in troubleshooting as it will tell you which MAC address gets deleted by OTV.
    (3) We use OTV unicast mode, and 5.2(1) has a memory leak bug which we are hitting. There is also a bug related to SNMP (i.e., you do an SNMP walk and your OTV VDC will crash). We are upgrading to 5.2.3a; hope it is more stable.

    Rgds
    Eng Wee

  5. Thanks for the feedback! Cisco recommends in their NX-OS OTV Configuration Guide that you configure the join interface and all Layer 3 interfaces that face the IP core between the OTV edge devices with the maximum possible MTU size supported by the IP core.
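
    As a quick check that the transport actually carries the larger frames end to end, something like the following ping from the OTV VDC can be used (the addresses and size here are placeholders):

      ping 10.10.60.2 source 10.10.50.2 packet-size 1600 df-bit count 5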

  6. How exactly does OTV work if my primary data center is down?
    Where exactly will the Layer 3 be configured?
    For example, I have my primary application in DC 1 with IP 10.10.10.1.
    When DC 1 fails or is isolated, can I bring up the same server in DC 2 with IP 10.10.10.2 using OTV and vMotion? In this case, where will my Layer 3 VLAN be located?

  7. You need to set up your disaster recovery / continuation of operations site before the primary site goes down, typically by running the critical applications in both places.

  8. Hello,

    Thank you for the great post, it is very informative. I have two questions I would be grateful if you could answer:
    1. How does OTV work with the DC firewall, since in the typical deployment the default gateway for the servers is the firewall, not an SVI?
    2. If the DC N7K acts as an MPLS PE, how will that affect the OTV design?
    Thanks.

  9. Thanks for the comment, Ashraf. I personally try to avoid having firewalls do intra-datacenter routing: it doesn’t scale well. Object-oriented ACLs on switches scale better, with "vaults" behind a firewall for servers requiring stateful ACLs, etc.

    I think OTV and any other L2-between-datacenters technology is going to exhibit interesting traffic patterns if you have a FW as default gateway (presumably in both datacenters). If active/passive or pseudo-active/active (half and half active/passive), you’ll get traffic going back to the datacenter with the active firewall. You really need cross-site firewall clustering to do this well, and that has bad/ugly failure modes when it fails. Cisco is reportedly certifying a design using ASA clustering. My personal reaction is that it adds risk, but that may be implicit in any decision to use OTV or L2 between datacenters. That’s why I advise careful planning and understanding pros/cons, rather than charging into OTV because the VMware or Disaster Recovery people want L2 between datacenters.

    Re question 2, I’d think as long as you keep the routing and MPLS PE function isolated from the server VLANs, you’d be fine. If your intent is to force all traffic through the firewall, using a VDC for "outside" might be appropriate. I’d need to see and discuss specifics to say more.

    Hope this helps!

  10. In your design, when host A sends a frame, the N5K is connected to the N7Ks through a vPC.
    Traffic can be forwarded to the north N7K, which can be a non-AED for that particular VLAN.
    The traffic will then be forwarded through the vPC toward the south N7K, which will process the frames and encapsulate them into an "OTV" (read: EoMPLSoGRE) packet.

    Did you

  11. Nicolas, your question got cut off. The non-AED doesn’t forward. The AED forwards to both edge devices at the other site, but again the non-AED doesn’t forward that locally (or that’s my understanding of it).
    HTH
