Just Say No to Jumbo Frames

Author
Peter Welcher
Architect, Operations Technical Advisor

How do you feel about jumbo MTU? I seem to periodically get into debates about jumbos. I’m highly allergic to jumbos. Let’s examine the facts (as I see them), and then we’ll get to the cause of my allergic reaction.

TL;DR: jumbo can cause major operational pain for network administrators.

But first, let’s define jumbo MTU. This usually refers to jumbo frames on Ethernet media. I’m making that distinction because other transports can have different MTU sizes.

Ethernet MTU stands for Maximum Transmission Unit, the largest data payload in an Ethernet frame. The standard is 1500 bytes. Note that the MTU is not the frame size: an Ethernet frame adds an L2 header (DMAC, SMAC, EtherType) and a checksum (FCS), for an OSI Layer 2 frame size total of 18 more bytes. At Layer 1, there are also the preamble and start-of-frame delimiter, adding another 8 bytes.

Using 802.1Q VLAN tagging adds 4 more bytes (the 802.1Q tag type code plus the tag control field carrying the VLAN ID, inserted ahead of the original EtherType).
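If you like seeing the arithmetic spelled out, here's a quick back-of-the-envelope sketch in Python (the 9000-byte jumbo payload below is just the common convention; the exact jumbo limit varies by hardware):

```python
# Rough frame-size arithmetic for standard vs. jumbo Ethernet frames.
L2_HEADER = 6 + 6 + 2        # DMAC + SMAC + EtherType
FCS = 4                      # frame check sequence (the L2 checksum)
DOT1Q_TAG = 4                # optional 802.1Q VLAN tag
L1_OVERHEAD = 7 + 1          # preamble + start-of-frame delimiter

def frame_bytes(mtu, vlan_tagged=False):
    """Total bytes on the wire for one frame (not counting the inter-frame gap)."""
    return mtu + L2_HEADER + FCS + (DOT1Q_TAG if vlan_tagged else 0) + L1_OVERHEAD

for mtu in (1500, 9000):     # 9000 is the commonly used jumbo payload size
    print(f"MTU {mtu}: {frame_bytes(mtu)} bytes untagged, "
          f"{frame_bytes(mtu, vlan_tagged=True)} bytes with a VLAN tag")
```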

I’m going to refer you to the Wikipedia page for the various formats for Ethernet frames.

Note that TCP MSS (Maximum Segment Size) is something different: it is the largest TCP payload, typically the IP MTU minus 40 bytes of IP and TCP headers (1460 bytes for a 1500-byte MTU).

Now that everyone is equally confused, let’s dig in deeper…

It Gets Confusing

Various vendors let you set the MTU. One has to be careful, as sloppy terminology sometimes leaves it unclear what exactly is being configured, i.e. which OSI layer headers are included.

On Cisco, setting the IP MTU reduces the IP packet size, usually to accommodate VPN tunneling or other overhead. I usually just use 1400 for such situations: it saves on math errors and off-by-one errors, and if you're splitting the packet anyway, it doesn't really matter much if you split it a bit earlier than absolutely necessary. You end up with two packets either way.
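For the curious, here's a rough tally of why 1400 leaves comfortable headroom. The overhead numbers below are typical approximations, not exact for every tunnel configuration:

```python
# Why "ip mtu 1400" leaves comfortable headroom: subtract typical tunnel overhead from 1500.
# Overhead values here are typical approximations; real numbers vary with options and ciphers.
ETHERNET_MTU = 1500
GRE_OVERHEAD = 24              # outer IP header (20) + GRE header (4)
IPSEC_OVERHEAD_APPROX = 60     # rough ESP tunnel-mode overhead (varies with cipher and padding)

print("GRE-only inner MTU:     ", ETHERNET_MTU - GRE_OVERHEAD)                          # 1476
print("GRE over IPsec, roughly:", ETHERNET_MTU - GRE_OVERHEAD - IPSEC_OVERHEAD_APPROX)  # ~1416
# Rounding down to 1400 skips the exact math and dodges the off-by-one mistakes.
```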

Why Jumbos?

Reason #1: The ratio of header overhead to payload is better. So, you get marginally more data transmitted with a given amount of bandwidth. Or said differently, jumbos waste less bandwidth on headers.

Reality check: that used to matter. If you’re transmitting to Mars at 8 bits per second, you might care about overhead, greatly. Modems at 16 Kbps or whatever, yes. And yes, I’m old enough to remember modem connect squeal and all that. At 10 Gbps, you likely have bandwidth to spare, unless you’re doing something extreme, where you need every last bit of performance. 
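A quick sanity check on just how marginal the gain is (assuming a 9000-byte jumbo payload and the standard 12-byte inter-frame gap):

```python
# Payload efficiency: what fraction of the wire actually carries payload?
OVERHEAD = 18 + 8 + 12       # L2 header + FCS, preamble + SFD, standard 12-byte inter-frame gap

for mtu in (1500, 9000):     # 9000-byte jumbo payload assumed
    efficiency = mtu / (mtu + OVERHEAD)
    print(f"MTU {mtu}: {efficiency:.1%} of the bandwidth carries payload")
# Roughly 97.5% vs. 99.6%: about a 2% gain from jumbos, before counting IP/TCP headers.
```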

Reason #2: Many devices' forwarding performance is (or was) measured in packets per second, since the limiting factor was how many packets the CPU had to deal with.

This may still be true to some extent with cheap server / laptop NICs. TCP offload and other driver techniques may alleviate the CPU burden of adding the packet headers and computing the checksum. Efficient driver coding (e.g. not copying data around in memory!) has also improved forwarding performance. I don't consider myself a server or NIC expert, so I'll quickly change the subject …
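For a feel of the packet rates involved, here's the same arithmetic turned into packets per second at 10 Gbps line rate (same overhead assumptions as above):

```python
# Maximum full-size frame rate at 10 Gbps for standard vs. jumbo frames.
LINE_RATE_BPS = 10_000_000_000
OVERHEAD = 18 + 8 + 12       # L2 framing + preamble/SFD + inter-frame gap

for mtu in (1500, 9000):
    wire_bits = (mtu + OVERHEAD) * 8
    pps = LINE_RATE_BPS / wire_bits
    print(f"MTU {mtu}: ~{pps:,.0f} frames per second to process")
# Roughly 813,000 pps at MTU 1500 vs. 138,000 pps at MTU 9000: far fewer per-packet operations.
```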

Reason #3: My storage team (or vendor) told me they need jumbos for better performance.

Vendors have been claiming this for a while, and it may be true. Although one response might be, “and why didn’t you put a more powerful CPU in your storage front end, since the marginal cost would be tiny?” Yes, some organizations do need extreme performance.  

Googling, I see articles that are all over the place. Some might be summarized as “yes, a 7% gain in performance”; others show bigger gains, and sometimes losses, in performance. The right answer is likely “it depends” (on your environment, your NICs, your CPU, your drivers, etc.).

The Downside of Jumbos

Jumbos have to be configured. That's one more thing that can go wrong or go missing. Labor expended.

Jumbos have to be configured to a plan. You have to take re-routing (STP changes or routing changes) into account and set up jumbos consistently across every possible alternative path.

Design-wise, that means at the very least you should pick a region in your datacenter for jumbo deployment, define it well, and then perhaps automate periodic checks for interfaces / ports that didn’t get configured.
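A minimal sketch of such a periodic check, assuming you've already collected interface MTUs into a simple (device, interface) to MTU mapping; how you collect them (SNMP, a device API, or scraping “show interface” output) is up to you. The device names, interface names, and target MTU below are made up for illustration:

```python
# Hypothetical audit: flag ports inside the defined jumbo region that aren't at the target MTU.
EXPECTED_JUMBO_MTU = 9216      # assumed target; use whatever your plan standardizes on

# Example of what the collected data might look like: (device, interface) -> configured MTU.
collected_mtus = {
    ("dc1-leaf01", "Ethernet1/1"): 9216,
    ("dc1-leaf01", "Ethernet1/2"): 1500,    # this one got missed
    ("dc1-leaf02", "Ethernet1/1"): 9216,
}

def find_mismatches(mtus, expected=EXPECTED_JUMBO_MTU):
    """Return every (device, interface, mtu) that doesn't match the expected jumbo MTU."""
    return [(dev, intf, mtu) for (dev, intf), mtu in mtus.items() if mtu != expected]

for dev, intf, mtu in find_mismatches(collected_mtus):
    print(f"WARNING: {dev} {intf} has MTU {mtu}, expected {EXPECTED_JUMBO_MTU}")
```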

You probably don’t want jumbos on the campus LAN or the WAN.

What Could Possibly Go Wrong?

If a large frame arrives on a port or interface configured for a smaller MTU, it likely gets discarded. (See, however, Cisco “baby giant” frames, which allow a little laxity with frame sizes.)

This can lead to very puzzling “why can’t these devices talk” troubleshooting sessions. And then you have to look at every possible path between devices, check the actual MTU (mark up a diagram), and look for inconsistencies. You may find the problem by doing a traceroute or ping with a large packet size and the DF bit set, but that will only catch the first problem spot along the current path (you did want High Availability, didn’t you?).
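If you want to semi-automate that ping test, here's a rough sketch using the Linux iputils ping flags (-M do sets the DF bit, -s sets the ICMP payload size; 28 bytes of IP and ICMP header ride on top of that payload). The target address is hypothetical, and remember this only exercises the path traffic is taking right now:

```python
# Rough sketch: probe the current path with DF-set pings of increasing size (Linux iputils ping).
import subprocess

def ping_df(host, payload_bytes):
    """True if a single ping with the DF bit set and this payload size gets through."""
    result = subprocess.run(
        ["ping", "-c", "1", "-W", "2", "-M", "do", "-s", str(payload_bytes), host],
        capture_output=True,
    )
    return result.returncode == 0

def probe(host, mtus=(1500, 4000, 9000)):
    for mtu in mtus:
        payload = mtu - 28    # subtract the 20-byte IP header and 8-byte ICMP header
        status = "passes" if ping_df(host, payload) else "FAILS (fragmentation needed or dropped)"
        print(f"{host}: {mtu}-byte packet {status}")

probe("10.1.1.10")            # hypothetical storage target address
```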

MTU mismatch can be a problem for your routing stability. I’ve seen it now with both EIGRP and OSPF.

With OSPF, the problem arises in the adjacency formation process, if one neighbor has a larger MTU. I’ve seen it once in the field.

You will see the problem during the OSPF Exchange state, and one side won’t be happy because it thinks it is not seeing anything from the other router. This shows up as sequencing through the OSPF state machine, a pause, then repeating. The problem occurs when there is enough info in the OSPF LSA database to cause sending packets too large for the other router. So this is something you won’t see until one day your network gets big enough (and it doesn’t have to be all that big) and OSPF starts breaking.

As often happens, I found an interesting blog at INE about this. It shows CLI output if you’re interested in examining this problem in detail.

However, from a Cisco Tech Note, it looks like there have been various changes in how Cisco handles this situation. Short version: “ip ospf mtu-ignore” may solve your OSPF problem, depending on release version, but you’ll still have jumbo drop issues.

With EIGRP, you can have something similar happen. I’ve seen it with two routers with an L2 switch in between, where the switch had a smaller MTU. As the routing table size grew, EIGRP went unstable between the two routers. This could also happen to OSPF, even if correct MTU checking is going on between the two routers.

Conclusion

Jumbo MTU provides:

  • Less header overhead, more data per packet: minor gain
  • Greater network complexity
  • Minor configuration hassle
  • Really annoying troubleshooting when an MTU mis-match happens somewhere

Conclusion: Just Say No to Jumbo Frames

That is, if local politics, requirements (e.g., a genuine need for extreme performance), and vehemence of argument allow.

Comments

Comments are welcome, both in agreement or constructive disagreement about the above. I enjoy hearing from readers and carrying on deeper discussion via comments. Thanks in advance!

—————-

Hashtags: #CiscoChampion #TheNetCraftsmenWay #JumboFrame

Twitter: @pjwelcher

Disclosure Statement
Cisco Certified 20 Years

NetCraftsmen Services

Did you know that NetCraftsmen does network / datacenter / security / collaboration design / design review? Or that we have deep UC&C experts on staff, including @ucguerilla? For more information, contact us at info@netcraftsmen.com.
