Is Software Really Eating Hardware?


Editor’s Note: Our friend Russ White, a network architect with LinkedIn, recently passed along his thoughts on the supposed dominance of software over hardware, and we thought that you would appreciate his take. Enjoy.

The networking world is ablaze with software-defined everything. The mantra of the New Year is “software eats hardware.” But before the network engineering world follows the marketing hype into the nearest blind alley, it might be useful to inject a little reality into the discussion. Let’s take one example of software eating hardware, break it down, and really try to understand where things might go.

One particular item of interest in the software-eats-hardware story has been the increasing speed at which standard processors can switch packets. As an example, 6WIND has, according to SDX Central, optimized packet processing on an Intel processor to:

…prove dramatic performance improvement compared to other standard open source Linux solutions. For bare metal performance, 6WIND Turbo Router demonstrated 115 Gbps of IMIX throughput with near zero frame loss rates, and Turbo IPsec was able to handle tunnel setups at 12K/sec and maintain 240K concurrent tunnels while achieving over 70 Gbps of throughput, the maximum traffic thrown at it by the traffic generator.

115 Gbps is certainly impressive, but let’s look at what’s really going on here. First, the “average” top-of-rack switch built on a fairly standard “white box” chipset can support 24x10G ports toward the rack and 4x40G uplinks toward the fabric. Larger solutions already exist or are being promised, and this is without getting into pure fiber plays like Intel’s Rack Scale architecture.

The standard top-of-rack switch, then, is pushing around 400 to 500 Gbps of aggregate bandwidth (the port configuration above works out to 400 Gbps, and denser builds go higher), and 1 Tbps solutions are well in hand. 115 Gbps is still only about a fifth to a quarter of the bandwidth available in a fairly standard top-of-rack switch, and nearly an order of magnitude slower than the fastest available top-of-rack devices, on a box that is running bare metal, with no other processing load. It doesn’t sound like standard COTS processors are going to catch up to processors designed to switch packets anytime soon.
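To make the comparison concrete, here is a quick back-of-envelope sketch in Python. The port counts and the 115 Gbps figure come from the paragraphs above; the 1 Tbps number is simply the round figure used for the fastest top-of-rack gear, not a measured result.

```python
# Back-of-envelope bandwidth comparison using the figures quoted in this post.
# These are rough, assumed numbers for illustration, not benchmarks.

RACK_GBPS = 24 * 10      # 24 x 10G ports toward the rack
FABRIC_GBPS = 4 * 40     # 4 x 40G uplinks toward the fabric
TOR_GBPS = RACK_GBPS + FABRIC_GBPS   # 400 Gbps aggregate for this config

COTS_GBPS = 115          # 6WIND Turbo Router, bare metal IMIX throughput
FAST_TOR_GBPS = 1000     # round number for a 1 Tbps class switch

print(f"ToR aggregate:      {TOR_GBPS} Gbps")
print(f"COTS share of ToR:  {COTS_GBPS / TOR_GBPS:.0%}")
print(f"Gap vs 1 Tbps:      {FAST_TOR_GBPS / COTS_GBPS:.1f}x")
```

Running this shows the COTS figure at roughly 29% of the 400 Gbps configuration (about a fifth of the denser ~500 Gbps builds) and a gap of nearly 9x against 1 Tbps class hardware, which is where the “order of magnitude” comparison comes from.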

Looking at this from another direction, this not only makes sense, but it should also make us ask, “Why would we do this in the first place?” Another trend that might have caught your eye this year is the drive to reduce power consumption in data centers. For instance, Intel has shipped Atom chips in blade servers, and Cavium is shipping a nice alternative to the x86 platform most engineers consider COTS. Data center processing appears to be diversifying.

What does this mean for a world built around fast switching on a COTS processor?

Just this: there is no way a general purpose processor is ever going to do everything as well, and as efficiently, as some set of specialized processors. There will always be jobs for which some form of specialized processor wins. I suspect graphics processing (with GPUs) and packet switching will be two of these jobs. The energy cost of running a full-blown general purpose processor, from die size to the sheer “number of buckets,” will always be a factor, and on these measures specialized chips will always win, if for no other reason than that they devote fewer transistors to a smaller set of functions.

Will software never win, then? It depends on what you mean when you say “win.” Software will always be the controlling component of data processing. Software will continue (we hope) to move out of the embedded world into the open, so software and hardware become separate “things” you buy when building a new system. On the other hand, network engineers will never be free of hardware considerations, nor of some form of “custom hardware,” even if that means a small set of “off the shelf” chips used in different places in the network for different tasks.

So does software eat hardware? The long-run result is decidedly mixed: both will always have their place in the world of network engineering.

Russ White has more than 20 years’ experience in designing, deploying, breaking, and troubleshooting large scale networks. He has co-authored more than 40 software patents, spoken at venues throughout the world, participated in the development of various internet standards, helped develop the CCDE and the CCAr, and worked in Internet governance with the ISOC. Russ is currently a member of the Architecture Team at LinkedIn. His most recent books are “The Art of Network Architecture” and “Navigating Network Complexity.”
