
NetCraftsmen, 2016

Is Software Really Eating Hardware?

Editor’s Note: Our friend Russ White, a network architect with LinkedIn, recently passed along his thoughts on the supposed dominance of software over hardware, and we thought that you would appreciate his take. Enjoy.

The networking world is ablaze with software-defined everything. The mantra of the New Year is “software eats hardware.” But before the network engineering world follows the marketing hype into the nearest blind alley, it might be useful to inject a little reality into the discussion. Let’s take one example of software eating hardware, break it down, and really try to understand where things might go.

One particular item of interest in the software eats hardware story has been the increasing speeds at which standard processors can switch packets. As an example, 6WIND has, according to SDX Central, optimized packet processing on Intel processors to:

…prove dramatic performance improvement compared to other standard open source Linux solutions. For bare metal performance, 6WIND Turbo Router demonstrated 115 Gbps of IMIX throughput with near zero frame loss rates, and Turbo IPsec was able to handle tunnel setups at 12K/sec and maintain 240K concurrent tunnels while achieving over 70 Gbps of throughput, the maximum traffic thrown at it by the traffic generator.

115 Gbps is certainly impressive, but let's look at what's really going on here. First, the "average" top-of-rack switch built on fairly standard "white box" chipsets can support 24x10 Gbps ports towards the rack and 4x40 Gbps ports towards the fabric. Larger solutions already exist or are being promised, and this is without getting into pure fiber plays like Intel's Rack Scale architecture.

The standard top-of-rack switch, then, is pushing around 400 Gbps (24x10 plus 4x40), and 1 Tbps solutions are well in hand. 115 Gbps is still well under a third of the available bandwidth in a fairly standard top-of-rack switch, and an order of magnitude slower than the fastest available top-of-rack devices; and that's on a device running bare metal, with no other processing going on. It doesn't sound like standard COTS processors are going to catch up to processors designed to switch packets anytime soon.
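As a quick back-of-the-envelope check, the comparison above works out as follows; the port counts and the 115 Gbps figure are the ones quoted in this article, and everything else is simple arithmetic:

```python
# Back-of-the-envelope check of the software vs. hardware throughput gap.
# Port counts match the "average" top-of-rack switch described above.

RACK_PORTS = 24          # 10 Gbps ports facing the servers
RACK_PORT_GBPS = 10
FABRIC_PORTS = 4         # 40 Gbps uplinks toward the fabric
FABRIC_PORT_GBPS = 40

SOFTWARE_GBPS = 115      # 6WIND Turbo Router, bare metal, IMIX traffic

tor_gbps = RACK_PORTS * RACK_PORT_GBPS + FABRIC_PORTS * FABRIC_PORT_GBPS
print(f"ToR aggregate: {tor_gbps} Gbps")                  # 400 Gbps
print(f"Software share of ToR: {SOFTWARE_GBPS / tor_gbps:.0%}")   # 29%
print(f"Vs. a 1 Tbps box: {1000 / SOFTWARE_GBPS:.1f}x slower")    # 8.7x
```

Even granting the software stack a dedicated, otherwise idle server, it fills less than a third of a commodity top-of-rack switch and sits nearly an order of magnitude behind a 1 Tbps device.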

Looking at this from another direction, this not only makes sense, but it should also make us ask, “Why would we do this in the first place?” Another trend that might have caught your eye this year is the drive to reduce power consumption in data centers. For instance, Intel has shipped ATOM chips in blade servers, and Cavium is shipping a nice alternative to the x86 platform most engineers consider COTS. Data center processing appears to be diversifying.

What does this mean for a world built around fast switching on a COTS processor?

Just this: there is no way a general purpose processor is ever going to do everything as well as, and as efficiently as, some set of specialized processors. There will always be jobs for which some form of specialized processor wins; I suspect graphics processing (with GPUs) and packet switching will be two of them. The energy cost of running a full-blown general purpose processor, from die size to sheer "number of buckets," will always be a factor, and on these terms specialized chips will always win (if for no other reason than they simply have fewer transistors focused on a smaller set of functions).

Will software never win, then? It depends on what you mean when you say "win." Software will always be the controlling component of data processing. Software will continue (we hope) to move out of the embedded world into the open, so software and hardware become separate "things" you buy when building a new system. On the other hand, network engineers will never be free of hardware considerations, nor of some form of "custom hardware," even if that means a small set of "off the shelf" chips used in different places in the network for different tasks.

So does software eat hardware? In the long run, the result is decidedly mixed: both will always have their place in the world of network engineering.

Russ White has more than 20 years’ experience in designing, deploying, breaking, and troubleshooting large scale networks. He has co-authored more than 40 software patents, spoken at venues throughout the world, participated in the development of various internet standards, helped develop the CCDE and the CCAr, and worked in Internet governance with the ISOC. Russ is currently a member of the Architecture Team at LinkedIn. His most recent books are “The Art of Network Architecture” and “Navigating Network Complexity.”
