At the closing session of Tech Field Day 12 (#TFD12), Docker presented on the significant new features in Docker networking. Some of the #NFD13 delegates and I attended, since Docker's was the closing presentation the day before NFD13. Hey, they had me at the word “networking!”
I’m not going to claim to have wrapped my brain around Docker networking — yet. But learning about it is in my reading queue, and probably should be in yours. I’m writing this quick blog to connect you with some good resources.
Docker started with the houses-versus-apartments analogy. A house has its own heating/cooling, water heater, infrastructure. (And mine has a yard, with flower beds that need mulch — spring sore back once again!) Apartments share infrastructure. A virtual machine (VM) is like a house, a Docker container is like an apartment. Works for me!
One goal of Docker networking was to de-couple network instantiation from the container internals, providing modularity between DevOps development and the eventual Ops deployment. Here’s the point: a clean division of labor between teams, recognizing that communication across boundaries is hard, and separating implementation details from app design and architecture.
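To make that decoupling concrete, here's a minimal sketch using the standard Docker CLI. The network name "appnet", the subnet, and the images are placeholders I chose for illustration, not anything Docker presented:

```shell
# Ops (or tooling) defines the network separately, up front:
docker network create --driver bridge --subnet 172.20.0.0/24 appnet

# Dev ships containers with no networking baked in; attachment
# happens at deployment time via --network:
docker run -d --name web --network appnet nginx
docker run -d --name db  --network appnet postgres

# Re-homing the app onto different addressing later means changing
# the network definition, not the container images.
```

The point is that the addressing and driver choices live outside the container build, which is exactly the Dev/Ops boundary Docker was describing.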
I’m all over that. I’ve seen way too many recent ad hoc app deployments (is “ad hoc” a polite word for a DevOps-derived mess?), where changing the networking or addressing would be very helpful (e.g., summarizable routes, consistent addressing blocks for security zones, etc.).
Docker also talked about containers for Microsoft OSs, although note it is for server and non-GUI applications only (cf. Microsoft Nano Server).
My big question lately — be it bare metal servers, VMware “application pods” (the group of VMs providing a service/application), or containers — is how do I manage it? In particular, I don’t just want the user experience data, but micro-service or container-to-container performance data. That seems particularly important when containers with micro-services are being spun up and shut down in very short timeframes. How do I detect that containers on host A talking to those on host B are slower than usual, and correlate that with a high error rate or dirty optics on one of the links in between?
Aside from managing things, I have the feeling there ought to be a balance between decomposing a program into components and the network impact, including latency. It’s good programming to build single-purpose container-based app components, simplifying coding and bug fixing. I get that. My concern comes from watching some SOA apps, and getting the feeling there was a lot more “passing the buck” to another server than actually getting the work done. I can imagine even greater chattiness happening with container-based micro-services. I’d like data on that (e.g., time to get a response, how much of it is network, how much the containerized service). I’m not sure what the right answer is. Human coding is costly, network not so much. Having data (actual facts!) has to be useful.
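A rough sketch of the kind of breakdown I mean: if the service reports its own processing time, the caller can split observed round-trip latency into service time and everything else (network plus overhead). The delays below are simulated stand-ins, not a real microservice:

```python
import time

def call_service(net_delay=0.02, work_time=0.05):
    """Stand-in RPC: sleeps model network transit and service work."""
    time.sleep(net_delay)                      # request in flight
    t0 = time.perf_counter()
    time.sleep(work_time)                      # the container doing its work
    reported_service_time = time.perf_counter() - t0
    time.sleep(net_delay)                      # response in flight
    return "ok", reported_service_time

def split_latency():
    """Return (total, service, network) times for one call, in seconds."""
    start = time.perf_counter()
    _, service = call_service()
    total = time.perf_counter() - start
    return total, service, total - service    # network = total - service

total, service, network = split_latency()
print(f"total={total*1000:.1f} ms  service={service*1000:.1f} ms  "
      f"network\u2248{network*1000:.1f} ms")
```

In real deployments the service-side number would come from instrumentation (a response header, tracing span, etc.), but the arithmetic — total minus service equals network share — is the data I’d want per hop.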
Some related blogs:
Comments are welcome, both in agreement or constructive disagreement about the above. I enjoy hearing from readers and carrying on deeper discussion via comments. Thanks in advance!