If there’s one thing a networking person should probably know about containers, it is of course container networking. I’ve been self-educating on the topic, and I’m finding container networking a complex topic, with variations depending on the tools in use (I’d like a scorecard summary!). For what it’s worth, some blogs I’ve seen indicate that as use of containers grows and scales up, knowledge of container networking is becoming increasingly critical.
Manageability and performance may be two related factors. DevOps teams may or may not coordinate well. In the best case, they may ask for the networking team’s design advice. Will you be ready should that happen?
As a result, I’ll try to give the flavor of container networking, and provide lots of reading references. There are entire books dedicated to this topic. I’m not prepared to write one, and if you’re reading this, you’re probably not looking for a blog that long. For more info, please use the References listed at the end of this blog.
Having said that much, and in the interest of keeping the length reasonable, this blog will focus on Docker Networking.
For me, networking, especially network troubleshooting, is all about the flows. I’ve often enough been in situations where server and hypervisor admins couldn’t tell me what was going on in sufficient detail to troubleshoot a problem. Most recently, it was a form of NIC teaming in VMware mismatched to a Nexus vPC configuration.
I (and the reader!) have no reason to expect working with containers to be any different.
Here’s my current list of things one might want to know:
This blog will try to address some of those items. Subsequent blogs may tackle more of them.
Note: we won’t worry about the actual implementation mechanism (iptables or whatever). That’s TMI for getting started; we’ll need the big picture first.
For more details, see the O’Reilly book and other references below.
Docker has several “network drivers” or modes: bridge, host, overlay, and macvlan.
Terminology: in the following, “ports” will refer to TCP or UDP ports.
Bridge mode acts like a virtual switch (single VLAN) for the containers connected to it. The bridge is associated with a private IP subnet, and attached containers are automatically assigned addresses in that subnet. The host the containers are running on is itself also connected to the bridge.
By default, containers on a host all connect to the default bridge, docker0. All ports on a container are reachable by the other containers on that bridge, but they are exposed (published) to the outside world only if you explicitly publish them.
User-defined bridge networks are basically private subnets, private to the containers connected to them (and the host). Docker provides automatic DNS resolution on such networks, so container names can be used as well as IP addresses. All container ports on a user-defined bridge are accessible to the other attached containers. By default, no container ports are exposed to the outside world.
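As a quick sketch of the above (the network and container names `my-backend`, `db`, and `web` are made up for illustration, and this of course assumes a running Docker engine):

```shell
# Create a user-defined bridge network; Docker picks a private subnet,
# or you can specify one with --subnet.
docker network create --driver bridge my-backend

# Attach two containers to it. On a user-defined bridge, they can
# reach each other by name (e.g. "db") via Docker's built-in DNS.
docker run -d --name db --network my-backend postgres
docker run -d --name web --network my-backend nginx

# From "web", the database is reachable as db:5432. No ports were
# published, so nothing here is reachable from outside the host.
```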
Think private back-end network. Personally, I somewhat prefer servers to have exactly one network connection, though I can live with directly-connected back-end networks. By extension, the same goes for containers. One concern is that servers should not be doing routing, as troubleshooting that is a nightmare. Another concern is that back-end networks provide implicit rather than explicit security, so they are hard to audit.
Containers can be connected to and disconnected from user-defined bridges on the fly by the administrator / owner.
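For example (container and network names hypothetical, and the container keeps running throughout):

```shell
# Attach a running container to an additional user-defined bridge...
docker network connect my-backend web

# ...and detach it again, without restarting the container.
docker network disconnect my-backend web
```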
In host mode, a container shares the network namespace with the host. That means it is publicly exposed: same IP address as the host, just different ports. You need to manage the ports to avoid conflicts.
Host mode is only available on Linux hosts, not Docker for Mac.
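A minimal host-mode sketch (nginx chosen just as a familiar example image):

```shell
# Run nginx directly in the host's network namespace (Linux only).
# No -p flag needed or honored: the container's port 80 IS the host's
# port 80, so it conflicts with anything else already listening there.
docker run -d --name web --network host nginx
```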
Overlay networks use an overlay to securely interconnect the connected containers across multiple Docker hosts.
A Docker swarm is a cluster of Docker engines (nodes). A swarm provides declarative services, with scaling, state reconciliation, service discovery, load balancing, and other features.
Overlays are also used for multiple applications interacting via swarm services.
There are differences in behavior for standalone containers and Docker swarms. See the link below for more information. Avoiding overlay encapsulation may help performance and appears to be one differentiating factor for service mesh tools (subject for a later blog).
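To give the flavor of overlays with swarm services, a minimal single-node sketch (service and network names are made up; real clusters would have more nodes join first):

```shell
# Initialize a single-node swarm; other hosts would "docker swarm join".
docker swarm init

# Create an overlay network. The --attachable flag also lets standalone
# containers (not just swarm services) connect to it.
docker network create --driver overlay --attachable my-overlay

# A service whose replicas can land on any node in the swarm; the
# overlay carries the traffic between them across Docker hosts.
docker service create --name web --network my-overlay --replicas 3 nginx
```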
Macvlan mode lets you assign a MAC address to a container virtual network interface, so that it appears to be a separate device directly on the physical network. You have to configure which host physical interface to use, also the subnet and default gateway.
The documentation notes that too many MAC addresses can make the network unhappy. (I’ll second that!) It also notes that the host’s physical interface needs to support promiscuous mode, i.e., receiving frames for multiple MAC addresses. I’ll note that likely increases the host CPU workload.
You can use 802.1q VLAN tags with macvlan mode, if you wish: “trunk bridge mode”.
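A macvlan sketch, including the 802.1q variant (the interface name `eth0`, subnets, and network names are assumptions; they must match your host and physical network):

```shell
# Containers on this network get their own MAC addresses and appear
# as devices directly on the physical LAN behind eth0.
docker network create -d macvlan \
  --subnet=192.168.10.0/24 --gateway=192.168.10.1 \
  -o parent=eth0 macnet

# 802.1q "trunk bridge mode": a parent of eth0.30 means this
# network's traffic leaves the host tagged with VLAN 30.
docker network create -d macvlan \
  --subnet=192.168.30.0/24 --gateway=192.168.30.1 \
  -o parent=eth0.30 macnet-vlan30

docker run -d --name web --network macnet nginx
```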
IPVLAN mode is a variant where multiple IP addresses share the same MAC address. This mode may be useful, for example, when switch port security is deployed, imposing a limit to how many different MAC addresses are seen on the physical switch port.
IPVLAN has L2 and L3 modes, see the Container Networking: A Breakdown, Explanation, and Analysis link (References, below) for more details.
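For comparison, an IPVLAN sketch showing both modes (again, `eth0` and the subnets are placeholders for your environment):

```shell
# L2 mode: containers share the parent interface's MAC address
# but get their own IP addresses in the attached subnet.
docker network create -d ipvlan \
  --subnet=192.168.20.0/24 --gateway=192.168.20.1 \
  -o parent=eth0 -o ipvlan_mode=l2 ipv-l2

# L3 mode: no broadcast or gateway; the host routes on behalf of the
# containers, and the upstream network needs a route back to this subnet.
docker network create -d ipvlan \
  --subnet=10.99.0.0/24 \
  -o parent=eth0 -o ipvlan_mode=l3 ipv-l3
```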
You can disable networking on a container, for no outside access, or add your own custom network driver (an advanced topic).
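A quick way to see the disabled case (the alpine image is used here just for brevity):

```shell
# --network none: the container gets only a loopback interface.
docker run --rm --network none alpine ip addr
```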
This is an older approach that may still be useful, e.g., for troubleshooting. In container-mapped mode, one container shares the IP address, MAC address, and network stack of another container.
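A container-mapped sketch (the name `app` is hypothetical; alpine again stands in for whatever troubleshooting image you prefer):

```shell
# Start an application container...
docker run -d --name app nginx

# ...then run a second container inside app's network namespace.
# Handy for troubleshooting: this shell sees app's interfaces and ports.
docker run --rm -it --network container:app alpine ip addr
```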
You can expose ports to make them available to other containers on the same container network.
In Docker, you have to publish a port for it to be externally accessible. The publicly exposed port can differ from the container’s internal port, with the re-mapping done for you.
That’s important to remember. External-facing ports are not necessarily the same as the mapped internal ports.
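A minimal publishing example illustrating the re-mapping (container name made up; assumes nothing else is on host port 8080):

```shell
# Publish container port 80 as host port 8080.
docker run -d --name web -p 8080:80 nginx

# Externally it's 8080, internally it's 80:
curl http://localhost:8080

# "docker port" shows the mapping, e.g. 80/tcp -> 0.0.0.0:8080
docker port web
```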
There is a very helpful series of 12 blogs plus some good hands-on labs provided by Cisco. They cover Docker, Kubernetes, and various steps along the way to effectively using them and Istio service mesh. Highly recommended!
Comments are welcome, both in agreement or constructive disagreement about the above. I enjoy hearing from readers and carrying on deeper discussion via comments. Thanks in advance!
Hashtags: #CiscoChampion #TechFieldDay #TheNetCraftsmenWay
Did you know that NetCraftsmen does network / datacenter / security / collaboration design / design review? Or that we have deep UC&C experts on staff, including @ucguerilla? For more information, contact us at firstname.lastname@example.org.