Device Naming Standards
Device naming standards are boring. Why should we even talk about them? When you're in the middle of a network meltdown and you are having trouble remembering device names and IP addresses, it isn't boring; it's frustrating. But that's not the right time to be thinking about device naming standards. So let's discuss how device naming standards can help you.
Michael Morris, who writes a blog for Network World, described a device naming scheme in Naming Conventions. The CCIE Pursuit Blog followed up with a post titled Network Device Naming Conventions. Both have the right intent, but each includes a key factor that can make troubleshooting more difficult, and that's what I'll talk about today.
I started my networking career on the Unix UUCP (Unix-to-Unix CoPy) dial-up network for exchanging email and netnews. There was once a major hub host in the UUCP network named rlgvax, a DEC VAX-780 at the RLG Corporation in Washington, DC. A few years went by, the 780 grew old and slow relative to newer computers, and it was eventually replaced by a Gould 9000 super-minicomputer. The inside joke was that rlgvax was no longer a VAX. Cute, but it didn't change how UUCP operated or how you went about troubleshooting it.
Advance the calendar to more recent times, with routers and switches deployed in a large campus network or in a global network. When you need to troubleshoot a network problem by opening a CLI session with a remote device, can you easily remember the name of the device? Let’s look at a few examples.
The articles referenced above mention using a unique site ID. That's good. I've seen device naming conventions that used a three-digit Site ID numbering system. The remainder of the name was the device function (pe = PE router, ce = CE router, me = Metro Ethernet switch), the device model, and a unit number for that device type. The resulting names looked like these (illustrative examples of the pattern): 003pe7600-01, 003me3400-01, 014ce3845-02.
There were benefits and problems with this naming convention. The primary benefit is that it is simple and easy to remember, which are essential qualities when troubleshooting. The Site IDs could be maintained in a spreadsheet on a central server. Just be careful about where you keep the site list so that you can access it when there's a major network problem. Michael mentions the UN Code for Trade and Transport Locations as one way to identify sites, possibly augmented to handle the case where you have multiple sites per city. You may need to include a region code like 'na' for North America or 'wr' for western region. A campus may use building identifiers. Pick something that makes sense for 5-10 years of organization growth.
The problems with the above naming convention start with the device function. The 'me' device type was really just another version of the 'ce' device type; encoding the hardware flavor in the function wasn't needed. The next problem was incorporating the device model in the name. Occasionally, troubleshooting was delayed while the network team tried to recall what type of device was installed at a site. Fortunately, good network documentation existed and was available most of the time, so it was easy to look up which devices were installed. There was the occasional case where a device had been upgraded (a 3500 replaced with a 3400, or a 7301 replaced with a 7600) and the documentation had not been updated. Running 'show cdp neighbor' from a known neighboring device revealed what was currently installed, but the confusion could have been avoided entirely by omitting the device model from the name. Isn't the device type useful information? Sure, but it doesn't need to be in the name; there are other places to obtain it, such as the build documents, the NMS, and neighboring devices (show cdp neighbor).
Similarly, the device location is important, but it probably isn't something you want in the device name. Otherwise, you have to know its location, and type it correctly, in order to open a CLI connection to it. As a commenter on Michael's article noted, sysLocation (snmp-server location in the Cisco CLI) should contain that information, making it available to both CLI users and the NMS.
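As a sketch, putting the location into sysLocation rather than the name might look like this on a Cisco device (the address, floor, closet, and rack values below are made up for illustration):

```
! Hypothetical sysLocation value encoding address, floor, closet, and rack
snmp-server location 123 Main St, Bldg 2, Fl 3, Closet 3B, Rack 12
```

The CLI user can retrieve it with 'show snmp location', and the NMS picks it up automatically via the standard sysLocation MIB object.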
The device function should be part of the name, reflecting its logical role in the network. I prefer to identify Core, Distribution, or Access, possibly with a separate category for access layer devices that support critical servers or customers, while Michael mentions using H, M, and L for a device's importance. In an MPLS network you could use P (Provider), PE (Provider Edge), RR (Route Reflector), and CE (Customer Edge).
Let's say that you've decided on using a SiteID, a Function, and a Unit number. Should the SiteID come first? I would arrange the elements by ranking: SiteID-Function-Unit. Some resulting names would be: 003-core-01, 003-dist-01, 003-acc-01, and 014-pe-02.
The system location of each device would contain address, floor, closet, and rack information. If I’ve done my IP addressing to allow summarization, the NMS can easily group devices by address and use the names within a group to identify function and importance of the device.
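As a minimal sketch of the SiteID-Function-Unit scheme described above, a small helper can build and validate canonical names. The function names and the set of role abbreviations here are my own illustration, not part of any standard:

```python
import re

# Role abbreviations drawn from the roles discussed above (core/dist/acc
# plus the MPLS roles); extend to match your own network.
FUNCTIONS = {"core", "dist", "acc", "p", "pe", "rr", "ce"}

def device_name(site_id: int, function: str, unit: int) -> str:
    """Return a canonical name like '003-core-01' (lowercase, since DNS
    names are case-insensitive)."""
    function = function.lower()
    if function not in FUNCTIONS:
        raise ValueError(f"unknown function: {function}")
    return f"{site_id:03d}-{function}-{unit:02d}"

# Pattern: three-digit SiteID, alphabetic Function, two-digit Unit.
NAME_RE = re.compile(r"^\d{3}-[a-z]+-\d{2}$")

def is_valid_name(name: str) -> bool:
    """Check that a name matches the SiteID-Function-Unit pattern."""
    return bool(NAME_RE.match(name.lower()))
```

For example, `device_name(3, "core", 1)` yields `003-core-01`. Generating names from data like this, instead of typing them by hand, is one way to keep the standard consistently applied.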
Once you have a device naming standard, how is it applied? I like to have an automated mechanism to generate the DNS records, including interface-specific DNS names (manually generating them isn't worth the effort), so that traceroute can tell me that a packet transited 003-core-01-gi4-20 instead of 003-core-01. In addition, I like to make Loopback addresses match the canonical name of the device and create a DNS entry for HSRP addresses. Plan for device names to be case-insensitive, since DNS is case-insensitive (see RFC 4343). Make sure that the sysName (hostname in the Cisco CLI) matches the name in DNS. Automatically creating DNS records from the NMS helps enforce this consistency.
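The automated DNS generation described above can be sketched in a few lines. This is an illustration only: the function, the interface-label format, and the example.net domain are my assumptions, not a specific NMS's API:

```python
# Hypothetical sketch: emit BIND-style A records for a device's loopback
# and its interfaces, so traceroute resolves interface-specific names
# like 003-core-01-gi4-20.
def dns_records(device: str, loopback_ip: str, interfaces: dict[str, str],
                domain: str = "example.net") -> list[str]:
    """interfaces maps short interface labels (e.g. 'gi4-20') to IPs."""
    device = device.lower()  # DNS is case-insensitive; store lowercase
    records = [f"{device}.{domain}. IN A {loopback_ip}"]
    for label, ip in sorted(interfaces.items()):
        # Interface names extend the device's canonical name
        records.append(f"{device}-{label}.{domain}. IN A {ip}")
    return records
```

Feeding this from the NMS inventory keeps DNS, sysName, and the naming standard in lockstep; a mismatch shows up as a failed lookup rather than a silent inconsistency.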
My point in describing these fields is to make you aware of the impact of the different types of fields you can put in device names. Simple names are better, in my experience: they require less thought to use, which reduces the chance of human error.
NetCraftsmen would like to acknowledge Infoblox for permission to re-post this article, which originally appeared in the Applied Infrastructure blog at http://www.infoblox.com/en/communities/blogs.html