This blog contains some thoughts about automation. Coding and automation have become trendy. Whenever that happens, I feel the need to understand the pros and cons. So, what are some pros and cons around coding and automation?
Coding, automation, and even ITIL share the attribute of requiring well-defined processes so that deployments (etc.) get done consistently, correctly, and efficiently. By the way, do note that coding is not the same thing as automation. ACI and Cisco DNA Center / SDA are automation, but you’re not doing much coding with them (at least not unless you start interacting with their APIs).
People don’t seem to be talking about the right and wrong contexts for automation (let alone ITIL, on the process side). This may also tie into why some IT people loathe ITIL. Personally, I think ITIL has some good ideas, perhaps with some historical blind spots that may now be fixed (e.g. performance monitoring — and no, I sleep soundly enough without trying to stay current on any more ITIL reading).
I’ve seen some heavy-handed ITIL implementations too, bogging people down in process. With ITIL, government, and lawyers alike, I have the feeling there ought to be a sweet spot, beyond which more paper, process, or laws just add to the problem. Could that possibly be true for coding or automation?
Consider the cost of proper code development. By the time you add test suites, modularity, classes for different IOS types, etc., the code to take intent and emit one configuration line expands to 100 or 1,000 lines. Automated testing is code, but does all that extra code also increase the odds of bugs? How about when you use a chain of several tools, where each handoff between tools might introduce some degree of error? An additional consideration: chaining tools may be more efficient in a direct sense, but is it less efficient in terms of finding or developing all the requisite tool skills in a new / replacement hire?
Tentative (obvious?) conclusion: open-source code with a large user base is the most likely to be fairly well tested in the field. Small-distribution code, not so much.
An analogy may help explain the issue I see. If you’re Apple, and expect to sell a bazillion iPhones, then you invest in very careful component specs, QA testing parameters, automated assembly lines (or Foxconn), etc. You put a lot of money and time into precision about absolutely every assembly line step. If you’re Tesla and expect to sell a lot of solar shingles or batteries, you do likewise (let’s not discuss Tesla car manufacturing issues). If you’re Joe’s Woodworking, not so much. The ROI isn’t there.
Ok, so that’s the big vs. small side of coding or process documentation. Amazon or LinkedIn can (indeed, must) automate, and do so by coding. They get a good ROI (one trusts) by doing that.
If you’re configuring four routers, there’s not much ROI to coding. Using existing tools to automate, sure, that could be a good idea. Adding small tweaks to existing tools, maybe. Something along the lines of CSV file substitution for variables in templates and then pushing configs ought to be fairly light-weight and worthwhile.
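That CSV-plus-template approach can be sketched in a few lines of standard-library Python. The template, column names, and device data below are hypothetical, purely for illustration; real tooling would typically use something like Jinja2 instead of `string.Template`.

```python
import csv
import io
from string import Template

# Hypothetical per-device template; $hostname and $mgmt_ip are placeholders
# filled in from one CSV row per device.
CONFIG_TEMPLATE = Template(
    "hostname $hostname\n"
    "interface Loopback0\n"
    " ip address $mgmt_ip 255.255.255.255\n"
)

def render_configs(csv_text):
    """Render one config per CSV row; the CSV columns become template variables."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return {row["hostname"]: CONFIG_TEMPLATE.substitute(row) for row in reader}

# Made-up inventory: one row per device, one column per variable.
devices_csv = "hostname,mgmt_ip\nrtr1,10.0.0.1\nrtr2,10.0.0.2\n"
configs = render_configs(devices_csv)
print(configs["rtr1"])
```

Pushing the rendered configs is the other half of the job, but the substitution step really is this light-weight, which is what makes it worthwhile even at small scale.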
Tentative conclusion: you need to right-size your coding projects to your environment and to the potential ROI. You can perhaps upsize them a bit with “skills building” as a justification. That’s if you have the time and management to support that.
Coding / automation perhaps gets you to agile, where you can do more of what you’re doing, quickly. More reliably, maybe. Maybe not. You could introduce a typo in your CSV editing, for example. There’s less to edit than with full configs, however (just IP addresses and the things that change between devices), so maybe it’s harder to introduce errors, or easier to spot and fix them.
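One way to catch that CSV typo before it becomes a pushed config is a small validation pass over the file. A sketch, again with hypothetical column names, using Python’s standard `ipaddress` module:

```python
import csv
import io
import ipaddress

def validate_rows(csv_text):
    """Return (line_number, error) tuples for bad rows; empty list means clean."""
    errors = []
    reader = csv.DictReader(io.StringIO(csv_text))
    for lineno, row in enumerate(reader, start=2):  # line 1 is the header
        try:
            ipaddress.ip_address(row["mgmt_ip"])    # rejects typos like 10.0.0.999
        except ValueError:
            errors.append((lineno, f"bad IP: {row['mgmt_ip']!r}"))
        if not row["hostname"].strip():
            errors.append((lineno, "empty hostname"))
    return errors

good = "hostname,mgmt_ip\nrtr1,10.0.0.1\n"
bad = "hostname,mgmt_ip\nrtr1,10.0.0.999\n"
print(validate_rows(good))  # []
print(validate_rows(bad))
```

A check like this is cheap to write, which tilts the error trade-off further in the CSV approach’s favor.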
Going back to our analogy, however, it seems likely there is a trade-off in mechanical automation between rate of change and degree of automation. Production of Kleenex tissue boxes, soap, or other household goods is probably heavily automated, with large expenses if you want to change certain things (size of box, whatever). Other things (labeling) might be easier to change, as long as you stick to the same physical specs.
Apple may be using Foxconn and people, because their huge production runs likely change a bit every time another phone model comes out. It’s probably easier / cheaper / faster to update paper procedures and train people than to change machinery physical specifications in assembly lines (says someone who has no assembly line experience).
In networking, don’t we have a lot of change, and fast change cycles? I’m thinking that if you coded for, say, FabricPath 5-7 years ago, wouldn’t you have a lot of re-tooling to do to switch to VXLAN / EVPN? Admittedly, there would be less change if you created configuration templates and an automated deployment process, with coding limited to tying your tool suite together.
The main point I’ve got here seems to keep coming back to ROI (time and / or money), not from a mercenary or monetary sense, but in terms of wise investment of time.
ROI depends on scale: how much repetition there is, and how much labor is saved on each repetition. Granted, reliable change may also contribute something to the value of automation. ROI further depends on how long you can keep using your automation (or code): how fast things change, how dramatically they change, how long you have to amortize your investment, and how much will be re-usable in the new, changed environment.
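The back-of-the-envelope version of that calculation is short enough to write out. Every number below is a made-up assumption, not a measurement:

```python
# Hypothetical inputs; plug in your own estimates.
build_hours = 40             # time to write and test the automation
manual_minutes_per_run = 30  # hand-editing configs each time
auto_minutes_per_run = 5     # running the tool each time
runs_per_year = 50           # how often the task repeats

saved_per_run_hours = (manual_minutes_per_run - auto_minutes_per_run) / 60
breakeven_runs = build_hours / saved_per_run_hours
annual_saving_hours = runs_per_year * saved_per_run_hours

print(f"break-even after {breakeven_runs:.0f} runs")
print(f"saves {annual_saving_hours:.1f} hours/year after that")
```

With these numbers, the tool doesn’t break even until roughly two years of use, which is exactly where the re-tooling risk (FabricPath to VXLAN / EVPN) bites: if the environment changes before the break-even point, the investment is lost.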
As someone who has scripted a bunch of things over the years, I’ve learned to ask myself, “what’s the core must-have functionality, and then the steps to add functionality to that?” Also, “what parts of this can I use other tools to accomplish?” Having a sharp focus gets to results (or value) more quickly, which helps the ROI situation.
Canned tools for automation have a test and learning curve, and usually cost as well. That’s another place where ROI considerations are likely relevant. Generally, the ROI is better with a canned tool. If a product is hard to learn, or to get to work properly, then maybe it was not the right tool to buy. Unfortunately, that may apply to a lot of the tools in the network management space. Your opinion may be less cynical.
Comments are welcome, both in agreement or constructive disagreement about the above. I enjoy hearing from readers and carrying on deeper discussion via comments. Thanks in advance!
Hashtags: #CiscoChampion #NetworkAutomation #ROI #Coding
Did you know that NetCraftsmen does network /datacenter / security / collaboration design / design review? Or that we have deep UC&C experts on staff, including @ucguerilla? For more information, contact us at email@example.com.