Are Firewalls Obsolete?
For a while now, I’ve been mulling over security architectures. I’ve started wondering if security really belongs on or near endpoints (users, servers, VMs). And are firewalls obsolete?
We need to make a distinction here. I rarely see edge firewalls as a problem. Internet speeds are just generally not that challenging for firewalls. Currently, anyway, although higher speeds (e.g. N x 10 Gbps) get costly.
This blog is more about datacenter core firewalls, and firewalling user segments — places where firewall throughput and cost are more likely to be problems. But it also relates to how we firewall an increasingly cloudy world going forward.
This blog is intended to ask questions, stimulate thought and discussion, not necessarily provide complete answers.
The original problem was roughly this: application developers don’t reliably document which ports need to be open or otherwise facilitate hardening servers, and server admins can’t be counted on to do the hardening either, whether from indifference or from fear of breaking the application. Putting a firewall in front of the servers let network admins limit which source IPs and destination ports could be reached. At least in principle.
Then firewalls “grew up” and became Next Generation: IPS / IDS, Application Layer Gateways (ALGs), and other functions came along, first as dedicated boxes, more recently folded into Next-Gen Firewalls.
As I’ve noted elsewhere, if you want access lists, switches do that cheaply and at wire speed.
Presumably firewalls are also there for stateful TCP enforcement, perhaps some protocol-correctness enforcement, and the IPS / IDS / ALG roles. They may also collect flow data, although some switches can do that as well.
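To make the distinction concrete, here is a toy Python sketch (my own illustration, not any vendor’s implementation) contrasting a stateless ACL lookup — the kind a switch can do at wire speed — with minimal stateful connection tracking. The rule format and addresses are hypothetical.

```python
# Hypothetical ACL: (source prefix, destination port, action).
ACL = [
    ("10.1.0.", 443, "permit"),
    ("10.1.0.", 22, "permit"),
]

def acl_check(src_ip, dst_port):
    """Stateless: every packet is judged on its own. Cheap, wire-speed-friendly."""
    for prefix, port, action in ACL:
        if src_ip.startswith(prefix) and dst_port == port:
            return action == "permit"
    return False  # implicit deny

class StatefulFirewall:
    """Stateful: only packets belonging to a tracked flow are allowed through."""
    def __init__(self):
        self.flows = set()

    def packet(self, src, sport, dst, dport, syn=False):
        fwd = (src, sport, dst, dport)
        rev = (dst, dport, src, sport)
        if syn and acl_check(src, dport):
            self.flows.add(fwd)   # new connection permitted by policy
            return True
        # non-SYN packets pass only if they match an established flow
        return fwd in self.flows or rev in self.flows
```

The stateful version permits the return traffic of a connection it saw open, but drops an unsolicited inbound packet that a pure ACL might have matched — which is the extra value (and extra cost) a firewall adds over a switch ACL.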
I also see sites doing things like trying to run all user-to-user traffic through firewalls (for segmentation and to limit malware spread). Or all WAN or VPN traffic.
When doing design / architecture, it helps to consider pros and cons.
For firewalls, the pros and cons that come to mind are:
What are the alternatives? Well, that depends on what problem you’re trying to solve.
The big driver that comes to mind is price. Performance comes in when you’re firewalling in the middle of a datacenter using 40 Gbps links. What about 100 or 400 Gbps?
Really, the only option is trading scale-up for scale-out. That may improve performance but may not improve price much (based on having priced out virtual firewalls on VMware not that long ago).
Scale-out could mean an IDS that feeds data to a bank of compute engines. There may be other clever approaches. Clustering physical firewalls is more of an intelligent brute force scale-out.
That leaves us with some other approaches, or types of approach:
Some general comments:
Here are some pros and cons thoughts (hard when there are so many choices):
Endpoint security might solve potential asymmetric path issues with Data Center Interconnect (DCI), LISP and stateful devices, especially firewalls. If the only stateful firewall is at the endpoint, then the path you take doesn’t matter, and asymmetry is not a problem. This might help with the WAN. Of course, how many sites are likely to connect to the Internet without a firewall? Stateful load balancers, especially those doing NAT — probably still a problem.
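The asymmetry problem can be shown with a toy model (my illustration, with made-up addresses): two stateful firewalls at different sites that do not share state. The outbound connection exits via firewall A, but the return path comes back via firewall B, which has never seen the flow.

```python
class StatefulFW:
    """Minimal stand-in for a stateful firewall with a local flow table."""
    def __init__(self, name):
        self.name = name
        self.flows = set()

    def outbound(self, flow):
        self.flows.add(flow)   # record a connection initiated from inside

    def inbound(self, flow):
        # permit only replies to connections this particular box has seen
        return flow in self.flows

flow = ("10.0.0.5", 40000, "203.0.113.7", 443)

fw_a, fw_b = StatefulFW("A"), StatefulFW("B")
fw_a.outbound(flow)          # SYN left the network through FW-A

reply_via_a = fw_a.inbound(flow)   # symmetric return path: permitted
reply_via_b = fw_b.inbound(flow)   # asymmetric return path: dropped
```

If the only stateful device is on the endpoint itself, the reply always lands on the same state table regardless of which path it took — which is exactly why endpoint enforcement sidesteps the DCI / LISP asymmetry issue.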
As noted above, making sure all your servers or user endpoints have the agent on them could be a concern. While it should be fairly easy for a product to report the endpoints that do have the agent, spotting agent-less endpoints appears to be a harder problem. Sounds like a job for Cisco ISE!
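The core of that harder problem is that the agent console can only list hosts that report in; finding hosts *without* the agent requires an independent inventory source (DHCP leases, NAC/ISE, or a network scan). A minimal sketch of that reconciliation, with illustrative names and addresses:

```python
def find_agentless(network_inventory, agent_reported):
    """Endpoints seen on the network but never heard from by the agent console."""
    return sorted(set(network_inventory) - set(agent_reported))

# Hypothetical data: what the network saw vs. what checked in.
seen_on_network = ["10.1.1.10", "10.1.1.11", "10.1.1.12", "10.1.1.13"]
agent_checkins  = ["10.1.1.10", "10.1.1.12"]

gaps = find_agentless(seen_on_network, agent_checkins)
# gaps are the candidates lacking the agent
```

The set difference itself is trivial; the operational work is getting a trustworthy `network_inventory` in the first place, which is where something like ISE earns its keep.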
This also brings us back to the start of the article: covering for devices lacking security. Some might argue for a firewall as a form of “belt and suspenders” insurance.
In my reading about containers, I’ve come across the “sidecar model”. The short version is that instead of agents, the automation tool / service mesh framework puts a “sidecar” inline for traffic going to your containers. Think “agent in a separate container” if you want. Or “mini-firewall” in a container?
The attraction of that approach is that the service automation tool can ensure uniform or controlled deployment of sidecars, i.e. no gaps in coverage. And less chance of breaking the code in the container, since the sidecar only filters traffic — no kernel or other hooks in the application container.
Yes, there is cost to doing sidecars — more compute, possibly some latency. If there’s a chain of sidecars (Is that a train?), figuring out which one ate your packets might be interesting, not fun.
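The sidecar idea can be sketched in a few lines of Python (a toy model, not a real service mesh): policy enforcement lives in front of the app rather than inside it, so the app needs no agent or kernel hooks. The policy and handler names here are made up for illustration.

```python
ALLOWED_PORTS = {8080}   # hypothetical policy pushed down by the mesh/automation tool

def app_handler(request):
    """The application container: knows nothing about filtering."""
    return f"hello {request['client']}"

def sidecar(handler, policy_ports):
    """Wrap a handler with traffic filtering -- the 'mini-firewall' in front."""
    def filtered(request):
        if request["dst_port"] not in policy_ports:
            return None   # the sidecar ate this packet
        return handler(request)
    return filtered

service = sidecar(app_handler, ALLOWED_PORTS)
ok = service({"client": "web", "dst_port": 8080})      # passes policy
blocked = service({"client": "web", "dst_port": 22})   # filtered out
```

Note that the app code is untouched; the automation layer decides what gets wrapped and with which policy, which is what makes uniform coverage plausible.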
Google shifted to device- and identity-based (context-based) security, called BeyondCorp. That’s another approach, controlling who can get to what. It is available in GCP as context-aware access via the Identity-Aware Proxy.
Looking ahead (or at the present), there’s also the elephant in the room: IoT. Right now, segmentation and IoT gateways are one approach. IoT gateways exist perhaps for a reason similar to the one we started with for firewalls: managers have no idea what the vulnerabilities might be, so they want to intercept everything to control what goes to or from the IoT devices.
Putting agents on IoT devices isn’t likely to happen. Sidecars only work for containers, although the approach might be viable for Cloud providers to supply for VM instances.
Random thought: What happens (in 20 years?) when APIs drive all computer-to-computer traffic to be HTTPS-based? Including DNS?
As I was writing this, I ran across an Apstra blog on micro-segmentation. It suggests separating intent (policy) from where it is enforced, with IoT as a use case. That’s a valid point: enforcement near the endpoint may not fit all situations. Or will IoT change over time, to where power and other budgets allow room for agents?
As we move from VM instances in the Cloud to containers and then serverless, or use different Cloud tools for different business reasons, what does security for serverless look like?
Thanks to NetCraftsmen’s Samuel Bickham for reviewing this. Any errors are mine!
Comments are welcome, both in agreement or constructive disagreement about the above. I enjoy hearing from readers and carrying on deeper discussion via comments. Thanks in advance!
Hashtags: #CiscoChampion #TechFieldDay #TheNetCraftsmenWay #Firewall #DataCenter #CyberSecurity
Did you know that NetCraftsmen does network / datacenter / security / collaboration design and design review? Or that we have deep UC&C experts on staff, including @ucguerilla? For more information, contact us at email@example.com.