VMware NSX-v Identity Firewall Configuration Overview and How-To Create Active Directory User Access Rules for Applications (with or without NSX networking deployed)

One of the quickest and easiest returns on investment in the VMware product stack is VMware NSX Identity Firewall. Few other products can deliver a meaningful business outcome with just an hour of configuration and initial policy creation; I’d hesitate to say there’s any other solution out there that provides such an immediate return on investment. VMware NSX IDFW is an incredibly valuable and easy-to-use solution for VDI and data center “jumpboxes” alike. With that said, I’m going to demonstrate how you can deploy and configure VMware NSX Identity Firewall in under an hour and have an identity-based security solution that can be easily inserted into any existing vSphere environment, with or without deploying VMware NSX networking.

There are two methods IDFW uses for logon detection: Guest Introspection and/or the Active Directory event log scraper. Guest Introspection is deployed on the ESXi clusters where IDFW-protected virtual machines are running. When network events are generated by a user, a guest agent installed on the VM forwards the information through the Guest Introspection framework to the NSX Manager. The second option is the Active Directory event log scraper: configure it in NSX Manager to point at an instance of your Active Directory domain controller, and NSX Manager will then pull events from the AD security event log. You can use both in your environment, or one or the other. Note that if both the AD event log scraper and Guest Introspection are used, they operate independently: if one of them stops working, the other does not begin to work as a backup.

Before we get started, let’s talk reality. While security postures such as micro-segmentation and zero trust may indeed be your desired end state, they’re a much longer journey than macro-segmentation or application fencing. With that said, you can imagine how quickly you could create application-fencing security policies for the application servers or groups in your environment and start by simply controlling user access to them. Leveraging macro-segmentation in this initial phase accelerates your NSX Identity Firewall implementation and lays the groundwork for more granular micro-segmentation policies in a secondary phase.

My lab scenario will demonstrate how NSX-v Identity Firewall can quickly secure an HR, a Finance and a CRM application based on the user’s Active Directory group. The data center consists of three clusters: one for management and two for compute resources. RegionA01-COMP02 hosts the hr-web-01a, fin-web-01a and web-04a VMs, which serve the HR, Finance and CRM applications respectively. The web VMs are running on a stereotypical “server VLAN” in one subnet, as commonly seen in many enterprise environments. The jumpbox or VDI desktop, the win-12-jump VM, is on a “user VLAN” in another subnet.

NSX Identity Firewall configuration requires that NSX Manager be deployed and registered with vCenter. The NSX Manager appliance is deployed from OVA via vCenter and takes about 30 minutes to complete deployment and registration to vCenter. For details on installing NSX Manager, read Install the NSX Manager Virtual Appliance.

Requirements for VMware NSX Identity Firewall are:

  • NSX Manager 6.2 or greater (the latest release is recommended)
  • VMware vSphere Distributed Switch (VDS) or NSX N-VDS
  • FQDN for NSX Manager
  • NTP configured in NSX Manager to the same source as vCenter, vSphere hosts and Active Directory domain controllers
  • AD account to query the domain (this user must be able to access the entire directory tree structure)
  • AD read-only account with permissions to read Windows Event Log locally or via Group Policy (see MS KB)

Note: In NSX-v, NSX controllers and networking components are not required for Identity Firewall.

After NSX Manager has been deployed and registered to vCenter, we begin configuring IDFW event log scraping by setting the LDAP and CIFS properties for directory queries and event log reading. After setting the LDAP and CIFS properties, we validate that the directory has performed a sync and that the AD event servers have populated in the Event Server fields. Guest Introspection is also deployed so that both detection methods can be shown.
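If you prefer to script the Active Directory domain registration rather than click through the UI, the same step can be driven through the NSX Manager REST API. The sketch below is a minimal example using Python and the requests library; the endpoint path and XML element names follow the NSX-v directory-service workflow but should be verified against the NSX for vSphere API Guide for your release, and the NSX Manager FQDN, credentials and domain values are placeholders.

```python
# Minimal sketch (assumptions noted): register an AD domain with NSX Manager for IDFW.
# Verify the endpoint path and XML element names against the NSX for vSphere API Guide
# for your release; all host names, credentials and domain values are placeholders.
import requests

NSX_MANAGER = "https://nsxmgr-01a.corp.local"  # placeholder NSX Manager FQDN
AUTH = ("admin", "changeme")                   # placeholder credentials

domain_xml = """
<DirectoryDomain>
  <name>corp.local</name>
  <netbiosName>CORP</netbiosName>
  <baseDn>dc=corp,dc=local</baseDn>
  <username>svc-nsx-ldap@corp.local</username>
  <password>ldap-account-password</password>
</DirectoryDomain>
"""

resp = requests.post(
    f"{NSX_MANAGER}/api/1.0/directory/updateDomain",
    data=domain_xml,
    headers={"Content-Type": "application/xml"},
    auth=AUTH,
    verify=False,  # lab only; use trusted certificates in production
)
resp.raise_for_status()
print("Domain registered:", resp.status_code)
```

The read-only event log account from the requirements list is pointed at your domain controller in the same area of the configuration, and the demonstration video below walks through the equivalent steps in the UI.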

This demonstration video that I created will guide you step-by-step through the process of configuring VMware NSX Manager to enable Identity Firewall:

Some simple best-practices for leveraging Microsoft Active Directory user groups are:

  • Create new user groups in a top-level OU when possible, and then nest existing groups which may be deeper in the forest.
  • Limit the nesting of Active Directory user groups to three (3) levels deep for best performance.
  • When working with a large enterprise forest, configure the LDAP and CIFS properties for directory queries against the smaller child domain that contains the user groups, instead of the top-level forest domain.

Once you’ve finished with the installation and configuration of VMware NSX-v Identity Firewall, it’s time to map Active Directory user groups to NSX security groups and create security objects for enforcement. There are static and dynamic objects that can be leveraged. Dynamic objects yield a simpler security policy, reducing the overall number of rules and security objects needed, so dynamic object types should be used whenever possible. Static NSX object types are IP Sets, vNICs and Virtual Machine names, whereas dynamic security object types contain a set of resources grouped by another construct, such as a vCenter Cluster or Datacenter, Distributed Port Group, Legacy Port Group, Logical Switch, Resource Pool or vApp.
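The AD-group-to-security-group mapping can also be scripted against the NSX Manager API. The sketch below creates a security group whose member is an Active Directory group; the endpoint, the XML layout and the directorygroup objectId are illustrative assumptions to be confirmed against your NSX-v release (you can find the objectId of an AD group through the UI or the directory API once the domain has synced).

```python
# Minimal sketch (assumptions noted): create an NSX-v security group whose member is an
# AD user group. Confirm the endpoint and XML layout against the NSX for vSphere API
# Guide, and replace the directorygroup objectId with one from your own synced domain.
import requests

NSX_MANAGER = "https://nsxmgr-01a.corp.local"  # placeholder NSX Manager FQDN
AUTH = ("admin", "changeme")                   # placeholder credentials

securitygroup_xml = """
<securitygroup>
  <name>SG-HR-Admins</name>
  <description>Users allowed to reach the HR application</description>
  <member>
    <objectId>directorygroup-10</objectId> <!-- placeholder AD group objectId -->
  </member>
</securitygroup>
"""

resp = requests.post(
    f"{NSX_MANAGER}/api/2.0/services/securitygroup/bulk/globalroot-0",
    data=securitygroup_xml,
    headers={"Content-Type": "application/xml"},
    auth=AUTH,
    verify=False,  # lab only; use trusted certificates in production
)
resp.raise_for_status()
print("Created security group with objectId:", resp.text)
```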

The Object Types for an NSX-v firewall rule are: Security Group, IP Sets, Cluster, Datacenter, Distributed Port Group, Legacy Port Group, Logical Switch, Resource Pool, Virtual Machine, vApp and vNIC. With such a wide array of selection criteria, there are many object types that can be leveraged to create a strategic advantage in your security policy.

 

Now that we’ve configured NSX Identity Firewall and mapped Active Directory user groups to NSX security groups, we’ll create some Active Directory-based rules to test access to our applications.

If you want to test blocking without changing the default Layer 3 rule, simply create a blocking rule in a user-defined firewall rule section above the rule you want to test. We’ll use this in the live demo with a firewall rule called VDI to APP that blocks the VDI desktop network, defined by an IP Set, from reaching the Internal Services security group that contains our protected web app servers. See the image below.

Now, let’s log in as the NetAdmin and HRAdmin users and test whether each has the appropriate access.
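A quick way to verify enforcement from the jumpbox during each test is a simple TCP reachability check against the application web servers. This is a minimal sketch; the host names and port are placeholders matching the lab VM names and should be adjusted for your environment.

```python
# Minimal sketch: TCP reachability check from the jumpbox toward each application
# web server. Host names and port are placeholders for the lab environment; with an
# IDFW deny rule in effect, the blocked user's session should show the failure path.
import socket

APP_SERVERS = ["hr-web-01a", "fin-web-01a", "web-04a"]  # placeholder host names
PORT = 443

for host in APP_SERVERS:
    try:
        with socket.create_connection((host, PORT), timeout=3):
            print(f"{host}:{PORT} reachable")
    except OSError as exc:
        print(f"{host}:{PORT} blocked or unreachable ({exc})")
```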

Hope you enjoyed this post. Feel free to hit me up on Twitter or LinkedIn, and subscribe to my YouTube channel with any requests for content that you’d like to see. Until next time, as the sun sets slowly in the west, I bid you a fond farewell. Adios amigos!

NSX Design Sessions – Lessons Learned – Two Separate Design Groups – Functional before Physical

This may not be for everyone, but if you’re facilitating NSX design sessions and you haven’t solved these problems by other means, it’s been very evident to me that holding separate design sessions can be more effective than a “let’s get everyone together” strategy.

I do a fairly large number of NSX designs a year for a pretty wide client base, and regardless of the industry, no two organizations are alike. The vast majority of clients have similar infrastructure but often very different functional and operational requirements, in both current and future design goals.

After experiencing this for a number of years, and at times allowing the client to decide who the stakeholders and contributors would be, I came to the realization that there really need to be two key design stakeholders and two separate design groups. Both design sessions need to be held in order to facilitate the actual business outcomes and to ensure that design features and functional requirements are met.

The infrastructure teams of compute, networking and storage need physical design meetings for connectivity, capacity and resiliency planning, while the application stakeholders need functional use-case design sessions to determine NSX feature use, options and functional requirements.

Performing functional before physical design is probably the single most important factor in determining the success of your client design and how long the design process will take.

It’s extremely important to have the client spell out any current and future functional use-cases. Each use-case should be documented, with dependencies and requirements, including the desired outcomes. Going through each scenario in detail, dependency by dependency, will likely surface requirements that were unknown or undiscovered.

As to who should be in each group, the answer is simple, but it runs against many organizations’ natural instinct to include everyone in both. Requirements gathering needs to be done without interjection and competition between groups. The vast majority of enterprises have histories of clashes and competing agendas between infrastructure (compute, network, storage) and application owners, developers, and even business units in larger organizations. To avoid these unproductive scenarios, we remove the infrastructure “power players” from the Functional Design Group, and the inverse from the Physical Design Group.

The Functional Design Group should include developer system architects, at least one hands-on technical developer, the application owner or owners, a security architect and security owner, and any business sponsors who are responsible for the intended business outcomes. To support the Functional Design Group, infrastructure should provide a technical engineer from compute, networking and storage, with senior-level knowledge of current operations and architecture for their respective domains, to answer technical questions on current capabilities and operations only. The business sponsors are the key stakeholders for the Functional Design Group.

The Physical Design Group attendees are the infrastructure architects; a technical engineer from compute, networking and storage, with senior-level knowledge of current operations and architecture for their respective domains; and the stakeholders for overall infrastructure. The key stakeholders for infrastructure are preferably not the managers from compute, network and storage, but rather an overarching director, technical directors or even a vice president of infrastructure, if that role exists. In support of the infrastructure team, the application team should provide a senior DevOps member, a security architect and a technical representative from the application team, who are there to provide answers on current and future design operations only. Again, we don’t want application “power players” in this group, to avoid any historically combative scenarios.

Stakeholders need only attend the first and last of their respective design sessions and workshops to ensure that they’re getting what they asked for in the business outcomes they owe back to the business. Exposing stakeholders to “how the sausage is made” or any design challenges can affect their confidence in the overall initiative and be counterproductive.

Again, these are recommendations only, and different engagements and clients will at times call for adjustments to group membership. The idea is to keep the infrastructure teams from telling the application teams “you can’t do that,” and vice versa.

Lesson 1: Keep each design group’s membership weighted so they can express their desires and concerns without conflict. The majority of infrastructure and app teams enjoy defeating each other too much and already have a taste for blood. Be the peacekeeper, the broker, the communicator for the groups, and ensure any supporting staff from other groups understand that their role is only to provide answers, without adding push-back.

Lesson 2: Don’t try to hold a physical design session first. Even though most infrastructure teams are anxious to work through their connectivity, capacity and redundancy requirements and options, explain the need for an NSX functional use-case design first and how those NSX functional use-cases will drive the physical design requirements. If you start with the physical design, you will very likely end up redesigning (and rebuilding) it again after the functional use-cases are determined.

It’s not easy to tell infrastructure teams that they can’t start with connectivity and physical design, but everyone will be better off in the end, once you know what the functional use-case requirements are.

Lastly, always listen to exactly what the client is telling you and don’t make assumptions; ask questions. I had this beaten into my head by Paul Mancuso at VMware a few years back in a mock NSX design defense, and I’ve never forgotten it since. There is probably no more sage advice in regards to requirements gathering than that one point alone.

VMware NSX-T and Kubernetes Overview

As a Networking and Security Technical Account Specialist for VMware, I get a lot of questions regarding NSX and container integration with Kubernetes. Many network and security professionals are not aware of the underlying Kubernetes container architecture, services and communication paths, so before I get into how NSX works with containers, let’s examine why container development simplifies inter-process and application programming and how containers communicate via Kubernetes services.

The development of containers is driven greatly by the programmatic advantages they have versus server-based application development. A Kubernetes Pod is a group of one or more containers. Containers in a pod share an IP address and port space, and can communicate over localhost within the pod. Containers can also communicate with each other using standard inter-process communication mechanisms like System V semaphores or POSIX shared memory. These capabilities provide developers with much tighter and quicker development methods, and a large amount of abstraction from what they have to contend with in server-based application development.

Containers in a pod also share data volumes. Colocation (co-scheduling), shared fate, coordinated replication, resource sharing and dependency management are managed automatically in a pod.
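To make the shared network namespace concrete, here is a minimal sketch using the official Kubernetes Python client that creates a two-container pod; the image names, labels and namespace are placeholders. Because both containers share the pod’s IP and port space, the sidecar can reach the web container simply at localhost:80.

```python
# Minimal sketch: a two-container pod. Both containers share one network namespace,
# so the "sidecar" container reaches nginx on localhost:80 inside the same pod.
# Image names, labels and the namespace are placeholders.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() when running in-cluster

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="web-with-sidecar", labels={"app": "web"}),
    spec=client.V1PodSpec(
        containers=[
            client.V1Container(
                name="web",
                image="nginx:1.25",
                ports=[client.V1ContainerPort(container_port=80)],
            ),
            client.V1Container(
                name="sidecar",
                image="busybox:1.36",
                # Poll the web container over localhost: same pod, same IP and port space.
                command=["sh", "-c",
                         "while true; do wget -qO- http://localhost:80 >/dev/null; sleep 5; done"],
            ),
        ]
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```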

One of the key internal Kubernetes services to understand for NSX-T integration is the kube-proxy service. The kube-proxy service watches the Kubernetes master for the addition and removal of Service and Endpoint objects. For each Service, it creates iptables rules that capture traffic to the Service and redirect it to one of the Service’s back-end sets; for each Endpoint object, it creates iptables rules which select a back-end Pod.
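kube-proxy itself is a Go daemon that programs iptables (or IPVS) directly, but the watch pattern it relies on is easy to see with the Kubernetes Python client. The sketch below only prints Endpoints add/modify/delete events rather than programming any rules, and assumes a kubeconfig with cluster access.

```python
# Minimal sketch of the watch pattern kube-proxy relies on: observe Endpoints objects
# being added, modified and deleted cluster-wide. A real kube-proxy reacts to these
# events by (re)programming iptables/IPVS rules; here we only print them.
from kubernetes import client, config, watch

config.load_kube_config()
v1 = client.CoreV1Api()

w = watch.Watch()
for event in w.stream(v1.list_endpoints_for_all_namespaces, timeout_seconds=60):
    endpoints = event["object"]
    print(f"{event['type']}: {endpoints.metadata.namespace}/{endpoints.metadata.name}")
```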

With NSX-T and the NSX Container Plugin (NCP), we leverage the NSX kube-proxy, a daemon running on each of the Kubernetes nodes (which most refer to as “minions” or “workers”). It replaces the native distributed east-west load balancer in Kubernetes (kube-proxy and iptables) with Open vSwitch (OVS) load-balancing services.

Now that we’ve covered east-west communications in Kubernetes, I’ll address ingress and egress to Kubernetes clusters.

The Kubernetes Ingress is an API object that manages external access to the services in a cluster. By default, and in typical scenarios, Kubernetes services and pods have IPs that are only routable on the cluster network. Traffic that arrives at an edge router is dropped or forwarded elsewhere. An Ingress is a collection of rules that allow inbound connections to reach the cluster services.

Kubernetes Ingress can be configured to give services externally reachable URLs, load-balance traffic, terminate SSL and offer name-based hosting. The most common open-source implementations used in non-NSX environments are Nginx and HAProxy, which many of you may be familiar with from supporting server-based application operations. There’s also an external load balancer object, not to be confused with the Ingress object.
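For reference, here is what a minimal Ingress looks like when created with the Kubernetes Python client. The host, service name, port and namespace are placeholders, the body is the same structure you would put in a YAML manifest, and the example uses the current networking.k8s.io/v1 schema.

```python
# Minimal sketch: an Ingress that exposes one service by host name.
# Host, service name, port and namespace are placeholders; the body mirrors
# the YAML manifest you would otherwise apply with kubectl.
from kubernetes import client, config

config.load_kube_config()

ingress = {
    "apiVersion": "networking.k8s.io/v1",
    "kind": "Ingress",
    "metadata": {"name": "hr-web"},
    "spec": {
        "rules": [
            {
                "host": "hr.corp.local",
                "http": {
                    "paths": [
                        {
                            "path": "/",
                            "pathType": "Prefix",
                            "backend": {
                                "service": {"name": "hr-web-svc", "port": {"number": 80}}
                            },
                        }
                    ]
                },
            }
        ]
    },
}

client.NetworkingV1Api().create_namespaced_ingress(namespace="default", body=ingress)
```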

When creating a Kubernetes service, you have the option to automatically create a cloud network load balancer. It provides an externally accessible IP address that forwards traffic to the correct port on the assigned minion / cluster node.
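A minimal sketch of such a Service of type LoadBalancer, again using the Python client with placeholder names, labels and ports; the platform (a cloud provider, or NSX-T via NCP) is what actually allocates the external address.

```python
# Minimal sketch: a Service of type LoadBalancer. The platform (cloud provider or
# NSX-T/NCP) allocates an externally reachable IP that forwards to the selected pods.
# Name, labels, ports and namespace are placeholders.
from kubernetes import client, config

config.load_kube_config()

service = {
    "apiVersion": "v1",
    "kind": "Service",
    "metadata": {"name": "hr-web-svc"},
    "spec": {
        "type": "LoadBalancer",
        "selector": {"app": "hr-web"},
        "ports": [{"port": 80, "targetPort": 8080, "protocol": "TCP"}],
    },
}

client.CoreV1Api().create_namespaced_service(namespace="default", body=service)
```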

Now, let’s add NSX to the picture… When we install NSX-T in a Kubernetes environment, we replace the Kubernetes Ingress object with the NSX-T native layer-7 load balancer that performs these functions.

Now that we’ve reviewed how traffic gets into a Kubernetes cluster, let’s take a look at how network security is handled.

A Kubernetes Network Policy is a security construct that specifies how groups of pods are allowed to communicate among themselves and with other network endpoints. Kubernetes Network Policies are implemented by the network plugin, so you must use a networking solution which supports NetworkPolicy; simply creating the resource without a controller to implement it will have no effect. By default, Kubernetes pods are non-isolated and accept any traffic from any source. Pods become isolated by having a Kubernetes Network Policy applied to them, after which traffic is allowed or rejected according to the policy or policies that select them.
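Here is a minimal sketch of a Network Policy created with the Python client; the labels, port and namespace are placeholders. It isolates the pods labeled app=hr-web and then allows ingress only from pods labeled app=hr-frontend on TCP/8080, and it takes effect only when a NetworkPolicy-capable plugin (such as NSX-T via NCP) is in place.

```python
# Minimal sketch: isolate the "hr-web" pods, then allow ingress only from pods labeled
# app=hr-frontend on TCP/8080. Labels, port and namespace are placeholders; a network
# plugin that enforces NetworkPolicy must be present for this to have any effect.
from kubernetes import client, config

config.load_kube_config()

policy = {
    "apiVersion": "networking.k8s.io/v1",
    "kind": "NetworkPolicy",
    "metadata": {"name": "allow-frontend-to-hr-web"},
    "spec": {
        "podSelector": {"matchLabels": {"app": "hr-web"}},
        "policyTypes": ["Ingress"],
        "ingress": [
            {
                "from": [{"podSelector": {"matchLabels": {"app": "hr-frontend"}}}],
                "ports": [{"protocol": "TCP", "port": 8080}],
            }
        ],
    },
}

client.NetworkingV1Api().create_namespaced_network_policy(namespace="default", body=policy)
```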

And finally, let’s review NSX-T networking components for Kubernetes. As you can see in the graphic below, NSX-T components are deployed to support both the Kubernetes Master management network and the Kubernetes Minion nodes. This diagram depicts the use of a non-routable, “black-holed” network on a logical switch that is not connected to any logical router.

The Kubernetes Master management network and logical switch are uplinked to the NSX-T tier-0 router in the center of the diagram. The tier-0 router also provides NAT for the Kubernetes cluster. eBGP, the only dynamic routing protocol supported by NSX-T at this time, will be configured to peer with the top-of-rack switches or even back to the core.

NSX tier-1 routers are instantiated in each Kubernetes cluster node, and one is deployed for the ingress services and load balancing we discussed previously.

For those who are unfamiliar with the difference between NSX-v Edges and NSX-T Edges, see the “High-Level View of NSX-T Edge Within a Transport Zone” at docs.vmware.com. If you’re an NSX engineer or work with NSX as part of a team, I would highly recommend the VMware NSX-T: Install, Configure, Manage [V2.2] course. NSX-T is a major architectural change from NSX-v, and there are too many changes in each component to even begin to list here. With that being said, in a simplistic view, NSX-T tier-0 routers serve as provider routers and NSX-T tier-1 routers serve as tenant routers. Each has different capabilities, so be sure to read up on the features and on the Service Router (SR) and Distributed Router (DR) components for a better understanding if needed.

Wrapping it up, Kubernetes with VMware NSX-T provides a much richer set of networking and security capabilities than native Kubernetes. It simplifies operations and provides automation and orchestration via a CMP or the REST API, for K8s DevOps engineers and container developers alike. Add to that the fact that NSX-T is hybrid-cloud and multi-cloud capable, with greatly simplified networking and security, and Kubernetes users should be very excited once they see what happens when they #runNSX.