VMware NSX-T 2.4.1 was announced recently, so I thought it would be helpful to do a live demo of an upgrade to 2.4.1. The good news: all but a few of the upgrades I’ve seen have been successful on the initial attempt, thanks to the Pre-Upgrade / Post-Upgrade Checks that are built into each section. The ones that weren’t successful on the first pass hit timeouts waiting for a manager node to respond after its upgrade (due to really old, under-performing hardware), and those were easily remedied by checking the error and retrying the Manager Node upgrade.
The 2.4.1 update is a core maintenance release, with a new enhancement for VMware HCX that adds support for virtual machine migration to on-premises NSX-T based deployments. I have a client doing this at the moment and can attest that it’s greatly welcomed functionality for companies acquiring other companies with overlapping networks. VMware HCX also provides the ability to migrate across disparate versions, so migrating workloads from an acquired company running many different ESXi versions is a real plus.
To begin the upgrade to NSX-T 2.4.1, we download the .mub upgrade file from my.vmware.com and log into NSX Manager. Once we’re logged into NSX Manager, we navigate to System on the top toolbar and then Upgrades on the left navigation pane. In the Upgrade window, choose the location of the NSX .mub upgrade bundle, either local disk or a URL. After we’ve chosen our upgrade bundle, we click Upgrade, and the upgrade bundle is validated by NSX Manager and then staged for the upgrade. Once the upload status changes to Upgrade Bundle retrieved successfully, we click Begin Upgrade and the VMware NSX Upgrade Coordinator starts.
There are five steps in the upgrade, separated by a clickable top toolbar: Bundle and Status, Hosts, Edges, Controller Nodes and Management Nodes. After accepting the End User Agreement, we run the Host Upgrade on our compute workload clusters that have NSX installed and then click Run Post Checks to ensure they’re operable. Edge upgrades are next, following the same process, and then Controller Nodes.
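If you'd like to keep an eye on progress outside the UI, the Upgrade Coordinator also exposes status through the NSX REST API. Below is a minimal Python sketch, assuming basic auth, a hypothetical manager FQDN and the /api/v1/upgrade/status-summary endpoint as I remember it from the 2.4 API; verify the path and field names against the NSX-T API guide for your release before using it:

```python
# Minimal sketch: poll the Upgrade Coordinator for overall progress.
# The endpoint and field names are assumptions from my notes on the 2.4 API;
# confirm both against the NSX-T API guide for your release.
import time
import requests

NSX_MGR = "https://nsxmgr-01.lab.local"      # hypothetical manager FQDN
AUTH = ("admin", "VMware1!VMware1!")         # lab credentials only

while True:
    resp = requests.get(f"{NSX_MGR}/api/v1/upgrade/status-summary",
                        auth=AUTH, verify=False)   # lab only; use CA-signed certs in production
    resp.raise_for_status()
    summary = resp.json()
    print(summary.get("overall_upgrade_status"), summary.get("component_status"))
    if summary.get("overall_upgrade_status") in ("SUCCESS", "FAILED"):
        break
    time.sleep(60)
```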
The final step in the NSX-T 2.4.1 upgrade process is Management Nodes, which includes an option to return the NSX management cluster to service after either a single node or the 3-node cluster is formed. As a bit of guidance, it’s always a good idea to wait for the 3-node cluster to be operational before returning to service. However, if you’ve got a short outage window or allowance, you can return to service with a single NSX Manager node, but be advised that resource usage will increase significantly while the other NSX Manager nodes rejoin the cluster and synchronize.
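If you want to confirm cluster health from the API before returning the managers to service, a quick look at /api/v1/cluster/status does the trick. A minimal Python sketch, assuming basic auth and a hypothetical manager FQDN; the exact response fields are worth double-checking in the API guide:

```python
# Minimal sketch: confirm the management cluster is stable before returning it to service.
import requests

NSX_MGR = "https://nsxmgr-01.lab.local"      # hypothetical manager FQDN
AUTH = ("admin", "VMware1!VMware1!")         # lab credentials only

resp = requests.get(f"{NSX_MGR}/api/v1/cluster/status", auth=AUTH, verify=False)  # lab only
resp.raise_for_status()
status = resp.json()
# Expecting the management and control cluster status to report STABLE
# once all three nodes have rejoined and synchronized (field names assumed).
print(status.get("mgmt_cluster_status"))
print(status.get("control_cluster_status"))
```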
Check out my YouTube video of the VMware NSX-T 2.4.1 Upgrade for a preview of what to expect:
VMware NSX-T Identity Firewall provides even greater capabilities than its predecessor in NSX-v. NSX-T 2.4 is capable of supporting up to 16 vCenter Servers and 1,024 hosts, which gives clients the ability to build multi-vCenter Identity Firewall security policies, simplifying deployment and reducing overall administration.
In the first part of this three-in-one blog series, we’ll cover deploying the VMware NSX-T Unified Appliance and configuring a 3-node Management Cluster. The next part will cover adding a compute manager (adding a vCenter Server), creating a Transport Zone and Transport Nodes (configuring TEPs on ESXi hosts), creating Logical Switches / Ports and then re-configuring VM port groups from VDS to N-VDS backed Logical Switches. Then we’ll wrap up the three-in-one series with enabling Identity Firewall, configuring Active Directory integration, creating security groups with AD-based user mappings and firewall rules to demonstrate AD-based firewall rule security.
(*All steps have been recorded to my YouTube channel in a playlist at the end of this post – but please read the post for insight before watching the recordings)
Part 1 – Deploying NSX Manager and Configuration of the 3-node Management Cluster
Before starting the deployment of NSX-T 2.4, ensure that you have three (3) available hosts that have been patched to ESXi 6.5 U2 P03 / 6.7 U1 EP06 or greater. You should always check the interoperability of VMware products in the VMware Product Interoperability Matrices before performing upgrades or deploying new solutions. You can upgrade VMware vSphere hosts by attaching and remediating the default Critical Update baseline in VMware Update Manager, as shown in the video or in the VMware Update Manager documentation.
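If you'd rather verify the host builds without clicking through every host in the vSphere Client, a short pyVmomi check does the job. This is a minimal sketch, assuming a hypothetical vCenter FQDN and lab credentials:

```python
# Minimal sketch: list ESXi version and build for every host in vCenter.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()       # lab only; use proper certs in production
si = SmartConnect(host="vcsa-01.lab.local",  # hypothetical vCenter FQDN
                  user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
for host in view.view:
    product = host.config.product            # e.g. "VMware ESXi, 6.7.0"
    print(f"{host.name}: {product.fullName} (build {product.build})")
view.Destroy()
Disconnect(si)
```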
Once vCenter and the vSphere hosts are patched to the required version, the installation of NSX-T Manager 2.4 is performed by deploying the NSX Unified Appliance via vCenter to the first of the three nodes in our vSphere cluster. An anti-affinity rule should be created to ensure the NSX Manager appliance VMs are kept separated from each other within the cluster, unless there is a host outage or related cluster failure. The NSX-T Manager appliance requirements for a small instance are 2 vCPUs, 16GB RAM and 200GB of storage. Check NSX-T sizing guidelines and system requirements in the NSX Manager VM System Requirements before deploying.
After the OVF has been deployed, we start configuring our 3-node NSX Manager cluster by logging into NSX Manager and adding a Compute Manager (vCenter Server). The NSX-T 2.4 UI now opens with a new wizard (until you opt out or the system is configured). While the wizard is a compelling addition to the new version, it also provides an Advanced Configuration link that exits the wizard and returns you to the standard UI. We’ll use the advanced configuration for our purposes, which is advisable for anyone who already knows their design requirements and how to configure them.
Part 2 – Adding a Compute Manager – Configuring a 3-node Management Cluster – Creating a Transport Zone and Transport Nodes
Add your vCenter Server as a Compute Manager by clicking on Fabric and then Compute Managers. Add any vCenter Servers with vSphere clusters that you’ll be hosting workloads on. Once you’ve got a Compute Manager added, we’ll move on to the next step of adding two additional NSX Manager nodes to our 3-node management cluster.
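The same step can be scripted against the manager API if you're building this into automation. A minimal sketch, assuming the /api/v1/fabric/compute-managers endpoint, a hypothetical vCenter and lab credentials; the payload shape should be confirmed against the 2.4 API guide:

```python
# Minimal sketch: register a vCenter Server as a Compute Manager.
import requests

NSX_MGR = "https://nsxmgr-01.lab.local"      # hypothetical manager FQDN
AUTH = ("admin", "VMware1!VMware1!")         # lab credentials only

payload = {
    "display_name": "vcsa-01",
    "server": "vcsa-01.lab.local",           # hypothetical vCenter FQDN
    "origin_type": "vCenter",
    "credential": {
        "credential_type": "UsernamePasswordLoginCredential",
        "username": "administrator@vsphere.local",
        "password": "VMware1!",
        "thumbprint": "AA:BB:...:FF",        # SHA-256 thumbprint of the vCenter certificate
    },
}
resp = requests.post(f"{NSX_MGR}/api/v1/fabric/compute-managers",
                     json=payload, auth=AUTH, verify=False)   # lab only
resp.raise_for_status()
print(resp.json()["id"])
```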
To add nodes to a new NSX-T 2.4 deployment, click on Overview and then Add Nodes in the Management Cluster. You’ll need to deploy two (2) additional nodes alongside the primary node that was deployed by OVF in order to have a 3-node, highly available NSX-T Management Cluster. The NSX-T Management Cluster – Add Node wizard will prompt you to specify a Compute Manager (the vCenter with the 3-node cluster), credentials, the size or form factor, a node name, cluster, host, datastore, network (Port Group) and IP address information for the management interface. As a special note, the node name will be used as the VM name of the NSX-T Manager VM deployed by this process. After a Management Cluster node is added, vCenter deploys a new NSX-T Manager VM to the host or resource pool specified, assigns it a secondary role and synchronizes it with the primary NSX-T Manager node.
*Adding more than two secondary Management Cluster nodes will not affect or improve availability, as NSX-T 2.4 will only utilize a 3-node cluster for NSX Manager roles and Repository synchronization. The capability to add additional nodes is designed to assist with NSX-T host migration, NSX upgrades and/or infrastructure replacement before removing an active node.
Now that we’ve established a 3-node Management Cluster, we’re ready to configure a VIP (virtual IP) for it. You can still access the primary NSX Manager node via its IP address, but the Management Cluster VIP should be used for NSX management and operations.
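Setting the VIP can also be done through the API. The call below reflects my recollection of the 2.4 endpoint (/api/v1/cluster/api-virtual-ip with a set_virtual_ip action), so treat it as a sketch and verify it against the API guide; the manager FQDN and VIP address are hypothetical:

```python
# Minimal sketch: set (and read back) the management cluster virtual IP.
import requests

NSX_MGR = "https://nsxmgr-01.lab.local"      # hypothetical primary manager FQDN
AUTH = ("admin", "VMware1!VMware1!")         # lab credentials only

resp = requests.post(
    f"{NSX_MGR}/api/v1/cluster/api-virtual-ip",
    params={"action": "set_virtual_ip", "ip_address": "192.168.110.50"},   # hypothetical VIP
    auth=AUTH, verify=False)                 # lab only
resp.raise_for_status()

print(requests.get(f"{NSX_MGR}/api/v1/cluster/api-virtual-ip",
                   auth=AUTH, verify=False).json())
```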
Since we have a new NSX Management Cluster VIP, we’ll open a new browser window to configure the rest of the environment from the VIP we created.
Before we begin configuring networking for NSX-T, let’s level set and understand what we’re really doing. Deploying NSX-T without an overlay and routing components requires hosts connected to VDSs that we can VLAN bridge to N-VDS Logical Switches. To establish redundant connectivity on ESXi hosts with VLAN bridging in NSX-T, each host needs two network interfaces on a VDS and two interfaces on an N-VDS. The host network interfaces on the VDS should be configured to carry any and all VLANs that you intend to serve from the NSX-T environment.
With that said, we now need to check each ESXi host to determine which interfaces are not in use and which we’ll use for our Transport Nodes. Only hosts with VM workloads that you wish to protect with Identity Firewall or the DFW need to be Transport Nodes. Hosts running NSX Manager or other NSX components do not need to be configured as Transport Nodes.
In my lab, vmnic0 and vmnic1 are on a VDS managed by vCenter, while vmnic2 and vmnic3 are unused and are what I’ll use to configure my Transport Nodes.
To configure Transport Nodes, you can choose to do this individually or per vSphere Cluster. I prefer to deploy by vSphere Cluster when possible, so we’ll select the Skunkworks-Compute cluster, where my lab VM workloads are hosted, and click Configure NSX.
Clicking Configure NSX starts the Configure NSX wizard and will prompt you to select a Transport Node Profile. As we have not created a Transport Node Profile, we click on Create New Transport Node Profile, name it and then click the Create New Transport Zone link located just below the Transport Zone selection.
Name the Transport Zone something logical like tz-vlan-backed, enter a name for the N-VDS, select your Host Membership Criteria and select VLAN as the Traffic Type. After filling out the General tab, click the N-VDS tab, select the N-VDS you created from the dropdown, select the default NIOC profile and, under the Uplink Profile dropdown, click Create Uplink Profile.

In the Uplink Profile form, enter a name for the Uplink Profile and, under Teamings, select the default teaming, set the Teaming Policy to Load Balance Source and type your vmnic names in the Active Uplinks field; in my lab, I’m adding vmnic2 and vmnic3 as previously referenced. Set the Transport VLAN to 0-4094 to allow all VLANs or enter the VLAN ID that you want to use, and enter the MTU you have configured across your network or leave it blank to use the default of 1600. Select LLDP, Send Packet Enabled from the LLDP dropdown, then enter the names of the Physical NICs and choose which profile vmnic to bind each one to; once again, in my lab I bind physical NIC vmnic2 to profile vmnic2 and physical NIC vmnic3 to profile vmnic3. The IP Assignment field is greyed out because I’m leveraging DHCP for Transport Nodes in my lab. An IP Pool or static IPs can be set, but in my experience it’s far easier to set DHCP reservations and manage those, and DHCP makes it simple to extend addressing as you grow.

At this point we’ve filled in the Add Transport Node Profile form, so click Add to complete it. Completing the Transport Node Profile wizard returns you to the Configure NSX wizard where you left off, at the Transport Node Profile selection. Select the Transport Node Profile you created and click Save.
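Before we move on, for anyone who would rather codify this, the Transport Zone and Uplink Profile pieces map to two small API calls. A minimal Python sketch with hypothetical names and lab credentials; the payload fields come from my notes on the 2.4 management-plane API and should be verified against the API guide:

```python
# Minimal sketch: create a VLAN transport zone and an uplink profile matching
# the UI walk-through above (vmnic2/vmnic3, Load Balance Source). Error handling omitted.
import requests

NSX_MGR = "https://nsxmgr-01.lab.local"      # hypothetical manager/VIP FQDN
AUTH = ("admin", "VMware1!VMware1!")         # lab credentials only

tz = requests.post(f"{NSX_MGR}/api/v1/transport-zones", auth=AUTH, verify=False, json={
    "display_name": "tz-vlan-backed",
    "host_switch_name": "nvds-01",           # N-VDS name used by the transport node profile
    "transport_type": "VLAN",
}).json()

uplink_profile = requests.post(f"{NSX_MGR}/api/v1/host-switch-profiles", auth=AUTH, verify=False, json={
    "resource_type": "UplinkHostSwitchProfile",
    "display_name": "up-vlan-lbsrc",
    "mtu": 1600,
    "transport_vlan": 0,
    "teaming": {
        "policy": "LOADBALANCE_SRCID",       # Load Balance Source
        "active_list": [
            {"uplink_name": "vmnic2", "uplink_type": "PNIC"},
            {"uplink_name": "vmnic3", "uplink_type": "PNIC"},
        ],
    },
}).json()

print(tz["id"], uplink_profile["id"])
```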
After clicking Save on the Configure NSX wizard for our compute cluster, the Configuration State of the hosts will show “NSX Install In Progress” and change to “NSX Installed” with a Node Status of “Up” once completed. The Configure NSX function is installing the NSX-T VIBs (vSphere Installation Bundles) on the hosts, starting their services and establishing communications. As an estimate, configuring NSX on a Host Transport Node, through to communications being established and the Node Status showing “Up”, will take about 5-10 minutes per host in most environments.
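If you'd rather watch that progress from a terminal than refresh the UI, the transport node state is also available from the API. A minimal sketch, with the response field names hedged as assumptions to verify against the API guide:

```python
# Minimal sketch: report the realization state of each host transport node.
import requests

NSX_MGR = "https://nsxmgr-01.lab.local"      # hypothetical manager/VIP FQDN
AUTH = ("admin", "VMware1!VMware1!")         # lab credentials only

nodes = requests.get(f"{NSX_MGR}/api/v1/transport-nodes",
                     auth=AUTH, verify=False).json().get("results", [])   # lab only
for node in nodes:
    state = requests.get(f"{NSX_MGR}/api/v1/transport-nodes/{node['id']}/state",
                         auth=AUTH, verify=False).json()
    # Expecting "state" to read "success" once the NSX VIBs are installed and the
    # node is communicating; confirm the exact field name in the API guide.
    print(node.get("display_name", node["id"]), state.get("state"))
```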
Part 3 – Create Logical Switches and Change VM port groups from VDS to N-VDS Logical Switches / Configure Active Directory / Enable IDFW and Create Active Directory based NSGroups and firewall rules
Now that we’ve connected our NSX-T hosts / Transport Nodes to our VLAN backed Transport Zone and N-VDS, we need to create Logical Switches for our workload VMs.
To create an NSX-T Logical Switch, click Advanced Networking & Security in the top toolbar, click Switching on the left navigation pane and then click Add under the Switches tab. Name the Logical Switch, select the Transport Zone you created from the dropdown, use the default Uplink Teaming Policy, ensure Admin Status is set to Up and enter the VLAN ID you wish to host on the Logical Switch. Use the defaults on the Switching Profiles tab and click Add.
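The equivalent API call for a VLAN-backed Logical Switch is small enough to include here. A minimal sketch with a hypothetical switch name and VLAN ID; confirm the field names against the 2.4 API guide:

```python
# Minimal sketch: create a VLAN-backed logical switch in the transport zone created earlier.
import requests

NSX_MGR = "https://nsxmgr-01.lab.local"      # hypothetical manager/VIP FQDN
AUTH = ("admin", "VMware1!VMware1!")         # lab credentials only

payload = {
    "display_name": "ls-vlan-110",           # hypothetical switch name
    "transport_zone_id": "<tz-vlan-backed uuid>",   # ID returned when the TZ was created
    "admin_state": "UP",
    "vlan": 110,                             # hypothetical VLAN ID
}
resp = requests.post(f"{NSX_MGR}/api/v1/logical-switches",
                     json=payload, auth=AUTH, verify=False)   # lab only
resp.raise_for_status()
print(resp.json()["id"])
```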
At this point, we’re ready to change VM NIC Portgroups from VDS to the NSX Logical Switch. Move the required VMs from VDS to NSX Logical Switch by editing the VM and changing the Port Group of the Network Adapter to the NSX Logical Switch created for it.
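If you have more than a handful of VMs to move, a pyVmomi sketch like the one below can do the re-pointing for you. It assumes hypothetical VM and switch names, lab credentials, and that the N-VDS Logical Switch appears in vCenter as an opaque network; treat it as a starting point rather than a finished tool:

```python
# Minimal sketch: re-point a VM's first vNIC from a VDS port group
# to an NSX logical switch (an opaque network in vCenter).
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

def find_obj(content, vimtype, name):
    """Return the first inventory object of the given type with a matching name."""
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    try:
        return next(o for o in view.view if o.name == name)
    finally:
        view.Destroy()

ctx = ssl._create_unverified_context()       # lab only; use proper certs in production
si = SmartConnect(host="vcsa-01.lab.local",  # hypothetical vCenter FQDN
                  user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ctx)
content = si.RetrieveContent()

vm = find_obj(content, vim.VirtualMachine, "web-01a")          # hypothetical VM name
opaque = find_obj(content, vim.OpaqueNetwork, "ls-vlan-110")   # the N-VDS logical switch

for dev in vm.config.hardware.device:
    if isinstance(dev, vim.vm.device.VirtualEthernetCard):
        dev.backing = vim.vm.device.VirtualEthernetCard.OpaqueNetworkBackingInfo(
            opaqueNetworkId=opaque.summary.opaqueNetworkId,
            opaqueNetworkType=opaque.summary.opaqueNetworkType)
        spec = vim.vm.ConfigSpec(deviceChange=[
            vim.vm.device.VirtualDeviceSpec(
                operation=vim.vm.device.VirtualDeviceSpec.Operation.edit,
                device=dev)])
        vm.ReconfigVM_Task(spec)             # fire-and-forget; wait on the task in real tooling
        break

Disconnect(si)
```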
Now that we’ve migrated VMs to NSX Logical Switches, we’ll configure Active Directory and enable IDFW.
To configure Active Directory integration in NSX-T, navigate to System and then Active Directory. Click Add Active Directory and you’ll be prompted to enter the FQDN (domain name), the NetBIOS Name, the Base Distinguished Name (Base DN) and an LDAP synchronization interval. Next, click on LDAP and enter the LDAP server IP/FQDN, the port and an AD account with permissions to query the entire directory tree.
At this point, you’re now ready to create NSX Security Groups (NSGroups) and map Active Directory groups to them.
To create an NSX-T NSGroup for an Active Directory user group, click on Advanced Networking and Security, then Inventory, then Groups. Click Add, name the group, and on the Members tab select members with the object type AD Group, as shown below. The search field in the available groups helps to filter large lists of groups, so make good use of it. Select and move the desired AD Group to the Selected field and click Add.
Now create NSGroups for your VM workloads. In my lab, we’re going to demonstrate implementing a simple prod / non-prod security policy with NSX Security Tags. We add a new NSGroup, name it prod and then on the Membership Criteria tab, choose Virtual Machine – Tag – Equals – prod – Scope – Equals – (blank or create a scope). After that, we do the same and create a NSGroup for nonprod.
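The tag-based groups can also be created through the ns-groups API. A minimal sketch for the prod group, with payload fields taken from my notes on the 2.4 API (verify the expression type against the API guide); the AD-mapped group is easier to create in the UI as shown above:

```python
# Minimal sketch: create an NSGroup whose members are VMs tagged "prod".
import requests

NSX_MGR = "https://nsxmgr-01.lab.local"      # hypothetical manager/VIP FQDN
AUTH = ("admin", "VMware1!VMware1!")         # lab credentials only

payload = {
    "display_name": "prod",
    "membership_criteria": [{
        "resource_type": "NSGroupTagExpression",
        "target_type": "VirtualMachine",
        "scope": "",                         # blank scope, matching the UI walk-through
        "tag": "prod",
    }],
}
resp = requests.post(f"{NSX_MGR}/api/v1/ns-groups",
                     json=payload, auth=AUTH, verify=False)   # lab only
resp.raise_for_status()
print(resp.json()["id"])
```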
We’re closing in on the final step: configuring Identity Firewall rules for our prod and non-prod applications based on our Active Directory-mapped NSX Security Groups.
For the lab, we create a rule to grant NetAdmin group users access to all systems, a rule for production access and a rule for non-production access. Then we create two app-to-app traffic rules to allow prod to prod and non-prod to non-prod. We follow that up with a catch-all deny in the App to App firewall rule section to block any traffic not explicitly allowed, and we’re off and running with our lab setup.
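For completeness, the same rule set can be pushed through the management-plane firewall API. The sketch below creates the App to App section with a prod-to-prod allow and the closing deny; the endpoint, the create_with_rules action and the rule fields are from my notes on the 2.4 API, so double-check them before relying on this:

```python
# Minimal sketch: create an "App to App" firewall section with a prod-to-prod
# allow rule and a closing deny, referencing the NSGroups created earlier.
import requests

NSX_MGR = "https://nsxmgr-01.lab.local"      # hypothetical manager/VIP FQDN
AUTH = ("admin", "VMware1!VMware1!")         # lab credentials only
PROD_NSGROUP_ID = "<prod nsgroup uuid>"      # from the ns-groups step above

section = {
    "display_name": "App to App",
    "section_type": "LAYER3",
    "stateful": True,
    "rules": [
        {
            "display_name": "prod-to-prod",
            "action": "ALLOW",
            "sources": [{"target_type": "NSGroup", "target_id": PROD_NSGROUP_ID}],
            "destinations": [{"target_type": "NSGroup", "target_id": PROD_NSGROUP_ID}],
        },
        {
            "display_name": "app-deny-any",
            "action": "DROP",                # the catch-all deny that closes the section
        },
    ],
}
resp = requests.post(f"{NSX_MGR}/api/v1/firewall/sections",
                     params={"action": "create_with_rules"},
                     json=section, auth=AUTH, verify=False)   # lab only
resp.raise_for_status()
print(resp.json()["id"])
```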
As you can see from the Identity Firewall example in my lab, controlling access based on Active Directory user groups and NSX VM security tags enables security teams with an easy-to-use, uniform firewall solution for the data center, cloud and PaaS environments alike. As always, if you’ve got questions or something that you’d like to see demonstrated, hit me up here at virtuallyread.com, on Twitter or on LinkedIn. Until the next post, enjoy Active Directory integration with NSX-T and #runNSX!
YouTube video recording playlist: Deploying VMware NSX-T Identity Firewall on VLAN-backed Networking without an Overlay
One of the quickest and easiest returns on investment in the VMware product stack is VMware NSX Identity Firewall. What other product returns measurable business outcomes with an hour of configuration and initial policy creation? I’d hesitate to say there’s another solution out there that provides such an immediate return on investment. VMware NSX IDFW provides an incredibly valuable and easy-to-use solution for VDI and data center “jumpboxes” alike. With that said, I’m going to demonstrate how you can deploy and configure VMware NSX Identity Firewall in under an hour and have an identity-based security solution that can be easily inserted into any existing vSphere environment, with or without deploying VMware NSX networking.
There are two methods IDFW uses for logon detection: Guest Introspection and/or the Active Directory Event Log Scraper. Guest Introspection is deployed on ESXi clusters where IDFW virtual machines are running. When network events are generated by a user, a guest agent installed on the VM forwards the information through the Guest Introspection framework to the NSX Manager. The second option is the Active Directory event log scraper. Configure the Active Directory event log scraper in the NSX Manager to point at an instance of your Active Directory domain controller. NSX Manager will then pull events from the AD security event log. You can use both in your environment, or one or the other. Note that if both the AD event log scraper and Guest Introspection are used, the two are mutually exclusive: if one of these stops working, the other does not begin to work as a backup.
Before we get started, let’s talk reality. While security postures such as Micro-segmentation and Zero-Trust may indeed be your desired end state, they’re a much longer journey than Macro-segmentation or Application Fencing. With that said, you can imagine how quickly you could create Application Fencing security policies for the application servers or groups in your environment and start by simply controlling user access to them. Leveraging macro-segmentation strategies in the initial phase accelerates your NSX Identity Firewall implementation, and it sets you up to create more granular micro-segmentation policies in a second phase.
My lab scenario will demonstrate how NSX-v Identity Firewall can quickly secure HR, Finance and CRM applications based on the user’s Active Directory group. The data center consists of three clusters: one for management and two for compute resources. RegionA01-COMP02 hosts the hr-web-01a, fin-web-01a and web-04a VMs that serve the HR, Finance and CRM applications respectively. The web VMs are running on a stereotypical “server VLAN” on one subnet, as commonly seen in many enterprise environments. The jumpbox or VDI desktop, the win-12-jump VM, is on a “user VLAN” in another subnet.
NSX Identity Firewall configuration requires that NSX Manager be deployed and registered with vCenter. The NSX Manager appliance is deployed from OVA via vCenter and takes about 30 minutes to complete deployment and registration to vCenter. For details on installing NSX Manager, read Install the NSX Manager Virtual Appliance.
Requirements for VMware NSX Identity Firewall are:
NSX Manager 6.2 or greater (the latest release is recommended)
VMware Virtual Distributed Switch or NSX N-VDS
FQDN for NSX Manager
NTP configured in NSX Manager to the same source as vCenter, vSphere hosts and Active Directory domain controllers
AD account to query the domain (this user must be able to access the entire directory tree structure)
AD read-only account with permissions to read Windows Event Log locally or via Group Policy (see MS KB)
**In NSX-v, controllers and networking components are not required
After NSX Manager has been deployed and registered to vCenter, we begin configuring IDFW event log scraping by setting the LDAP and CIFS properties for directory queries and event log reading. After setting the LDAP and CIFS properties, we validate that the directory has performed a sync and that the AD event servers have populated in the Event Server fields. Guest Introspection is also deployed so that configuration can be shown as well.
This demonstration video that I created will guide you step-by-step through the process of configuring VMware NSX Manager to enable Identity Firewall:
Some simple best-practices for leveraging Microsoft Active Directory user groups are:
Create new user-groups in a top level OU when possible and then nest existing groups which may be deeper in the forest.
Limit the nesting of Active Directory user groups to three (3) deep for best performance.
When leveraging a large enterprise forest, configure the LDAP and CIFS properties for directory queries against a smaller child domain that contains the user groups, instead of the top-level forest domain.
Once you’ve finished with the installation and configuration of VMware NSX-v Identity Firewall, it’s time to map Active Directory user groups to NSX security groups and create security objects for enforcement. There are static and dynamic objects that can be leveraged. Dynamic objects yield a simpler security policy, reducing the number of overall rules and security objects needed, so dynamic object types should be used whenever possible. Static NSX object types are IP Sets, vNICs and Virtual Machine names, whereas dynamic security object types contain a set of resources grouped by another construct, such as a vCenter Cluster or Datacenter, Distributed Port Group, Legacy Port Group, Logical Switch, Resource Pool or vApp.
The Object Types for an NSX-v firewall rule are: Security Group, IP Sets, Cluster, Datacenter, Distributed Port Group, Legacy Port Group, Logical Switch, Resource Pool, Virtual Machine, vApp and vNIC. With such a wide array of selection criteria, there are many object types that can be leveraged to create a strategic advantage in your security policy.
Now that we’ve configured NSX Identity Firewall and mapped Active Directory user groups to NSX security groups, we’ll create some Active Directory-based rules to test access to our applications.
If you want to test blocking without changing the default Layer 3 rule, simply create a rule blocking what you need in a user-defined firewall rule section above the rule you want to test. We’ll use this in the live demo with a firewall rule called VDI to APP that blocks the VDI desktop network (leveraging an IP Set) from the Internal Services security group that contains our protected web app servers. See the image below.
Now, let’s login as both users and test the access of the NetAdmin and HRAdmin users to see if they have the appropriate access.
Hope you enjoyed this post and feel free to hit me up on Twitter, LinkedIn and subscribe to my YouTube Channel with any requests for content that you’d like to see. Until next time, as the sun sets slowly in the west, I bid you a fond farewell. Adios amigos!