Deploying VMware NSX-T Identity Firewall on VLAN-backed Networking without an Overlay

VMware NSX-T Identity Firewall provides even greater capabilities than its predecessor in NSX-v. NSX-T 2.4 is capable of supporting up to 16 vCenter Servers and 1,024 hosts, which gives clients the ability to build multi-vCenter Identity Firewall security policies, simplifying deployment and reducing overall administration.

In the first part of this three-in-one blog series, we’ll cover deploying the VMware NSX-T Unified Appliance and configuring a 3-node Management Cluster. The next part will cover adding a Compute Manager (a vCenter Server), creating a Transport Zone and Transport Nodes (configuring TEPs on ESXi hosts), creating Logical Switches / Ports and then reconfiguring VM port groups from VDS to N-VDS-backed Logical Switches. We’ll wrap up the series by enabling Identity Firewall, configuring Active Directory integration, and creating security groups with AD-based user mappings and firewall rules to demonstrate AD-based firewall rule security.

(*All steps have been recorded to my YouTube channel in a playlist at the end of this post – but please read the post for insight before watching the recordings)

Part 1 – Deploying NSX Manager and Configuration of the 3-node Management Cluster

Before starting the deployment of NSX-T 2.4, ensure that you have three (3) available hosts that have been patched to ESXi 6.5 U2 P03 / 6.7 U1 EP06 or greater. You should always check the interoperability of VMware products in the VMware Product Interoperability Matrices before performing upgrades or deploying new solutions. You can upgrade VMware vSphere hosts by attaching and remediating the default Critical Update baseline in VMware Update Manager, as shown in the video or in the VMware Update Manager documentation.

Once vCenter and the vSphere hosts are patched to the required version, the installation of NSX-T Manager 2.4 is performed by deploying the NSX Unified Appliance via vCenter to the first of the three nodes in our vSphere cluster. An anti-affinity separation rule should be created to ensure the NSX Manager appliance VMs are kept apart from each other within the cluster, unless there is a host outage or related cluster failure. The NSX-T Manager appliance requirements for a small instance are 2 vCPUs, 16GB RAM and 200GB of storage. Check the NSX-T sizing guidelines and system requirements in the NSX Manager VM System Requirements before deploying.
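
If you prefer to script the anti-affinity rule rather than click through vCenter, here is a minimal pyVmomi sketch you could run once all three NSX Manager VMs exist. The vCenter address, credentials, cluster name and VM names are placeholders from my lab, so adjust them for your environment.

from pyVim.connect import SmartConnectNoSSL, Disconnect
from pyVmomi import vim

# Lab-only connection (certificate validation skipped); hostname and credentials are placeholders.
si = SmartConnectNoSSL(host="vcenter.lab.local", user="administrator@vsphere.local", pwd="****")
content = si.RetrieveContent()

def find_obj(vimtype, name):
    # Return the first inventory object of the given type with a matching name.
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    try:
        return next(o for o in view.view if o.name == name)
    finally:
        view.Destroy()

cluster = find_obj(vim.ClusterComputeResource, "Skunkworks-Mgmt")  # hypothetical management cluster
vms = [find_obj(vim.VirtualMachine, n) for n in ("nsxmgr-01", "nsxmgr-02", "nsxmgr-03")]  # hypothetical VM names

# DRS anti-affinity rule to keep the three NSX Manager appliances on separate hosts.
rule = vim.cluster.AntiAffinityRuleSpec(name="nsx-mgr-anti-affinity", enabled=True, vm=vms)
spec = vim.cluster.ConfigSpecEx(rulesSpec=[vim.cluster.RuleSpec(operation="add", info=rule)])
cluster.ReconfigureComputeResource_Task(spec=spec, modify=True)
Disconnect(si)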

After the OVF has been deployed, we start configuring our NSX Manager 3-node cluster by logging into NSX Manager and adding a Compute Manager (vCenter Server). The NSX-T 2.4 UI now starts with a new wizard (until you opt out or the system is configured). The wizard is a welcome addition for getting started, but it also provides an Advanced Configuration link that exits the wizard and returns you to the standard UI. We’ll use the advanced configuration for our purposes, and it’s advisable for anyone who already knows the requirements of their design and how to configure them.

Part 2 – Adding a Compute Manager – Configuring a 3-node Management Cluster – Creating a Transport Zone and Transport Nodes

Add your vCenter Server as a Compute Manager by clicking on Fabric and then Compute Managers. Add any vCenter Servers with vSphere clusters that you’ll be hosting workloads on. Once you’ve got a Compute Manager added, we’ll move on to the next step of adding two additional NSX Manager nodes to our 3-node Management Cluster.
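
The same registration can be done against the NSX-T 2.4 management API. The sketch below uses Python’s requests library with basic auth; the endpoint and payload follow the API as I understand it, and the hostnames, credentials and thumbprint are placeholders, so verify the exact fields against the NSX-T API guide.

import requests

NSX = "https://nsxmgr-01.lab.local"        # placeholder NSX Manager address
AUTH = ("admin", "****")

payload = {
    "server": "vcenter.lab.local",         # placeholder vCenter FQDN
    "origin_type": "vCenter",
    "credential": {
        "credential_type": "UsernamePasswordLoginCredential",
        "username": "administrator@vsphere.local",
        "password": "****",
        "thumbprint": "<vCenter SHA-256 thumbprint>",
    },
}
# verify=False is for lab self-signed certificates only.
r = requests.post(f"{NSX}/api/v1/fabric/compute-managers", json=payload, auth=AUTH, verify=False)
r.raise_for_status()
print("Compute Manager id:", r.json()["id"])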

To add nodes to a new NSX-T 2.4 deployment, click on Overview and then Add Nodes in the Management Cluster. You’ll need to deploy two (2) additional nodes alongside the primary node that was deployed by OVF in order to have a 3-node, highly available NSX-T Management Cluster. The NSX-T Management Cluster – Add Node wizard will prompt you to specify a Compute Manager (the vCenter with the 3-node cluster), credentials, the size or form factor, a node name, cluster, host, datastore, network (port group) and IP address information for the management interface. As a special note, the node name will be used for the VM name of the NSX-T Manager VM deployed by this process. After a Management Cluster node is added, vCenter deploys a new NSX-T Manager VM to the host or resource pool specified, assigns it a secondary role and synchronizes it with the primary NSX-T Manager node.

*Adding more than two secondary Management Cluster nodes will not affect or improve availability, as NSX-T 2.4 will only utilize a 3-node cluster for NSX Manager roles and repository synchronization. The capability to add additional nodes is designed to assist with NSX-T host migration, NSX upgrades and/or infrastructure replacement before removing an active node.

Now that we’ve established a 3-node Management Cluster, we’re ready to configure a VIP (virtual IP) for the Management Cluster. You can still access the primary NSX Manager node via its IP address, but the Management Cluster VIP should be used for NSX management and operations.
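
For reference, the cluster VIP can also be set through the API. This is a hedged sketch: the action parameter and IP address below are illustrative, so confirm the call in the NSX-T API guide before using it.

import requests

# Assign 10.0.10.50 (placeholder) as the Management Cluster VIP via the primary node.
r = requests.post(
    "https://nsxmgr-01.lab.local/api/v1/cluster/api-virtual-ip",
    params={"action": "set_virtual_ip", "ip_address": "10.0.10.50"},
    auth=("admin", "****"),
    verify=False,   # lab self-signed certificate
)
r.raise_for_status()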

Since we have a new NSX Management Cluster VIP, we’ll open a new browser window to configure the rest of the environment from the VIP we created.

Before we begin configuring networking for NSX-T, let’s level-set and understand what we’re really doing. Deploying NSX-T without an overlay and routing components requires that we have hosts connected to VDSs that we can VLAN-bridge to N-VDS Logical Switches. To establish redundant connectivity on ESXi hosts with VLAN bridging in NSX-T, each host will need two network interfaces on a VDS and two interfaces on an N-VDS. The host network interfaces on the VDS should be configured to deliver any or all VLANs that you intend to serve from the NSX-T environment.

With that said, we now need to check each ESXi host to determine which interfaces are not in use and which we’ll use for our Transport Nodes. Only hosts with VM workloads that you wish to protect with Identity Firewall or the DFW need to be Transport Nodes; hosts running NSX Managers or other NSX components do not need to be configured as Transport Nodes.

In my lab, vmnic0 and vmnic1 are on a VDS served by the vCenter, while vmnic2 and vmnic3 are not in use and are what I’ll use to configure my Transport Nodes.
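
If you have more than a handful of hosts, a quick pyVmomi sketch like the one below can report which vmnics are already claimed by a standard or distributed switch and which are free for the N-VDS. Connection details are placeholders for my lab.

from pyVim.connect import SmartConnectNoSSL, Disconnect
from pyVmomi import vim

si = SmartConnectNoSSL(host="vcenter.lab.local", user="administrator@vsphere.local", pwd="****")
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)

for host in view.view:
    net = host.config.network
    # Collect pnic keys already claimed by standard vSwitches or proxy (VDS / N-VDS) switches.
    claimed = {key for vs in net.vswitch for key in (vs.pnic or [])}
    claimed |= {key for ps in net.proxySwitch for key in (ps.pnic or [])}
    free = [p.device for p in net.pnic if p.key not in claimed]
    print(host.name, "unused uplinks:", free)

view.Destroy()
Disconnect(si)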

Transport Nodes can be configured individually or per vSphere cluster. I prefer to deploy by vSphere cluster when possible, so we’ll select the Skunkworks-Compute cluster, where my lab VM workloads are hosted, and click Configure NSX.

Clicking Configure NSX starts the Configure NSX wizard and will prompt you to select a Transport Node Profile. As we have not created a Transport Node Profile, we click on Create New Transport Node Profile, name it and then click the Create New Transport Zone link located just below the Transport Zone selection.

Name the Transport Zone something logical like tz-vlan-backed, enter a name for the N-VDS, select your Host Membership Criteria and select VLAN as the Traffic Type. After filling out the General tab, click the N-VDS tab and work through it as follows:

1. Select the N-VDS that you created from the dropdown and select the default NIOC profile.
2. Under the Uplink Profile dropdown, click Create Uplink Profile. In the Uplink Profile form, enter a name and, under Teamings, select the default teaming, set the Teaming Policy to Load Balance Source and type your vmnic names in the Active Uplinks field. In my lab, I’m adding vmnic2 and vmnic3 as previously referenced.
3. Set the Transport VLAN to 0-4094 to allow all VLANs, or enter the specific VLAN ID that you want to use.
4. Enter the MTU that you have configured across your network, or leave it blank to accept the default of 1600.
5. Select LLDP, Send Packet Enabled from the LLDP dropdown.
6. Enter the names of the Physical NICs and choose which profile uplink to bind each one to. In my lab, once again, I bind physical NIC vmnic2 to profile uplink vmnic2 and physical NIC vmnic3 to profile uplink vmnic3.
7. The IP Assignment field is greyed out because I’m leveraging DHCP in my lab for Transport Nodes. An IP Pool or static addresses can be used, but in my experience it’s far easier to set and manage DHCP reservations, and DHCP is beneficial for extending addressing as you grow.

At this point we’ve filled in the Add Transport Node Profile form, so click Add to complete it. Completing the Transport Node Profile wizard returns you to the Configure NSX wizard, where you left off, to select a Transport Node Profile. Select the Transport Node Profile that you created and click Save.
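
The wizard builds a Transport Zone and an Uplink Profile behind the scenes; the sketch below creates equivalent objects through the management API. The object names, VIP address and field values mirror my lab choices and the 2.4 API schema as I understand it, so treat them as assumptions and check the NSX-T API guide before using them.

import requests

NSX, AUTH = "https://nsx-vip.lab.local", ("admin", "****")   # placeholder cluster VIP

# VLAN-backed Transport Zone bound to the N-VDS name used in the profile.
tz = requests.post(f"{NSX}/api/v1/transport-zones", auth=AUTH, verify=False, json={
    "display_name": "tz-vlan-backed",
    "host_switch_name": "nvds-vlan",       # placeholder N-VDS name
    "transport_type": "VLAN",
}).json()
print("Transport Zone id:", tz["id"])

# Uplink profile: source-port load balancing across two active uplinks.
requests.post(f"{NSX}/api/v1/host-switch-profiles", auth=AUTH, verify=False, json={
    "resource_type": "UplinkHostSwitchProfile",
    "display_name": "uplink-profile-vlan",
    "teaming": {
        "policy": "LOADBALANCE_SRCID",     # "Load Balance Source" in the UI
        "active_list": [
            {"uplink_name": "vmnic2", "uplink_type": "PNIC"},
            {"uplink_name": "vmnic3", "uplink_type": "PNIC"},
        ],
    },
    "transport_vlan": 0,                   # VLAN tagging is handled per Logical Switch in this design
    "mtu": 1600,
}).raise_for_status()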

After clicking Save on the Configure NSX wizard for our compute cluster, the Configuration State of the hosts will show “NSX Install In Progress” and change to “NSX Installed” with a Node Status of “Up” once completed. The Configure NSX function installs the NSX-T VIBs (vSphere Installation Bundles) on the hosts, starts their services and establishes communications. As an estimate, configuring NSX on a Host Transport Node, through to communications being established and the Node Status showing “Up”, takes about 5-10 minutes per host in most environments.
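
If you’d rather watch progress from a terminal than refresh the UI, a small polling sketch is below. The state endpoint and field names are my assumption of the 2.4 management API, so verify them before relying on this.

import requests, time

NSX, AUTH = "https://nsx-vip.lab.local", ("admin", "****")   # placeholder cluster VIP

while True:
    resp = requests.get(f"{NSX}/api/v1/transport-nodes/state", auth=AUTH, verify=False)
    resp.raise_for_status()
    states = resp.json().get("results", [])
    pending = [s for s in states if s.get("state") != "success"]
    if states and not pending:
        print("All Transport Nodes report success")
        break
    print(f"{len(pending)} node(s) still preparing...")
    time.sleep(60)   # host prep typically takes 5-10 minutes per host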

Part 3 – Create Logical Switches and Change VM port groups from VDS to N-VDS Logical Switches / Configure Active Directory / Enable IDFW and Create Active Directory based NSGroups and firewall rules

Now that we’ve connected our NSX-T hosts / Transport Nodes to our VLAN backed Transport Zone and N-VDS, we need to create Logical Switches for our workload VMs.

To create an NSX-T Logical Switch, click Advanced Networking & Security in the top toolbar, click Switching in the left navigation pane and then click Add under the Switches tab. Name the Logical Switch, select the Transport Zone you created from the dropdown, use the default Uplink Teaming Policy, ensure Admin Status is set to Up and enter the VLAN ID you wish to host on the Logical Switch. Use the defaults on the Switching Profiles tab and click Add.
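
The same Logical Switch can be created with a single API call, sketched below; the display name, Transport Zone ID and VLAN are placeholders from my lab.

import requests

NSX, AUTH = "https://nsx-vip.lab.local", ("admin", "****")

ls = requests.post(f"{NSX}/api/v1/logical-switches", auth=AUTH, verify=False, json={
    "display_name": "ls-vlan-100",               # placeholder switch name
    "transport_zone_id": "<tz-vlan-backed id>",  # ID returned when the Transport Zone was created
    "admin_state": "UP",
    "vlan": 100,                                 # placeholder VLAN ID
}).json()
print("Logical Switch id:", ls["id"])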

At this point, we’re ready to change VM NIC Portgroups from VDS to the NSX Logical Switch. Move the required VMs from VDS to NSX Logical Switch by editing the VM and changing the Port Group of the Network Adapter to the NSX Logical Switch created for it.
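
N-VDS Logical Switches appear in vCenter as opaque networks, so scripting the move means swapping the NIC backing from a port group to OpaqueNetworkBackingInfo. A pyVmomi sketch follows, with placeholder VM and switch names from my lab.

from pyVim.connect import SmartConnectNoSSL, Disconnect
from pyVmomi import vim

si = SmartConnectNoSSL(host="vcenter.lab.local", user="administrator@vsphere.local", pwd="****")
content = si.RetrieveContent()

def find_obj(vimtype, name):
    # Return the first inventory object of the given type with a matching name.
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    try:
        return next(o for o in view.view if o.name == name)
    finally:
        view.Destroy()

vm = find_obj(vim.VirtualMachine, "app01")             # placeholder workload VM
opaque = find_obj(vim.OpaqueNetwork, "ls-vlan-100")    # the Logical Switch as vCenter sees it

# Take the VM's first vNIC and repoint its backing at the opaque (NSX) network.
nic = next(d for d in vm.config.hardware.device
           if isinstance(d, vim.vm.device.VirtualEthernetCard))
nic.backing = vim.vm.device.VirtualEthernetCard.OpaqueNetworkBackingInfo(
    opaqueNetworkId=opaque.summary.opaqueNetworkId,
    opaqueNetworkType=opaque.summary.opaqueNetworkType,
)
change = vim.vm.device.VirtualDeviceSpec(
    operation=vim.vm.device.VirtualDeviceSpec.Operation.edit, device=nic)
vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[change]))
Disconnect(si)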

Now that we’ve migrated VMs to NSX Logical Switches, we’ll configure Active Directory and enable IDFW.

To configure Active Directory integration in NSX-T, navigate to System and then Active Directory. Click Add Active Directory and you’ll be prompted to enter the FQDN (domain name), the NetBIOS name and the Base Distinguished Name (Base DN), and to specify an LDAP synchronization interval. Next, click on LDAP and enter the LDAP server IP/FQDN, the port and an AD account with permissions to query the entire directory tree.
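
The directory configuration can also be pushed via the API. I’m sketching the shape from memory, so the endpoint, resource type and field names below are assumptions; the UI steps above (and the NSX-T API guide) remain the authoritative reference.

import requests

NSX, AUTH = "https://nsx-vip.lab.local", ("admin", "****")

# Register the AD domain (values are placeholders for my lab domain).
domain = requests.post(f"{NSX}/api/v1/directory/domains", auth=AUTH, verify=False, json={
    "resource_type": "DirectoryAdDomain",
    "name": "lab.local",
    "netbios_name": "LAB",
    "base_distinguished_name": "DC=lab,DC=local",
}).json()

# Attach an LDAP server and a bind account that can query the entire tree.
requests.post(f"{NSX}/api/v1/directory/domains/{domain['id']}/ldap-servers",
              auth=AUTH, verify=False, json={
    "host": "dc01.lab.local",
    "port": 389,
    "protocol": "LDAP",
    "bind_identity": "svc-nsx-ldap@lab.local",
    "password": "****",
}).raise_for_status()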

At this point, you’re now ready to create NSX Security Groups (NSGroups) and map Active Directory groups to them.

To create an NSX-T NSGroup for an Active Directory user group, click on Advanced Networking and Security, then Inventory, then Groups. Click Add, name the group and then, on the Members tab, select members with the object type AD Group. The search field in the available groups list helps to filter large lists of groups, so make good use of it. Select and move the desired AD Group to the Selected field and click Add.

Now create NSGroups for your VM workloads. In my lab, we’re going to demonstrate a simple prod / non-prod security policy with NSX Security Tags. We add a new NSGroup, name it prod and then, on the Membership Criteria tab, choose Virtual Machine – Tag – Equals – prod – Scope – Equals – (blank or create a scope). After that, we do the same and create an NSGroup for nonprod.
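
Both kinds of NSGroup (the AD-mapped user group and the tag-based workload group) can be created through the ns-groups API. The member and criteria expressions below follow my reading of the 2.4 schema, and the directory group ID is a placeholder, so treat this as a sketch and confirm it against the API guide.

import requests

NSX, AUTH = "https://nsx-vip.lab.local", ("admin", "****")

def create_nsgroup(body):
    # POST an NSGroup definition and return its ID.
    r = requests.post(f"{NSX}/api/v1/ns-groups", auth=AUTH, verify=False, json=body)
    r.raise_for_status()
    return r.json()["id"]

# NSGroup mapped to an Active Directory group (placeholder directory group ID).
netadmins = create_nsgroup({
    "display_name": "sg-netadmins",
    "members": [{
        "resource_type": "NSGroupSimpleExpression",
        "target_type": "DirectoryGroup",
        "target_property": "id",
        "op": "EQUALS",
        "value": "<directory group id>",
    }],
})

# NSGroup whose membership is any VM tagged "prod" (repeat with "nonprod" for the other group).
prod = create_nsgroup({
    "display_name": "prod",
    "membership_criteria": [{
        "resource_type": "NSGroupTagExpression",
        "target_type": "VirtualMachine",
        "scope": "",
        "tag": "prod",
    }],
})
print("NSGroup IDs:", netadmins, prod)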

We’re closing in on the final step: configuring Identity Firewall rules for our prod and non-prod applications based on our Active Directory-mapped NSX Security Groups.

For the lab, we create a rule to grant NetAdmin group users access to any system, a rule for production access and a rule for non-production access. Then we create two app-to-app traffic rules to allow prod-to-prod and non-prod-to-non-prod communication. We follow that up with a catch-all deny in the App to App firewall rule section to block any traffic not explicitly allowed, and we’re off and running with our lab setup.
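
To round the lab out, the rule set can also be pushed as a single firewall section. The sketch below creates the NetAdmin access rule, a prod-to-prod rule and the catch-all deny; the section endpoint, rule fields and NSGroup IDs reflect my lab and my reading of the management API, so verify them before use.

import requests

NSX, AUTH = "https://nsx-vip.lab.local", ("admin", "****")

def ref(nsgroup_id):
    # Helper: reference an NSGroup as a rule source/destination.
    return [{"target_type": "NSGroup", "target_id": nsgroup_id}]

section = {
    "display_name": "App to App",
    "section_type": "LAYER3",
    "stateful": True,
    "rules": [
        {   # NetAdmin AD group members may reach any system.
            "display_name": "netadmin-any",
            "sources": ref("<sg-netadmins id>"),
            "action": "ALLOW",
        },
        {   # prod workloads may talk to prod workloads (repeat for nonprod-to-nonprod).
            "display_name": "prod-to-prod",
            "sources": ref("<prod nsgroup id>"),
            "destinations": ref("<prod nsgroup id>"),
            "action": "ALLOW",
        },
        {   # Catch-all deny at the bottom of the section.
            "display_name": "deny-all",
            "action": "DROP",
        },
    ],
}
r = requests.post(f"{NSX}/api/v1/firewall/sections",
                  params={"action": "create_with_rules"},
                  auth=AUTH, verify=False, json=section)
r.raise_for_status()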

As you can see from the Identity Firewall example in my lab, controlling access based on Active Directory user groups and NSX VM security tags gives security teams an easy-to-use, uniform firewall solution for the data center, cloud and PaaS environments alike. As always, if you’ve got questions or something that you’d like to see demonstrated, hit me up here at virtuallyread.com, on Twitter or on LinkedIn. Until the next post, enjoy Active Directory integration with NSX-T and #runNSX!

YouTube video recording playlist: Deploying VMware NSX-T Identity Firewall on VLAN-backed Networking without an Overlay

