With a new year comes new goals and a new role

I’ve got big news and it’s just too much for a 280 character Tweet.

With a new year come new goals and a new role as I start into the next decade.

I spent the better part of 2019 collaborating on presales support for account teams, given the number of new technologies in the SDN space at VMware and how well they aligned with my technical pursuits. I took a real liking to assisting with client solution design and helping to unify an overall technical vision for enterprises with multiple organizations. Working hand in hand with those account teams gave me some of the best experiences of my career, and from that, I guess you could say, I developed a desire to be an actual part of it.

So, over the next days and weeks, I’ll be transitioning into my new role as a VMware NSX Network and Security Solutions Engineer for the US Globals team. The opportunity to work with VMware’s largest clients on their current and future SDN visions is an amazing chance to participate in some of the most complex and challenging designs being built today, so I jumped at it.

I’m very excited to join such an amazing group of individuals, a team with a reputation as some of the best. It’s been extremely difficult to keep quiet about this, given how excited I am to take on the challenge. Now that I get to write about it, I feel an overwhelming sense of gratitude to those who believe in me and to the many people I’ve worked with in and alongside the VMware TAM and NSX TAM teams over the years.

Since I have the character space to do so, I want to thank some people directly who have been instrumental in helping me get to where I am.

First and foremost, I want to thank my wife for being the independent, brilliant and dynamic woman that she is. Without her, I would be nowhere.

My son Jonas and daughter Sofia, whom many of you have heard, seen or met via my various social posts, are the driving force behind it all and truly make me be the best that I can be. They’ll never know how thankful I am.

To my VMware TAM coworkers, previous and current, thank you all for being amazing human beings. This culture of talented and compassionate people was created and is kept alive by all of your efforts, which I’m very proud to have been a part of.

To the #vCommunity at large, thank you all. There are way too many of you to thank, and I’m not bragging about my follower stats. Know that I love you all and that our community is a very special place, full of very special people. Yes… all kinds of special. /me grins

Lastly, thank you to VMware for being the kind of company that promotes an environment of career growth, in which I can find and use new passions to grow.

After a few days of thinking about how I reached out and how naturally they reached back, it’s given me a nonstop smile. And as the sun sets slowly in the west, I sit here thinking about how lucky I am to be in a place like this, at a time like this.

#runNSX

VMware NSX-T 2.4.1 Upgrade Live Demo

VMware NSX-T 2.4.1 was released just a bit ago, so I thought it would be helpful to do a live demo of an upgrade to 2.4.1. In short, the good news: all but a few of the upgrades I’ve seen have succeeded on the first attempt, thanks to the Pre-Upgrade / Post-Upgrade Checks that are built into each section. The few that didn’t failed by hitting timeouts while waiting for a manager node to respond after its upgrade (due to really old, under-performing hardware), and those were easily remedied by checking the error and retrying the Manager Node upgrade.
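That remedy is essentially a poll-and-retry loop. Here’s a rough sketch of the logic in Python; the function, statuses and timings are hypothetical illustrations, not the NSX API:

```python
import time

def wait_for_node(check_status, timeout_s=600, interval_s=5, max_retries=1):
    """Poll a manager node's upgrade status, retrying once if we time out
    or hit a failure waiting for the node to respond (illustrative sketch,
    not the NSX API)."""
    for attempt in range(max_retries + 1):
        deadline = time.time() + timeout_s
        while time.time() < deadline:
            status = check_status()  # e.g. "SUCCESS", "IN_PROGRESS", "FAILED"
            if status == "SUCCESS":
                return True
            if status == "FAILED":
                break  # inspect the error, then retry the node upgrade
            time.sleep(interval_s)
    return False
```

On slow hardware the first pass can exhaust the timeout, which is exactly the case where the Upgrade Coordinator’s retry saves the day.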

The 2.4.1 update is a core maintenance release, with a new enhancement for VMware HCX that adds functionality for virtual machine migration to on-premises NSX-T based deployments. I have a client doing this at the moment and can attest that it’s greatly welcome functionality for companies acquiring other companies with overlapping networks. VMware HCX can also migrate between disparate versions, so moving workloads from an acquired company running many different ESXi versions is a real plus.

To begin the upgrade to NSX-T 2.4.1, we download the .mub upgrade file from my.vmware.com and log into NSX Manager. Once logged in, we navigate to System on the top toolbar and then Upgrades in the left navigation pane. In the Upgrade window, choose the location of the NSX .mub upgrade bundle, from local disk or URL. After we’ve chosen our upgrade bundle, we click Upgrade; the bundle is validated by NSX Manager and then staged for upgrade. Once the upload status changes to “Upgrade Bundle retrieved successfully,” we click Begin Upgrade and the VMware NSX Upgrade Coordinator starts.

There are five steps in the upgrade, separated by a clickable top toolbar: Bundle and Status, Hosts, Edges, Controller Nodes and Management Nodes. After accepting the End User Agreement, we run the Host Upgrade on the compute workload clusters that have NSX installed, then click Run Post Checks to ensure they’re operable. Edge upgrades come next, following the same process, as do Controller Nodes.

The final step in the NSX-T 2.4.1 upgrade process is Management Nodes, which include an option to return the NSX management cluster to service after either a single node or the full 3-node cluster is formed. As a bit of guidance, it’s always a good idea to wait for the 3-node cluster to be operational before returning to service. However, if you’ve got a short outage window, you can return to service with a single NSX Manager node; just be advised that resource usage will spike as the other NSX Manager nodes rejoin the cluster and sync.
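The return-to-service guidance boils down to one simple decision. The helper below is purely illustrative of that logic, not an NSX API:

```python
def safe_to_return_to_service(nodes_up, short_outage_window=False):
    """Decide when to put the NSX management cluster back in service
    during an upgrade (illustrative of the guidance above, not an API).

    Prefer waiting for the full 3-node cluster; a single node is
    acceptable only when the outage window is tight, at the cost of
    heavy load while the other nodes rejoin and sync."""
    if nodes_up >= 3:
        return True
    return nodes_up >= 1 and short_outage_window
```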

Check out my YouTube video of the VMware NSX-T 2.4.1 Upgrade for a preview of what to expect:

Deploying VMware NSX-T Identity Firewall on VLAN-backed Networking without an Overlay

VMware NSX-T Identity Firewall provides even greater capabilities than its predecessor in NSX-v. NSX-T 2.4 supports up to 16 vCenter Servers and 1,024 hosts, which gives clients the ability to apply multi-vCenter Identity Firewall security policies, simplifying deployment and reducing overall administration.

In the first part of this three-in-one blog series, we’ll cover deploying the VMware NSX-T Unified Appliance and configuring a 3-node Management Cluster. The next part will cover adding a compute manager (adding a vCenter Server), creating a Transport Zone and Transport Nodes (configuring TEPs on ESXi hosts), creating Logical Switches / Ports and then re-configuring VM port groups from VDS to N-VDS backed Logical Switches. Then we’ll wrap up the three-in-one series with enabling Identity Firewall, configuring Active Directory integration, creating security groups with AD-based user mappings and firewall rules to demonstrate AD-based firewall rule security.

(*All steps have been recorded to my YouTube channel in a playlist at the end of this post – but please read the post for insight before watching the recordings)

Part 1 – Deploying NSX Manager and Configuration of the 3-node Management Cluster

Before starting the deployment of NSX-T 2.4, ensure that you have three (3) available hosts that have been patched to ESXi 6.5 U2 P03 / 6.7 U1 EP06 or greater. You should always check the interoperability of VMware products in the VMware Product Interoperability Matrices before performing upgrades or deploying new solutions. You can upgrade VMware vSphere hosts by attaching and remediating the default Critical Update baseline in VMware Update Manager, as shown in the video or in the VMware Update Manager Documentation.

Once vCenter and the vSphere hosts are patched to the required version, installation of NSX-T Manager 2.4 begins by deploying the NSX Unified Appliance via vCenter to the first of the three nodes in our vSphere cluster. An anti-affinity separation rule should be created to ensure the NSX Manager appliance VMs stay separated from each other within the cluster, barring a host outage or related cluster failure. The NSX-T Manager appliance requirements for a small instance are 2 vCPUs, 16GB RAM and 200GB storage. Check the NSX-T sizing guidelines and system requirements in the NSX Manager VM System Requirements before deploying.

After the OVF has been deployed, we start configuring our 3-node NSX Manager cluster by logging into NSX Manager and adding a Compute Manager (vCenter Server). The NSX-T 2.4 UI now opens with a new wizard (until you opt out or the system is configured). While the wizard is a compelling addition to the new version, it also provides an Advanced Configuration link that ends the wizard and returns you to the UI. We’ll use the advanced configuration for our purposes; it’s advisable for anyone who knows the requirements of their design and how to configure them.

Part 2 – Adding a Compute Manager – Configuring a 3-node Management Cluster – Creating a Transport Zone and Transport Nodes

Add your vCenter Server as a Compute Manager by clicking Fabric and then Compute Managers. Add any vCenter Servers with vSphere clusters that you’ll be hosting workloads on. Once you’ve got a Compute Manager added, we’ll move on to the next step: adding two additional NSX Manager nodes to our 3-node management cluster.

To add nodes to a new NSX-T 2.4 deployment, click Overview and then Add Nodes in the Management Cluster. You’ll need to deploy two (2) additional nodes alongside the primary node that was deployed by OVF in order to have a 3-node, highly available NSX-T Management Cluster. The NSX-T Management Cluster Add Node wizard will prompt you to specify a Compute Manager (the vCenter with the 3-node cluster), credentials, the size or form factor, a node name, cluster, host, datastore, network (Port Group) and IP address information for the management interface. As a special note, the node name will be used as the VM name of the NSX-T Manager VM deployed by this process. After a Management Cluster node is added, vCenter deploys a new NSX-T Manager VM to the host or resource pool specified, assigns it a secondary role and synchronizes it with the primary NSX-T Manager node.
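For automation, the same add-node operation is exposed through the NSX-T REST API as a POST to /api/v1/cluster/nodes/deployments. The body below is a trimmed, illustrative sketch; every value is a placeholder, and the exact field names should be verified against the NSX-T API guide for your version:

```python
# Illustrative request body for POST /api/v1/cluster/nodes/deployments,
# the automated equivalent of the Add Node wizard. All values are
# placeholders; verify field names against your version's API guide.
add_node_body = {
    "deployment_requests": [
        {
            "roles": ["CONTROLLER", "MANAGER"],
            "form_factor": "SMALL",  # 2 vCPU / 16GB RAM appliance size
            "user_settings": {
                "cli_password": "<cli-password>",
                "root_password": "<root-password>",
            },
            "deployment_config": {
                "placement_type": "VsphereClusterNodeVMDeploymentConfig",
                "vc_id": "<compute-manager-id>",
                "compute_id": "<cluster-moref>",
                "storage_id": "<datastore-moref>",
                "management_network_id": "<portgroup-moref>",
                "hostname": "nsxmgr-02",  # also becomes the VM name
                "management_port_subnets": [
                    {"ip_addresses": ["192.168.10.12"], "prefix_length": 24}
                ],
                "default_gateway_addresses": ["192.168.10.1"],
            },
        }
    ]
}
```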

*Adding more than two secondary Management Cluster nodes will not affect or improve availability, as NSX-T 2.4 will only utilize a 3-node cluster for NSX Manager roles and repository synchronization. The capability to add additional nodes is designed to assist with NSX-T host migration, NSX upgrades and/or infrastructure replacement before removing an active node.


Now that we’ve established a 3-node Management Cluster, we’re ready to configure a VIP (virtual IP) for the Management Cluster. You can still access the primary NSX Manager node via its IP address, but the Management Cluster VIP should be used for NSX management and operations.
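If you’d rather script this step, the cluster VIP can also be assigned through the REST API. The snippet below only builds the request URL; the endpoint shown reflects the NSX-T 2.4 API as I understand it, so verify it against your version’s API guide before using it:

```python
from urllib.parse import urlencode

def set_vip_url(manager, vip):
    """Build the NSX-T API call that assigns the management cluster VIP.
    It's a POST with no body; endpoint per the NSX-T 2.4 API as I
    understand it (verify for your version)."""
    query = urlencode({"action": "set_virtual_ip", "ip_address": vip})
    return f"https://{manager}/api/v1/cluster/api-virtual-ip?{query}"
```

You would POST that URL with an authenticated session, then browse to the VIP rather than an individual node’s address.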

Since we have a new NSX Management Cluster VIP, we’ll open a new browser window to configure the rest of the environment from the VIP we created.

Before we begin configuring networking for NSX-T, let’s level set and understand what we’re really doing. Deploying NSX-T without an overlay and routing components requires hosts connected to VDSs that we can VLAN-bridge to N-VDS Logical Switches. To establish redundant connectivity on ESXi hosts with VLAN bridging in NSX-T, each host will need two network interfaces on a VDS and two interfaces on an N-VDS. The host network interfaces on the VDS should be configured to deliver any or all VLANs that you intend to serve from the NSX-T environment.

With that said, we now need to check each ESXi host to determine which interfaces are not in use and which we’ll use for our Transport Nodes. Only hosts with VM workloads that you wish to protect with Identity Firewall or the DFW need to be Transport Nodes. Hosts running NSX Managers or other NSX components do not need to be configured as Transport Nodes.

In my lab, vmnic0 and vmnic1 are on a VDS served by the vCenter, while vmnic2 and vmnic3 are unused and are what I’ll use to configure my Transport Nodes.

Transport Nodes can be configured individually or per vSphere Cluster. I prefer to deploy by vSphere Cluster when possible, so we’ll select the Skunkworks-Compute cluster, where my lab VM workloads are hosted, and click Configure NSX.

Clicking Configure NSX starts the Configure NSX wizard, which prompts you to select a Transport Node Profile. Since we have not created a Transport Node Profile yet, we click Create New Transport Node Profile, name it and then click the Create New Transport Zone link located just below the Transport Zone selection.

Name the Transport Zone something logical like tz-vlan-backed, enter a name for the N-VDS, select your Host Membership Criteria and select VLAN as the Traffic Type. After filling out the General tab, click the N-VDS tab, select the N-VDS you created from the dropdown, select the default NIOC profile and, under the Uplink Profile dropdown, click Create Uplink Profile.

In the Uplink Profile form, enter a name for the Uplink Profile and, under Teamings, select the default teaming, set the Teaming Policy to Load Balance Source and type your vmnic names in the Active Uplinks field. In my lab, I’m adding vmnic2 and vmnic3 as previously referenced. Set the Transport VLAN to 0-4094 to allow all VLANs, or enter the VLAN ID that you want to use. Enter the MTU that you have configured across your network, or leave it blank to use the default of 1600. Select LLDP, Send Packet Enabled from the LLDP dropdown, then enter the names of the Physical NICs and choose which profile vmnic to bind each one to. In my lab, once again, I bind physical NIC vmnic2 to profile vmnic2 and physical NIC vmnic3 to profile vmnic3. The IP Assignment field is greyed out because I’m leveraging DHCP for Transport Nodes in my lab. An IP Pool or static IPs can be set, but in my experience it’s far easier to set DHCP reservations and manage those, and DHCP makes it simple to extend addressing as you grow.

At this point we’ve filled in the Add Transport Node Profile form, and we click Add to complete it. Completing the Transport Node Profile wizard returns you to the Configure NSX wizard where you left off, at the Transport Node Profile selection. Select the Transport Node Profile that you created and click Save.
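The binding of each physical NIC to a same-named profile uplink is easy to get wrong when scripting these profiles. Here’s a small sanity-check helper (purely illustrative, not an NSX API) that verifies every physical NIC maps to an uplink name the teaming policy actually defines:

```python
def validate_pnic_bindings(pnics, active_uplinks):
    """Check that every physical NIC is bound to an uplink name that the
    uplink profile's teaming actually defines (illustrative helper, not
    an NSX API). `pnics` maps physical NIC -> profile uplink name."""
    missing = {nic: uplink for nic, uplink in pnics.items()
               if uplink not in active_uplinks}
    return missing  # empty dict means the bindings are consistent

# Mirrors the lab config: vmnic2 and vmnic3 bound to same-named uplinks.
assert validate_pnic_bindings(
    {"vmnic2": "vmnic2", "vmnic3": "vmnic3"},
    ["vmnic2", "vmnic3"],
) == {}
```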

After clicking Save in the Configure NSX wizard for our compute cluster, the Configuration State of the hosts will show “NSX Install In Progress,” then change to “NSX Installed” with a Node Status of “Up” once completed. The Configure NSX function installs the NSX-T VIBs (vSphere Installation Bundles) on the hosts, starts their services and establishes communications. As an estimate, configuring NSX on a Host Transport Node, through to communications being established and the Node Status showing “Up,” takes about 5-10 minutes per host in most environments.

Part 3 – Create Logical Switches and Change VM port groups from VDS to N-VDS Logical Switches / Configure Active Directory / Enable IDFW and Create Active Directory based NSGroups and firewall rules

Now that we’ve connected our NSX-T hosts / Transport Nodes to our VLAN backed Transport Zone and N-VDS, we need to create Logical Switches for our workload VMs.

To create an NSX-T Logical Switch, click Advanced Networking & Security in the top toolbar, click Switching in the left navigation pane and then click Add under the Switches tab. Name the Logical Switch, select the Transport Zone you created from the dropdown, use the default Uplink Teaming Policy, ensure Admin Status is set to Up and enter the VLAN ID you wish to host on the Logical Switch. Use the defaults on the Switching Profiles tab and click Add.
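For scripted builds, the same VLAN-backed Logical Switch can be created with a POST to /api/v1/logical-switches. The body below is an illustrative sketch with placeholder values; check the field names against the NSX-T API guide for your release:

```python
# Illustrative request body for POST /api/v1/logical-switches, matching
# the UI steps above. Values are placeholders; verify field names
# against your release's API guide.
logical_switch_body = {
    "display_name": "ls-vlan-100",          # placeholder switch name
    "transport_zone_id": "<tz-vlan-backed-uuid>",
    "admin_state": "UP",
    "vlan": 100,                            # VLAN ID to host on the switch
}
```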

At this point, we’re ready to change VM NIC port groups from the VDS to the NSX Logical Switch. Move the required VMs by editing each VM and changing the Port Group of its Network Adapter to the NSX Logical Switch created for it.

Now that we’ve migrated VMs to NSX Logical Switches, we’ll configure Active Directory and enable IDFW.

To configure Active Directory integration in NSX-T, navigate to System and then Active Directory. Click Add Active Directory and you’ll be prompted to enter the FQDN (domain name), the NetBIOS name, the Base Distinguished Name (Base DN) and an LDAP synchronization interval. Next, click LDAP and enter the LDAP server IP/FQDN, the port and an AD account with permissions to query the entire directory tree.

At this point, you’re now ready to create NSX Security Groups (NSGroups) and map Active Directory groups to them.

To create an NSX-T NSGroup for an Active Directory user group, click Advanced Networking & Security, then Inventory, then Groups. Click Add, name the group and then select members on the Members tab with the object type AD Group. The search field in the available groups list helps filter large lists of groups, so make good use of it. Select and move the desired AD Group to the Selected field and click Add.

Now create NSGroups for your VM workloads. In my lab, we’re going to demonstrate a simple prod / non-prod security policy with NSX Security Tags. We add a new NSGroup, name it prod and then, on the Membership Criteria tab, choose Virtual Machine – Tag – Equals – prod – Scope – Equals – (blank, or create a scope). After that, we do the same to create an NSGroup for nonprod.
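Conceptually, that tag-based Membership Criteria is just a filter over VM tags. The sketch below (purely illustrative, not the NSX grouping engine) shows how the prod criterion selects VMs; the VM names and tag layout are invented for the example:

```python
def nsgroup_members(vms, tag, scope=""):
    """Evaluate a 'Virtual Machine - Tag - Equals' membership criterion
    like the prod / nonprod NSGroups above (illustrative only, not the
    NSX grouping engine). Each VM dict carries a list of tag entries."""
    return [vm["name"] for vm in vms
            if any(t["tag"] == tag and t["scope"] == scope
                   for t in vm.get("tags", []))]

# Hypothetical lab inventory with prod / nonprod security tags applied.
vms = [
    {"name": "web01", "tags": [{"tag": "prod", "scope": ""}]},
    {"name": "web02", "tags": [{"tag": "nonprod", "scope": ""}]},
]
```

Tag a new VM prod and it lands in the prod NSGroup automatically, which is what makes tag-based criteria so much easier to operate than static membership.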

We’re closing in on the final step: configuring Identity Firewall rules for our prod and non-prod applications, based on our Active Directory-mapped NSX Security Groups.

For the lab, we create a rule granting NetAdmin group users access to any system, a rule for production access and a rule for non-production access. Then we create two app-to-app traffic rules to allow prod-to-prod and non-prod-to-non-prod. We follow that up with a catch-all deny in the app-to-app firewall rule section to block any traffic not explicitly allowed, and we’re off and running with our lab setup.
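Conceptually, the resulting policy evaluates top-down, first match wins. The sketch below models the lab rule set; it is purely illustrative, and real IDFW matching keys on the logged-in user’s AD group (via Guest Introspection or log scraping) rather than the simple source labels used here:

```python
def evaluate(rules, flow):
    """First-match evaluation of the lab DFW policy above (illustrative
    only; real IDFW resolves the logged-in user's AD group, which is
    simplified to a plain source label here)."""
    for rule in rules:
        if (rule["src"] in (flow["src"], "any")
                and rule["dst"] in (flow["dst"], "any")):
            return rule["action"]
    return "ALLOW"  # NSX-T's default rule allows unless changed

# Hypothetical lab rule set, top-down order as described above.
lab_rules = [
    {"src": "grp-netadmins", "dst": "any", "action": "ALLOW"},
    {"src": "prod", "dst": "prod", "action": "ALLOW"},
    {"src": "nonprod", "dst": "nonprod", "action": "ALLOW"},
    {"src": "any", "dst": "any", "action": "DROP"},  # catch-all deny
]
```

Because the catch-all deny sits last, anything not matched by the prod-to-prod or non-prod-to-non-prod allows gets dropped, while NetAdmin users keep access everywhere.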

As you can see from the Identity Firewall example in my lab, controlling access based on Active Directory user groups and NSX VM security tags gives security teams an easy-to-use, uniform firewall solution for the data center, cloud and PaaS environments alike. As always, if you’ve got questions or something you’d like to see demonstrated, hit me up here at virtuallyread.com, on Twitter or on LinkedIn. Until the next post, enjoy Active Directory integration with NSX-T and #runNSX!

YouTube video recording playlist: Deploying VMware NSX-T Identity Firewall on VLAN-backed Networking without an Overlay