VMC NSX ALB Load Balancing with HCX Network Extension

  1. Introduction
  2. Planning and Implementation
    1. NSX ALB Cloud Type
    2. NSX ALB Topology and HCX Network Extension
    3. NSX ALB and HCX MON
    4. NSX ALB Service Engine Placement
    5. Service Resilience

Introduction

Since VMware Cloud on AWS SDDC version 1.9, NSX Advanced Load Balancer (NSX ALB) has been available as a customer-managed solution. In this setup, the NSX ALB controllers and Service Engines (SEs) are deployed manually as virtual machines through vCenter in a VMware Cloud on AWS SDDC. Several VMware blogs and Tech Zone articles cover deploying NSX ALB on VMware Cloud on AWS. For further information, please refer to the following links:

When VMC customers migrate their on-premises workloads to VMware Cloud on AWS, the migration usually occurs in waves. During this process, some workloads have already been migrated to the cloud while others are still running on-premises on the same network. In such situations, the VMware Hybrid Cloud Extension (HCX) Network Extension (NE) is extensively used to provide Layer 2 network connectivity between the on-premises workloads and those already migrated to a VMware Cloud on AWS SDDC. The widespread use of HCX is largely due to its ability to simplify the migration process by preserving virtual machines’ IP and MAC addresses after migration. However, implementing load balancing in this hybrid cloud environment can introduce unique complexities. The main focus of this designlet is on how to use the NSX ALB load balancer to provide load-balancing services to workloads on an HCX Layer 2 extension during the migration to a VMware Cloud on AWS SDDC.

Planning and Implementation

NSX ALB Cloud Type

At the time of writing, No-Orchestrator is the only supported NSX ALB cloud type for a VMware Cloud on AWS SDDC. In a No-Orchestrator cloud, there is no integration between NSX ALB and the underlying infrastructure, so SEs are not automatically deployed or configured through the NSX ALB control plane.
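As a rough sketch of what this looks like against the NSX ALB (Avi) REST API, a No-Orchestrator cloud corresponds to the `CLOUD_NONE` cloud type. The controller address and cloud name below are placeholders, and the request itself is shown only as a comment:

```python
# Hedged sketch: building the cloud object for a No-Orchestrator cloud in
# NSX ALB (Avi). The controller URL and cloud name are invented placeholders.
import json

CONTROLLER = "https://avi-controller.example.com"  # placeholder address


def no_orchestrator_cloud(name: str) -> dict:
    """Build a minimal No-Orchestrator cloud object.

    With vtype CLOUD_NONE the controller performs no infrastructure
    integration, so SEs must be deployed manually through vCenter.
    """
    return {"name": name, "vtype": "CLOUD_NONE"}


payload = no_orchestrator_cloud("vmc-no-orch")
# The object would then be created with: POST {CONTROLLER}/api/cloud
print(json.dumps(payload))
```

Because the cloud performs no orchestration, SE image download, OVA deployment, and interface configuration all remain manual vCenter tasks.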

NSX ALB Topology and HCX Network Extension

The NSX ALB inline topology requires an isolated Tier-1 (T1) gateway that serves as the default gateway for the server pool members. Workloads running on an HCX Layer 2 Extension Network cannot meet this requirement, because their default gateway remains the gateway of the extended network rather than an isolated T1. Inline topology is therefore unsupported for workloads on HCX Layer 2 Extension Networks, and all topology discussions in the subsequent sections center on the one-arm topology.

NSX ALB and HCX MON

HCX Mobility Optimized Networking (MON) is an enterprise-level capability of the HCX Network Extension feature. By enabling selective cloud-local routing within a VMware Cloud on AWS SDDC, MON-enabled network extensions optimize traffic flows for migrated virtual machines, avoiding a lengthy round-trip network path through the on-premises gateway. However, it’s important to note that activating MON on the data interface of the NSX ALB Service Engines (SEs) is an unsupported configuration and will not work as desired. One reason for this limitation is that MON works by having HCX add host routes on the T1 router for every IP address configured on a virtual machine. HCX has no visibility into the virtual IPs (VIPs) configured on an NSX ALB SE, so it cannot add the host routes those VIPs would need.
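The consequence of the missing host routes can be illustrated with a deliberately simplified model. This is not HCX or NSX code; the forwarding decision is reduced to a single lookup, and all addresses are invented:

```python
# Toy model (not actual HCX/NSX code): MON installs host routes on the T1
# only for VM interface IPs it discovers. VIPs exist only in the SE's
# load-balancer configuration, so MON never learns them, and traffic to a
# VIP falls back to the path via the on-premises gateway.

def next_hop(dst_ip: str, mon_host_routes: set[str]) -> str:
    """Return where the T1 forwards traffic for dst_ip in this model."""
    if dst_ip in mon_host_routes:
        return "cloud-local (T1 host route installed by MON)"
    return "on-premises gateway (over the L2 extension)"


# MON discovered the migrated backend server's IP, but not the SE-hosted VIP.
mon_routes = {"192.168.10.21"}  # invented backend server address
print(next_hop("192.168.10.21", mon_routes))   # backend: routed cloud-locally
print(next_hop("192.168.10.100", mon_routes))  # VIP: hairpins via on-premises
```

The model makes the constraint concrete: any destination MON cannot discover, including every SE-hosted VIP, misses the optimized cloud-local path.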

NSX ALB Service Engine Placement

Option 1: Service Engine on a routed network segment

As shown in Figure 1, a network (network segment 1) is extended from an on-premises data center to a VMware Cloud on AWS SDDC. Service Engines (SEs) are connected to the default compute gateway (CGW) through a routed network segment (network segment 2) within the SDDC and provide load balancing for backend servers on network segment 1. In this topology, HCX MON can be enabled for migrated backend servers or for clients of the load-balancing service to ensure low latency. The VIP of the load-balancing service is allocated from the IP subnet designated for network segment 2. VMware recommends this setup because the optimized network path provided by HCX MON significantly enhances network efficiency and performance.
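The addressing rule in Option 1 can be sketched as follows. All subnets and addresses here are invented examples: the VIP is drawn from the routed segment 2 so the CGW can route to it natively, while the pool members stay on the extended segment 1:

```python
# Illustrative sketch with invented addresses: in Option 1 the SE data NIC
# and the VIP both live on routed network segment 2, while the pool members
# remain on the extended network segment 1.
import ipaddress

SEGMENT_2 = ipaddress.ip_network("10.60.2.0/24")     # routed segment (assumed)
SEGMENT_1 = ipaddress.ip_network("192.168.10.0/24")  # extended segment (assumed)

virtual_service = {
    "name": "vs-web",                              # hypothetical service name
    "vip": "10.60.2.50",                           # allocated from segment 2
    "pool": ["192.168.10.21", "192.168.10.22"],    # backends on segment 1
}

# The VIP must come from segment 2's subnet for native routing via the CGW.
assert ipaddress.ip_address(virtual_service["vip"]) in SEGMENT_2
assert all(ipaddress.ip_address(m) in SEGMENT_1 for m in virtual_service["pool"])
```

Keeping the VIP on the routed segment is what lets clients reach it without hairpinning through the on-premises router.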

Figure 1: NSX ALB SE on Routed Network

Option 2: Service Engine on HCX Layer 2 Extension Network

As illustrated in Figure 2, a network (network segment 1) is extended from an on-premises data center to a VMware Cloud on AWS SDDC, and MON is enabled for this extended segment. The data interface of the NSX ALB SEs also connects to network segment 1. MON can be activated for the backend servers, as shown in Figure 2; however, enabling MON on the data interface of the SEs remains unsupported. Consequently, for clients on other networks (for example, network segment 3 in Figure 2) to reach the VIP, the network path must pass through the on-premises router. This increases end-to-end latency compared with Option 1 and may result in additional egress traffic from AWS to on-premises. It is therefore recommended to avoid Option 2 when possible.

Figure 2: NSX ALB SE on HCX Layer 2 Extension

Service Resilience

HCX Network Extension High Availability (HA) protects extended networks against the failure of a Network Extension appliance at either the source or the remote site. When using NSX ALB load balancing with HCX Network Extension, it is highly recommended to enable HCX Network Extension HA for better resilience.

Thank you for reading!
