Designlet: VMware Cloud on AWS HCX Network Extension

Introduction

HCX Network Extension (NE) provides a Layer 2 VPN (L2VPN) to extend a broadcast domain from a customer site into a VMware Cloud on AWS SDDC. NE functionality is provided by dedicated virtual appliances at both sites.

Summary and Considerations

Use Case

NE provides Layer 2 adjacency between VMs at the customer site and VMs that have been migrated to VMware Cloud. This serves as a stopgap that lets VMs in the same VLAN/port group keep communicating while migrations are in progress, and it is especially useful for customers who cannot re-IP VMs during the migration process. NE can also be used in disaster recovery scenarios.

Pre-requisites

  • Working HCX deployment and service mesh
  • NSX-T 4.3 in your VMware Cloud SDDC (this requirement is satisfied in all new SDDCs)
  • vSphere Distributed Switch version 5.1.0 or higher at the source site when extending vSphere Distributed Switch-based networks
  • NSX-V 6.4 or higher is required for extending NSX-V-based networks
  • NSX-T 2.4 or higher is required for extending NSX-T-based networks
  • HCX Enterprise for NE HA functionality (included with VMware Cloud on AWS)

General Considerations/Recommendations

  • A single NE appliance can extend up to 8 networks, and HCX Manager supports up to 100 NE appliances.
  • A network can be extended to a maximum of 3 destinations.
  • The default gateway for an extended network exists at the customer site. This can lead to sub-optimal routing for cloud-based VMs. HCX Mobility Optimized Networking can be used to address this scenario.
  • Do not extend networks used for HCX network profiles, vSphere management networks, or other VMkernel networks (e.g., vMotion/vSAN).
  • NE does not detect or mitigate network loops or IP/MAC conflicts. Operate with caution when extending networks from multiple on-premises vCenter Servers that share virtual machine VLANs.
  • NE is a tunnel-based technology that encapsulates traffic between sites. Depending on the MTU of the networks in use, packet fragmentation can occur. HCX Traffic Engineering can be used to optimize the TCP MSS and reduce fragmentation between VMs connected via NE (see the sketch after this list).
  • Extending vSphere Standard Switch (vSS)-based networks directly is not supported; a distributed switch must be present.
  • NSX networks can be extended. The NSX Manager must be registered with HCX to extend NSX networks, and all NSX deployment requirements for HCX apply.
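
To make the fragmentation consideration above concrete, here is a short sketch of the MSS arithmetic. The 150-byte tunnel overhead is an assumed round number for illustration, not a published HCX figure; actual overhead depends on the encapsulation and encryption in use.

    # Why tunnel encapsulation can force fragmentation (illustrative).
    IP_HEADER = 20    # bytes, IPv4 header without options
    TCP_HEADER = 20   # bytes, TCP header without options

    def tunnel_safe_mss(path_mtu: int, tunnel_overhead: int = 150) -> int:
        """Largest TCP payload that fits in one tunneled packet."""
        return path_mtu - tunnel_overhead - IP_HEADER - TCP_HEADER

    default_mss = 1500 - IP_HEADER - TCP_HEADER  # 1460 on a 1500-byte MTU
    print(default_mss, tunnel_safe_mss(1500))    # 1460 vs. 1310

Segments larger than the tunnel-safe value are fragmented after encapsulation; clamping the MSS, as HCX Traffic Engineering does, avoids this.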

Cost implications

Egress charges apply to traffic from VMs on extended networks communicating from VMware Cloud to on-premises. These charges vary depending on whether your HCX service mesh runs over the internet or over AWS Direct Connect (DX).

Performance Considerations

An NE appliance is capable of 4-6 Gbps throughput. Additional appliances can be deployed to scale throughput.
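
As a rough sizing sketch that combines this throughput figure with the 8-networks-per-appliance limit from the considerations above (the inputs are illustrative):

    import math

    # Rough NE sizing using the limits cited in this designlet.
    NETWORKS_PER_APPLIANCE = 8   # extension limit per NE appliance
    GBPS_PER_APPLIANCE = 4       # conservative end of the 4-6 Gbps range

    def appliances_needed(network_count: int, peak_gbps: float) -> int:
        by_networks = math.ceil(network_count / NETWORKS_PER_APPLIANCE)
        by_throughput = math.ceil(peak_gbps / GBPS_PER_APPLIANCE)
        return max(by_networks, by_throughput)

    print(appliances_needed(20, 10))  # 3 appliances for this example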

Documentation reference

HCX User Guide

Last Updated

August 2023

Background

In VMware Cloud on AWS, network connectivity to ESXi hosts in the management cluster (Cluster-1) is shared among multiple services, so it is important to understand how network traffic generated by or for one service can impact others. In particular, the ESXi host where an active Edge is running determines the capacity available for north-south traffic (traffic between the SDDC and an external location, such as on-premises, the connected VPC, or Transit Connect). Other services that consume network resources on that same host reduce the capacity available to the Edge and therefore limit north-south throughput.

In this document, we also refer to network capacity in packets per second (PPS) rather than throughput in gigabits per second (Gbps). Interfaces process individual packets at a rate that is largely independent of packet size, so at a given PPS rate, larger packets transfer more data and yield higher bandwidth.
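
A quick worked example of that relationship (the 1M PPS rate and the packet sizes are illustrative):

    # Bandwidth at a fixed packet rate depends on packet size:
    # Gbps = PPS x packet_size_bytes x 8 / 1e9
    def gbps(pps: float, packet_bytes: int) -> float:
        return pps * packet_bytes * 8 / 1e9

    RATE = 1_000_000             # 1 million packets per second
    print(gbps(RATE, 64))        # ~0.5 Gbps with minimum-size packets
    print(gbps(RATE, 1500))      # 12.0 Gbps with full-size packets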


Planning and Implementation

Planning

HCX Network Extension (NE) provides a Layer 2 VPN between a customer site and an AWS-based SDDC. This service is fully integrated into HCX and provides functionality similar to the NSX L2 VPN. Running an alternative bridging solution, such as the NSX L2 VPN, alongside NE is not supported, so settle on a single L2 extension technology for your migration or disaster recovery needs.

HCX NE appliances are deployed as a pair, with one running at the source site and the other at the destination site. The encrypted tunnel between NE appliances uses UDP port 4500. If there are any firewalls in the path between the appliances, they should be configured to allow communication on this port.
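
As a minimal reachability sketch under stated assumptions (the peer address is a placeholder; on Linux an actively rejected UDP port surfaces as an ICMP error, while silence can mean either allowed or silently dropped, so this is not a definitive test):

    import socket

    # Hypothetical probe of the NE tunnel port. This only detects an
    # actively rejected port; it cannot prove the tunnel will form.
    PEER = "203.0.113.10"  # placeholder for the remote NE appliance IP
    PORT = 4500            # UDP port used by the NE tunnel

    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.settimeout(2.0)
    s.connect((PEER, PORT))
    s.send(b"probe")
    try:
        s.recv(1)
    except ConnectionRefusedError:
        print("UDP 4500 actively rejected (ICMP port unreachable)")
    except socket.timeout:
        print("no rejection seen; port may be open or silently filtered")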

NE is an optional service, and customers should understand the trade-offs involved in using it. There are alternatives to NE, such as assigning new IPs to VMs as they are migrated or moving a network with all attached VMs to the cloud in a single migration event; NE is a valuable tool when neither of these options is feasible. While the NE appliance is designed for reliability and quick boot, it is not highly available by default (vSphere High Availability can be used to mitigate this concern). As of HCX version 4.3, customers can configure a High Availability (HA) pair when multiple NE appliances are deployed as part of a service mesh. Note that NE appliances must have no active network extensions before they can be placed into an HA pair. The Network Extension HA mechanism uses a heartbeat interval of 500 ms; three missed heartbeats trigger failure detection and failover from the active to the standby node, as the sketch below illustrates.
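
A minimal sketch of that failure-detection arithmetic (the monitoring loop is illustrative; only the 500 ms interval and the three-miss threshold come from the paragraph above):

    import time

    HEARTBEAT_INTERVAL = 0.5   # seconds, per the NE HA mechanism
    MISS_THRESHOLD = 3         # missed heartbeats before failover

    # Worst-case detection time before the standby node takes over:
    print(HEARTBEAT_INTERVAL * MISS_THRESHOLD)  # 1.5 seconds

    def monitor(receive_heartbeat) -> None:
        """Illustrative detector: declare failure after 3 silent intervals."""
        misses = 0
        while misses < MISS_THRESHOLD:
            time.sleep(HEARTBEAT_INTERVAL)
            misses = 0 if receive_heartbeat() else misses + 1
        print("peer declared failed; promoting standby node")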

Figure - HCX NE High Availability

As of HCX 4.0, an in-service upgrade option for NE appliances is included, which significantly reduces the downtime from a software upgrade to a matter of seconds.

Using NE with other HCX services can provide performance benefits and optimized traffic flows. HCX Traffic Engineering performs TCP Flow Conditioning, which dynamically adjusts the MSS to reduce fragmentation for NE traffic. HCX Mobility Optimized Networking (MON) optimizes traffic flows for VMs that are attached to an extended network and have been migrated to VMware Cloud, by propagating /32 host routes for VMs migrated over MON-enabled network extensions.

 


Figure 1 - Example HCX Service Mesh with Network Extension

 

Implementation

Eligible networks can be extended via the HCX Manager UI. Follow the steps below to extend a network.

To extend a network

  1. In the HCX Manager UI, navigate to Services > Network Extension. Any existing network extensions are displayed on this screen.
  2. Select Extend Networks.
  3. If you have multiple service meshes, select the appropriate service mesh from the dropdown list.
  4. Select the network(s) you want to extend, and click Next.
  5. Using the dropdowns, select the NSX-T Tier-1 router that the extended network(s) will be attached to, and the NE appliance to use.
  6. Provide the gateway IP address and prefix length in CIDR format (e.g., 192.168.10.1/24), and click Submit. (A validation sketch follows this list.)
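
The gateway and prefix format from step 6 can be sanity-checked before entry with Python's standard ipaddress module; a minimal sketch (the address values are examples only):

    import ipaddress

    # Validate a "gateway/prefix" string such as 192.168.10.1/24 and
    # confirm the gateway is a usable host address in its network.
    def check_gateway(cidr: str) -> None:
        iface = ipaddress.ip_interface(cidr)
        net = iface.network
        if iface.ip in (net.network_address, net.broadcast_address):
            raise ValueError(f"{iface.ip} is not a usable host address")
        print(f"gateway {iface.ip} on {net} ({net.num_addresses - 2} usable hosts)")

    check_gateway("192.168.10.1/24")  # gateway 192.168.10.1 on 192.168.10.0/24 (254 usable hosts)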

HCX will begin the process of extending the network, and a status of Extension complete appears once the extension is finished. To verify that NE is working, migrate a VM that is connected to an extended network, then confirm communication between the migrated VM and a local VM on the same network. A simple ping should show slightly increased latency to the migrated VM, indicating that the traffic is traversing the L2VPN tunnel.
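
A small sketch of that verification step (the hostnames are placeholders, and the ping summary parsing assumes Linux-style output):

    import re
    import subprocess

    # Compare average round-trip time to a local VM and to a migrated VM
    # on the same extended network; the migrated VM should be slower.
    def avg_rtt_ms(host: str, count: int = 5) -> float:
        out = subprocess.run(["ping", "-c", str(count), host],
                             capture_output=True, text=True, check=True).stdout
        # Linux summary line: "rtt min/avg/max/mdev = 0.321/0.400/..."
        return float(re.search(r"= [\d.]+/([\d.]+)/", out).group(1))

    print(avg_rtt_ms("vm-local.example"))     # baseline, same site
    print(avg_rtt_ms("vm-migrated.example"))  # higher, via the tunnel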

You can view information and metrics about extended networks, including local/remote MAC addresses and the amount of data transferred.

To view network extension details

  1. Navigate to Infrastructure > Interconnect.
  2. Under the appropriate service mesh, click View Appliances.
  3. Expand the desired network extension appliance, and click Network Extension details.
  4. To view metrics and information for a specific network, click Show More Details.
