Designlet: VMware Cloud on AWS HCX Mobility Optimized Networking (MON)

Introduction

This document provides you with recommendations and guidelines to plan and implement HCX Mobility Optimized Networking (MON) with HCX Network Extension (NE).

HCX Mobility Optimized Networking (MON) is used along with HCX Network Extension (NE) to prevent traffic tromboning and inefficient traffic flows for VMs that have been migrated to VMware Cloud.

Summary and Considerations

Use Case

Use MON when enabling NE for multiple networks with active workloads both on-premises and in VMware Cloud, or when local egress is required. Local egress can be used for internet access and for access to AWS services such as S3.

Pre-requisites

  • Working HCX deployment running on version R140 or later
  • HCX Enterprise license enabled
  • NSX-T 3.0 in your VMware Cloud SDDC
  • VMs participating in MON must have VMware Tools installed

General Considerations/Recommendations

  • Intended for site-to-site service mesh. Does not provide routing optimization for multi-site extension.
  • Routing advertisements are limited to NSX-T routing boundaries.
  • Access to resources beyond the NSX-T routing boundary requires source NAT to be configured.

Performance Considerations

Enabling MON provides lower latency and higher bandwidth between VMs on extended networks in VMware Cloud, and lower latency to the internet due to local egress. There is no additional performance impact from using MON beyond the considerations that apply to Network Extension itself.

Cost implications

Egress charges apply to VM traffic on extended networks communicating from VMware Cloud to on-premises. These charges vary depending on whether your HCX service mesh is running over the internet or over Direct Connect (DX). MON can reduce egress charges by providing optimal routing for workload traffic, preventing charges related to traffic tromboning for VMs on extended networks.

Documentation reference

HCX User Guide

Last Updated

April 2021

Background

In VMware Cloud on AWS, network connectivity to ESXi hosts in the management cluster (Cluster-1) is shared between multiple services. It is important to understand how network traffic generated by or for one service can impact others. In particular, the ESXi host where an active Edge is running determines the capacity available for north-south traffic (traffic between the SDDC and an external location, such as on-premises, the connected VPC, or Transit Connect). Other services that consume network resources on that same host can reduce the capacity available to the Edge and therefore limit north-south traffic throughput.

In this document, we also refer to network capacity in packets per second (PPS) rather than in throughput terms such as Gigabits per second (Gbps). This is because interfaces process individual packets at a rate that is largely independent of packet size. At a given PPS rate, larger packets transfer more data, resulting in higher bandwidth; for example, 1 million PPS corresponds to roughly 12 Gbps with 1,500-byte packets, but only about 0.5 Gbps with 64-byte packets.

Planning and Implementation

Planning

HCX MON leverages integration between HCX and NSX-T to provide optimized traffic flows for VMs that are attached to an extended network and have been migrated to VMware Cloud. Traffic between VMs on the same network is handled natively by the HCX Network Extension appliances, but routed traffic between networks must traverse a gateway. Under normal circumstances this leads to traffic tromboning, because communication from a cloud VM has to travel back to the on-premises gateway before it can be routed to another network. MON addresses this by inserting host routes (a /32 route for each MON-enabled VM) into the NSX-T routing table, and by allowing the network gateway to be present in the cloud. This can greatly reduce the latency for communication between cloud-based VMs, since all communication stays within VMware Cloud, and it also allows for local egress of internet-bound traffic. Policy routes can be configured for traffic that should be routed to the on-premises network instead of following the routing table of the local gateway.
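
The effect of these host routes can be illustrated with a short conceptual sketch. The following Python snippet is purely illustrative (it does not call any HCX or NSX-T API, and the 192.168.10.0/24 network and the addresses in it are hypothetical examples); it shows why a /32 host route, being the longest possible prefix, is preferred over the broader extended-network route, so routed traffic to a MON-enabled VM stays within VMware Cloud:

```python
# Conceptual illustration of longest-prefix matching with a MON host route.
# All networks and addresses are hypothetical; this is not HCX or NSX-T code.
import ipaddress

routing_table = {
    # The extended network as a whole, still anchored to the on-premises gateway
    # over the HCX Network Extension.
    ipaddress.ip_network("192.168.10.0/24"): "via Network Extension (on-premises side)",
    # Host route injected for a migrated, MON-enabled VM.
    ipaddress.ip_network("192.168.10.25/32"): "local segment in the SDDC (cloud side)",
}

def next_hop(destination: str) -> str:
    """Return the next hop chosen by longest-prefix match, as a router would."""
    addr = ipaddress.ip_address(destination)
    matches = [net for net in routing_table if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)
    return routing_table[best]

print(next_hop("192.168.10.25"))  # MON-enabled VM: the /32 wins, traffic stays in the cloud
print(next_hop("192.168.10.40"))  # no host route: traffic follows the extended-network path
```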


Figure 1 - Example HCX deployment with Mobility Optimized Networking enabled

Implementation

HCX MON can be enabled per VM or for an entire extended network. After following the steps below to enable MON, optimized routing applies to migrated VMs; VMs that have not been migrated continue to communicate with their local gateway.

To enable MON:

  1. In the HCX Manager UI, navigate to Services > Network Extension.
  2. In the Network Extension screen, expand a site pair to see the extended networks. Network Extensions enabled for MON are highlighted with an icon.
  3. Expand each extension to display network details.
  4. Select a Network Extension and enable the slider for Mobility Optimized Networking. Enabling MON applies to all subsequent events, such as VM migrations and new VMs connected to the network. VMs in the source environment and VMs without VMware Tools installed are ineligible for MON.
  5. For any existing migrated VMs requiring MON, follow these additional steps:
      • Select a VM and expand the row. You can select multiple VMs using the check box next to each workload.
      • Select Target Router Location and choose the cloud option from the drop-down menu.
      • Select the Proximity Conversion Type: “Immediate Switchover” or “Switchover on VM Event”. “Immediate Switchover” transfers the router location immediately; if a workload VM has ongoing flows to the source router, those flows will be impacted. “Switchover on VM Event” transfers the router location upon VM events such as NIC disconnect and connect operations, and VM power cycle operations.
      • Click Submit. All selected VM workloads are configured for MON, which is indicated by a MON icon.

Verifying that MON is working is as simple as measuring the latency from a VM to its gateway with a ping before and after MON is enabled. Viewing the NSX-T routing table in HCX will also show that host routes have been installed for MON-enabled VMs.
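
As a minimal sketch of such a latency check (assuming a Linux guest with the standard ping utility; the gateway address below is a hypothetical example), the following script measures the average round-trip time to the gateway. Run it from a migrated VM before and after enabling MON and compare the results:

```python
# Minimal before/after latency check, assuming a Linux guest with `ping` available.
# Replace GATEWAY with the default gateway of the extended network.
import re
import subprocess

GATEWAY = "192.168.10.1"  # hypothetical gateway address

def average_rtt_ms(target: str, count: int = 5) -> float:
    """Run ping and parse the average round-trip time in milliseconds."""
    output = subprocess.run(
        ["ping", "-c", str(count), target],
        capture_output=True, text=True, check=True,
    ).stdout
    # The Linux ping summary line looks like: rtt min/avg/max/mdev = 0.31/0.45/0.62/0.10 ms
    match = re.search(r"= [\d.]+/([\d.]+)/", output)
    if match is None:
        raise RuntimeError("Could not parse ping output")
    return float(match.group(1))

print(f"Average RTT to {GATEWAY}: {average_rtt_ms(GATEWAY):.2f} ms")
```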

Policy routes can be configured to send traffic through the original source gateway at the customer site instead of the cloud gateway. This can be useful for security appliances or other compliance requirements. Policy routes should be configured for all networks at the customer site, and all RFC 1918 ranges are configured as policy routes by default. These default policy routes may need to be adjusted based on your use case and IP address scheme. Policy routes can also be used to optimize access to S3 and other native services, but configuration will vary depending on your network topology and requirements.

If a default route is being advertised to the SDDC to direct all internet traffic to the customer site, the HCX policy route should also include a “0.0.0.0/0” entry to direct internet-bound traffic to the on-premises gateway. This ensures routing for internet-bound traffic remains symmetric.
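
The routing decision that policy routes impose on MON-enabled VMs can be summarized with another illustrative sketch. This is not an HCX API; the entries simply mirror the default behavior described above, with the RFC 1918 ranges redirected to the on-premises gateway and the optional 0.0.0.0/0 entry left commented out:

```python
# Conceptual policy-route evaluation for a MON-enabled VM. Illustrative only;
# the real evaluation is performed by HCX/NSX-T, not by this code.
import ipaddress

# (network, redirect_to_peer): True means "send via the on-premises gateway".
policy_routes = [
    (ipaddress.ip_network("10.0.0.0/8"), True),
    (ipaddress.ip_network("172.16.0.0/12"), True),
    (ipaddress.ip_network("192.168.0.0/16"), True),
    # Uncomment if a default route is advertised from on-premises (see above):
    # (ipaddress.ip_network("0.0.0.0/0"), True),
]

def egress_path(destination: str) -> str:
    """Longest-prefix match against the policy routes; no match means local egress."""
    addr = ipaddress.ip_address(destination)
    matches = [(net, to_peer) for net, to_peer in policy_routes if addr in net]
    if not matches:
        return "cloud gateway (local egress / NSX-T routing table)"
    net, to_peer = max(matches, key=lambda item: item[0].prefixlen)
    return "on-premises gateway (redirect to peer)" if to_peer else "cloud gateway (local)"

print(egress_path("10.1.2.3"))    # RFC 1918 destination: redirected on-premises by default
print(egress_path("52.95.0.10"))  # public destination: local egress unless 0.0.0.0/0 is added
```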

To configure a policy route:

  1. In the HCX Manager UI, navigate to Network Extension.
  2. In the Network Extension screen, click the Advanced tab.
  3. Click Policy Routes. A new screen appears with options to Add or Remove networks.
  4. Using the pull-down menu, select a destination site. Click Add to create an entry for each network whose traffic you want routed through the source gateway.
  5. Complete the entries for Network IP Address and Prefix Length. By default, “Redirect to Peer” is selected. Optionally, you can specify a policy that blocks a network from being redirected by unchecking “Redirect to Peer”.
  6. Click Submit.


Filter Tags

Cloud Migration, General, VMware Cloud on AWS, Technical Guide, Intermediate, Design, Migrate