Designlet: Understanding VMware Cloud on AWS Network Performance

Introduction

To fully understand network performance in VMware Cloud on AWS, it helps to first understand the architecture, the components involved in network forwarding, and how they interact. This blog provides a foundation for understanding these topics and illustrates the flow of network traffic.

Summary and Considerations

General Considerations
  • VMware Cloud on AWS uses various models of AWS .metal instances for ESXi hosts. Each instance has a single network interface card that is shared by all traffic, including storage, network and management.
  • VMware Cloud on AWS uses a packet processing model where performance is typically limited by the packet rate. To understand networking performance, it is important to consider both throughput, measured in Gigabits per second (Gbps), and packet rate, measured in Packets Per Second (PPS).
  • The primary performance measurement in VMware Cloud on AWS is PPS.
  • Network performance varies with both the CPU performance and the network interface of the instance type used for the management cluster.
  • Different traffic paths have different performance characteristics, depending, for example, on whether the traffic must pass through the Edge VM.
Documentation References: VMware Cloud on AWS Management Cluster Planning
Scalability: ConfigMax
Last Updated: March 2023

 

VMware Cloud on AWS Network Architecture

VMware Cloud on AWS presents its networking as multiple logical routers consisting of NSX Tier-0 (T0) and Tier-1 (T1) routers. As shown below, the NSX Edge is a T0 router with logical connections to the underlying AWS components, the Internet Gateway (IGW), Direct Connect (DX), VMware Transit Connect (vTGW) and the Connected VPC, as well as to the T1 routers. The default T1 routers in every SDDC are the Management Gateway (MGW) and the Compute Gateway (CGW). Additional T1 routers can be configured inside the SDDC based on customer design and requirements but are not pictured for simplicity.

Default SDDC Topology with External Connections

This architecture is implemented on AWS instances with a single physical NIC (pNIC), an AWS Elastic Network Adapter (ENA). ENA port speeds vary depending on the instance used but do not directly imply a set performance metric. ENAs are Amazon-specific adapters that offload traffic to the AWS VPC, and because of this processing, traffic is measured in PPS. The published throughput numbers reflect full-size, jumbo-frame flows, as reflected here. Because each host has only a single ENA, all workload traffic (ingress or egress) to and from the SDDC must traverse the same ENA twice: for traffic that flows through the NSX Edge, each packet must ingress the T0, be routed, and egress the T0 via the same ENA.

AWS Host Flow

This flow holds for all traffic ingressing or egressing an SDDC, including VPN, Direct Connect, VMware Transit Connect, Connected VPC and East/West traffic traversing hosts in an SDDC. Other SDDC traffic that does not traverse the NSX Edge T0 includes vSAN, vSphere Replication and vMotion. It's important to note that while this traffic does not traverse the NSX Edge T0, it still counts towards the total PPS metric whenever it traverses the host running the active NSX Edge. An example of a non-Edge flow is a vMotion between two other hosts: it counts against the PPS metric of those specific hosts, but because it does not involve the host with the active NSX Edge, it is not considered when looking at the SDDC's network performance as a whole, as the sketch below illustrates.
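
To make that accounting concrete, here is a minimal Python sketch, with entirely hypothetical hosts and packet rates, that tallies which flows count against each host's PPS. Only the total on the host running the active NSX Edge matters when judging the SDDC's north/south performance.

    # Hypothetical flows: (description, hosts whose ENA the packets cross, packets per second).
    # For simplicity this ignores that Edge-routed packets cross the Edge host's ENA twice.
    flows = [
        ("Internet ingress via the NSX Edge T0", ["host-1 (active Edge)"], 150_000),
        ("vMotion between host-2 and host-3", ["host-2", "host-3"], 80_000),
        ("vSAN traffic between host-1 and host-3", ["host-1 (active Edge)", "host-3"], 40_000),
    ]

    per_host_pps = {}
    for _description, hosts, pps in flows:
        for host in hosts:
            per_host_pps[host] = per_host_pps.get(host, 0) + pps

    for host, pps in sorted(per_host_pps.items()):
        print(f"{host}: {pps:,} PPS")

    # host-1 (active Edge): 190,000 PPS  <- the figure that matters for north/south performance
    # host-2: 80,000 PPS and host-3: 120,000 PPS carry vMotion/vSAN only and are not counted
    # against the Edge host.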

VMC Host Details

NSX in VMware Cloud on AWS uses an overlay model for the networking services used by guest VMs inside the SDDC. Overlay networks provide an abstraction layer between the networks that guest VMs use and the actual underlying hardware providing low-level network services. The NSX Edge provides the boundary between the underlying AWS network and the NSX overlay networking. The SDDC uses an NSX networking construct called a Tunnel End Point (TEP) to transport traffic from guests connected to the overlay networks between the AWS instances. More about NSX networking fundamentals can be found here.

 

Performance Details

The underlying networking between the NSX Edge and the AWS underlay uses VLAN constructs that enable hairpinning to direct traffic flows through the NSX Edge when needed. These VLANs enable the connectivity to AWS native networking features like Direct Connect, VMware Transit Connect and the Internet Gateway. The VLANs also provide non-overlay-based connectivity between the AWS hosts for VMkernel features like vMotion, vSphere management traffic and NSX TEP-to-TEP communication.

Traditional hardware-centric performance metrics are typically measured in Gigabits Per Second (Gbps) to assess throughput, because the hardware performs wire-speed packet processing even at the smallest packet size. The combination of the NSX and ENA forwarding architectures, which provide more flexible and distributed packet processing, requires a mindset shift when calculating performance. In throughput calculations, packet size matters: a transfer using 64-byte packets will not yield the same throughput as the same number of 8900-byte packets.

Here are some high-level examples, using a fixed PPS rate, of how this translates to traditional throughput-based network performance benchmarks. The formula used is throughput (Gbps) = packet size (bytes) * KPPS * 8 / 1,000,000.

Packet size and performance chart

The effect of packet size on network throughput is significant as shown above.
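
As a quick illustration, the minimal Python sketch below applies the formula above to several packet sizes at an arbitrary fixed rate of 1,000 KPPS; the rate is illustrative only, not a published instance limit.

    # Throughput (Gbps) = packet size (bytes) * KPPS * 8 / 1,000,000, evaluated at a fixed packet rate.
    KPPS = 1_000                           # 1,000 KPPS = 1,000,000 packets per second (illustrative)
    packet_sizes = [64, 512, 1500, 8900]   # bytes

    for size in packet_sizes:
        gbps = size * KPPS * 8 / 1_000_000
        print(f"{size:>5}-byte packets at {KPPS:,} KPPS -> {gbps:.2f} Gbps")

    # 64-byte packets yield roughly 0.51 Gbps, while 8900-byte packets yield about 71.2 Gbps
    # at exactly the same packet rate.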

Additionally, important Edge appliance sizing and management cluster considerations are covered in detail in the Management Cluster Planning Designlet. The sizing of the Edge and the AWS instance type can play a significant part in overall SDDC performance. SDDCs default to "Medium" sized Edges, which can be changed to "Large" at the user's request; SDDCs with a focus on high network performance should use Large Edges as well as appropriately sized AWS instances. Outside of the SDDC, underlying AWS VPC networking constraints such as the 5 Gbps per-flow limit also need to be taken into consideration. Further details on this particular limit can be found here.

Performance Monitoring

Customers can see traffic information in vCenter when viewing the active NSX Edge guest VM under the Management VMs folder.

Management VMs

VMware Cloud on AWS uses an Active/Standby high-availability architecture for the NSX Edges in both single- and multi-edge SDDCs. Which NSX Edge guest VM is the active node is not deterministic, but a quick review of the graphs under Edge VM -> Monitor -> Performance -> Overview usually makes it apparent: the network graph for the standby NSX Edge VM will show very low KBps, while the active NSX Edge VM will show much more activity. The graph below is from an active Edge.

Edge VM Overview Graph

With the active NSX Edge identified, the next step is to identify the host where the Edge is running. Once identified, the vmnic0 of that host can be viewed in vCenter. For network performance monitoring, it's recommended to select Performance, Advanced, then the Custom view with the Packets received and Packets transmitted Chart Options. These two selections focus the chart on the PPS metrics of vmnic0. Note that the chart displays data in 20-second intervals, so to convert to PPS, divide the number by 20. For example, if 200,000 packets are received and 400,000 are transmitted during an interval, the total PPS for that interval is (200,000 + 400,000) / 20 = 30,000 PPS.

vmnic0 PPS statistics
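
A minimal Python sketch of the interval-to-PPS conversion described above, using the same hypothetical counter values:

    INTERVAL_SECONDS = 20            # the vCenter advanced charts report 20-second samples
    packets_received = 200_000       # packets received during one interval (hypothetical)
    packets_transmitted = 400_000    # packets transmitted during the same interval (hypothetical)

    total_pps = (packets_received + packets_transmitted) / INTERVAL_SECONDS
    print(f"Total vmnic0 rate for this interval: {total_pps:,.0f} PPS")   # 30,000 PPS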

As mentioned earlier, not all network traffic traverses the NSX Edge, including vMotion, NSX TEP-to-TEP traffic and more. All traffic that traverses the ENA counts towards the PPS metric, which makes the ENA (vmnic0) the most accurate location to monitor network performance statistics. When troubleshooting performance issues, be aware that secondary bottlenecks on the Edge VM or the underlay network may also be contributing to the issue.

There may be scenarios where monitoring just the NSX Edge network performance is valuable. To monitor the NSX Edge specifically, a similar approach can be followed: in vCenter, select the active Edge, then Performance, Advanced, the Custom view, and display only the Packets received and Packets transmitted Chart Options. vCenter enumerates the Network Adapters in the guest VM as 4000 and 4001, with Network Adapter 1 (vNIC1) displayed as 4000 and Network Adapter 2 (vNIC2) displayed as 4001. Deselect vmnic0 and the IP address of the host to simplify the charted data. Four charts will be displayed, two for each vNIC (Rx and Tx).

Edge VM PPS Statistics

By comparing the PPS observed at vmnic0 and at the Edge, the amount of non-Edge traffic can be determined. This is useful when establishing a performance baseline and a general understanding of the traffic sources and destinations in the SDDC. In the example below, the Edge statistics remain in a steady state during the interval; however, vmnic0 is also being used for a vMotion in the same interval. The difference in traffic loads is clearly visible in the charts.

NSX Edge at steady state of bidirectional traffic

NSX Edge VM Steady State

Host vmnic0 with vMotion at the same time interval

vmnic0 with vMotion

Both the Edge VM network statistics and the host network statistics can be exported to CSV files and easily compared in a spreadsheet application. When comparing the Edge VM and host statistics, be mindful of the directionality of the data you are analyzing: review both packets transmitted and received and combine them for a total value. A scripted version of this comparison is sketched below.
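
For those who prefer scripting to a spreadsheet, here is a hedged Python sketch of that comparison. The file names and column headers are assumptions rather than the exact layout vCenter exports, so adjust them to match your CSV files.

    import csv

    def total_pps(path, rx_column="Packets received", tx_column="Packets transmitted",
                  interval_seconds=20):
        """Sum the received and transmitted packet counters in an exported CSV
        and convert them to an average packets-per-second value."""
        rx = tx = samples = 0
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                rx += int(row[rx_column])
                tx += int(row[tx_column])
                samples += 1
        return (rx + tx) / (samples * interval_seconds) if samples else 0.0

    host_pps = total_pps("host_vmnic0.csv")   # hypothetical export of the host's vmnic0 statistics
    edge_pps = total_pps("edge_vm.csv")       # hypothetical export of the active Edge VM's statistics

    print(f"Host vmnic0 average:        {host_pps:,.0f} PPS")
    print(f"Edge VM average:            {edge_pps:,.0f} PPS")
    print(f"Estimated non-Edge traffic: {host_pps - edge_pps:,.0f} PPS")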

To visualize these counters in the context of VMware Cloud on AWS, the diagram below shows the locations where statistics can be collected as well as the formula for calculating PPS.

PPS Observability

When VMware provides guidance on PPS values per instance, we use a calculation that allows for 0.05% packet loss. This accounts for the normal and expected loss behavior in TCP networks as TCP window sizes scale up, reach a peak and drop back down before scaling up again. A visual representation of TCP packet drops can be seen in all of the charts shown above, where drops forced retransmissions of data and the congestion control/avoidance mechanisms took effect. Some packet loss is always expected for a multitude of reasons, but excessive packet drops will be detrimental to network performance.
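
To put the 0.05% allowance in absolute terms, the short sketch below computes the corresponding drop rate for a few illustrative packet rates (none of these are published per-instance limits):

    LOSS_ALLOWANCE = 0.0005   # 0.05% expressed as a fraction

    for pps in (250_000, 500_000, 1_000_000):
        allowed_drops = pps * LOSS_ALLOWANCE
        print(f"At {pps:>9,} PPS, 0.05% loss is about {allowed_drops:,.0f} dropped packets per second")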

Summary

Network performance in VMware Cloud on AWS is centered on the PPS metric and on north/south traffic between the SDDC and external destinations. Throughput depends on the AWS instance used for the management cluster, which is where the Edge VMs run, as well as other factors. East/west traffic, internal to the SDDC, also depends on the ENA, but since it can be routed between hosts on the NSX overlay, it does not have to flow through a single host. In an architecture with a single NIC (ENA), all traffic must ingress and egress the same adapter as packet flows are routed. PPS observability is available in vCenter using the process described above.

Author and Contributors

Author: Ron Fuller
