HCX Design Guide for VMware Cloud on Dell

Overview

As organizations consolidate data centers, extend data centers to the cloud, or replace on-premises infrastructure, they must consider application migration challenges from infrastructure incompatibilities and network complexity to moving workloads without disrupting application dependencies.

VMware HCX®, an application mobility platform, simplifies application migration, rebalances workloads and optimizes disaster recovery across data centers and clouds. HCX enables high-performance, large-scale app mobility across VMware vSphere® and non-vSphere cloud and on-premises environments to accelerate data center modernization and cloud transformation.

VMware HCX can be deployed on VMware Cloud on Dell to simplify migration from your existing datacenter or cloud environment to VMware Cloud on Dell SDDC environments.

Purpose

As customers begin deploying VMware HCX in VMware Cloud on Dell SDDC environments, they are looking for the available design options for this deployment.

This design guide walks through the high-level steps to enable VMware HCX with VMware Cloud on Dell in different design scenarios. It is best used in conjunction with the VMware HCX documentation.

Audience

This guide is intended for IT administrators, solution architects, and VMware administrators who want to deploy and configure VMware HCX to migrate workloads from their existing environment to a VMware Cloud on Dell environment. It assumes a basic understanding of VMware HCX and its integration with VMware vSphere. Familiarity with technologies such as VMware NSX-T Data Center and general networking concepts is also helpful.

Design Scenarios

This section will cover several topologies in which HCX can be used to migrate workloads to VMware Cloud on Dell.

Use Case 1: On-premises vSphere Environment with VDS Port Groups at the Source Site (without Layer-2 Network Extension)

This scenario covers facilitating workload migration from an on-premises environment to a VMware Cloud on Dell SDDC using HCX. Depending on the customer's use case, workloads can be migrated in parallel (HCX Bulk Migration), serially (HCX vMotion), or using a combination of both (HCX Replication Assisted vMotion, or RAV).

The environment in this topology is as follows:

  • Workloads in the customer's on-premises site are deployed and managed using VMware vCenter.
  • The vNICs of these virtual machines are connected to vSphere Distributed Switch (VDS) port groups.
  • The default gateway for all the virtual machines is on a customer-managed physical router.
  • The Layer-2 network is not stretched between the two sites, so no traffic traverses the HCX Network Extension appliances.
  • Post migration, the VMs connect to logical segments that have been manually created in NSX at the VMware Cloud on Dell SDDC site.


Figure 1: Pre-Migration

Deployment: 


Notes:

1 The HCX Cloud URL can be found for the VMware Cloud on Dell SDDC in the VMware Cloud Console.

2 Network profiles are created automatically in the VMware Cloud on Dell SDDC. The IP pools used in the different network profiles are carved from the SDDC management subnet specified at the time of ordering the SDDC.
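The exact carving is performed by the SDDC provisioning workflow; the following minimal Python sketch only illustrates the idea of deriving per-profile IP pools from a management CIDR. The CIDR, prefix length, profile names, and pool sizes are assumptions for illustration, not the values HCX will actually use.

```python
import ipaddress

# Illustrative only: the real IP pool carving is done by the VMware Cloud on
# Dell provisioning process. The management CIDR below is hypothetical.
management_cidr = ipaddress.ip_network("10.20.0.0/23")

# Carve the management subnet into smaller blocks that could back the
# HCX network profiles.
blocks = management_cidr.subnets(new_prefix=26)
profiles = ["hcx-management", "hcx-uplink", "hcx-vmotion", "hcx-replication"]

for name, block in zip(profiles, blocks):
    hosts = list(block.hosts())
    # Reserve a small contiguous range from each block as the profile's IP pool.
    print(f"{name}: network {block}, example pool {hosts[10]}-{hosts[19]}")
```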

 

Migration

This migration covers the scenario in which Network Extension is not used to extend Layer-2 networks between the on-premises and VMware Cloud on Dell environments. Details on implementing Layer-2 Network Extension are discussed in Use Case 2.


Notes:

1 If BGP is not configured in the environment to dynamically advertise routes, configure static routes in the physical network to make the migrated networks reachable (see the sketch after these notes). VMware Cloud on Dell supports BGP and static routes only.

2 HCX supports several types of migrations. Details of each migration type can be found on the HCX documentation site.
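As a minimal illustration of note 1, the sketch below generates the static-route entries the physical network would need so migrated segments remain reachable when BGP is not used. The segment list and next-hop address are assumptions, and the exact configuration syntax depends on the physical router vendor.

```python
import ipaddress

# Hypothetical migrated segments and the next hop toward the SDDC uplink.
migrated_segments = ["192.168.91.0/24", "192.168.92.0/24"]
sddc_next_hop = "172.16.10.1"

for segment in migrated_segments:
    net = ipaddress.ip_network(segment)  # validates the prefix
    # Emit a vendor-neutral description of the route to add on the physical router.
    print(f"static route: destination {net} -> next hop {sddc_next_hop}")
```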


Figure 2: Migration

VMs are migrated from the on-premises site to the target VMware Cloud on Dell SDDC using the HCX-IX appliances. More details about the different migration strategies can be found in the HCX documentation.

Use Case 2: On-premises vSphere Environment with VDS Port Groups at the Source Site (with Layer-2 Network Extension Enabled and Mobility Optimized Networking Deactivated)

This scenario covers the use case in which some workloads remain active on-premises while others have been migrated to the target VMware Cloud on Dell SDDC site. To enable communication between workloads at the on-premises and cloud sites, HCX Network Extension can be used to extend Layer-2 networks between the sites.

  • This topology is the same as the previous topology, with the exception that the Layer-2 network is stretched between the two sites.
  • VM workloads at the source site are on vSphere Distributed port groups.
  • The default gateway of the VMs migrated to VMware Cloud on Dell will still reside on the on-premises datacenter.
  • Traffic from the VMs migrated to VMware Cloud on Dell SDDC will traverse the tunnels between the Network Extension appliances.
  • HCX MON is not enabled on the Stretched L2 Networks.
  • The default gateway will be migrated to VMware Cloud on Dell SDDC Site after the migration is completed for all VMs in a specific L2 Segment. Please note that this is a manual operation as described in the “Unextending Networks” section.


Figure 3: Before extending Networks using L2 Network Extension. No network segments have been created on the target site and connected to NE appliance

More details about HCX Network Extension can be found in the HCX documentation.

Deployment:

The deployment process will remain the same as Use Case 1.

Enabling Layer-2 Network Extension:

Notes:

1 The use case with Mobility Optimized Networking is addressed in Use Case 4.

2 After migration, if the migrated VMs will continue to use the existing default gateway, enter that IP address. If a new default gateway is desired, enter the new IP address to be used as the default gateway. The IP address entered here is configured on the Compute Gateway in the VMware Cloud on Dell SDDC; however, this interface on the Compute Gateway remains disconnected until the network is unextended and the gateway is migrated over to the VMware Cloud on Dell SDDC.

3 This segment will be in a disconnected state, which is expected since the default gateway for this segment is still in the on-premises datacenter and not on the Compute Gateway of the VMware Cloud on Dell SDDC.

 

Extending the network using HCX Layer-2 Network Extension performs the following actions in the VMware Cloud on Dell SDDC site, as shown in Figure 4:

  1. Creates Layer-2 extended segments of the form L2E-<dvpg-name>, where <dvpg-name> is the name of the vSphere Distributed Switch port group in the on-premises datacenter that was extended in HCX Network Extension. In the above example, two DVPortGroups were extended, so two L2E segments are created in the VMware Cloud on Dell SDDC (a verification sketch follows this list).
  2. Each L2E segment has a port created on the Compute Gateway that is in the disconnected state. This is why, in the Networking and Security UI, these interfaces show a disconnected type. This interface will serve as the default gateway only if the default gateway is manually migrated to the SDDC site by unextending the network, the procedure for which is detailed in the "Unextending Networks" section. In addition, it is also possible to have the default gateway enabled simultaneously at both the on-premises and VMware Cloud SDDC sites by enabling MON, which is discussed in Use Case 4.
  3. An interface/network adapter on the HCX NE appliance is connected to the network/segment being extended. This is how traffic is bridged between the networks at the local and target sites, as shown in Figure 4.
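As a quick verification of step 1, the following minimal sketch lists the L2E- segments through the NSX Policy API. It assumes direct access to the SDDC's NSX Manager with basic authentication; in a VMware Cloud SDDC you would typically use the provided NSX reverse-proxy URL and an API token instead. The URL and credentials are placeholders.

```python
import requests

NSX_URL = "https://nsx.sddc.example.com"   # hypothetical NSX endpoint for the SDDC
AUTH = ("admin", "examplePassword")        # hypothetical credentials

# List all segments and keep the ones HCX created for extended networks,
# which are named L2E-<dvpg-name>.
resp = requests.get(
    f"{NSX_URL}/policy/api/v1/infra/segments",
    auth=AUTH,
    verify=False,  # lab convenience only; validate certificates in production
)
resp.raise_for_status()

for segment in resp.json().get("results", []):
    name = segment.get("display_name", "")
    if name.startswith("L2E-"):
        print(name, segment.get("connectivity_path"))
```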


Figure 4: After Network Extension is enabled, the L2E segments are created in SDDC. The logical port on the T1 will be in disconnected state. The NE Appliances will have an interface connected to the stretched segment

 

Migration:

Migration steps are the same as in Use Case 1.


Figure 5: Post Migration Layer-2 Packet flow where layer-2 packets will be encapsulated and transmitted over an IPsec Tunnel between the NE appliances

Figure 5 shows the Layer-2 packet flow where packets between workloads in the same network (which is extended) will be encapsulated and sent over the IPsec Transport Tunnel (UDP 4500) between the two HCX-NE Appliances.

Figure 6 shows the Layer-3 packet flow, where packets between workloads in different networks are routed via the router at the on-premises site. No routing happens locally in the VMware Cloud on Dell SDDC because the logical ports on the Compute Gateway are in a disconnected state.


Figure 6: Post Migration Layer-3 Packet Flow where the packets will be sent over the NE appliance to the source site gateway to get routed

 

Unextending Networks

Once migration of all VMs is complete and Layer-2 extension is no longer required, a network can be unextended to move the default gateway to the VMware Cloud on Dell SDDC site. Please note that this is a manual task involving changing the default gateway location, as described below.


Notes:

1 This step will remove this segment/logical switch/dvpg from the NE appliance and migrate the gateway to the SDDC Site. The logical port on the compute gateway, which was previously disconnected, will be connected and will start serving as the default gateway for all migrated VMs connected to the Logical Segment as depicted in Figure 7.


Figure 7: Post migration and after unextending the network where the default gateway on the source site has been disconnected and migrated to the destination site and routing is exclusively performed at the SDDC site

As can be seen above, the gateway interface at the source site is shut down, and after the network is unextended in HCX, traffic is routed via the Compute Gateway → Tier-0 Gateway → TORs to the physical environment. Performing the unextend operation with the "Connect cloud network to cloud edge gateway after unextending" option enabled ensures that the CGW interface is in a connected state and serves as the default gateway for all the migrated VMs.
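After unextending, one way to confirm that the cloud segment is now attached to the Compute Gateway is to read the segment back through the NSX Policy API and check its advanced connectivity setting. This is a minimal sketch under the same assumptions as the earlier segment-listing example (hypothetical NSX URL, credentials, and segment ID).

```python
import requests

NSX_URL = "https://nsx.sddc.example.com"   # hypothetical
AUTH = ("admin", "examplePassword")        # hypothetical
SEGMENT_ID = "L2E-web-dvpg"                # hypothetical L2E segment ID

resp = requests.get(
    f"{NSX_URL}/policy/api/v1/infra/segments/{SEGMENT_ID}",
    auth=AUTH,
    verify=False,
)
resp.raise_for_status()
segment = resp.json()

# advanced_config.connectivity is ON when the segment's gateway port is connected.
connectivity = segment.get("advanced_config", {}).get("connectivity")
print(f"{segment['display_name']}: gateway connectivity = {connectivity}")
```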

Use Case 3: On-premises vSphere Environment at the Source Site with Network Extension High Availability

The environment in this use case is the same as in the previous use case, with the addition of High Availability (HA) enabled for Network Extension (NE). An additional HCX NE appliance is deployed and used as a standby appliance. High Availability can be used for long-lived Layer-2 stretches: if the active appliance fails, the Layer-2 extended traffic can fail over to the standby appliance to provide seamless connectivity.

Note that the steps below assume that NE is deployed as described in the previous section. More details on HCX Network Extension High Availability can be found in the HCX documentation.


As can be seen in Figure 8, with HA enabled there is one pair of NE appliances at each site. A pair of NE appliances with High Availability enabled joins an HA group. The L2 stretched networks are connected only to the active NE appliance. The standby NE appliance does not participate in the datapath until the active NE appliance goes down. The active and standby appliances exchange heartbeats periodically to monitor the health of their peer, and loss of heartbeats between the appliances triggers a failover.


Figure 8: HCX NE High Availability with Active and Standby NE appliances

If the active NE appliance fails (loss of heartbeats), the standby appliance takes over the active role and the HA group shows a degraded state. During the failover operation, the stretched networks are connected to the new active appliance (previously the standby), which starts serving traffic in the datapath, as illustrated in Figure 9.


Figure 9: After failure of the Active NE appliance, the standby appliance takes over and starts forwarding traffic through the L2E Tunnel to the source site

Use Case 4: On-premises vSphere Environment with VDS Port Groups at the Source Site (with Mobility Optimized Networking (HCX-MON) Enabled)

The default behavior of HCX Network Extension is that all routed traffic for the migrated workloads is steered to the on-premises (source) gateway which could result in sub-optimal routing for migrated workloads in different segments. Mobility Optimized Networking (MON) is an enhancement to the Network Extension feature, which will route the packets locally in the cloud site for all migrated VMs in different networks.

  • Topology is the same as the previous two use cases
  • MON is enabled on the HCX Layer-2 Network Extension
  • Without MON, traffic between two migrated VMs, in different Layer-2 segments/networks will route through the gateway on the source site resulting in sub-optimal routing as shown in Figure 10. With MON enabled, the gateway on the SDDC site will be enabled and traffic for migrated VMs will be routed through the gateway on SDDC site as shown in Figure 11.
  • More details about MON can be found in the HCX documentation.

Deployment:

The deployment process remains the same as discussed in use case 1.

Migration:

The Migration process remains the same as discussed in use case 1.

Enabling Layer-2 Network Extension:

The process to extend networks remains the same as discussed in use case 2.

Enabling MON:


Notes:

1 Here, you can choose the router location either for each VM individually or for multiple VMs at once. To set it individually, expand the VM's entry, choose the target router location, and click Submit. To update the router location for multiple VMs simultaneously, select the VMs, choose the router location above the VM entries, and click Submit.

When HCX L2 Network Extension was enabled without MON (Use Case 2), the logical router port created on the Compute Gateway was disconnected. When MON is enabled, however, this logical port is connected, and in the NSX UI the segment type shows as Routed. This interface has the same IP address as the extended network at the source site, with a subnet mask of /32.


Figure 10: L3 Packet flow between two segments in the same site with MON deactivated where the gateway on the source site is performing the routing operation

As can be seen above, when MON is deactivated, traffic between two segments local to the SDDC is routed via the gateway/router on the source side, leading to sub-optimal routing.


Figure 11: L3 packet flow between two segments in the same site with MON enabled, showing that routing between migrated VMs is performed by the CGW, which has /32 static routes injected with the VM IPs

When MON is enabled, the logical ports for segments created on the Compute Gateway are not disconnected, so traffic can be routed between the segments via the Compute Gateway. This is achieved by injecting a /32 static route for every migrated VM into the CGW, which is an automatic operation performed by HCX. In the above diagram, the two migrated VMs in the VMware Cloud on Dell SDDC are in different NSX logical segments, and each has a /32 static route with its IP configured in the CGW, ensuring that traffic between the VMs is routed locally instead of being steered to the on-premises gateway.
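To observe the injected host routes, one option is to query the Compute Gateway's static routes through the NSX Policy API. This is a hedged sketch only: it assumes the compute gateway Tier-1 has the ID cgw and that the MON-injected /32 routes are exposed as Tier-1 static routes, which may differ by HCX/NSX version (they might instead be visible only in the Tier-0 routing table or the advertised-routes view). The URL and credentials are placeholders.

```python
import requests

NSX_URL = "https://nsx.sddc.example.com"   # hypothetical
AUTH = ("admin", "examplePassword")        # hypothetical
TIER1_ID = "cgw"                           # assumed ID of the Compute Gateway Tier-1

resp = requests.get(
    f"{NSX_URL}/policy/api/v1/infra/tier-1s/{TIER1_ID}/static-routes",
    auth=AUTH,
    verify=False,
)
resp.raise_for_status()

# Print any /32 routes, which would correspond to MON-migrated VM IPs.
for route in resp.json().get("results", []):
    if route.get("network", "").endswith("/32"):
        next_hops = [h.get("ip_address") for h in route.get("next_hops", [])]
        print(route["network"], "->", next_hops)
```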

MON also provides the option to control which traffic is routed locally via the cloud gateway and which traffic is routed via the on-premises gateway. HCX policy routes define which traffic is routed via the source gateway. By default, all RFC 1918 networks are included in the policy route configuration, i.e., traffic destined to 10.0.0.0/8, 172.16.0.0/12, and 192.168.0.0/16 from the workloads migrated to VMware Cloud on Dell is steered through the NE appliances to be routed via the on-premises gateway, as shown in Figure 12.


Figure 12: Traffic from Migrated VMs to other non-migrated workloads with an RFC-1918 IP address will be steered to the on-premises gateway to be routed
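The default steering behaviour described above can be sketched with a few lines of Python. This is illustrative only: it simply mirrors the default HCX policy-route entries (the RFC 1918 prefixes), and the destination addresses are arbitrary examples.

```python
import ipaddress

# Default HCX policy routes: RFC 1918 space is steered to the on-premises gateway.
POLICY_ROUTES = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
]

def steering_decision(destination_ip: str) -> str:
    ip = ipaddress.ip_address(destination_ip)
    if any(ip in net for net in POLICY_ROUTES):
        return "steered to on-premises gateway via the NE appliances"
    return "routed locally via the CGW and Tier-0 Gateway"

for dest in ["10.10.10.5", "172.20.1.10", "8.8.8.8"]:
    print(dest, "->", steering_decision(dest))
```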

Any other traffic, such as internet-bound traffic that is not covered by the HCX policy routes, is routed via the CGW → Tier-0 Gateway → TORs → customer infrastructure at the target VMware Cloud on Dell SDDC site, as shown in Figure 14. It is also possible to steer non-RFC 1918 (for example, internet-bound) traffic to the source site by adding the desired networks to the HCX policy routes. For example, as illustrated in Figure 13, a rule has been created to steer all IP traffic (RFC 1918 and other internet-bound traffic) to the source site to be routed by the on-premises gateway.


Figure 13: An HCX policy route rule for 0.0.0.0/0 (Allow) was created to steer all traffic towards the source site


Figure 14: Non HCX Policy routes will be routed via the Tier-0 Gateway northbound to the external infrastructure

Similarly, it is possible to exclude networks from being routed to the source side by adding an HCX policy route with a Deny rule, which prevents that traffic from being steered to the on-premises (source) site, as shown in Figure 15. Here, an HCX policy route denies traffic destined to the 10.10.10.0/24 network from being steered to the source site, so routing for this traffic remains local to the VMware Cloud on Dell environment.

Figure 15: traffic to Network 10.10.10.0/24 will not be steered to the on-premises (source) site and all routing will be local to VMware Cloud on Dell site

Use Case 5: On-premises Environment with NSX Segments at the Source Site (with HCX Network Extension Enabled and MON Deactivated)

This topology is similar to the topology covered in Use Case 2, with the exception that the workloads at the source site are now connected to NSX-T logical segments.

  • The gateway for these VMs is on an NSX Tier-1 Gateway, which connects to a Tier-0 Gateway.
  • The default gateway of the VMs migrated to VMware Cloud on Dell will still reside on the on-premises datacenter.
  • Traffic from the VMs migrated to VMware Cloud on Dell SDDC will traverse the tunnels between the Network Extension appliances.
  • HCX MON is not enabled on the Stretched L2 Networks.


Figure 16: Topology before extending Networks using Network Extension. No network segments have been created on the target site and connected to NE appliance

Deployment:

The deployment workflow will remain the same as in Use Case 1, with a slight modification (see note 1 below).

Notes:

1 While creating a compute profile, ensure that you add the NSX-T Overlay and/or VLAN transport zones in the "Network Containers Eligible for Network Extension" section. In addition, while creating a Service Mesh, also select the NSX transport zones in the "Network Extension Appliance Scale Out" section. These two steps ensure that NSX-T logical segments (Overlay and VLAN) will show up in Network Extension.

HCX creates a Network Extension appliance pair per network container type. For example, if the network containers section in the compute profile contains a VDS, HCX creates one pair of Network Extension appliances across the source and target sites. If the network containers section in the compute profile has both a VDS and an NSX-T Overlay transport zone selected, HCX creates two pairs of Network Extension appliances across the source and destination.

 

Enabling Network Extension:


Notes:

1 The use case with Mobility Optimized Networking is addressed in the next section.

2 After migration, if the migrated VMs will continue to use the existing default gateway, enter that IP address. If a new default gateway is desired, enter the new IP address to be used as the default gateway. The IP address entered here is configured on the Compute Gateway in the VMware Cloud on Dell SDDC; however, this interface on the Compute Gateway remains disconnected until the network is unextended and the gateway is migrated over to the VMware Cloud on Dell SDDC.

3 This segment will be in a disconnected state, which is expected since the default gateway for this segment is still in the on-premises datacenter and not on the Compute Gateway of the VMware Cloud on Dell SDDC.


Figure 17: After Network Extension is enabled, the L2E segments are created in SDDC. The logical port on the T1 will be in disconnected state. The NE Appliances will have an interface connected to the stretched segment

The HCX NE appliance at the on-premises site has a network adapter connected to the extended NSX logical segments, as shown above, while the HCX NE appliance at the SDDC site has its network adapter connected to the L2E segments that were created when Network Extension was enabled between the sites.

Migration:

This workflow will remain the same as discussed in use case 1.


Figure 18: Post Migration Layer-2 Packet flow where layer-2 packets will be encapsulated and transmitted over an IPsec Tunnel between the NE appliances

Figure 18 shows the Layer-2 packet flow where packets between workloads in the same network will be encapsulated and sent over the IPsec Transport Tunnel (UDP 4500) between the two HCX-NE Appliances.

Figure 19 shows the Layer-3 packet flow, where packets from workloads in the VMware Cloud on Dell SDDC to the external infrastructure (including internet-bound traffic) are steered over the UDP transport tunnels from the cloud SDDC to on-premises and then routed via the Tier-1 Gateway → Tier-0 Gateway → customer gateway at the on-premises site. No routing happens in the VMware Cloud on Dell SDDC because the logical ports on the Compute Gateway (CGW) are in a disconnected state.


Figure 19: Post Migration Layer-3 Packet flow where the packets will be sent over the NE appliance to the source site gateway to get routed

Unextending Networks:


As can be seen in Figure 20, the gateway interface at the source site is shut down, and after the network is unextended in HCX, traffic is routed via the Compute Gateway → Tier-0 Gateway → TORs to the physical environment. Performing the unextend operation with the "Connect cloud network to cloud edge gateway after unextending" option enabled ensures that the CGW interface is in a connected state and serves as the default gateway for the migrated VMs.


Figure 20: Post Migration and after unextending the network where the default gateway on the source site has been disconnected and migrated to SDDC and routing is exclusively performed at the SDDC site

Use Case 6: On-premises Environment with NSX Segments at the Source Site (with Mobility Optimized Networking (HCX-MON) Enabled)

The default behavior of HCX Network Extension is that all routed traffic for the migrated workloads is steered to the on-premises (source) gateway which could result in sub-optimal routing for migrated workloads in different segments. Mobility Optimized Networking (MON) is an enhancement to the Network Extension feature, which will route the packets locally in the cloud site for all migrated VMs in different networks.

  • This topology is similar to the topology covered in Use Case 3, with the exception that the workloads in the source site are now connected to NSX-T Logical Segments.
  • HCX MON is enabled on the HCX Network Extension.
  • Without MON, traffic between two migrated VMs will route through the gateway on the source site resulting in sub-optimal routing as shown in Figure 21. With MON enabled, the gateway on the SDDC site will be enabled and traffic for migrated VMs will be routed through the gateway on SDDC site as shown in Figure 22.
  • More details about MON can be found in the HCX documentation.

 

Deployment and Migration

The deployment and migration processes are the same as discussed in Use Case 1.

Enabling Network Extension

The process to extend networks is the same as discussed in Use Case 5.

Enabling MON:

This process to enable MON is the same as discussed in Use Case 4.


Figure 21: Packet flow between segments with MON deactivated. Traffic between two migrated VMs in different subnets will be routed via the on-premises gateway resulting in sub-optimal routing

As can be seen above, when MON is deactivated, traffic between two segments local to the SDDC is routed via the Tier-1 Gateway on the source side, leading to sub-optimal routing.

When MON is enabled, the logical ports for segments created on the Compute Gateway are not disconnected, so traffic can be routed between the segments via the Compute Gateway. This is achieved by injecting a /32 static route for every migrated VM into the CGW. In the above diagram, the two migrated VMs in the VMware Cloud on Dell SDDC are in different NSX logical segments, and each has a /32 static route with its IP configured in the CGW, ensuring that traffic between the VMs is routed locally instead of being steered to the on-premises gateway. This is illustrated in Figure 22.

As described in Use Case 4, HCX policy routes influence how traffic is steered from the migrated VMs to internal or external networks. With the default HCX policy route configuration, any traffic to RFC 1918 networks that have not been migrated to the VMware Cloud on Dell SDDC site is steered to the source site to be routed via the on-premises gateway, while traffic to non-RFC 1918 networks is routed locally in the VMware Cloud on Dell SDDC via the Compute Gateway (CGW) → Tier-0 Gateway → TORs → external infrastructure. The configuration and use of HCX policy routes is the same as described in Use Case 4.


Figure 22: Packet flow between Segments with MON Enabled – the traffic between migrated VMs in different subnets will be routed locally

Figure 23 shows the packet flow between a virtual machine that has been migrated to the VMware Cloud on Dell SDDC and a workload at the source site that has an RFC 1918 IP address, with the environment configured with the default HCX policy routes. The traffic is steered over the UDP transport tunnel between the NE appliances and delivered to the Tier-1 Gateway at the source site to be routed.

Figure 24 shows the packet flow between a virtual machine that has been migrated to the VMware Cloud on Dell SDDC and a workload that does not have an RFC 1918 IP address (internet-bound traffic, for example), using the default HCX policy route configuration. In this case, the traffic is routed locally within the SDDC and is not sent to the source site. The packet flow is the same if an HCX policy route is configured for an RFC 1918 network with the Deny flag enabled, which prevents steering of that traffic to the source site.


Figure 23: Packet flow from migrated VM to an IP configured in HCX Policy route (with Allow flag enabled)


Figure 24: Packet flow from migrated VM to an IP configured in HCX Policy route (with Deny flag enabled)

Appendix-1: Guest OS Customization with Bulk Migration

It is possible to modify some VM attributes during the migration process. This section will demonstrate changing a few of these attributes during a migration process. More details on the different characteristics that can be modified can be found in the HCX documentation.

Pre-change config:

A VM with hostname web-test, IP address 192.168.91.5/24, and MAC address 00:50:56:82:ab:4d will be migrated from an on-premises datacenter to the VMware Cloud on Dell SDDC. The following images show details of the VM's IP and MAC addresses and default gateway. As can be seen from the network configuration file, no DNS entries have been configured.


During HCX migration, we will change the IP address, MAC address, and default gateway of this VM and add DNS server and domain entries using the HCX Guest OS Customization feature.


Figure 25: Before Guest OS Customization

All the tasks are performed from the HCX plugin in the on-premises vCenter Web UI. In the following demonstration, we will disable “Retain MAC” and enter the new IP Address, default gateway, DNS, and domain name.
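Before submitting the customization values, a quick sanity check that the planned IP address and default gateway actually belong to the target segment's subnet can help avoid a failed switchover. The sketch below is illustrative only; the target segment CIDR, new IP, and gateway are assumed values, not those from the screenshots.

```python
import ipaddress

# Hypothetical target segment and planned guest OS customization values.
target_segment = ipaddress.ip_network("192.168.95.0/24")
new_ip = ipaddress.ip_address("192.168.95.5")
new_gateway = ipaddress.ip_address("192.168.95.1")

assert new_ip in target_segment, "new IP is not inside the target segment"
assert new_gateway in target_segment, "default gateway is not inside the target segment"
assert new_ip != new_gateway, "IP and gateway must differ"
print("Guest OS customization values are consistent with the target segment")
```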


Figure 26: Migration after performing HCX Guest OS Customization

High-level Comparison Between Different Migration Types

Details about the different HCX migration types can be found in the HCX documentation. The table below compares the migration types at a high level.

 

Cold Migration

  • Migration: Serial
  • Retain MAC: Retain MAC is mandatory
  • VM State: Powered Off
  • Replication Type: Cold (Network File Copy)
  • Transfer Concurrency: 1 per Service Mesh
  • Switchover Concurrency: 1 per Service Mesh

HCX vMotion

  • Migration: Serial
  • Retain MAC: Retain MAC is mandatory
  • VM State: Live
  • Replication Type: Live (vMotion)
  • Transfer Concurrency: 1 per Service Mesh
  • Switchover Concurrency: 1 per Service Mesh

Bulk Migration

  • Migration: Parallel
  • Retain MAC: If not selected, a new MAC will be assigned to the migrated VM
  • VM State: During transfer, the source VM will be live. During switchover, the source VM will be powered off and the migrated replica will be powered on
  • Replication Type: Warm (Host Based Replication)
  • Transfer Concurrency: 100 per HCX Manager*
  • Switchover Concurrency: 100 per HCX Manager*

Replication Assisted vMotion (RAV)

  • Migration: Parallel transfer, serial switchover
  • VM State: Live
  • Replication Type: Live (Host Based Replication + vMotion)
  • Transfer Concurrency: 100 per HCX Manager*
  • Switchover Concurrency: 1 per Service Mesh

* Please check configmax for the latest numbers as these might change with newer releases

About the Authors

Vivek Mayuranathan is a Product Solutions Architect with the Cloud Infrastructure Business Group at VMware.

Shree Das is the Director of Product Solutions Architects with the Cloud Infrastructure Business Group at VMware.


