VMware Cloud: Solution Design

Key Deliverables

There are several high-level design decisions that must be made prior to architecting any single component of a VMware Cloud solution. Since the choices made during this stage of the design will impact all other aspects of the architecture, it is important to consider all available options and to understand the impact of each.


The key deliverables of the overall solution design are as follows:


  1. Determine the deployment site(s) for the solution.
  2. Determine the number of SDDCs per site.
  3. Determine a strategy for shared services.



Site Selection

Site selection is the process of determining the target regions for the deployment. When selecting sites, consider the following:

  • Locality and data sovereignty – Are there any user requirements for keeping workloads within certain legal jurisdictions?
  • Latency – How far away, in terms of network latency, is the target site from the main user base?
  • Bandwidth – How much bandwidth is available to the target site? Are high-speed private lines (e.g., AWS Direct Connect) available?
  • Geography – Are there any geographic requirements to consider? For example, must the target site be physically separate from other sites for fault tolerance?
  • Economics – Are pricing differences between regions a factor?
  • Capacity – Is there sufficient capacity within the desired regions, both now and in the future, to support the deployment?
  • Single or multiple sites – Are there any requirements driving the design toward a single-site deployment, or must resources be split between multiple sites?
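The criteria above can be weighed against one another in a simple scoring model. The sketch below is illustrative only: the criteria names, weights, and sample region data are assumptions for demonstration, not VMware-provided values.

```python
# Hypothetical weighted scoring of candidate regions. Negative weights
# mark criteria where lower is better (latency, cost); positive weights
# mark criteria where higher is better (bandwidth).
CRITERIA_WEIGHTS = {
    "latency_ms": -1.0,
    "bandwidth_gbps": 2.0,
    "monthly_cost_index": -0.5,
}

def score_site(site: dict) -> float:
    """Return a comparable score for one candidate region."""
    return sum(weight * site[criterion]
               for criterion, weight in CRITERIA_WEIGHTS.items())

def rank_sites(sites: dict) -> list:
    """Rank candidate regions, best first."""
    return sorted(sites, key=lambda name: score_site(sites[name]), reverse=True)

# Example data: made-up measurements for two candidate regions.
candidates = {
    "us-west-2": {"latency_ms": 20, "bandwidth_gbps": 10, "monthly_cost_index": 100},
    "eu-central-1": {"latency_ms": 120, "bandwidth_gbps": 10, "monthly_cost_index": 110},
}
```

A model like this is only a starting point for discussion; hard requirements such as data sovereignty are pass/fail filters and should eliminate regions before any scoring takes place.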


Single-Site Design Considerations

Within a single site, it is important to consider whether to deploy all resources into a single SDDC or to split them between multiple SDDCs, particularly for large-scale deployments. Consider the following for each approach.

Single SDDC

Advantages include:

  • Ease of management – In general, the fewer resources there are to manage, the less work is required to manage them.
  • Potentially reduced north-south network traffic – A single SDDC offers optimized network performance for inter-application traffic since this traffic will stay local to the SDDC.
  • Simplified network security policy – Network security policies for east-west traffic are easier to manage since tools such as Distributed Firewall (DFW) allow policy to be based on advanced constructs such as tags.

Disadvantages include:

  • All eggs in one basket – All workloads are subject to any outage that impacts the shared resources of the SDDC.
  • Potential to exceed SDDC capacity – A single SDDC has limits on total host count, storage capacity, and network edge capacity.
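The capacity concern can be made concrete with a pre-deployment check of projected demand against per-SDDC maximums. The limit values below are placeholders, not official figures; always confirm the current maximums for your VMware Cloud service against the published configuration maximums.

```python
# Placeholder per-SDDC limits for illustration only; real values vary by
# service, SDDC version, and host type.
ASSUMED_LIMITS = {
    "hosts": 300,         # assumed total host count per SDDC
    "storage_tib": 4000,  # assumed raw storage capacity per SDDC
}

def fits_single_sddc(required: dict, limits: dict = ASSUMED_LIMITS) -> bool:
    """True if projected demand fits within one SDDC's assumed limits."""
    return all(required.get(key, 0) <= maximum for key, maximum in limits.items())

def sddcs_needed(required_hosts: int,
                 hosts_per_sddc: int = ASSUMED_LIMITS["hosts"]) -> int:
    """Minimum SDDC count to accommodate the projected host demand."""
    return -(-required_hosts // hosts_per_sddc)  # ceiling division
```

If `fits_single_sddc` returns False for the projected demand, the design must either plan for multiple SDDCs from the outset or accept a future migration effort.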



Multiple SDDCs

Advantages include:

  • Potentially reduced risk – Spreading workloads between multiple SDDCs helps to limit the impact of outages within any single SDDC.
  • Expanded capacity – Each SDDC represents a dedicated pool of compute, storage, and network capacity. Overall capacity is greater than it would be with a single SDDC.

Disadvantages include:

  • Expanded management footprint – More SDDCs means more to manage. Additionally, each SDDC comes with its own overhead for management components (vCenter, NSX, and other service appliances).
  • Less optimal network flows – Splitting applications between multiple SDDCs turns what would normally be intra-SDDC traffic (east-west) into inter-SDDC traffic (north-south). In addition to increasing load on the network edge of each SDDC, there is also the potential for bandwidth charges for this traffic.
  • More complex network security policies – Splitting applications between multiple SDDCs requires more care when constructing network security policies since it may not be possible to use advanced constructs such as security tags for workloads that are external to a given SDDC.



General recommendations for a single-site design:

  • Try to keep each application isolated to a single SDDC. This helps to minimize north-south traffic on the SDDC edge.
  • Similarly, try to group applications which communicate frequently with one another into a single SDDC.
  • Utilize localized shared services within the SDDC whenever possible (see Shared Services below).
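The grouping recommendation above can be sketched as a connected-components walk over a traffic graph: application pairs that exchange traffic above a threshold are placed in the same SDDC candidate group. The application names, traffic figures, and threshold are hypothetical.

```python
from collections import defaultdict

def group_apps(traffic: dict, traffic_floor: float = 1.0) -> list:
    """Group chatty applications into SDDC candidate sets.

    traffic maps (app_a, app_b) -> average Gbps between the pair; pairs
    below traffic_floor are not considered worth co-locating.
    """
    adjacency = defaultdict(set)
    for (a, b), gbps in traffic.items():
        adjacency[a].add(a)          # register both endpoints
        adjacency[b].add(b)
        if gbps >= traffic_floor:    # link only chatty pairs
            adjacency[a].add(b)
            adjacency[b].add(a)
    groups, seen = [], set()
    for app in adjacency:
        if app in seen:
            continue
        stack, component = [app], set()
        while stack:                 # depth-first walk of one component
            node = stack.pop()
            if node in component:
                continue
            component.add(node)
            stack.extend(adjacency[node] - component)
        seen |= component
        groups.append(sorted(component))
    return groups
```

Each resulting group is a candidate set of applications to place in one SDDC, keeping their mutual traffic east-west rather than pushing it through the edge.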



Multi-Site Design Considerations

The decision to implement a multi-site design is typically driven by requirements for data locality, disaster recovery, geographic resiliency, or regional proximity to end users. The following are a few considerations to keep in mind with a multi-site design:

  • Data replication – Are there requirements to replicate data between sites?
  • Network connectivity – Is there a need to provide a shared private network between sites? Examples include private MPLS VPN connectivity, integration with colocation facilities, or point-to-point connectivity (VPN) between individual sites. What are the bandwidth and latency requirements for this connectivity?
  • Management – Are there any additional considerations for managing the sites? Examples include dedicated VPN tunnels to the sites, dedicated per-site management tools, or the potential requirement for site-specific administrators or privileged users.


Shared Services

A common optimization strategy is to deploy commonly used services in close proximity to a site. Typical examples include DNS and Active Directory, but custom applications or other network services may also qualify. There are three models for designing this integration:

Local Services Model

In this model, the service is integrated directly within the SDDC. The advantage of this approach is that traffic remains local and does not impact the SDDC edge. The disadvantage is that multiple instances must be managed whenever the design involves multiple SDDCs.


Site Services Model

In this model, the service is deployed centrally within a site and shared among the SDDCs within that site. The advantages are that there are fewer instances of the service to maintain and network latency to the service remains low compared to services located outside of the site. The disadvantage is that traffic for the service adds north-south traffic on the SDDC edge and may incur additional bandwidth charges.


Colocation Model

In this model, the services are located in a remote facility which acts as a central hub for multiple sites. Generally, this model is reserved for cases involving hardware network appliances or server infrastructure that cannot be virtualized, or as a means of providing close proximity to a service for a given country or geographic region. There aren’t many notable advantages to this model, and there are several disadvantages including increased network utilization of the SDDC edge, bandwidth charges for the network traffic, and higher latency to the service.
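The main trade-off between the three models is where the service traffic lands relative to the SDDC edge. The sketch below reduces that to a toy cost estimate; the per-GB rate and traffic volumes are invented for illustration and are not actual VMware or cloud-provider pricing.

```python
# Hypothetical monthly charge for shared-service traffic that crosses the
# SDDC edge. The local model keeps traffic inside the SDDC, so no service
# traffic hits the edge; the site and colocation models push all of it out.
def monthly_edge_cost(model: str, service_gb_per_month: float,
                      per_gb_rate: float = 0.05) -> float:
    """Estimate charges for service traffic crossing the SDDC edge."""
    edge_fraction = {
        "local": 0.0,        # traffic stays within the SDDC
        "site": 1.0,         # traffic crosses the edge to the site service
        "colocation": 1.0,   # traffic crosses the edge and leaves the site
    }[model]
    return service_gb_per_month * edge_fraction * per_gb_rate
```

Note that the site and colocation models look identical here because this toy model prices edge traffic only; in practice the colocation model also adds inter-site latency and potentially higher transit charges.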


Authors and Contributors

Author: Dustin Spinhirne




