Deploying VMware Cloud Infrastructure for Google Cloud VMware Engine
Google Cloud VMware Engine is currently available in twelve regions around the world. Pricing varies by region, and organizations can pay for resources on demand or prepay for a one-year or three-year term up front. Choosing an appropriate region for Google Cloud VMware Engine is important to ensure that end users can use the service with minimal latency and few other performance issues.
Note - not all Google Cloud native services are available in every region, so proper region selection is crucial to ensure that the required services are available to developers and end users.
Once an appropriate region has been selected, the first step to getting started with Google Cloud VMware Engine is to enable the VMware Engine API for a Google Cloud project in the Google Cloud Console.
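The API can also be enabled from the gcloud CLI rather than the Cloud Console. A minimal sketch, assuming the placeholder project ID `my-project`:

```shell
# Enable the VMware Engine API for the project (my-project is a placeholder).
gcloud services enable vmwareengine.googleapis.com \
    --project=my-project

# Confirm the API is now enabled for the project.
gcloud services list --enabled \
    --project=my-project \
    --filter="config.name=vmwareengine.googleapis.com"
```

Enabling the API is a one-time action per project; the same project can then host private clouds in any supported region, subject to quota.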
The Google Cloud project where the VMware Engine API is enabled is assigned a node quota, which is the maximum number of nodes the project can consume. A VMware Engine node quota is assigned per Cloud project, per region. A Cloud project also has a global node quota, which is the total number of nodes that can be used across all regions. To determine an appropriate node quota, organizations must decide how many private clouds they need and how many nodes each private cloud requires. Sizing before deployment is important because organizations may need to submit a quota increase request to Google Cloud.
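As a sizing aid, the node types available in a given zone can be listed with the gcloud CLI before requesting quota; the zone below is a placeholder:

```shell
# List the VMware Engine node types available in a zone (placeholder zone).
gcloud vmware node-types list --location=us-west2-a

# Example sizing arithmetic: 2 private clouds x 4 nodes each = 8 nodes in the
# region; submit a quota increase request if this exceeds the per-region quota.
```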
Creating a Private Cloud
A private cloud can be created through the Google Cloud VMware Engine portal while logged in as a user with the VMware Engine Service Admin IAM role. A private cloud can be created with only one node, but it will be deleted after 60 days; the minimum number of nodes for a private cloud to persist is three. Also, when providing CIDR ranges for the VMware management network and the HCX deployment network, organizations must use CIDR ranges that do not overlap with any existing on-premises or cloud subnets. This must be planned before the private cloud deployment, as the management and HCX CIDR ranges cannot be changed after creation.
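Private cloud creation can also be scripted with the gcloud CLI. A minimal sketch, assuming placeholder names, zone, node type, and CIDR range (remember that the management range cannot be changed after creation):

```shell
# Create a three-node private cloud (all names and ranges are placeholders).
gcloud vmware private-clouds create my-private-cloud \
    --location=us-west2-a \
    --cluster=my-management-cluster \
    --node-type-config=type=standard-72,count=3 \
    --management-range=192.168.50.0/24 \
    --vmware-engine-network=my-ven
```

The `--management-range` value here is the VMware management network CIDR discussed above; choosing a range that does not overlap any on-premises or cloud subnet is what makes later hybrid connectivity possible.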
Access to NSX-T Manager
Google Cloud VMware Engine includes NSX-T, and organizations have direct access to the NSX-T Manager console, where they can create overlay subnets for their workloads. By default, VMware Engine grants administrator access to NSX-T Manager; therefore, VMware Identity Manager must be used if organizations require role-based access control (RBAC) for NSX-T.
Organizations can connect their existing on-premises network to a VPC network to achieve key use cases, such as data center extension or disaster recovery.
Hybrid connectivity options, such as Cloud VPN and Cloud Interconnect or Partner Interconnect, are available. If leveraging VMware HCX to migrate workloads from the on-premises data center to a Google Cloud VMware Engine Software-Defined Data Center (SDDC), Cloud Interconnect-based connectivity is required.
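For the Cloud VPN option, the building blocks on the Google Cloud side are an HA VPN gateway and a Cloud Router to exchange routes over the tunnel. A sketch with placeholder names, region, and ASN:

```shell
# Create an HA VPN gateway in the customer VPC (names are placeholders).
gcloud compute vpn-gateways create onprem-ha-gw \
    --network=my-vpc \
    --region=us-west2

# Create a Cloud Router to advertise routes over the VPN (placeholder ASN).
gcloud compute routers create my-cloud-router \
    --network=my-vpc \
    --region=us-west2 \
    --asn=65001
```

Tunnels and BGP sessions are then configured against the on-premises VPN device; the details depend on that device and are omitted here.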
Private services access must be set up for connections from the VPC network to the VMware Engine network. With private services access in place, custom routes for the VMware Engine SDDC must be added to the Cloud Router of the customer VPC when using Cloud Interconnect; if using Cloud VPN, the VMware Engine networks must be added to the Cloud VPN tunnel. Organizations must ensure that the management and workload CIDR ranges do not overlap with any other on-premises environments, Google Cloud VMware Engine SDDCs, or interconnected Google VPCs.
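The private services access setup above can be sketched with the gcloud CLI; all names and address ranges below are placeholders, and the reserved range must not overlap any of the CIDR ranges already in use:

```shell
# 1. Reserve an internal address range for the service peering (placeholders).
gcloud compute addresses create vmware-engine-range \
    --global \
    --purpose=VPC_PEERING \
    --addresses=10.100.0.0 \
    --prefix-length=16 \
    --network=my-vpc

# 2. Create the private services access connection to the service producer.
gcloud services vpc-peerings connect \
    --service=servicenetworking.googleapis.com \
    --ranges=vmware-engine-range \
    --network=my-vpc

# 3. Import and export custom routes so the VPC and the SDDC exchange routes.
gcloud compute networks peerings update servicenetworking-googleapis-com \
    --network=my-vpc \
    --import-custom-routes \
    --export-custom-routes
```

Step 3 is what allows the SDDC's custom routes to propagate into the customer VPC (and onward via the Cloud Router when using Cloud Interconnect).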