Designlet: Using Cloud Interconnect with Google Cloud VMware Engine

Introduction

Google Cloud Interconnect provides connectivity between your on-premises network and Google Cloud through a high-bandwidth, low-latency connection. This service comes in two versions: Dedicated Interconnect and Partner Interconnect. Dedicated Interconnect uses a direct physical connection to Google's network in a supported colocation facility, with circuits of 10 Gbps or 100 Gbps. Partner Interconnect provides similar connectivity through a supported service provider at speeds between 50 Mbps and 10 Gbps.

Summary and Considerations

Use Case

Cloud Interconnect is used when connectivity over the public internet or IPsec VPN does not meet your requirements for bandwidth or latency. Cloud Interconnect can be deployed with multiple redundant links, if desired. When used along with dynamic routing, this solution can provide highly available, resilient connectivity to Google Cloud VMware Engine.

Prerequisites

To use Dedicated Interconnect, you must be able to connect to Google in a supported colocation facility, and you must provide your own router with compatible optics. More information can be found at https://cloud.google.com/network-connectivity/docs/interconnect/concepts/dedicated-overview.

To use Partner Interconnect, you will work with a supported provider to establish connectivity. Requirements will vary from provider to provider. You can find a list of supported service providers at https://cloud.google.com/network-connectivity/docs/interconnect/concepts/service-providers.

General Considerations/Recommendations

When possible, create redundant Cloud Interconnect connections. This protects against hardware failure and provides a secondary path when maintenance is required. You should also monitor utilization on your Interconnect connections to ensure sufficient capacity is provisioned.

Additional information, including best practices, is available at https://cloud.google.com/network-connectivity/docs/interconnect/concepts/best-practices.

Cost implications

Pricing depends on the link speed of the connection. There is an hourly cost for the physical connection and a per-GB cost for egress traffic through an Interconnect connection. There is no charge for ingress traffic.
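As a rough illustration of this pricing model, a monthly estimate combines the hourly connection charge with per-GB egress. The rates below are placeholders, not current published prices; consult the pricing page for actual figures.

```python
# Rough monthly cost sketch for a Cloud Interconnect connection.
# RATE VALUES ARE PLACEHOLDERS for illustration only -- see the
# official pricing page for current figures.

HOURS_PER_MONTH = 730  # average hours in a month

def monthly_interconnect_cost(hourly_rate: float,
                              egress_gb: float,
                              egress_rate_per_gb: float) -> float:
    """Hourly connection charge plus per-GB egress; ingress is free."""
    return hourly_rate * HOURS_PER_MONTH + egress_gb * egress_rate_per_gb

# Example: a hypothetical $2.00/hour connection pushing 10 TB of egress
# at a hypothetical $0.02/GB.
cost = monthly_interconnect_cost(2.00, 10_000, 0.02)
print(f"${cost:,.2f}")  # -> $1,660.00
```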

More information, including pricing examples, is available at https://cloud.google.com/network-connectivity/docs/interconnect/pricing.

Performance Considerations

A properly configured Cloud Interconnect will, in most cases, provide higher bandwidth and lower latency than internet-based connectivity.

When deployed in a redundant manner, Cloud Interconnect has an availability SLA of 99.99%. A single Cloud Interconnect has an availability SLA of 99.9%. You must deploy a supported topology to qualify for the 99.99% SLA. More information can be found at these links:
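The intuition behind the higher SLA for redundant deployments can be sketched as the parallel availability of independent links. Note this naive model assumes fully independent failures and so overstates real-world availability; the SLAs themselves are contractual figures, not computed ones.

```python
# Illustrative only: parallel availability of independent links.
# The actual SLAs are contractual and assume a supported topology.

def parallel_availability(per_link: float, links: int) -> float:
    """Probability that at least one of `links` independent links is up."""
    return 1 - (1 - per_link) ** links

single = 0.999  # single-link availability (99.9%)
redundant = parallel_availability(single, 2)
print(f"{redundant:.6%}")  # ~99.9999% under the independence assumption
```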

Documentation reference

Dedicated Interconnect Overview
Partner Interconnect Overview

Last Updated

July 2021

Planning and Implementation

Planning

To use Cloud Interconnect, you must satisfy the prerequisites listed above. The choice between Dedicated Interconnect and Partner Interconnect depends primarily on the capabilities of your local carrier and the location of your data center, as well as your connectivity requirements and budget. For high-bandwidth use cases that need more than 10 Gbps of connectivity, Dedicated Interconnect supports link speeds of 100 Gbps. Partner Interconnect supports link speeds from 50 Mbps to 10 Gbps, allowing you to provision the right amount of bandwidth for your use case.


Implementation

Cloud Interconnect requires a VPC and a Cloud Router deployed in your Google Cloud environment. The Cloud Router is used to establish a BGP peering session between Google Cloud and your on-premises network.
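These prerequisites can be created with the gcloud CLI. The network name, router name, region, and ASN below are placeholders; substitute values from your own environment, and use a private ASN for the Cloud Router's BGP session.

```
# Placeholder names, region, and ASN -- substitute your own values.
gcloud compute networks create my-transit-vpc --subnet-mode=custom

# The ASN is used for the BGP session with your on-premises router;
# it should be a private ASN (for example, in the 64512-65534 range).
gcloud compute routers create my-cloud-router \
    --network=my-transit-vpc \
    --region=us-central1 \
    --asn=65001
```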


In Google Cloud Platform, routing advertisements are not transitive between VPCs. This means some additional configuration is needed, since VPC peering is used to provide connectivity to your private cloud. To access Google Cloud VMware Engine over a Cloud Interconnect, you must perform the following steps:

  • Configure one or more Cloud Interconnect connections between your location and Google Cloud Platform
  • Connect your Google Cloud VMware Engine private cloud to the same VPC as your Cloud Router and Cloud Interconnect. Steps to configure private service access and VPC peering are available at https://cloud.google.com/vmware-engine/docs/networking/howto-setup-private-service-access.
  • Ensure import and export of custom routes is enabled in the VPC network peering configuration
  • In your Cloud Router configuration, choose Create custom routes under Advertised Routes
  • Add the IP range(s) allocated to your private cloud under Custom ranges
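The peering and Cloud Router steps above can be sketched with the gcloud CLI. The network and router names and the advertised range are placeholders, and the peering name shown is the one typically created for private service access; verify the actual peering name and IP ranges in your environment.

```
# Placeholder names and ranges -- substitute values from your environment.

# Enable import/export of custom routes on the service networking peering
# created for VMware Engine (typically named servicenetworking-googleapis-com).
gcloud compute networks peerings update servicenetworking-googleapis-com \
    --network=my-transit-vpc \
    --import-custom-routes \
    --export-custom-routes

# Switch the Cloud Router to custom advertisements and add the IP range(s)
# allocated to your private cloud.
gcloud compute routers update my-cloud-router \
    --region=us-central1 \
    --advertisement-mode=CUSTOM \
    --set-advertisement-groups=ALL_SUBNETS \
    --set-advertisement-ranges=192.168.0.0/24
```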

Additional information, including networking topologies and options for ingress and egress traffic, can be found in the Private cloud networking for Google Cloud VMware Engine whitepaper.

