Administrative Access
Managing Administrative Access
Protecting the management interfaces of infrastructure is critical, as virtual and cloud administrators have enormous power over workloads and data. Core information security practices such as least privilege, separation of duties, and defense-in-depth are important to deny attackers access to environments.
Cloud Console Account Management
The Google Cloud VMware Engine Console is the central management portal and provides the ability to deploy, manage, and deprovision Private Clouds, subscriptions, and network connectivity.
Role-Based Access Control (RBAC)
VMware Cloud Infrastructure products, from VMware Cloud down to the core vSphere, contain a robust set of permissions that can be configured as part of the roles that users are assigned. These permissions allow granular access to capabilities inside the Google Cloud VMware Engine Private Cloud.
Virtual Private Network (VPN)
Google Cloud VPN functions provide an encrypted end-to-end path over untrusted networks using IPsec. A VPN can be used for connections across the open Internet, but also across a Google Interconnect. Security is always a tradeoff, and IPsec VPNs trade performance for security, limited by the available CPU and network capacity inside the Private Cloud.
IPsec VPNs rely on Path MTU Discovery, which in turn may require relevant ICMP protocol messages (IPv4 type 3, IPv6 type 2) to be permitted. This is a general best practice for networks, as blocking all ICMP messages to disable ICMP echo (“ping”) causes the collateral loss of other important network messages like Fragmentation Needed, Time Exceeded, and more. Path MTU Discovery is important for automatic network optimization of most modern operating systems. Workarounds such as MSS Clamping add complexity and rigidity to an environment and may not be the best solution.
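As a back-of-the-envelope illustration of why MTU handling matters, the sketch below estimates the payload MTU and TCP MSS left after typical IPsec ESP tunnel-mode overhead. The overhead figures are representative assumptions (they vary by cipher, integrity algorithm, and whether NAT-T/UDP encapsulation is in use), not values specific to Google Cloud VPN.

```python
# Rough estimate of usable MTU/MSS inside an IPsec ESP tunnel.
# Overhead numbers are illustrative assumptions; real values depend on
# cipher, integrity algorithm, NAT traversal, and padding.

PHYSICAL_MTU = 1500           # MTU of the underlying path
OUTER_IP_HEADER = 20          # new IPv4 header added in tunnel mode
ESP_HEADER = 8                # SPI + sequence number
ESP_IV = 8                    # IV for a GCM-based cipher (assumed)
ESP_TRAILER_AND_ICV = 2 + 16  # pad length/next header + 16-byte ICV
NAT_T_UDP = 8                 # UDP encapsulation when NAT traversal is used

def inner_mtu(physical_mtu: int, nat_t: bool = True) -> int:
    """Approximate MTU available to the inner (tunneled) packet."""
    overhead = OUTER_IP_HEADER + ESP_HEADER + ESP_IV + ESP_TRAILER_AND_ICV
    if nat_t:
        overhead += NAT_T_UDP
    return physical_mtu - overhead

if __name__ == "__main__":
    mtu = inner_mtu(PHYSICAL_MTU)
    mss = mtu - 20 - 20  # inner IPv4 header + TCP header, no options
    print(f"Approximate inner MTU: {mtu}, approximate TCP MSS: {mss}")
    # Blocking ICMP "Fragmentation Needed" / "Packet Too Big" prevents
    # endpoints from discovering this reduced MTU automatically.
```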
Deploying a VPN to connect to a Private Cloud involves other decisions about network topology and will depend on the network capabilities and topologies of the Private Cloud and other sites. Route-based VPNs use the BGP routing protocol to exchange information about networks between sites. This adds both complexity and flexibility, and the design of these networks is beyond the scope of this document. With simpler IP addressing schemes and network deployments, policy-based VPN options are possible. Layer 2 VPN connectivity allows for migrations into the cloud without re-addressing a workload, by extending an on-premises network, but requires the NSX Autonomous Edge appliance to be deployed in the on-premises environment.
VPNs between sites with dynamic addresses may require additional design considerations or operational process work. If the dynamic address changes, the VPN connection will not be functional until the SDDC's VPN configuration is updated with the remote site's new public IP address.
Ideas to consider:
- Use IKEv2 with a GCM-based cipher, with the highest key size that can still support the required performance levels.
- Use Diffie-Hellman Elliptic Curve groups (19, 20, or 21), with the highest group number of those that can support the required performance (generally based on the total number of tunnels).
- Enable Perfect Forward Secrecy where supported on both sides of the VPN connection. Enabling it on one side only may initially work, but the tunnel will disconnect after a preset amount of time, when the keys are renegotiated.
- Use a long, randomly generated pre-shared key, or, if available, certificate-based authentication (one way to generate such a key is sketched after this list).
- If the BGP endpoint is on a different device from the IPsec VPN endpoint, or there is a possibility of access to the network carrying BGP, then a BGP secret should be configured on both endpoints to prevent route hijacking.
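As a minimal sketch of the pre-shared key recommendation above, the following Python snippet generates a long, random key using the standard library's secrets module. The length and character set are assumptions; adjust them to whatever both VPN endpoints accept.

```python
import secrets
import string

# Characters broadly accepted by most IPsec implementations for PSKs;
# adjust if either endpoint restricts the allowed character set.
ALPHABET = string.ascii_letters + string.digits

def generate_psk(length: int = 48) -> str:
    """Return a cryptographically random pre-shared key."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

if __name__ == "__main__":
    print(generate_psk())
```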
Private Network Links
Google Interconnect is a solution where a network port on Google's network is made available for customers to connect to. In most cases the port will be in a Point of Presence (PoP) datacenter facility, where the end customer orders an MPLS WAN connection from their preferred carrier, who will assist with cross-connecting it to the port provided by Google. Other configurations are possible, such as a Hosted Connection (a VLAN on a shared port) or a Hosted VIF (a single virtual interface on a shared connection), and in some cases customers may colocate equipment in the PoP and run the cross-connect directly from their own gear. All of these options provide different features, bandwidth, and cost models. Dedicated ports provide the most capability and the highest bandwidth.
Ideas to consider:
- To minimize latency, select a Google Cloud point of presence that your WAN provider can support and that is as close as possible to the sites that will be communicating with the Private Clouds. A simple reachability and latency check is sketched after this list.
- Deploy multiple Google Interconnect circuits to different points of presence for redundancy, terminating them in the same Google Cloud account so that Google knows they are redundant and provisions them on independent paths. Ensure that they also have fully independent paths to the enterprise network.
- If multiple regions are being used for Private Clouds and latency tolerances allow, consider deploying Google Interconnects to different regions, providing redundancy against wider-area events while simultaneously providing connectivity to multiple regions.
- Use BGP secrets on all BGP sessions to avoid route hijacking.
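To support the point-of-presence selection above, here is a small sketch that measures TCP connect latency from a given site to a set of candidate endpoints. The endpoint hostnames and port are placeholders; substitute addresses that are reachable through each candidate path.

```python
import socket
import time

# Placeholder endpoints; substitute hosts reachable via each candidate
# point of presence or circuit under evaluation.
CANDIDATES = {
    "pop-a": ("endpoint-a.example.com", 443),
    "pop-b": ("endpoint-b.example.com", 443),
}

def tcp_connect_ms(host: str, port: int, timeout: float = 3.0) -> float:
    """Return the TCP connect time in milliseconds."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass
    return (time.perf_counter() - start) * 1000

if __name__ == "__main__":
    for name, (host, port) in CANDIDATES.items():
        try:
            print(f"{name}: {tcp_connect_ms(host, port):.1f} ms")
        except OSError as exc:
            print(f"{name}: unreachable ({exc})")
```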
Google Cloud Connected VPCs
Private services access is a private connection between your Virtual Private Cloud (VPC) network and networks in VMware Engine.
Private services access enables the following behavior:
- Exclusive communication by internal IP address for virtual machine (VM) instances in your VPC network and VMware VMs. VM instances don't need internet access or external IP addresses to reach services that are available through private services access (a quick check is sketched after this list).
- Communication between VMware VMs and Google Cloud services that support private services access, using internal IP addresses.
- Use of existing on-premises connections to connect to your VMware Engine private cloud, if you have on-premises connectivity using Cloud VPN or Cloud Interconnect to your VPC network.
- You can set up private services access independently of VMware Engine private cloud creation. The private connection can be created before or after the creation of the private cloud to which you want to connect your VPC network.
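One way to demonstrate this behavior from a VM instance in the connected VPC is to confirm the instance has no external IP (via the GCE metadata server) and then open a TCP connection to a VMware VM or service on its internal address. The target IP and port below are placeholders to replace with an address in your private cloud.

```python
import socket
import urllib.error
import urllib.request

METADATA_URL = ("http://metadata.google.internal/computeMetadata/v1/"
                "instance/network-interfaces/0/access-configs/0/external-ip")

# Placeholder: internal IP and port of a VMware VM or service to reach.
TARGET = ("192.168.50.10", 443)

def external_ip():
    """Return the instance's external IP, or None if it has no access config."""
    req = urllib.request.Request(METADATA_URL,
                                 headers={"Metadata-Flavor": "Google"})
    try:
        with urllib.request.urlopen(req, timeout=2) as resp:
            return resp.read().decode()
    except urllib.error.HTTPError:
        return None  # no access config -> no external IP

print("External IP:", external_ip() or "none (internal-only instance)")
with socket.create_connection(TARGET, timeout=5):
    print(f"Reached {TARGET[0]}:{TARGET[1]} via private services access")
```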
Network Perimeter Controls
Google Cloud has multiple network boundaries and perimeters that should be secured. The primary boundary is at the Private Cloud itself, consisting of dedicated sets of network segments for management and workloads. These network segments are separated from the network uplinks by an NSX Edge Gateway firewall.
Management Appliance Access & Authentication
A deployed Private Cloud will have a number of appliances that manage different aspects of the infrastructure. These appliances are managed by Google as part of the Shared Responsibility Model and include vCenter Server, NSX Manager, and the NSX Edge appliances by default. If HCX is enabled, there may also be an HCX Manager appliance. All appliances are joined to the Private Cloud's Single Sign-On (SSO) domain, gve.local. This SSO domain is local to the deployed SDDC. Customers are provided a CloudOwner account that has restricted management permissions as part of the Shared Responsibility Model and is allowed to perform operations in support of workloads. Full administrative control of the SDDC is reserved for Google itself.
The initial credentials for CloudOwner@gve.local are displayed in the Google Cloud VMware Engine Console, and the password for this account can also be changed through the Console. vCenter Server allows the integration of an LDAP-based identity source, which lets customers use their existing directories and authentication sources.
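Before registering an LDAP identity source, it can be useful to confirm that the directory is reachable over LDAPS from the environment and that a service account can bind. The sketch below uses the third-party ldap3 package; the hostname, bind DN, and use of LDAPS on port 636 are assumptions to adapt to your directory.

```python
# Requires: pip install ldap3
from ldap3 import ALL, Connection, Server

# Placeholder values; replace with your directory's details and retrieve
# the password from a secrets manager rather than hard-coding it.
LDAP_HOST = "ldaps://dc1.corp.example.com:636"
BIND_DN = "CN=svc-vcenter-ldap,OU=Service Accounts,DC=corp,DC=example,DC=com"
BIND_PASSWORD = "replace-with-secret-from-your-vault"

server = Server(LDAP_HOST, use_ssl=True, get_info=ALL)
try:
    with Connection(server, user=BIND_DN, password=BIND_PASSWORD,
                    auto_bind=True):
        print("LDAPS bind succeeded")
        print("Naming contexts:", server.info.naming_contexts)
except Exception as exc:
    print(f"LDAPS bind failed: {exc}")
```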
Ideas to consider:
- Use private DNS resolution for vCenter & HCX Manager so that these appliances are accessed from the on-premises network. SRM, vSphere Replication & NSX Manager only support private DNS and private IP connectivity. A quick way to verify private resolution is sketched after this list.
- Add individual user accounts to the Administrators group rather than importing an Active Directory group. This helps separate authorization from authentication, reducing attack vectors in case of an Active Directory compromise.
- Use tiered access models where everyday tasks can be handled by regular accounts/group access, but any privileged access should use a separate account, individually added to the vCenter group.
- Rotate the password for the CloudOwner account according to your password policy.
- Access to management components should not depend solely on IP address restrictions, as the compromise of an administrator's desktop often includes the compromise of the administrator's credentials as well. A bastion host or "jump box" solution may be implemented with multi-factor authentication.
- Appropriate hardening and monitoring should be applied to bastion hosts, including considerations for the compromise of an organization's central Active Directory or authentication source. Use of separate administrator accounts is also recommended as a way to help identify the presence of attackers: the compromise of an administrator's regular desktop account would not automatically lead to the compromise of infrastructure, and may force the attacker to generate login failures which can be monitored.
- Limit connectivity to the Private Cloud's ESXi hosts to only the destinations and services required:
- vMotion can be proxied through HCX for a controlled, secure channel.
- IPFIX data will originate from SDDC ESXi hosts, and traffic should be restricted through the on-premises firewall to only the IPFIX collectors.
- Port Mirroring traffic also originates from the Private Cloud ESXi hosts in a GRE tunnel, and traffic should be restricted through the on-premises firewall to only the necessary ERSPAN destinations.
- vSphere Replication traffic will originate from the SDDC ESXi hosts and traffic should be restricted through the on-premises (or destination SDDC Management gateway) firewall to only the necessary vSphere Replication appliances where VMs are being protected.
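Referencing the private DNS recommendation earlier in this list, the following sketch resolves a management FQDN and checks that it returns a private (RFC 1918) address, indicating the appliance will be reached over the private network rather than the Internet. The hostname is a placeholder; replace it with your vCenter or HCX Manager FQDN.

```python
import ipaddress
import socket

# Placeholder FQDN; replace with your vCenter or HCX Manager hostname.
FQDN = "vcsa-1234.example.gve.goog"

# Collect all addresses the name resolves to for HTTPS connectivity.
addresses = {info[4][0] for info in socket.getaddrinfo(FQDN, 443)}
for addr in sorted(addresses):
    kind = "private" if ipaddress.ip_address(addr).is_private else "PUBLIC"
    print(f"{FQDN} -> {addr} ({kind})")
```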