Upgrading Oracle Cloud VMware Solution from 6.x to 7.x

Introduction

Oracle Cloud VMware Solution (OCVS) lets you deploy a VMware SDDC environment on Oracle Cloud Infrastructure (OCI). The solution supports VMware SDDC versions 6.5, 6.7, and 7.0. VMware vSphere 6.5 and 6.7 reached the end of general support on October 15, 2022, and both versions are now in the technical guidance phase. During technical guidance, VMware products and solutions have a limited support scope; customers must therefore upgrade their vSphere 6.5 and 6.7 environments to vSphere 7.0.

This document provides step-by-step guidance for upgrading a VMware SDDC deployed in OCVS from 6.x to 7.0.

Scope of the Document

The implementation steps to upgrade the VMware SDDC are validated against OCVS VMware SDDC 6.5 and 6.7. This guide is intended for existing customers and partners with OCVS implemented in their environment. Basic Oracle Cloud and VMware SDDC knowledge is required before implementing these steps.

Before you begin

Before upgrading the VMware SDDC environment, ensure the following checks are completed and validated.

  1. Review the existing environment and address any issues and alerts. Ensure that there are no network, DNS, or storage issues.
  2. Verify that all VMware appliances and ESXi hosts are healthy, with no significant alerts or issues (see the sketch after this list).
  3. Back up the vCenter Server; check the VMware documentation for details.
  4. Back up the NSX-T environment; check the VMware documentation for details.
  5. Back up the vSphere Distributed Switch; check the VMware documentation for details.
  6. Take fresh backups of critical workload VMs, even though no change or outage is required for the workload VMs.
  7. This upgrade process requires you to provision new hosts and delete the old hosts. New host provisioning requires adequate service limits for ESXi hosts and boot volume capacity to accommodate the vSphere 7.x requirements in Oracle Cloud. Work with your Oracle Cloud representative to ensure that you have the required service limits, quota, and capacity allocated.
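For the host health portion of these checks, a short pyVmomi script can confirm that every ESXi host is connected, green, and free of active alarms. This is a minimal sketch, assuming network access to the vCenter Server; the hostname and credentials are placeholders to be replaced with the values from the OCI VMware SDDC Overview page.

```python
# pre_upgrade_check.py: list host state and active alarms before the upgrade.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab convenience; use verified certs in production
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="change-me", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        print(host.name,
              host.runtime.connectionState,   # expect 'connected'
              host.summary.overallStatus)     # expect 'green'
        for alarm in host.triggeredAlarmState:
            print("  active alarm:", alarm.alarm.info.name, alarm.overallStatus)
    view.Destroy()
finally:
    Disconnect(si)
```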

VMware SDDC Upgrade

When a new VMware SDDC version is available, a notification is displayed on the SDDC details page. This step upgrades the VMware SDDC version metadata in OCI. VMware SDDC version 7 also introduces a new network architecture with two additional VLANs for provisioning and replication. The workflow lets you create these two VLANs as part of the SDDC upgrade. Follow the steps below to upgrade the VMware SDDC version.

  1. Log in to the OCI console and, from the left navigation pane, click Hybrid and select VMware Solution to view the VMware SDDCs in OCI.
  2. Select the compartment.
  3. Click the VMware SDDC that you want to upgrade.
  4. If the VMware SDDC software version is 6.5 or 6.7, an upgrade notification is displayed on the SDDC details page.


  5. Click Upgrade.
    1. Select the new VMware software version: 7 Update 3.
    2. Select Create New VLANs.
      1. Provide a CIDR block for the SDDC replication VLAN. The CIDR block should be part of the SDDC network.
      2. Provide a CIDR block for the SDDC provisioning VLAN. The CIDR block should be part of the SDDC network.
      3. Let the CIDR availability check complete.
      4. Click Upgrade only if both CIDR blocks are available and highlighted in green (a quick offline check is sketched below).
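The console validates CIDR availability during the workflow, but you can sanity-check your planned ranges beforehand with Python's standard ipaddress module. The networks below are placeholders; substitute your actual SDDC network and VLAN CIDRs.

```python
# cidr_check.py: confirm the new VLAN CIDRs sit inside the SDDC network
# and do not overlap each other. All networks below are placeholders.
import ipaddress

sddc_network = ipaddress.ip_network("192.168.0.0/16")
replication = ipaddress.ip_network("192.168.30.0/26")
provisioning = ipaddress.ip_network("192.168.31.0/26")

for name, net in [("replication", replication), ("provisioning", provisioning)]:
    assert net.subnet_of(sddc_network), f"{name} CIDR is outside the SDDC network"
assert not replication.overlaps(provisioning), "the two VLAN CIDRs overlap"
print("CIDR layout looks consistent")
```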


  6. Wait until the upgrade process completes successfully.
  7. Once all the tasks are shown as Done, click Finish.
  8. Validate that the VMware SDDC software version is updated to 7 Update 3.


  9. Scroll down on the SDDC summary page, click SDDC Network, and validate that both the provisioning and replication networks were created successfully.


NSX-T Upgrade

First, verify the NSX-T version. You can skip the NSX-T upgrade if the current version is 3.2.0.1 (build 19232396) or later. Follow the steps below if the NSX-T version is lower than that.

  1. On the VMware SDDC Overview page, check the notification.
  2. Click Get updated binaries and licenses.
  3. Log in to the NSX Manager.
  4. Navigate to System -> Lifecycle Management and click Upgrade.
  5. Upload the MUB bundle.
  6. Click Prepare for Upgrade.
  7. Click Run Pre-Checks.
  8. Review the pre-check results.
  9. Ensure that backups are available.
  10. You can skip the NSX Upgrade Evaluation Tool.
  11. Click Ok.
  12. Click Start to upgrade the NSX Edge nodes.
  13. Click Next once the upgrade status is successful for the NSX Edge nodes.
  14. Click Start to upgrade the ESXi hosts. The upgrade workflow puts each host into maintenance mode automatically.
  15. Monitor the progress.
  16. Click Next once the ESXi hosts are upgraded.
  17. Click Start to upgrade the NSX Manager.
  18. Monitor the progress and validate the system post-upgrade; a status-polling sketch follows.
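If you prefer to track the upgrade outside the UI, the NSX-T upgrade API exposes a status summary you can poll. A minimal sketch, assuming an NSX-T 3.x manager at a placeholder hostname with placeholder credentials:

```python
# nsx_upgrade_status.py: poll the NSX-T upgrade status summary.
import requests

NSX = "https://nsx-manager.example.com"
resp = requests.get(f"{NSX}/api/v1/upgrade/status-summary",
                    auth=("admin", "change-me"), verify=False)  # lab convenience
resp.raise_for_status()
summary = resp.json()
print("overall:", summary.get("overall_upgrade_status"))
for component in summary.get("component_status", []):
    print(component.get("component_type"), "->", component.get("status"))
```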

vCenter Server Upgrade

This section describes the vCenter Server upgrade. Before you begin, download the VCSA installer to a system that can reach the VMware SDDC environment, especially the VMware vCenter Server.

  1. On the VMware SDDC Overview page, check the notification.
  2. Click Get updated binaries and licenses.
  3. Download the VMware vCenter Server Appliance bundle.

Stage 1: New VCSA Deployment

  1. Navigate to the downloaded VCSA installer ISO.
  2. Double-click the VCSA installer ISO to mount it.
  3. Navigate to vcsa-ui-installer -> win32 and run the installer.


  4. Click Upgrade.


  5. Click Next to start Stage 1 (Deploy vCenter Server).
  6. Enter the source vCenter Server hostname or IP address.

Note: You can find the vCenter Server hostname/IP address on the OCI VMware SDDC Overview page.

  7. Leave Appliance HTTPS Port at its default value, 443.
  8. Click Connect to Source.


  9. Provide the SSO username.
  10. Provide the SSO password.

Note: You can find the vCenter Server SSO username and password on the OCI VMware SDDC Overview page.


  11. Provide the vCenter Server or ESXi host details that manage the source vCenter Server. In this case, provide the vCenter Server details. Click Next.
  12. Click Yes to accept the certificate thumbprint.
  13. Provide the target vCenter Server details. This is the vCenter Server where you want to deploy the upgraded vCenter Server virtual machine. In this case, we use the same source vCenter Server to deploy the new vCenter Server instance, so provide the same vCenter Server details.
  14. Click Next.
  15. Select the folder and compute resource.

Note: Disable lockdown mode on one of the ESXi hosts if you get a lockdown-related error at this step. Check the VMware documentation on how to disable ESXi lockdown mode.

  16. Set up the target vCenter Server VM.
    1. Provide the VM name.
    2. Set the root password.

Note: It is advisable to keep the same root password as the old vCenter Server for easier password management; keep the password secure yet accessible. You can copy the password from the OCI VMware SDDC Overview page.

  17. Select the deployment size: Medium. Click Next.
  18. Select the vSAN datastore.
  19. Select the network; keep the same network as the old vCenter Server.
  20. Provide the IPv4 details for the temporary network settings.
    1. IP Version: IPv4
    2. IP Assignment: Static
    3. Temporary IP Address: an available IP address from the vSphere network.
    4. Subnet Mask or Prefix: same as the source vCenter Server network.
    5. Default Gateway: same as the source vCenter Server.
    6. DNS Servers: same as the source vCenter Server.

You can verify the subnet mask, default gateway, and DNS server details by logging in to the VAMI interface (https://vcenter-ip:5480) and going to the network section.


 Ensure all the network details are correct and click Next.


  21. Review all the details and click Finish.
  22. Wait for Stage 1 to complete, then click Continue to proceed to Stage 2.

Stage 2: Switch Over Phase

Stage 2 is also known as the cutover phase. In this stage, vCenter Server data is migrated to the new vCenter Server instance, and vCenter Server services resume on the new instance. This stage requires downtime: vCenter Server becomes inaccessible during the cutover. Make sure you have vCenter Server backups before starting Stage 2.

  1. Click Next to start Stage 2 (Upgrade Source vCenter Server).
  2. Let the pre-check process complete. If there are no errors, you can continue with the upgrade. However, you may see an error about an unsupported lacpApiVersion (a check script is sketched below).
    1. Upgrade the lacpApiVersion to Enhanced LACP Support to fix the error. Follow VMware KB article 2051311 for the detailed steps.
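To find offending switches before the pre-check flags them, you can read the lacpApiVersion of each distributed switch with pyVmomi. This is a minimal sketch with placeholder connection details; 'singleLag' indicates the basic LACP mode that must be upgraded per KB 2051311.

```python
# lacp_check.py: report the LACP API version of each distributed switch.
# 'singleLag' (basic LACP) must be upgraded to 'multipleLag' (Enhanced LACP).
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="change-me", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.dvs.VmwareDistributedVirtualSwitch], True)
    for dvs in view.view:
        print(dvs.name, dvs.config.lacpApiVersion)
    view.Destroy()
finally:
    Disconnect(si)
```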


  3. Click Close and continue with the pre-check process.
  4. Select the upgrade data. The selected data will be migrated to the new vCenter Server instance. Click Next.
  5. Review the details and click Next.
  6. Monitor the process until Stage 2 finishes.


Apply Licenses

  1. Validate the vCenter Server once Stage 2 completes.
  2. On the VMware SDDC Overview page, check the notification.
  3. Click Get updated binaries and licenses.
  4. Copy the vcenter_v7 and vsan_v7 license keys.

  5. Log in to the vCenter Server.
  6. Click the navigation menu.
  7. Go to Administration.
  8. Go to Licensing and click Licenses.
  9. Click Add.

  10. Paste the vcenter_v7 and vsan_v7 license keys, each on a separate line.
  11. Click Next.
  12. Review the license details and provide names for both license keys.
  13. Click Finish.
  14. Click Assets.
  15. Select vCenter Server Systems.
  16. Select the vCenter Server.
  17. Click Assign License.
  18. Assign the vCenter Server 7 Standard license. Click Ok.

Note: We added both the vCenter and vSAN licenses but assigned only the vCenter Server key. The vSAN license key is assigned after the vSAN cluster is upgraded; those steps are covered under the vSAN upgrade section. A scripted equivalent of the license steps is sketched below.
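If you prefer to script the license steps, pyVmomi exposes the same operations through the license manager. A minimal sketch; the keys and hostname below are placeholders copied from the Get updated binaries and licenses page.

```python
# assign_license.py: add the v7 keys and assign the vCenter key.
import ssl
from pyVim.connect import SmartConnect, Disconnect

VC_KEY = "AAAAA-BBBBB-CCCCC-DDDDD-EEEEE"    # vcenter_v7 placeholder
VSAN_KEY = "FFFFF-GGGGG-HHHHH-IIIII-JJJJJ"  # vsan_v7 placeholder

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="change-me", sslContext=ctx)
try:
    lm = si.content.licenseManager
    lm.AddLicense(licenseKey=VC_KEY)
    lm.AddLicense(licenseKey=VSAN_KEY)
    # The vCenter Server itself is addressed by its instance UUID.
    lm.licenseAssignmentManager.UpdateAssignedLicense(
        entity=si.content.about.instanceUuid, licenseKey=VC_KEY)
finally:
    Disconnect(si)
```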

ESXi Host Upgrade

The process of upgrading ESXi hosts that are part of an OCVS VMware SDDC involves building new ESXi hosts and adding them to the vCenter Server. This section describes the detailed steps.

New Host Build

  1. On the VMware SDDC Overview page, check the notification.
  2. Select the check box 'I have updated the binaries and licenses in vCenter' to confirm that you have upgraded the vCenter Server and assigned the licenses as described above.


  3. Click the first ESXi host instance.
  4. In the ESXi host view, click Upgrade.


  5. Leave the default capacity type and select the confirmation checkbox.
  6. Click Upgrade. This starts a new job to create a new ESXi host.
  7. Repeat the same process for all the ESXi hosts that are part of the VMware SDDC.
  8. Go to the SDDC overview page, scroll down, and click Work Requests.


  9. Monitor the progress of all the ESXi build jobs and wait until all the requests are 100% complete.

Note: This process builds new ESXi hosts. Please ensure you have the required quota, reservation, and capacity available in OCI to spin up new ESXi host instances.

Adding Hosts to vCenter Server

Add the ESXi hosts to the vCenter Server, but do not add them to the cluster object; add them directly to the data center object. Follow the steps below for each ESXi host, one by one.

  1. Log in to the vCenter Server.
  2. Right-click on the data center.
  3. Click on Add Host.

  4. Provide the hostname or IP address.

Note: You can get the hostname or IP address from the ESXi instance page in the OCI console.

  5. Enter the username: root.
  6. Enter the password; you can copy the same password as the vCenter Server from the VMware SDDC Overview page.
  7. Click Next, then Finish.
  8. Ensure that the host is placed into maintenance mode.
  9. Repeat the same process to add the other ESXi hosts (a scripted equivalent is sketched below).
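The same add-host flow can be scripted with pyVmomi, which is convenient when several hosts are being added. A minimal sketch, assuming placeholder hostnames and credentials; the data center name oci-w01dc follows the naming used later in this guide.

```python
# add_host.py: add a new ESXi host directly under the data center object.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="change-me", sslContext=ctx)
try:
    dc = si.content.searchIndex.FindByInventoryPath("oci-w01dc")
    spec = vim.host.ConnectSpec(hostName="esxi-new-1.example.com",
                                userName="root", password="change-me",
                                force=False)
    # Without sslThumbprint the task fails with an SSL verify fault that
    # contains the host's thumbprint; set spec.sslThumbprint and retry.
    WaitForTask(dc.hostFolder.AddStandaloneHost_Task(
        spec=spec, compResSpec=None, addConnected=True))
finally:
    Disconnect(si)
```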

Adding Hosts to Distributed Switch

OCVS VMware SDDC 7.0 uses the vSphere Distributed Switch (VDS) for networking, while a newly built ESXi host uses a standard switch for its vmkernel ports. We need to add all the hosts to the distributed switch and migrate their vmkernel ports to the respective VDS port groups. After completing this task, the new and old ESXi hosts can communicate with each other and share the same network port groups.

Add replication and provisioning port groups before adding new ESXi hosts to the dvSwitch.

Note: Keep all the hosts in maintenance mode during this task.

Add New Replication and Provisioning port groups

Get the VLAN IDs for provisioning and replication networks.

  1. Log in to the OCI console.
  2. Go to the VMware SDDC overview page.
  3. Scroll down and Click on SDDC network.

  4. Select the provisioning VLAN.
  5. Note down the VLAN ID for the provisioning network.


  6. Go back and select the replication VLAN.
  7. Note down the VLAN ID for the replication network.


Create provisioning and replication port groups.

  1. Log in to the vCenter Server.
  2. Go to the networking tab under the inventory.
  3. Go to DSwitch (Distributed Switch that is in use currently).
  4. Right-click on DSwitch and select Distributed Port Group. Click New Distributed Port Group.

  5. Provide the port group name: vds01-Provisioning.


  6. Configure the settings. Select the VLAN type as VLAN.
  7. Provide the provisioning network VLAN ID that you noted down earlier.


  8. Review the details and click Finish.
  9. Repeat the same process to create the second port group, vds01-Replication, with the replication VLAN ID (a scripted equivalent is sketched below).
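Both port groups can also be created with pyVmomi. A minimal sketch; the inventory path, connection details, and VLAN IDs 100/101 are placeholders to be replaced with the IDs noted from the OCI SDDC network page.

```python
# create_pg.py: create the provisioning and replication port groups on the VDS.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="change-me", sslContext=ctx)
try:
    dvs = si.content.searchIndex.FindByInventoryPath("oci-w01dc/network/DSwitch")
    for name, vlan_id in [("vds01-Provisioning", 100), ("vds01-Replication", 101)]:
        spec = vim.dvs.DistributedVirtualPortgroup.ConfigSpec(
            name=name, type="earlyBinding", numPorts=8,
            defaultPortConfig=vim.dvs.VmwareDistributedVirtualSwitch.VmwarePortConfigPolicy(
                vlan=vim.dvs.VmwareDistributedVirtualSwitch.VlanIdSpec(
                    vlanId=vlan_id, inherited=False)))
        WaitForTask(dvs.CreateDVPortgroup_Task(spec=spec))
finally:
    Disconnect(si)
```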

Add Hosts to dvSwitch

  1. Log in to the vCenter Server.
  2. Go to the networking tab under the inventory.
  3. Right-click DSwitch (the distributed switch currently in use).
  4. Select Add and Manage Hosts.


  5. Select all the compatible ESXi hosts from the list.
  6. Click Next.


  7. Assign uplink-vmnic0 to vmnic0.
  8. Do not set an uplink for vmnic1; keep it as None for now. Click Next.


  9. Manage the vmkernel adapters.
    1. Click vmk0, then click the related hosts to review the existing port group assignment.
    2. Click Assign port group and assign Management Network to vmk0.
    3. Similarly, click each vmk port one by one and assign the respective port group as described in the table below.

vmk port interface      Port group to be assigned
----------------------  -------------------------
vmk0                    Management Network
vmk1                    vds01-vMotion
vmk2                    vds01-vSAN
vmk3                    vds01-Replication
vmk4                    vds01-Provisioning


  10. Click Next.
  11. Leave the default settings on the Migrate VM networking screen.
  12. Click Next and finish the wizard.


Review ESXi host networking

It is important to review the ESXi host networking before continuing with the next steps. Review the host connection state in the vCenter Server and ensure all the hosts show as Connected, not Disconnected or Not Responding, after completing the VDS configuration.


Also review the vmkernel ports and their associated port groups and dvSwitch. Go to ESXi host -> Configure -> Networking -> VMkernel adapters. Review all the ports and ensure that the required services are enabled on each vmk port; a listing sketch follows.
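A quick pyVmomi listing of every host's vmkernel adapters makes this review faster than clicking through each host. A minimal sketch with placeholder connection details; for VDS-attached adapters it prints the distributed port group key rather than the display name.

```python
# review_vmk.py: list each host's vmkernel adapters, IPs, and port groups.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="change-me", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        print(host.name)
        for vnic in host.config.network.vnic:
            dvport = vnic.spec.distributedVirtualPort
            pg = dvport.portgroupKey if dvport else vnic.portgroup
            print(" ", vnic.device, vnic.spec.ip.ipAddress, pg)
    view.Destroy()
finally:
    Disconnect(si)
```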


Add Hosts to the vSAN cluster

Once you have verified the networking configuration, you can proceed to add the ESXi hosts to the vSAN cluster. Keep the ESXi hosts in maintenance mode for now.

  1. Log in to the vCenter Server.
  2. Right-click the ESXi host and click Move To.
  3. Expand the data center oci-w01dc and select the cluster object oci-w01-consolidated01.
  4. Click Ok.
  5. Repeat the same process to move all the new ESXi hosts into the vSAN cluster.


Configure the vSAN disk group

The new hosts are now part of the vSAN cluster, but we still need to claim their unused disks into the existing vSAN disk group.

  1. Log in to the vCenter Server.
  2. Go to the cluster object oci-w01-consolidated01.
  3. Click Configure.
  4. Go to vSAN and click Disk Management.
  5. Click Claim Unused Disks.
  6. Claim the first disk as the cache tier and the remaining seven disks as the capacity tier. Configure the same claims on all the ESXi hosts.
  7. Click Create.
  8. Wait for the operation to complete.
  9. Review the vSAN datastore; you should see increased capacity (a verification sketch follows).
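You can confirm the capacity increase programmatically by reading the vSAN datastore summary. A minimal sketch with placeholder connection details:

```python
# vsan_capacity.py: print capacity and free space of the vSAN datastore.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

GIB = 1024 ** 3
ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="change-me", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.Datastore], True)
    for ds in view.view:
        if ds.summary.type == "vsan":
            print(ds.name,
                  f"capacity={ds.summary.capacity / GIB:.0f} GiB",
                  f"free={ds.summary.freeSpace / GIB:.0f} GiB")
    view.Destroy()
finally:
    Disconnect(si)
```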

Review NSX-T Fabric Configurations

All hosts that are part of the cluster object receive their NSX-T configuration through the NSX-T transport node profile. The cluster object is already configured with a transport node profile, so as soon as an ESXi host is added to the cluster object, the NSX-T configuration is applied to it automatically.

Review the new hosts' status in the NSX-T fabric and ensure that the NSX configuration status shows Success and the node status shows Up.

  1. Log in to the NSX-T Manager.
  2. Go to System.
  3. Expand Fabric.
  4. Select Managed by vCenter.
  5. Check the new ESXi hosts in the oci-w01-consolidated01 cluster object (a status-listing sketch follows):
    1. NSX Configuration should show Success.
    2. Node Status should be Up.
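The same status check is available through the NSX-T API. A minimal sketch, assuming an NSX-T 3.x manager at a placeholder hostname with placeholder credentials:

```python
# tn_status.py: list host transport nodes and their realized state.
import requests

NSX = "https://nsx-manager.example.com"
AUTH = ("admin", "change-me")

nodes = requests.get(f"{NSX}/api/v1/transport-nodes",
                     auth=AUTH, verify=False).json()
for node in nodes.get("results", []):
    state = requests.get(f"{NSX}/api/v1/transport-nodes/{node['id']}/state",
                         auth=AUTH, verify=False).json()
    print(node.get("display_name"), "->", state.get("state"))  # expect 'success'
```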

Prepare the Hosts for migration

All the hosts are now added to the vSAN cluster and the NSX-T fabric. If everything looks green in vSAN and the NSX-T fabric, we can prepare the hosts to run the management and application workloads.

  1. Select the ESXi host.
  2. Right-click, select Maintenance Mode, and choose Exit Maintenance Mode.
  3. Go to the Monitor tab.
  4. Click Tasks to monitor the progress.
  5. Wait for the task to complete.
  6. Click Summary and validate the HA status.

  7. Migrate a test virtual machine to the new host.
  8. Validate the virtual machine and ensure it is accessible on the network. Proceed only after you are satisfied with the network and storage validation of the test virtual machine.
  9. Repeat the same process on all the ESXi hosts.

Make sure you validate the vSphere HA status once all the hosts are out of maintenance mode; a scripted maintenance mode exit is sketched below.
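Taking hosts out of maintenance mode can also be scripted. A minimal sketch for a single host, with placeholder names and credentials:

```python
# exit_mm.py: take a host out of maintenance mode and show the result.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="change-me", sslContext=ctx)
try:
    host = si.content.searchIndex.FindByDnsName(
        datacenter=None, dnsName="esxi-new-1.example.com", vmSearch=False)
    if host.runtime.inMaintenanceMode:
        WaitForTask(host.ExitMaintenanceMode_Task(timeout=0))
    print(host.name, "inMaintenanceMode =", host.runtime.inMaintenanceMode)
finally:
    Disconnect(si)
```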

Terminating Old Hosts

Once all the new hosts are out of maintenance mode, we can start terminating the old ESXi hosts one by one. Note that billing applies to the new ESXi hosts as soon as the new instances are available and active in OCI, so it is important to terminate the old hosts promptly to avoid additional charges for hosts that are no longer in use.

This step is critical, so make sure you follow the steps below to terminate the old ESXi hosts.

  1. Click the old ESXi host.
  2. Right-click, select Maintenance Mode, and choose Enter Maintenance Mode.
  3. In the maintenance wizard, select Full data migration. This is a must so that all virtual machine data and the associated vSAN objects are migrated to the other hosts (a scripted equivalent is shown after these steps).
  4. Click Go to pre-check.
  5. Click Ok.
  6. Click Pre-Check and wait for the pre-check process to complete.

  7. Click Enter Maintenance Mode if the pre-check result is green.


  8. Wait for the host to go into maintenance mode.
  9. Once the host is in maintenance mode, right-click it and click Disconnect.
  10. Remove the host from the inventory.
  11. Repeat the same process on all the remaining old ESXi hosts.
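The critical part of this procedure, entering maintenance mode with Full data migration, can be scripted as follows. A minimal sketch; the 'evacuateAllData' decommission mode corresponds to the Full data migration option in the UI, and the hostnames and credentials are placeholders.

```python
# evacuate_host.py: enter maintenance mode with full vSAN data migration.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="change-me", sslContext=ctx)
try:
    host = si.content.searchIndex.FindByDnsName(
        datacenter=None, dnsName="esxi-old-1.example.com", vmSearch=False)
    spec = vim.host.MaintenanceSpec(
        vsanMode=vim.vsan.host.DecommissionMode(objectAction="evacuateAllData"))
    # timeout=0 waits indefinitely; a full evacuation can take hours.
    WaitForTask(host.EnterMaintenanceMode_Task(
        timeout=0, evacuatePoweredOffVms=True, maintenanceSpec=spec))
finally:
    Disconnect(si)
```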

Terminate ESXi instances in the OCI

  1. Log in to the OCI console.
  2. Go to the VMware Solution page.
  3. Select the appropriate compartment and VMware SDDC.
  4. Go to the ESXi hosts that are marked as Need attention.


  5. Click Terminate.


  6. Repeat the same process for all the old ESXi hosts.

Upgrade vSAN

At this point, the VMware SDDC, vCenter Server, and ESXi hosts are upgraded to VMware vSphere 7 Update 3. However, the vSAN disk group is still running the old on-disk format version. Follow the steps below to upgrade it.

  1. Log in to the vCenter Server.
  2. Click the cluster object and select Configure.
  3. Go to vSAN and click Services.
  4. Click Pre-Check Upgrade.
  5. Click Upgrade if the pre-check results show Ready to upgrade.


  6. Monitor the upgrade process and check the vSAN cluster health.

Upgrade vSphere Distributed Switch

The new OCVS VMware SDDC 7.0 architecture makes slight changes to the networking design. In the new SDDC, we move away from the traditional N-VDS-backed port groups to VDS-backed port groups. VDS-backed port groups require version 7 of the distributed switch. Below are the steps to upgrade the distributed switch and prepare the VDS for the NSX-T configuration.

  1. Log in to the vCenter Server.
  2. Go to the networking tab.
  3. Select DSwitch.
  4. Right-click and select Upgrade.

  5. Select the latest version and click Finish.


  6. Verify the VDS version; it should now show 7.0.3 (a read-out sketch follows).
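You can read the switch version programmatically as well. A minimal sketch with placeholder connection details and inventory path:

```python
# vds_version.py: read the distributed switch version after the upgrade.
import ssl
from pyVim.connect import SmartConnect, Disconnect

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="change-me", sslContext=ctx)
try:
    dvs = si.content.searchIndex.FindByInventoryPath("oci-w01dc/network/DSwitch")
    print(dvs.name, dvs.config.productInfo.version)  # expect 7.0.3
finally:
    Disconnect(si)
```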


The old network architecture had only one uplink port on the distributed switch, while the new architecture requires two uplink ports on the dvSwitch. In the step below, we add an additional uplink to the VDS but do not assign any vmnic to it at this point.

  1. Right-click on the DSwitch.
  2. Select Settings -> Edit Settings.


  3. Click the Uplinks tab.
  4. Click Add.
  5. Provide the name Uplink-vmnic1.
  6. Click Ok.


Migrate N-VDS to the VDS port group

The following steps will assist you in migrating the host networking from N-VDS-backed port groups to VDS-backed port groups.

A. Create a new uplink profile

  1. Log in to the NSX Manager.
  2. Go to System -> Fabric -> Profiles -> Uplink Profiles.
  3. Click Add Profile.

  4. Provide the profile details:
    1. Name – provide a profile name.
    2. LAGs – leave it default (blank).
    3. Teaming – click Add.
      1. Teaming Policy – Load Balance Source MAC Address.
      2. Active Uplinks – uplink-1, uplink-2.
    4. Transport VLAN – provide the NSX VTEP VLAN ID, which you can obtain from the OCI console.

Description automatically generated

Graphical user interface, text, application

Description automatically generated

Graphical user interface, text, application

Description automatically generated

Graphical user interface, application

Description automatically generated

    5. MTU – keep it blank.
  5. Click Add.

Important: Follow steps B, C, and D below on all the ESXi hosts one by one.

B. Remove NSX from the ESXi hosts

  1. Log in to the vCenter Server.
  2. Select the ESXi Host.
  3. Right-click and select Maintenance Mode.
  4. Click Enter Maintenance Mode.


  5. Select Ensure accessibility and click Go to Pre-Check.


  6. Run the pre-check.
  7. Click Enter Maintenance Mode.


Remove NSX Configurations

  1. Log in to the NSX Manager.
  2. Go to System -> Fabric -> Nodes and select vCenter Server.
  3. Select the cluster object.
  4. Click Actions and then click Detach Transport Node Profile.


  5. Select the ESXi host that was put into maintenance mode.
  6. Click Remove NSX.


  7. Select Force Delete.


C. Configure NSX with the VDS-backed configuration

  1. Log in to the NSX Manager.
  2. Go to System -> Fabric -> Nodes and select vCenter Server.
  3. Select the same host that was unprepared in the previous step.
  4. Click Configure NSX.

  5. Host details – leave it default.
  6. Configure NSX:
    1. Type – VDS.
    2. Name – select DSwitch; it should auto-populate in the drop-down.
    3. Transport Zone – select Overlay-TZ and VLAN-TZ from the drop-down list.
    4. Uplink Profile – associate the new uplink profile created in Step A.
    5. IP Assignment (TEP) – Use IP Pool.
    6. IP Pool – VTEP-IP-Pool.
    7. Teaming Policy – map the VDS uplink ports to each uplink.


  7. Click Finish.
  8. Monitor the NSX configuration progress until it completes successfully.

D. Assign the second uplink and exit maintenance mode

  1. Log in to the vCenter Server.
  2. Click the same host that was configured with the new NSX configuration.
  3. Click Configure.
  4. Go to Virtual Switches.
  5. Go to the VDS DSwitch.
  6. Click Manage Physical Adapters.
  7. Select Uplink-vmnic1 and click +.
  8. Add the vmnic1 adapter.
  9. Click Ok.
  10. Right-click the ESXi host.
  11. Select Maintenance Mode -> Exit Maintenance Mode.

Repeat steps B, C, and D for all the other ESXi hosts, one by one.

Create and assign a new transport node profile

All the ESXi hosts are now migrated from N-VDS-backed port groups to VDS-backed port groups. We should also create a new transport node profile with the same configuration and apply it to the cluster object. This ensures that every host in the cluster stays compliant with the same NSX configuration.

  1. Log in to the NSX Manager.
  2. Go to System -> Fabric -> Profiles -> Transport Node Profiles.

  3. Click Add Profile and provide the details:
    1. Name – provide a profile name.
    2. New Node Switch – select VDS.
    3. Name – select DSwitch; it should auto-populate in the drop-down.
    4. Transport Zone – select Overlay-TZ and VLAN-TZ from the drop-down list.
    5. Uplink Profile – associate the new uplink profile created in Step A.
    6. IP Assignment (TEP) – Use IP Pool.
    7. IP Pool – VTEP-IP-Pool.
    8. Teaming Policy – map the VDS uplink ports to each uplink.
  4. Click Add.


  5. Go to System -> Fabric -> Nodes and select the vCenter.
  6. Select the cluster object.
  7. Click Configure NSX.


  8. Select the newly created transport node profile and click Apply.


  9. Verify the status of all ESXi hosts under Host Transport Nodes.


Post Upgrade Tasks

The VMware SDDC upgrade is considered complete once the vCenter Server, ESXi hosts, vSAN, VDS, and NSX-T fabric are upgraded as described above. Upgrading the virtual machine hardware and VMware Tools is recommended but optional. The following post-upgrade tasks are also recommended once all the upgrade activities are complete.

  1. Validate the environment for any alerts or warnings.
  2. Review vSAN health.
  3. Review NSX-T health for all the NSX appliances.
  4. Validate critical application workloads and ensure that all services are running as expected.
  5. Back up the vCenter Server, the NSX appliances, and the VDS configuration.

Authors and Contributors

Jatin Purohit, Sr. Technical Marketing Manager, CIBG, VMware

