Enterprises worldwide, across geographies and industries, are pursuing digital transformation to stay ahead of the competition through business agility and operational efficiency. Transformation has become non-negotiable as organizations face relentless competitive pressure from both traditional rivals and digital upstarts.
Successful digital transformation journeys are built on the foundations of modern infrastructure paradigms such as virtualization, cloud computing, containers, and serverless infrastructure, along with innovations like artificial intelligence and machine learning (AI/ML).
Cloud and virtualization technologies have served as the bedrock on which organizations of all sizes have modernized their datacenter environments. By virtualizing compute, storage, and networks, organizations can transform to modern software-defined datacenters that employ a cloud operating model for better agility, flexibility, utilization, and scalability.
Cloud computing, and the virtualization that underpins it, can also minimize the energy and carbon associated with running workloads, contributing to the sustainability goals that many enterprises have set.
This section describes how the VMware Cloud Well-Architected Framework can help organizations become more sustainable. The following sections outline strategies for reducing the energy and carbon associated with running workloads on top of the platform.
Achieving Workload Energy Efficiency
Workload energy efficiency means minimizing the energy required to run workloads hosted on IT infrastructure housed in datacenters. There are three components to achieving workload energy efficiency:
- Making energy visible
- Maximizing productive host utilization
- Designing compute-efficient applications
Making Energy Visible
For a host (server), energy consumption is an intrinsic characteristic that reflects how heavily workloads use its compute resources, such as CPU, memory, and disk. Just as metering improves the energy efficiency of our homes, making container and host energy visible enables benchmarking that we can act on. That visibility, in turn, informs strategies for management and optimization.
VMware Aria Operations can make energy visible by reporting power and energy metrics at the host and workload level.
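To make the idea concrete, here is a minimal sketch of how host energy might be estimated and attributed to workloads. It assumes a simple linear power model between idle and full load and attributes energy in proportion to CPU share; both are common simplifications for illustration, not VMware Aria Operations' actual method, and the wattage figures are illustrative.

```python
def host_power_watts(cpu_util, idle_watts=100.0, max_watts=400.0):
    """Estimate host power draw with a simple linear model.

    cpu_util: CPU utilization in [0, 1].
    idle_watts / max_watts: power at idle and full load (illustrative
    values; real figures come from the server's spec sheet or from
    out-of-band power metering).
    """
    return idle_watts + cpu_util * (max_watts - idle_watts)


def attribute_energy(vm_cpu_shares, host_kwh):
    """Split a host's measured energy across VMs by CPU share.

    vm_cpu_shares: {vm_name: relative CPU usage over the period}.
    host_kwh: total host energy consumed over the same period.
    """
    total = sum(vm_cpu_shares.values())
    return {vm: host_kwh * share / total
            for vm, share in vm_cpu_shares.items()}
```

For example, a host drawing 100 W idle and 400 W at full load draws roughly 250 W at 50% CPU utilization, and a VM responsible for two-thirds of the CPU activity would be attributed two-thirds of the host's energy.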
Maximizing Host Utilization
Before virtualization, the best practice was to run one application per physical server. As a result, servers typically ran at only 5-15% utilization. This gross underutilization translated into massive energy waste, incurring both financial and environmental costs. Virtualization enables consolidation, allowing servers to run at much higher utilization and sharply reducing datacenter electricity consumption. However, because many servers today still run at only 20-25% utilization, there is significant room for improvement. Key opportunities for innovation include:
- Enabling “cloud-sharing” that puts spare capacity to productive use by transient and non-time-sensitive workloads.
- Recouping stranded capacity from oversized virtual machines, containers, and servers that no longer do useful work (sometimes called “zombies”).
- Leveraging hybrid public cloud bursting to provide on-demand peak and backup capacity, enabling customers to reduce on-premises infrastructure and run it with higher utilization.
These innovations can deliver productivity and sustainability improvements while still meeting performance and availability requirements.
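The arithmetic behind consolidation is straightforward. The sketch below estimates server count and power savings when many underutilized workloads are packed onto fewer hosts; the per-server wattage and utilization figures are illustrative assumptions, and real capacity planning must also account for memory, storage, and headroom for failover.

```python
import math


def consolidation_savings(workloads, per_workload_util,
                          target_util, watts_per_server=350.0):
    """Estimate how consolidation cuts server count and power.

    workloads: number of workloads, each previously on its own server.
    per_workload_util: average CPU utilization each workload drives
        (e.g. 0.10 for the historical 10% figure).
    target_util: utilization we are willing to run consolidated hosts at.
    Returns (servers_after, fraction_of_power_saved).
    """
    servers_after = math.ceil(workloads * per_workload_util / target_util)
    power_before = workloads * watts_per_server
    power_after = servers_after * watts_per_server
    return servers_after, 1 - power_after / power_before
```

For instance, 100 workloads that each drove 10% utilization on dedicated servers can fit on 15 servers run at 70% utilization, saving about 85% of the server power.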
Designing Compute-Efficient Applications
Compute-efficient applications are the focus of the emerging practice of green software engineering, in which applications are designed, architected, coded, and tested to minimize the use of CPU, memory, network, and storage. Mobile-phone applications are good examples: mobile devices have limited battery power, so the best-designed apps are built to conserve it. The Green Software Foundation has a working group that researches and develops tools, code, libraries, and training for building compute-efficient applications. It also has a working group developing guidance to help users and developers make informed choices about their tools, approaches, and architectures.
Achieving Workload Carbon Efficiency
Workload Placement and Scheduling
A less-obvious component of workload carbon efficiency is placement and scheduling: when and where workloads are run. Integrating electricity carbon intensity as an optimization factor into workload management can significantly reduce system carbon emissions. A key characteristic of the electricity that powers datacenter workloads is its carbon intensity: the weighted average of the carbon emitted during the generation of that electricity across all generators on the grid. Emissions vary from near zero for wind, solar, hydro, and nuclear power plants to very carbon-intensive for coal and natural gas power plants (e.g., 500 kg CO2/MWh). Because the mix of generators contributing electrons, and the quantity each generates, varies on the local grid at any given time, a grid's carbon intensity also varies over time.
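The weighted average described above can be sketched directly. The generation mix below is illustrative, with per-source intensities in the ranges mentioned in the text.

```python
def grid_carbon_intensity(generation_mix):
    """Weighted-average carbon intensity of a grid, in kg CO2 per MWh.

    generation_mix: list of (mwh_generated, kg_co2_per_mwh) pairs,
    one per generation source currently feeding the grid.
    """
    total_mwh = sum(mwh for mwh, _ in generation_mix)
    total_co2 = sum(mwh * ci for mwh, ci in generation_mix)
    return total_co2 / total_mwh


# Illustrative mix: 40 MWh of wind at ~0 kg CO2/MWh and
# 60 MWh of natural gas at 500 kg CO2/MWh.
mix = [(40.0, 0.0), (60.0, 500.0)]
```

With this mix, the grid's carbon intensity is 300 kg CO2/MWh; as the wind share grows overnight or falls at peak, the same calculation yields a different value, which is why intensity varies over time.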
For workloads that are not latency-sensitive and/or not geographically restricted, the management system can determine when and/or where to run them based on when and where the electricity is cleanest. For example, the management system can delay running a workload or run it in an alternate datacenter. This idea isn't far-fetched: renewables and other low-carbon sources already supply a substantial share of global electricity generation. In aggregate, workload placement and scheduling could reduce demand for carbon-intensive electricity. In the longer term, managing datacenter workload demand could also improve the economics and stability of the electricity grid by helping balance demand and supply.
Carbon-aware workloads are necessary to enable placement and scheduling that optimizes system carbon emissions. Quality-of-service requirements such as latency, geographic restrictions, and mission-critical elements of these workloads can be communicated back to the management system, enabling it to identify and prioritize workloads that have the flexibility to shift their scheduling and/or placement. The Green Software Foundation has a project focused on developing an SDK to enable carbon-aware applications.
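The interaction between a workload's constraints and the scheduler can be sketched as follows. This is a simplified illustration, not any particular product's or SDK's API: the slot fields, latency figures, and carbon intensities are all hypothetical, and a real system would also weigh cost, capacity, and data-sovereignty rules.

```python
def pick_slot(slots, max_latency_ms=None, allowed_regions=None):
    """Pick the lowest-carbon slot that satisfies a workload's constraints.

    slots: candidate (region, time) options, each a dict like
        {"region": "eu-north", "start_hour": 14,
         "latency_ms": 90, "kg_co2_per_mwh": 30}
    Constraints are optional: latency-sensitive workloads pass
    max_latency_ms; geographically restricted ones pass allowed_regions.
    """
    candidates = [
        s for s in slots
        if (max_latency_ms is None or s["latency_ms"] <= max_latency_ms)
        and (allowed_regions is None or s["region"] in allowed_regions)
    ]
    if not candidates:
        raise ValueError("no slot satisfies the workload's constraints")
    return min(candidates, key=lambda s: s["kg_co2_per_mwh"])


slots = [
    {"region": "us-east",  "start_hour": 2,  "latency_ms": 20, "kg_co2_per_mwh": 420},
    {"region": "eu-north", "start_hour": 14, "latency_ms": 90, "kg_co2_per_mwh": 30},
    {"region": "us-west",  "start_hour": 23, "latency_ms": 35, "kg_co2_per_mwh": 210},
]
```

An unconstrained batch job would land in the cleanest slot (here, eu-north), while a workload capped at 50 ms latency falls back to the cleanest nearby option (us-west), showing how flexibility translates into lower emissions.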
As we can see, there are pathways to zero-carbon clouds that can help accelerate the coming transition to a low-carbon economy. Innovations that maximize the productive use of cloud infrastructure will bring significant economic and environmental benefits. And managing workloads to use the cleanest energy can help stabilize the grid and provide lower-cost electricity. Some of these innovations can leverage existing capabilities. Others will require the maturation and adoption of emerging capabilities, such as hybrid cloud bursting to provide on-demand capacity for peak loads.