High-performance computing and AI

AMD improves energy efficiency of data centers

13 May 2022, 7:30 a.m. | Tobias Schlichtmeier

AMD's goal is to improve the energy efficiency of accelerated computing nodes, which consist of EPYC CPUs and Instinct GPUs, by up to 30 times by 2025, from a 2020 baseline. The nodes are used to run HPC and AI training applications.

As more devices become »smart devices« with embedded processors, connected to the Internet and often equipped with cameras, the volume of data is growing at an exponential pace. Artificial intelligence (AI) and high-performance computing (HPC) are permanently changing the computing landscape by enabling the analysis of this vast amount of data. The result: high-value analytics, automated services, improved security, and much more. The challenge, however, is equally obvious: the scale of these advanced computations demands ever-increasing energy consumption.

As a manufacturer of high-performance processors for the most demanding analytics, AMD places a high priority on energy efficiency during product development. To this end, a holistic approach is taken to optimize power consumption. This includes architecture, packaging, connectivity and software. The focus on energy efficiency aims to reduce costs, conserve resources and mitigate climate impact.

This priority on energy efficiency is not new. Back in 2014, AMD voluntarily set a goal to increase the typical energy efficiency of its mobile processors by a factor of 25 by 2020. According to the company, it actually surpassed that goal, achieving an improvement by a factor of 31.7.

Last year, AMD therefore announced a new 30x25 goal: achieve a 30x improvement in energy efficiency by 2025, starting from a 2020 baseline for accelerated data center compute nodes. These nodes, equipped with AMD EPYC CPUs and AMD Instinct accelerators, are designed to meet some of the world's fastest-growing compute demands in AI training and HPC applications. These applications are essential for scientific research in climate prediction, genomics, and drug discovery, as well as for training AI neural networks for speech recognition, language translation, and expert recommendation systems. Their computational requirements are growing exponentially. AMD believes it can optimize power utilization for these and other accelerated compute node applications through architectural innovation.

AMD and the industry as a whole recognize that efficiency improvements in data centers can help reduce greenhouse gas emissions and improve environmental sustainability. For example, if all global AI and HPC server nodes made similar improvements, it is projected that up to 51 billion kilowatt hours (kWh) of electricity could be saved between 2021 and 2025 compared to industry trends. This is equivalent to $6.2 billion in electricity savings and the carbon benefit of 600 million tree seedlings grown for ten years.

In practice, achieving the 30x target means that in 2025, the energy required by an AMD accelerated compute node for a single computation will be about 97 percent lower than in 2020. Getting there will not be easy: the energy efficiency of an accelerated compute node must improve more than 2.5 times faster than the industry-wide rate of improvement over the 2015-2020 period.
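The arithmetic behind the 97 percent figure can be checked directly: a 30x gain in energy efficiency means each computation needs 1/30 of the baseline energy. A minimal sketch (energy values normalized to the 2020 baseline, which is an assumption for illustration only):

```python
# A 30x efficiency gain means each computation uses 1/30 of the
# 2020 baseline energy. Values are normalized; 1.0 = 2020 baseline.
baseline_energy = 1.0                 # energy per computation in 2020 (normalized)
energy_2025 = baseline_energy / 30    # energy per computation at the 30x goal
reduction_pct = (1 - energy_2025 / baseline_energy) * 100
print(f"Energy per computation drops by {reduction_pct:.1f} %")  # ~96.7 %, rounded to 97
```

The exact value is 1 − 1/30 ≈ 96.7 percent, which the article rounds to 97.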

Progress after one year

Where do we stand? As of mid-2022, AMD is well on its way to achieving the 30x25 goal. It reports a 6.79x improvement in energy efficiency over the 2020 baseline, measured on accelerated compute nodes with a third-generation AMD EPYC CPU and four AMD Instinct MI250X GPUs. The progress report is based on a measurement methodology validated by renowned computing energy efficiency researcher and author Dr. Jonathan Koomey.
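From the reported numbers one can estimate how much further improvement is still needed. A rough sketch, assuming calendar-year granularity (three remaining years, 2022 to 2025) and steady compounding, neither of which the article specifies:

```python
# Remaining efficiency gain needed to go from the reported 6.79x
# (mid-2022) to the 30x target by 2025. Calendar-year granularity
# and steady compounding are simplifying assumptions.
goal = 30.0       # target efficiency multiple over the 2020 baseline
achieved = 6.79   # reported multiple as of mid-2022
remaining = goal / achieved            # further gain still required (~4.42x)
years_left = 2025 - 2022               # rough remaining horizon
annual_rate = remaining ** (1 / years_left)
print(f"Still needed: {remaining:.2f}x overall, ~{annual_rate:.2f}x per year")
```

Under these assumptions the node efficiency would have to improve by roughly a factor of 1.6 per year to close the gap, which illustrates why the article calls the remaining path a long one.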

While there is still a long way to go to reach the 30x25 goal, the work of the engineers and the results to date are encouraging. AMD will continue to report annually on progress.
