The COM-HPC specification has recently been officially approved, and some embedded specialists, including Congatec, have already launched modules on the market. In a short interview, Christian Eder explains what impact this will have on the embedded market.
On February 24, COM-HPC was officially ratified and is available for download. How do you assess the demand for the new standard?
We already see a huge demand for COM-HPC, as more and more processing power is needed at the edge – this demand will be met with COM-HPC. To cite a few numbers: According to IDC, half of enterprises' critical infrastructure will be implemented at the edge in the future. As a result, the global edge computing market is expected to grow at a compound annual growth rate of over 23 % from $3.5 billion in 2019 through 2026. Drivers here are IT & telecom and colocation applications, which represent around 54 % of the total market. Manufacturing applications are also expected to register 20 % growth in edge data centers by 2026.
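To put the growth figure in perspective, a compound annual growth rate can be projected forward with simple arithmetic. The sketch below uses the interview's numbers; the resulting 2026 value is a back-of-the-envelope estimate, not a figure from the IDC report.

```python
# Project a market size forward under a compound annual growth rate (CAGR).
# Inputs come from the interview; the 2026 result is only an illustration.

def project(base, cagr, years):
    """Compound `base` at `cagr` (e.g. 0.23 for 23 %) over `years` years."""
    return base * (1 + cagr) ** years

# $3.5 billion in 2019, compounded at 23 % per year through 2026 (7 years):
estimate_2026 = project(3.5, 0.23, 2026 - 2019)
print(f"~${estimate_2026:.1f} billion")  # roughly $14.9 billion
```

At "over 23 %" the actual projection would land somewhat higher; the point is that the market roughly quadruples over the period.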
In addition, there are numerous edge server applications in control racks and machines. By machines, we mean not only stationary machines, but also mobile machines and cobots. They process many tasks in the field of artificial intelligence. Another big demand comes from the area of COM-HPC clients – here, systems with screens use high-speed interfaces such as PCIe Gen 4.
For what kinds of developments is COM-HPC predestined, compared to COM Express?
Simply put, for all applications that require more performance and more bandwidth over more I/Os than COM Express offers. For example, COM-HPC Client provides up to 49 PCIe Gen 4 and Gen 5 lanes, twice as many as COM Express Type 6. It can also drive four displays instead of three and supports four USB 4 ports and up to two 25 Gigabit Ethernet interfaces.
For server-on-modules, COM-HPC also provides twice as many PCIe lanes as COM Express Type 7 – 65 in total – and is likewise specified for the high transfer rates of Gen 4 and Gen 5. Initial preliminary tests show that even PCIe Gen 6 is possible; however, the PCIe Gen 6 specification must be finalized first. With up to 1 TB of DRAM and eight 25 GbE interfaces, the server modules even meet data center requirements. They are specified for a maximum power consumption of 300 W, which allows significantly higher CPU performance.
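The bandwidth gain behind these lane counts can be estimated from the PCIe line rates: Gen 4 signals at 16 GT/s and Gen 5 at 32 GT/s per lane, both with 128b/130b encoding. The sketch below computes the approximate raw throughput per direction; real-world figures are lower once packet and protocol overhead are included.

```python
# Approximate usable PCIe bandwidth per direction, before protocol overhead.
# Lane counts are from the text: 49 lanes (COM-HPC Client), 65 (COM-HPC Server).

def lane_gbps(gt_per_s):
    """Effective GB/s per lane per direction after 128b/130b line coding."""
    return gt_per_s * (128 / 130) / 8

for gen, rate in (("Gen 4", 16), ("Gen 5", 32)):
    per_lane = lane_gbps(rate)
    print(f"{gen}: {per_lane:.2f} GB/s per lane, "
          f"{49 * per_lane:.0f} GB/s over 49 lanes, "
          f"{65 * per_lane:.0f} GB/s over 65 lanes")
```

So a fully populated Gen 5 server module offers on the order of 250 GB/s of aggregate PCIe bandwidth in each direction, which explains the data-center comparison.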
Why is the trend moving more and more towards edge computing?
The main driver here is the need to process more and more data with ever lower latencies. One example: the control of autonomous robots or vehicles cannot be carried out in the cloud – the latencies are too high, and the connection is not reliable enough. So computing power is needed where the data is generated. In addition, the amount of data continues to increase, for example in situational awareness: streaming data from multiple 4K cameras to the cloud for processing is not feasible. On top of that, many users do not want to store their business-critical data in a public cloud – whether for data protection reasons or out of concern about unauthorized data leakage.
Technical reasons were decisive here: Intel's eleventh-generation Core processors are currently the only embedded x86 processors on the market that support PCI Express Gen 4. Support for such high-speed interfaces is the special feature of COM-HPC, which currently makes »Tiger Lake« the first choice.
However, we also rely on AMD: for example, in the COM Express Compact module with AMD's Ryzen V2000 processor. Based on AMD's processor roadmap, there will also be COM-HPC modules in the future. In addition, COM-HPC is explicitly designed for other computing devices such as Arm processors, FPGAs, and GPGPUs. There will be a very diverse product range around COM-HPC in the future.
What role do real-time applications play regarding COM-HPC?
Low latencies are, as already mentioned, one of the main reasons for edge computing – and real-time is never far away here. We are also talking about fog servers, which are used to orchestrate numerous distributed processes and machines. In some cases, they also consolidate previously decentralized control systems – for example, a production cell in which several robot controllers were previously installed in a decentralized manner.
COM-HPC gives applications the cores and I/O they need to consolidate multiple, previously separate systems onto a single hardware platform. A real-time hypervisor partitions the hardware and assigns each application its own resources. This keeps the real-time control independent of the IoT gateway with firewall, the HMI, and an AI instance for data analytics. This saves system costs and increases reliability, since the probability of failure rises with every additional system.
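The partitioning idea can be illustrated with a toy model of how such a hypervisor might assign dedicated resources to the consolidated workloads. The partition names, core counts, and memory sizes below are hypothetical; real real-time hypervisors use their own configuration formats.

```python
# Toy model of hypervisor-style resource partitioning: each consolidated
# workload gets exclusive CPU cores and a fixed RAM budget. All names and
# sizes are invented for illustration only.

partitions = {
    "rt_control":   {"cores": {0, 1}, "ram_gb": 4},  # hard real-time loop
    "iot_gateway":  {"cores": {2},    "ram_gb": 2},  # firewall + gateway
    "hmi":          {"cores": {3},    "ram_gb": 4},  # operator screen
    "ai_analytics": {"cores": {4, 5}, "ram_gb": 8},  # data analytics
}

def cores_disjoint(parts):
    """True if no CPU core is shared between partitions (strict isolation)."""
    seen = set()
    for p in parts.values():
        if seen & p["cores"]:
            return False
        seen |= p["cores"]
    return True

assert cores_disjoint(partitions)  # each workload owns its cores exclusively
```

The key property is the disjointness check: because no core is shared, a crash or load spike in the analytics partition cannot steal cycles from the real-time control loop.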