Altera: Keynote at embedded world

FPGAs and AI – the perfect combination at the edge

13 February 2025, 7:58 a.m. | Iris Stroh
Sandra Rivera, Altera: »FPGAs are particularly suitable for AI applications at the edge because they offer low latency, deterministic behaviour and high efficiency – this is especially true in areas such as robotics, industrial automation and medical imaging.«
© Altera

Sandra Rivera, CEO of Altera, is the keynote speaker at this year's embedded world Conference. In an interview with Markt & Technik, she explains how FPGAs are used to enable AI in a broad range of use cases and how easy it is to use them.


Markt & Technik: Why do you think that AI applications at the edge represent a great opportunity for FPGA providers in particular?

Sandra Rivera: Historically, FPGAs have been particularly well suited for markets and technologies that are evolving, dynamic and changing. One example is wireless communications infrastructure and the transitions from 1G to 2G, 3G, 4G, 5G and now 6G. These platforms must be future-proof, with sufficient scope and flexibility to allow developers to adapt them to changing standards and requirements for years to come.

AI is a great example of an industry that is changing and dynamic. Today, FPGAs are broadly used in edge applications, and as the AI market evolves and businesses look to bring greater levels of intelligence to the edge, we see many opportunities for FPGA market expansion.


Can you give a few examples of applications?

At the edge, FPGAs are well suited to running AI inference where power, cost, size and weight are at a premium. One example is video and image processing, where low latency and deterministic behaviour are crucial. This applies in particular to robotic systems, industrial automation systems, medical imaging and, of course, all types of unmanned systems that move autonomously.

The fine-grained parallel architecture of FPGAs makes them ideally suited to these use cases, because they can compute multiple functions simultaneously in real time. They do exactly what is expected of them as soon as they are programmed.

Compared to GPUs or CPUs, FPGAs deliver high performance at low power in a small form factor. And their flexibility means they can be reprogrammed as workloads and standards evolve.

What are some areas where FPGAs are not as efficient for running AI?

Broadly speaking, building and training very large language models in centralised data centres is best served by clusters of vector processing engines, like those found in GPUs.

This is not to say that data centres are not also an opportunity for FPGA providers. There are many use cases where FPGAs sit next to a GPU or CPU to act as a co-processor or to pre-process data before it is used for training.

The developer environment is much more user-friendly when targeting GPUs and MPUs. Do developers have to be RTL specialists to use an FPGA?

Of course, low-level FPGA programming is a challenge for many. This is precisely why we have invested a lot of energy and money in our AI software. Not only does it enable us to reach more developers, including those who can't program RTL, but it also allows any developer to use the typical AI frameworks and models they are familiar with.

If we consider the models and frameworks that we can address with this tool suite and OpenVINO, a lot is already covered, and there is little interaction between software and hardware at the lowest level. Thanks to this high level of abstraction, most developers won't notice any difference at all in terms of the hardware they are developing on.

FPGAs have another advantage over MCUs: developers can use our devices to consolidate different workloads in one system. Instead of implementing an AI system based on an MCU, an NPU, logic blocks and so on, all of which have to be programmed differently and whose timing has to be coordinated, developers can implement the various workloads, plus AI/ML functionality, in a single FPGA.

So does that mean that a ‘normal’ developer of an AI application does not need RTL skills to program an FPGA?

Exactly, although we support two options. FPGA experts can easily add AI/ML capabilities because we use the same tool flow – a clear advantage for Altera, because our competitors use a separate tool flow for AI.

Experts in other fields, such as biology, aerodynamics, computer vision, autonomous vehicles or factory automation, can use industry-standard libraries and frameworks like PyTorch and TensorFlow alongside our AI Suite. And thanks to the abstraction between hardware and software already described, they do not come into contact with RTL at all.

FPGAs don't exactly have a reputation for being energy efficient. How can FPGAs help to lower power in edge use cases?

Compared to a CPU or GPU, FPGAs are significantly more power-efficient when you look at performance per dollar or performance per total cost of ownership. The features built into FPGAs, including embedded processors, AI-infused DSP blocks and high-speed I/O, combined with their programmable fabric, mean developers can use an FPGA as a custom AI accelerator engine, often consolidating multiple system components into a single device.

From our Agilex 7 FPGAs targeting high-performance applications to Agilex 5 FPGAs targeting mainstream applications and Agilex 3 FPGAs targeting cost- and power-optimised applications, Altera offers a range of options to address different system requirements.

How important is the AI business for Altera?

AI provides us with a very good opportunity to expand our total addressable market and accelerate growth. FPGAs have always been used at the edge and across many different platforms, and with the features we’re building into the silicon and the tools we’re offering to developers, we’re enabling developers to harness the power of an FPGA to bring greater levels of intelligence to more end applications. Also, because FPGAs typically have lifecycles of 10 to 15 years and beyond, they provide a great way to future-proof your investments.

Most of the business models we have developed for the near future are based on the use of standard FPGAs in edge platforms, whether in manufacturing, industrial automation or elsewhere.

With AI, we can expand the possibilities for use in these target markets even further. We are convinced that in the next few years, more than half of all edge platforms will have some level of AI and machine learning functionality. And a large part of the growth we see will come from AI, machine vision and ML applications.

Do you see the innovations as being more hardware or software-related?

The FPGA business is primarily a software business. We sell silicon, but to really take advantage of the high-performance programmable fabric, high-speed I/Os, memory controllers, communication interfaces and embedded hard processor systems featured in modern FPGAs – we need the appropriate software.

On the hardware side, we are currently well positioned with the innovations we have introduced with Agilex. These include a logic fabric that delivers twice the performance per watt of competing FPGAs, AI-infused tensor blocks and high-speed transceivers supporting data rates of 116 Gbps.

When it comes to software and IP, we have significantly increased our investments to lower the barriers to entry and increase the number of developers who can access our hardware. In many cases, it is also about showing what is possible with our FPGAs in the first place, so we develop reference designs to demonstrate the capabilities of our hardware, typically in collaboration with our customers and partners from our ecosystem. I will show some examples in my keynote at embedded world. And, of course, we will continue these efforts.

