ASICs, Memory, 2.5D Integration

Performance Plus Flexibility - New Class of AI ASICs

June 13, 2018, 5:58 a.m. | Heinz Arnold
Mike Gianfagna, eSilicon: »We see a huge market for the acceleration of AI algorithms. The "neuASIC-Platform" will give this market segment a significant boost.«
© eSilicon

With a whole new class of ASICs, eSilicon wants to pave the way to artificial intelligence.


The idea behind it: Users can create exactly the AI chips that are required for their respective use on the basis of a new development environment that eSilicon has named "neuASIC" platform. It combines the advantages of ASICs with the flexibility required to adapt the AI chips to different neural network architectures and emerging standards.

This "Machine Learning ASIC Platform" (MLAP) is intended to replace GPUs and FPGAs in the AI field. GPUs and FPGAs offer features – for example, the ability to perform MAC (multiply-accumulate) operations very efficiently – that make them suitable for AI workloads. However, they remain relatively slow compared to ICs whose functions are specifically optimized for the task, which the new platform makes possible. "ASICs offer the best ratio of power consumption, performance, and total cost of ownership. So far, however, ASICs have hardly been able to adapt to rapidly changing algorithms, as is typical of the AI sector," said Mike Gianfagna, VP Marketing, in an interview with Markt&Technik. "That's why GPUs and FPGAs have had the edge so far."
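The MAC operation mentioned above is the basic building block of neural-network inference. A minimal sketch (values and layer sizes are purely illustrative) shows what such an operation computes for a single neuron:

```python
# Minimal sketch of the multiply-accumulate (MAC) operation at the heart
# of neural-network inference. Inputs, weights, and bias are illustrative.

def mac_layer(inputs, weights, bias):
    """One neuron: sum of input*weight products plus a bias term."""
    acc = bias
    for x, w in zip(inputs, weights):
        acc += x * w  # one MAC per input/weight pair
    return acc

result = mac_layer([1.0, 2.0, 3.0], [0.5, -1.0, 0.25], 0.1)
print(result)  # -> -0.65
```

GPUs, FPGAs, and AI ASICs differ mainly in how many of these MACs they execute in parallel and at what energy cost per operation.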

The "neuASIC" platform is intended to change that: the modular ASICs can be adapted to changing algorithms via programmability and configurability even after the design has been completed. The environment also makes it possible to combine the AI function blocks in different ways, experimenting with hardware variants and algorithms to determine the best version for each operating condition.

The high performance of the ASICs is also achieved because they are manufactured using a 7 nm process. The neural networks can therefore be mapped to the digital functions very effectively. This plays an important role in keeping the training phase for the neural networks as short as possible. "So far, this has been one of the weak points of the chips used for AI devices," Gianfagna explained.

With IP tailored to the applications and a modular design methodology based on 7 nm process technology, eSilicon now wants to overcome these hurdles. The neuASIC platform also includes a library of special functions, developed to meet AI requirements, that can be quickly configured and combined to run customer-specific AI algorithms. With the help of eSilicon's "Design Profiler" and "AI Engine Explorer" as well as third-party IP, AI blocks ("tiles" in eSilicon jargon) can be assembled via the "ASIC Chassis Builder". In this way, power, performance, and area (PPA) analyses can be carried out at an early stage of development, enabling developers to determine which neural-network architectures are suitable for the respective application. An extensive knowledge base ensures that the system can perform a very accurate PPA analysis.

The "neuASIC" IP library contains the functions familiar from AI design, so that ASICs optimized for these tasks can be developed.
In addition to the neural networks themselves, the monolithically integrated memory on the ASIC and its connection to the chip play a decisive role in the ASIC's performance. In this area, eSilicon carries out its own developments, but also cooperates with third-party IP companies.

SiP Technology Plus 7 nm Processes: The Path to AI Performance

The connection to special external memory, such as ternary content-addressable memories (TCAMs), is just as important for chip performance. Because high speed and many connections are required, eSilicon decided to use system-in-package (SiP) technology: in addition to the ASIC, special memory ICs sit on a thin interposer in the same package. These are 3D memory stacks, so-called high-bandwidth memories (HBM); the current standard is HBM2.
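The defining feature of a TCAM is that stored patterns may contain "don't care" bits alongside 0 and 1, and a lookup returns the highest-priority matching entry in a single step. A software sketch of that matching semantics (the entries here are invented, and real TCAMs do this comparison in parallel hardware) looks like this:

```python
# Illustrative sketch of ternary content-addressable memory (TCAM) matching:
# each stored entry is a pattern of '0', '1', or 'x' (don't care), and a
# lookup returns the index of the first matching entry. Entries are invented.

def tcam_match(key, entries):
    for i, pattern in enumerate(entries):
        # 'x' matches any key bit; otherwise bits must be equal
        if all(p == 'x' or p == k for p, k in zip(pattern, key)):
            return i  # first match wins, mirroring TCAM priority encoding
    return None  # no entry matched

entries = ["10xx", "1101", "0xxx"]
print(tcam_match("1011", entries))  # -> 0  ("10xx" matches)
print(tcam_match("0110", entries))  # -> 2  ("0xxx" matches)
```

A hardware TCAM evaluates all entries simultaneously, which is why it is attractive for lookup-heavy workloads but demands the fast, wide connections described above.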

eSilicon also works closely with TSMC on 2.5D integration. The SiPs are based on TSMC's Chip-on-Wafer-on-Substrate (CoWoS) technology. This enables massively parallel connections to the in-package memory stacks for fast access times and considerably reduces power consumption. This not only helps performance but also creates flexibility: memory allocations can be changed via changes in the 2.5D packaging.
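The appeal of those massively parallel interposer connections is easy to quantify with the published HBM2 figures: a 1024-bit interface per stack at up to 2 Gbit/s per pin, far wider than any off-package memory bus.

```python
# Back-of-the-envelope HBM2 bandwidth per stack, using the JEDEC HBM2
# headline figures: 1024-bit interface, up to 2 Gbit/s per pin.

bus_width_bits = 1024   # data pins per HBM2 stack
data_rate_gbps = 2.0    # Gbit/s per pin (HBM2 maximum)

bandwidth_gbytes = bus_width_bits * data_rate_gbps / 8  # bits -> bytes
print(f"{bandwidth_gbytes:.0f} GB/s per stack")  # -> 256 GB/s per stack
```

Routing a 1024-bit bus is only practical across a silicon interposer, which is what makes the 2.5D CoWoS approach central to the memory bandwidth of these ASICs.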
"It is fundamentally important not only to have a deep knowledge in ASIC design, but also in packaging, because both must go hand in hand in order to achieve maximum performance," explains Gianfagna.  

Since 2011, eSilicon has been developing ASICs for networking equipment with very similar requirements. This is why the company has gained experience not only in ASIC design over many years, but also in the field of packaging. This is not exactly trivial, because here it is important to precisely understand the mechanical and thermal properties. "Because the performance of the whole chip including the memory can only be optimized in the combination of both sides, it is important to understand both the ASIC side and the connection of the memory," says Gianfagna. Because there are hardly any companies that have both, he sees eSilicon in a good position here.  
eSilicon currently works together with leading system manufacturers. According to Gianfagna, the world's first AI ASICs will initially be used in data centers, in autonomous cars, and in human-machine interfaces.
