embedded world Conference 2019

Interview with Prof. Axel Sikora

27 September 2018, 9:09 a.m. | Frank Riemenschneider
Prof. Axel Sikora, Chairman of the Steering Board of the embedded world Conference (ewC), was interviewed by DESIGN&ELEKTRONIK Editor-in-Chief and ewC Publication Chairman Frank Riemenschneider.
© NürnbergMesse Heiko Stahl

The embedded world Conference is the world's leading meeting place for the embedded community. In an interview with Frank Riemenschneider, Editor-in-Chief of the organizer DESIGN&ELEKTRONIK, Prof. Axel Sikora, Chairman of the Steering Board, explains what embedded intelligence is all about.


DESIGN&ELEKTRONIK: Axel, the embedded world Conference 2019 has the motto "Embedded Intelligence", so one focus will be on AI, machine learning, deep learning and so on. Why are these topics so important for the embedded community right now?

Prof. Axel Sikora: Yes, as you just mentioned, the roots of embedded world lie in an event that was called Embedded Intelligence in the 1990s, and we decided to return to that name because what we dreamt of in the 1990s can actually be implemented with today's hardware and software.

What exactly are the decisive advances?

On the one hand, the computing power itself; on the other, the possibility of shifting computing loads from the edge via gateways to the cloud, i.e. what is known as fog computing. Only then can intelligent, adaptive and self-learning systems be built.

The implementation of machine learning is anything but trivial. Programming for embedded environments in which processing cycles, memory and energy are scarce is really difficult. What approaches do you see on the hardware and software side for managing complexity and resource scarcity?

In addition to the aforementioned shift of computing power into the cloud, there are various dedicated hardware platforms such as GPUs, FPGAs or special processor platforms for processing neural networks. As always, the world will probably not be black or white, but a mixture of different approaches.

In deep learning, the situation is exacerbated by the fact that a single inference at test time can require teraops of computing power, which can mean a few seconds per inference for complex networks. Such high latencies are not practical for edge devices, which typically require a real-time response with virtually zero latency. In addition, deep-learning solutions are extremely compute-intensive, which means that edge devices cannot run deep-learning inference on their own. Are special ASICs like Google's TPU or Deep Vision's embedded processor the only solution to meet these requirements?

They are certainly a vehicle for taking that step further. On the other hand, embedded intelligence is also an algorithmic challenge, because you do not have to do the training on the embedded side at all; instead, you can apply a kind of cross-learning, running the initial training on a server platform and computing only the adaptation in the field, at the edge.
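To make that split concrete, here is a minimal Python sketch (purely illustrative, not from the interview; the model, data and learning rate are invented): the heavy initial training runs once on a server, and the edge device only performs a few cheap adaptation steps on the small amount of data it observes locally.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Server side: expensive initial training on a large, generic data set ---
X_server = rng.normal(size=(10_000, 8))                 # generic training data
w_true = rng.normal(size=8)
y_server = X_server @ w_true + 0.1 * rng.normal(size=10_000)

# A closed-form least-squares fit stands in for the costly training run.
w_deployed, *_ = np.linalg.lstsq(X_server, y_server, rcond=None)

# --- Edge side: lightweight adaptation to the local environment -------------
def adapt_on_edge(w, X_local, y_local, lr=0.01, steps=50):
    """A few SGD steps on locally observed data; cheap enough for a small device."""
    w = w.copy()
    for _ in range(steps):
        grad = 2 * X_local.T @ (X_local @ w - y_local) / len(y_local)
        w -= lr * grad
    return w

# The local environment drifts slightly from what the server saw.
X_local = rng.normal(size=(50, 8))
y_local = X_local @ (w_true + 0.3) + 0.1 * rng.normal(size=50)

w_adapted = adapt_on_edge(w_deployed, X_local, y_local)
print("error before adaptation:", np.mean((X_local @ w_deployed - y_local) ** 2))
print("error after adaptation: ", np.mean((X_local @ w_adapted - y_local) ** 2))
```

The point of the sketch is the division of labour, not the particular model: only the inexpensive adaptation loop has to fit into the resource budget of the edge device.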

Another application of ML is voice control. Besides the HMI in the car, in which embedded applications do you see potential for interacting through natural speech instead of touch displays or buttons?

These are all applications with human-machine interfaces. That can be an interactive shower control, but it can also be in the industrial area, where operating instructions and hints are given by voice and expert systems can be consulted on request.

The same question for automatic outlier detection. Algorithms can automatically detect exceptions and unusual events. Where in our industry do you think such algorithms can provide the greatest benefit?

I am always careful with superlatives, but one area where we already see many applications is predictive maintenance and condition monitoring: equipment is monitored, its condition is analyzed, and then, in time and before an incident occurs, maintenance is carried out, operating cycles are adjusted, and so on. Many such applications are already coming onto the market.
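A purely illustrative Python sketch of that condition-monitoring idea (the baseline statistics, threshold and signals below are invented for the example): learn what a healthy signal looks like, then flag measurement windows that drift away from it early enough to schedule maintenance.

```python
import numpy as np

rng = np.random.default_rng(1)

# Baseline phase: the machine runs healthily; record a reference signal.
baseline = rng.normal(loc=1.0, scale=0.05, size=1000)    # e.g. RMS vibration level
mu, sigma = baseline.mean(), baseline.std()

def check_condition(window, k=4.0):
    """Flag a measurement window whose mean deviates more than k sigma
    from the healthy baseline -- a stand-in for a real outlier detector."""
    deviation = abs(window.mean() - mu) / sigma
    return "maintenance recommended" if deviation > k else "ok"

healthy_window = rng.normal(loc=1.0, scale=0.05, size=50)
worn_bearing = rng.normal(loc=1.4, scale=0.15, size=50)   # signal has drifted

print(check_condition(healthy_window))   # -> ok
print(check_condition(worn_bearing))     # -> maintenance recommended
```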

ML training approaches are classified as simulation-only, hybrid (partly in simulation, partly learning on real hardware) and hardware-in-the-loop. Which approaches do you see as the most promising in the embedded environment?

Many embedded applications will certainly have to work with very small amounts of data, so simulation plays a big role, but so does the human being in the learning chain, i.e. assisted learning. Different approaches are therefore certainly required than for classical social-media analyses.

Let us come to two challenges: an Uber vehicle killed a woman because there was a software bug on Nvidia's ADAS platform. How can software errors in AI/ML systems generally be ruled out, can they be ruled out at all? How can such a system be verified?

In general, the handling of complexity is a challenge in today's world, not specifically for machine learning but for all systems. There are quite interesting approaches where systems can monitor themselves and independently determine whether the situation in which they find themselves has already been learned and whether they are prepared for it or not. For me, this is an approach for achieving higher reliability: if the system gets into an unknown situation, it goes into a fail-safe state.
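A minimal Python sketch of that self-monitoring idea, assuming a simple distance-based familiarity check (the data, dimensions and threshold are invented for illustration): the system applies its learned behaviour only to inputs that resemble its training data and otherwise drops into a fail-safe state.

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-in for the data distribution the model has actually been trained on.
X_train = rng.normal(loc=0.0, scale=1.0, size=(5000, 4))
mean = X_train.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(X_train, rowvar=False))

def act(x, threshold=20.0):
    """Run the normal control path only for familiar inputs; otherwise
    switch to a fail-safe state (e.g. reduced speed, safe stop)."""
    d2 = (x - mean) @ cov_inv @ (x - mean)   # squared Mahalanobis distance
    if d2 > threshold:
        return "FAIL-SAFE: unknown situation, degrade gracefully"
    return "normal operation: apply learned policy"

print(act(rng.normal(size=4)))              # familiar input -> normal operation
print(act(np.array([9., -9., 9., 9.])))     # far outside training data -> fail-safe
```

The familiarity check itself is deliberately simple here; the design point is that the system knows the boundary of what it has learned and reacts conservatively beyond it.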

Nvidia uses an approach in which massive GPUs are connected in parallel. The result is power consumption in the range of hundreds of watts, and a GPU, just like a CPU, is not optimized at the instruction-set level for neural networks, e.g. for matrix multiplication. Wouldn't it be necessary to develop special ML processors with completely new instruction sets in order to become more efficient here?

Yes and no; the world is not black or white here either. On the one hand there are already specific ML chips; on the other hand, standard products have the advantage that their development costs can be amortized over several applications, so that a higher computing power per euro can be achieved.

A last question for wireless guru Axel Sikora: many applications would be inconceivable without a comprehensive 5G infrastructure with low latencies. While Asia is working flat out on 5G, today we do not even have nationwide LTE coverage. What do politics and industry have to do to raise Germany's infrastructure to the level of Korea, Japan or Singapore?

Stop just making fair-weather speeches and start taking action! What we unfortunately see from politics in the area of digitisation are far too many declarations of intent and far too little hard implementation. Industry demands this of politics, but politics also demands it of industry.

Many thanks, Axel, for your honest and frank words!
