20. April 2018, 13:51 Uhr | Andreas Pfeffer
Demonstration of the 360° all-round view with the Tensilica Vision P6 DSP.
Under the motto »Automotive Electronics Redefined«, Cadence presented a number of products for driver assistance systems (ADAS) at the embedded world.
The Cadence ADAS reference platform features a system-on-chip (SoC) with four Tensilica Vision P6 processors and two fast LPDDR4 memory devices. With this combination, the SoC achieves high per-cycle processing performance and high data throughput while keeping power consumption to a minimum. This is especially important for image processing in automotive, computer-vision and neural-network applications. Cadence's on-booth demos included:
The company uses the Tensilica Vision P6 digital signal processor (DSP) to generate a synthetic bird's-eye view for 360° all-round image processing. The corresponding video information is provided by four Full HD cameras installed in the vehicle. The four video streams are combined using lens distortion correction and perspective transformation algorithms to create a single seamless Full HD video stream. Power consumption of the SoC is less than 4 W. A typical application for this driver assistance function is a parking assistant.
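The two geometric steps named above can be illustrated in isolation. The sketch below is not Cadence's implementation; it is a minimal pure-Python illustration assuming a one-parameter radial distortion model and a 3×3 homography for the perspective transformation, which is how such a camera-to-ground-plane mapping is commonly expressed.

```python
def undistort(x, y, k1):
    """Simple radial lens-distortion correction (assumed one-parameter
    model): the undistorted point is x * (1 + k1 * r^2), with r the
    distance from the image center in normalized coordinates."""
    r2 = x * x + y * y
    f = 1.0 + k1 * r2
    return x * f, y * f

def apply_homography(H, x, y):
    """Perspective transformation: map a camera pixel (x, y) to
    bird's-eye ground-plane coordinates via a 3x3 homography H
    (nested list), using homogeneous coordinates."""
    xs = H[0][0] * x + H[0][1] * y + H[0][2]
    ys = H[1][0] * x + H[1][1] * y + H[1][2]
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return xs / w, ys / w
```

In a surround-view system, each of the four cameras gets its own distortion parameters and homography; stitching then blends the four warped images into one top-down view.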
The on-device artificial intelligence (AI) demonstration of person detection uses the Tensilica Vision P6 DSP and the YOLO (You Only Look Once) algorithm. The algorithm is based on a neural network trained to recognize persons. YOLO is well suited to applications that require fast, energy-efficient object detection, including localization, across multiple object categories. The demo uses customized network optimization, including pruning and quantization. Reliable person detection is an essential basic function in, for example, an emergency brake assistant or for highly automated driving.
Software developers use frameworks such as Caffe or TensorFlow to design neural networks for image classification. The sometimes very complex networks then have to be manually ported to a hardware platform and optimized, a challenging and time-consuming task. For this, Cadence has developed the Xtensa Neural Network Compiler (XNNC), which shortens the time needed to convert a neural network into code for a Tensilica AI DSP as the embedded (target) processor from months to just days. In the process, the original floating-point detection quality of the network is preserved in the converted fixed-point representation, while fixed-point processing brings the added benefits of lower power consumption and lower memory bandwidth.
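The core of a floating-point-to-fixed-point conversion can be sketched briefly. This is a generic illustration of symmetric fixed-point quantization, not XNNC's actual method; the word width and fraction-bit split are assumed values for the example.

```python
def quantize(weights, frac_bits=7, word_bits=8):
    """Map floating-point weights to signed fixed-point integers.
    Each value is represented as q / 2**frac_bits, with q clipped
    to the signed range of word_bits (here: int8, Q0.7 format)."""
    scale = 1 << frac_bits
    qmin = -(1 << (word_bits - 1))
    qmax = (1 << (word_bits - 1)) - 1
    return [max(qmin, min(qmax, round(w * scale))) for w in weights]

def dequantize(q, frac_bits=7):
    """Recover the approximate floating-point values."""
    scale = 1 << frac_bits
    return [v / scale for v in q]
```

The quantization error is bounded by half a least-significant bit per weight (plus clipping for out-of-range values), which is why a well-chosen fixed-point format can match the network's floating-point detection quality, as the article notes.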
An XNNC demonstration shows the implementation of Caffe- or TensorFlow-trained Google Inception V3 and MobileNet networks. The XNNC takes the neural networks together with their floating-point coefficients (weights) and converts them into highly optimized fixed-point code for the Tensilica AI DSP.