Andreas Schwarztrauber, Market Development Engineer (MDE) for Microchip at Arrow Central Europe, is convinced that in ten years everything »that moves« will be equipped with an embedded smart vision camera. Anyone who wants to know how such cameras can be implemented should ask Arrow.
»Whether it's drones, vehicles, robots, agricultural machinery or anything else, everything that moves will in future be equipped with an intelligent camera in some form,« Schwarztrauber is convinced. This will create a gigantic market: if 20 percent of the world's population (source: Statista) owns an average of five »moving objects«, each equipped with two intelligent cameras on average, that amounts to roughly 18 billion intelligent camera systems in ten years.
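The arithmetic behind this estimate can be checked in a few lines. The projected world population of about nine billion is an assumption of this sketch, not a figure from the article:

```python
# Back-of-the-envelope check of the 18-billion-camera estimate.
# Assumption (not from the article): ~9 billion people in ten years.
population = 9_000_000_000
owners = population * 20 // 100      # 20 percent of the population
moving_objects = owners * 5          # five »moving objects« per person
cameras = moving_objects * 2         # two intelligent cameras per object
print(f"{cameras:,} camera systems")  # 18,000,000,000 camera systems
```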
Schwarztrauber’s expectations are based on simple mathematics and various market forecasts. Gartner, for example, put the IoT TAM (Total Addressable Market) in the enterprise and automotive environments at around $5.8 billion in 2020, spread across ten vertical markets (source). »In addition, we are hearing from various analysts that a large number of IoT projects will be equipped with AI capabilities to bring inferencing from the cloud to the edge,« Schwarztrauber continues. In this context he points, for example, to a forecast by Deloitte (source), which puts the market for edge AI chips at around 750 million units last year and expects it to grow to 1.5 billion units by 2024.
If you follow this reasoning, it becomes clear why Arrow is interested in smart vision systems. And what do such systems need? Every smart embedded vision system comprises the same key elements: optics, an image sensor, a fast data link between the sensor and the main processing unit, the processing unit itself, and a display. Various components can be used to realize such a system, for example a CMOS image sensor with global or rolling shutter (depending on the exposure requirements), and an SoC FPGA, or a processor combined with an FPGA, as the processing unit.
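Why the sensor-to-processor link has to be fast becomes obvious from a rough bandwidth estimate. The resolution, bit depth and frame rate below are illustrative assumptions, not figures from the article:

```python
# Raw data rate of a hypothetical sensor configuration.
width, height = 1920, 1080   # Full-HD resolution (assumed)
bits_per_pixel = 12          # typical raw bit depth (assumed)
fps = 60                     # frame rate (assumed)

bits_per_second = width * height * bits_per_pixel * fps
print(f"{bits_per_second / 1e9:.2f} Gbit/s")  # 1.49 Gbit/s
```

Even this modest configuration already pushes the link toward 1.5 Gbit/s of raw pixel data, which is why dedicated high-speed sensor interfaces are needed.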
»An SoC FPGA combined with a global shutter image sensor is an ideal combination,« says Schwarztrauber. This is because high frame rates require parallel data processing, a simple exercise for FPGAs. In addition, special sensor interfaces such as HiSPi need to be supported in this area; again no problem for an FPGA. And what about power consumption? »Flash-based FPGAs in particular are ideal when the required AI performance is in the range of a few hundred GOPs. With the VectorBlox flow, Microchip offers a solution that currently scales between 79 and 279 GOPs and integrates well with all relevant ML frameworks such as TensorFlow, PyTorch or Caffe, all at a power consumption of less than 3 W. The advantage over SRAM-based FPGAs grows with the ambient temperature, for example in small enclosures packed with electronics that dissipate a lot of heat.«
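Whether a given network fits into the quoted 79–279 GOPs window can be estimated from its per-frame workload. The 4.5 GOPs-per-inference figure below is a made-up example value, not a measurement from the article:

```python
# Does a hypothetical CNN fit the quoted 79-279 GOPs window?
gops_per_frame = 4.5   # compute per inference (assumed example)
fps = 30               # target frame rate (assumed)

required_gops = gops_per_frame * fps   # sustained throughput needed
fits = required_gops <= 279            # within the largest configuration
print(required_gops, fits)             # 135.0 True
```

At 135 GOPs sustained, such a workload would land comfortably inside the range Schwarztrauber quotes for the flash-FPGA solution.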
The alternative, a combination of an FPGA (for preprocessing and special interfaces, for example) and a processor, can achieve higher computing performance, but power consumption is then also significantly higher. Schwarztrauber continues: »Even systems with processors that have been optimized for low power consumption require well above 10 W, although the AI performance is then also significantly greater than 10 TOPs. Depending on the application, however, a power budget of more than 10 W is not always available.«
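For battery-powered »moving objects«, the practical impact of the 3 W versus 10 W difference is easiest to see as runtime. The 30 Wh battery capacity is an assumed example value:

```python
# Runtime comparison for a hypothetical 30 Wh battery.
battery_wh = 30.0  # battery capacity (assumed example)

runtime_flash_fpga_h = battery_wh / 3.0   # flash-FPGA solution at ~3 W
runtime_processor_h = battery_wh / 10.0   # processor solution at ~10 W

print(f"Flash FPGA: {runtime_flash_fpga_h:.0f} h, "
      f"processor: {runtime_processor_h:.0f} h")
# Flash FPGA: 10 h, processor: 3 h
```

The same battery lasts more than three times as long with the low-power option, which illustrates why the power budget, not peak TOPs, often decides the architecture.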