They represented the entire value chain, from deep learning and AI developers and machine vision players to users of embedded vision systems, and even stakeholders from the financial sector. The presentation program covered the full spectrum of the embedded vision industry, including new hardware compute platforms, embedded vision standards and APIs, specific approaches to optimizing neural networks, and real-world examples of deployed vision-based embedded AI systems. It was framed by a table-top exhibition, and cooperation and exchange among the attendees was supported by close to 80 individually booked B2B meetings during the conference breaks.
EMVA Board of Directors member Dr. Chris Yates moderated the conference and in his opening remarks stated that “Embedded vision is one of the most dynamic and creative areas for innovation in our industry”. He noted that the importance of embedded vision is demonstrated by the remarkable fact that there are almost certainly more embedded vision systems in use today than humans on the planet. “These systems are transforming our factories, hospitals, transport, and living spaces, by providing machines with greater cognitive ability through vision”, Yates said.
Artificial Intelligence (AI) as a dominant topic
Artificial Intelligence, and with it deep learning, proved to be a dominant topic in many of the speeches, and for good reason, as David Austin from Intel showed in his opening keynote: AI is transforming Industry 4.0, which is being introduced into the machine vision industry through the OPC UA standard, and all the industrial verticals, and will soon be everywhere, impacting manufacturing, energy, logistics, and building. Regarding the significance of industrial production within the economy, Austin cited a consultant's estimate that the economic impact of factory IoT applications will reach $1.2 trillion by 2025. Since deep learning models can only describe what they have been trained to see, he presented three techniques for arriving at practical and flexible AI-based solutions. His key message was that AI is still in the early adoption stage and that now is the time to get on board.
While it was made clear that the continued development of embedded neural networks is finding widespread adoption and providing real benefits to end users, topics such as real-world deployment and maintenance were also addressed by several presenters, indicating a growing maturity of the field as the focus moves beyond the pure technical performance of an AI-based application. Vassilis Tsagaris, CEO of Irida Labs, focused on this aspect in his talk and further explained his point: “The way AI and Deep Learning models are trained and deployed for Embedded Vision needs to move beyond the technology hype and be focused on customer needs. What is currently still missing in the technical world of developing AI is a holistic problem solving approach, including not only training deep learning models, but also validation, inference at the edge devices and continuous update mechanisms. For this purpose, we need to look beyond the machine learning models and deploy AI-based embedded vision in real world products.”
Many of the presentations addressed edge computing, in which data is not sent to a centralized place or database but processed at the source, moving intelligence into the systems themselves as one key characteristic of embedded systems. One example was Jagan Ayyaswami from Micron Technology, who presented his thoughts on how to choose an embedded architecture and answered the question ‘How do we keep the data as close as possible to the image capture?’. Ratislav Struharik from IDS pointed out that edge computing requires low latency, time-sensitivity, and task-specific processing at relatively low cost in power, price, and size, and that the same holds true for AI at the edge.