Expert interview on the potential of medical AI, changing development cycles, the importance of data, and the disruptive approaches of embedded intelligence.
Artificial intelligence is also changing medical technology. With more computing power on ever smaller components, AI is even moving into devices. We talk to Viacheslav Gromov of German embedded AI provider Aitad about the state of the art and the changing development of smart medical devices.
Mr. Gromov, to what extent is embedded AI in medicine a real driver of innovation?
Embedded AI enables what medical technology has regarded as a challenge for years but has been unable to realize due to technical limitations. Artificial intelligence is now practicable in the device itself, without security-critical cloud connections or larger, more expensive central elements such as PCs or large GPU-based controllers. The key drivers of this technology are its core properties: embedded AI is secure and cost-effective because it requires no network connectivity, and it is resource-efficient. At the same time, it works in real time, delivering results and decisions within milliseconds.
Which medical fields will benefit most from embedded AI?
The spectrum ranges from personal health to surgical equipment and patient care. Take, for example, a home respirator or a prosthesis that adapts to its user via speech or usage profile (movement, respiratory flow) or automatically adjusts its function. In the operating room, lighting can automatically adapt its intensity and color temperature to the current surgical step by recognizing the operating field. And hospital beds can, for example, use lidar or pressure sensors to document the patient's position, or respond to gestures and calls to alert the nursing staff when necessary.
Embedded AI in medical devices relieves and supports staff in daily tasks, improves patient interaction, and enables improved or novel key functions, such as the aforementioned program customization. The technology plays an important role primarily in three areas: predictive/preventive maintenance, user interaction, and functional innovation. What is particularly surprising is that solutions exist in every size segment, depending on the data situation, the machine learning models, and the size and volume of the system.
What could these "functional innovations" be?
While predictive maintenance is about monitoring individual components, and user interaction largely concerns object and person recognition, gestures, and speech, functional innovations are highly application-specific: here, embedded AI addresses the improvement potential of the medical device itself. This could be automated tooth-status detection by dental cleaning instruments, detection of the cutting edge with program adaptation in high-frequency surgery, or an entirely new additional function of a ventilator. The characteristic of such innovations is their direct impact on product evolution or even disruption, i.e., the emergence of completely new products or services based on AI.
How can embedded AI and edge AI be differentiated in medical applications?
The core difference is the depth of processing on site, in the device or at the sensor. Take the example of complex object recognition including all pre-processing and result verification, as required for the aforementioned adaptation to the operating field by many devices.
Edge AI is essentially rudimentary processing at the network edge. The AI does act locally, but it only sharpens images or crops objects out of them, for example. The objects themselves are recognized only on larger central processing units or in the cloud. Consequently, large amounts of data must still be transferred from the edge of the network, i.e., from the edge node.
Embedded AI, on the other hand, acts as a self-sufficient solution. The majority of the processing steps are realized securely and in real time directly in the device. The overall result is available locally. This was not possible in the past, as neither the technology nor the corresponding high-performance and low-cost semiconductors were available. With embedded AI, this is changing. The technology is the next step after edge AI.
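The difference in data flow can be illustrated with a small, simplified sketch. Everything here is hypothetical (the "inference" is a placeholder, the frame size is arbitrary): an edge node still forwards a cropped image to a central server for recognition, while an embedded device computes the full result locally and transmits only the decision.

```python
# Illustrative contrast between edge AI and embedded AI data flows.
# All sizes and the "model" are hypothetical placeholders, not real devices.

def edge_ai_pipeline(frame: bytes) -> bytes:
    """Edge AI: only pre-processing happens locally; the cropped image
    must still be sent to a central server for actual recognition."""
    cropped = frame[: len(frame) // 4]  # e.g. crop the region of interest
    return cropped                      # payload sent over the network

def embedded_ai_pipeline(frame: bytes) -> bytes:
    """Embedded AI: the full recognition runs on-device; only the
    final result (here a one-byte class label) leaves the device."""
    label = sum(frame) % 2              # stand-in for on-device inference
    return bytes([label])               # payload: just the decision

frame = bytes(640 * 480)                # one dummy VGA grayscale frame
print(len(edge_ai_pipeline(frame)))     # 76800 bytes still leave the edge node
print(len(embedded_ai_pipeline(frame))) # a single byte leaves the embedded device
```

The point of the sketch is the last two lines: with embedded AI, the network payload shrinks from the data itself to the result, which is what makes operation without a network connection feasible at all.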
That sounds promising. How can an existing medical device become an embedded AI-enabled medical device in practice?
By manufacturers seeking advice on what embedded AI makes possible in their devices, whether in new product development or further development. We recommend bringing in ideas from outside the company: many product developers today are simply unaware of what is possible and feasible with embedded AI. External consultants who specialize in embedded AI can, for example, examine the products and then suggest which embedded AI functions are feasible, with which benefits for users and at what cost to the manufacturer, and accompany the project from prototype to series production.
We recommend relying on individual solutions here. Standard solutions never cover 100 percent of what is needed. Every manufacturer adopting embedded AI today also gains a USP, as the technology is still new to the market. It should be a proprietary, individual solution that no competitor can simply copy.
What skills and efforts are required of MedTech manufacturers?
The challenge is to collect the right data, use it intelligently, and apply the right technology. Embedded AI requires an interdisciplinary development team: mechanical engineers for mountings and sensor placement, hardware developers for the components, software developers for system integration, and data experts for AI development must all work together. The particular skill requirements lie in AI and embedded expertise. If these resources are not sufficiently available in-house, it is advisable to outsource individual steps or the entire development.
Once development, including serialization, is complete, the ongoing developer effort, resource requirements, and overall costs are low: unlike cloud concepts, embedded AI requires no back-end maintenance, no interface security updates, and no continuous cloud rental.
What is the embedded AI product development process and how long does it take to go into production?
In most cases, a proof of concept with a first prototype can be completed within six months at most. Up to series development, the embedded AI process goes through the following stages: after conceptualization, data is collected. Using the processed and enriched data, data experts build a machine learning model. Embedded software developers then convert this into executable, hardware-oriented code, including all sensor and interface requirements.
This step requires a high level of expertise, as standard tools only go so far here. It also reveals what the final component will look like, based on the model size and the appropriate semiconductor; the embedded hardware is therefore developed individually. After that, the system is ready for practical testing. The series launch includes the certification process for the overall product as well as environmental and lifetime tests.
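One common ingredient of the "convert the model into hardware-oriented code" step is post-training quantization: trained floating-point weights are mapped to 8-bit integers and emitted as a constant array the firmware can compile in. The following is a minimal, hypothetical sketch of that idea (the asymmetric scale/zero-point scheme mirrors common int8 quantization; all names and numbers are made up, and a real toolchain does considerably more):

```python
# Minimal sketch of int8 post-training quantization, as commonly used when
# preparing a trained model for a microcontroller. Purely illustrative.

def quantize_int8(weights: list[float]) -> tuple[list[int], float, int]:
    """Map float weights onto the int8 range [-128, 127] using an
    asymmetric scale/zero-point scheme."""
    w_min, w_max = min(weights), max(weights)
    scale = (w_max - w_min) / 255.0
    zero_point = round(-128 - w_min / scale)
    q = [max(-128, min(127, round(w / scale) + zero_point)) for w in weights]
    return q, scale, zero_point

def dequantize(q: list[int], scale: float, zero_point: int) -> list[float]:
    """Recover approximate float values from the quantized integers."""
    return [(v - zero_point) * scale for v in q]

weights = [-0.51, 0.0, 0.27, 0.98]       # toy "trained" weights
q, scale, zp = quantize_int8(weights)

# Emit as a C array for the firmware build:
print("const int8_t w[] = {" + ", ".join(map(str, q)) + "};")
```

The payoff on a microcontroller is fourfold smaller weight storage and integer-only arithmetic, at the price of a small, bounded rounding error per weight.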
You say it's all about collecting the right data. What is different about a data-driven development process?
Data-driven development is not limited to AI; it has become an inevitable turning point, driven by trends from the (I)IoT and smart devices to blockchain and robotics. In a data-driven development process, with every new or further development of a product, the data provides information about the final feasibility and complexity of the system components. Since this is an open-ended approach, it also requires some rethinking of the product design and maintenance process. When developers get involved, they are usually pleasantly surprised by the possibilities and the additional context and insights.
At the same time, they can use the data from the pilot series for further development of subsequent products with a modified design. The result is a long-term data strategy that product manufacturers can charge their customers for in the form of updates or continuous service models.
What can developers, manufacturers, users and patients expect with embedded AI in the coming years?
More processing performance on ever smaller and cheaper semiconductors: that is the future, fueled by current research trends such as memristor arrays, spiking neural networks, and adaptive learning, i.e., autonomous AI learning in the field. Embedded AI will be able to process more and more data while increasing the depth of analysis. This will massively shift the balance away from AI on large central computers or in the cloud and toward AI on the device itself.
The same embedded AI used for voice control could, for example, also assess emotional states based on voice pitch and tone. Medical devices in doctors' offices, hospitals, and operating rooms will carry AI that lets them interact with users and patients on a functional and collaborative level. Sensor fusion will play an increasingly important role, with embedded AI using a variety of sensors to detect illnesses or issue warnings during procedures. Here, more data and more processing often mean more insights. (uh)
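The sensor fusion mentioned above can be sketched in its simplest "late fusion" form: each sensor contributes a normalized score (e.g. from a small on-device model), and a weighted combination decides whether to raise a warning. Sensor names, weights, and the threshold below are entirely hypothetical:

```python
# Illustrative late sensor fusion: per-sensor scores are combined into one
# warning decision. All names, weights, and thresholds are hypothetical.

def fuse(scores: dict[str, float],
         weights: dict[str, float],
         threshold: float = 0.5) -> bool:
    """Weighted average of normalized per-sensor scores in [0, 1];
    returns True when a warning should be raised."""
    total_w = sum(weights[name] for name in scores)
    fused = sum(scores[name] * weights[name] for name in scores) / total_w
    return fused >= threshold

weights = {"pressure": 0.5, "lidar": 0.3, "microphone": 0.2}

# One elevated sensor alone is not enough to trigger a warning ...
print(fuse({"pressure": 0.6, "lidar": 0.1, "microphone": 0.1}, weights))  # False
# ... but agreement across several sensors is.
print(fuse({"pressure": 0.7, "lidar": 0.8, "microphone": 0.6}, weights))  # True
```

The design point is the one the interview makes: combining several weak, cheap signals on the device can yield a more reliable decision than any single sensor, without any data leaving the device.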
This interview was first published on www.medical-design.news.
Aitad is a German embedded AI provider. The company deals with the development and testing of AI electronic systems, especially in connection with machine learning in an industrial context (esp. system components). As a development partner, Aitad takes over the complete process from data collection to development and delivery of system components. In doing so, the Offenburg-based company takes a different approach than many manufacturers: instead of a ready-made AI solution, an individual system is developed for each customer.
To do this, the company first examines how customer products benefit from the use of AI, presents the advantages and possibilities, develops the system at all levels, builds a prototype of the new system in-house on the basis of collected data thanks to a prototyping EMS line, and assists with series production and system maintenance. Aitad thus acts as an interdisciplinary full-stack provider spanning data science, mechanical engineering, and embedded hardware and software. In addition, the embedded AI experts conduct in-house and external research on numerous algorithmic and semiconductor fundamentals of AI technology.