Interview with Ralf Herrtwich, Nvidia
Autonomous driving – No chance without artificial intelligence

Prof. Dr.-Ing. Ralf Herrtwich is Senior Director Automotive Software at Nvidia.

Prof. Dr.-Ing. Ralf Herrtwich is Senior Director Automotive Software at Nvidia and is researching how autonomous vehicles can be realized with artificial intelligence. We asked him where the industry currently stands and what challenges still need to be overcome.

Prof. Herrtwich, artificial intelligence is considered a central building block on the road to autonomous driving. Why is that, and what does it mean for the development process?
Herrtwich:
For an autonomous vehicle to reach its destination, it must make the right decisions in a dynamic traffic environment. Not only must it know what the road looks like and where it is allowed to go, it must also know with whom it shares the road and how these other road users are likely to react. In short, it must understand the static as well as the dynamic world around it – and it does this through artificial intelligence. So when we develop autonomous vehicles, a large part of our work consists of developing systems that draw the right conclusions from the information provided by the vehicle's different sensors – whether cameras, radars or lidars.
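To make this concrete, here is a minimal Python sketch of late sensor fusion. All names and values in it (the Detection class, fuse, min_confidence) are illustrative assumptions, not Nvidia's perception stack; production systems additionally associate detections across sensors and track objects over time to predict how they will react.

    from dataclasses import dataclass

    @dataclass
    class Detection:
        label: str          # e.g. "pedestrian", "cyclist", "car"
        position: tuple     # (x, y) in meters, vehicle coordinates
        confidence: float   # 0..1 score from the sensor's network

    def fuse(camera, radar, lidar, min_confidence=0.5):
        # Naive late fusion: keep any detection some sensor is confident in.
        return [d for d in camera + radar + lidar
                if d.confidence >= min_confidence]

    world = fuse(
        camera=[Detection("pedestrian", (12.0, 1.5), 0.9)],
        radar=[Detection("car", (30.0, -2.0), 0.8)],
        lidar=[Detection("cyclist", (8.0, 3.0), 0.4)],  # dropped: too uncertain
    )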
What role does training data play in the implementation of autonomous driving functions and where do we currently stand?
Herrtwich:
Artificial intelligence today is mainly created by developing so-called deep neural networks (DNNs). Similar to the human brain, these networks analyze input patterns step by step until they can conclude from a series of pixels that these pixels, for example, show a pedestrian. The beauty of these networks is that you don't have to program them explicitly; you train them. For example, you show a neural network several thousand images of pedestrians, and from these it deduces which elements make up the image of a pedestrian. This way, it can later identify pedestrians in images it has never seen before. The more object classes a network is trained on, the greater the number of elements an autonomous vehicle is able to classify. These networks can then be exhaustively tested in simulation to ensure their accuracy. Our cloud-based »Drive Constellation« simulation solution makes it possible to drive millions of miles in virtual environments across a broad range of scenarios – from routine driving to rare or even dangerous situations – with greater efficiency, cost-effectiveness and safety than is possible in the real world.
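The training principle Herrtwich describes can be sketched in a few lines of PyTorch. The tiny network and the random stand-in data are assumptions made purely for illustration; a real perception DNN is far deeper and learns from thousands of labeled camera images.

    import torch
    import torch.nn as nn

    # Stand-in for a real perception DNN: a tiny binary classifier
    # (pedestrian / no pedestrian).
    model = nn.Sequential(
        nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(16, 2),
    )
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    # Random tensors stand in for labeled camera images.
    images = torch.randn(8, 3, 64, 64)   # batch of 8 RGB images, 64x64 pixels
    labels = torch.randint(0, 2, (8,))   # 1 = pedestrian, 0 = no pedestrian

    for step in range(100):
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)  # how wrong is the network?
        loss.backward()                        # no rules are programmed: the
        optimizer.step()                       # weights adapt to the examples

The loop captures exactly what the interview states: nobody writes down what a pedestrian looks like; the network deduces it from the labeled examples.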
In contrast to explicitly programmed software, deep neural networks (DNNs) are considered by many to be non-transparent. How can such systems be sufficiently validated, especially when they control safety-critical functions, as in a car?
Herrtwich:
Bringing an autonomous vehicle onto the roads requires rigorous testing to ensure safety. Developers must test their self-driving technology over millions or billions of miles, covering a near-infinite range of scenarios, to statistically prove that it drives more safely than humans. Simulation enables developers to test rare and dangerous scenarios that may be difficult or impossible to replicate in the real world. Nvidia Drive Constellation provides a way to test self-driving technology in any possible weather, traffic condition or location, as well as in rare and dangerous scenarios. These tests are repeatable and scalable, enabling comprehensive validation before cars reach the road. Whatever didn't work correctly can be used as new training material. This way the system gets better and better over time. And once the error rate has dropped below the target value, the software can go on the road.
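The validate-and-retrain cycle described here can be summarized in a short, hypothetical Python sketch. simulate, retrain and the target error rate are stubs and assumed numbers, not the Drive Constellation API:

    import random

    TARGET_ERROR_RATE = 0.001        # illustrative target, not an industry figure

    def simulate(model, scenario):
        # Stub: a real run replays the scenario in a virtual world and
        # reports whether the vehicle handled it correctly.
        return random.random() > model["error_rate"]

    def retrain(model, failures):
        # Stub: failed scenarios become new training material; assume
        # each round of retraining halves the error rate.
        model["error_rate"] *= 0.5
        return model

    model = {"error_rate": 0.1}
    scenarios = range(10_000)        # stand-in for millions of simulated miles
    while True:
        failures = [s for s in scenarios if not simulate(model, s)]
        if len(failures) / len(scenarios) <= TARGET_ERROR_RATE:
            break                    # below target: software can go on the road
        model = retrain(model, failures)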
As far as I know, we currently only find trained AI systems in cars. When, if ever, will we see self-learning systems?
Herrtwich:
AI is actually being adopted across virtually every industry, from healthcare to energy to finance. All these systems are being trained to perform better than a human. However, they are far from being self-aware. The first step toward a self-learning system is when the car realizes that it senses an unknown object and "asks" the development team what it is. We see this as the next development phase for our vehicles. We call it active learning – like a student at school who asks the teacher when he hasn't understood something. And just as the student at some point no longer needs the teacher because he or she can use the library or the Internet to build their knowledge base, you can probably expect the same from a machine one day. But it hasn't happened yet.
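A minimal sketch of this active-learning idea, with assumed names and an assumed confidence threshold (classify stands in for an onboard DNN):

    def classify(frame):
        # Stub for an onboard DNN: returns a label and a confidence score.
        return "unknown", 0.31       # low confidence: the network is unsure

    def select_for_labeling(frames, threshold=0.6):
        # Collect frames the network is unsure about, so the development
        # team can label them; the labels feed the next training round.
        uncertain = []
        for frame in frames:
            label, confidence = classify(frame)
            if confidence < threshold:   # the car "asks" its teachers
                uncertain.append(frame)
        return uncertain

    print(select_for_labeling(["frame_001", "frame_002"]))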
Nvidia comes from graphics processing and has essentially adapted its GPU architectures to AI applications. Meanwhile, many startups are trying their hand at very focused DNN and CNN architectures. How does Nvidia proceed with AI hardware?
Herrtwich:
The foundations we have laid with our parallel processing architecture and open software platforms, including CUDA and TensorRT, are still the benchmark when it comes to AI platforms. Stability on the software architecture side allows our customers and ourselves to benefit from the performance increase that new hardware brings. We just introduced Nvidia »DRIVE AGX Orin«, a highly advanced software-defined platform for autonomous vehicles and robots, which is powered by our new SoC called Orin. Orin delivers 200 trillion operations per second (TOPS) – nearly 7x the performance of the previous-generation Xavier SoC. For autonomous vehicles, this means the environment can be analyzed ever more effectively from growing amounts of sensor data, while the platform simultaneously runs dozens of redundant and diverse DNNs for increased safety.
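The arithmetic behind the quoted figure: Xavier delivers around 30 TOPS, so 200 / 30 ≈ 6.7 – hence "nearly 7x". The redundancy idea can be sketched as follows; the two detector stubs and the conservative handling of disagreements are illustrative assumptions, not Nvidia's actual safety architecture:

    # Stubs for two independently developed, diverse networks that look at
    # the same camera frame; real platforms run dozens of DNNs in parallel.
    def detector_a(frame):
        return {"pedestrian", "car"}

    def detector_b(frame):
        return {"pedestrian"}

    def redundant_perception(frame):
        a, b = detector_a(frame), detector_b(frame)
        agreed = a & b      # objects both networks report: high trust
        disputed = a ^ b    # disagreement: handle conservatively, e.g. slow down
        return agreed, disputed

    print(redundant_perception("frame_042"))  # ({'pedestrian'}, {'car'})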
Thanks for the interview.

Prof. Dr.-Ing. Ralf Herrtwich is expected to give the presentation »Implementing AI for Automated Driving« (Session 8.2) at the Embedded World Conference on Feb. 25 at 14:30, presumably by video conference.

Nvidia @ embedded world: hall 2, booth 239 (cancelled on short notice)

Prof. Dr. Ralf G. Herrtwich

runs automotive software development for Nvidia in Germany, where he can draw upon his long R&D experience in vehicle automation as well as infotainment and telematics technology. He currently focuses on innovations in artificial intelligence for autonomous vehicle perception and maneuvering. Past assignments in Dr. Herrtwich's career include managing the automotive business unit of Here Technologies as well as developing self-driving vehicles for Mercedes-Benz. In 2013, his team made an S-Class re-enact the world's first overland drive, covering the historic 65-mile Bertha Benz Route autonomously in regular traffic. A computer scientist by education, Herrtwich started his career in academia at TU Berlin and UC Berkeley. He then held management positions with IBM and several telecommunications start-ups before joining Daimler in 1998 to manage its advanced engineering on telematics, infotainment and, later, driver assistance and chassis systems. Since 2009, he has also been an honorary professor for vehicle information technology at the Technical University of Berlin. In 2019, he was named a Fellow of the German Computer Science Society.