A few years ago, it seemed that autonomous driving would be available to everyone within a very short time. The hope was followed by disillusionment. In an interview with Markt & Technik, Pierre Olivier, CTO of LeddarTech, explains why he is convinced that the time is now finally ripe.
Markt & Technik: Automated driving has turned out to be far from as simple as originally thought. There is a lot of discussion about whether Level 3 is superfluous and whether the next step should go directly from Level 2 to Level 4.
Pierre Olivier: That discussion is old; years ago there was already debate about whether Level 3 was redundant. Today there is talk of Level 2+ and the like, but for me that is nothing other than Level 3 without the safety functions. None of that changes the fact that people want autopilot functions that allow the vehicle to drive on its own without the driver being forced to pay constant attention. Some call that Level 2+ today, but to me that is exactly what Level 3 is.
An autopilot function is probably something everyone would like to use, but Level 3 also means that the driver has to take back control of the vehicle within a certain time, and that seems to be a technical problem that has not yet been solved.

I wouldn’t say it doesn’t work. It has simply become clear over time that autonomous driving is not as easy to implement as people initially thought. In 2015, for example, Elon Musk stated that autonomous vehicles, and I am talking about fully autonomous vehicles here, would be on the road within three years.
Forbes predicted that there would be 10 million self-driving cars on the road by 2020, and Waymo’s CEO stated as late as the end of 2017: »Cars that drive fully autonomously already exist.« And those are just a few of the statements. The reality looks quite different. Tesla, for example, has revised its expectations several times, although today the company is once again convinced that it will soon be ready. But even Tesla now admits that autonomous driving is a very complex problem. I would even say it is one of the most difficult problems we have ever tried to solve, because it essentially amounts to copying the human brain.
Are the problems more in the software, in the hardware, or is it a problem of training data for the AI?
I think all three play a role. Tesla, for example, removed radar from its cars, for whatever reason, and all the evidence shows that performance became significantly worse; the videos from users are really frightening. In my view, this example confirms one point: the best sensor technologies are necessary for autonomous driving, because only their combination makes it possible to really secure the car.

There is also another point. There are investigations into why vehicles with the autopilot activated have collided with ambulances, fire engines, and other emergency vehicles. That raises a question for me: was the AI of these vehicles trained with inadequate data? If the training set is insufficient, a system cannot recognize such vehicles. A real 3D sensor, on the other hand, would detect that there is an obstacle, so the vehicle would stop in any case, regardless of whether it knows what the obstacle is. This example again shows that hardware really matters. That is not to say that AI and deep learning are unimportant, quite the opposite: they are the best tools we have to cover the multitude of environmental conditions. If you had to program all these different conditions individually, it would never work. So it takes the best hardware, the best software, and the best training data to solve this complex problem.
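The point about a 3D sensor overriding an uncertain classifier can be sketched as a simple fusion rule. This is only an illustrative toy under assumed names (`should_brake`, `min_points` and the label set are invented for this sketch), not LeddarTech’s or any vendor’s actual stack:

```python
# Illustrative sketch only: a fusion rule in which a raw geometric 3D
# obstacle check overrides the classifier, so the vehicle brakes even
# when it cannot identify what the obstacle is.

def should_brake(points_in_path, classifier_label, min_points=5):
    """Brake if enough 3D points lie in the driving corridor,
    regardless of whether the classifier recognized the object."""
    geometric_obstacle = len(points_in_path) >= min_points
    recognized_hazard = classifier_label in {"vehicle", "pedestrian", "ambulance"}
    # The geometric check alone is sufficient: an unclassified
    # obstacle must still stop the car.
    return geometric_obstacle or recognized_hazard

# An obstacle the classifier fails to recognize still triggers braking:
print(should_brake(points_in_path=[(1.2, 0.1, 0.5)] * 8,
                   classifier_label="unknown"))  # True
```

The design choice is the `or`: classification can add braking reasons but can never veto the geometric evidence, which is the behavior a pure camera-plus-classifier pipeline lacks.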
And it takes responsible OEMs and Tier 1 suppliers. The OEMs bear the greatest responsibility, because if they allow a driver to easily bypass a safety system, that is very reckless. You cannot rely on the end user to always act responsibly.
Why would an end user act irresponsibly? Surely he should be the first to be interested in a safe vehicle.
Drivers are not trained, and an autopilot is a very complex system. Aircraft pilots use autopilot systems, but they are extensively trained on them; they know exactly when to rely on them and when not to. In a car, the driver simply switches the autopilot on and tries out how it works. That is a completely different approach.
Back to the training data: companies like Tesla or Waymo should have an almost unlimited amount of training data available by now. Shouldn’t these companies be further along in their development?
Getting a driving license today takes around 50 hours. At that point, a new driver can cope reasonably well in traffic and no longer poses a danger. But to become a really good and safe driver takes at least 100,000 kilometers of driving. Data collected over and over again in the same environment is not nearly enough. It is not the quantity of data that matters, but its diversity. Waymo focuses more on quantity. But if you look at videos of these vehicles, you see that the systems always make mistakes when they get into situations they do not recognize.
Many companies are working on automated driving, yet success has so far been elusive. You are convinced it will come; what do you think is necessary for it to become a success story?
Many things come to mind. First and foremost, however, is the contribution of the infrastructure, i.e., V2X. Today, Super Cruise from General Motors is considered the safest system, but it only works on roads that have been mapped. Is it acceptable that a car can only drive on a mapped road? If enough highways are mapped, it might be a good compromise, and V2X would really help here. But with systems that are supposed to work under all conditions, you quickly reach the limits, namely the limits of computing capacity. It is not so much the sensors as the computing hardware: even with good training data available, the computing hardware remains the limiting factor.