LeddarTech acquires VayaVision

»The Most Accurate 3D Environmental Model«

8 September 2020, 9:16 | Iris Stroh

Continuation of the article from Part 1

Page 2

Upsampling sounds a bit like mysticism, as if you could get information that doesn't even exist. Where are the limits of the software?

Of course, the limit of any combination of software and sensor is ultimately set by the physics of the sensor itself. Put simply, an autonomous vehicle could in principle be realized with four cameras, because four cameras give me 360-degree coverage of the surroundings. In practice, however, four cameras are not enough: on the one hand there is no redundancy, and on the other hand only a single sensor technology would be used, which could easily leave the vehicle blind, for example in glaring sunlight or in rain. So you have to add at least a second sensor technology to make the system somewhat safer. If you want to implement a really safe system, you need camera, radar and LiDAR, because every technology has its limitations, now and in the future. The ideal sensor doesn't exist; all sensor modalities have their strengths and weaknesses. Our digital full-waveform processing approach, combined with raw data fusion and upsampling, enables LeddarTech to significantly improve the performance of the sensor suite in an ADAS or AD system while reducing system design complexity and cost.
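To make the upsampling idea concrete: a LiDAR delivers far fewer range samples than a camera has pixels, and one simple way to densify them is to project the LiDAR returns into the camera grid and give every pixel the depth of its nearest return. The sketch below is purely illustrative and is not LeddarTech's or VayaVision's actual algorithm; the function name and the brute-force nearest-neighbour search are assumptions for the sake of a runnable example.

```python
import numpy as np

def upsample_depth(points, depths, grid_shape):
    """Fill a dense (H, W) depth image from sparse LiDAR samples.

    points:     (N, 2) array of (row, col) pixel positions of returns
    depths:     (N,) array of measured ranges in metres
    grid_shape: (H, W) of the target camera grid

    Each pixel is assigned the depth of its nearest LiDAR return.
    Brute-force distance computation -- fine for a sketch, far too
    slow for a real-time perception stack.
    """
    H, W = grid_shape
    rows, cols = np.mgrid[0:H, 0:W]
    pix = np.stack([rows.ravel(), cols.ravel()], axis=1)   # (H*W, 2)
    # squared distance from every pixel to every LiDAR return
    d2 = ((pix[:, None, :] - points[None, :, :]) ** 2).sum(axis=2)
    nearest = d2.argmin(axis=1)                            # (H*W,)
    return depths[nearest].reshape(H, W)

# Toy example: three LiDAR returns upsampled onto a 4x4 camera grid.
pts = np.array([[0, 0], [3, 3], [0, 3]])
d = np.array([10.0, 5.0, 8.0])
dense = upsample_depth(pts, d, (4, 4))
```

No information is invented here, which is the point of the question above: the upsampled map only interpolates measurements that physically exist, so its accuracy is still bounded by the sensor.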

With the software, the resolution of the image of the surroundings can be increased. But cameras with 4K resolution are already in use, so why upsample at all?

Resolution alone is not the whole story. The system must be able to detect, classify and track obstacles. Even a camera with excellent resolution is not enough on its own: high values for dynamic range or amplitude say nothing about position accuracy, nor about an object's size, velocity or direction, and they cannot distinguish a poster showing a car or a pedestrian from a real car or pedestrian in the scene. This is where the quality of our software solution comes in: we accurately detect objects and properly classify them while eliminating false-positive and false-negative detections, making the system more robust and safer overall. That is where the combination of LeddarTech and VayaVision delivers superior signal processing, perception and sensor fusion by leveraging advanced deep neural networks, AI and computer vision.
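The poster-versus-pedestrian case illustrates why fusing depth into camera detections helps. One crude cross-check, shown below as a hedged sketch (this heuristic is my illustration, not the VayaVision perception stack): a fronto-parallel poster is nearly planar, so the LiDAR depths inside its camera bounding box show very little spread, while a real car or person spans a noticeable range of depths.

```python
import numpy as np

def looks_flat(box_depths, max_spread_m=0.3):
    """Heuristic plausibility check for a camera detection.

    box_depths:   depths (metres) of LiDAR points that project into
                  the detection's bounding box; 0 marks no return.
    max_spread_m: assumed threshold below which the patch is treated
                  as planar (e.g. a poster or billboard).

    Returns True if the patch looks like a flat surface.  A real
    system would fit a plane and look at residuals instead of using
    a simple min/max spread, and would handle oblique posters.
    """
    valid = box_depths[box_depths > 0]
    if valid.size < 3:        # too few returns to decide either way
        return False
    return bool(valid.max() - valid.min() < max_spread_m)

# A "poster": all returns at ~10 m.  A real vehicle: depths spread
# over two metres from bumper to windscreen.
poster = looks_flat(np.array([10.0, 10.05, 10.1]))   # True
vehicle = looks_flat(np.array([9.5, 10.6, 11.5]))    # False
```

A detection flagged as flat would be down-weighted or rejected before tracking, which is one way fusion suppresses false positives that a camera alone cannot catch.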

Should the VayaVision software be offered as a product or only in combination with the in-house technology?

We are currently sampling camera and radar perception and fusion solutions with various customers, including some publicly announced programs. We are also working on a LiDAR perception stack based on LeddarTech's Leddar Pixell 3D wide-field-of-view LiDAR sensor; it will sample to lead customers by the end of 2020 and will be available for use in series production by 2H21.

VayaVision's software is said to be hardware-agnostic, i.e. completely independent of LeddarTech technology?

Yes, the software is not bound to any special hardware. An OEM or Tier One can build a complete fusion platform with any sensor, processor and operating system (OS) available in the market.
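In practice, that kind of hardware independence comes from putting an abstraction layer between the fusion core and the sensor drivers. The sketch below shows what such a layer could look like; all names (`RangeSensor`, `FakeLidar`, `fuse`) are illustrative assumptions, not VayaVision's actual API.

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass

@dataclass
class PointCloud:
    """Sensor-neutral raw-data frame: 3-D points plus a timestamp."""
    points: list          # list of (x, y, z) tuples in metres
    timestamp_us: int     # capture time in microseconds

class RangeSensor(ABC):
    """Vendor-neutral driver interface.  Any LiDAR or radar driver
    implements read(); the fusion core never sees vendor specifics."""
    @abstractmethod
    def read(self) -> PointCloud: ...

class FakeLidar(RangeSensor):
    """Stand-in driver, useful for testing the stack without hardware."""
    def read(self) -> PointCloud:
        return PointCloud(points=[(1.0, 0.0, 0.2)], timestamp_us=0)

def fuse(sensors):
    """The fusion core depends only on the abstract interface, so
    swapping sensor vendors means swapping drivers, not the core."""
    return [s.read() for s in sensors]

frames = fuse([FakeLidar()])
```

Because the core is written against the interface rather than a device, the same design carries over to different processors and operating systems, which is what "any sensor, processor and OS" implies.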

You said that VayaVision uses AI algorithms, doesn't that require very high processing power?

No, the software can run on any standard processor from Renesas, Nvidia, Qualcomm or NXP. That is the good thing: the perception and sensor-fusion software can run on processors that are already used in automotive applications today.

Are there already tests that show how much sensor fusion is improved with VayaVision software?

Yes, we have run some benchmarks and achieved the best KPIs (Key Performance Indicators) in the industry for camera-radar fusion stacks. In other words, we are one of the industry leaders in terms of perception and sensor fusion.



