
Rivian's feelings about lidar

ElectricTrucking

Well-Known Member
Joined
Dec 23, 2018
Threads
3
Messages
374
Reaction score
288
Location
USA
Vehicles
Porsche 911, Chevy Bolt
Elon Musk made a strong statement about the uselessness of lidar. Rivian appears to be a strong believer in lidar, alongside other sensor systems, for its self-driving features. I wonder how Rivian feels about Musk's negative views of lidar.

 

DocTwinkie

Well-Known Member
First Name
George
Joined
Mar 12, 2019
Threads
3
Messages
65
Reaction score
47
Location
Ohio
Vehicles
2019 Acura RDX, 2014 Volvo XC60
I can't say I've delved deeply into the tech, but cameras are easy to fool with optical illusions. The Tesla that plowed straight into a truck because the white side created a false horizon that tricked the camera probably wouldn't have happened with lidar, as it would have identified a solid object. Radar can pick up objects, but only at close range; at high speed, radar can be too late to see a stationary object.
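
To put rough numbers on the high-speed point (the speed, braking, and range figures below are just my own assumptions for illustration, not any sensor's spec):

Code:
def braking_distance_m(speed_mps: float, decel_mps2: float = 7.0) -> float:
    """Distance to stop from speed_mps at constant deceleration (dry-road assumption)."""
    return speed_mps ** 2 / (2 * decel_mps2)

speed_mps = 75 * 0.44704      # 75 mph in m/s
reaction_s = 0.5              # assumed system reaction time
detect_range_m = 160.0        # assumed forward-radar range to a stationary object

needed = speed_mps * reaction_s + braking_distance_m(speed_mps)
print(f"Need about {needed:.0f} m to stop; assumed radar range is {detect_range_m:.0f} m.")
# With these made-up numbers there is margin on paper, but a stationary object
# often isn't confirmed (vs. clutter) until much closer than max range,
# which is why radar can still end up "too late" at speed.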

Lidar does have limitations in heavy precipitation, but then all imaging modalities do.

I don't see Musk ever truly achieving his goal of full autonomy safely with just cameras.
 

stank65

Well-Known Member
First Name
Rich
Joined
Apr 1, 2019
Threads
0
Messages
48
Reaction score
20
Location
Coopersburg, PA
Vehicles
Tesla Model 3, Chevy Tahoe, GMC Acadia
This is yet another excellent course correction from Tesla.
 

Hmp10

Well-Known Member
Joined
Mar 7, 2019
Threads
5
Messages
629
Reaction score
542
Location
Naples, FL
Vehicles
2015 Tesla Model S P90D; 2018 Honda Odyssey
All autonomous driving cars have to use radar sensors, because lidar sensors are seriously compromised in rain, fog, or dusty conditions. Also, the greater range of radar over lidar becomes critical when driving at higher speeds. However, in clear conditions, lidar offers advantages over radar in imaging detail and allows construction of a more accurate digital image than radar can produce.

So, the question is whether lidar sensors are also necessary. Lidar uses light waves instead of radio waves to do its work. However, a Tesla is equipped with 8 cameras, which are also light sensors and imagers. So the issue is not whether Tesla is using light waves to sense its surroundings, but whether a lidar sensor or a camera is the better source for collecting light that a computer can analyze to develop driving responses.

Before 2012, the answer would clearly have been lidar. Up to that time, neural net programming was not evolved enough to develop computer-driven responses to the visual images that cameras produce. The digital point maps that lidar sensors build were more amenable to analysis by traditional programming.
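
As a toy illustration of why point maps suit traditional programming (the corridor geometry and thresholds below are made-up values, not anyone's production logic), obstacle detection on a point cloud can be plain rule-based geometry, with no learned model at all:

Code:
import numpy as np

def points_in_path(points: np.ndarray,
                   lane_half_width: float = 1.5,
                   max_range: float = 50.0,
                   min_height: float = 0.3) -> np.ndarray:
    """points: (N, 3) array of (x=forward, y=left, z=up) in meters.
    Returns the points inside the vehicle's corridor, above ground level."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    mask = (x > 0) & (x < max_range) & (np.abs(y) < lane_half_width) & (z > min_height)
    return points[mask]

# A handful of fake returns: road surface, a roadside sign, a solid object ahead.
scan = np.array([[10.0, 0.2, 0.0],    # ground return
                 [20.0, 4.0, 2.0],    # sign, outside the corridor
                 [30.0, -0.5, 1.2]])  # solid object dead ahead
print(points_in_path(scan))           # -> only the object in the lane survives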

However, in 2012 there was a breakthrough in neural net programming that allowed computers to train themselves to analyze and develop responses to visual images more the way a human would. Tesla (which introduced the Model S in 2012) has been taking this approach. Each successive programming version has added sophistication to what the neural net can accomplish. Version 9 (which requires the new CPUs just being introduced in Teslas) uses a single neural network to process images from all 8 cameras with one set of weights, as well as processing 3 color channels and 2 frames simultaneously to measure the speed and direction of objects.
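
Here is a toy sketch of that idea, just to make "one set of weights across 8 cameras" concrete. This is emphatically not Tesla's actual network; every shape and layer size below is invented for illustration:

Code:
import torch
import torch.nn as nn

# One backbone, one set of weights, applied to every camera. Each camera
# contributes 2 stacked frames x 3 color channels = 6 input channels, so
# the net can compare consecutive frames and infer motion.
class SharedBackbone(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(6, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )

    def forward(self, x):                   # x: (batch, cams, 6, H, W)
        b, cams, c, h, w = x.shape
        x = x.reshape(b * cams, c, h, w)    # fold cameras into the batch:
        f = self.features(x)                # the same weights see every view
        return f.reshape(b, cams, -1)       # (batch, cams, feature_dim)

frames = torch.randn(1, 8, 6, 96, 160)      # 8 cameras, 2 frames x 3 channels
print(SharedBackbone()(frames).shape)       # torch.Size([1, 8, 32])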

In short, the suite of radar sensors, ultrasonic detectors, and cameras that Tesla uses does everything that cars equipped with lidar do . . . and at a somewhat lower cost, since lidar sensors are very expensive.

I don't think Musk envisions ever using only cameras since, like lidar sensors, they require certain light conditions to work. With neural net programming, cameras can outperform lidar, but they can't replace the sensors needed for low-visibility conditions.
 

stank65

Well-Known Member
First Name
Rich
Joined
Apr 1, 2019
Threads
0
Messages
48
Reaction score
20
Location
Coopersburg, PA
Vehicles
Tesla Model 3, Chevy Tahoe, GMC Acadia
I initially bought into this explanation as well. The reality is that full self-driving should have multiple overlapping redundancies, and lidar is one of them. Also, lidar is an active system, whereas imaging cameras are a passive system dependent on the surrounding lighting conditions. This makes lidar far superior to imaging cameras, especially at night. Saying lidar is too expensive for the mission-critical function of the car makes me smirk. Also, Tesla has realized that full self-driving may be farther off and/or not as desirable as previously thought for the everyday consumer. With the approach they are taking, they have changed their pricing model significantly. My Model 3 is an "Enhanced Autopilot" variant that is no longer even offered.

Don't get me wrong, I absolutely love my Model 3 and wouldn't trade it for any other car out there today, but from owning one, I realize the limitations of the car and how important it is for self-driving to be completely bulletproof for it to be successful. Solid adaptive cruise control (Enhanced Autopilot) will be more than enough for the general public (because the car is so fun to drive when you aren't in traffic). Leaving out a sensor that gives you a clear advantage in some driving scenarios and an additional layer of redundancy is a mistake, IMO.
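
To put back-of-envelope numbers on the redundancy argument (the failure rates below are invented, and real sensor errors are not fully independent, so treat this as illustration only):

Code:
# If each sensor independently misses an obstacle with some probability,
# the chance that ALL of them miss it is the product of those probabilities.
sensors = {"camera": 0.01, "radar": 0.02, "lidar": 0.005}  # made-up rates

p_all_miss = 1.0
for name, p_miss in sensors.items():
    p_all_miss *= p_miss

print(f"camera alone misses 1 in {1/sensors['camera']:.0f}")
print(f"all three miss 1 in {1/p_all_miss:,.0f}")   # -> 1 in 1,000,000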
 


Hmp10

Well-Known Member
Joined
Mar 7, 2019
Threads
5
Messages
629
Reaction score
542
Location
Naples, FL
Vehicles
2015 Tesla Model S P90D; 2018 Honda Odyssey
While lidar is an active system, it is still dependent on the infrared light pulses it emits being able to reach an object and return. This limits its utility in rain, fog, and dusty conditions. It will work in clear nighttime where a camera won't, but without a neural net computer the point map it paints cannot be used by the car unless it can be correlated to an accurate and detailed digital map of the road, its markings and signage, and surrounding environment. Most roads are not mapped at the level of detail needed for lidar to work effectively at night without neural net programming.
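
A toy example of what "correlating the point map to a digital map" means in practice (the grid, the wall, and the search window are all invented for illustration; real localization is far more sophisticated than this): slide the scan across a stored prior map and keep the offset with the most agreement.

Code:
import numpy as np

prior_map = np.zeros((20, 20), dtype=int)
prior_map[5, 3:17] = 1                    # e.g., a guardrail in the stored map

scan = np.zeros((20, 20), dtype=int)
scan[7, 3:17] = 1                         # same guardrail, seen 2 cells "off"

best_offset, best_score = None, -1
for dx in range(-3, 4):                   # search a small window of offsets
    shifted = np.roll(scan, dx, axis=0)
    score = int(np.sum(shifted & prior_map))
    if score > best_score:
        best_offset, best_score = dx, score

print(f"best alignment: shift scan by {best_offset} cells (score {best_score})")
# -> shift by -2: the car now knows where it sits relative to the mapped rail.
# Without the prior map there is nothing to correlate against, which is the point.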

I tend to agree with you that the more redundancy (or near-redundancy in the case of cameras plus lidar, as they are different technologies) the better for autonomous driving. Whether such redundancy, in fact, generates more safety will have to await the accumulation of real-world data as various systems are deployed en masse. Since lidar units are more mechanically complex, more prone to failure, and considerably more expensive than cameras, it might ultimately come down to a cost-benefit analysis for use in cars.
 

stank65

Well-Known Member
First Name
Rich
Joined
Apr 1, 2019
Threads
0
Messages
48
Reaction score
20
Location
Coopersburg, PA
Vehicles
Tesla Model 3, Chevy Tahoe, GMC Acadia
Why do you keep saying 'at night'? Lidar is an active system and is more effective at night, and significantly better than neural net imaging at night, because it does not require external light. Yes, it is negatively affected by certain weather conditions, but that makes the combination of imaging + lidar + radar the best for safety and redundancy, which needs to be the primary objective in all full self-driving solutions. Why limit your solution?
 

Hmp10

Well-Known Member
Joined
Mar 7, 2019
Threads
5
Messages
629
Reaction score
542
Location
Naples, FL
Vehicles
2015 Tesla Model S P90D; 2018 Honda Odyssey
I wrote that lidar "will work in clear nighttime where a camera won't". However, although it does not require ambient light to work, the digital point map it constructs has to be compared to a digitized map of the area it's scanning for a non-neural-net computer to know what to do with the data the lidar sensors are providing. If you are in an area for which detailed digital maps are not available -- and that includes many areas -- then you're only going to get object avoidance, not true autonomous driving.

I don't know what you mean by "neural net imaging". Neural net does not refer to an imaging technology; it's an AI programming protocol that determines how to interpret and use data.

A camera in good daylight conditions, feeding a neural-net-programmed computer, will be able to steer a car in an area for which detailed digital maps are not available, whereas a lidar sensor feeding a conventionally-programmed computer will not. On the other hand, a camera is useless at night, no matter how the computer is programmed.

Clearly, the reason most manufacturers are experimenting with deploying both cameras and lidar sensors is that, in tandem, they cover more conditions, as you quite reasonably argue they should. Musk has placed his bet, at least for now, on the ability of neural net computing to do enough with data it receives from its array of cameras, radar, and ultrasonic sensors to be the functional equivalent of a system also incorporating lidar. It's really more of a software-driven approach than a hardware-driven one. Whether he's bet smartly remains to be seen. He has made his mistakes before, and this could be one of them.

Certainly the chief engineer for the Model S, who is now the CEO of Lucid Motors, thinks Musk is wrong, as the Lucid Air is going to use both cameras and lidar.
 
 



