Rivian Eventual L3 and Data Gathering

EVian

Well-Known Member
Joined
Aug 17, 2019
Threads
0
Messages
48
Reaction score
36
Location
Newmarket, UK
Vehicles
BMW 330e
As data collected from Tesla or Ford would be pretty useless for tuning Rivian's algorithms, I don't think this is the case.
Depends what is meant by ‘data’. If one bought access to the source data (the streams from cameras and other sensors), then different algorithms could be applied to that data. That could be useful. Of course, that would probably mean petabytes of data, so it is probably infeasible, but it is technically possible.

ajdelange

Well-Known Member
First Name
A. J.
Joined
Aug 1, 2019
Threads
9
Messages
2,883
Reaction score
2,317
Location
Virginia/Quebec
Vehicles
Tesla XLR+2019, Lexus, Landcruiser, R1T
Occupation
EE Retired
I think you are missing the essential point that a cyber system, as originally conceived by Wiener in the 1940s, is a FEEDBACK system. You cannot feed back from a sensor or camera on another vehicle, as that vehicle is not in the loop.
 

EVian

Well-Known Member
Joined
Aug 17, 2019
Threads
0
Messages
48
Reaction score
36
Location
Newmarket, UK
Vehicles
BMW 330e
I think you are missing the essential point that a cyber system, as originally conceived by Wiener in the 1940s, is a FEEDBACK system. You cannot feed back from a sensor or camera on another vehicle, as that vehicle is not in the loop.
But Tesla annotates the image data with information about where the road goes etc. So you can apply your own algorithms to the image data and correct the algorithm if it doesn’t get the right answer about where the road goes.

Edit: I agree that making the car do the right thing for that ‘road’ is a separate problem; that is the easier part of the problem, though.
 

EVian

Well-Known Member
Joined
Aug 17, 2019
Threads
0
Messages
48
Reaction score
36
Location
Newmarket, UK
Vehicles
BMW 330e
By the way, if any of you have 3 hours of your life to give away, this is worth a watch:


Note the OP’s request not to derail the topic; I only post this as a good source of information about how these types of systems work. Note that the first hour or so is just footage of a Tesla driving, so you can skip that part. It’s not a new video, but it is very interesting.
 

ajdelange

Well-Known Member
First Name
A. J.
Joined
Aug 1, 2019
Threads
9
Messages
2,883
Reaction score
2,317
Location
Virginia/Quebec
Vehicles
Tesla XLR+2019, Lexus, Landcruiser, R1T
Occupation
EE Retired
But Tesla annotates the image data with information about where the road goes etc. So you can apply your own algorithms to the image data and correct the algorithm if it doesn’t get the right answer about where the road goes.
Yes, you can, but that's trivial and a solved problem. No need for any training data there.

Edit: I agree that making the car do the right thing for that ‘road’ is a separate problem, though.
Yes, and this is the hard part of the problem. Consider a teenage son. He can see that the road goes to the right, but he hasn't the experience to know how far to turn the wheel to stay on it. In training an AI algorithm, the system turns the wheel and then sees how the car moves relative to the road. If the car overshoots, this is fed back to the tuning algorithm, which reduces the gain so that next time the wheel doesn't get turned so much for a like amount of deflection. This is how you learn, and the idea with AI is to get machines to learn the way you do.
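As a toy illustration of that gain-reduction loop (the single-gain controller, the numbers, and the one-line "vehicle" are all invented for this sketch; a real autopilot is far more elaborate):

```python
# Toy sketch of feedback tuning: a proportional steering controller
# whose gain is reduced each time the vehicle crosses (overshoots)
# the target lane offset. All numbers are illustrative only.

def steer(position, gain, target):
    """One control step: turn proportionally to the error."""
    return position + gain * (target - position)

def tune_gain(gain, target=1.0, steps=20, shrink=0.9):
    """Drive repeatedly and adapt the gain from observed overshoot."""
    position = 0.0
    for _ in range(steps):
        new_position = steer(position, gain, target)
        # Signs differ => we crossed the target this step (overshoot).
        crossed = (new_position - target) * (position - target) < 0
        if crossed:
            gain *= shrink  # feed back: turn the wheel less next time
        position = new_position
    return gain, position

gain, position = tune_gain(gain=1.8)
```

Note that the tuning signal comes from the vehicle's own motion; it could not be computed from another vehicle's recordings, which is the "in the loop" point above.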

Let's try it another way. Were you an aspiring violinist, you might learn something in general by watching videos and/or listening to recordings of Itzhak Perlman playing, but to really refine your bowing it would be much better to play in front of Itzhak Perlman and have him correct your bowing errors. Second best would be to record yourself playing and then carefully review the videos. The point really is that the system being tuned has to be in the loop. This is the nature of feedback control. You can't tune your violin by watching Itzhak tune his.
 


EVian

Well-Known Member
Joined
Aug 17, 2019
Threads
0
Messages
48
Reaction score
36
Location
Newmarket, UK
Vehicles
BMW 330e
Yes, you can, but that's trivial and a solved problem. No need for any training data there.

Yes, and this is the hard part of the problem. Consider a teenage son. He can see that the road goes to the right, but he hasn't the experience to know how far to turn the wheel to stay on it. In training an AI algorithm, the system turns the wheel and then sees how the car moves relative to the road. If the car overshoots, this is fed back to the tuning algorithm, which reduces the gain so that next time the wheel doesn't get turned so much for a like amount of deflection. This is how you learn, and the idea with AI is to get machines to learn the way you do.

Let's try it another way. Were you an aspiring violinist, you might learn something in general by watching videos and/or listening to recordings of Itzhak Perlman playing, but to really refine your bowing it would be much better to play in front of Itzhak Perlman and have him correct your bowing errors. Second best would be to record yourself playing and then carefully review the videos. The point really is that the system being tuned has to be in the loop. This is the nature of feedback control. You can't tune your violin by watching Itzhak tune his.
I have to disagree; the hard part of the problem is understanding the world, not navigating through it. The teenage son (or daughter...) doesn’t crash because they turn the steering wheel incorrectly; it’s because they don’t fully comprehend the situation.
 

electruck

Well-Known Member
Joined
Oct 6, 2019
Threads
69
Messages
3,495
Reaction score
6,454
Location
Dallas, TX
Vehicles
2023 Rivian R1S
"Hardware" means the physical processor chips in which the programs are executed and the memory chips in which instructions (programs) and data are stored. These will be present in the vehicle that is delivered to you. Level 2 code will also be on board and presumably enabled. There may be a version of the Level 3 firmware on board too but presumably it will not be enabled. As the level 3 code evolves to the state where Rivian feels comfortable with drivers actually using it a current version will be downloaded OTA or the version already on board will be enabled OTA (probably the former). Put another way, the truck will be delivered with Level 2. Level 3 will come later. When it does no additional hardware will need to be installed.
Since you are a stickler for details, the hardware also includes the lidar, radar, cameras and ultrasonic sensors. These too will be included at delivery. As will any actuators required to steer, brake and accelerate.
 
Last edited:

ajdelange

Well-Known Member
First Name
A. J.
Joined
Aug 1, 2019
Threads
9
Messages
2,883
Reaction score
2,317
Location
Virginia/Quebec
Vehicles
Tesla XLR+2019, Lexus, Landcruiser, R1T
Occupation
EE Retired
I have to disagree; the hard part of the problem is understanding the world, not navigating through it...
Well, I suppose you can think of it any way you want to, but the man's question was whether one could develop a self-driving system quickly or, put another way, whether Tesla will be at an advantage in this regard relative to Rivian. The answer is "yes." The implied question is "Will Rivian be able to catch up enough to be a viable competitor?" The answer is, again, "yes." Tesla may have billions of miles of training data (and again I emphasize that you can't train yourself by watching someone else work out, so the training data has to be collected by the system being trained), but there is a so-called "steep learning curve." It doesn't take that much data to arrive at a viable system, if not one as sophisticated as Tesla's current system. Rivian has announced that the delivered trucks will have Level 2, meaning they have collected enough data on the road to support that level. As the delivered vehicles are driven at Level 2, data will be collected which will allow the transition to Level 3.

Be aware that Tesla's Level 3 ain't that great. If anyone is considering buying a Tesla rather than a Rivian because Tesla has Level 3, use other criteria because, among other things, Tesla does not really have Level 3 at this point in time. Many of the features don't work much of the time, to the point that I'm afraid to use them except on the freeway when there basically aren't any other cars around. For all the data Tesla may have collected, they have not been able to train what we call judgement into their system, and frankly, having observed AI, if from a distance, from Norbert to Elon, I really wonder if they ever will. They have handled the easy part (I should probably say "relatively easy" because there were certainly challenges), by which I mean that the car has a reasonable sense of its environment: it can distinguish trucks, vans, cars, buses, motorcycles and people, display them in their correct locations relative to the vehicle most of the time, and knows where the vehicle is in the roadway. But it does not, like the teenager, always make decisions that I am comfortable with.
 
Last edited:

skyote

Well-Known Member
Joined
Mar 12, 2019
Threads
55
Messages
2,725
Reaction score
5,647
Location
Austin, TX
Vehicles
Jeeps, 2500HD Duramax, R1S Preorder (Dec 2018)
The real challenge is the number of situations that can be encountered and how best to handle them *safely*. I believe the perception side of the problem is mostly solved.

The other interesting thing to me is how different people have different driving preferences. Speed, following distance, acceleration/braking rates, etc. I hope that vehicles can learn from their specific drivers in non-autonomous scenarios, within certain safety limits of course, or have configurable parameters (akin to driver memory settings). Else people will likely get annoyed with how their vehicle drives them.
 

ajdelange

Well-Known Member
First Name
A. J.
Joined
Aug 1, 2019
Threads
9
Messages
2,883
Reaction score
2,317
Location
Virginia/Quebec
Vehicles
Tesla XLR+2019, Lexus, Landcruiser, R1T
Occupation
EE Retired
The real challenge is the number of situations that can be encountered and how best to handle them *safely*. I believe the perception side of the problem is mostly solved.
Here's a situation to ponder. A fully autonomous vehicle is taking two surgeons to the airport, from which they will fly to a distant city to perform life-saving surgery on a little girl. The car is proceeding at a pretty good clip when three vagrants wander out into the street. The only paths available to the car are to strike the vagrants, probably killing them, or to swerve into the other lane, which will probably kill the passengers as it is blocked by a parked moving van (removal lorry). What's the right decision for the autopilot?

This is the kind of question being pondered by those studying self-driving. You can find online surveys where you answer questions for a bunch of scenarios. Those running them hope to determine what society feels about this sort of thing. Then you have to figure out how to program the autopilot to implement whatever the programmer decides is right. This will not happen in our lifetime.


The other interesting thing to me is how different people have different driving preferences. Speed, following distance, acceleration/braking rates, etc. I hope that vehicles can learn from their specific drivers in non-autonomous scenarios, within certain safety limits of course, or have configurable parameters (akin to driver memory settings). Else people will likely get annoyed with how their vehicle drives them.
All good points. Current Teslas have settings for following distance, speed, etc., so the driver's preferences can be accommodated. But what about people in the other cars on the road? The Tesla "Navigate on Autopilot" feature will, if it finds itself behind a vehicle going slower than the speed limit, move into a faster lane if it is free. It will do this even though a car in the faster lane is closing from behind, and it will do it "safely", but it will sometimes cut it so close that, were I the guy in the car behind, I would certainly be alarmed to the point of using either the horn or my middle finger or both to inform the "driver" of the Tesla of my displeasure. That's why I don't use this feature if traffic is at all heavy. It doesn't work about half the time anyway.

You all may have concluded that I am a bit cynical about AI. I've been hearing about how great it is going to be for a long, long time. I'm still waiting. People expect a lot of it because of hype by the acolytes (Elon Musk is an acolyte) and because, as we have seen here, they don't really understand how it works. We can't really get into that here beyond saying that it takes what it observes, combines that with what it knows a priori, and *guesses* (and that's emphasized for a reason) as to which of several hypotheses most likely represents the situation. It then decides, based on which hypothesis it has chosen, what is the best thing to do in order to minimize a cost. There are thus three parts to this: observation, analysis (application of knowledge) and action decision, and I hope it is clear that there is uncertainty attached to all three. If the machine is a human who hands down decisions that minimize cost, we call him "wise". We are trying to build wise machines. As you have noted, the perception (observation) bit is in fairly good shape (though it has taken Tesla 2 × a billion transistors to get there). It's applying the a priori knowledge and the costs that presents the challenges. IMO.
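The observe/analyse/decide structure can be boiled down to a few lines: weigh each hypothesis by its probability and pick the action with the lowest expected cost. The hypotheses, probabilities and costs below are made up purely for illustration; they stand in for whatever a real system would compute.

```python
# Minimal expected-cost decision: guess among hypotheses, then pick
# the action that minimizes cost averaged over that uncertainty.

hypotheses = {"pedestrian": 0.7, "shadow": 0.3}    # posterior beliefs
costs = {                                          # cost[action][hypothesis]
    "brake": {"pedestrian": 1, "shadow": 5},       # braking: cheap unless needless
    "continue": {"pedestrian": 100, "shadow": 0},  # continuing: disastrous if wrong
}

def expected_cost(action):
    """Average the cost of an action over all weighted hypotheses."""
    return sum(p * costs[action][h] for h, p in hypotheses.items())

best_action = min(costs, key=expected_cost)  # "brake" here: 2.2 vs 70.0
```

The uncertainty shows up in all three places: the beliefs (observation), the cost table (a priori knowledge) and the minimization (decision).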

Now with all that it occurs to me that readers may conclude autopilot is useless. That is not true. Simply relieving the driver of the responsibilities of staying in lane and regulating speed can make a long trip much less tiring. The autopilot can maintain speed more uniformly than you can and that's going to improve range through better efficiency. Warning of threats is also beneficial. All these are things that the cars can do and collected data have shown that accident rates for Tesla drivers using autopilot are lower than for drivers in general.
 
Last edited:


skyote

Well-Known Member
Joined
Mar 12, 2019
Threads
55
Messages
2,725
Reaction score
5,647
Location
Austin, TX
Vehicles
Jeeps, 2500HD Duramax, R1S Preorder (Dec 2018)
Oh, and I think Rivian will have plenty of data & mature algorithms at launch or shortly thereafter.

I might have asked about partnership & collaboration with Amazon/Aurora, and received responses along the lines of "Amazon has a vested interest in autonomy for its upcoming delivery vans".
 

ajdelange

Well-Known Member
First Name
A. J.
Joined
Aug 1, 2019
Threads
9
Messages
2,883
Reaction score
2,317
Location
Virginia/Quebec
Vehicles
Tesla XLR+2019, Lexus, Landcruiser, R1T
Occupation
EE Retired
Autonomy for Amazon is of special interest as it has the added element of what's known as the "Traveling Salesman Problem", which, in a nutshell, requires finding the route a traveling salesman (or delivery man) should take in serving a specified list of stops while minimizing some cost. The obvious cost is total miles driven but, in the case of Amazon, FedEx, UPS, etc., it actually has a "number of left turns" component. Other components can include things like tolls, bridge crossings, use of certain roadways at certain times, etc.
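For a toy version of this, one can brute-force all visit orders and score each route by distance plus a per-left-turn penalty. The coordinates and penalty value are invented; real routing uses heuristics, not brute force, since the number of orders grows factorially.

```python
# Brute-force traveling-salesman sketch with a "left turn" penalty.
from itertools import permutations
import math

stops = {"depot": (0, 0), "A": (2, 0), "B": (2, 2), "C": (0, 2)}

def leg_cost(a, b):
    return math.dist(stops[a], stops[b])  # Euclidean distance

def turn_is_left(a, b, c):
    """Cross product of consecutive legs: positive means a left turn."""
    (ax, ay), (bx, by), (cx, cy) = stops[a], stops[b], stops[c]
    return (bx - ax) * (cy - by) - (by - ay) * (cx - bx) > 0

def route_cost(route, left_penalty=1.0):
    path = ("depot",) + tuple(route) + ("depot",)
    distance = sum(leg_cost(p, q) for p, q in zip(path, path[1:]))
    lefts = sum(turn_is_left(p, q, r)
                for p, q, r in zip(path, path[1:], path[2:]))
    return distance + left_penalty * lefts

best = min(permutations(["A", "B", "C"]), key=route_cost)
```

With these stops on a square, the clockwise loop C→B→A wins: it covers the same 8 units of distance as the counter-clockwise loop but makes only right turns.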
 

EVian

Well-Known Member
Joined
Aug 17, 2019
Threads
0
Messages
48
Reaction score
36
Location
Newmarket, UK
Vehicles
BMW 330e
Well, I suppose you can think of it any way you want to, but the man's question was whether one could develop a self-driving system quickly or, put another way, whether Tesla will be at an advantage in this regard relative to Rivian. The answer is "yes." The implied question is "Will Rivian be able to catch up enough to be a viable competitor?" The answer is, again, "yes." Tesla may have billions of miles of training data (and again I emphasize that you can't train yourself by watching someone else work out, so the training data has to be collected by the system being trained), but there is a so-called "steep learning curve." It doesn't take that much data to arrive at a viable system, if not one as sophisticated as Tesla's current system. Rivian has announced that the delivered trucks will have Level 2, meaning they have collected enough data on the road to support that level. As the delivered vehicles are driven at Level 2, data will be collected which will allow the transition to Level 3.
But there is an important distinction here. I would agree with you (that you can’t learn from someone else’s data) if the AI system were directly controlling where the vehicle places itself in lane, etc. But it’s not. The AI system is understanding the environment and deciding where the vehicle should be. The task of making it go where it should be is separate. So you absolutely can learn how to sense the world from annotated data. The only feedback is whether the AI marked up the footage with the correct road shape, direction, placement of vehicles, etc., and that ‘correct’ answer can be part of the data used. The video I linked in #5 has examples of companies whose business model is to do just this, albeit not always with real data (the data being simulated in some cases).

The act of placing the vehicle where it should be is relatively trivial, and can be implemented by a procedural system rather than an AI system.
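The "learning perception from annotated data" argument can be sketched in a few lines: score a perception model offline against labelled footage, with no vehicle in any loop. The stand-in model, the frames and the labels below are all invented; a real system would be a neural network scored against human or auto-generated annotations.

```python
# Offline supervised scoring of a perception model against annotations.

def predict_lane_offset(frame):
    """Stand-in perception model: guesses lane centre from pixel stats."""
    return sum(frame) / len(frame)  # placeholder for a learned model

# Annotated data: (sensor frame, human-labelled lane-centre offset).
dataset = [
    ([0.2, 0.4, 0.6], 0.40),
    ([0.1, 0.1, 0.4], 0.25),
]

def mean_error(model, data):
    """The offline feedback signal: compare predictions with labels."""
    return sum(abs(model(frame) - label) for frame, label in data) / len(data)

score = mean_error(predict_lane_offset, dataset)
```

The error signal here comes entirely from recorded, labelled data, which is the sense in which perception, unlike vehicle control, does not need the trained system in the driving loop.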

Here's a situation to ponder. A full autonomy vehicle is taking two surgeons to the airport from which they will fly to a distant city to perform life saving surgery on a little girl. The car is proceeding at a pretty good clip when three vagrants wander out into the street. The only paths available to the car are to strike the vagrants probably killing them or to swerve into the other lane which will probably kill the passengers as it is blocked by a parked moving van (removal lorry). What's the right decision for the autopilot?

This is the kind of question being pondered by those studying self driving. You can find on-line surveys where you answer questions for a bunch of scenarios. Those running them hope to determine what society feels about this sort of thing. Then you have to figure out how to program the autopilot to implement whatever the programmer decides is right. This will not happen in our life time.

...

You all may have concluded that I am a bit cynical about AI. I've been hearing about how great it is going to be for a long, long time. I'm still waiting. People expect a lot of it because of hype by the acolytes (Elon Musk is an acolyte) and because, as we have seen here, they don't really understand how it works. We can't really get into that here beyond saying that it takes what it observes, combines that with what it knows a priori, and *guesses* (and that's emphasized for a reason) as to which of several hypotheses most likely represents the situation. It then decides, based on which hypothesis it has chosen, what is the best thing to do in order to minimize a cost. There are thus three parts to this: observation, analysis (application of knowledge) and action decision, and I hope it is clear that there is uncertainty attached to all three.
Succinctly put. And these are absolutely the kinds of decisions a Level 5 system will need to be making, like you say. The situation you described does place additional requirements on the observation part: how does the system determine that the people are vagrants? Not unachievable, but certainly subject to a great deal more uncertainty.

I thought I had quoted another response about the perception part being solved; it seems I didn’t. I would say it’s 95% solved, but it’s the last 5% that takes 95% of the effort. In the general case it will work out where the road goes and what vehicles and pedestrians are around. But if, for example, there is a slight crest and it can’t see where the road goes, it might be able to use other visual cues, but only if the situation is similar to ones already seen by the fleet. It’s these edge and corner cases that cause trouble, and will keep AI systems guessing for some time to come.
 

ajdelange

Well-Known Member
First Name
A. J.
Joined
Aug 1, 2019
Threads
9
Messages
2,883
Reaction score
2,317
Location
Virginia/Quebec
Vehicles
Tesla XLR+2019, Lexus, Landcruiser, R1T
Occupation
EE Retired
The act of placing the vehicle where it should be is relatively trivial, and can be implemented by a procedural system rather than an AI system.
I had an airplane that did it with a couple of transistors. It was done for many years with pneumatics. Yes, it can be done with a procedural system, but it isn't. In an autopilot we want the response to feedback to be much more sophisticated than we can get with the simple PID-type algorithms that linear feedback implies. In fact, the major reasons for going to intelligent control are to free us of the restrictions of linearity and of the necessity of knowing the system characteristics a priori.
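For reference, the textbook PID loop being alluded to is only a few lines; note that all three gains are fixed constants chosen for a known, linear plant, which is exactly the restriction described. The gains and the one-line "vehicle" below are invented for illustration.

```python
# Classic PID controller tracking a setpoint; gains are fixed a priori.

class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error, dt):
        self.integral += error * dt                    # I: accumulated error
        derivative = (error - self.prev_error) / dt    # D: error trend
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Drive a trivial plant (position += control) toward a setpoint of 1.0.
pid = PID(kp=0.8, ki=0.2, kd=0.02)
position = 0.0
for _ in range(100):
    position += pid.update(1.0 - position, dt=0.1)
```

The whole behaviour is baked into kp, ki and kd; there is nothing here that adapts to an unknown or nonlinear system, which is what intelligent control is meant to add.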

The irony in this quote is that ultimately the code that executes the AI algorithms is procedural: e.g., the last Kalman filter I implemented was written in Fortran, the last fuzzy controller in C.

When I wrote that last sentence it occurred to me that perhaps you might have concluded from the video you linked that, because the neural accelerator in the Tesla "computer" is used for classification, AI is only used for classification. That's not true. It is also used for control. A simple example would be a fuzzy temperature controller such as the ones you can buy for a few dollars on eBay and such as are found in "smart" thermostats. These are pure control systems, and in order to give optimum response they train themselves. But I cannot train the thermostat I intend to use in my house by installing it in your house, just as Rivian cannot train its controller by looking at Tesla's training data.
 
Last edited:

EVian

Well-Known Member
Joined
Aug 17, 2019
Threads
0
Messages
48
Reaction score
36
Location
Newmarket, UK
Vehicles
BMW 330e
I had an airplane that did it with a couple of transistors. It was done for many years with pneumatics. Yes, it can be done with a procedural system, but it isn't. In an autopilot we want the response to feedback to be much more sophisticated than we can get with the simple PID-type algorithms that linear feedback implies. In fact, the major reasons for going to intelligent control are to free us of the restrictions of linearity and of the necessity of knowing the system characteristics a priori.

The irony in this quote is that ultimately the code that executes the AI algorithms is procedural: e.g., the last Kalman filter I implemented was written in Fortran, the last fuzzy controller in C.

When I wrote that last sentence it occurred to me that perhaps you might have concluded from the video you linked that, because the neural accelerator in the Tesla "computer" is used for classification, AI is only used for classification. That's not true. It is also used for control. A simple example would be a fuzzy temperature controller such as the ones you can buy for a few dollars on eBay and such as are found in "smart" thermostats. These are pure control systems, and in order to give optimum response they train themselves. But I cannot train the thermostat I intend to use in my house by installing it in your house, just as Rivian cannot train its controller by looking at Tesla's training data.
I didn’t mean to suggest that there is no AI in the control part, no. I realise you could reasonably deduce that from what I wrote.

Now clearly I haven’t written any part of the Tesla system, but I do have relevant knowledge. To have one amorphous system that perceives the world and directly controls steering and brakes would be insanity, but I don’t think that’s what you’re suggesting. That would not pass a Functional Safety Assessment.

Do we agree then that the perception/classification and control aspects are separate parts or functions within the system? Feedback for the control part is whether the vehicle is where it is supposed to be and whether its movement is within certain parameters: rate of steering, g-forces, etc.
‘Where the vehicle is supposed to be’ is determined by the perception/classification part. If the vehicle is doing something it shouldn’t, as determined by the user, and that is because the perception is wrong, then there is no feedback from the system itself. Feedback comes from the user (they take control), and that’s something the fleet will ultimately learn from, but not the individual vehicle right there and then (cf. the Tesla Autonomy Day video).

So the perception part can learn from data, either real or simulated, that is played to it rather than sensed in the real world. The perception part only needs to work out what it perceives and then compare that with the reference answer for the data played to it.