ajdelange
Well-Known Member
- First Name: A. J.
- Joined: Aug 1, 2019
- Threads: 9
- Messages: 2,883
- Reaction score: 2,317
- Location: Virginia/Quebec
- Vehicles: Tesla XLR+2019, Lexus, Landcruiser, R1T
- Occupation: EE Retired
> To have one amorphous system that perceives the world and directly controls steering and brakes would be insanity, but I don’t think that’s what you’re suggesting. That would not pass a Functional Safety Assessment.

It's not that I am suggesting it. The Tesla cars are exactly that, and they are considered the safest cars on the road. I'm not sure what is meant by "amorphous" here. They are hardly that, but there are loops within loops within loops if you get down to the motor/inverter controls.
> Do we agree then that the perception/classification and control aspects are separate parts or functions within the system?

Of course, but they can't be treated independently (you may have guessed I was a systems engineer). The job of a control system is to estimate the state of the system, compute the cost of being in other than a desired state, and come up with a set of inputs to the system that move it towards the region in phase space where the costs are low (minimum). Clearly there are separate parts to this and some decoupling is possible. The Tesla FSD engine contains some Wiener filters (deconvolutions) and these need to be trained adaptively to minimize the error between estimated state and actual state. The optimum filters depend on the sensors and environment, not on how the cost functions are applied to the estimates made by these filters, so the adaptation algorithms for them can be developed independently of the servo algorithms, to some extent.

But that dependence on the sensor suite cannot be neglected in terms of the OP's question. If you are starting with a clean sheet of paper you might well start with algorithms that take sensor geometry, precision, and noise model as parameters, because you haven't any idea at this point what your sensor suite is going to be. Such algorithms ought to be able to work with any sensor suite, and so data from Tesla could be used to check that your algorithms are basically working. Thus I agree that one might use data from someone else to evolve a classifier/estimator. But one must proceed with caution. For example, the required word size depends on the condition number of the covariance matrices being inverted (Wiener filter), and if your sensor suite turns out to deliver covariance matrices that are more singular than Tesla's, you may be in for a big surprise (this happened on a program I worked on; it was not a happy time). Practically speaking, one really must train with data derived from his own system, or one known to be very like it.
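To put a toy number on the word-size caution: a minimal sketch (matrices invented for illustration, nothing like Tesla's or Rivian's actual covariances) of a linear MMSE (Wiener-style) gain. A sensor suite with nearly redundant sensors delivers a nearly singular measurement covariance, and its condition number is what drives the numerical precision you need.

```python
import numpy as np

def wiener_gain(Rxy, Ryy):
    """Solve W @ Ryy = Rxy for the estimator gain (avoids an explicit inverse)."""
    return np.linalg.solve(Ryy.T, Rxy.T).T

# Well-conditioned suite: two sensors with nearly independent noise.
Ryy_good = np.array([[2.0, 0.1], [0.1, 2.0]])
# Ill-conditioned suite: two nearly redundant sensors.
Ryy_bad = np.array([[2.0, 1.999], [1.999, 2.0]])

# Hypothetical cross-covariance between state and measurements.
Rxy = np.array([[1.0, 0.5]])

cond_good = np.linalg.cond(Ryy_good)
cond_bad = np.linalg.cond(Ryy_bad)
W = wiener_gain(Rxy, Ryy_good)
print(cond_good, cond_bad)  # the redundant suite is worse by orders of magnitude
```

The same estimator code works for either suite; only the numerics change, which is why training against someone else's (better-conditioned) sensor data can hide the problem.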
> So for the perception part it can learn from data, either real or simulated, that is played to it, rather than being sensed in the real world. The perception part only needs to work out what it perceives and then compare that with the reference answer for the data played to it.

As noted above, the data from the real world depends very much on the sensor suite. The data from 3 surgeons in the road looks very different to 2 cameras, 2 sonic sensors, and 2 lidars than it does to 4 cameras, 3 sonic sensors, and a radar.
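The "play data to it and compare with the reference answer" scheme itself is simple enough to sketch (toy classifier and toy frames, purely illustrative): replayed data comes paired with reference labels, and the perception stage is scored against them with no vehicle in the loop.

```python
def evaluate(classifier, replayed_data):
    """Score a perception stage on replayed (real or simulated) frames.

    replayed_data: list of (sensor_frame, reference_label) pairs.
    Returns the fraction of frames classified to match the reference answer.
    """
    correct = sum(1 for frame, label in replayed_data
                  if classifier(frame) == label)
    return correct / len(replayed_data)

# Toy classifier: flags an obstacle when any element of the frame exceeds 0.5.
clf = lambda frame: max(frame) > 0.5
data = [([0.9, 0.1], True), ([0.2, 0.1], False), ([0.6, 0.3], True)]
score = evaluate(clf, data)
```

The catch, as argued above, is that the frames must come from (or be transformed to look like) your own sensor suite for the score to mean anything.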
> Feedback for the control part is whether the vehicle is where it is supposed to be and that its movement is within certain parameters; rate of steering, g-forces etc.

Feedback for the control part is the difference between the estimated state vector and the minimum-cost state vector at the autopilot subsystem level. IOW, d and q aren't elements of the autopilot state vector, but velocity and position would be.
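A minimal sketch of that feedback quantity (gains and numbers hypothetical): the autopilot-level state holds position and velocity, not the motor-loop d and q currents, and the control input is driven by the difference between the minimum-cost state and the *estimated* state.

```python
import numpy as np

# Autopilot-level state vector: [position, velocity]. Motor quantities
# (d, q currents) belong to an inner loop and do not appear here.
K = np.array([0.8, 1.5])  # hypothetical proportional gains

def control_input(x_est, x_min_cost):
    """Feedback is minimum-cost state minus the estimated state --
    not the (unknowable) true state."""
    error = x_min_cost - x_est
    return float(K @ error)

# 2 m behind and 1 m/s slower than the minimum-cost state:
u = control_input(np.array([10.0, 2.0]), np.array([12.0, 3.0]))
# error = [2.0, 1.0] -> u = 0.8*2.0 + 1.5*1.0 = 3.1
```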
> ‘Where the vehicle is supposed to be’ is as determined by the perception/classification part.

Not sure whether "supposed to be" means where it ought to be or where the classification system supposes it to actually be. The latter is correct if we understand that we are talking about phase space, not Euclidean space.
> If the vehicle is doing something it shouldn’t as determined by the user and that is because the perception is wrong then there is no feedback from the system itself.

The feedback is still there. It is still the difference between a low-cost state vector and the estimate of the current state vector (which in this case does not represent the true state). If the system perceives that there is a surgeon in the road when there isn't, it thinks that the car is in a high-cost state and slams on the brakes to get out of it. That's not good, of course, but the control part of the system acted as it should given the information it got from the estimator. This happens fairly frequently with the current Tesla autopilot, BTW.
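The false-alarm behavior can be sketched with a toy cost function (numbers and threshold invented for illustration): the control law responds correctly to its inputs, so a phantom obstacle from the perception stage legitimately triggers hard braking.

```python
def cost(gap_m):
    """Toy cost: grows as the perceived gap to an obstacle shrinks."""
    return 100.0 / max(gap_m, 0.1)

def brake_command(perceived_gap_m, threshold_m=5.0):
    """Brake when the cost of the *perceived* state exceeds the cost
    at a hypothetical comfort threshold. The control law is correct
    given its inputs; any error was made upstream, in the estimator."""
    return cost(perceived_gap_m) > cost(threshold_m)

open_road = brake_command(200.0)   # nothing nearby: no braking
phantom = brake_command(2.0)       # phantom surgeon perceived at 2 m: brake
```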
> Feedback comes from the user (they take control), and that’s something the fleet will ultimately learn from, but not the individual vehicle right there and then (cf. the Tesla Autonomy Day video).

I mentioned above loops within loops. The outermost loop is the driver. If he fears that he's going to get rear-ended because the car has slammed on the brakes in response to a false alarm, he'll stomp the accelerator to override the autopilot. Now there is a loop outside the driver loop too. If the car senses that the autopilot was overridden during sudden braking, it may well decide to forward the current state estimate and sensor data to the mother ship for analysis. I don't think this data would be of much use to Rivian except in the broadest sense. Now I'll note that this is a little confusing, as "user feedback" is not the thing that a controls engineer thinks of when he sees "feedback".
> So for the perception part it can learn from data, either real or simulated, that is played to it, rather than being sensed in the real world.

Well no, not unless the data comes from a sensor suite that is identical to that of the estimator being developed, as discussed above, or can be transformed to have essentially the same characteristics (e.g. a facial recognition system using low-resolution cameras could be trained from a library of high-resolution images).
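The transform mentioned in passing (degrading a high-resolution library to look like a low-resolution camera) might be as simple as block averaging. A sketch, with the 2x2 pooling factor chosen arbitrarily for illustration:

```python
import numpy as np

def downsample_2x2(img):
    """Degrade a high-res grayscale image to half resolution by 2x2
    average pooling, so library data matches a lower-res deployed sensor."""
    h, w = img.shape
    return img[:h // 2 * 2, :w // 2 * 2].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

hi_res = np.arange(16, dtype=float).reshape(4, 4)  # stand-in library image
lo_res = downsample_2x2(hi_res)
print(lo_res.shape)  # (2, 2)
```

A real transform would also have to match the target sensor's noise model and optics, not just its resolution, which is the hard part of the caveat above.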
> The perception part only needs to work out what it perceives and then compare that with the reference answer for the data played to it.

But it can't, for example, be trained to recognize dog breeds based on weight from a library of photographs of dogs.
In an attempt to wrap it up: your ideas can generally be responded to with phrases like "well, maybe under some conditions", "well, OK if", "possibly, if you first" and so on, but clearly the simplest and most robust path for Rivian is to collect data from their own sensor suite and use it to develop their state estimators, to collect vehicle dynamics data from their development models, and to merge these into one integrated system that perceives the world and directly controls steering, brakes, and motors. Clearly they will have had plenty of time to do this by the time they start shipping, and they are probably out there collecting data as I write this.