Donald Stanfield

Well-Known Member
First Name
Donald
Joined
Jul 31, 2022
Threads
47
Messages
5,121
Reaction score
10,149
Location
USA
Vehicles
2025 R1S Tri Ascend, 2024 i4 M50
Occupation
Stuff and things
Ok. Here's some data comparing AVs versus humans. None of it is perfect, all of it has holes, and it'll be a while before we see a study covering all roads under all conditions, but from an injury and fatality standpoint (which is how I think most people would define "dangerous"), the preliminary data seem to point in the right direction.

It is worse in some scenarios and better in others, depending on who you're comparing it to and under what conditions. ADAS tech is also fragmented, so it's hard to draw accurate comparisons. The data is nuanced and takes time to parse through. Keep in mind this is older data, and new data is being gathered rapidly as we speak. The field is progressing quickly, and we'll be on the losing end of this argument at some point if we keep saying "self-driving cars suck" once it becomes glaringly obvious which is safer overall. As alluded to earlier, it will take a while for public sentiment to come around.

It seems you took what I wrote personally; that was definitely not intended, and I never called anyone any names. Please go back and read what I wrote with regard to the historical analogy, which also isn't perfect and was never meant to be apples to apples. Again, the real comparison we all want is computer vs. human driver. Time will tell. But given the trends, I'm willing to bank on AVs eventually proving safer overall from an injury and fatality standpoint.

https://deepblue.lib.umich.edu/bitstream/handle/2027.42/178179/UMTRI-2023-18.pdf
Sure, they will eventually be better than the average human, but they are a long way off right now. Self-driving shouldn't be allowed on the road in its current state outside of curated special cases like Waymo. There's nothing personal involved; I think your data is dishonest, and it ignores the serious and deadly flaws of AVs at their current level of safety.

It's the same sort of thing Elon does when he lies about his FSD system.
 

doozenberg

Well-Known Member
Joined
Nov 1, 2021
Threads
7
Messages
106
Reaction score
73
Location
Socal
Website
rivian.com
Vehicles
‘18 Model 3, ‘22 Model Y, '25 R1S Standard
Occupation
Physician
Sure, they will eventually be better than the average human, but they are a long way off right now. Self-driving shouldn't be allowed on the road in its current state outside of curated special cases like Waymo. There's nothing personal involved; I think your data is dishonest, and it ignores the serious and deadly flaws of AVs at their current level of safety.

It's the same sort of thing Elon does when he lies about his FSD system.
Ok. I guess we are talking about two different timelines then. You are talking about now. I am talking about the not-too-distant future.

Aaand… now the clear anti-Elon bias comes out definitively, and I never even mentioned him. The argument is over. Thanks for making your biases apparent.
 

Donald Stanfield

Well-Known Member
First Name
Donald
Joined
Jul 31, 2022
Threads
47
Messages
5,121
Reaction score
10,149
Location
USA
Vehicles
2025 R1S Tri Ascend, 2024 i4 M50
Occupation
Stuff and things
Ok. I guess we are talking about two different timelines then. You are talking about now. I am talking about the not-too-distant future.

Aaand… now the clear anti-Elon bias comes out definitively, and I never even mentioned him. The argument is over. Thanks for making your biases apparent.
LOL, Elon has repeatedly lied about the usability of FSD; this isn't debatable. Why are you talking about stuff that hasn't happened yet as though it has? Why are you comparing current autonomous vehicles with automatic elevators if you're admitting that the state of those vehicles today isn't on par? It seems you got called out and are now backpedaling, trying to use my mention of Elon and his grandiose claims as an exit under the guise of bias.

The only one exhibiting bias here is you: bias about the current state of self-driving technology. Exaggerations don't work on people who know the current state of the technology. FSD has no place on public roads. Under threat of legal penalties, we don't allow impaired human drivers to operate a motor vehicle, yet we allow AI to do an even worse job. Elon isn't the only offender here, but he is the worst one, with his fantastical claims about his driving system.

The argument was over when you posted loaded data to support your claims.
 

doozenberg

Well-Known Member
Joined
Nov 1, 2021
Threads
7
Messages
106
Reaction score
73
Location
Socal
Website
rivian.com
Vehicles
‘18 Model 3, ‘22 Model Y, '25 R1S Standard
Occupation
Physician
LOL, Elon has repeatedly lied about the usability of FSD; this isn't debatable. Why are you talking about stuff that hasn't happened yet as though it has? Why are you comparing current autonomous vehicles with automatic elevators if you're admitting that the state of those vehicles today isn't on par? It seems you got called out and are now backpedaling, trying to use my mention of Elon and his grandiose claims as an exit under the guise of bias.

The only one exhibiting bias here is you: bias about the current state of self-driving technology. Exaggerations don't work on people who know the current state of the technology. FSD has no place on public roads. Under threat of legal penalties, we don't allow impaired human drivers to operate a motor vehicle, yet we allow AI to do an even worse job. Elon isn't the only offender here, but he is the worst one, with his fantastical claims about his driving system.

The argument was over when you posted loaded data to support your claims.
Yes, loaded data from labs at the University of Michigan and the University of Central Florida, the latter of which was published in Nature. Where is the unbiased published data countering these findings?

Objectivity has left the chat.

IMG_1500.gif
 

Donald Stanfield

Well-Known Member
First Name
Donald
Joined
Jul 31, 2022
Threads
47
Messages
5,121
Reaction score
10,149
Location
USA
Vehicles
2025 R1S Tri Ascend, 2024 i4 M50
Occupation
Stuff and things
Yes, loaded data from labs at the University of Michigan and the University of Central Florida, the latter of which was published in Nature. Where is the unbiased published data countering these findings?

Objectivity has left the chat.

IMG_1500.gif
Obviously, I wasn't talking about your studies, which didn't at all reinforce your argument. I was referring to your data asserting Waymo was safer than human drivers on a per-mile basis. That was loaded data, and I already explained how. The argument was over once you tried to pass that off as proof of your assertion that AVs are safer than human drivers in their current state.

Now you've tried to pivot to saying you meant some time in the future but not now, after insinuating I was a Luddite similar to those who doubted automatic elevators' safety, all the while refusing to post data to support a similar safety profile in the comparison you made.

Keep moving those goalposts, accusing me of bias for making a relevant statement about Elon Musk and his exaggerated claims about full self-driving technology that are currently under federal investigation, and posting nonsensical Yoda memes.

Are you seven years old?
 

Donald Stanfield

Well-Known Member
First Name
Donald
Joined
Jul 31, 2022
Threads
47
Messages
5,121
Reaction score
10,149
Location
USA
Vehicles
2025 R1S Tri Ascend, 2024 i4 M50
Occupation
Stuff and things
Yes, loaded data from labs at the University of Michigan and the University of Central Florida, the latter of which was published in Nature. Where is the unbiased published data countering these findings?

Objectivity has left the chat.

IMG_1500.gif
Also, I'm not the person making the claim (that AVs are safer than human drivers), so I'm not the one who needs to source their argument. The default position is that human drivers are safer until proven otherwise, as that is the current standard. If you want to replace that standard, YOU need to prove it, and you haven't done so. You've posted loaded data on Waymo and irrelevant data from a couple of university studies that do nothing to support your original claim.

I'll make this real simple for you: are self-driving vehicles safer than human drivers in their current state today, Sunday, November 10th, 2024, yes or no? If yes, please source your argument; if no, admit your automatic elevator comparison was faulty.
 

norivian

Well-Known Member
First Name
norivianyet
Joined
Dec 28, 2021
Threads
10
Messages
239
Reaction score
112
Location
Cali
Vehicles
R1T in 2023 🤞
A capability similar to Tesla's Autopilot would be very nice to have ASAP. An FSD-like capability would be nice eventually.
 

Kaiju

Active Member
Joined
Aug 18, 2024
Threads
0
Messages
40
Reaction score
70
Location
Texas
Vehicles
R1T
So just for the record, a Waymo has a much bigger (and better) suite of sensors than a typical passenger car, whether or not that car was meant to drive itself, and Waymos tend to be constrained to areas on which their software has been trained in excruciating detail. Cruise still got banned from San Francisco regardless for the woman-dragging incident. The major flaw in the Waymo data is that they haven't driven enough miles in enough places to get a gauge for serious or fatal accidents, which happen much less frequently than fender benders. It also operates in a comparatively tiny box in low-speed areas known for not having very much inclement weather, which was the entire point of choosing a place like Phoenix. Maybe it is better than humans in its closed-course, fair-weather environs, though I'm still not sure it's apples to apples, since the human drivers it's being compared to operate in a much bigger sandbox.
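To put a rough number on the "not enough miles" problem, here's a back-of-envelope sketch. The human-driver fatality rate below is the commonly cited US average of roughly 1.3 per 100 million vehicle miles, and the AV fleet mileage is an illustrative placeholder, not Waymo's actual total:

```python
# Rare events need enormous mileage before a fatality comparison means anything.
HUMAN_FATALS_PER_MILE = 1.3 / 100_000_000   # approximate US average per vehicle mile
av_fleet_miles = 30_000_000                 # illustrative assumption, not a real figure

expected_fatals_at_human_level = HUMAN_FATALS_PER_MILE * av_fleet_miles
print(f"Expected fatal crashes at human-level safety: {expected_fatals_at_human_level:.2f}")
# ~0.4 expected events, so observing zero fatalities over that mileage can't
# statistically separate "safer than humans" from "about the same as humans".
```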

That aside, you won't see that level of self-driving sophistication in any 2024-era consumer vehicle, so it's definitely a stretch to say the Waymo data reflects on the efficacy of something like Driver+ or FSD. Even so, none of the Waymo data covers the conditions that usually kill people. No freeways at all. How is Waymo on black ice? Snow? How does it respond to potholes and road debris at highway speed? No idea. An actual self-driving vehicle doesn't get to just disengage itself when the going gets tough and dump the problem back on the human who's supposed to be paying attention, so what does it do when it runs into a problem? At highway speeds, reactions have to be a lot faster, and Waymo's default behavior of stopping or pulling over when it encounters a problem might be incredibly dangerous, because that's exactly the sort of thing that causes pileups.
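For a sense of why the low-speed box matters, here's a quick sketch of how braking distance scales with speed. The deceleration figure is an illustrative assumption (roughly 0.8 g on dry pavement), not a measured value for any vehicle:

```python
# Braking distance grows with the square of speed, which is why "just stop or
# pull over" works far better on city streets than on a freeway.
G = 9.81                       # m/s^2
DECEL = 0.8 * G                # assumed hard-braking deceleration on dry pavement

def braking_distance_ft(speed_mph: float) -> float:
    speed_ms = speed_mph * 0.44704
    return (speed_ms ** 2) / (2 * DECEL) * 3.28084

for mph in (25, 45, 70):
    print(f"{mph} mph -> ~{braking_distance_ft(mph):.0f} ft to stop, before any reaction time")
# Roughly 26 ft at 25 mph versus about 200 ft at 70 mph.
```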

It's the reason Teslas fly into the back of emergency vehicles. It's not that they don't detect them.

It's cool to marvel at the things the technology can do, but if you know some basics of how 202X-era self-driving software works, it's terrifying. The demon in this case is what exactly the software does about false positives. You'd like to think that if the car detects an obstruction it would stop, but that's not how any of this works, because on a highway coming to a stop is not a fail-safe condition. A car that comes to an abrupt stop on a highway for no reason causes accidents, so it shouldn't do that. The problem is that self-driving software picks up all sorts of stuff it flags as obstructions that aren't real. Road and exit signs are a big one. Obviously the car can't come to a panic stop in the middle of the freeway every time it passes a sign, or every fourth time it passes a sign. Or that one weird sign that got hit by a semi six years ago and didn't get fixed. So... the software just gets programmed to ignore that stuff, because a highway isn't supposed to be a minefield of fixed obstructions, and maybe 9,999 out of 10,000 times that's correct. That's not to say that false positives only happen 1 out of 10,000 times. They happen every time you engage any such system on a highway, and the system correctly chooses to ignore them the vast majority of the time. That's simply because, most of the time, there isn't actually something blocking the road that doesn't look like the back of another car.

The problem is that approach is not always correct, and the software is biased towards not stopping, because it sees vastly more false positives than real ones. So it will sometimes completely ignore unfamiliar things like a jackknifed semi-truck, a K-rail put up by an overnight road construction crew, or a weird flashy thing that's not centered in a lane. Or a strange reflection on a wet road at twilight. Or there will be a situation in a blizzard where the LIDAR and RADAR disagree on whether there's an obstruction and it has to pick which one is correct. In these cases it's not always wrong, either. It's just that every now and then, the software makes a bad decision, deems an actual obstruction isn't real, and proceeds straight into it at full speed. This is why a false sense of security is particularly dangerous. This shit won't happen every time you use it, but 1 time in 1,000 is definitely enough to kill anyone who commutes daily. For most people, so is 1 in 10,000.
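To make the "it adds up for a daily commuter" point concrete, here's a rough sketch. The per-decision failure rates are the round numbers from above, and the number of hard calls per commute is a made-up assumption, not a measured figure for any real system:

```python
# Cumulative odds of at least one bad "ignore the obstruction" call over a
# year of commuting, for a couple of made-up per-decision failure rates.
events_per_year = 2 * 250   # assume one hard obstruction-or-not call per commute leg

for per_event_failure in (1 / 1_000, 1 / 10_000):
    p_bad_year = 1 - (1 - per_event_failure) ** events_per_year
    print(f"failure rate 1 in {int(round(1 / per_event_failure)):,}: "
          f"~{p_bad_year:.0%} chance of at least one bad call per year")
# Roughly 39% at 1 in 1,000 and about 5% at 1 in 10,000, which is why a system
# that is right the vast majority of the time can still be unacceptable.
```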

For that reason, be very wary of self-driving software. I'm not going to say your life is in danger, but your life very much is at stake every time the software makes a decision, and it's only good at making those decisions within a very boxed-in set of parameters. You won't know when it's operating outside of its box, because it doesn't ping you every time it sees a weird thing it doesn't understand. If there's something out of the ordinary going on, there is always a chance it will make an all-or-nothing decision that could kill you. I suppose in this way it's not much different than a drunk or distracted driver, in that it won't take any evasive action at all. Then it becomes a question of whether you're paying enough attention to stop it when there's nothing going on that would alert you to anything being wrong.
 
Last edited:

Donald Stanfield

Well-Known Member
First Name
Donald
Joined
Jul 31, 2022
Threads
47
Messages
5,121
Reaction score
10,149
Location
USA
Vehicles
2025 R1S Tri Ascend, 2024 i4 M50
Occupation
Stuff and things
So just for the record, a Waymo has a much bigger (and better) suite of sensors than a typical passenger car, whether or not that car was meant to drive itself, and Waymos tend to be constrained to areas on which their software has been trained in excruciating detail. Cruise still got banned from San Francisco regardless for the woman-dragging incident. The major flaw in the Waymo data is that they haven't driven enough miles in enough places to get a gauge for serious or fatal accidents, which happen much less frequently than fender benders. It also operates in a comparatively tiny box in low-speed areas known for not having very much inclement weather, which was the entire point of choosing a place like Phoenix. Maybe it is better than humans in its closed-course, fair-weather environs, though I'm still not sure it's apples to apples, since the human drivers it's being compared to operate in a much bigger sandbox.

That aside, you won't see that level of self-driving sophistication in any 2024-era consumer vehicle, so it's definitely a stretch to say the Waymo data reflects on the efficacy of something like Driver+ or FSD. Even so, none of the Waymo data covers the conditions that usually kill people. No freeways at all. How is Waymo on black ice? Snow? How does it respond to potholes and road debris at highway speed? No idea. An actual self-driving vehicle doesn't get to just disengage itself when the going gets tough and dump the problem back on the human who's supposed to be paying attention, so what does it do when it runs into a problem? At highway speeds, reactions have to be a lot faster, and Waymo's default behavior of stopping or pulling over when it encounters a problem might be incredibly dangerous, because that's exactly the sort of thing that causes pileups.

It's the reason Teslas fly into the back of emergency vehicles. It's not that they don't detect them.

It's cool to marvel at the things the technology can do, but if you know some basics of how 202X-era self-driving software works, it's terrifying. The demon in this case is what exactly the software does about false positives. You'd like to think that if the car detects an obstruction it would stop, but that's not how any of this works, because on a highway coming to a stop is not a fail-safe condition. A car that comes to an abrupt stop on a highway for no reason causes accidents, so it shouldn't do that. The problem is that self-driving software picks up all sorts of stuff it flags as obstructions that aren't real. Road and exit signs are a big one. Obviously the car can't come to a panic stop in the middle of the freeway every time it passes a sign, or every fourth time it passes a sign. Or that one weird sign that got hit by a semi six years ago and didn't get fixed. So... the software just gets programmed to ignore that stuff, because a highway isn't supposed to be a minefield of fixed obstructions, and maybe 9,999 out of 10,000 times that's correct. That's not to say that false positives only happen 1 out of 10,000 times. They happen every time you engage any such system on a highway, and the system correctly chooses to ignore them the vast majority of the time. That's simply because, most of the time, there isn't actually something blocking the road that doesn't look like the back of another car.

The problem is that approach is not always correct, and the software is biased towards not stopping, because it sees vastly more false positives than real ones. So it will sometimes completely ignore unfamiliar things like a jackknifed semi-truck, a K-rail put up by an overnight road construction crew, or a weird flashy thing that's not centered in a lane. Or a strange reflection on a wet road at twilight. Or there will be a situation in a blizzard where the LIDAR and RADAR disagree on whether there's an obstruction and it has to pick which one is correct. In these cases it's not always wrong, either. It's just that every now and then, the software makes a bad decision, deems an actual obstruction isn't real, and proceeds straight into it at full speed. This is why a false sense of security is particularly dangerous. This shit won't happen every time you use it, but 1 time in 1,000 is definitely enough to kill anyone who commutes daily. For most people, so is 1 in 10,000.

For that reason, be very wary of self-driving software. I'm not going to say your life is in danger, but your life very much is at stake every time the software makes a decision, and it's only good at making those decisions within a very boxed-in set of parameters. You won't know when it's operating outside of its box, because it doesn't ping you every time it sees a weird thing it doesn't understand. If there's something out of the ordinary going on, there is always a chance it will make an all-or-nothing decision that could kill you. I suppose in this way it's not much different than a drunk or distracted driver, in that it won't take any evasive action at all. Then it becomes a question of whether you're paying enough attention to stop it when there's nothing going on that would alert you to anything being wrong.
Well put. The lack of any notification that something is wrong means that self-driving should make you more nervous than simply not using it. The biggest issue is, as you said, that the software works 999 out of 1,000 times, so it lulls you into a false sense of security.

Using self-driving software is like getting into the passenger seat with someone intoxicated who swears they are "totally fine to drive bro" and appears fine right up until the point they kill someone.

We have seen the stories of people using self-driving and paying no attention to the road. This is why I prefer the level of assistance in the Rivian: it isn't very good, so it does some of the work keeping you in the lane, but not enough to make you stop paying attention.
 

COdogman

Well-Known Member
First Name
Brian
Joined
Jan 21, 2022
Threads
32
Messages
9,427
Reaction score
26,664
Location
Colorado
Vehicles
2023 R1T
Occupation
Dog Wrangler
Clubs
 
It is so funny and predictable that the only people on earth who can't seem to admit autonomous driving tech is not ready for prime time yet (meaning safe) are hardcore Tesla stans. They are totally fine with all other drivers on the road being volunteered as guinea pigs during this experiment. And if you question it at all, then you are accused of wanting to hold back progress 🙄

The fact that Tesla doesn't release the full, raw data gathered on the topic says a lot as far as I'm concerned. They know exactly what it would show, and that would include far more errors and accidents than the third-party tracking shows. If it is as safe as they claim, they should be showing that data to anyone and everyone.
 

Zoidz

Well-Known Member
First Name
Gil
Joined
Feb 28, 2021
Threads
145
Messages
3,951
Reaction score
8,600
Location
PA
Vehicles
23 R1S Adv, Avalanche, BMWs-X3,330cic,K1200RS bike
Occupation
Engineer
...
I suppose in this way it's not much different than a drunk or distracted driver, in that it won't take any evasive action at all. Then it becomes a question of whether you're paying enough attention to stop it when there's nothing going on that would alert you to anything being wrong.
And therein lies the big problem: drivers becoming complacent with FSD. It works "good enough" 999 times, so the driver is not paying attention when the 1-in-1,000 situation occurs.

In a sense, Tesla has become complacent about FSD development, prioritizing a lower price over safety and quality:

"In 2021, we began our transition to Tesla Vision by removing radar from Model 3 and Model Y, followed by Model S and Model X in 2022. Today, in most regions around the globe, these vehicles now rely on Tesla Vision, our camera-based Autopilot system.

In 2022 we took the next step in Tesla Vision by removing ultrasonic sensors (USS) from Model 3 and Model Y for most global markets, followed by all Model S and Model X in 2023.

With today’s software, this approach gives Autopilot high-definition spatial positioning, longer range visibility and the ability to identify and differentiate between objects."


Longer range visibility as compared to radar???

And it can't even detect a freaking deer directly in front of it at night.
 

usofrob

Well-Known Member
First Name
Robert
Joined
Apr 9, 2022
Threads
5
Messages
640
Reaction score
566
Location
Michigan
Vehicles
Tesla 3, lotus Elise
Occupation
MBSE
I honestly miss just regular old cruise control. I have a Model 3, and I rarely use the cruise control now because I just don't trust the software; even when turning on TACC, it now starts to add FSD logic to things. That's the exact thing I'm trying to avoid, so that's why I can't use it anymore. Oftentimes when I'm leaving my subdivision, there are cars parked on the side of the road, and while I'm steering around them with plenty of space, it'll beep and warn me I'm about to hit something. It'll slow down on a road with no cars for seemingly no reason. It'll slow for some bends in the road but not others.

But I really think the risk is more related to the time the responsible human needs to recognize and fix a problem. Cruise control, Traffic-Aware Cruise Control, and FSD are all there to help the driver not have to pay attention to as many things. But when something goes wrong, the driver needs time to recognize the failure and respond to it before it becomes a problem.

For simple CC, you can easily anticipate that, because you'll simply be going too fast for a certain condition. For TACC, you may have to look further down the road, or watch whether you're getting unexpectedly close to something. That still seems very possible to do safely. But with FSD, it's doing all the driving, and you never really know what it's about to do. So if it starts changing lanes into the shoulder, you don't know if it's avoiding a crash into the car in front of you or if it thinks it can pass on the shoulder. The driver then needs to make a full assessment of all the ways the FSD could be failing, and because it was doing more of the work, the driver may not be paying as much attention.
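To put rough numbers on that takeover problem, here's a simple sketch; the recognize-and-react times are assumptions, not measured driver-response data:

```python
# Distance covered while the driver notices the system is failing and takes over.
SPEED_MPH = 70
speed_fps = SPEED_MPH * 5280 / 3600          # ~103 ft/s

for takeover_s in (1.0, 2.0, 3.0):           # assumed recognize-and-react times
    print(f"{takeover_s:.0f} s at {SPEED_MPH} mph -> ~{speed_fps * takeover_s:.0f} ft "
          f"before the driver does anything")
# Even a quick 2-second takeover eats roughly 200 ft of road, which is why
# auditing what FSD might be doing is much harder than supervising plain CC.
```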

I had a 2015 Model S before my Model 3, and it also used a Mobileye-based TACC, like Rivian. By the time I sold it, it was the best implementation of "FSD" that I've seen. It had gotten very good at following the roads around me and at telling car from not-car, and it didn't take exits by accident. It was a decently useful tool. Since getting AP2 with the Model 3, it's gained more features but less reliability. My hope is that Rivian sits on reliable TACC with automatic lane change for a while. It was a good balance of helpfulness, comfort, and risk.

As far as testing beta FSD on public roads goes: initially I was for it, and I had enabled learning from my vehicle. But it does seem to be getting riskier as it gets more capable. Other car manufacturers seem to be able to make driver-assistance systems without getting untrained people to test a potentially deadly system on public roads.

I've heard Elon say that self-driving cars will eventually save a lot of lives, so any risks we take now are beneficial in the long run. But I think the math on that depends significantly on how long it'll take to get to full self-driving, and we've seen that's taking a lot more time than he thinks. I highly suspect it'll take something like AGI (artificial general intelligence) to handle general surroundings: for example, when a police officer says to drive on the grass to go around an accident, or when a sign says the road is closed and to take a detour on X road. So spending more time on this now, when we don't have the technical capability to solve the problem, seems more like putting lives at risk for no reason.
 

docwhiz

Well-Known Member
First Name
Mark
Joined
May 22, 2023
Threads
5
Messages
526
Reaction score
464
Location
Lake Tahoe, California
Vehicles
Tesla Model S LR (2022), Land Rover Discovery 2
Occupation
Retired
I honestly miss just regular old cruise control. I have a Model 3, and I rarely use the cruise control now because I just don't trust the software; even when turning on TACC, it now starts to add FSD logic to things. That's the exact thing I'm trying to avoid, so that's why I can't use it anymore. Oftentimes when I'm leaving my subdivision, there are cars parked on the side of the road, and while I'm steering around them with plenty of space, it'll beep and warn me I'm about to hit something. It'll slow down on a road with no cars for seemingly no reason. It'll slow for some bends in the road but not others.

But I really think the risk is more related to the time the responsible human needs to recognize and fix a problem. Cruise control, Traffic-Aware Cruise Control, and FSD are all there to help the driver not have to pay attention to as many things. But when something goes wrong, the driver needs time to recognize the failure and respond to it before it becomes a problem.

For simple CC, you can easily anticipate that, because you'll simply be going too fast for a certain condition. For TACC, you may have to look further down the road, or watch whether you're getting unexpectedly close to something. That still seems very possible to do safely. But with FSD, it's doing all the driving, and you never really know what it's about to do. So if it starts changing lanes into the shoulder, you don't know if it's avoiding a crash into the car in front of you or if it thinks it can pass on the shoulder. The driver then needs to make a full assessment of all the ways the FSD could be failing, and because it was doing more of the work, the driver may not be paying as much attention.

I had a 2015 Model S before my Model 3, and it also used a Mobileye-based TACC, like Rivian. By the time I sold it, it was the best implementation of "FSD" that I've seen. It had gotten very good at following the roads around me and at telling car from not-car, and it didn't take exits by accident. It was a decently useful tool. Since getting AP2 with the Model 3, it's gained more features but less reliability. My hope is that Rivian sits on reliable TACC with automatic lane change for a while. It was a good balance of helpfulness, comfort, and risk.

As far as testing beta FSD on public roads goes: initially I was for it, and I had enabled learning from my vehicle. But it does seem to be getting riskier as it gets more capable. Other car manufacturers seem to be able to make driver-assistance systems without getting untrained people to test a potentially deadly system on public roads.

I've heard Elon say that self-driving cars will eventually save a lot of lives, so any risks we take now are beneficial in the long run. But I think the math on that depends significantly on how long it'll take to get to full self-driving, and we've seen that's taking a lot more time than he thinks. I highly suspect it'll take something like AGI (artificial general intelligence) to handle general surroundings: for example, when a police officer says to drive on the grass to go around an accident, or when a sign says the road is closed and to take a detour on X road. So spending more time on this now, when we don't have the technical capability to solve the problem, seems more like putting lives at risk for no reason.
I agree completely.
I had a 2015 Model S, and the Mobileye TACC was great. No false warnings. No phantom braking.
(And the auto windshield wipers and auto high beams worked perfectly.)
I replaced it with a 2022 Model S and the TACC is dangerously unusable for all of the reasons you cite. (And the auto windshield wipers and auto high beams are problematic.)
Definitely a regression.
I've tried the "demo" of FSD in my Model S and have had enough "problems" with it that I just don't trust it. (Even when it "works" it's frustratingly hesitant and slow.)
I really don't have much confidence that Tesla will ever get FSD right.
 

usofrob

Well-Known Member
First Name
Robert
Joined
Apr 9, 2022
Threads
5
Messages
640
Reaction score
566
Location
Michigan
Vehicles
Tesla 3, lotus Elise
Occupation
MBSE
(And the auto windshield wipers and auto high beams are problematic.)
Hah, yeah, they also saved some money and made those things worse as well. I only got my Gen 2 Rivian about a month ago, but I really like the auto high beams, with sections that turn off just for oncoming traffic. My Model 3's automatic high beams basically don't even turn on in my subdivision. But with the Rivian, I nearly missed a turn because I was watching the different sections of the high beams turn on and off as vehicles passed. It seems to work very well, and it actually helps with visibility. I hear the newer Teslas will enable that feature soon too, but it probably won't work as well. :-/