Ingo B

Well-Known Member
Joined
Apr 13, 2023
Threads
4
Messages
266
Reaction score
463
Location
CA
Vehicles
2023 R1T
Occupation
Word Slinger
Really cool stuff, and the summary is very helpful. That said, I'm likely in the minority, but I'm not a fan of ceding all control to a set of sensors and cameras. Call me old school, but those Terminator/Matrix films got me spooked. YouTube search "Adam Carolla Carmageddon" - it's a comedic bit, but too rooted in reality to be 100% funny.

But again, I'm a fan of progress, so it's an interesting watch - from the sidelines, with both hands on the steering wheel and head on a swivel.
 

CBRacerX

Well-Known Member
Joined
Jun 30, 2024
Threads
2
Messages
209
Reaction score
169
Location
USA
Vehicles
No EV Yet
Thanks for taking a shot at putting the information together, OP. I bought a Gen 1 last week with no expectation of FSD, so I'll try to stay happy with ACC and the "mapped roads" assist.
 

phaduman

Well-Known Member
Joined
May 7, 2024
Threads
8
Messages
68
Reaction score
93
Location
San Jose, CA
Vehicles
R1T-G2,Tesla M3&MY, Motorcycles
Occupation
Techie
Very helpful thread - I am currently looking at buying a truck, and nothing compares to the R1T for what I want. Great product. Hopefully I will make a decision soon (used Gen1 vs Gen2).

Staying on the topic of this thread, I would like to share some experience.
I bought the Tesla Model S refresh in Sept 2016 (AP1.0 h/w with the Mobileye solution for Autopilot, used from 2012-2016) - it relied on only 1 forward-facing camera for driving decisions. It was very good on relatively straight freeways and would fail when roads had sharper turns. It very much looks like Gen1 ended up with the same/similar results.

The Model S from Oct 2016 came with the 8-camera system, Tesla's own AP2.0 h/w suite, and a redundant Mobileye chipset. Tesla tried to negotiate with Mobileye to run its own Autopilot s/w in parallel, shadowing Mobileye, but Mobileye pulled back and stopped supplying chips to Tesla. Tesla scrambled to get its own Autopilot working (including labeling video to recognize objects - the whole enchilada...), and it took them 1.5+ years to get basic object detection and reach the same level of Autopilot features that existed with the Mobileye-based AP1.0 h/w. Mobileye was dead sure Tesla would fail and come back to them. Elon and his team in a way did 5-8 years of Mobileye's work in 1.5 to 2 years. Remarkable. But basically from Oct 2016 to sometime around Oct 2017 or early 2018, the AP2.0 h/w with the 8-camera system was terrible, even compared to the 2013-2016 AP1.0 h/w with Mobileye. But then it reached parity and started shining.

Tesla went on to upgrade the compute from AP2.0/2.5 to AP3.0 (Tesla's own chips) and made the upgrade free for those who had paid for FSD in full, and $1.5K or $2K (I forget which) for those who hadn't. They sent mobile Tesla service engineers to replace the system at home.

This system (8 cameras + AP3.0) has shipped on a million+ Model S, X, 3 and Y vehicles. During that time, we bought a Model 3 (2018) and a Model Y (2022), and I could see all the enhancements made over the years. We have always purchased FSD on all our cars (except the early Model S, which wasn't capable; we sold it in 2022). I am a techie, and I absolutely loved every enhancement they made - the two-steps-forward, one-step-backward releases - all of it. The 3D rendition of vehicles went from highly inaccurate and jumpy to highly accurate and smooth - multi-lane and opposite-lane views (from the side cameras), accurate speed and shape of vehicles going in both directions, cross-traffic views at intersections, etc. - just remarkable. As a techie, I am truly lucky to be experiencing all that. Even in 2024, while the AP3.0 systems, as is now clear to all, cannot be truly autonomous, enhancements are still coming to make them better and safer. BTW, the driver-facing (in-cabin) camera has also been working now - monitoring the face with and without sunglasses, tracking the face/eyes, and detecting whether the driver is holding a phone, looking away, looking at the screen, etc. Quite remarkable - given that our 2018 Model 3 is now at 90K miles and still improving. It's not Elon I am giving credit to, but all the talented engineers who are singularly focused on innovating daily!


I think (and I could be wrong on the dates a bit) the camera system was enhanced (5 megapixel vs. 1.5 megapixel) and AP4.0 h/w was put on Tesla vehicles throughout late 2023+ (or early 2024). The ultrasonic sensors were removed (Elon challenged his team: if human eyes don't need sensors, why would a car?), and the forward-facing radar was removed sometime in 2022... the reasoning: "there needs to be one decision maker - camera and radar can't argue about whether the thing in front is a stationary object (an ambulance) or a shadow or...". Tesla is currently looking to re-add a new 4D radar system when they think radar resolution has improved enough. We will see.

With the higher-megapixel 8-camera system and AP4.0 (now called AI4.0), all of Tesla's enhancement focus is on these systems - which have been shipping in all Tesla cars since 2024...

So Gen1, to me, from reading other articles and this forum, is like the Tesla Model S with the AP1.0 system (2012-2016). I do not think Rivian has the means to develop vision+radar-based autonomy while trying to get R2 and R3 out in high volume and helping the other sister companies of VW...

Might as well make the decision to partner - and without lidar in the cars, it is best to go with the leader in vision-based AI systems, which is Tesla. Yes, RJ will have to get past the big Elon issue - and I really hope he understands why: it is for the greater good of humanity (better, safer cars on the road) and for a better future for Rivian. I love what Rivian has done so far - I am an investor (in Rivian and in Tesla) and I really want Rivian to succeed.

(My dates and some data may not be fully accurate - but all of the above I have observed myself or gathered from reading material over time.)
 

Eric9610

Well-Known Member
Joined
Nov 27, 2023
Threads
9
Messages
471
Reaction score
480
Location
So. Cal.
Vehicles
R1T QM, Tesla MY, MB E350, MB S550
phaduman said:
Very helpful thread - as I am currently looking at buying a truck and nothing compares better than R1T for what I want. ...

The reason Tesla removed LiDAR is not that they need it to catch up as a tech, but rather the cost of the sensors. This is why you see true premium brands and cars having no issue adding them in. MB and GM have the best FSD-type systems on the market and no one talks about it. In fact, BMW even had an M3 that could set lap records without human input. They even implemented a system where it could correct a human's driving on the track without the human ever knowing. Tesla's removal of hardware is cost-related, not function-related.
 

Whale Blubber

Well-Known Member
Joined
May 2, 2023
Threads
4
Messages
381
Reaction score
538
Location
Austin
Vehicles
2019 BMW X3 3.0si, 2021 Toyota RAV4 XLE
Always seemed to me that Elon's idea that if human vision is adequate for human drivers, FSD shouldn't need other sensor systems, was pretty faulty logic. But I'm no engineer, just a regular human reasoning with human brain parts.
 

phaduman

Well-Known Member
Joined
May 7, 2024
Threads
8
Messages
68
Reaction score
93
Location
San Jose, CA
Vehicles
R1T-G2,Tesla M3&MY, Motorcycles
Occupation
Techie
Moon shot ("only vision is enough")... not sure we can say it was faulty, at least yet.

And also, if autonomous driving is to be ubiquitous - i.e., even a sub-$30K car must have it to make the road safe for all - then the logic of removing cost is a good one. While lidar benefits the drivers who can afford a $100K car, they can still be hit by drunk drivers in other cars without those features.

So far, I am quite impressed with the Autopilot evolution in my 2018 Model 3 - from behaving like a total toddler on the road back in 2020 to now a teenager: cautious most of the time and missing some driving experience. I take over because the car is a bit too "cautious" at certain times. I am hopeful that vision-only will succeed and then become cheap enough to reach the default Honda Civic price range so everyone benefits.
 

mpshizzle

Well-Known Member
Joined
Jun 16, 2024
Threads
44
Messages
800
Reaction score
1,052
Location
Utah
Vehicles
2025 R1S Dual Max (Baymax)
phaduman said:
Very helpful thread - as I am currently looking at buying a truck and nothing compares better than R1T for what I want. ...

If you are a techie who owned a Tesla, the decision is easy: 100% Gen 2. Both will be a step backwards in technology from Tesla. Gen 1 is hampered by its Mobileye system and will likely have few, if any, autonomy updates.

Gen 2 has a MUCH higher ceiling. I've only had my Gen 2 for about 2.5 months, and it's also had some substantial improvements to autonomy in that time. Lane centering is far more stable as of the last update, and lane changes are much more competent too. As far as I can tell it's still running on the Mobileye system. Who knows when they'll switch to the Rivian system, but there's a much higher ceiling at least.

Also - phantom braking is (mostly) not a thing in these vehicles. (Amazing what a good radar will do.) That was a nearly daily occurrence in my HW3 Model 3; it's happened probably 4 times to me over the course of 4,000 miles and 2.5 months.
 

godfodder0901

Well-Known Member
First Name
Jared
Joined
Mar 12, 2019
Threads
27
Messages
4,486
Reaction score
8,178
Location
Washington
Vehicles
2022 Rivian R1T LE
Eric9610 said:
The reason Tesla removed LiDar is not because they need it to catch up as a tech but rather the cost of the sensors. ...
They didn't remove LiDAR; they never had it. They did have radar, which they removed.

And GM's Super Cruise doesn't use LiDAR either; it uses maps that are LiDAR-based. The next-gen Ultra Cruise may add LiDAR, however.
 
Last edited:

COdogman

Well-Known Member
First Name
Brian
Joined
Jan 21, 2022
Threads
32
Messages
9,948
Reaction score
28,593
Location
Colorado
Vehicles
2023 R1T
Occupation
Dog Wrangler
phaduman said:
Moon shot ("only vision is enough")... not sure we can say it was faulty, at least yet. ...
The decision should NOT be cost based, period. It should be based on what is safest on the road.

The larger body of evidence doesn't really support the idea that FSD drives "like a teenager". It's more like a drunk person. There's video after video online of FSD running red lights and stop signs, pulling into traffic at the worst times, or going the wrong way down one-way streets and around roundabouts. Tesla refuses to release raw FSD data to the public, which should tell you a lot. Even using their cherry-picked numbers, it is more dangerous than the average human driver.

If LiDAR is the safest and is used by enough automakers, the cost will come down at scale. That is how nearly all safety tech in history has filtered down to less expensive vehicles.
 

schlosrat

Well-Known Member
First Name
Steve
Joined
Jun 23, 2024
Threads
5
Messages
73
Reaction score
69
Location
Florida Space Coast
Vehicles
2024 R1T Dual Large
Occupation
Engineer
I bought my G1 in July of this year when I had the choice of G1 or G2 as a new vehicle. I'm very happy with my choice and feel I made the right decision for several reasons - one being that I had no illusions that I'd ever see anything like Tesla FSD on my truck. I'm OK with that. At the same time, I believe I've got loads of safety features making this very likely the safest vehicle I've ever had - and loads of nice convenience features too like adaptive cruise control, lane centering, etc.

One of the reasons I'm fine with never being able to get an L3 FSD-like feature comes down to safety. There are situations that the best SW and HW have yet to learn how to cope with, and times when having the car be the driver is definitely the wrong choice for safety. One day we'll get there, but it's not today - not even in Teslas.

One issue that I believe is likely to affect any vehicle is that they have a really hard time "seeing" vehicles beyond the nearest one in any direction.

Consider the case where there's a stopped car way ahead in your lane and other traffic also in your lane between you and the stopped car. Your sensors can easily see the traffic right ahead of you, but those same vehicles block the line of sight to the stopped car that you, as a human watching the road, can see. What do the cars in front of you do? They might slow down, but they also might accelerate so they can merge into an adjacent lane ahead of a car beside them. If they do the latter, what does your vehicle see? It sees traffic ahead accelerating, so your vehicle might speed up. When the car ahead merges over, your vehicle may finally be able to see the stopped car, but it may also be too late. I'm not making this scenario up; I learned about it in a safety engineering class I took recently. The instructors used it as an example pulled from real-world crash data, where the self-driving vehicle made choices an alert human likely would not have made and caused fatal accidents.

My point isn't just to highlight a particular (and deadly) corner case that's fairly common - we've all seen other drivers do this on the road, and I'm sure many of us have been the driver who accelerated before merging over to avoid a stopped or dangerously slower car. My point is that until the sensor technology, coupled with the SW, can be reliably aware of traffic obscured by intervening vehicles, turning this responsibility over to your car is probably an unwise and unsafe thing to do.
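
The occlusion failure mode described above can be reduced to a toy 1-D simulation. Everything here is invented for illustration (a real planner is vastly more sophisticated); it only shows why a line-of-sight sensor model misreads the lead car's merge as an all-clear:

```python
# Toy 1-D occlusion model: ego car at position 0, same-lane traffic
# given as distances ahead (meters). A line-of-sight sensor in the
# same lane only sees the nearest vehicle; everything behind it is hidden.

def visible_vehicles(ego_pos, vehicles):
    """Return only the nearest vehicle ahead of ego_pos."""
    ahead = sorted(v for v in vehicles if v > ego_pos)
    return ahead[:1]

def planner_speed(ego_pos, ego_speed, vehicles, safe_gap=30.0):
    """Naive planner: slow down only if the *visible* lead is close."""
    seen = visible_vehicles(ego_pos, vehicles)
    if not seen or seen[0] - ego_pos > safe_gap:
        return ego_speed + 1.0   # road looks clear: speed up
    return ego_speed - 1.0       # visible lead is close: slow down

# A stopped car at 80 m is hidden behind a moving lead car at 20 m.
before_merge = [20.0, 80.0]   # planner slows for the lead at 20 m
after_merge = [80.0]          # lead merged away; stopped car now "appears"
print(planner_speed(0.0, 25.0, before_merge))  # 24.0 (slowing)
print(planner_speed(0.0, 25.0, after_merge))   # 26.0 (accelerating)
```

The moment the occluding car merges away, the nearest visible gap jumps from 20 m to 80 m, so this naive planner accelerates toward a stopped car it was never tracking - the exact behavior the crash-data example describes.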

So, I'm quite happy with the set of features in G1. It may be that those who bought much earlier had an expectation or hope that they'd one day see an FSD-like capability in their G1 vehicle, but the info that was available to me at the time of my purchase was pretty clear that this was not going to be available in G1, and OP's detailed post only confirms this.
 

sacramentoelectric

Well-Known Member
Joined
Sep 4, 2021
Threads
12
Messages
452
Reaction score
941
Location
sacramento
Vehicles
BMW iX, BMW i3
Whale Blubber said:
Always seemed to me that Elon's idea that if human vision is adequate for human drivers, FSD shouldn't need other sensor systems, was pretty faulty logic. ...
It is if you don’t take the statement at face value. We have over a decade of statements from Elon which clearly demonstrate that he often bullshits without reservation. At this point, it’s safe to assume he’s never being straight with anyone, ever. Who knows if he actually believes that cameras are sufficient. I am confident that if he believed that it made financial sense to only include cameras even though they wouldn’t the job done for many, many years, he would have no issue with making definitive statements saying they were the perfect solution. We’re long past the point where anyone should take anything he says in good faith. It’s all marketing from a man who seems to think the ends always justify the means.
 

phaduman

Well-Known Member
Joined
May 7, 2024
Threads
8
Messages
68
Reaction score
93
Location
San Jose, CA
Vehicles
R1T-G2,Tesla M3&MY, Motorcycles
Occupation
Techie
I only have my own experiences to share when I talk about Tesla's autopilot...

I am always educating myself. LiDAR systems require HD mapping (done in advance) to enable Level 3 driving - as also acknowledged by Benz. There are two camps with differing philosophies:

1. A limited number of cities & roadways, offering the highest possible safety, with LiDAR+vision. One issue with this is that it may take forever for cities to update and maintain their infrastructure & for vendors to keep updating their HD maps to allow their cars to run at Level 3+. And when the actual situation differs from the pre-mapped one (road repairs, e.g.), the car gives up self-driving and asks the driver to take over.

2. Vision-only systems that mimic human drivers (but with more eyes - sides, rear...) and learn, adapt, and make decisions on the fly - as humans would when lanes disappear or cones mark temporary lane closures - without depending on HD maps.

Tesla took approach #2 and has enabled Autopilot in all US cities (without HD mapping) and in Canada, and is testing in China and other places. Big bets and big risks. And to their engineers' credit (I am not giving credit to Elon here), the number of serious issues is limited given the millions of cars operating with that technology. Case in point: 2 years ago, when lane markings disappeared, my car would struggle and give up - but now it is able to paint virtual lane markings on the digital screen and proceed (with confidence).

Approach #1 requires not just that costs come down enough to bring these sophisticated technologies to Corolla levels, but also the continuous participation of cities, vendors, mappers, etc. to keep updating central DBs with info and API availability. E.g., city A does a temporary road repair... after the lanes are blocked, they run a few passes with HD cameras to record the changes and push those videos to the central repository, where the updates are incorporated; then all the vendors & cars download that latest "change" to make real-time decisions. Once the repair is done, the same process runs again.
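
That update loop (survey, publish, every car re-syncs) can be made concrete with a toy sketch of a central map-patch repository. All class and field names here are hypothetical; real HD-map pipelines are far more involved:

```python
# Toy HD-map patch pipeline: the city/mapper publishes versioned
# patches for road segments; vehicles pull anything newer than the
# version they carry before trusting their onboard map.

from dataclasses import dataclass, field

@dataclass
class MapPatch:
    segment_id: str
    version: int
    note: str   # e.g. "left lane closed for repair"

@dataclass
class CentralRepo:
    patches: dict = field(default_factory=dict)   # segment_id -> MapPatch

    def publish(self, patch):
        current = self.patches.get(patch.segment_id)
        if current is None or patch.version > current.version:
            self.patches[patch.segment_id] = patch

@dataclass
class Vehicle:
    local_map: dict = field(default_factory=dict)

    def sync(self, repo):
        """Download segments whose repo version is newer than ours."""
        updated = []
        for seg, patch in repo.patches.items():
            local = self.local_map.get(seg)
            if local is None or patch.version > local.version:
                self.local_map[seg] = patch
                updated.append(seg)
        return updated

repo = CentralRepo()
repo.publish(MapPatch("hwy-101-seg-12", 1, "baseline survey"))
car = Vehicle()
car.sync(repo)
# Repair starts: a new survey pass bumps the segment version, and
# every car must re-sync before it can drive Level 3 on that segment.
repo.publish(MapPatch("hwy-101-seg-12", 2, "left lane closed for repair"))
print(car.sync(repo))   # ['hwy-101-seg-12']
```

Every step has to run again when the repair is removed - which is exactly the maintenance burden the post is pointing at.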

An eventual approach could be a combination of #1 and #2 - and that will likely take years. But if we don't have some crazy people pushing the boundaries of technology and taking differing approaches, we will not see innovation happen at a rapid pace.
 

starbux

Active Member
First Name
Dom
Joined
Sep 7, 2024
Threads
3
Messages
35
Reaction score
142
Location
Gig Harbor, WA
Vehicles
R1T
Wanted to share an update on Gen 2 stuff. What hardware is included in the Rivian Autonomy Platform?



Every Gen 2 Rivian vehicle includes our complete Rivian Autonomy Platform hardware system and redundant onboard computers:

  • 11 high resolution exterior cameras help with day-to-day and higher speed driving, and can see up to 10 seconds ahead at highway speeds.
  • 12 ultrasonic sensors provide 360° close-range coverage.
  • 4 corner radars and 1 forward radar enable more complex maneuvers and monitoring, like lane changes and rear-cross traffic.
  • 1 driver-facing camera integrated into the rearview mirror detects driver fatigue and distraction.
We developed the Rivian Autonomy Platform in-house to enable optimal performance of our system and increased scalability for future feature updates. Powered by entirely new perception hardware and software, the system fuses inputs from an array of 11 internally developed cameras and five radars performing over 250 trillion operations per second, an industry leading level of compute power. This multimodal sensor approach excels at perceiving different types of information, offering redundancy for improved detection.
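
Rivian hasn't published how its fusion actually works, but the redundancy claim in that last paragraph can be illustrated with a minimal (and entirely hypothetical) camera+radar association sketch: keep a detection when either sensor sees it, and prefer radar's range with the camera's label when both do:

```python
# Minimal camera+radar fusion sketch. camera_objs: (range_m, label)
# pairs; radar_objs: range_m readings. Association is nearest-range
# within a gate; thresholds are invented for illustration.

def fuse(camera_objs, radar_objs, match_dist=2.0):
    fused = []
    unmatched_radar = list(radar_objs)
    for cam_range, label in camera_objs:
        # Nearest radar return to this camera detection, if any.
        best = min(unmatched_radar, key=lambda r: abs(r - cam_range), default=None)
        if best is not None and abs(best - cam_range) <= match_dist:
            fused.append((best, label))        # radar range + camera label
            unmatched_radar.remove(best)
        else:
            fused.append((cam_range, label))   # camera-only detection
    for r in unmatched_radar:
        fused.append((r, "unknown"))           # radar-only: kept, not dropped
    return fused

# Camera places a car at ~41 m; radar pins it at 40.2 m. A second
# radar return at 75 m (say, in fog) has no camera match but is
# still kept - that's the redundancy a multimodal stack buys you.
print(fuse([(41.0, "car")], [40.2, 75.0]))   # [(40.2, 'car'), (75.0, 'unknown')]
```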

 
 




