Well, when Tesla, this former employee / whistleblower, and these journalists refer to “autopilot,” they’re specifically talking about the software and hardware Tesla markets under the “____ Autopilot” banner for those features.
Some of these more advanced autopilot features clearly have issues, and that probably stems from the fact that they rely only on cameras and ultrasonic sensors, not lidar.
In my experience with a Model 3 and AAP, when those cameras and sensors got wet, it was pretty clear the system became dangerous. It started raining during our test drive, so we had a before / after experience on the same roads. Once everything was obstructed with water, you could see the car’s collision detection struggle to detect other objects. Objects on the center display would erratically pop in and out of view. And this was a showroom car, it wasn’t the first rain of the year, and it was behaving “normally” according to staff.
Even if basic autopilot was fine, this left such a sour taste in my mouth that I had no appetite to give that company my money. Almost dying and almost killing a kid were a big “fuck this company” for me.
My (non-Tesla) vehicle can tell when its sensors are impaired by frost or mud or whatever. It flashes a warning on the dash and disables lane keeping and/or collision detection until the next startup. Does Tesla not do that?
It was supposed to.
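For what it’s worth, the interlock being described is simple in principle: detect a blocked sensor, warn the driver once, and latch the assists off until the next ignition cycle. A minimal sketch in Python (hypothetical names and behavior, not any automaker’s actual code):

```python
# Hypothetical sketch of a sensor-impairment interlock: if any
# perception sensor reports a blocked/degraded signal, warn the
# driver and latch the assist features off until the next startup.

from dataclasses import dataclass

@dataclass
class SensorStatus:
    name: str
    blocked: bool  # e.g. camera lens obscured, sensor covered in ice

class DriverAssist:
    def __init__(self) -> None:
        self.lockout = False  # latched until next ignition cycle

    def update(self, sensors: list[SensorStatus]) -> None:
        impaired = [s.name for s in sensors if s.blocked]
        if impaired and not self.lockout:
            self.lockout = True
            print(f"WARNING: {', '.join(impaired)} impaired; "
                  "lane keeping and collision detection disabled")

    def lane_keep_allowed(self) -> bool:
        # Stays False until restart; a clean reading does NOT
        # re-enable assists mid-drive, which avoids flapping.
        return not self.lockout

assist = DriverAssist()
assist.update([SensorStatus("front camera", blocked=True),
               SensorStatus("ultrasonic", blocked=False)])
assert not assist.lane_keep_allowed()
```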
There’s a human tendency to become complacent after a while, which presents a risk.
Can’t wait for safer-than-human self-driving technology, and I know we’ll need to accept some risks to get there, but there are good arguments against “PLEASE remain fully attentive 100% of the time for this technology that will in fact only require full attentiveness in edge cases”. You might be an exception, of course! But Average Meat Driver is going to slip into complacency after many, many miles of perfect autopiloting.
It’s the same as cruise control, but it’s supposed to eliminate human error. I’d argue that most of the people having issues from not paying attention probably weren’t paying attention in the first place and were dangerous to begin with.
None of what you mentioned is in basic autopilot. Autopilot is lane keeping and traffic-aware cruise control only.
Let’s not get pedantic. They are part of the “enhanced autopilot” package.
https://www.tesla.com/support/autopilot
Which is not included with the base vehicle. It’s an extra purchase.
Well in that case, the advanced autopilot features that almost killed me were totally safe.
Sure, which I consider part of FSD, which almost killed me like 3 times when I had a loaner with it active.
But that’s not basic autopilot. AP is fine assuming people pay attention.