There are always going to be cases where these systems fail. Even with a self-driving car that’s a 10x safer driver than the best human, there would still be around 4,000 fatal accidents a year in the US alone. FSD is probably already a safer driver than a human. When it fails, that generally means it got stuck somewhere, not that it caused an accident. I haven’t seen the video in question, but it was probably an older version or Autopilot, not FSD.
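The 4,000 figure follows from a simple division; here is a quick back-of-the-envelope sketch, assuming roughly 40,000 fatal US traffic accidents per year as the baseline (the figure the 4,000 estimate implies, used here purely for illustration):

```python
# Back-of-the-envelope check of the "10x safer" claim above.
# Assumed baseline: ~40,000 fatal traffic accidents per year in the US
# (illustrative, not an official statistic).
us_fatal_accidents_per_year = 40_000
safety_multiplier = 10  # "10x safer than the best human"

expected_fatal_accidents = us_fatal_accidents_per_year / safety_multiplier
print(expected_fatal_accidents)  # 4000.0
```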
It seems like a good decision, then, to limit self-driving systems to situations where they are less likely to fail.
FSD is probably already a safer driver than a human.
Even with the horrendous driving skills of some people, that’s a very bold claim without actual evidence.
When it fails, that generally means it got stuck somewhere, not that it caused an accident. I haven’t seen the video in question, but it was probably an older version or Autopilot, not FSD.
It doesn’t make much difference what Tesla calls their latest beta software update, imho. If their Autopilot is enough to get you into dangerous situations, how is a system with even less human oversight going to be fundamentally different? I’ll need to see some more critical reviews of this system, after years of Tesla not delivering on their claims and only rolling features out to select beta testers to maintain plausible deniability.
I didn’t find the specific video of an older version trying really hard to drive into oncoming traffic, though there are plenty. I did find one of the FSD beta from six months ago, though, where it can’t seem to decide which lane is correct.
It doesn’t make much difference what Tesla calls their latest beta software update, imho.
Autopilot and FSD Beta are two different systems, of which Autopilot is the less advanced one. There’s only one death ever linked to the use of FSD Beta, and that includes the older versions as well.
The only statistics available regarding the safety of FSD and Autopilot come from Tesla itself, which one should probably take with a grain of salt, but they seem to indicate it being 5x safer than the average American driver.
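For context on what a "5x safer" claim means in practice: Tesla’s safety reports frame safety as miles driven between accidents, so a multiplier like this translates into implied miles per accident. A minimal sketch, where the baseline figure is purely hypothetical and not Tesla’s actual number:

```python
# What a "5x safer" claim implies in miles-per-accident terms.
# Assumption: safety is measured as miles driven between accidents.
# The baseline below is a hypothetical illustration, not real data.
baseline_miles_per_accident = 500_000  # hypothetical average US driver
claimed_multiplier = 5                 # "5x safer"

implied_miles_per_accident = baseline_miles_per_accident * claimed_multiplier
print(implied_miles_per_accident)  # 2500000
```

Note that this kind of comparison is sensitive to where the miles are driven (highway vs. city), which is one reason to take the headline multiplier with a grain of salt.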
Then there are of course plenty of independent YouTubers putting these systems to the test, such as AI DRIVR and CYBRLFT, who give pretty honest assessments of their strengths and weaknesses.
Autopilot and FSD Beta are two different systems, of which Autopilot is the less advanced one. There’s only one death ever linked to the use of FSD Beta, and that includes the older versions as well.
I know. Tesla has already advertised that their newer system is fully based on ANN. Factoring in their current track record doesn’t inspire any confidence in me. I’m not reading that paywalled article, but one death for a system that only had limited rollout until very recently isn’t enough to make me believe it’s reasonably safe either. There just isn’t trustworthy, large-scale data out there yet. We need to keep the perspective in mind here: this is pretty much Tesla’s last chance to actually make good on their empty promises and they have a lot to prove.
At this point I’m not willing to take any statistical claim coming from Tesla, salt or not.
I can only think of that one video where some guy’s Tesla desperately wants to veer into oncoming traffic.
it achieved sentience and realized it was a piece of shit made by a moron