Not a fan of Tesla or Musk, but I think it always bears repeating in these conversations that AI driving will be much safer than human driving if it isn’t already.
Unfortunately, accidents will happen, but when an accident happens with an AI, ALL the other AIs get to learn from that failure going forward.
I’m very happy that in my old age, I’ll have some future version of this driving me around… or more likely, taking the wheel from me if I do something stupid.
AI driving is only as good as its sensors.
While most other companies use LIDAR, Musk switched to video cameras because they're cheaper.
Which is why Tesla "FSD" is worse than its competitors.
“This thing that does not exist and nobody has any idea how to make it” will totally be safer than human driving.
You know what is safer than human driving and we know how to make? Trains.
I’m still waiting for my train from LA to SF. It’s been in the works since I was in college. I’ve already graduated, had multiple jobs, early retired, and there’s still no sign of it.
Sorry, but that’s just silly.
Plenty of people have great ideas on how to make self-driving cars, and we’re seeing them come into play.
If you don’t understand that computer reaction time is ludicrously faster than human reaction time, and what that means for safety, I really can’t help you, though.
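The reaction-time gap is easy to put numbers on. A minimal back-of-the-envelope sketch, assuming ~1.5 s for an alert human driver and ~0.1 s for a computer's sensor-to-brake pipeline (both figures are assumptions for illustration, not measurements of any specific system):

```python
# Distance traveled during reaction time alone, before braking even begins.
MPS_PER_MPH = 0.44704  # exact conversion factor, miles/hour -> meters/second

def reaction_distance_m(speed_mph: float, reaction_s: float) -> float:
    """Meters covered while the driver (human or computer) is still reacting."""
    return speed_mph * MPS_PER_MPH * reaction_s

# Assumed reaction times: ~1.5 s human, ~0.1 s computer pipeline.
human = reaction_distance_m(70, 1.5)
computer = reaction_distance_m(70, 0.1)
print(f"At 70 mph: human ~{human:.1f} m, computer ~{computer:.1f} m")
```

At highway speed that's roughly 47 m of blind travel for the human versus about 3 m for the machine, under these assumed reaction times.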
We all understand the benefits of computer reaction time; computer-assisted safety features are being included in cars all over the world.
But those are "stop" features, which make the car refrain from doing something harmful. The problem is the "go" features, which give the car decision power.
We tend to forget about all the lives saved by the "stop" features and focus on one life lost through a "go" feature. It may be a shortcoming of human nature, but we are what we are, and that is why "go" features don't have a future.
I’m pretty sure people get hit by trains on a daily basis.
Autopilot is terrible, and the fact that they advertise it as a reputable system is abhorrent. And yes, I own a Tesla.
I’m pretty happy with autopilot in our cars, especially on road trips. It really helps with driving fatigue.
AI driving will probably be safer one day, but there is no real data today demonstrating that its current state is. At the same time, we're getting lots of examples where it fails at the most basic stuff.