• 0 Posts
  • 30 Comments
Joined 1 year ago
Cake day: July 1st, 2023

  • It’s a statistical model. Given a sequence of words, there’s a set of probabilities for what the next word will be.

    That is a gross oversimplification. LLMs operate on much more than raw statistical probabilities. It’s true that they predict the next word based on probabilities learned from training data, but they also have layers of transformers that process the context provided in a prompt to extract meaningful relationships between words and phrases.

    For example: Imagine you give an LLM the prompt, “Dumbledore went to the store to get ice cream and passed his friend Sam along the way. At the store, he got chocolate ice cream.” Now, if you ask the model, “who got chocolate ice cream from the store?” it doesn’t just blindly rely on statistical likelihood. There’s no way you could argue that “Dumbledore” is a statistically likely word to follow the text “who got chocolate ice cream from the store?” Instead, it uses its understanding of the specific context to determine that “Dumbledore” is the one who got chocolate ice cream from the store.

    So, it’s not just statistical probabilities; the models have the ability to comprehend context and generate meaningful responses based on that context. The attention step that makes this possible is sketched below.
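    To make that concrete, here is a toy, hypothetical sketch of scaled dot-product attention in Python. The token embeddings and projection matrices are random stand-ins (a real LLM learns them during training), so the printed weights are meaningless; the point is only the mechanism: the query for the final position assigns a weight to every earlier token, including “Dumbledore”, and the next-word prediction is computed from that context-mixed vector rather than from word frequency alone.

```python
# Toy sketch of the attention step that lets a transformer use prompt context.
# Not a real LLM: the embeddings and weights below are random placeholders,
# purely to show the mechanism.
import numpy as np

rng = np.random.default_rng(0)

tokens = ["Dumbledore", "went", "to", "the", "store", "...",
          "who", "got", "chocolate", "ice", "cream", "?"]
d = 16                                   # embedding size (arbitrary for this sketch)
X = rng.normal(size=(len(tokens), d))    # stand-in token embeddings

# Learned projection matrices in a real model; random placeholders here.
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
Q, K, V = X @ Wq, X @ Wk, X @ Wv

# Scaled dot-product attention for the final position (the "?" token):
# it assigns a weight to every earlier token, including "Dumbledore".
scores = Q[-1] @ K.T / np.sqrt(d)
weights = np.exp(scores - scores.max())
weights /= weights.sum()

for tok, w in zip(tokens, weights):
    print(f"{tok:>12s}  attention weight = {w:.3f}")

# The next-word distribution is computed from this context-mixed vector,
# so "Dumbledore" can win even though it is a rare word to follow the
# literal text of the question.
context_vector = weights @ V
```

    In a full model this happens across many attention heads and many layers, which is where the ability to resolve “who got chocolate ice cream” back to “Dumbledore” comes from.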

  • Their assumptions about what the car can or will do without the need for human intervention make them an insane risk to everyone around them.

    Do you have statistics to back this up? Are Teslas actually more likely to get into accidents and cause damage/injury compared to a human driver?

    I mean, maybe they are. My point is not that Teslas are safer, only that you can’t determine that from a few videos. People like to post clips of Teslas running a light or getting into an accident, but those clips don’t prove anything. The criterion for self-driving cars being allowed on the road shouldn’t be that they are 100% safe, only that they are at least as safe as human drivers, because human drivers are really, really bad and get into accidents all the time. What matters is something like the rate of crashes per mile driven, as in the rough sketch below.
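    A minimal sketch of that kind of comparison, with entirely made-up numbers just to show the arithmetic (real fleet mileage and crash counts would be needed before drawing any conclusion):

```python
# Back-of-the-envelope way to compare safety: crashes per million miles driven,
# not individual videos. All numbers below are invented purely for illustration.
def crashes_per_million_miles(crashes: int, miles: float) -> float:
    return crashes / (miles / 1_000_000)

# Hypothetical figures -- substitute real fleet data to conclude anything.
human_rate = crashes_per_million_miles(crashes=4_800, miles=2_500_000_000)
tesla_rate = crashes_per_million_miles(crashes=180,   miles=150_000_000)

print(f"Human drivers:  {human_rate:.2f} crashes per million miles")
print(f"Driver assist:  {tesla_rate:.2f} crashes per million miles")
# A handful of viral clips tells you the numerator exists,
# but nothing about either rate.
```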


  • This same concept is why you can’t make a 100% safe self driving car. Driving safety is a function of everyone on the road. You could drive as safely as possible, but you’re still at the mercy of everyone else’s decisions. Introducing a system that people aren’t familiar with will create a disruption, and disruptions cause accidents.

    Again, we don’t need a 100% safe self-driving car; we just need one that’s at least as safe as a human driver.

    I disagree with the premise that humans are entirely predictable on the road, and I also disagree that self-driving cars are less predictable. Computers are pretty much the definition of predictable: they follow the rules and don’t make last-minute decisions (unless their programming is faulty), and they can be trained to always err on the side of caution, as in the sketch below.
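    A purely hypothetical illustration of what “err on the side of caution” can look like as a deterministic rule; this is not any real vehicle’s logic, just a sketch of the idea:

```python
# Minimal sketch of a deterministic "err on the side of caution" policy.
# Entirely hypothetical logic, not any real autopilot stack.
from dataclasses import dataclass

@dataclass
class Perception:
    obstacle_ahead: bool
    obstacle_confidence: float  # 0.0 - 1.0, how sure the sensors are
    signal_state: str           # "green", "yellow", "red", "unknown"

def choose_action(p: Perception) -> str:
    # Same inputs always produce the same output: no last-minute improvisation.
    if p.signal_state in ("red", "yellow", "unknown"):
        return "brake"
    if p.obstacle_ahead or p.obstacle_confidence < 0.9:
        return "brake"          # when unsure, assume the worst and slow down
    return "proceed"

print(choose_action(Perception(False, 0.5, "green")))   # brake: not confident enough
print(choose_action(Perception(False, 0.99, "green")))  # proceed
```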