• 0 Posts
  • 684 Comments
Joined 11 months ago
Cake day: March 8th, 2024

  • The LLM is going over the search results, taking them as a prompt and then generating a summary of the results as an output.

    The search results are generated by the good old search engine, the “AI summary” option at the top is just doing the reading for you.

    And of course, if the answer isn’t trivial, it will very likely generate an inaccurate or incorrect output from those inputs.

    But none of that changes how the underlying search engine works. It’s just doing additional work on the same results the same search engine generates.

    EDIT: Just to clarify, DDG also has a “chat” service that, as far as I can tell, is just a UI overlay over whatever model you select. That works the same way as all the AI chatbots you can use online or host locally, and I presume it’s not what we’re talking about.
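The pipeline described above (search engine produces results, the LLM just reads them for you) can be sketched in a few lines. Everything here is hypothetical: `search()` stands in for the classic search engine and `llm_complete()` for whatever model does the summarizing; neither is a real API.

```python
# Hypothetical sketch of an "AI summary" layered on top of ordinary search.

def search(query):
    # The good old search engine: returns its ranked results unchanged.
    return [
        {"title": "Result A", "snippet": "First relevant snippet..."},
        {"title": "Result B", "snippet": "Second relevant snippet..."},
    ]

def llm_complete(prompt):
    # Placeholder for the model call; a real service would send `prompt`
    # to an LLM and return its generated text.
    return "A generated summary of the snippets above."

def search_with_summary(query):
    results = search(query)  # same results the same engine always produced
    context = "\n".join(r["snippet"] for r in results)
    prompt = f"Summarize these search results for: {query}\n{context}"
    summary = llm_complete(prompt)  # the only new step: reading for you
    return {"results": results, "summary": summary}

out = search_with_summary("example query")
```

Note that nothing in `search()` changes: the summary is extra work done on top of the same output, which is why a wrong summary doesn’t imply a broken search engine.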


  • An LLM is an LLM: a transformer model generating likely output from its training data.

    I hate all this analogy stuff people keep resorting to. The thing does what it does, and trying to understand it by analogy is being used disingenuously to push all sorts of misinformation-filled agendas.

    It’s not about “trust”, it’s about how the output you’re being given is generated, and therefore what types of outputs are useful in which applications.

    The answer is fairly narrow, particularly compared to how it’s being marketed. It absolutely, 100% isn’t a search engine, though. And even when plugged into a search engine and acting as a summarization engine it’s actually pretty terrible, and very likely to distort output that anybody who has been near a computer in the past thirty years could parse faster at a glance.
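The “generating likely output” point can be made concrete with a toy example. Nothing below is a real model; the token probabilities are invented purely to illustrate that a transformer’s last step is sampling from a probability distribution over next tokens, not looking anything up.

```python
import random

# Invented distribution for illustration: a model's probabilities for the
# token following "the sky today is". These numbers are made up.
next_token_probs = {"sunny": 0.6, "rainy": 0.3, "blue": 0.1}

def sample_next_token(probs, rng):
    # Draw one token, weighted by the model's assigned probabilities.
    tokens = list(probs)
    weights = [probs[t] for t in tokens]
    return rng.choices(tokens, weights=weights, k=1)[0]

rng = random.Random(0)  # seeded so the sketch is reproducible
token = sample_next_token(next_token_probs, rng)
```

The point of the toy: “likely” output is exactly that, likely, which is why plausible-but-wrong answers are a built-in failure mode rather than a bug to be patched out.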


  • This stuff feels like such a pointless, almost nostalgic instinct.

    I mean, it’s fun, and I’m sure the guy got a cool Youtube video out of it, but these days you’re not roasting hardware by overclocking a flimsy chip barely aided by a heatsink. Instead, this guy took a small space heater burning the power of a half-assed microwaving session and turned it into a proper toaster oven, for the benefit of doing the exact same thing at mostly the same speed, just colder.

    The 4090 is already typically strapped to a massive heatsink with a bunch of fans and runs at 50-60C under load in most competent builds. I want it to not trip my breaker, not to keep it cooler, boring as that may be by comparison.


  • A fun way to put into perspective how hideously power-hungry modern desktop PCs are: I have an old-ish laptop running as a local Plex server, which also has an LLM loaded on it and a few other Docker bits and pieces, and it just sits happily humming at 10 W idle (which is as much as my TV draws when it’s turned off).

    I’ve looked into building a small form factor PC to replace it at some point, but all the spare parts I have lying around would draw as much at idle as that tiny thing does going full tilt, and I just can’t justify it for something that sits there waiting for me to feel like rewatching The Matrix or whatever.
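The idle-draw comparison is easy to put in numbers. The 10 W figure comes from the comment above; the ~60 W desktop idle and the €0.30/kWh electricity price are assumptions picked for illustration, not measurements.

```python
# Back-of-the-envelope yearly energy for an always-on box.
# 10 W laptop figure is from the comment; 60 W desktop idle and the
# 0.30/kWh price are assumed values for the sake of the comparison.

HOURS_PER_YEAR = 24 * 365  # 8760

def annual_kwh(watts):
    # watts * hours, converted to kilowatt-hours
    return watts * HOURS_PER_YEAR / 1000

laptop_kwh = annual_kwh(10)   # laptop server at 10 W idle
desktop_kwh = annual_kwh(60)  # assumed desktop built from spare parts
extra_cost = (desktop_kwh - laptop_kwh) * 0.30  # assumed 0.30 per kWh
```

Under those assumptions the laptop burns about 88 kWh a year against the desktop’s roughly 526 kWh, so the spare-parts build would cost over a hundred extra per year just to idle.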