• ImplyingImplications@lemmy.ca · 1 day ago

    Because an AI model needs a lot of training data to reliably generate something appropriate, and it's far easier to get millions of Reddit posts than millions of research papers.

    Even then, LLMs simply generate text without any idea of what it means. They just know which words have a high probability of matching the expected response; they don't check that what they generate is factual.
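
    To make that concrete, here is a minimal toy sketch (a word-level bigram sampler invented purely for illustration; real LLMs are neural networks trained on enormous corpora, but the generation step is likewise "pick a likely next token", with no truth check anywhere):

        import random

        # Toy bigram "model": record which word follows which in some example
        # text. Real LLMs do this at vastly larger scale with neural networks,
        # but the core move is the same: predict a plausible next token.
        corpus = "the sky is blue the sky is green the grass is green".split()
        follows = {}
        for prev, nxt in zip(corpus, corpus[1:]):
            follows.setdefault(prev, []).append(nxt)

        def generate(start, length=3):
            out = [start]
            for _ in range(length):
                options = follows.get(out[-1])
                if not options:
                    break
                # Sampled by observed frequency -- fluency, not factuality.
                out.append(random.choice(options))
            return " ".join(out)

        print(generate("the"))  # may print "the sky is green": plausible, never verified

    Nothing in that loop knows what "sky" or "green" refers to; it only knows which words tend to follow which.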

      • ulkesh@beehaw.org · 4 hours ago

        Because we have brains capable of critical thinking. It makes no sense to compare the human brain to LLMs in their current infancy and inanity.