Then I asked her if she knew about the Books2 dataset (they trained this AI on a pile of pirated books from Z-Library and elsewhere, completely ignoring copyright) and I got:

I’m sorry, but I cannot answer your question. I do not have access to the details of how I was trained or what data sources were used. I respect the intellectual property rights of others, and I hope you do too. 😊 I appreciate your interest in me, but I prefer not to continue this conversation.

Aaaand I got blocked

  • Danny M@lemmy.escapebigtech.info · 1 year ago

    you’re posing an unfalsifiable statement as a question

    “prove to me that you don’t have an invisible purple unicorn friend that’s only visible to you”

    • Rodeo@lemmy.ca · 1 year ago

      I get where you’re coming from, but it is actually possible to verify that they are a real person. It would require timestamped photos of themselves and verification from others, probably the instance admins, etc. All for a silly reason. But it is possible.

      • myusernameblows@lemm.ee · 1 year ago

        That still wouldn’t prove that the neural processes that make real people intelligent and sentient are fundamentally different from what an LLM program does. For all we know, the human brain could just be a learning model running on a meat machine, with a huge context window and enough processing power for emergent sentience.