What service can I use to ask questions about my database of blog posts? “Tell me everything you know about Grideon, my fictional character” etc

  • Danitos@reddthat.com · 3 days ago

    OP can also use an embedding model and work with vectorial databases for the RAG.

    I use Milvus (an open-source vector DB engine that can be self-hosted) and OpenAI’s text-embedding-3-small for the embeddings (extremely cheap). There are also some very good open-weights embedding models on Hugging Face.

    • Scrubbles@poptalk.scrubbles.tech · 3 days ago

      I understand conceptually how these work, but I have a hard time figuring out how to get started. I have the model; I know what embeddings, RAG, and vector DBs are; and I have my SQL DB. I just don’t know what the steps are.

      Do you have any guides you recommend?

      • Danitos@reddthat.com · 2 days ago

        Milvus’ documentation has a nice example: link. After that, you just need to use a persistent Milvus DB instead of the ephemeral one in the documentation.

        Let me know if you have further questions.

        • Scrubbles@poptalk.scrubbles.tech · 2 days ago

          That’s a great start! A lot of it depends on OpenAI, though. Is there any guide you know of that runs completely locally? I use TabbyAPI for most of my inference, and I’m happy to run anything else for training.

          • Danitos@reddthat.com · 2 days ago

            It would work the same way; you would just need to connect to your local model. For example, change the code to compute the embeddings with your local model and store those in Milvus. After that, do the inference by calling your local model.

            I haven’t used inference with a local API, so I can’t help with that, but for embeddings I used this model and it worked quite fast, plus it was a top-2 model on the Hugging Face leaderboard. Leaderboard. Model.

            I didn’t do any training, just simple embedding + inference.