• intensely_human@lemm.ee · 1 year ago

    If I understand LLMs right, they have a maximum context (prompt) length, but can be trained on any amount of text data.

    The only way to add knowledge that doesn’t fit into a prompt is to put it in the training data and re-train.

    But you could describe some sort of algorithm it can use to sleuth out data via API calls, and it would then have access to lots more up-to-date data than can fit in a prompt. Except the body of each response would still have to become part of a prompt.

    But the whole dataset it has access to doesn’t have to be mentioned in the conversation, so it doesn’t have to be part of the prompt. Ultimately you don’t want your AI assistant telling you everything it knows in each interaction, just to access some slice of your data world, make changes to it, then eventually get you an answer or a report.
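    The "access a slice, not the whole dataset" idea above can be sketched in a few lines: retrieve only the records relevant to the question, then inject that slice into the prompt. This is a minimal illustration, not anyone's actual implementation; `fetch_records`, the keyword matching, and the sample data are all made up for the example.

```python
def fetch_records(query, dataset):
    """Stand-in for an API call: return records mentioning any query term."""
    terms = query.lower().split()
    return [r for r in dataset if any(t in r.lower() for t in terms)]

def build_prompt(question, dataset, max_chars=2000):
    """Fit only the retrieved slice into the model's limited context window."""
    context = "\n".join(fetch_records(question, dataset))[:max_chars]
    return f"Context:\n{context}\n\nQuestion: {question}"

# Hypothetical "data world" -- in practice this would live behind an API.
dataset = [
    "Invoice 1042 was paid on 2024-03-01.",
    "Invoice 1043 is still outstanding.",
    "The office plant needs watering.",
]

prompt = build_prompt("Which invoice is outstanding?", dataset)
```

    The prompt ends up containing only the invoice records, not the unrelated ones, so the context limit is spent on the relevant slice.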

    What is FreeGPT by the way?

    • FlaminGoku@reddthat.com · 1 year ago

      I’ll try to get the actual name and repo, since I want to leverage it. It’s basically a reverse-engineered ChatGPT that is open source.

      But yeah, I think the idea is that prompts trigger the API call to get the additional data.