[deleted by user]

    • skisnow@lemmy.ca · edit-2 · 3 months ago

      How does having a key solve anything? It’s not that the source doesn’t exist; it’s that the source says something different from the LLM’s interpretation of it.

        • skisnow@lemmy.ca · 3 months ago

          The hash proves which bytes the answer was grounded in, should I ever want to check it. If the model misreads or misinterprets, you can point to the source and say “the mistake is here, not in my memory of what the source was.”
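          The scheme being described could be sketched roughly like this, with hypothetical `ground`/`verify` helpers (the names and record shape are illustrative, not from the thread): store a SHA-256 of the exact source bytes next to the answer, and later recompute it to confirm the source on hand is byte-for-byte the one the answer cited.

          ```python
          import hashlib

          def ground(source_bytes: bytes, answer: str) -> dict:
              # Record an answer together with the SHA-256 of the bytes it was grounded in.
              return {
                  "answer": answer,
                  "source_sha256": hashlib.sha256(source_bytes).hexdigest(),
              }

          def verify(record: dict, source_bytes: bytes) -> bool:
              # Check that these bytes are exactly the source the answer cited.
              return hashlib.sha256(source_bytes).hexdigest() == record["source_sha256"]

          doc = b"The treaty was signed in 1648."
          rec = ground(doc, "It was signed in 1648.")
          print(verify(rec, doc))                                 # True: same bytes
          print(verify(rec, b"The treaty was signed in 1649."))   # False: source differs
          ```

          Note this only pins down *which* bytes were cited; it says nothing about whether the model read them correctly, which is the point the reply below is making.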

          Eh. This reads very much like your headline is massively over-promising clickbait. If your fix for an LLM bullshitting is that you have to check all its sources, then you haven’t fixed LLM bullshitting.

          If it does that more than twice, straight in the bin. I have zero chill any more.

          That’s… not how any of this works…