• 1 Post
  • 33 Comments
Joined 11 months ago
Cake day: June 4th, 2025

  • You still lose the internal state between each token of the database output. It would let the model plan, but it would still be externalizing that planning one token at a time. Condensing all of the internal state into a single token per step still means huge losses in detail, as well as fragmentation of responses, which produces all the problems you see with LLMs.

    Somehow the actual internal state needs not only to be preserved, but fed back into itself. That’s how brains work. Condensing it into tokens isn’t enough.
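    To make the distinction concrete, here is a toy sketch of the two feedback loops being contrasted. Everything here is illustrative pseudocode-made-runnable, not any real model’s API: a transformer-style step computes a rich internal state but discards it, feeding back only the emitted token, while a recurrent-style step carries the state itself forward.

    ```python
    # Toy contrast: token-only feedback vs. state feedback.
    # All names and structures are illustrative assumptions.

    def transformer_style_step(tokens):
        """Stand-in for an LLM forward pass: a rich internal state is
        computed each step, but only one output token survives it."""
        internal_state = {"detail": len(tokens) * 100}  # rich, then discarded
        next_token = f"t{len(tokens)}"
        return next_token  # internal_state is thrown away here

    def rnn_style_step(hidden, token):
        """Stand-in for a recurrent step: the hidden state itself is
        carried forward, not just the emitted token."""
        hidden = hidden + [token]        # state persists and accumulates
        next_token = f"t{len(hidden)}"
        return hidden, next_token

    # Transformer-style loop: only tokens are ever fed back in.
    tokens = ["prompt"]
    for _ in range(3):
        tokens.append(transformer_style_step(tokens))

    # Recurrent-style loop: the hidden state is fed back into itself.
    hidden, out = [], []
    for tok in ["a", "b", "c"]:
        hidden, nxt = rnn_style_step(hidden, tok)
        out.append(nxt)
    ```

    The point of the sketch: in the first loop, everything the model “knew” inside a step is gone by the next step, recoverable only through the tokens it emitted; in the second, the state survives between steps by construction.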

  • Except LLMs are absolutely terrible at working with a new, poorly documented library. Commonly-used, well-defined libraries? Sure! Working in an obscure language or an obscure framework? Good luck.

    LLMs can surface information. It’s perhaps the one place they’re actually useful. They cannot reason the way a human programmer can, yet that is exactly the basis on which all the big tech companies are trying to sell them.