franktankbank 3 days ago

Even if your queries are hidden behind a locally running model, you should have the humility to recognize that your queries are probably not unique. For this reason I have a very difficult time believing that a basic LLM can properly reason about complex topics; it can only regurgitate to whatever level it's been trained. That doesn't make it less useful, though. But for the edge cases, how do we know the queries it ingests get trained with suitable answers? Wouldn't that constitute overfitting in these cases and be terribly self-reinforcing?