theshrike79 3 days ago

> the problem is that in order to develop an intuition for questions that LLMs can answer, the user will at least need to know something about the topic beforehand

This is why simonw (the author) has his "pelican on a bike" test: it's not 100% accurate, but it's a good indicator.

I have a set of my own standard queries and problems (no counting characters or algebra crap) that I feed to new LLMs I'm testing.

None of the questions exist outside my own Obsidian note, so they can't be gamed by LLM authors. I've tested multiple different LLMs with them, so I have a "feeling" for what the answer should look like, and since I personally know the correct answers, I can validate them immediately.
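
For anyone who wants to build the same kind of private test set, here's a minimal sketch of a harness for running it, assuming the questions live one per line in a plain-text file and the model is reachable through an OpenAI-compatible chat endpoint (the file name, endpoint URL, and model name below are placeholders, not anything specific):

    import json
    import requests

    # Placeholder values -- point these at whatever model you are testing.
    ENDPOINT = "http://localhost:11434/v1/chat/completions"  # any OpenAI-compatible endpoint
    MODEL = "llama3"

    def ask(question: str) -> str:
        """Send one benchmark question and return the model's answer text."""
        resp = requests.post(
            ENDPOINT,
            json={
                "model": MODEL,
                "messages": [{"role": "user", "content": question}],
            },
            timeout=120,
        )
        resp.raise_for_status()
        return resp.json()["choices"][0]["message"]["content"]

    if __name__ == "__main__":
        # One question per line in a file that never leaves your machine.
        with open("private_questions.txt", encoding="utf-8") as f:
            questions = [line.strip() for line in f if line.strip()]

        # Collect answers so you can eyeball them against the answers you already know.
        results = {q: ask(q) for q in questions}
        print(json.dumps(results, indent=2, ensure_ascii=False))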

barapa 3 days ago | parent [-]

They are training on your queries, so the models may have some exposure to them going forward.

franktankbank 3 days ago | parent | next [-]

Even if your queries stay hidden because you run a local model, have some humility: your queries are probably not actually unique. For this reason I have a very hard time believing that a basic LLM can properly reason about complex topics; it can regurgitate up to whatever level it's been trained. That doesn't make it less useful, though. But in the edge cases, how do we know that the query being ingested gets trained with a suitable answer? Wouldn't that constitute over-fitting and be terribly self-reinforcing?

keysdev 3 days ago | parent | prev [-]

Not if you ollama pull the model to your own machine.
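
A rough sketch of that local route using the official ollama Python client (assumes Ollama is installed and its server is running locally; the model name is just an example):

    import ollama

    # Pull the model once; after that the prompts run entirely on your own
    # machine and are never sent to a remote provider's training pipeline.
    ollama.pull("llama3")

    reply = ollama.chat(
        model="llama3",
        messages=[{"role": "user", "content": "one of your private benchmark questions"}],
    )
    print(reply["message"]["content"])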