silon42 5 days ago

Surely there are prompts on the "internet" that it will borrow from...

vineyardmike 5 days ago | parent [-]

Definitionally no.

Each LLM responds to prompts differently. The best prompts to model X will not be in the training data for model X.

Yes, older prompts for older models can still be useful. But if you asked ChatGPT before GPT-5, you were getting a response from GPT-4, which had a knowledge cutoff around 2022 — certainly not recent enough for adequate prompts to appear in the training data.

There are also plenty of terrible prompts on the internet, so I still question a recent model's ability to write meaningful prompts based on its training data. Prompts need to be tested for their use case, and the Medium posts from self-proclaimed gurus and similar training-data junk surely were not tested against yours. Of course, the model is not testing the prompt for you either.

meowface 5 days ago | parent [-]

Exactly.

I wasn't trying to make any of the broader claims (e.g., that LLMs are fundamentally unreliable, which is sort of true but not really in practice). I'm speaking about the specific case where a lot of people want to ask a model about itself: how it was created or trained, what it can do, or how to make it do certain things. In these particular cases (and, admittedly, many others), models are often eager to reply despite having no accurate information about the true answer, barring some external lookup that happens to be correct. Without any tools, they will just give something plausible but not real.

I am actually a big LLM optimist personally and believe LLMs possess "true intelligence and reasoning", but I find it odd how some otherwise informed people seem to think any of these models possess introspective abilities. The model fundamentally does not know what it is, or even that it is a model, despite any insistence to the contrary — even with a lot of relevant system prompting and LLM-related training data.

It's like a Boltzmann brain. It's a strange, jagged entity.