secondcoming 4 days ago

How can we be sure that LLMs won't start giving stale answers?

mark-r 4 days ago | parent | next

We can't. I don't think the LLMs themselves can recognize when an answer is stale. They could if contradicting data were available, but their very existence suppresses the contradictory data: when people ask the model instead of posting new questions and answers, the corrections never get written.
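To make "stale" concrete, here's a minimal sketch (Python chosen as the example language; the deprecated call is real, the scenario is hypothetical): a model trained mostly on older code will keep suggesting an idiom the language has since deprecated.

    from datetime import datetime, timezone

    # Stale idiom an older training corpus keeps reinforcing
    # (deprecated since Python 3.12):
    #     datetime.utcnow()
    # Current idiom:
    now = datetime.now(timezone.utc)
    print(now.isoformat())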

zahlman 4 days ago | parent

LLMs don't experience the world, so they have no reason a priori to know what is or isn't truthful in the training data.

(Not to mention the confabulation. Making up API method names is natural when your model of the world treats the method names you've seen as examples, with no reason to consider them an exhaustive listing.)
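A toy illustration of that failure mode (the hallucinated method below is invented for the example; only the slicing idiom is real Python):

    s = "hello"

    # A model that has seen list.reverse() might confabulate str.reverse()
    # by analogy; no such method exists.
    try:
        s.reverse()
    except AttributeError as e:
        print(e)  # 'str' object has no attribute 'reverse'

    # The real idiom:
    print(s[::-1])  # olleh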

g947o 4 days ago | parent | prev | next

They will, but model updates and competition help solve the problem. If people find that Claude consistently gives better/more relevant answers than GPT, for example, people will choose the better model.

The worst thing about Q/A sites isn't that they don't work. It's that there are no alternatives to Stack Overflow. Some of the most upvoted answers on Stack Overflow prove that the model can work well; too bad that most of the time it doesn't.

Someone1234 4 days ago | parent | prev | next

They still train on the official documentation/examples, public GitHub repos, and your own code, which are all more likely to be evergreen. SO was definitely a massive training advantage before LLMs matured, though.

Cloudef 4 days ago | parent | prev

LLMs are just statistics; eventually they'll kill themselves in a feedback loop by consuming their own farts (literally): models retrained on model output degrade generation by generation.
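A toy sketch of that feedback loop (a stand-in distribution-fitting exercise, not real training dynamics): fit a Gaussian to samples drawn from the previous generation's fit, with no fresh real data, and watch the estimate drift.

    import numpy as np

    rng = np.random.default_rng(0)
    mu, sigma = 0.0, 1.0   # generation 0: the "real" data distribution
    n = 20                 # samples per generation

    for gen in range(1, 16):
        samples = rng.normal(mu, sigma, n)         # "train" on the previous model's output
        mu, sigma = samples.mean(), samples.std()  # refit; the MLE for sigma is biased low
        print(f"gen {gen:2d}: mu={mu:+.3f} sigma={sigma:.3f}")

With nothing anchoring it to fresh data, the estimate does a random walk and the spread tends to shrink each generation, which is the degradation being described.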