goalieca 5 days ago

> It’s reasonable to say LLMs hallucinate because they aren’t trained to say they don’t have a statistically significant answer.

I’ve not seen anyone intuitively explain the parameters of a real-scale model... perhaps because it’s all just thousand-dimensional nonsense.

Statistics is a funny thing too. Pretty much everyone has seen how trend lines don’t always extrapolate very well.

I think OpenAI is biased toward thinking that adding more parameters and training better will fix all ills. In a hand-wavy way, you can see this like adding more degrees to the polynomial when you curve-fit in a spreadsheet: with enough parameters you can perfectly fit any dataset. That all works until you run across new inputs that are unlike the training data.
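As a rough illustration of that curve-fitting analogy (a toy numpy sketch with made-up sine data and arbitrary degree choices, nothing to do with how OpenAI actually trains): fit polynomials of increasing degree to a handful of noisy points, then evaluate them outside the range they were fit on. The high-degree fit matches the training points almost exactly and falls apart on new inputs.

    # Toy sketch of the spreadsheet analogy: fit polynomials of increasing
    # degree to a few noisy points, then evaluate outside the fitted range.
    import numpy as np
    from numpy.polynomial import Polynomial

    rng = np.random.default_rng(0)
    x_train = np.linspace(0, 5, 10)
    y_train = np.sin(x_train) + rng.normal(0, 0.1, x_train.shape)

    x_new = np.array([6.0, 7.0, 8.0])  # inputs unlike the training data

    for degree in (2, 9):
        p = Polynomial.fit(x_train, y_train, deg=degree)
        train_err = np.abs(p(x_train) - y_train).max()  # near zero for degree 9
        extrap = p(x_new)                               # diverges badly for degree 9
        print(f"degree {degree}: max training error {train_err:.3f}, "
              f"predictions at x=6..8 -> {np.round(extrap, 2)}")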

utyop22 5 days ago | parent

"I think OpenAI is biased to thinking that adding more parameters and training better will fix all ills."

Their whole existence depends on this happening. Else they go bust.