EmiDub 3 days ago

Why do we keep getting these LLM studies that are completely unsurprising? Yes, the probabilistic text generator is more likely to output a correct answer when the input more closely matches its training sources than when you add random noise to the prompt. They don't actually "understand" maths. It's worrying how much research seems to operate from the premise that they do.

pnt12 3 days ago | parent [-]

"It’s worrying how much research seems to operate from the premise that they do."

They are testing a hypothesis; we don't know whether they're optimistic or pessimistic about it. Is that even relevant?

They have shown that LLMs can be easily confused by non sequiturs, and that is interesting. Maybe prompts to LLMs should be more direct and focused. Maybe this points to a problem with end users interacting with LLMs directly: many people have difficulty writing in a clear and direct way, and probably even more when speaking!