pornel 3 days ago

I've wasted time debugging phantom issues caused by LLM-generated tests that misused an API.
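A hypothetical sketch of the failure mode (not the actual API from my incident): an LLM-written test misuses Python's re.match, which only matches at the start of a string, so the test fails even though the code under test is correct, and you go hunting for a bug that doesn't exist.

    import re

    def extract_error_code(log_line):
        """Code under test: pull an error code like E1234 out of a log line."""
        m = re.search(r"E\d{4}", log_line)
        return m.group(0) if m else None

    def test_extract_error_code():
        line = "2024-01-01 12:00:00 E1234 disk full"
        assert extract_error_code(line) == "E1234"  # passes: the code is fine
        # LLM-written assertion: re.match anchors at the *start* of the
        # string, so this returns None and the assert fails -- a phantom
        # failure that points you at perfectly good production code.
        assert re.match(r"E\d{4}", line) is not None

    if __name__ == "__main__":
        test_extract_error_code()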

Brainstorming/explanations can be helpful, but also watch out for Gell-Mann amnesia (trusting the output on topics you can't verify even after catching it being wrong on topics you can). It's annoying that LLMs always sound smart whether they're saying something smart or not.

Miraste 3 days ago

Yes, you can't use any of the heuristics you develop for human writing to decide if the LLM is saying something stupid, because its best insights and its worst hallucinations all have the same formatting, diction, and style. Instead, you need to engage your frontal cortex and rationally evaluate every single piece of information it presents, and that's tiring.

valenterry 3 days ago

It's like listening to a politician or lawyer, who might talk absolute bullshit in the most persuasive words. =)