hwers 8 hours ago

I agree, it can be incredibly frustrating at times. My rule is that if it "compiles" in my brain as an understood idea, then I accept it. I also push back a lot (sometimes it points out good errors in my thinking, sometimes it admits it hallucinated). Real humans hallucinate a lot as well, or confidently state subtly wrong ideas, so it's a good habit anyway. It's basically the same approach as when you're presented with a "formula" for something in school: if I don't know how to derive/prove it, I don't accept it as part of my memorized or accepted toolkit (and try to forget it); if it fits with the rest of my network of understood ideas, I do. It's annoying, but still more time-efficient than trawling through lecture slides full of domain-specific language, etc.

TimTheTinker 32 minutes ago | parent | next [-]

> My rule is that if it “compiles” in my brain as an understood idea then i accept it.

Unfortunately, individual people are nowhere near as reliable as a compiler at checking claims against reality. We are particularly susceptible to flattery and other emotional manipulation, which LLMs frequently employ. This becomes especially problematic when you ask for feedback on an idea.

In that case, a useful hack is to frame prompts as if you're an impartial observer and want help evaluating something, not as if the idea under evaluation is your own.

utopiah 7 hours ago | parent | prev | next [-]

> Real humans hallucinate a lot as well or confidently state subtly wrong ideas, it’s a good habit anyway.

I think that's actually deeply different. If a human keeps apologizing because they've been caught in a lie, or even just a mistake, you distrust them a LOT more. It's not normal to shrug off a problem and then REPEAT it.

I imagine the cost of a mistake is exponential, not linear. So when somebody says "oops, you got me there!" I don't distrust them just marginally more; I distrust them a LOT more, and it will take a ton of effort, if it's even feasible, to get back to the initial level of trust.

I do not think it's at all equivalent to what "real humans" do. Yes, we make mistakes, but the humans you trust and want to partner with are precisely the ones who are accountable when they make mistakes.

qsera 5 hours ago | parent | prev [-]

> Real humans hallucinate...

You seem to have a different understanding of what "hallucination" means in the context of neural networks.

Real humans will not make up a nonexistent API and implement a solution with it (unless they do it on purpose).