coldtea 3 hours ago:
> Human programmers don't usually hallucinate things out of thin air

Oh, you wouldn't believe how much they do that too, or are unreliable in similar ways. Bullshitting, thinking they tested X when they didn't, misremembering things, confidently declaring that X is the bottleneck and spending weeks refactoring without measuring (only for it to turn out not to be), the list goes on.

> So no, they aren't working the exact same way.

However they work internally, most of the time, current agents (say, from the last year onward) "describe the issue exactly in the way a human programmer would".
qsera 3 hours ago:
That is not hallucinating... LLM hallucination is not an edge case; it is how they generate output 100% of the time. Mainstream media only calls it "hallucination" when the output is wrong, but from the point of view of an LLM, it is working exactly as it is supposed to.
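A minimal sketch of what I mean (toy numbers, not any real model's API): every token comes from the same softmax-and-sample loop, so a "correct" answer and a fabricated one are produced by the exact same mechanism.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_next_token(logits: np.ndarray, temperature: float = 1.0) -> int:
    """Softmax over the logits, then sample -- the only generation mechanism."""
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    return int(rng.choice(len(probs), p=probs))

# Hypothetical vocabulary and logits for the next token after
# "The capital of Australia is":
vocab = ["Canberra", "Sydney", "Melbourne"]
logits = np.array([2.0, 1.6, 0.4])  # plausible-sounding wrong answers score high too

counts = {w: 0 for w in vocab}
for _ in range(1000):
    counts[vocab[sample_next_token(logits)]] += 1
print(counts)  # roughly {'Canberra': ~530, 'Sydney': ~360, 'Melbourne': ~110}

# "Sydney" (wrong) is generated the same way as "Canberra" (right);
# the label "hallucination" is only applied after the fact, by us.
```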