glenstein | 9 hours ago
Simianwords said: "use GPT 5 with thinking and search disabled and get it to give you inaccurate facts for non niche, non deep topics," noting that mistakes were possible but rare. JustExAWS replied with an example of GPT getting Python code wrong and suggested it was a counterexample. Simianwords correctly pointed out that their comment had specified thinking mode for factual answers on non-niche topics, and posted a link showing the Python answer coming out right with thinking enabled.

That's when you entered, suggesting that Simianwords was "missing" the point that GPT (without distinguishing thinking from regular mode) was "not always right." But they had already acknowledged multiple times that it was not always right: they said the accuracy was "high enough," noted that LLMs get coding wrong, and reiterated that their challenge was specifically about thinking mode.

You, again without acknowledging the criteria they had laid out, insisted this was cherry-picking, missing the point that they had been consistent from the beginning and had invited anyone to give a counterexample. At no point since have you demonstrated an awareness of those criteria, despite your protestations to the contrary. Instead of engaging with any of the details, you're insulting me and retreating into irritated resentment.
OnlineGladiator | 8 hours ago
Thank you for repeating yourself again. It's really hammering home the point. Please, continue. |