torginus | 4 days ago
I have noticed this too. Often when one model volunteered a wrong answer, such as making up a nonexistent API, I asked another model and it gave me the exact same thing! It's highly unlikely that two totally independent models would invent the same fictional thing. There must be something strange going on (most likely training on each other's wrong outputs, but I don't know).
joseda-hg | 4 days ago | parent
I've been burned by getting a deprecated version of an API, or by a model hallucinating that a method from library X should exist in library Y because the two are similar.