low_tech_love 3 days ago

I think the problem is that for every person who actually understands that ChatGPT should not be used for objective things like a die roll, there are 10 or 20 who would say “well, it looks ok, and it’s fast, convenient, and it passes nicely for an answer”. People are pushing the boundaries and waiting for the backlash, but the backlash never actually comes… so they keep pushing.
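For contrast, an actual die roll is a one-liner with a real RNG. A minimal Python sketch (the function name is just illustrative) that draws from the OS entropy source, which an LLM sampling tokens cannot guarantee:

```python
import secrets

def roll_die(sides: int = 6) -> int:
    """Return a uniformly distributed roll of an n-sided die."""
    # secrets.randbelow draws from the OS entropy source, so every
    # face is equally likely -- unlike asking an LLM for a "roll",
    # whose token probabilities are skewed by its training data.
    return secrets.randbelow(sides) + 1

print(roll_die())
```

The point is that uniformity here is a provable property of the generator, not something that merely "passes nicely for an answer".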

Think about this: suppose you’re reading a scientific paper and the author writes “I did a study with 52 participants, and here are the answers”. Would there be any reason to believe that data is real?

mmcwilliams 3 days ago | parent [-]

I agree that the fundamental problem is a misunderstanding of what transformer models produce and how, but the fact that people don't get bitten until far down the road is something service providers need to address, not everyone else.

I'm not sure I follow your hypothetical. The author making the claim in a public paper can be contacted for the data. It can be verified. Auditing the internals of an LLM, especially a closed one, is not the same.