▲ | TacticalCoder 3 days ago
> You never have a clear JPEG of a lamp, compress it, and get a clear image of the Milky Way, then reopen the image and get a clear image of a pile of dirt.

Oh, but it's much worse than that: because most LLMs aren't deterministic in the way they operate [1], you can get a pristine image of a different pile of dirt every single time you ask.

FWIW I use LLMs, but I cannot integrate them into anything I produce when their output ain't deterministic.

[1] There are models where, if you have the same "model + prompt + seed", you're at least guaranteed to get the same output every single time.
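For concreteness, a minimal sketch of that seeded setup (assuming the Hugging Face transformers library and a small local checkpoint; the model name and prompt are just placeholders):

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    model.eval()

    inputs = tok("A clear JPEG of a lamp", return_tensors="pt")

    torch.manual_seed(42)  # fix the seed immediately before sampling
    out = model.generate(**inputs, do_sample=True, top_p=0.9,
                         max_new_tokens=20)
    print(tok.decode(out[0]))
    # Same model + prompt + seed (on the same hardware and library
    # versions) reproduces the same tokens on every run.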
|
▲ | ACCount37 3 days ago | parent | next [-]
| "Deterministic" is overrated. Computers are deterministic. Most of the time. If you really don't think about all the times they aren't. But if you leave the CPU-land and go out into the real world, you don't have the privilege of working with deterministic systems at all. Engineering with LLMs is closer to "designing a robust industrial process that's going to be performed by unskilled minimum wage workers" than it is to "writing a software algorithm". It's still an engineering problem - but of the kind that requires an entirely different frame of mind to tackle. |
▲ | latexr 3 days ago | parent [-]

And one major issue is that LLMs are largely being sold and understood more like reliable algorithms than what they really are. If everyone understood the distinction and their limitations, they wouldn’t be enjoying this level of hype, or leading to teen suicides and people giving themselves centuries-old psychiatric illnesses. If you “go out into the real world”, you learn that people do not understand LLMs aren’t deterministic and that they shouldn’t blindly accept their outputs.

https://archive.ph/rdL9W
https://archive.ph/20241023235325/https://www.nytimes.com/20...
https://archive.ph/20250808145022/https://www.404media.co/gu...
▲ | ACCount37 3 days ago | parent [-]

It's nothing new. LLMs are unreliable, but in the same ways humans are.
▲ | latexr 3 days ago | parent | next [-]

But LLM output is not being treated the same as human output, and that comparison is both tired and harmful. People routinely act like “this is true because ChatGPT said so” when they wouldn’t do the same for any random human. LLMs aren’t being sold as unreliable. On the contrary, they are being sold as the tool which will replace everyone and do a better job at a fraction of the price.
▲ | ACCount37 3 days ago | parent [-]

That comparison is more useful than the alternatives. Anthropomorphic framing is one of the best framings we have for understanding what properties LLMs have. "LLM is like an overconfident human" certainly beats both "LLM is like a computer program" and "LLM is like a machine god". It's not perfect, but it's the best fit at two words or less.
▲ | krupan 3 days ago | parent | prev [-]

Um, no. They are unreliable at a much faster pace and larger scale than any human. They are more confident while being unreliable than most humans (politicians and other bullshitters aside, most humans admit when they aren't sure about something).
▲ | latexr 3 days ago | parent | prev [-]
> you can get a pristine image of a different pile of dirt every single time you ask.

That’s what I was trying to convey with the “then reopen the image” bit. But I chose a different image of a different thing rather than a different image of a similar thing.