achierius 4 days ago

> I generally will respond to stuff like this with "people do this, too"

But why? You're making the assumption that everyone using these things is trying to replace "average human". If you're just trying to solve an engineering problem, then "humans do this too" is not very helpful -- e.g. humans leak secrets all the time, but it would be quite strange to point that out in the comments on a paper outlining a new Specter attack. And if I were trying to use "average human" to solve such a problem, I would certainly have safeguards in place, using systems that we've developed and, over hundreds of years, shown to be effective.

saurik 3 days ago

Well, if you are going to use an LLM--a giant black box with no hope, any time soon, of being proven anywhere near as reliable as a CPU, and one trained explicitly on input data that makes its limitations remarkably similar to a human's--then you need to get used to using it as a replacement for the "average human", and do everything you can to convince yourself it is a human, so that you don't forget to add all of those safeguards we have shown to be effective.

empath75 3 days ago

One can talk about LLMs in contexts that aren't about engineering but about questions like "Do LLMs think?" or "Are LLMs intelligent?" People _frequently_ point to some failure mode of LLMs as dispositive proof that LLMs are incapable of thinking or aren't intelligent, in which case it is relevant that humans, who are universally agreed to be intelligent, frequently make similar mistakes.