BeetleB 13 hours ago
> You're blaming the user for having a bad experience as a result of not using the service "correctly".

Definitely. Just as I used to blame people for misusing search engines in the pre-LLM era. Or for using Wikipedia to get non-factual information. Or for using a library as a place to meet with friends and have lunch (in a non-private area). If you're going to try to use a knife as a hammer, yes, I will fault you. I expect that if someone plans to use a tool, they own the responsibility of learning how to use it.

> If you're taking the position that it's the user's fault for asking LLMs a question it won't be good at answering, then you can't simultaneously advocate for not censoring the model. If it's the user's responsibility to know how to use ChatGPT "correctly", the tool (at a minimum) should help guide you away from using it in ways it's not intended for.

Documentation, manuals, training videos, etc.

Yes, I am perhaps a greybeard. And while I do like that many modern parts of computing are designed to be easy to use without any training, I am against treating that as a minimum standard all tools must meet. Software is the only branch of engineering where "self-explanatory" seems to be the expectation. You don't buy a board game hoping it will just be self-evident how to play. You don't buy a pressure cooker hoping it will be safe to use without learning how to operate it.

So yes, I do expect users to learn how to use the tools they use.