lolinder 8 months ago
What distinguishes LLMs from classical computing is that they're very much not pedantic. Because the model is predicting what human text would follow a given piece of content, you can generally expect it to react approximately the way a human would in writing. In this example, if a human responded that way I would assume they were either being passive-aggressive, were autistic, or spoke English as a second language. A neurotypical native speaker acting in good faith would invariably interpret the question as a request, not a question.
pbhjpbhj 8 months ago
In your locality, perhaps. I've asked LLM systems "can you..." questions where I genuinely am asking about their capability and allowed parameters of operation. Apparently you think that means I'm brain-damaged?