jacquesm a day ago
The problem is stated clearly enough that any human we ask will sooner or later see that there is an optimum, and that the optimum relies on understanding. And no, the problem is not 'not clearly stated': it is complete as it is, and you are wrong about your guess. If machines or people think this is about weight lifting, they are free to ask follow-up questions. But even in the weight-lifting case the answer is the same.
red75prime a day ago
Illusion of transparency. You are imagining yourself asking this question while standing in the gym and looking at the bar (or something like that). I, for example, have no idea how the weights are attached or which removal actions are allowed. Yes, LLMs have a tendency to run with some interpretation of a question without asking follow-up questions; that's probably a consequence of how they were RLHF'd.