themafia an hour ago:
> We're not actually on the right track to achieve real intelligence. Real intelligence means you have to say "I don't know" when you don't know, ask for help, or even refuse to help (with the subtext that you don't want to appear stupid). The models could ostensibly do this when they have low confidence in their own results, but they don't. What I don't know is whether that's because it would be very computationally difficult, or because it would harm the reputation of the companies charging a good sum to use them.
cmrdporcupine 27 minutes ago:
That's just not how they work, really. They don't know what they don't know, and their process requires an output. I think they're getting better at it, but that's likely just the parameter counts of the SOTA models getting bigger and bigger more than anything.
colechristensen an hour ago:
You can TELL the models to do this and they'll follow your prompt: "Give me your answer and rate each part of it for certainty by percentage," or similar.
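A minimal sketch of that technique: wrap the user's question in an instruction asking the model to self-rate its certainty. The helper name and wording here are illustrative, not from any particular API; the resulting string would be sent as the prompt to whatever chat model you use.

```python
def with_certainty_ratings(question: str) -> str:
    """Append a self-rating instruction (per colechristensen's suggestion)
    to a question before sending it to a chat model."""
    return (
        f"{question}\n\n"
        "Give me your answer and rate each part of it "
        "for certainty by percentage (0-100%)."
    )

# Example: the wrapped prompt you would send to the model.
print(with_certainty_ratings("Why is the sky blue?"))
```

Whether the percentages the model emits are actually calibrated is a separate question; this only changes what you ask for.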
bluefirebrand an hour ago:
My theory is that the people building the models, and in charge of directing where they go, love the sycophantic yes-man behavior the models display. They don't like hearing "I don't know."