captainkrtek 21 hours ago
This is a good observation. I've noticed this as well. Unless I preface my question with the context that I'm weighing whether something may or may not be a bad idea, its inclination is heavily skewed positive until I point out a flaw or risk.
aaronbaugher 21 hours ago | parent
I asked Grok about this: "I've heard that AIs are programmed to be helpful, and that this may lead to telling users what they want to hear instead of the most accurate answer. Could you be doing this?" It said it does try to be helpful, but not at the cost of accuracy, and then pointed out places in a few of its previous answers where it had tried to be objective about the facts and where it had separately been helpful with suggestions. I had to admit it made a pretty good case. Since then, it tends to break its longer answers to me into a section of "objective analysis" followed by the other stuff.