macintux 5 days ago
An LLM would, surely, have to:

* Know whether its answers are objectively beneficial or harmful.
* Know whether its answers are subjectively beneficial or harmful in the context of the current state of a person it cannot see, cannot hear, cannot understand.
* Know whether the user's questions, over time, trend in the right direction for that person.

That seems awfully optimistic, unless I'm misunderstanding the point, which is entirely possible.
dcrazy 5 days ago | parent
It is definitely optimistic, but I was steelmanning the optimist's argument.