bee_rider 2 hours ago
Does it do this for really cut-and-dried problems? I've noticed that ChatGPT will put a lot of effort into (retroactively) "discovering" a basically valid alternative interpretation of something it said previously, if you object on good grounds. It's like it's trying to evade admitting it made a mistake while also finding some way to satisfy your objection. Fair enough, if slightly annoying. But I have also caught it on straightforward matters of fact, and then it will apologize. Sometimes in an over-the-top fashion…