fxtentacle | 3 hours ago
Same here. While LLMs sometimes work surprisingly well, I also encounter edge cases where they fail surprisingly badly multiple times per day. My guess is that other people just don't bother to check what the AI says, which would cause them not to notice omission errors. Like when I was trying to find a physical store again with ChatGPT Pro 5.4 and asked it to prepare a list of candidates, but the shop just wasn't in the list, despite GPT claiming the list was exhaustive. When I then found it manually and asked GPT for advice on how I could improve my prompting in the future, it went full "aggressively agreeable" on me with "Excellent question! Now I can see exactly why my searches missed XY - this is a perfect learning opportunity. Here's what went wrong and what was missing: ..." followed by 4 sections with 4 subsections each. It's great to see the AI reflect on how it failed. But it's also kind of painful when you know that it'll forget all of this the moment the text is sent to me, and that it will never learn from this mistake and do better in the future.
jmalicki | 3 hours ago
"I also encounter edge cases where they fail surprisingly badly multiple times per day. " If 80% of the time they 10x my output, and the other 20% I can say "well they failed, I guess this one I have to do manually" - that's still an absolutely massive productivity boost. | |||||||||||||||||
senordevnyc | 2 hours ago
> "Like when I was trying to find a physical store again with ChatGPT Pro 5.4 and asked it to prepare a list of candidates"

I wonder if it was getting blocked on searches or something, and just didn't tell you.
pdntspa | 3 hours ago
[dead]