▲ collabs an hour ago
Or, it will acknowledge that it made a mistake and continue to make the same mistake again. I asked Claude to generate an HTML page about PowerShell 7. It gave me a page saying 7.4 was the latest LTS release. I corrected it with links showing 7.6 was released in March and asked it to regenerate with the latest information. It generated basically the same page with the same claim that 7.4 was the latest release.
▲ ericmay an hour ago | parent [-]
> Or, it will acknowledge that it made a mistake and continue to make the same mistake again.

People do this too, though. At least the AI generally tries to follow the instructions you give it, even when you are lacking clarity in the details.

I feel like it's similar to the self-driving car problem. The car could have 99.9999% reliability and drive much better and more safely than a human, yet folks will still freak out about a single mistake, even though actual humans today drive the wrong way down the highway, crash into buildings, drive drunk, steal cars, and do all sorts of other absolutely stupid things.

We need to move away from the idea that because it's an AI system it should give you perfect responses. It's not a deterministic system and it can be wrong, though it should get better over time. Your Google search results are wrong all the time too. The NYT writes things that are factually incorrect. Why do we hold these models to such a high standard when we don't apply it elsewhere?