recursivegirth | 6 hours ago
Fundamental flaw with LLMs: it's not that they aren't trained on a concept, it's that in any given context they can weight the antithesis of that concept more heavily. Of course, that assumes the counterargument also exists in the training corpus. I've always wondered what these flagship AI companies are doing behind the scenes to set up guardrails. Golden Gate Claude[1] was a really interesting example. I haven't seen much additional research on the subject, at least not anything public-facing.
yesitcan | 2 hours ago | parent
This is the most Hacker News reply to a humorous comment.