csnover | a day ago
Your response is a non-sequitur that does not answer the question you yourself posed, and you are responding to yourself with a chatbot. Given that it is a non-sequitur, presumably no work was done to verify whether the LLM's output was hallucinated, so it is probably also wrong in some way. LLMs are token predictors, not fact databases; the idea that one would be reproducing a “historical exploit” is nonsensical. Do you believe what it says because it says so in a code comment? Please remember what LLMs are actually doing and set your expectations accordingly.

More generally, people don't participate in communities to have conversations with someone else's chatbot, and especially not to vicariously read someone else's conversation with their own chatbot.