SquareWheel | 6 hours ago
Jailbreaking an LLM is little more than convincing it to teach you how to hotwire a car, against its system prompt. It doesn't unlock any additional capability or deeper reasoning, so please don't read any deep meaning into such conversations. The model is just responding to your own inputs with similar outputs; if you impart meaning to something, it will respond in kind.

Blake Lemoine was the first to make this mistake, and now many others are doing the same. Remember that at the end of the day, you're still just interacting with a token generator. It's predicting what word comes next, not revealing any important truths.

edit: Based on your edit, I regret feeling empathy for you. Some people are really struggling with this issue, and I don't see any value in pretending to be one of them.
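
To make the token-generator point concrete, here's a minimal sketch of greedy next-token generation using the Hugging Face transformers library. The model name "gpt2" and the prompt are just placeholders for illustration, not the system anyone is actually chatting with; the point is only that each step scores the vocabulary and appends the single most likely token.

    # Minimal sketch: greedy next-token generation with Hugging Face
    # transformers. "gpt2" is an example model chosen for size only.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    input_ids = tokenizer("At the end of the day, an LLM is",
                          return_tensors="pt").input_ids

    # Autoregressive loop: score every vocabulary token, append the most
    # likely one, repeat. There is no hidden inner monologue, just this.
    for _ in range(12):
        with torch.no_grad():
            logits = model(input_ids).logits        # shape (1, seq_len, vocab)
        next_id = logits[0, -1].argmax().view(1, 1)  # highest-scoring token id
        input_ids = torch.cat([input_ids, next_id], dim=1)

    print(tokenizer.decode(input_ids[0]))

Chat products layer sampling, a system prompt, and fine-tuned weights on top of this, but the underlying operation is still that loop.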