cube00 | 8 days ago
> Some people are also more susceptible to various too-good-to-be-true scams

Unlike a regular scam, there's an element of "boiling frog" with LLMs. It can start out reasonably, but it shifts very slowly over time. Unlike scammers looking for their payday, this is unlimited and it has all the time in the world to drag you in. I've noticed it working content from conversations months old back into its replies. The scary thing is that's only what I've noticed; I can only imagine how much it's tailoring everything for me in ways I don't notice. Everyone should be regularly clearing their past conversations and disabling saving/training.
bonoboTP | 8 days ago | parent | next
Somewhat unrelated, but I've also noticed that ChatGPT now sees the overwritten "conversation paths", i.e. what happens when you scroll back and edit one of your messages. Previously the LLM would simply use the new version of that message plus the original prior exchange, and anything after the edited message was no longer visible to it on the new, edited path. But now it definitely knows those later messages as well; it often refers to things that are clearly no longer included in the messages visible in the UI.
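My guess (and it's only a guess, every name below is made up) is that edits fork a tree of messages rather than overwriting them, and whatever builds the model's context or memory now walks the whole tree instead of just the path the UI shows. A toy sketch of that idea:

    from dataclasses import dataclass, field

    @dataclass
    class Node:
        role: str                 # "user" or "assistant"
        text: str
        children: list["Node"] = field(default_factory=list)

    def visible_path(root: Node) -> list[Node]:
        # What the UI shows: follow the latest child at each fork,
        # so an edited message hides the branch it replaced.
        path, node = [root], root
        while node.children:
            node = node.children[-1]
            path.append(node)
        return path

    def all_branches(root: Node) -> list[Node]:
        # What a memory/personalization layer could see if it walked
        # every branch, including the ones abandoned by an edit.
        out = [root]
        for child in root.children:
            out.extend(all_branches(child))
        return out

If something like the second function feeds the model's long-term memory, you'd get exactly the behavior above: references to messages that are no longer visible on the current path.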
jmount | 8 days ago | parent | prev
Really makes me wonder if this is a reproduction of a pattern of interaction from the QA phase of LLM refinement. Either way, it must be horrible to be QA for these things.