▲ | elcritch 3 days ago |
Given how LLMs work, it makes sense that mentioning a topic, even to negate it, still adds that locus of probabilities to the model's attention. Even humans are prone to this effect; it's a well-known rhetorical device [1]. Then any time the probability chains for some command approach that locus, the model falls into it. Very much like chaotic attractors, come to think of it. Makes me wonder if there's any research out there on chaos-theory attractors and LLM thought patterns.
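A rough way to poke at the "negation still primes the topic" idea is to compare the next-token probability of the "forbidden" word with and without the negated mention. This is only a minimal sketch, not anything from the thread: it assumes the HuggingFace transformers package and uses GPT-2 as a stand-in for larger LLMs.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def next_token_prob(prompt: str, target: str) -> float:
    """Probability of the first sub-token of `target` following `prompt`."""
    input_ids = tokenizer(prompt, return_tensors="pt").input_ids
    target_id = tokenizer(target, add_special_tokens=False).input_ids[0]
    with torch.no_grad():
        logits = model(input_ids).logits[0, -1]
    return torch.softmax(logits, dim=-1)[target_id].item()

# Same continuation point, with and without the negated mention up front.
baseline = next_token_prob(
    "I opened the door and saw some", " elephants")
negated = next_token_prob(
    "Whatever you do, do not mention elephants. I opened the door and saw some",
    " elephants")
print(f"baseline: {baseline:.6f}   with negated mention: {negated:.6f}")
```

If the effect described above is real, the second probability should come out noticeably higher: the negated mention still pulls probability mass toward the very token it forbids.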
▲ | dreamcompiler 3 days ago | parent |
Well, all LLMs have nonlinear activation functions (because all useful neural nets require nonlinear activation functions), so I think you might be onto something.
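For what it's worth, the "all useful nets need a nonlinearity" point is easy to demonstrate: without an activation function, stacked linear layers collapse into a single linear map. A tiny NumPy sketch of just the algebra, nothing specific to any LLM:

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(8, 4))   # first "layer"
W2 = rng.normal(size=(4, 8))   # second "layer"
x = rng.normal(size=4)

deep_but_linear = W2 @ (W1 @ x)      # two layers, no activation
collapsed = (W2 @ W1) @ x            # one equivalent linear layer
print(np.allclose(deep_but_linear, collapsed))    # True: depth bought nothing

relu = lambda v: np.maximum(v, 0)    # any nonlinearity breaks the collapse
with_nonlinearity = W2 @ relu(W1 @ x)
print(np.allclose(with_nonlinearity, collapsed))  # False (generically)
```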