kanodiaayush · 12 days ago
I tried it; I have a very similar but still different use case. I wonder if you have thoughts around how much of this is our own context management vs context management for the LLM. Ideally, I don't want to do any work for the LLM: it should be able to figure out from the chat what 'branch' of the tree I'm exploring, and then the artifact is purely for one's own use.
mdebeer · 12 days ago
Hi, Matti here. Very interesting that you bring this up; it was a big point of discussion whilst Jamie and I were building. One of the big issues we faced with LLMs is that their attention gets diluted when you have a long chat history: with a large amount of context, they often can't pick out the details your prompt relates to. I'm sure you've noticed this once a chat gets very long.

Instead of trying to develop an automatic system to decide what context your prompt should use (i.e. which branch you're on), we opted to make organising your tree a very deliberate action. This gives you a lot more control over what the model sees, and ultimately how good the responses are. As a bonus, if a model is playing up, you can go in and change the context it has by moving a node or two around.

Really good point though, and thanks for asking about it. I'd love to hear if you have any thoughts on ways to get around it automatically.
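To make that concrete, here is a minimal sketch (in Python) of the kind of structure this implies: context is just the path from the root to the active node, so sibling branches never reach the model, and moving a node is the deliberate act that changes what later prompts see. All names here (Node, branch_context, move_node) are hypothetical illustrations of the idea, not the product's actual implementation.

    from dataclasses import dataclass, field
    from typing import Optional

    @dataclass
    class Node:
        """One chat message in the tree."""
        role: str                          # "user" or "assistant"
        text: str
        parent: Optional["Node"] = None
        children: list["Node"] = field(default_factory=list)

        def add_child(self, role: str, text: str) -> "Node":
            child = Node(role, text, parent=self)
            self.children.append(child)
            return child

    def branch_context(node: Node) -> list[dict]:
        """Messages from the root down to `node`, skipping all siblings.

        Only the active branch is sent to the model, so its attention
        isn't diluted by unrelated parts of the conversation.
        """
        path, cur = [], node
        while cur is not None:
            path.append({"role": cur.role, "content": cur.text})
            cur = cur.parent
        return list(reversed(path))

    def move_node(node: Node, new_parent: Node) -> None:
        """Re-parent a node: the deliberate edit that changes the
        context every prompt under this subtree sees from now on."""
        if node.parent is not None:
            node.parent.children.remove(node)
        new_parent.children.append(node)
        node.parent = new_parent

    # Example: two branches off one root; a question asked on the
    # pricing branch never sees the marketing branch.
    root = Node("user", "Help me plan a product launch.")
    pricing = root.add_child("assistant", "Pricing options: ...")
    marketing = root.add_child("assistant", "Marketing ideas: ...")
    question = pricing.add_child("user", "Which tier converts best?")
    assert len(branch_context(question)) == 3  # root -> pricing -> question

The automatic alternative would be a classifier that maps each new prompt to a branch, and its failure mode is exactly the problem above: when it guesses the wrong branch, the user can't easily see why the model is confused.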
protocolture · 11 days ago
> I tried it; I have a very similar but still different use case. I wonder if you have thoughts around how much of this is our own context management vs context management for the LLM.

Completely subjectively, for me it's both. I have several ChatGPT tabs where it is instructed not to respond, or to briefly summarise. The system works both ways imho.