firasd | 3 days ago
A big problem with the chat apps (ChatGPT, Claude.ai) is the weird context window hijinks. ChatGPT in particular does wild stuff: sudden truncation, summarization, reinjecting 'ghost snippets', etc.

I was thinking this should be up to the user (do you want to continue this conversation with context rolling out of the window, or start a new chat?), but now I realize this is inevitable given how pricing tiers and limited compute work. The only way to get full context is to use developer tools like Google AI Studio, or a chat app that wraps the API.

With a custom chat app that wraps the API you can even inject the current timestamp into each message and just ask the LLM: btw, every 10 minutes make a new row in a markdown table that summarizes each 10-minute chunk.
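A minimal sketch of what that wrapper could look like, assuming an OpenAI-style message list. All names here (`ChatSession`, `stamp`, `SUMMARY_INTERVAL_S`) are hypothetical, and the actual network call to whatever API client you use is left out:

```python
import time
from datetime import datetime, timezone

SUMMARY_INTERVAL_S = 10 * 60  # ask for a new summary row every 10 minutes

def stamp(text, now=None):
    """Prefix a message with the current timestamp so the model can
    reason about elapsed time between messages."""
    now = now or datetime.now(timezone.utc)
    return f"[{now:%Y-%m-%d %H:%M:%S} UTC] {text}"

class ChatSession:
    """Keeps the full history client-side (never truncated) and
    periodically asks the model to extend its summary table."""

    def __init__(self):
        self.messages = []  # full context, sent verbatim on every turn
        self.last_summary = time.monotonic()

    def add_user(self, text, now=None):
        self.messages.append({"role": "user", "content": stamp(text, now)})

    def maybe_request_summary(self, clock=time.monotonic):
        """Every SUMMARY_INTERVAL_S, append an instruction asking the
        model to add a row to its running markdown summary table."""
        if clock() - self.last_summary >= SUMMARY_INTERVAL_S:
            self.messages.append({
                "role": "user",
                "content": "Add a new row to the markdown summary table "
                           "covering the last 10 minutes of this chat.",
            })
            self.last_summary = clock()
            return True
        return False
```

Since the wrapper owns `self.messages`, nothing rolls out of the window unless you decide it should, which is the whole point over the hosted chat apps.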
cruffle_duffle | 3 days ago | parent
> btw every 10 minutes just make a new row in a markdown table that summarizes every 10 min chunk

Why make it time-based instead of message-based... like "every 10 messages, summarize to blah-blah.md"?