▲ | ants_everywhere 3 days ago |
> I think you misunderstand how context in current LLMs works.

Thanks, but I don't, and I'm not sure why you're jumping to that conclusion. EDIT: Oh, I think you're talking about the last bit of the comment! In the comment before it I say that feeding it the entire repo isn't a great idea. But great idea or not, people want to do it, and it illustrates that as context windows grow, they create demand for even larger ones.
▲ | brulard 2 days ago | parent |
I said that based on your saying you easily exhaust million-token context windows. I'm no expert on this, but I think current LLMs work best when you stay well short of that 1M-token limit, because a large context (reportedly) degrades response quality quickly. As I understand it, state-of-the-art usage keeps context to tens or low hundreds of thousands of tokens at most, splitting tasks across subtasks in time, or splitting context across multiple "expert" agents (see sub-agents in Claude Code).
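To make the splitting idea concrete, here's a rough Python sketch of a map-reduce pass over a repo. Everything here is illustrative: `llm` is a hypothetical stand-in for whatever completion API you use, the 50k budget and the ~4 chars/token estimate are just heuristics, not anything from a real client library.

    def llm(prompt: str) -> str:
        # Hypothetical stand-in for whatever completion API you use.
        raise NotImplementedError

    MAX_CHUNK_TOKENS = 50_000  # stay well under the model's window

    def summarize_repo(files: list[tuple[str, str]]) -> str:
        """Map-reduce over a repo so no single call nears the context limit."""
        # Group (path, text) pairs into chunks under the per-call budget.
        chunks, current, size = [], [], 0
        for path, text in files:
            est = len(text) // 4  # crude ~4 chars/token heuristic
            if current and size + est > MAX_CHUNK_TOKENS:
                chunks.append(current)
                current, size = [], 0
            current.append((path, text))
            size += est
        if current:
            chunks.append(current)

        # "Map": summarize each chunk in its own small-context call.
        summaries = [
            llm("Summarize these files:\n"
                + "\n".join(f"## {p}\n{t}" for p, t in chunk))
            for chunk in chunks
        ]
        # "Reduce": the final call sees only the compact summaries,
        # not the raw repo, so it stays far from the window limit.
        return llm("Combine these summaries into one repo overview:\n"
                   + "\n".join(summaries))

The same shape works for the sub-agent pattern: each "expert" gets only the chunk relevant to its subtask, and only their short outputs meet in one place.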