▲ | brulard 3 days ago |
I think you misunderstand how context in current LLMs works. To get the best results you have to be careful to provide only what is needed for immediate task progression, and postpone context that's needed later in the process. If you give all the context at once, you will likely get quite degraded output quality. It's like giving a junior developer his first task: you wouldn't teach him every corner of your app, you'd give him just the context he needs. It is similar with these models. Those that offered 1M or 2M tokens of context (Gemini etc.) were getting less and less useful after roughly 200k tokens in the context. Maybe models will get better at picking relevant information out of a large context, but AFAIK that is not the case today.
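As a toy sketch of that drip-feeding idea (nothing more than an illustration; call_llm, read_file, the stage names and the file paths are all hypothetical), you might split the work into stages and hand each stage only its own files plus the notes produced so far:

    # Illustrative only: stage the task and feed each stage a small,
    # targeted context instead of front-loading the whole repo.
    STAGES = [
        ("plan the change", ["docs/architecture_overview.md"]),      # hypothetical paths
        ("write the patch", ["src/billing/invoice.py"]),
        ("write the tests", ["tests/test_invoice.py"]),
    ]

    def run_task(task, read_file, call_llm):
        notes = ""
        for stage, files in STAGES:
            # only this stage's files go into the prompt
            context = "\n\n".join(read_file(f) for f in files)
            notes = call_llm(
                f"{task}\n\nStage: {stage}\n\nNotes so far:\n{notes}\n\n{context}"
            )
        return notes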
▲ | remexre 3 days ago | parent | next [-]
That's a rather anthropomorphic description; a more mechanical one might be: the attention mechanism that transformers use to find information in the context is, in its simplest form, O(n^2). For each token position, the model considers whether relevant information has been produced at the position of every other token. To preserve performance when really long contexts are used, current-generation LLMs use various tricks to consider fewer positions in the context; for example, they might only consider the 4096 "most likely" positions (de-emphasizing large numbers of "subtle hints" that something isn't correct), or they might combine multiple tokens' worth of information into a single value (losing some fine detail).
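To make the tradeoff concrete, here is a minimal numpy sketch (not any particular model's implementation) contrasting dense attention for a single query with a top-k variant that only scores the k highest-ranked positions; the function names and the k=4096 default are just illustrative:

    import numpy as np

    def dense_attention(q, K, V):
        # q: (d,), K and V: (n, d) -- one query attending over n cached positions
        scores = K @ q / np.sqrt(q.shape[0])
        weights = np.exp(scores - scores.max())
        weights /= weights.sum()
        return weights @ V                      # O(n) work per query, O(n^2) overall

    def topk_attention(q, K, V, k=4096):
        # keep only the k "most likely" positions; subtle hints outside them are lost
        scores = K @ q / np.sqrt(q.shape[0])
        keep = np.argsort(scores)[-k:]
        weights = np.exp(scores[keep] - scores[keep].max())
        weights /= weights.sum()
        return weights @ V[keep]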
▲ | ants_everywhere 3 days ago | parent | prev | next [-]
> I think you misunderstand how context in current LLMs works.

Thanks, but I don't, and I'm not sure why you're jumping to that conclusion.

EDIT: Oh, I think you're talking about the last bit of the comment! If you read the one before it, I say that feeding it the entire repo isn't a great idea. But great idea or not, people want to do it, and it illustrates that as the context window increases, it creates demand for even larger context windows.
▲ | jimbokun 3 days ago | parent | prev [-]
It seems like LLMs need to become experts at managing their OWN context: selectively grepping and searching the code to pull into context only those parts relevant to the task at hand.
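A hedged sketch of what that could look like as a wrapper around the model (not any real agent's API; all function names and limits here are made up): grep the repo for task-relevant symbols and assemble only those hits into the prompt, under a character budget.

    import subprocess

    def grep_repo(pattern, repo_dir=".", max_hits=20):
        # return up to max_hits matching lines as "file:line:text"
        out = subprocess.run(
            ["grep", "-rn", "--include=*.py", pattern, repo_dir],
            capture_output=True, text=True,
        )
        return out.stdout.splitlines()[:max_hits]

    def build_context(task, symbols, budget_chars=20_000):
        # pull in only grep hits for the symbols the task mentions,
        # stopping once the context budget is spent
        context = [f"Task: {task}"]
        used = len(context[0])
        for sym in symbols:
            for hit in grep_repo(sym):
                if used + len(hit) > budget_chars:
                    return "\n".join(context)
                context.append(hit)
                used += len(hit)
        return "\n".join(context)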