dwallin | 5 days ago
Very much agree with this. Looking at the dimensionality of a given problem space is a helpful heuristic for judging how likely an LLM is to be suitable/reliable for that task. Consider how important positional encodings are to LLM performance; the attention mechanism then also operates in that same 1-dimensional space. With multidimensional data, significant transformations to encode it into a higher-dimensional abstraction need to happen within the model itself before the model can even attempt to intelligently manipulate it.
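To make the point concrete, here's a minimal sketch (my own illustration, not anyone's model code): flattening a 2-D grid into the 1-D token sequence an LLM consumes pulls spatial neighbors far apart in sequence position, which is exactly what the 1-D positional encodings and attention then have to work around.

```python
GRID = 8  # hypothetical 8x8 grid, e.g. a tiny image or game board

def flatten_index(row, col, width=GRID):
    """Row-major position of a 2-D cell in the 1-D token sequence."""
    return row * width + col

# Two vertically adjacent cells: distance 1 in the 2-D grid...
a = flatten_index(3, 4)
b = flatten_index(4, 4)

# ...but a full row apart in the 1-D sequence that the positional
# encodings and attention actually see.
print(abs(a - b))  # prints 8
```

So a relationship that is trivially local in 2-D has to be recovered by the model from positions 28 and 36 of a flat sequence.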