imtringued 2 days ago

You can think of the K (key) matrix in attention as a neural network in which each token is turned into a tiny classifier with multiple inputs and a single output.

The softmax then converts those raw scores into weights, emphasizing the most promising matches for a given query token.

The V (value) matrix forms another neural network in which each token is turned into a tiny regressor. Each regressor takes its softmax weight as input and produces multiple outputs, which are summed across tokens to produce an intermediate token that is then fed into the MLP layer.

From this perspective the transformer architecture is building neural networks at runtime.
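The framing above can be sketched as a toy single-head attention step. In this hedged, illustrative example (shapes, names, and random data are my own, not from any particular model), each cached key row acts like a one-output classifier applied to the query, and each value row acts like a regressor whose outputs are mixed by the softmax weights:

```python
import numpy as np

def softmax(x):
    # numerically stable softmax over a 1-D score vector
    e = np.exp(x - x.max())
    return e / e.sum()

d = 4                          # head dimension (illustrative)
rng = np.random.default_rng(0)
K = rng.normal(size=(3, d))    # 3 cached keys: 3 tiny "classifiers"
V = rng.normal(size=(3, d))    # 3 cached values: 3 tiny "regressors"
q = rng.normal(size=d)         # query vector for the current token

scores = K @ q / np.sqrt(d)    # each key scores the query (one output each)
weights = softmax(scores)      # scores become mixing weights that sum to 1
out = weights @ V              # weighted sum of value rows -> intermediate token
```

The "network built at runtime" reading: `K` and `V` are assembled from whatever tokens happen to be in the context, so the classifier/regressor bank changes with every prompt.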

But there is an obvious limitation here: the LLM operates on tokens, which means it can only attend to what is in the KV-cache/context window. If the candidates are not in the context window, it can't score them.

yahoozoo a day ago

I’m not sure if I’m just misunderstanding or we are talking about two different things. I know at a high level how a transformer/LLM decides the next token in the response it is generating.

My question to the post I replied to was basically: given a coding problem and a list of possible solutions (candidates), how can an LLM generate a meaningful numerical score for each candidate, so it can say that this solution is better than that one?