▲ | ajtejankar | a day ago |
IMO LoRAs are no different from context tokens. In fact, before LoRAs, tuned prompt vectors (prompt tuning) were a popular adapter architecture. Conceptually, the only difference is that prompt adapters interact with other tokens solely through the attention mechanism, while LoRAs let you directly modify any linear layer in the model. Essentially, you can think of your KV cache as dynamically generated model weights. Moreover, I can't find the paper, but there is some evidence that in-context learning is powered by some version of gradient descent inside the model.
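To make the "directly modify any linear layer" point concrete, here's a minimal NumPy sketch of the low-rank update LoRA applies; the dimensions and variable names are made up for illustration:

```python
import numpy as np

# Hypothetical dimensions for illustration only.
d_in, d_out, rank = 8, 8, 2
rng = np.random.default_rng(0)

W = rng.normal(size=(d_in, d_out))         # frozen base weight
A = rng.normal(size=(rank, d_out)) * 0.01  # trainable low-rank factor
B = np.zeros((d_in, rank))                 # B starts at zero, so the adapter is a no-op

def base_linear(x):
    return x @ W

def lora_linear(x):
    # LoRA replaces W with W + B @ A, i.e. a rank-`rank` additive update,
    # but computes it as (x @ B) @ A to avoid materializing the full matrix.
    return x @ W + (x @ B) @ A

x = rng.normal(size=(4, d_in))
# With B initialized to zero, the adapted layer matches the base layer exactly.
assert np.allclose(lora_linear(x), base_linear(x))
```

The contrast with prompt adapters is that this edits the layer's effective weight matrix everywhere it is applied, rather than prepending vectors that only influence the output through attention.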
▲ | causal | 17 hours ago | parent |
LoRAs are more robust than context tokens: their influence remains strong over long contexts, and they do a much better job of actually changing behavior rather than mimicking a desired behavior via instruction. But even if LoRA isn't it, the point is that "skill" seems like the wrong term for something that already has a name: instructions. These are instruct-tuned models. Given instructions, they can do new things; this push to rebrand that as a "skill" just seems like marketing.