pastel8739 7 hours ago
Ok, how about this?
It is trivial to get an LLM to produce new output; that's all I'm saying. It is strictly false that LLMs will only ever output character sequences that have been seen before; clearly they have learned something deeper than that.
kube-system 6 hours ago
All of the data is still in the prompt; you are just asking the model to do a simple transform. I think there are examples of what you're looking for, but this isn't one of them.