arbot360 3 days ago:
What do you make of this article? They used an autoregressive genomic model to run in-context learning experiments and compared the results against language models, showing that ICL behavior is not exclusive to language models. https://arxiv.org/html/2511.12797v1
adamzwasserman 3 days ago:
This is great, thanks for the link. IMHO it actually supports the broader claim: if ICL emerges in both language models and genomic models, that suggests the phenomenon is about structure in the data, not something special about neural networks or transformers per se.

Genomes have statistical regularities (motifs, codon patterns, regulatory grammar). Language has statistical regularities (morphology, syntax, collocations). Both are sequences with latent structure, and similar architectures trained on either will pick up and exploit that structure.

That's consistent with my "instrumentation" view: the transformer is revealing structure that already exists in the domain, whether that domain is English, French, or DNA. The architecture is the microscope; the structure was already there.
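If you want to poke at this yourself, here's a rough sketch of the kind of ICL probe these papers run: feed the model a context of repeated input/output pairs and check whether per-token loss falls as examples accumulate. To be clear, this uses plain GPT-2 via HuggingFace and a toy word-translation motif standing in for genomic regularities; it's not the paper's model or setup:

    # Minimal ICL probe: does per-token loss drop as more in-context
    # examples accumulate? Sketch only; GPT-2 stands in for any
    # autoregressive sequence model.
    import torch
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    tok = GPT2TokenizerFast.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

    # A repeated pattern with variation, standing in for the
    # statistical regularities (motifs, collocations) discussed above.
    examples = ["cat -> chat", "dog -> chien", "house -> maison", "water -> eau"]
    prompt = "\n".join(examples)

    ids = tok(prompt, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits

    # Per-token negative log-likelihood: shift logits against next tokens.
    nll = torch.nn.functional.cross_entropy(
        logits[0, :-1], ids[0, 1:], reduction="none")

    # Split the loss at newline boundaries, one segment per example.
    newline_id = tok("\n").input_ids[0]
    cuts = (ids[0] == newline_id).nonzero().squeeze(-1).tolist()
    start = 0
    for i, end in enumerate(cuts + [ids.shape[1] - 1]):
        print(f"example {i}: mean NLL {nll[start:end].mean():.3f}")
        start = end

If ICL is present, mean NLL should trend down for later pairs, because the earlier pairs let the model infer the mapping in context rather than from weights. The genomic version is the same measurement with motif-bearing sequences instead of word pairs.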