djoldman 5 days ago

One interesting property of LLMs is that, for some tasks, their precision is garbage but their recall is high (in essence: their top-5 answers are wrong, but the top 100 contain the right answer).
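
A minimal sketch of what that gap looks like, using made-up candidate answers (the data and the `hit_at_k` helper are hypothetical, just to illustrate the top-5 vs. top-100 point):

```python
def hit_at_k(ranked_candidates, correct_answer, k):
    """True if the correct answer appears anywhere in the top-k candidates."""
    return correct_answer in ranked_candidates[:k]

# Hypothetical ranked outputs from a model for one question:
# the right answer is buried at rank 41.
candidates = (
    [f"wrong_{i}" for i in range(40)]
    + ["right answer"]
    + [f"wrong_{i}" for i in range(40, 99)]
)
correct = "right answer"

print(hit_at_k(candidates, correct, 5))    # False: top-5 precision is garbage
print(hit_at_k(candidates, correct, 100))  # True: recall@100 still finds it
```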

As it relates to infinite context: if one pairs the above with some kind of intelligent "solution-checker," it would be interesting to see whether models could provide value across absolutely monstrous text sizes where it's critical to tie together two facts that are worlds apart.
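
A sketch of that generate-then-verify pattern, under stated assumptions: `llm_propose` and `check_solution` are hypothetical stand-ins, where the first asks a model for many candidate answers over a huge document and the second is a cheaper or stricter verifier that filters them.

```python
from typing import Callable, Iterable, Optional

def propose_and_verify(
    document: str,
    question: str,
    llm_propose: Callable[[str, str, int], Iterable[str]],  # yields n candidate answers
    check_solution: Callable[[str, str], bool],              # verifies one candidate
    n_candidates: int = 100,
) -> Optional[str]:
    """Lean on high recall rather than precision: generate many candidates,
    then let the checker keep the first one that actually holds up."""
    for candidate in llm_propose(document, question, n_candidates):
        if check_solution(question, candidate):
            return candidate
    return None  # high recall didn't surface a verifiable answer this time
```

The value here hinges entirely on the checker being much cheaper or more reliable than the model's own top-ranked guess.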

mormegil 5 days ago

This probably didn't belong here?

djoldman 5 days ago

It didn't! Thanks