andriy_koval 2 hours ago
> "Frontier LLMs can do it with enough context" is not really a strong argument against fine-tuning, because they're expensive to run. I'm not an expert in this area, but I wonder: if a large context is cached, is it actually cheap to serve, and would frontier models then be cost-efficient in that setting too?
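A quick back-of-envelope helps frame the question. Many providers bill cached input tokens at a steep discount relative to fresh input tokens, so if most of a large prompt is cache-hit, the per-request cost drops sharply. The prices and token counts below are purely illustrative assumptions, not any provider's real rates:

```python
def request_cost(prompt_tokens, cached_tokens, output_tokens,
                 in_price, cached_price, out_price):
    """Dollar cost of one request; prices are per million tokens."""
    uncached = prompt_tokens - cached_tokens
    return (uncached * in_price
            + cached_tokens * cached_price
            + output_tokens * out_price) / 1e6

# Hypothetical prices: $3/M fresh input, $0.30/M cached input, $15/M output.
# 100k-token prompt, 500-token answer.
no_cache = request_cost(100_000, 0, 500, 3.0, 0.30, 15.0)
mostly_cached = request_cost(100_000, 95_000, 500, 3.0, 0.30, 15.0)
print(f"no cache: ${no_cache:.4f}, 95% cached: ${mostly_cached:.4f}")
```

Under these made-up numbers the mostly-cached request is roughly 6x cheaper, though still not free per call, which is the trade-off against a one-time fine-tuning cost.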