andy_ppp 3 hours ago

Fine-tuning these models (at least with PPO or equivalent) requires even more VRAM than inference does, potentially 2-3 times more or beyond: on top of the weights you also hold gradients and optimizer state, and PPO additionally keeps reference/reward model copies in memory.
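A rough back-of-the-envelope sketch of why this happens (assumptions: fp16 weights for inference, and full fine-tuning with Adam keeping fp16 gradients plus fp32 master weights and two fp32 moment buffers; activations, KV cache, and PPO's extra model copies are excluded, so real numbers are higher):

```python
def inference_vram_gb(n_params_billion: float) -> float:
    # fp16 weights only: 2 bytes per parameter
    return n_params_billion * 2

def training_vram_gb(n_params_billion: float) -> float:
    # fp16 weights (2) + fp16 gradients (2) + fp32 master weights (4)
    # + Adam first/second moments (4 + 4) = 16 bytes per parameter
    return n_params_billion * 16

p = 7  # e.g. a 7B-parameter model
print(inference_vram_gb(p))  # 14.0 (GB)
print(training_vram_gb(p))   # 112.0 (GB)
```

Even before PPO's extra reference and reward models, plain Adam fine-tuning is several times the inference footprint, which is why tricks like LoRA, 8-bit optimizers, or gradient checkpointing are so common.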