SilverElfin 3 days ago:
I've seen comments saying that many foundation model providers like DeepSeek haven't done a full pretraining run in a long time. Does that mean this use of chips refers to the past?
londons_explore 3 days ago (in reply):
Whilst there aren't many papers on the matter, I would guess that pretraining from scratch is a bit of a waste of money when you could simply expand the depth/width of the 'old' model and retrain only the 'new' bit.
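(Not anyone's actual recipe, just to make the idea concrete: a minimal PyTorch-style sketch of depth expansion, where the existing blocks are frozen and only newly appended blocks receive gradient updates. The block type, layer counts, and sizes here are all made up.)

    import torch
    import torch.nn as nn

    d_model = 512

    def make_block() -> nn.Module:
        # Stand-in for a transformer block.
        return nn.TransformerEncoderLayer(d_model=d_model, nhead=8, batch_first=True)

    # "Old" model: pretend these blocks already carry pretrained weights.
    old_blocks = nn.ModuleList([make_block() for _ in range(12)])
    for p in old_blocks.parameters():
        p.requires_grad = False  # keep the existing weights frozen

    # "New" blocks appended on top; only these will be trained.
    new_blocks = nn.ModuleList([make_block() for _ in range(4)])

    model = nn.Sequential(*old_blocks, *new_blocks)

    # Optimize only the trainable (new) parameters.
    optimizer = torch.optim.AdamW(
        (p for p in model.parameters() if p.requires_grad), lr=1e-4
    )

    # One toy training step on random data: gradients flow through the frozen
    # blocks but only the new blocks are updated.
    x = torch.randn(2, 16, d_model)
    loss = model(x).pow(2).mean()
    loss.backward()
    optimizer.step()

Width expansion is similar in spirit but messier, since the old weights have to be embedded into the wider matrices (Net2Net-style function-preserving growth) rather than just stacking fresh layers on top.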
KurSix 3 days ago (in reply):
Even if they're not doing full-from-scratch training every cycle, any serious model update still soaks up GPU hours.