doctorpangloss | 5 days ago
I’m pretty sure input tokens are cheap because they want to ingest the data for training later, no? They want huge contexts to slice up.
awwaiid | 4 days ago | parent
AFAIK all the large providers have flipped the default to contractually NOT train on your data. So no, collecting training data is not a factor in context size or pricing.