chrismustcode | 6 days ago
I'd be stunned if a 270M model could code with any proficiency. For reference, the semi-annoying autocomplete on an iPhone is a 34M-parameter transformer. I can't imagine a model doing real coding (even with a good team behind it) with only 8x the parameters of a next-3/4-word autocomplete.
0x457 | 6 days ago
Someone should try this on that model: https://www.oxen.ai/blog/training-a-rust-1-5b-coder-lm-with-...
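For the curious: the linked post trains a 1.5B model to write Rust with GRPO-style RL, using compile/test results as the reward. A minimal sketch of "trying this on that model" might look like the following, assuming the 270M model in question is Gemma 3 270M (`google/gemma-3-270m` on the HF Hub) and using TRL's `GRPOTrainer` with a toy reward and a stand-in dataset rather than the post's real cargo-based setup:

```python
# Minimal GRPO fine-tuning sketch (assumptions: target model is
# google/gemma-3-270m; TRL's GRPOTrainer stands in for the post's loop).
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

def rust_reward(completions, **kwargs):
    # Toy reward: +1 for emitting a Rust code block containing a function.
    # The linked post instead checks whether `cargo build` / `cargo test` pass.
    return [1.0 if "```rust" in c and "fn " in c else 0.0 for c in completions]

# Stand-in prompt dataset; a real attempt would use Rust coding prompts.
dataset = load_dataset("trl-lib/tldr", split="train")

trainer = GRPOTrainer(
    model="google/gemma-3-270m",  # hypothetical target; swap in the model under discussion
    reward_funcs=rust_reward,
    args=GRPOConfig(output_dir="gemma-270m-rust-grpo"),
    train_dataset=dataset,
)
trainer.train()
```

Whether a 270M base has enough capacity for the reward signal to latch onto is exactly the open question upthread.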