rickdg | 7 hours ago
How do these compare to Apple's Foundation Models, btw? | ||
simonw | 7 hours ago | parent
So much better. Hard to quantify, but even the small Gemma 3 models have that feels-like-ChatGPT magic that Apple's models lack.
snarkyturtle | 7 hours ago | parent
AFM has a 4,096-token context window, while these can be configured with a 32k+ token context window, for one.
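(For anyone curious how that larger window is set in practice: a minimal sketch using llama.cpp's `llama-server`, assuming a locally downloaded GGUF quantization of a Gemma model; the filename here is hypothetical.)

```shell
# Serve a local Gemma GGUF with a 32k context window.
# --ctx-size (alias -c) sets the prompt context length in tokens;
# larger values cost proportionally more KV-cache memory.
llama-server -m gemma-3-4b-it-Q4_K_M.gguf --ctx-size 32768
```

With AFM there is no equivalent knob; the 4,096-token window is fixed by the system.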