buzzerbetrayed 7 hours ago
For me, the hallucination and gaslighting are like taking a step back in time a couple of years. It even fails the “r’s in strawberry” question. How nostalgic. It’s very impressive that this can run locally, and I hope we will continue to be able to run couple-year-old-equivalent models locally going forward.
dimmke 4 hours ago | parent
I haven't seen anybody else post it in this thread, but this is running on 8GB of RAM. It's not the full Gemma 4 32B model; it's their E2B and E4B variants (so 2B and 4B, also quantized). That's a completely different thing from running the flagship model, almost to the point of being misleading. https://ai.google.dev/gemma/docs/core/model_card_4#dense_mod...
shtack an hour ago | parent
With reasoning on, I found E4B to be solid, but E2B was completely unusable across several tests.
1f60c 5 hours ago | parent
Strangely, reasoning is not on by default. If you enable it, it answers as you'd expect.