0x00cl 6 days ago

I see you are using Ollama's GGUFs. By default it will download the Q4_0 quantization. Try `gemma3:270m-it-bf16` instead, or you can also use the Unsloth GGUFs: `hf.co/unsloth/gemma-3-270m-it-GGUF:F16`

You'll get better results.
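
For example, a minimal sketch with the `ollama` Python client (assuming a local Ollama server is running and that the bf16 tag pulls as named above):

```python
# pip install ollama  -- assumes a local Ollama server is already running
import ollama

# Pull the bf16 build instead of the default Q4_0 quantization.
ollama.pull("gemma3:270m-it-bf16")

response = ollama.chat(
    model="gemma3:270m-it-bf16",
    messages=[{"role": "user", "content": "Why is the sky blue?"}],
)
print(response["message"]["content"])
```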

simonw 6 days ago | parent | next [-]

Good call, I'm trying that one just now in LM Studio (by clicking "Use this model -> LM Studio" on https://huggingface.co/unsloth/gemma-3-270m-it-GGUF and selecting the F16 one).

(It did not do noticeably better at my pelican test).

Actually it's worse than that: several of my attempts resulted in infinite loops spitting out the same text. Maybe that GGUF is a bit broken?

danielhanchen 6 days ago | parent | next [-]

Oh :( Maybe the settings? Could you try

temperature = 1.0, top_k = 64, top_p = 0.95, min_p = 0.0
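
A rough sketch of passing those settings through the `ollama` Python client's options dict (assuming those option names; `min_p` needs a reasonably recent Ollama build):

```python
import ollama

# Recommended Gemma sampling settings from above.
options = {
    "temperature": 1.0,
    "top_k": 64,
    "top_p": 0.95,
    "min_p": 0.0,  # drop this key if your Ollama build doesn't recognize it
}

response = ollama.chat(
    model="gemma3:270m-it-bf16",  # assumed model tag; use whichever build you pulled
    messages=[{"role": "user", "content": "Why is the sky blue?"}],
    options=options,
)
print(response["message"]["content"])
```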

canyon289 6 days ago | parent | next [-]

Daniel, thanks for being here providing technical support as well. Cannot express enough how much we appreciate all your work and partnership.

danielhanchen 6 days ago | parent [-]

Thank you and fantastic work with Gemma models!

simonw 6 days ago | parent | prev [-]

My tooling only lets me set temperature and top_p, but setting them to those values did seem to avoid the infinite loops, thanks.

danielhanchen 6 days ago | parent [-]

Oh fantastic, it worked! I was actually trying to see if we can auto-set these within LM Studio (Ollama, for example, has params and a template) - not sure if you know how that can be done? :)

JLCarveth 6 days ago | parent | prev [-]

I ran into the same looping issue with that model.

danielhanchen 6 days ago | parent [-]

Definitely give

temperature = 1.0, top_k = 64, top_p = 0.95, min_p = 0.0

a try, and maybe repeat_penalty = 1.1
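
As a sketch, the repeat penalty can go into the same options dict (assuming the Ollama option names):

```python
options = {
    "temperature": 1.0,
    "top_k": 64,
    "top_p": 0.95,
    "min_p": 0.0,
    "repeat_penalty": 1.1,  # mild penalty to discourage the looping output
}
```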

Patrick_Devine 5 days ago | parent | prev [-]

We uploaded gemma3:270m-it-q8_0 and gemma3:270m-it-fp16 late last night, which give better results. The q4_0 is the QAT model, but we're still looking into it as there are some issues.