embedding-shape 13 hours ago
Every time a new model is released, there is a wave of reports and write-ups from people using the model with software that doesn't actually support it yet. GPT-OSS made that especially clear: 90% of the ecosystem declared it broken, when most people were running bad quants in software that didn't properly support the model. Guess we'll repeat the same thing with OLMo now.
Sabinus 3 hours ago | parent
I'm really glad to read this, as this was my experience with OLMo in LM Studio. It worked for the first message but got progressively more unstable. It also doesn't seem to reset model state for a new conversation: every response after the model is loaded gets progressively worse, even in new chats.
andy99 13 hours ago | parent
There are a bunch (currently 3) of examples of people getting funny output, two of which say it happened in LM Studio (I don't know what that is). It does seem likely that the model is somehow being misused here and that the results aren't representative.
| ||||||||