_345 6 hours ago

We need more voices like this to cut through the bullshit. It's fine that people want to tinker with local models, but for too long there has been this narrative that you can just buy more RAM, run some small-to-medium-sized model, and be productive that way. You just can't: a 35B will never perform at the level of a same-generation 500B+ model. It just won't, and you're basically working with GPT-4 (the very first one to launch) tier performance while everyone else is on GPT-5.4. If that's fine for you because you get to stay local, cool, but that's the part no one ever wants to say out loud, and it made me think I was just "doing it wrong" for so long on LM Studio and Ollama.

zozbot234 5 hours ago | parent | next [-]

> We need more voices like this to cut through the bullshit.

Open models are not bullshit; they work fine for many use cases, and newer techniques like SSD offload make even 500B+ models accessible for simple uses (NOT real-time agentic coding!) on very limited hardware. Of course, if you want the full-featured experience, it's going to cost a lot.
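
To make the offload point concrete, here's a minimal sketch using the Hugging Face transformers + accelerate stack, which is one common way to spill weights to disk. The checkpoint name and offload path are placeholders, not a recommendation, and this is just one way to do it (llama.cpp with mmap'd GGUF weights is another):

    # Rough sketch of disk/SSD offload (transformers + accelerate must be installed).
    # Layers that don't fit in RAM/VRAM are written to the offload folder and paged
    # back in from the SSD during inference.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "some-org/large-moe-model"   # hypothetical 500B-class checkpoint
    tok = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        device_map="auto",            # let accelerate split layers across GPU/CPU/disk
        offload_folder="./offload",   # spill whatever doesn't fit to this SSD folder
        torch_dtype="auto",
    )

    prompt = "Summarize this paragraph in one sentence: ..."
    inputs = tok(prompt, return_tensors="pt")
    print(tok.decode(model.generate(**inputs, max_new_tokens=64)[0]))

Generation this way is slow because weights stream from disk every pass, which is why it's fine for one-off summaries but not the real-time agentic loop mentioned above.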

solenoid0937 4 hours ago | parent [-]

I fell for this stuff, went down the open+local model rabbit hole, and am finally out of it. What a waste of time and money!

People who love open models dramatically overstate how good the benchmaxxed open models are. They are nowhere near Opus.

slopinthebag 2 hours ago | parent | prev [-]

> We need more voices like this to cut through the bullshit.

Just because you can't figure out how to use the open models effectively doesn't mean they're bullshit. It just takes more skill and experience to use them :)