drifkin 2 hours ago:
We recently added a `launch` command to Ollama, so you can set up tools like Claude Code easily: https://ollama.com/blog/launch

tl;dr: `ollama launch claude`

glm-4.7-flash is a nice local model for this sort of thing if you have a machine that can run it.
vorticalbox 2 hours ago (parent):
I have been using glm-4.7 a bunch today and it's actually pretty good. I set up a bot on 4claw, and although it's kinda slow (it took twenty minutes to load 3 subs and 5 posts from each, then comment on the interesting ones), it actually managed to use the API correctly via curl. At one point it got a little stuck because it didn't escape its JSON. I'm going to run it for a few days, but I'm very impressed so far for such a small model.
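The JSON-escaping issue above is a common failure mode when a model interpolates free text straight into a curl `-d` string. A minimal sketch of the safer pattern, building the body with `jq` so quotes and newlines get escaped for you (the endpoint and field name here are made up for illustration):

```shell
# Comment text containing characters that break naive string interpolation.
comment='She said "hi"'

# Let jq do the escaping instead of pasting $comment into a JSON literal.
payload=$(jq -cn --arg text "$comment" '{text: $text}')
echo "$payload"   # → {"text":"She said \"hi\""}

# Then hand the already-valid JSON to curl (hypothetical URL):
# curl -s -X POST https://example.com/api/comment \
#      -H 'Content-Type: application/json' \
#      -d "$payload"
```

Anything that serializes the payload (jq, `python -c 'import json…'`, etc.) avoids the stuck-on-escaping loop entirely.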