girvo 7 hours ago

Like me: that’s what I do. Either Opus 4.7 or GLM 5.1 for planning, write it out to a markdown file, then farm it out to Qwen 3.6 27B on my DGX Spark-alike using Pi. Works amusingly well all things considered.

brianjking 4 hours ago | parent | next [-]

How are you interacting with GLM 5.1? Via the Claude Code harness? I really wish they'd release a fully multimodal model already.

2ndorderthought 7 hours ago | parent | prev | next [-]

How is GLM 5.1? I haven't tried it yet but have been meaning to.

girvo 6 hours ago | parent [-]

It's surprisingly good. It beats MiniMax 2.7 and Qwen 3.5 Plus quite handily in my testing (I haven't tested 3.6 Plus, though). It's far better than Sonnet, and often equivalent to Opus for the web development and OCaml tasks I'm using it for. It definitely isn't Opus 4.7, but it's good enough to earn its keep and is substantially cheaper.

sshine 5 hours ago | parent [-]

I agree with this. One caveat: it uses more thinking time to get there. So while you get a lot of tokens on their plan, the peak 3x token-usage multiplier plus the extra thinking means you run into the rate limit anyway.

girvo 5 hours ago | parent [-]

True, though on the $20-equivalent plan, using it for planning only, I don't hit those limits often, vs Claude, where Pro can literally hit its limits on a single prompt haha

aftbit 7 hours ago | parent | prev [-]

What hardware are you using to power this?

girvo 6 hours ago | parent [-]

> DGX Spark-alike

That probably wasn't clear enough if you don't already know what it is; apologies.

It's an Asus Ascent GX10, a mini PC with 128GB of LPDDR5X as shared memory for an Nvidia GB10 "Blackwell" (kind of; it's a long story) GPU and a MediaTek ARM CPU.

sterlind 3 hours ago | parent | next [-]

pulls up chair

could you tell me the long story?

edit: or wait, is it quasi-Blackwell the way all DGX Sparks are quasi-Blackwell? like the actual silicon is different but it's sorta Blackwell-shaped?

girvo 3 hours ago | parent [-]

Yeah, exactly. Shader model 121 is different from SM 120 (consumer Blackwell), and different again from data-centre Blackwell's SM 100.

The promise of this chip was “write your code locally, then deploy to the same architecture in the data centre!”

Which is nonsense, because the GB10 is better described as “Hopper with Blackwell characteristics” IMO.

Still great hardware, especially for the price and for learning. But we're only just starting to get kernels written to take advantage of it, and mma.sync is sad compared to tcgen05.
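A quick way to see which of these variants you're actually on is to query the device's compute capability at runtime. A minimal sketch using the CUDA runtime API (based on the SM numbers above, a GB10 should report 12.1, consumer Blackwell 12.0, and data-centre Blackwell 10.0; I haven't verified the GB10 output myself):

```cuda
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    cudaDeviceProp prop;
    if (cudaGetDeviceProperties(&prop, 0) != cudaSuccess) {
        fprintf(stderr, "no CUDA device found\n");
        return 1;
    }
    // major.minor maps to the SM / compute capability version:
    // 12.1 -> GB10, 12.0 -> consumer Blackwell, 10.0 -> B100/B200
    printf("SM %d.%d (%s)\n", prop.major, prop.minor, prop.name);
    return 0;
}
```

The same number is what you'd pass to nvcc as an architecture target (e.g. `-arch=sm_121`), which is why kernels tuned for one variant don't automatically carry over to the others.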

aftbit 5 hours ago | parent | prev [-]

Ah yeah, I saw that; I was just curious which particular mini-PC you were using. I was considering picking up one of the various AI Max 395 boxes before the RAMpocalypse but didn't take the plunge. Thanks for the response!

girvo 5 hours ago | parent [-]

I heavily considered one of the AMD Strix Halo boxes, but part of the reason I wanted this was to learn CUDA :)