wcallahan 9 days ago

I just used GPT-OSS-120B on a cross Atlantic flight on my MacBook Pro (M4, 128GB RAM).

A few things I noticed:

- it's only fast with small context windows and small total token counts; once you're past ~10k tokens you're basically queueing everything for a long time
- MCPs/web search/URL fetch have already become a very important part of interacting with LLMs; when they're not available the LLM's utility is greatly diminished
- a lot of CLI/TUI coding tools (e.g., opencode) were not working reliably offline with the model, despite being set up before going offline

That’s in addition to the other quirks others have noted with the OSS models.

XCSme 9 days ago | parent | next [-]

I know there's a downloadable version of Wikipedia (it's not that large). Maybe soon we'll have a lot of data stored locally and expose it via MCP, so the AIs can do "web search" locally.

I think 99% of web searches lead to the same 100-1,000 websites. I assume it would only take a few GB to keep a copy of those locally, though that raises copyright concerns.
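
To make the idea concrete, here's a minimal sketch of what such a local "web search" tool could look like, assuming the Wikipedia dump has already been loaded into a SQLite FTS5 table and using the FastMCP helper from the Python MCP SDK; the database path and tool name are made up for the example:

```python
# local_wiki_mcp.py - hypothetical local "web search" tool over a Wikipedia dump.
# Assumes the articles were previously imported into a SQLite FTS5 table:
#   CREATE VIRTUAL TABLE pages USING fts5(title, body);
import sqlite3

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("local-wiki")
db = sqlite3.connect("wikipedia.db", check_same_thread=False)  # path is an assumption

@mcp.tool()
def search(query: str, limit: int = 5) -> str:
    """Full-text search over the local Wikipedia copy."""
    rows = db.execute(
        "SELECT title, snippet(pages, 1, '[', ']', '...', 32) "
        "FROM pages WHERE pages MATCH ? LIMIT ?",
        (query, limit),
    ).fetchall()
    return "\n".join(f"{title}: {snip}" for title, snip in rows)

if __name__ == "__main__":
    mcp.run()  # stdio transport by default; point the LLM client at this script
```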

Aurornis 8 days ago | parent [-]

The mostly static knowledge content from sites like Wikipedia is already well represented in LLMs.

LLMs call out to external websites when something isn’t commonly represented in training data, like specific project documentation or news events.

XCSme 8 days ago | parent [-]

That's true, but the data is only approximately represented in the weights.

Maybe it's better to have the AI only "reason", and somehow instantly access precise data.

stirfish 5 days ago | parent | next [-]

Is this Retrieval Augmented Generation, or something different?

XCSme 5 days ago | parent [-]

Yes, RAG, but with the model specifically optimized for RAG.
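
A minimal sketch of that idea, with a toy corpus and TF-IDF retrieval standing in for a real index; the facts, query, and scoring are assumptions for illustration:

```python
# Minimal sketch of the RAG idea: retrieve exact passages from a local corpus
# and put them in the prompt, so the model only has to reason, not recall.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

corpus = [
    "The M4 Max offers up to 546 GB/s of memory bandwidth.",
    "gpt-oss-120b is a mixture-of-experts model with ~5B active parameters.",
]

vectorizer = TfidfVectorizer()
doc_vecs = vectorizer.fit_transform(corpus)

def retrieve(query: str, k: int = 1) -> list[str]:
    """Return the k passages most similar to the query."""
    scores = cosine_similarity(vectorizer.transform([query]), doc_vecs)[0]
    return [corpus[i] for i in scores.argsort()[::-1][:k]]

query = "How much memory bandwidth does the M4 Max have?"
context = "\n".join(retrieve(query))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
print(prompt)  # this prompt would then go to the local model
```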

adsharma 8 days ago | parent | prev [-]

What use cases will gain from this architecture?

XCSme 8 days ago | parent [-]

Data processing, tool calling, agentic use. Those are also the main use-cases outside "chatting".

conradev 9 days ago | parent | prev | next [-]

Are you using Ollama or LMStudio/llama.cpp? https://x.com/ggerganov/status/1953088008816619637

diggan 9 days ago | parent [-]

> LMStudio/llama.cpp

Even though LM Studio uses llama.cpp as a runtime, performance differs between them. With LM Studio 0.3.22 Build 2 and the CUDA llama.cpp (Linux) v1.45.0 runtime I get ~86 tok/s on an RTX Pro 6000, while with llama.cpp compiled from 1d72c841888 (Aug 7 10:53:21 2025) I get ~180 tok/s, almost 100 tokens per second more, both running lmstudio-community/gpt-oss-120b-GGUF.
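
If anyone wants to reproduce this kind of comparison, a rough way is to hit whichever local server is running (llama-server and LM Studio both expose an OpenAI-compatible endpoint) and divide completion tokens by wall time; the port, key, and model name below are assumptions for your local setup:

```python
# Rough tokens/sec check against a local OpenAI-compatible server.
import time

from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

start = time.time()
resp = client.chat.completions.create(
    model="gpt-oss-120b",  # whatever name your server reports
    messages=[{"role": "user", "content": "Write 200 words about airships."}],
    max_tokens=512,
)
elapsed = time.time() - start

# Note: wall time includes prompt processing, so keep the prompt short
# if you only care about generation speed.
toks = resp.usage.completion_tokens
print(f"{toks} tokens in {elapsed:.1f}s = {toks / elapsed:.1f} tok/s")
```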

esafak 8 days ago | parent [-]

Is it always like this or does it depend on the model?

diggan 8 days ago | parent [-]

Depends on the model. Each runner needs to implement support when new architectures appear, and they all seemingly focus on different things. As far as I've gathered so far: vLLM focuses on inference speed, SGLang on parallelizing across multiple GPUs, Ollama on shipping their implementation as fast as possible, sometimes cutting corners, and llama.cpp sits somewhere in between Ollama and vLLM. LM Studio seems to lag slightly behind in its llama.cpp usage, so I'm guessing that's the difference between LM Studio and building llama.cpp from source today.

fouc 8 days ago | parent | prev | next [-]

What was your iogpu.wired_limit_mb set to? By default only ~70% of your RAM (~90GB on a 128GB machine) is available to the GPU cores unless you raise the wired limit setting.

MoonObserver 9 days ago | parent | prev | next [-]

M2 Max processor. I saw 60+ tok/s on short conversations, but it degraded to 30 tok/s as the conversation got longer. Do you know what actually accounts for this slowdown? I don’t believe it was thermal throttling.

summarity 9 days ago | parent | next [-]

Physics: You always have the same memory bandwidth. The longer the context, the more bits will need to pass through the same pipe. Context is cumulative.

VierScar 9 days ago | parent [-]

No, I don't think it's the bits. I would say it's the computation. Inference requires a lot of matmul, and with more tokens the number of compute operations increases exponentially - O(n^2) at least. So increasing your context/conversation will quickly degrade performance.

I seriously doubt it's the throughput of memory during inference that's the bottleneck here.

MereInterest 9 days ago | parent | next [-]

Nitpick: O(n^2) is quadratic, not exponential. For it to “increase exponentially”, n would need to be in the exponent, such as O(2^n).

esafak 8 days ago | parent [-]

To contrast with exponential, the term is power law.

zozbot234 9 days ago | parent | prev | next [-]

Typically, the token-generation phase is memory-bound for LLM inference in general, and this becomes especially clear as context length increases (since the model's parameters are a fixed quantity). If it were purely compute-bound, there would be huge gains to be had by shifting some of the load to the NPU (ANE), but AIUI it's just not so.

summarity 9 days ago | parent | prev [-]

It literally is. LLM inference is almost entirely memory bound. In fact for naive inference (no batching), you can calculate the token throughput just based on the model size, context size and memory bandwidth.
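
A back-of-envelope version of that calculation, with assumed numbers for an M4 Max and a 4-bit quantized gpt-oss-120b (a MoE with roughly 5B active parameters per token); the KV-cache figure is a guess and grows with context, which is exactly why longer conversations get slower:

```python
# Back-of-envelope decode speed for a memory-bound model: each generated token
# streams the active weights plus the KV cache through the memory bus once.
# All numbers are rough assumptions.
bandwidth_bytes_s = 546e9     # M4 Max peak memory bandwidth
active_params = 5.1e9         # parameters touched per token (MoE routing)
bytes_per_param = 0.5         # ~4-bit quantization
kv_cache_bytes = 2e9          # grows with context length; rough guess

bytes_per_token = active_params * bytes_per_param + kv_cache_bytes
print(f"~{bandwidth_bytes_s / bytes_per_token:.0f} tok/s upper bound")
# Real numbers land well below this peak, and drop as the KV cache grows.
```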

zozbot234 9 days ago | parent [-]

Prompt pre-processing (before the first token is output) is raw compute-bound. That's why it would be nice if we could direct llama.cpp/ollama to run that phase only on iGPU/NPU (for systems without a separate dGPU, obviously) and shift the whole thing over to CPU inference for the latter token-generation phase.

(A memory-bound workload like token gen wouldn't usually run into the CPU's thermal or power limits, so there would be little or no gain from offloading work to the iGPU/NPU in that phase.)

torginus 8 days ago | parent | prev [-]

Inference takes a quadratic amount of time with respect to context size.

mich5632 8 days ago | parent | prev | next [-]

I think this is the difference between compute-bound prefill (a CPU has a high bandwidth-to-compute ratio) and decode. The time to first token is below 0.5s, even for a 10k context.
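
A crude way to see the split is to compare the attention work in each phase; the hidden size and layer count below are placeholder assumptions, and constant factors and the MLP blocks are ignored:

```python
# Crude attention-cost comparison behind "compute-bound prefill,
# memory-bound decode".
n_ctx = 10_000     # prompt tokens
d_model = 2880     # hidden size (assumed)
n_layers = 36      # layer count (assumed)

# Prefill: every prompt token attends to every earlier token,
# so the work grows quadratically with context length.
prefill_flops = 2 * n_ctx**2 * d_model * n_layers

# Decode: each new token attends once to the n_ctx cached keys/values,
# so per-token work grows only linearly with context length.
decode_flops_per_token = 2 * n_ctx * d_model * n_layers

print(f"prefill ~{prefill_flops:.1e} FLOPs, "
      f"decode ~{decode_flops_per_token:.1e} FLOPs/token")
```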

gigatexal 9 days ago | parent | prev | next [-]

M3 Max 128GB here and it’s mad impressive.

I'm spec'ing out a Mac Studio with 512GB of RAM because I can window shop and wish, but I think the trend for local LLMs is getting really good.

Do we know WHY OpenAI even released them?

diggan 9 days ago | parent | next [-]

> Do we know WHY OpenAI even released them?

Regulations, and trying to earn the goodwill of developers using local LLMs, which had been slowly eroding since it's been a while (GPT-2, in 2019) since they last released weights to the public.

Epa095 8 days ago | parent | prev | next [-]

If the new GPT-5 is actually better, then this OSS version is not really a threat to OpenAI's income stream, but it can be a threat to their competitors.

lavezzi 8 days ago | parent | prev [-]

> Do we know WHY OpenAI even released them?

Enterprises can now deploy them on AWS and GCP.

zackify 9 days ago | parent | prev | next [-]

You didn’t even mention how it’ll be on fire unless you use low power mode.

Yes all this has been known since the M4 came out. The memory bandwidth is too low.

Try using it for real tasks with Cline or opencode, and the context gets long enough that it's too slow to be practical.

Aurornis 8 days ago | parent [-]

> Yes all this has been known since the M4 came out. The memory bandwidth is too low.

The M4 Max with 128GB of RAM (the part used in the comment) has over 500GB/sec of memory bandwidth.

zackify 8 days ago | parent [-]

Which is incredibly slow when you're over 20k tokens of context.

radarsat1 8 days ago | parent | prev [-]

How long did your battery last?!

woleium 8 days ago | parent [-]

Planes have power sockets now, but I do wonder how much jet fuel a whole plane of GPUs would consume in electricity (assuming the system could handle it, which seems unlikely) and air conditioning.

TimBurman 8 days ago | parent [-]

That's an interesting question. According to Rich and Greg's Airplane Page[1], the A320 has three generators rated for 90 kVA continuous each, one per engine and a third in the auxiliary power unit that isn't normally deployed. Cruising demand is around 140 kVA of the 180 kVA supplied by the engines, leaving 40 kVA to spare. The A380 has six similar generators, two in reserve. They give the percentages, so you could calculate how much fuel each system is consuming.

[1] https://alverstokeaviation.blogspot.com/2016/03/

This page also has a rendered image of the generator:

https://aviation.stackexchange.com/questions/43490/how-much-...
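
For a rough sense of scale, taking the ~40 kVA of spare capacity at face value and some assumed per-device power draws:

```python
# Toy power-budget check for the "plane full of GPUs" question, using the
# ~40 kVA of spare generator capacity quoted above for an A320 at cruise.
# Per-device draws are rough assumptions, and cooling is ignored.
spare_watts = 40 * 1000        # treat kVA ~ kW for a rough estimate

macbook_w = 100                # MacBook Pro under sustained load
datacenter_gpu_w = 700         # one high-end datacenter GPU

print(f"~{spare_watts // macbook_w} laptops or "
      f"~{spare_watts // datacenter_gpu_w} big GPUs on the spare capacity")
# -> roughly 400 laptops, or under 60 datacenter GPUs, before air conditioning.
```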