▲ Zambyte 12 hours ago
How can I observe it being loaded into CPU memory? When I run a 20 GB model with ollama, htop reports 3 GB of total RAM usage.
▲ zamadatix 12 hours ago
Think of it like loading a moving truck, where:

- The house is the disk
- You are the RAM
- The truck is the VRAM

There won't be a single moment when you can observe yourself carrying the weight of everything being moved out of the house, because that's not what's happening. Instead, you can observe yourself taking many small loads until everything is finally moved, at which point you should no longer be loaded down from carrying things out of the house (though you may be loaded for whatever else you're doing). Viewing live memory bandwidth can be more complicated to set up than it sounds, so the easier way is to watch your VRAM usage as you load the model freshly onto the card. The "nvtop" utility can do this for most GPUs on Linux, along with other stats you might care about while watching LLMs run; a polling sketch follows.
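If you'd rather poll VRAM from a script than watch nvtop, here is a minimal sketch assuming an NVIDIA card and the pynvml bindings (pip install nvidia-ml-py); device index 0 is an assumption for a single-GPU machine:

    import time
    import pynvml

    # Initialize NVML and grab a handle to the first GPU (index 0 is assumed).
    pynvml.nvmlInit()
    handle = pynvml.nvmlDeviceGetHandleByIndex(0)

    try:
        while True:
            # Query current memory usage; values are reported in bytes.
            mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
            print(f"VRAM used: {mem.used / 2**30:.2f} / {mem.total / 2**30:.2f} GiB")
            time.sleep(1)  # watch this number climb as the model loads onto the card
    except KeyboardInterrupt:
        pynvml.nvmlShutdown()

Run it in one terminal while loading the model in another: VRAM usage ramps up steadily while htop's RAM number barely moves, which is the many-small-trips effect described above.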
▲ p1esk 11 hours ago
Depends on the map_location arg to torch.load: the tensors might be loaded straight into GPU memory. A sketch of both cases is below.
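A minimal sketch of the two behaviors, assuming a checkpoint file named model.pt (the filename is hypothetical):

    import torch

    # Deserialize onto the CPU: tensor storages land in ordinary RAM,
    # which is what htop would show.
    state = torch.load("model.pt", map_location="cpu")

    # Remap storages to the first GPU during deserialization: tensors
    # are placed in VRAM as they are loaded rather than accumulating
    # as full-size tensors in CPU RAM.
    state = torch.load("model.pt", map_location="cuda:0")

So if a loader uses the second form, you'd see VRAM fill up in nvtop while htop stays roughly flat, matching the observation in the original question.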