winocm 7 hours ago

Perhaps I am the odd one out here, but a small part of me wants to see what happens when you run a proprietary SOTA model on a laptop.

amelius a minute ago | parent | next [-]

You burn your lap?

pianopatrick 2 hours ago | parent | prev | next [-]

I'm currently testing something like this just to see what happens. I have an old laptop with 4GB of RAM. I attached a USB drive holding the Gemma 4 31B model (which is 32.6 GB). The laptop is now running llama.cpp and trying to respond to a prompt by streaming the model from disk.

The USB drive light is flickering, showing something is happening. It's been about 8 hours since I entered the prompt and I've gotten about 10 tokens back so far. I'm going to leave it running overnight and see what happens.
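
Roughly, the invocation looks something like this (a sketch rather than my exact command; the model path and parameters are placeholders). llama.cpp memory-maps GGUF files by default, so the weights get paged in from the slow USB drive on demand instead of having to fit into the 4GB of RAM:

    # path, prompt, and limits are illustrative
    llama-cli -m /media/usb/model.gguf \
      -p "Write a haiku about patience." \
      -n 32 -c 512 -t 4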

reisse 7 hours ago | parent | prev | next [-]

Nothing special?

I mean, the inference engine might need some tweaks to support whatever compute is available. But if you put a few terabytes of disk behind swap, and replace the RAM with bigger sticks where possible, it should work. Slowly, of course, but there's no reason it shouldn't.
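
For example, a rough sketch of throwing a big swap file at it on Linux (sizes made up, and fallocate won't work for swap on every filesystem):

    sudo fallocate -l 128G /swapfile   # or use dd if fallocate isn't supported
    sudo chmod 600 /swapfile
    sudo mkswap /swapfile
    sudo swapon /swapfile
    swapon --show                      # confirm it's active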

reverius42 6 hours ago | parent [-]

The big difference will be measuring seconds per token instead of tokens per second.

martijnvds 3 hours ago | parent [-]

Seconds per token is just fractional tokens per second ;)
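
For scale, pianopatrick's numbers upthread (~10 tokens in ~8 hours) work out to roughly 2,880 seconds per token, i.e. about 0.00035 tokens per second.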

yfw 7 hours ago | parent | prev | next [-]

You can if you have enough ram slots?

SilentM68 3 hours ago | parent | prev [-]

Not sure if this is exactly the scenario you envision, but I run ComfyUI on an Acer Helios 300 laptop from four years ago. It has 16GB of RAM and an NVIDIA GeForce RTX 2060 with 6144MiB of VRAM, and I've generated a few images using the "NetaYumev35_pretrained_all_in_one.safetensors" checkpoint at 10.6GB, well beyond the 6GB capacity of the RTX 2060. That said, it takes more than 10 minutes to complete the task, and I have to close or hibernate all other apps and browser tabs first; if I don't, the laptop's fans spin up like an airplane propeller. It's worth mentioning that I've tried to do this with other UIs and they all fail with one error or another, usually running out of VRAM. I've only gotten it to work with ComfyUI.

I use an anaconda environment (though I would have preferred a "uv" environment) on Linux, and automate the startup with the following script (start_comfy.sh) from the terminal rather than activating the environment manually:

    #!/bin/bash
    #
    # temporary shell version
    eval "$(conda shell.bash hook)"
    conda activate comfy-env
    comfy launch -- --lowvram --cpu-vae
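
If you're not going through comfy-cli, I believe the rough equivalent is launching ComfyUI's main.py directly with the same flags (--lowvram splits the model so less of it has to sit in VRAM at once, --cpu-vae runs the VAE on the CPU):

    python main.py --lowvram --cpu-vae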

Here are some of the images: https://imgbox.com/nqjYhdx3 https://imgbox.com/93vSWFic https://imgbox.com/qs1898dz

I'm hesitant to increase the sizes of the renders as that will surely stress my laptop's components.

t_mahmood an hour ago | parent [-]

I'm not running models locally for exactly the same reason: to avoid stressing my components. It seems we're in for a long haul with this AI bubble (can't wait for it to pop), so I need to make sure I survive the madness; I certainly can't afford to replace anything right now.