suladead 4 days ago

I built pretty much this exact rig myself, but now it's gathering dust. Any other uses for this besides local LLMs?

Tepix 4 days ago | parent | next [-]

Sell it? There are people who want a rig like this.

dotnet00 4 days ago | parent | prev | next [-]

The 3090 in my server (the Ollama instance on it only sees occasional use nowadays, since I have dual 5080s on my work desktop) also handles accelerated transcoding in Plex, and I'm in the process of setting it up to monitor my 3D printers for failures via camera.

Am also considering setting up Home Assistant with LLM support again.

asimovDev 4 days ago | parent | prev | next [-]

Play DnD by yourself with Llama as a DM

robotswantdata 4 days ago | parent | prev | next [-]

Heating

ProllyInfamous 4 days ago | parent [-]

I use an older machine/GPU for wintertime heating, mining Monero (xmrig).

Should I get lucky and guess the next valid block, that pays the entire month's electricity. Since an electric space heater would already be consuming the exact same kWh as this GPU, there is no "negative cost" to operate.
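The cost-parity argument above can be sketched as a bit of arithmetic. The numbers here (power draw, electricity rate) are illustrative assumptions, not figures from the comment:

```python
# Sketch of the "no negative cost" heating argument.
# All constants below are hypothetical, for illustration only.

GPU_WATTS = 250        # assumed mining power draw
HEATER_WATTS = 250     # resistive space heater sized to match
PRICE_PER_KWH = 0.15   # assumed electricity rate, USD
HOURS = 24 * 30        # one winter month of continuous heating

def monthly_cost(watts: float) -> float:
    """Electricity cost of running a constant load for a month."""
    return watts / 1000 * HOURS * PRICE_PER_KWH

# Both devices convert essentially every watt into room heat,
# so the heating bill is identical either way...
assert monthly_cost(GPU_WATTS) == monthly_cost(HEATER_WATTS)

# ...but only the GPU carries a (small) chance of block rewards on top.
print(f"Monthly cost either way: ${monthly_cost(GPU_WATTS):.2f}")
```

Any mining payout is then pure upside relative to running a plain space heater.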

This machine/GPU used to be my main workhorse and still has Llama 3.2 available via Ollama, but even with HBM, 8GB of VRAM isn't really relevant in LLM-land.

DaSHacka 4 days ago | parent | prev | next [-]

vidya

winkelmann 4 days ago | parent | prev | next [-]

3D rendering and fluid simulation stuff could be interesting.

thiago_fm 4 days ago | parent | prev [-]

Playing games; it has a good graphics card.