deskamess 3 days ago
I have an old 2060 with 6GB (I think). I also have a work laptop with a 3060 with 6GB (shared up to 8GB). What can I do with those? I dabble a bit here and there, but I'd like to run my own local LLM for fun. Thanks!
sosodev 3 days ago
If you just want to run a local LLM, you could download ollama and be up and running in minutes. You'll be limited to small models (I'd start with qwen3:1.7b), but it should be quite fast.
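From the terminal it's just `ollama run qwen3:1.7b` once ollama is installed. If you'd rather script it, here's a minimal sketch using the ollama Python client (assumes `pip install ollama` and a running ollama server; the prompt is just an example):

    # Pull and chat with a small model that fits in ~6GB of VRAM.
    import ollama

    ollama.pull("qwen3:1.7b")  # one-time download

    response = ollama.chat(
        model="qwen3:1.7b",
        messages=[{"role": "user", "content": "Why is the sky blue?"}],
    )
    print(response["message"]["content"])

The 1.7B model's weights take roughly 1-2GB quantized, so it leaves plenty of headroom on a 6GB card; you can work your way up to larger models until you run out of VRAM.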