kkralev 13 hours ago
I think the real gap isn't at the high end, though. There's a whole segment of people who just want to run a 7-8B model locally for personal use without dealing with cloud APIs or sending their data somewhere. You don't need 4 GPUs for that; a Jetson or even a mini PC with decent RAM handles it fine. The $12k+ market feels like it's chasing a different customer than the one who actually cares about offline/private AI.
wmf 13 hours ago | parent
> just want to run a 7-8b model locally

This is already solved by running LM Studio on a normal computer.
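For what it's worth, a minimal sketch of what "solved" looks like here, assuming LM Studio's local server is running with a model already downloaded (the model name below is illustrative, not prescribed by anything upthread):

```shell
# LM Studio exposes an OpenAI-compatible API on localhost:1234 by default
# once its local server is started. Any OpenAI-style client can talk to it;
# no cloud account or API key required, and nothing leaves the machine.
curl http://localhost:1234/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "llama-3.1-8b-instruct",
    "messages": [{"role": "user", "content": "Summarize this in one line."}]
  }'
```

The same request shape works against Ollama or llama.cpp's server, which is largely why the "offline/private 7-8B" use case doesn't need special hardware.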