cr125rider 3 hours ago

This is satire, right?

seabrookmx an hour ago | parent | next [-]

Off topic, but I like your username! Ironically I have matching 2003 CR85 and CR250s but not the 125 :P

ilaksh 2 hours ago | parent | prev [-]

No, AI capabilities of some sort are obviously important. But I know a lot of people don't appreciate that.

But you aren't seriously suggesting that graphics hardware is irrelevant are you?

whilenot-dev 22 minutes ago | parent [-]

The few things that make me agree with GP:

1. "AI" is a marketing term used by the likes of OpenAI/Anthropic/Google. LocalLLaMa communities prefer "LLM" or "model". So for a lot of people, "AI" is just a service (see point 4).

2. "AI capability" is an irrelevant spec and a marketing slogan. The hardware specs will give you the information needed to pick a model[0][1].

3. If you want to run a model locally, you'll know that a midrange notebook isn't the device to look for. Look instead at workstations with discrete graphics cards and lots of VRAM (24GB+), Strix Halo APUs, a MacBook with lots of RAM, or a dedicated workstation like the NVIDIA DGX Spark[2].

4. An inference engine can run anywhere, so you can pick any LLM hosting service. LLM clients just expect an API endpoint anyway.
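On points 2 and 3: the reason VRAM, not an "AI capability" badge, is the spec that matters is that it's roughly arithmetic. A common rule of thumb (a sketch, not a measurement; the ~20% overhead factor for KV cache and activations is an assumption that varies by engine and context length):

```python
def vram_estimate_gb(params_billion: float, bits_per_weight: int,
                     overhead: float = 1.2) -> float:
    """Rough VRAM needed to load a model: weight bytes plus ~20%
    headroom for KV cache and activations (rule of thumb only)."""
    bytes_for_weights = params_billion * 1e9 * bits_per_weight / 8
    return bytes_for_weights * overhead / 1e9

# A 70B model at 4-bit quantization:
print(round(vram_estimate_gb(70, 4), 1))  # -> 42.0, i.e. beyond a single 24GB card

# A 7B model at 4-bit fits comfortably on a midrange GPU:
print(round(vram_estimate_gb(7, 4), 1))  # -> 4.2
```

This is essentially what sizing calculators like [0] and [1] compute for you, with better per-engine overhead numbers.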
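Point 4 in practice: most clients speak the OpenAI-compatible chat-completions shape, so the same request body works whether the endpoint is a hosted service or a local engine (llama.cpp's server and Ollama both expose this format). A minimal sketch; the base URL and model name are placeholders, not recommendations:

```python
import json

# Assumption: a local inference engine listening on this port.
# Any OpenAI-compatible host works here -- that's the whole point.
BASE_URL = "http://localhost:8080/v1"

def build_chat_request(model: str, prompt: str) -> dict:
    """Build the JSON body a client POSTs to {BASE_URL}/chat/completions."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

body = build_chat_request("llama-3.1-8b-instruct", "Hello")
print(json.dumps(body))
```

Swapping providers is then just changing `BASE_URL` and the model name; the client code doesn't change.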

[0]: https://www.canirun.ai/

[1]: https://www.caniusellm.com/

[2]: https://www.nvidia.com/en-us/products/workstations/dgx-spark...