dakolli 3 days ago

The kid who showed his work in detail in math class is doing better in life 9/10 times than the kids who only knew how to use a calculator. Now consider how well the people who think you just need to know how to yell at the calculator are going to do.

When maps apps came around, people totally lost the brain muscle for navigating on their own. Using LLMs is no different: people over-reliant on these tools are simply ngmi. They are going to be totally reliant on their favorite billionaire being willing to sell them competency via his thinking machines.

I would caution everyone to consider whether the billionaires screaming that you're going to be left behind, laid off, and made redundant if you don't (pay them to) use their brain-nerfing machine actually have your best interests at heart.

You're not going to be left behind.

https://arxiv.org/abs/2506.08872

Hasslequest 3 days ago | parent | next [-]

Firstly, you can run the LLMs on your own machine. So I find the proprietary/moat narrative weak.

Secondly, I find that correct usage of LLMs can accelerate learning. My brother used an LLM to generate flash cards for a driver's license test. I use LLMs to digest a ton of text and to debug issues that would have been impossible to find (I would have given up otherwise). I have them generate, explain, review, and compare code or general writing.

It is like having access to a wise old man in every field. They may have inferior reasoning capability, and their memory may falter, but they have seen everything in their corpus and are great at pointing you to external references. And you can delegate busywork to them.

dakolli 3 days ago | parent [-]

> Firstly, you can run the LLMs on your own machine. So I find the proprietary/moat narrative weak.

You cannot run useful models on consumer hardware; sorry, this is wrong and will stay that way for at least 10 years, until GPUs with 48GB of VRAM depreciate. This is a limitation of LLM architecture. You cannot post-train a <1T-param model to a place where it competes with frontier model capability. If you think your 70B-param models (which still require $5k in GPUs) are useful, you are being dishonest with yourself.

It costs about $60-80k to run a 1T-param model like Kimi 2.5 at your house, and that is the only size of model that's going to get anywhere close to a foundation model's capability. Nobody is going to spend close to $100k to run a mediocre open source model as opposed to spending $200.00 a month. It's a ridiculous notion.
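The hardware numbers being argued about here mostly fall out of one piece of arithmetic: weight storage is roughly parameter count times bits per weight divided by 8. A rough sketch (Python, my own back-of-envelope helper, not anything from the thread), which ignores KV cache, activations, and runtime overhead, so real requirements are higher:

```python
# Approximate memory footprint of LLM weights alone.
# Real deployments also need KV cache and activation memory on top.

def weights_gb(params_billions: float, bits_per_weight: float) -> float:
    """Weight storage in decimal GB: params * (bits / 8) bytes each."""
    bytes_total = params_billions * 1e9 * bits_per_weight / 8
    return bytes_total / 1e9

for params, label in [(70, "70B"), (1000, "1T")]:
    for bits in (16, 8, 4):
        print(f"{label} @ {bits}-bit: ~{weights_gb(params, bits):,.0f} GB")
```

By this estimate a 70B model at 4-bit quantization needs on the order of 35 GB just for weights (hence the multi-GPU or 48GB-VRAM talk), while a 1T model needs hundreds of GB even heavily quantized, which is roughly where the $60-80k figure comes from.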

Hasslequest 2 days ago | parent [-]

I run Qwen-3.5-based LLMs in the 20-35B parameter range on my laptop's iGPU and it works great for my use case, which includes coding, search, reasoning, and general tasks. Gemma3 is good too.

There are variants distilled from better reasoning models or abliterated for whatever you need, and the multimodal features work... fine.

I just started running local LLMs this week, and it is pretty much overkill for what anyone in my family needs. All it really lacks is some tools for it to use, which I am putting together now.

To be fair, the best model I have used is Claude Sonnet. I don't really know what I am missing with Opus.

dakolli 3 days ago | parent | prev [-]

Who the hell downvotes this lol

SpicyLemonZest 3 days ago | parent [-]

I think that framing your observations in terms of "billionaires who are screaming" about a "brain-nerfing machine" doesn't help anyone think about the issues clearly or contribute to a healthy discussion.