wg0 2 hours ago

I'm stepping away from LLMs in general and cancelled my Claude Code subscription this month, because I respect myself and deserve better, more transparent treatment.

If you must: in my experience DeepSeek v4 is incredible value in every respect. Pricing is transparent.

But like I said, even though I have funds in different AI gateways, I prefer to write by hand because I don't want surprising bugs and unnecessary code in my end result.

2ndorderthought 2 hours ago | parent | next [-]

I did this, and I use small local models as a productivity booster. It's been refreshing.

bombcar an hour ago | parent [-]

Hints or tips on how to start with local models? I’m considering a new MacBook Pro and wondering if I should take that into account.

2ndorderthought an hour ago | parent [-]

The biggest tip I have is to set a budget. Then try some models out, either on cloud instances or on a computer you own, and see if they work for you.

Spec your machine accordingly. Some models I recommend trying to get a feel for what's out there: Qwen 3.6 35b a3b, granite4.1 8b, llama 3.2 3b.

There are plenty of others but those give a good taste for different sizes and what they can do. If it's not enough then you are out maybe 5 bucks.
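On the "spec your machine accordingly" point, a very rough rule of thumb is that a model's weights take roughly (parameter count × bytes per weight), plus some runtime overhead for the KV cache and framework. This little sketch is my own back-of-envelope estimate, not anything official; it assumes 4-bit quantization and ~30% overhead, and real usage varies a lot with context length:

```python
def estimate_ram_gb(params_billion: float,
                    bits_per_weight: int = 4,
                    overhead: float = 1.3) -> float:
    """Very rough RAM/VRAM estimate in GB for running a model locally.

    params_billion: model size in billions of parameters
    bits_per_weight: quantization level (4-bit is a common default)
    overhead: fudge factor for KV cache and runtime (assumed ~30%)
    """
    bytes_per_weight = bits_per_weight / 8
    return params_billion * bytes_per_weight * overhead

# An 8b-class model at 4-bit quantization:
print(round(estimate_ram_gb(8), 1))   # ~5.2 GB
# A 35b-class model at 4-bit quantization:
print(round(estimate_ram_gb(35), 1))  # ~22.8 GB
```

By that estimate, a base MacBook Pro handles the small models comfortably, but the 30b+ class is where you start wanting the bigger unified-memory configs.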

Also check in with r/localllama; they have a bunch of people who can help you go further, spec machines, and get better performance and results. If you don't want to post, that's cool, as there are lots of existing comments on how to get going. They're pretty friendly, though, so I'd read the rules and make a post asking for help.

sunnybeetroot an hour ago | parent | prev | next [-]

You can use an LLM, review the code and therefore avoid surprising bugs and unnecessary code in your end result.

ai_terk_er_jerb an hour ago | parent | prev | next [-]

Admittedly I haven't used DeepSeek v4, but v3 was so overhyped and bad that I'm reluctant to waste my time on it.

Maybe you will inspire me to use it.

dgellow 2 hours ago | parent | prev | next [-]

So close to doing the same

cyanydeez an hour ago | parent | prev [-]

Installing a local model gives you time to work on the important code and lets the AI do the drudgery.