2ndorderthought | 3 hours ago
The biggest tip I have: set a budget. Then try some models out on either cloud instances or a computer you own, see if they work for you, and spec your machine accordingly. Some models I recommend trying to get a feel for what's out there: Qwen3 30B A3B, Granite 4.1 8B, Llama 3.2 3B. There are plenty of others, but those give a good taste of different sizes and what they can do. If it's not enough, you're out maybe 5 bucks. Also check in with r/LocalLLaMA; they have a bunch of people who can help you go further, spec machines, and get better performance and results. If you don't want to post, that's cool, there are lots of existing comments on how to get going. They're pretty friendly though, so I'd read the rules and make a post asking for help.
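If you want a quick way to kick the tires locally, one common route is Ollama. This is just a sketch, it assumes you've installed Ollama yourself, and the exact model tags below are my guesses, so double-check the names in the Ollama library before pulling (the Granite tag in particular is hypothetical):

```shell
# Install Ollama first: https://ollama.com
# Then pull and chat with a few models of different sizes.
# Tags are assumptions; verify against ollama.com/library.

ollama run llama3.2:3b      # small model, runs on modest hardware
ollama run qwen3:30b        # Qwen3 30B MoE (A3B: ~3B active params), needs more RAM/VRAM
ollama run granite4:8b      # hypothetical tag for an 8B Granite model

# See what you have downloaded
ollama list
```

The same idea works on a rented cloud GPU box: install, pull, chat for an hour, and you've spent a few bucks to learn which size class actually fits your needs before buying hardware.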