lukaslalinsky · 2 days ago
I disagree, even though I'd love for it to be different. With models like Opus, I can give it a good architecture and expect good results. With many of the less expensive models, that's not the case: they make mistakes, you need to over-specify, they get stuck in loops, and so on. By the time you get down to models you can realistically run locally, it's so frustrating I'd rather be writing the code myself.
datsci_est_2015 · 1 day ago
At what point will local inference catch up to today's cloud inference? Will it ever? And if it doesn't, does that imply a dead end for the LLM inference industry?
lukaslalinsky · 5 hours ago
I don't think at any point in the foreseeable future we will have terabytes of RAM for dedicated LLM chips at home.
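A rough back-of-envelope shows the scale. This is a minimal sketch assuming illustrative parameter counts (frontier-model sizes are not public), and it counts weights only, ignoring the KV cache and runtime overhead:

    # Back-of-envelope memory for model weights alone; KV cache,
    # activations, and runtime overhead are excluded. The parameter
    # counts below are illustrative assumptions, not published specs.
    def weight_memory_gb(params_billions: float, bits_per_param: int) -> float:
        return params_billions * 1e9 * bits_per_param / 8 / 1e9

    for params_b, bits, label in [
        (70, 4, "70B model, 4-bit quantized"),
        (405, 16, "405B open-weights model, fp16"),
        (1000, 16, "hypothetical ~1T-param frontier model, fp16"),
    ]:
        print(f"{label}: ~{weight_memory_gb(params_b, bits):,.0f} GB")

    # Output:
    # 70B model, 4-bit quantized: ~35 GB
    # 405B open-weights model, fp16: ~810 GB
    # hypothetical ~1T-param frontier model, fp16: ~2,000 GB

Even aggressive 4-bit quantization of a trillion-parameter model would still want roughly 500 GB of fast memory, well beyond typical consumer hardware.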
|
|
|
varispeed · 2 days ago
The issue is that local models are dumb and tend to make mistakes that look good at first glance. So any "saving" is quickly erased by having to do an extensive review. You might as well just write things yourself.
datsci_est_2015 · 2 days ago
I use it for code scaffolding, which means in a way I'm often rewriting its output anyway. For me, writing from scratch isn't the same amount of effort as working from scaffolded code.
|