Ask HN: Are small local LLMs good at coding?
2 points by usermac 12 hours ago | 3 comments
I deal with the professional LLMs, of course, but I'm really intrigued by the possibility of local coding offline. I've got a MacBook Air M4 16gb. Does it have any chance at all of doing coding? NOTE: I am not too worried about the context window because the way I work is very targeted and surgical. I'll have it look at one file and have it do something very exact to that file.
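That "surgical" single-file workflow can be sketched in a few lines. This is only an illustration: `run_local_model` is a hypothetical placeholder for whatever local runtime you end up using (llama.cpp, Ollama, MLX, etc.), not a real API.

```python
from pathlib import Path


def run_local_model(prompt: str) -> str:
    """Hypothetical stand-in for a call to a local model runtime.

    This echo stub just returns the last line of the prompt so the
    sketch runs without any model installed.
    """
    return prompt.splitlines()[-1]


def surgical_edit(path: str, instruction: str) -> str:
    """Send exactly one file plus one precise instruction to the model."""
    source = Path(path).read_text()
    prompt = (
        "You are editing exactly one file. Return the full revised file.\n"
        f"Instruction: {instruction}\n"
        f"--- {path} ---\n"
        f"{source}"
    )
    return run_local_model(prompt)
```

Keeping the prompt to one file and one instruction is also what makes a small context window workable on 16 GB of RAM.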
benchwright 11 hours ago
They can be. I've done some drift detection work against local models and for the most part, they do ok. I think there's always room for an augmented approach where local models handle programmatic parsing and structure and large models handle actual coding routines. I try to use witness coding with local+api where possible, to see if there are capabilities that get caught by one side or the other.
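The local+API "witness" idea described above could look something like this. A minimal sketch, assuming the two models are plain callables; in practice each would wrap a local runtime and a hosted API client, and the 0.9 agreement threshold is arbitrary.

```python
import difflib
from typing import Callable


def witness_compare(prompt: str,
                    local_model: Callable[[str], str],
                    api_model: Callable[[str], str]) -> dict:
    """Run the same prompt through a local and an API model and
    report how closely their outputs agree."""
    local_out = local_model(prompt)
    api_out = api_model(prompt)
    similarity = difflib.SequenceMatcher(None, local_out, api_out).ratio()
    return {
        "local": local_out,
        "api": api_out,
        "similarity": similarity,   # 1.0 means identical outputs
        "agree": similarity > 0.9,  # arbitrary agreement threshold
    }


# Stub models standing in for a local runtime and a hosted API.
local = lambda p: "def add(a, b):\n    return a + b\n"
api = lambda p: "def add(a, b):\n    return a + b\n"

report = witness_compare("Write an add function", local, api)
print(report["agree"])  # True for these identical stubs
```

Low-similarity cases flag prompts where the local model diverges from the large one, which is exactly the capability gap worth inspecting by hand.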
mockbolt 11 hours ago
[dead]
anonymousemail 11 hours ago
[dead]