oblio 4 days ago
I think the point was that, for programming at least, people perceive state-of-the-art LLMs as net-positive contributors for mainstream languages and tasks, whereas local LLMs aren't net positive (i.e., an experienced programmer can build the same thing at least as fast without one).
segmondy 4 days ago | parent [-]
I know this is false. DeepSeek-V3.1, GLM-4.5, Kimi-K2-0905, and Qwen-235B are all solid open models. Last night I vibe-coded roughly 1,300 lines of C server code in about an hour: zero compilation errors, it ran without errors, and it got the job done. I want to meet the experienced programmer who can knock out 1,300 lines of C code in an hour.