embedding-shape 6 hours ago:
Then local models shouldn't suffer from the same problems, but they do. I'd say they just aren't trained in the direction of "less code == better long-term maintainability", rather than it being some grand "increased token usage" conspiracy. You can certainly steer them a bit to reduce the issue the parent talks about, but they still drift that way whenever they can, adding stuff on top of stuff, piling hacks/shims on top of other hacks/shims, just like many human developers :)
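(As a very rough sketch of what I mean by steering, assuming a local model served through an OpenAI-compatible endpoint such as Ollama's; the model name and prompt wording are just illustrative, not a recipe:)

    # Sketch only: nudging a local model toward minimal changes via system prompt.
    # Assumes a local OpenAI-compatible server (e.g. Ollama on localhost:11434)
    # and the "openai" Python package; the model name below is illustrative.
    from openai import OpenAI

    client = OpenAI(base_url="http://localhost:11434/v1", api_key="unused")

    SYSTEM = (
        "You are a code assistant. Prefer the smallest change that solves the "
        "problem. Do not add wrappers, shims, or new abstractions unless asked. "
        "If existing code can be deleted instead, delete it."
    )

    resp = client.chat.completions.create(
        model="qwen2.5-coder:14b",  # whatever local model you run
        messages=[
            {"role": "system", "content": SYSTEM},
            {"role": "user", "content": "Fix the off-by-one in paginate() without adding new helpers."},
        ],
        temperature=0.2,
    )
    print(resp.choices[0].message.content)

It helps, but in my experience the model still leans toward adding rather than removing.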
bonesss 6 hours ago:
Training data is the mass of code from everyone. Restrict that data to just the best of the best, the tersest of the terse, and we'd see better output. I don't think people are sharing that kind of stuff (Jane Street's gems stay locked up), and even if they did, my presumption is that it'd be too narrow and demanding for general audiences. Big hopes for the long future, but damned to some degree of mediocrity in the near-term mass product.
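(Just to make the "restrict the data" idea concrete, a toy sketch of a terseness filter over a corpus; the heuristic and thresholds are made up for illustration, not how any real pipeline curates training sets:)

    # Toy illustration only: crude "keep only the tersest samples" filter.
    def keep_sample(source: str, max_lines: int = 80, max_avg_line_len: float = 100.0) -> bool:
        """Short files with short, non-empty lines pass; everything else is dropped."""
        lines = [line for line in source.splitlines() if line.strip()]
        if not lines or len(lines) > max_lines:
            return False
        avg_len = sum(len(line) for line in lines) / len(lines)
        return avg_len <= max_avg_line_len

    corpus = [
        "def add(a, b):\n    return a + b\n",            # terse: kept
        ("x = 1  # " + "padding " * 40 + "\n") * 200,    # bloated: dropped
    ]
    terse_corpus = [s for s in corpus if keep_sample(s)]

Even with something like that, the hard part is that "terse" and "good" only loosely overlap, which is why I suspect the really valuable corpora stay private.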