| ▲ | embedding-shape 9 hours ago |
| But these are inherently subjective things. What the "right idea" or the "right implementation" is lives in our heads; we can try to verbalize it, but I don't think you can come up with an objective score for it. Ask 100 programmers and you'll get 100 different answers about what "clean design" is. |
|
| ▲ | quotemstr 9 hours ago | parent [-] |
| And that's why my whole schtick when it comes to agent design is that agents need to learn online, continuously, and in adapter space via some PEFT mechanism (I like soft prompts and prefix tuning), because it's really hard to ascend gradients in discrete domains like tokens. |
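For concreteness, here is a minimal sketch of the kind of mechanism described above: a frozen base model adapted through a small learned soft prompt (continuous "virtual tokens" in embedding space) rather than through discrete token edits. The model name, hyperparameters, and feedback string are placeholder assumptions, and it uses the Hugging Face peft library's prompt-tuning config as one common PEFT implementation, not necessarily the exact setup the commenter has in mind.

```python
# Sketch only: online soft-prompt tuning against a frozen base model.
# Assumptions: "gpt2" as a stand-in model, toy hyperparameters, a fabricated feedback example.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PromptTuningConfig, TaskType, get_peft_model

model_name = "gpt2"  # placeholder; any causal LM works
tokenizer = AutoTokenizer.from_pretrained(model_name)
base = AutoModelForCausalLM.from_pretrained(model_name)

# 20 learnable "virtual tokens" get prepended in embedding space; the base weights stay
# frozen, so each update only moves the adapter, keeping continual updates cheap and reversible.
peft_config = PromptTuningConfig(task_type=TaskType.CAUSAL_LM, num_virtual_tokens=20)
model = get_peft_model(base, peft_config)
model.print_trainable_parameters()  # only the soft prompt is trainable

# Online step: treat a piece of feedback (e.g. a correction the user accepted) as a tiny
# supervised example and take one gradient step on the soft prompt alone.
optimizer = torch.optim.AdamW(
    (p for p in model.parameters() if p.requires_grad), lr=3e-3
)
batch = tokenizer("Feedback: prefer pure functions over stateful helpers.", return_tensors="pt")
out = model(**batch, labels=batch["input_ids"])
out.loss.backward()
optimizer.step()
optimizer.zero_grad()
```

Prefix tuning is the same idea applied at every attention layer (peft's PrefixTuningConfig); both keep the updates in a continuous space where gradient ascent is straightforward, unlike search over discrete tokens.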
| ▲ | embedding-shape 9 hours ago | parent [-]
| > The model knows damn well when it's written ugly code. You can just ask it.
| That's not been my experience at all; what model and prompt would you use for that? Every single one I've tried is oblivious to whether a design makes sense or not unless explicitly prompted for it with constraints, future ideas and so on.
| ▲ | CuriouslyC 7 hours ago | parent [-]
| The problem is that the model doesn't know what you mean by "bad code" a priori. If you list specific issues you care about (e.g. separation of concerns, don't repeat yourself, single responsibility, prefer pure functions, etc.) it's pretty good at picking them out. Humans have this problem as well, we're just more opinionated.
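A sketch of what "listing the specific issues" can look like in practice: an explicit rubric baked into the review prompt, so the model grades against named criteria instead of guessing what "bad code" means. The rubric items come straight from the comment above; the `ask` callable is a hypothetical stand-in for whatever chat-completion call you actually use.

```python
# Sketch only: turn a vague "is this bad code?" question into a review against named criteria.
# `ask` is a placeholder for the model call of your choice; nothing here is tied to one API.
from typing import Callable

RUBRIC = [
    "separation of concerns",
    "don't repeat yourself",
    "single responsibility",
    "prefer pure functions",
]

def build_review_prompt(code: str) -> str:
    # Spell out each criterion explicitly so the model reviews against it by name.
    criteria = "\n".join(f"- {c}" for c in RUBRIC)
    return (
        "Review the following code. For each criterion, point to the specific parts "
        "that violate it, or say 'OK' if none do.\n"
        f"Criteria:\n{criteria}\n\n"
        f"Code to review:\n{code}"
    )

def review(code: str, ask: Callable[[str], str]) -> str:
    # ask() wraps your chat-completion endpoint; injected here so the sketch stays model-agnostic.
    return ask(build_review_prompt(code))
```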
| ▲ | embedding-shape 6 hours ago | parent [-]
| Yes, that's exactly what I mentioned earlier: if you describe the implementation, you can get something you can work with long-term. But if you just describe an idea and let the LLM do both the design of the implementation and the implementation itself, it eventually seems to fall over itself, and changes take longer and longer. |
|
|
|