justin_dash 8 hours ago
So at this point I think it's pretty obvious that RLHFing LLMs to follow instructions causes this. I'm interested in a loop of ["criticize this code harshly" -> "now implement those changes" -> open a new chat, repeat]: if we could graph objective code quality versus iterations, what would that graph look like? I tried it out a couple of times but ran out of Claude usage. I'd also be curious how the results vary depending on how complete a set of specs you give it.
IncreasePosts 6 hours ago | parent
In my experience, prompting LLMs to be critical leads them to imagine issues or to bikeshed.