gervwyk 4 days ago:
Good point. I'm making the assumption that if the LLM has a more limited feature space to produce as output, then the output is more predictable, and thus changes are faster to comprehend. It's similar to when devs use popular libraries: the abstraction is well known, so there is less "new" code to comprehend. Seeing familiar functions makes the code predictable to me.
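(A minimal sketch of the idea being described, not gervwyk's actual format: the schema, component names, and model output below are hypothetical, and the validation uses the real PyYAML and jsonschema libraries to reject anything outside the allowed feature space.)

```python
import yaml                      # PyYAML
from jsonschema import validate, ValidationError

# Hypothetical schema: a page is a list drawn from a small set of
# known component types, so any invented component is rejected.
PAGE_SCHEMA = {
    "type": "object",
    "properties": {
        "page": {
            "type": "array",
            "items": {
                "type": "object",
                "properties": {
                    "component": {"enum": ["header", "table", "chart"]},
                    "title": {"type": "string"},
                },
                "required": ["component"],
                "additionalProperties": False,
            },
        }
    },
    "required": ["page"],
}

# Hypothetical LLM output in the constrained YAML format.
llm_output = """
page:
  - component: header
    title: Quarterly report
  - component: chart
    title: Revenue by region
"""

try:
    validate(instance=yaml.safe_load(llm_output), schema=PAGE_SCHEMA)
    print("output stays inside the known feature space")
except ValidationError as e:
    # e.g. the model emitted a component type the schema doesn't allow
    print(f"rejected: {e.message}")
```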
hamandcheese a day ago (parent):
I think we are essentially describing the same thing. You just want to achieve it by constraining the output space at a significantly higher level (a YAML schema defines the output space instead of a compiler and/or test suite). I still think you'll be at a significant disadvantage, since the LLM has been trained on millions of lines of every mainstream language, and zero lines of gervwyk's funny YAML lang.