simonw, 2 days ago:
At this point, having any LLM write code without giving it an environment that allows it to execute that code itself is like rolling a heavily-biased random number generator and hoping you get a useful result. Things get so much more interesting when they're able to execute the code they are writing to see if it actually works.
fragmede, 2 days ago:
So much this. Do we program by writing reams of code, never running the compiler until it's all written, and then judge the programmer as terrible when it doesn't compile? Or do we write code incrementally, compiling and testing as we go? So why do we think making the AI work blind, and fail, is setting it up for success? If I wrote code on a whiteboard and was judged on syntax errors, I'd never have gotten a job. Give the AI the tools it needs to succeed, just like you would for a human.