omoikane 2 hours ago
I remember when everything was "machine learning" as opposed to the current LLM stuff. Some of those machine learning techniques involved training and using models that are more or less opaque, and nobody looked at what was inside them because you couldn't understand them anyway. Once LLM-generated code becomes large enough that it's infeasible to review, it will feel just like those machine learning models. But this time around, instead of trying to convince other people who were downstream of the machine learning output, we are trying to convince ourselves: "yes, we don't fully understand it, but don't worry, it's statistically correct most of the time."