rustystump | 6 days ago
Another hard disagree. The crux here is that if you are not an expert in the given domain, you do not know where that missing 25% is wrong. You think you do, but you don't. I have seen people bring in thousands of lines of OpenCV LUT code in AI-slop form because they didn't understand how to interpolate between two colors, and didn't have the experience to know that that was what they needed to do. This is the catch-22 of the AI-expert narrative. The other part is that improvement has massively stagnated in the space. It is painfully obvious, too.
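For context, interpolating between two colors is a few lines of NumPy. Something like this sketch is all that was needed (the function name and the blue-to-orange ramp are mine, not the actual project's code):

    import numpy as np

    def build_gradient_lut(c0, c1, steps=256):
        """Linearly interpolate from color c0 to c1, one LUT entry per step."""
        t = np.linspace(0.0, 1.0, steps)[:, None]   # (steps, 1) blend factor
        c0 = np.asarray(c0, dtype=np.float64)       # e.g. an RGB triple
        c1 = np.asarray(c1, dtype=np.float64)
        return ((1.0 - t) * c0 + t * c1).round().astype(np.uint8)

    # Map 8-bit intensities onto a blue-to-orange ramp.
    lut = build_gradient_lut((0, 0, 255), (255, 165, 0))
    print(lut[0], lut[128], lut[255])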
A4ET8a8uTh0_v2 | 6 days ago | parent
<< you do not know where that missing 25% is wrong

I think there is something to this line of thinking. I just finished a bigger project and, without going into details, one person from a team supposedly dedicated to providing viable data about data was producing odd results. Since the data was not making much sense, I asked for info on how it was produced. I was given a SQL script and an "and then we applied some regex" explanation. Long story short, I dug in and found that the applied regex had messed with the dates in an unexpected way (something like the sketch below), and I knew because I knew the 'shape' the data was expected to have. I corrected it, because we were right around the deadline, but... I noted it.

Anyway, I still see LLMs as a tool, but I think there is some reckoning on the horizon as:

1. managers push for more use and speed given the new tool, and

2. people get there faster but wronger, because they go with 1 and do not check the output (or don't know how to check it, or don't know when it's wrong).

It won't end well, because the culture does not reward careful consideration.
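To make the failure mode concrete, here is a hypothetical reconstruction (not the actual regex from the project): a substitution meant to normalize dates where the capture groups get swapped. Nothing errors out, every output still looks like a valid date, and only someone who knows the expected shape of the data would catch it.

    import re

    # Meant to normalize MM/DD/YYYY into ISO 8601 (YYYY-MM-DD).
    date_re = re.compile(r"(\d{2})/(\d{2})/(\d{4})")

    ok  = date_re.sub(r"\3-\1-\2", "04/09/2024")  # '2024-04-09' -- correct
    bad = date_re.sub(r"\3-\2-\1", "04/09/2024")  # '2024-09-04' -- month and
                                                  # day silently swapped, yet
                                                  # still a valid-looking date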