| ▲ | georgemcbay 5 hours ago |
IMO, around December of last year, LLM output (for coding at least, not for everything) went from "almost 100% certainly slop" to "probably not slop, if you asked for the right thing while being aware of context limitations". A lot of people seem stuck with their older (correct at the time) view that they always produce slop.

FWIW, I am more of an AI doomer (in the sense that I think the economic results will be disastrous for knowledge workers, given our political realities) than a booster, but in terms of utility for getting work done, they did pass a clear inflection point quite recently.
| ▲ | bluefirebrand 5 hours ago | parent [-] |
> if you asked for the right thing while being aware of context limitations

So, still pretty likely to produce slop in a large majority of cases.

If the most useful place for them is where you've already specced things out to that degree of precision, then they aren't that useful. Speccing things to that precision is the time-consuming and difficult work anyway.