aluzzardi 5 hours ago
> My experience with LLM generated SQL in OLTP and OLAP platforms has been a mixed bag

Models are evolving fast. If your experience is more than a few months old, I encourage you to try again. I mean this with the best intentions: it's seriously mind-boggling.

We started doing this with Sonnet 4.0, and the relevance was okay at best. Then in September we shifted to Sonnet 4.5, and it's been night and day. Every model released since then (Opus 4.5, 4.6) has meaningfully improved the quality of the results.
whoami4041 5 hours ago | parent
I totally agree. However, none of them are infallible, and none ever will be; they're nondeterministic by nature.

There's an interesting psychological nuance that I've noticed even in myself with AI-assisted coding: review/approval fatigue. The model can be chugging along happily for hours and then make a sudden, catastrophic error in the tenth hour, after you've been staring at reasoning traces and logs endlessly. At the tail end of a session, the risk of missing that error is very high.

The point I was making (poorly) is that in this specific domain, where businesses make data-driven decisions on output and insights that can determine the trajectory of the entire organization, human involvement is more critical than it is for, say, writing a Python function with an LLM.
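One way to keep a human in the loop is to gate generated SQL behind basic checks plus explicit approval before anything executes. A minimal sketch, assuming SQLite and a hypothetical `review_gate` helper (nothing from the thread; the read-only check and approval flag are my own illustration):

```python
import sqlite3


def review_gate(conn: sqlite3.Connection, sql: str, approved: bool = False) -> list:
    """Execute LLM-generated SQL only after basic checks and explicit human approval."""
    # Crude guard: allow a single read-only SELECT statement, nothing else.
    stripped = sql.strip().rstrip(";")
    if ";" in stripped or not stripped.lower().startswith("select"):
        raise ValueError("only a single SELECT statement is allowed")
    # EXPLAIN makes SQLite parse and plan the query without touching data,
    # surfacing syntax errors and missing tables before a human ever approves it.
    conn.execute(f"EXPLAIN {stripped}")
    if not approved:
        raise PermissionError("human approval required before execution")
    return conn.execute(stripped).fetchall()
```

The point of the sketch is the ordering: cheap automated checks run first, so the human reviewer only sees queries that at least parse, and nothing runs until someone explicitly flips `approved=True`.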