roenxi 4 days ago
> This is a huge red flag that you have no idea what you're doing at the fundamental software architecture level imo. Or at least you have bad process (prior to LLMs).

Particularly in the present. If any of the current models can consistently make senior-level decisions, I'd like to know which ones they are. They'll probably cross that boundary soon, but they aren't there yet; they go haywire too often. Anyone who codes using only the current generation of LLMs, without reviewing the code, is surely capping their code quality in a way that will hurt maintainability.
andrei_says_ 4 days ago | parent
> They're probably going to cross that boundary soon

How? There's no understanding, just output of highly probable text suggestions that sometimes coincide with correct ones. Correctness exists only in the understanding of humans. Even when writing code against tests, there are infinite ways to keep the tests green and break things anyway.
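To make the "green tests, broken code" point concrete, here is a minimal, hypothetical sketch (names and numbers are invented for illustration): the single test happens to pass even though the implementation is wrong for almost every other input.

```python
def apply_discount(price: float, percent: float) -> float:
    # Bug: subtracts the raw percent instead of percent/100 of the price.
    return price - percent


def test_apply_discount():
    # Green: 100 - 10 == 90, which coincidentally equals 100 * 0.9.
    # The bug goes unnoticed because the test only covers this one case
    # (e.g. apply_discount(50, 10) returns 40, not the correct 45).
    assert apply_discount(100, 10) == 90
```

A test suite only constrains the cases it actually exercises; everything outside them is still on the author (or reviewer) to get right.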