pritambarhate | 2 hours ago
Even with LLMs, delivering software that consistently works requires quite a bit of effort and, in most cases, a certain level of expertise. Humans also write quite a bit of garbage code. People using LLMs to code these days is similar to how the majority of programmers stopped using assembly and moved to C and C++, then to garbage-collected and dynamically typed languages. People have always been looking for ways to make programmers more productive. Programming is evolving. LLMs are just the next generation of programming tools. They make programmers more productive, and in the majority of cases people and companies are going to use them more and more.
fauigerzigerk | an hour ago
I'm not opposed to AI-generated code in principle. I'm just saying that we don't know how much effort was put into making this, and we don't know whether it works. The existence of a repository containing hundreds of files, thousands of SLOCs and a folder full of tests tells us less today than it used to.

There's one thing in particular that I find quite astonishing sometimes. I don't know about this particular project, but some people use LLMs to generate both the implementation and the test cases. What does that mean? The test cases are supposed to be the formal specification of our requirements. If we do not specify formally what we expect a tool to do, how do we know whether the tool has done what we expected, including in edge cases?
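To make the circularity concrete, here's a minimal sketch (a hypothetical function, not from this project) of why tests derived from the implementation can't serve as a specification:

    # Hypothetical implementation that forgets leap years.
    def days_in_month(month: int, year: int) -> int:
        lengths = [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]
        return lengths[month - 1]

    # A test generated by reading the implementation just re-encodes
    # its output, so the leap-year bug passes:
    def test_february_from_implementation():
        assert days_in_month(2, 2024) == 28  # green, but wrong

    # A test written from the requirement catches it:
    def test_february_from_requirement():
        assert days_in_month(2, 2024) == 29  # fails, exposing the bug

Run both under pytest and only the requirement-driven test fails; the implementation-derived one happily certifies the bug.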