anon-3988 | 4 hours ago
I hadn't thought about it this way, but I have been feeling that something is off, and I think you got it right. There is a HUGE difference between "I understand the concept" and "I can write the concept down on paper". Everyone fools themselves into thinking they understand, but the illusion immediately falls apart when they try to write it down. The problem with LLMs is that they actually can produce something that "works". But more often than not, what they produce is beyond what the author actually understands.

Arguably, one could ask, "Why does it matter?" If there are enough tests and monitoring to capture the behavior of the program, who cares how it is implemented?

To me this is extremely disappointing. I have always wanted to write software that lasts for a century, but if software becomes a commodity, I see software quality mattering less and less. If something is broken, just ask the LLM to patch it. Unlike a human, there is no limit to this. The LLM will happily fix that 50-year-old Fortran code that no one understands. There is a lot less pressure to rethink the fundamental principles.