phendrenad2 | 8 hours ago
Sigh. Another one standing on the train tracks giving the approaching train a good scolding. First this article tries to equate AI-generated code with "forgery". Please, tell me how you "forge math".

Next, it makes a little dig at senior engineers who use LLMs, because they must not realize that "every line of code is a liability". No, senior engineers realize this, but they are also adept at observing successes and failures and building a mental model for risk. That's part of keeping an application running; otherwise we'd all still be using jQuery and left-pad. We made the jump to React because we recognized that those NEW lines of code were far more valuable than their "liability". Somehow the author decided to store "liability" in a boolean. Oh, was AI involved, or is that a genuine human error?

Next the article makes a tired appeal to the fact that LLMs are trained on open-source code and are therefore "plagiarizing" this code constantly. This is where the train comes around the mountain. So when the AI generates Carmack's Reverse, is it plagiarizing Carmack, or the book he got the idea from? In what percentages? And what do I do with this valuable insight? Send Carmack $0.01 in an envelope for the privilege?

In short, I don't know what the author wants, but I hope writing this helped.