CognitiveLens a day ago
But as mlinsey suggests, what if it's influenced in small, indirect ways by 1000 different people, kind of like every 'original' idea from a trained professional is? There's a spectrum, and it's inaccurate to claim that Claude's responses are comparable to adapting one individual's work for another use case - that's not how LLMs operate on open-ended tasks, although they can be instructed to do that and will produce reasonable-looking output. Programmers are not expected to add an addendum to every file listing all the books, articles, and conversations that influenced a particular code solution. LLMs are trained on far more sources that influence their code suggestions, yet it seems we actually want a higher standard of attribution for them because they (arguably) are incapable of original thought.
saalweachter a day ago
It's not uncommon, in a well-written code base, to see documentation on functions or algorithms noting where they came from. This isn't just giving credit; it's valuable documentation. If you later find a bug in the function or want to modify it, the original source might not have the bug, might have already fixed it, or might have additional functionality that wasn't needed in the first copy but becomes useful when you copy the code to a third location.
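For example, a provenance comment might look like this (a hypothetical sketch; the function and the Knuth citation are chosen purely for illustration):

    #include <stddef.h>
    #include <stdlib.h>

    /* Fisher-Yates shuffle.
     * Source: Knuth, TAOCP Vol. 2, Algorithm P (Shuffling), sec. 3.4.2.
     * Note: rand() % (i + 1) has modulo bias for large i; see the source
     * for the unbiased rejection-sampling variant before reusing this
     * where the bias matters.
     */
    static void shuffle(int *a, size_t n)
    {
        if (n < 2)
            return;
        for (size_t i = n - 1; i > 0; i--) {
            /* pick j uniformly (modulo bias aside) from [0, i] */
            size_t j = (size_t)rand() % (i + 1);
            int tmp = a[i];
            a[i] = a[j];
            a[j] = tmp;
        }
    }

A later maintainer who suspects a bug knows exactly where to check for a fix or a more complete variant.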
sarchertech a day ago
If the problem you ask it to solve has only one or a few public examples, or if the same solution has been copy-pasted many times, LLMs can and will produce code that would be called plagiarism if a human wrote it.
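The canonical illustration is the fast inverse square root from Quake III Arena: it has essentially one origin, and a model emitting the magic constant 0x5f3759df is reproducing that specific source, not blending a thousand influences. A sketch of the well-known routine (using memcpy rather than the original's pointer cast, so the bit reinterpretation is well-defined C):

    #include <stdint.h>
    #include <string.h>

    /* Fast inverse square root, as popularized by Quake III Arena.
     * The constant 0x5f3759df is a fingerprint of that one source. */
    static float q_rsqrt(float number)
    {
        float x2 = number * 0.5f;
        float y = number;
        uint32_t i;
        memcpy(&i, &y, sizeof i);    /* reinterpret float bits as int */
        i = 0x5f3759df - (i >> 1);   /* magic-constant initial guess  */
        memcpy(&y, &i, sizeof y);
        y = y * (1.5f - x2 * y * y); /* one Newton-Raphson step       */
        return y;
    }

Any competent reviewer would recognize this as copied, regardless of whether a human or a model typed it.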