nextos a day ago
In the case of LLMs, RAG means it's very often not just learning but almost direct, real-time plagiarism from concrete sources.
doix a day ago
Isn't RAG used for your code rather than other people's code? If I ask it to implement some algorithm, I'd be very surprised if RAG was involved.
sholain a day ago
RAG and LLMs are not the same thing, but 'agents' incorporate both. Maybe we could resolve the OP's conundrum by requiring 'agents' to give credit when they actually RAG something or pull it off the web (a sketch follows below). That still doesn't resolve the 'inherent learning' problem. It's reasonable to suggest that if one person did it, we should give credit, at least in some cases; it's also reasonable that if 1K people have done similar things and the AI learns from that, credit is not something that should apply. But a couple of considerations:

- It may not be that common for an LLM to 'see one thing one time' and then have such an accurate assessment of the solution. It helps, but LLMs tend not to 'learn' things that way.

- Some people might consider this the OSS dream: any code that's public is public, and effectively in the public domain. We don't need to 'give credit' to someone because they solved something relatively arbitrary. Or, if they are concerned with that, we can have a separate mechanism for it: they can put it on GitHub or even Wikipedia, and then 'who thought of it first' becomes a separate consideration. But in terms of engineering application, that would be a bit of a detractor.
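A minimal sketch of that credit mechanism, with everything here hypothetical: the retriever is a toy word-overlap ranker standing in for a real vector index, and the model call is a placeholder. The point is just that the agent tags each retrieved chunk with its source URL and appends credits only for material it actually pulled in at answer time:

    from dataclasses import dataclass

    @dataclass
    class Chunk:
        text: str
        source_url: str  # where this snippet was pulled from

    def retrieve(question: str, corpus: list[Chunk], k: int = 3) -> list[Chunk]:
        # Toy retriever: rank chunks by word overlap with the question.
        # A real agent would query a vector index here.
        q = set(question.lower().split())
        ranked = sorted(corpus, key=lambda c: -len(q & set(c.text.lower().split())))
        return ranked[:k]

    def answer_with_credits(question: str, corpus: list[Chunk]) -> str:
        chunks = retrieve(question, corpus)
        # Placeholder for the actual LLM call, conditioned on the chunks.
        answer = f"[model answer grounded in {len(chunks)} retrieved chunks]"
        # Credit only what was retrieved at answer time; weights-learned
        # knowledge has no concrete source to cite.
        credits = sorted({c.source_url for c in chunks})
        return answer + "\n\nSources:\n" + "\n".join(f"- {u}" for u in credits)

    corpus = [
        Chunk("quicksort partitions the array around a pivot", "https://example.org/qs"),
        Chunk("mergesort splits, sorts halves, then merges", "https://example.org/ms"),
    ]
    print(answer_with_credits("how does quicksort pick a pivot?", corpus))

The attribution attaches at the retrieval boundary, where concrete sources are known; anything baked into the weights has no single source to cite, which is exactly the 'inherent learning' gap left open above.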