martin-t 6 hours ago
As opposed to an irregular person? LLMs are not persons, not even legal ones (legal personhood is itself a massive hack that causes serious problems, such as corporate finances being used for political gain). A human has moral value; a text model does not. A human is limited in both time and memory; a model of text is not. I don't see why comparisons to humans have any relevance. Just because a human can do something does not mean machines run by corporations should be able to do it en masse.

The rules of copyright allow humans to do certain things because:

- Learning enriches the human.
- Once a human consumes information, they can't willingly forget it.
- It is impossible to prove how much a human-created intellectual work is based on others.

With LLMs:

- Training (let's not anthropomorphize: lossily compressing input data by detecting and extracting patterns) enriches only the corporation that owns the model.
- It's perfectly possible to build a model based only on content under specific licenses, or only on the public domain.
- It's possible to trace every single output byte back to quantifiable influences from every single input byte. It's just not an interesting line of inquiry for the corporations benefiting from the legal gray area.
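The license-filtering point is mechanically simple; a minimal sketch of how a license-gated training corpus could be assembled (the documents, the `license` field, and the allow-list below are all hypothetical illustrations, not any real pipeline):

```python
# Minimal sketch: keep only documents whose declared license is on an
# allow-list before they ever reach training. All data here is made up.
ALLOWED_LICENSES = {"CC0-1.0", "public-domain", "CC-BY-4.0"}

corpus = [
    {"text": "some document", "license": "CC0-1.0"},
    {"text": "another document", "license": "proprietary"},
    {"text": "a third document", "license": "public-domain"},
]

def filter_by_license(docs, allowed):
    """Return only the documents whose declared license is in `allowed`."""
    return [d for d in docs if d.get("license") in allowed]

training_set = filter_by_license(corpus, ALLOWED_LICENSES)
print(len(training_set))  # 2 of the 3 sample documents pass the filter
```

The hard part is provenance metadata, not the filter itself; but that only reinforces the point that the obstacle is record-keeping effort, not technical impossibility.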