ronsor | 4 hours ago
| Royalties for inference are unrealistic in a way that even royalties for training aren't. The LLaMA models were released openly. Copies exist everywhere in the world. You aren't going to be able to charge someone for running `llama.cpp`; a court order ceases to have practical relevance at that point. |
|
eaglelamp | 4 hours ago
| Inference might be unreasonable for a royalty agreement, but, in assessing damages, it is certainly relevant. "I made enough copies for everyone" isn't a valid defense for copyright infringement. |
|
swader999 | 4 hours ago
These models can provide citations, so I don't see why they couldn't tally the royalties owed. I'm sure many here could help build this pipeline.
Aurornis | 4 hours ago

First, LLMs do not reliably cite works. They are not looking things up in a database and repeating them; that misconception is common among people who don't understand what LLMs are or how they work.

Second, royalties are not required to cite a source. Can you imagine how disastrous it would be for everything from news reporting to scientific publishing if that were the case?
swader999 | 4 hours ago

Yeah, well, then I want my robot running this crap locally in its brain so I can get it to farm my two acres and haul water for me, and I'll unplug from the rest of this nonsense going forward, lol.
ronsor | 4 hours ago

LLMs cannot reliably provide citations. If you ask for citations and the model did not use a web search tool, then whatever "citations" you receive are unreliable. Please do not trust these models to be honest: just because they can discuss a topic doesn't mean they "know" where the knowledge came from, in the same way that you don't need to have studied physics to catch a ball.
|