| ▲ | jollymonATX 10 hours ago |
This is my hope as well, but fear of AI scraping is real among the folks I've chatted with about this.
|
| ▲ | incompatible 7 hours ago | parent | next [-] |
Fear of AI scraping? I'm just amused at the idea of my words ending up in a chatbot that rewrites what I've written and force-feeds it, in some distorted form, to people silly enough to listen.
|
| ▲ | api 5 hours ago | parent | prev [-] |
If you're putting something out for free, for anyone to see, link, and copy, why is LLM training on it a problem? How is that different from someone archiving it in their RSS reader, or from it being archived by any number of archive sites? If you don't want to give it away openly, publish it as a book or as an essay in a paid publication.

| ▲ | jollymonATX 3 hours ago | parent | next [-] |
It's important to consider others' perspectives, even if they're inaccurate. When I suggested "why not write a blog" to a relative who is into niche bug photography and collecting, they said they didn't want their writing, and especially their photos, to be trained on. Honestly they have valid points and an accurate framing of what will happen: it will likely get ingested eventually. I think they overestimate their work's importance a tad, but they still seemed to have a pretty accurate gauge of the likely outcomes. Let me flip the question: why shouldn't they be able to choose "not for training use" even if they put it up publicly?
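
For what it's worth, the closest mechanism today is a robots.txt opt-out that the big AI crawlers say they honor. A minimal sketch, assuming the current crop of training user-agents (the list changes over time, and compliance is voluntary):

  # robots.txt: ask known AI training crawlers to stay out,
  # while leaving the site open to everything else.
  User-agent: GPTBot
  Disallow: /

  User-agent: ClaudeBot
  Disallow: /

  User-agent: Google-Extended
  Disallow: /

  User-agent: CCBot
  Disallow: /

  User-agent: *
  Allow: /

Of course, this only works against crawlers that choose to respect it, which is exactly why "not for training" as an enforceable choice would matter.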

| ▲ | justinator 4 hours ago | parent | prev [-] |
This is not an answer to your question, but one issue is that if you write about some niche thing (as you do, on a self-hosted blog) that no one else is really writing about, the LLM will take it as its sole source on the topic and serve up its take almost word for word. That's clearly plagiarism, but it's also interesting to me that there's really no way for the user querying their favorite AI chatbot to tell whether the answer has any truthiness. I can see a few ways this could be abused.
|