conartist6 6 days ago

I sent them a politely worded threat and they responded right away, opting me out:

> Hello, I am writing you as an author of Open Source software seeking to protect my security and that of my users.

> What I would like to know is: how may I prevent deepwiki from indexing my projects, specifically those in the ----- GitHub organization? If you consider yourselves to have implicit legal permission to train on my projects and write about them, know that I hereby explicitly and permanently revoke that permission.

> Since you likely believe that I lack the authority to get you to stop, I will add this:

> To the extent allowed by law I will consider any incorrect information you publish about my projects to be libelous and, given this notice, made intentionally. LLMs have no will to act, so publishing misinformation about my project, at such time as that happens, could only be the result of human will.

> Kind regards,
> Conrad Buck

michaelmior 6 days ago | parent | next

Just because you "consider" incorrect information to be libelous does not mean that it is. While it's true that LLMs have no will to act, the use of an LLM to publish information that ends up being incorrect does not imply that the user of the LLM intended to post incorrect information.

conartist6 6 days ago | parent

In that universe, there is no accountability at all. There is no way the system of law would allow that.

I admit that in my message I was careful not to make an accusation. I only stated my disposition towards their actions.

michaelmior 6 days ago | parent

IANAL, but I don't believe that lack of intent to cause harm means that relief can't be granted. You could still take legal action if they refused to remove the information. To my knowledge, a question that still hasn't been thoroughly tested from a legal perspective is to what degree users of LLMs should reasonably be expected to be aware of the potential for false information, and to what degree continued use despite that knowledge constitutes willful negligence.

Humans can make mistakes too when compiling information, and when mistakes are made unintentionally, without intent to cause harm, I believe liability is typically limited. I would expect the same to be true of LLM use. As long as the user of the LLM has taken reasonable precautions to ensure accuracy, I think liability probably should be limited in most cases. In the case of DeepWiki getting something wrong, I think the case for significant reputational damage is pretty weak.

conartist6 6 days ago | parent

On the question of reasonable precautions: it seems to me that theirs are likely to be what I would consider partially to wholly negligent. The lack of any ability to opt out, for example.

michaelmior 4 days ago | parent

Agreed that there should be an ability to opt out.
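
For context, the conventional opt-out mechanism on the web is robots.txt. Here is a minimal sketch of what a crawler opt-out could look like, with the big caveat that the DeepWikiBot token is hypothetical (DeepWiki doesn't document a crawler user agent); GPTBot, CCBot, and Google-Extended are tokens their operators do publish:

    # robots.txt, served at the site root (e.g. https://example.com/robots.txt)
    # Hypothetical token: DeepWiki publishes no crawler user agent.
    User-agent: DeepWikiBot
    Disallow: /

    # Published tokens for opting out of AI training crawls:
    User-agent: GPTBot
    Disallow: /

    User-agent: CCBot
    Disallow: /

    User-agent: Google-Extended
    Disallow: /

The catch is that robots.txt only binds crawlers that fetch pages over HTTP and choose to comply. A service that clones repositories through Git or the GitHub API never requests robots.txt at all, which is why an explicit opt-out offered by the service itself is the only reliable mechanism here.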

mock-possum 6 days ago | parent | prev

Yuck, based on tone alone I would ignore an email worded this way.

conartist6 6 days ago | parent

Sometimes you have to get angry to set boundaries.

Anyone human who puts their name to their words is free to write about my work.

If their wiki were just a wiki with human and AI contributorship, that would have been a much better product, and I probably would have been fine with it.