| ▲ | rprend 3 days ago | parent [-] | | They’re not lying when they say they have AI write their code, so it’s not just promotion. They will thrive or die by this thesis. If present YC portfolio companies underperform the market in 5-10 years, that’s a strong signal the AI skeptics were right; if they outperform, that’s a strong signal the skeptics were wrong. 3. You are absolutely right. New startups have greenfield projects that are in-distribution for AI, which gives them faster iteration speed. That means new companies have a structural advantage over older ones, and I expect them to grow faster than tech startups that don’t do this. Plenty of legacy codebases will stick around, for the same reasons they always do: once you’ve solved a problem, the worst thing you can do is rewrite your solution on a new architecture with a better devex. My prediction: if you want to keep the code-writing and office culture of the 2010s, get a job at a cloud computing company (AWS, GCP, etc.). High-reliability systems have less to gain from iteration speed; that’s why airlines and banks maintain their mainframes.
| ▲ | esafak 3 days ago | parent [-] | | They do. Where did you get this? All the providers have clauses like this: "4.1. Generally. Customer and Customer’s End Users may provide Input and receive Output. As between Customer and OpenAI, to the extent permitted by applicable law, Customer: (a) retains all ownership rights in Input; and (b) owns all Output. OpenAI hereby assigns to Customer all OpenAI’s right, title, and interest, if any, in and to Output." https://openai.com/policies/services-agreement/
| ▲ | shakna 3 days ago | parent | next [-] | | The outputs of AI are most likely in the public domain: the output of an automated process is public domain, and the companies claim fair use when scraping, which makes the input unencumbered too. It wouldn't be OpenAI holding copyright - it would be no one holding copyright.
| ▲ | bcrosby95 3 days ago | parent | next [-] | | Courts have already leaned this way too, but who knows what'll happen when companies with large legal funds enter the arena.
| ▲ | macrolime 3 days ago | parent | prev [-] | | So you're saying machine code is public domain if it's compiled from C? If not, why would AI-generated code be any different?
| ▲ | fhd2 3 days ago | parent | next [-] | | That would be considered a derivative work of the C code, and therefore copyright protected, I believe. Can you replay all of your prompts exactly the way you wrote them and get the same behaviour out of the LLM generated code? In that case, the situation might be similar. If that's not the case, i.e. you're prodding an LLM to give you a variety of results, probably not. But significantly editing LLM generated code _should_ make it your copyright again, I believe. Hard to say, since this hasn't really been tested in the courts yet, to my knowledge. The most interesting question, to me, is: who cares? If we reach a point where highly valuable software is largely vibe coded, what do I get out of a lack of copyright protection? I could likely write down the behaviour of the system and generate a fairly similar one anyway. And how would I even be able to tell, without insider knowledge, what percentage of a code base is generated? There are some interesting abuses of copyright law that would become more vulnerable, though. I was once involved in a case where the court decided that hiding a website's "disable your ad blocker or leave" popup was a case of "circumventing effective copyright protection". In this day and age, they might have had to produce proof that the popup was, indeed, copyright protected.
| ▲ | macrolime 3 days ago | parent [-] | | "Can you replay all of your prompts exactly the way you wrote them and get the same behaviour out of the LLM generated code? In that case, the situation might be similar. If that's not the case, probably not." Yes and no. It's possible in theory, but in practice it requires control over the sampling seed, which the hosted AI coding tools typically don't expose. If you're running local models, you can fix the seed and make generation deterministic. That said, you don't necessarily get a 100% deterministic build when compiling code either.
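To illustrate the point about local models: with a local model you control the RNG, so the same prompt with the same seed gives you the same tokens back. A minimal sketch, assuming Hugging Face transformers; the model path is a placeholder, any locally hosted causal LM would do:

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    MODEL = "path/to/local-model"  # placeholder: any local causal LM

    tokenizer = AutoTokenizer.from_pretrained(MODEL)
    model = AutoModelForCausalLM.from_pretrained(MODEL)

    def generate(prompt: str, seed: int = 42) -> str:
        # Re-seeding before every call makes sampling repeatable
        # (on the same hardware and library versions).
        torch.manual_seed(seed)
        inputs = tokenizer(prompt, return_tensors="pt")
        output = model.generate(
            **inputs,
            max_new_tokens=256,
            do_sample=True,
            temperature=0.8,
        )
        return tokenizer.decode(output[0], skip_special_tokens=True)

    # Same prompt + same seed -> same output; change either and it diverges.
    a = generate("Write fizzbuzz in Python.")
    b = generate("Write fizzbuzz in Python.")
    assert a == b

Even then, determinism only holds modulo floating-point nondeterminism on GPUs, which is part of why the hosted tools don't promise it.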
| ▲ | fhd2 2 days ago | parent [-] | | That would be interesting. I don't believe it's legally relevant whether you get 100% the same bytes every time a derivative work is created the same way. Take filters applied to copyright-protected photos: you might not get the exact same bytes every time you run one, but it looks the same, and it's clearly a derivative work. So in my understanding (not as a lawyer, but as someone who's had to deal with legal issues around software a lot), if you _save_ all the inputs that lead the LLM to create pretty much the same system with the same behaviour, you could probably argue that the result is a derivative work of your input (which is creative work done by a human), and therefore copyright protected. If you don't keep your input, it's harder to argue, because you can't prove your authorship. It probably comes down to the details. If your prompt is "make me some kind of blog", that's probably too trivial and unspecific to benefit from copyright protection. If you specify requirements to the degree where they resemble code in natural language (minus boilerplate), different story, I think.
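To make the "keep your inputs" idea concrete: the cheap version is an append-only log recording every prompt, model, and seed alongside a hash of the output, so you can later show exactly which human-authored input produced which code. A minimal sketch; the file name and record fields are illustrative, not any real tool's format:

    import hashlib
    import json
    import time

    def log_generation(prompt: str, output: str, model: str, seed: int,
                       path: str = "provenance.jsonl") -> None:
        # One JSON record per generation: the prompt is the human-authored
        # creative input; the hash ties the record to the produced code.
        record = {
            "timestamp": time.time(),
            "model": model,
            "seed": seed,
            "prompt": prompt,
            "output_sha256": hashlib.sha256(output.encode()).hexdigest(),
        }
        with open(path, "a") as f:
            f.write(json.dumps(record) + "\n")

Whether a court would accept such a log as proof of authorship is exactly the untested question, but without it there is nothing to argue from.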
| ▲ | shakna 3 days ago | parent | prev | next [-] | | Derivatives inherit. Public domain in, public domain out. Copyrighted in, copyrighted out. Your compiled code is subject to your copyright. You need "significant" changes to public domain material to make it yours again. Because LLMs are predicated on the massive use of public data, their output has to be public domain. Otherwise you'd be violating the copyright of the training data - hundreds of thousands of individuals.
| ▲ | tapoxi 3 days ago | parent | prev | next [-] | | The Monkey Selfie case set the stage here: merely setting up an automated process is not enough to claim copyright over the resulting work.
| ▲ | immibis 2 days ago | parent | prev [-] | | No, and your comment is ridiculously bad faith. Courts ruled that outputs of LLMs are not copyrightable. They did not rule that outputs of compilers are not copyrightable.
| ▲ | ranger_danger 2 days ago | parent [-] | | I think that lawsuit was BS because it went on the assumption that the LLM was acting 100% autonomously with zero human input, which is not how the vast majority of them work. Same for compilers... a human has to give it instructions on what to generate, and I think that should be considered a derivative work that is copyrightable.
| ▲ | shakna 2 days ago | parent [-] | | If that is the case, then it becomes likely that LLMs are violating the implicit copyright of their sources. If the prompt makes the output a derivative work, then the rest is also derivative.
| ▲ | immibis a day ago | parent | next [-] | | The sensible options were that either LLM outputs are derivative of all their training data, or they're new works produced by the machine, which is not a human, and therefore not copyrightable. Courts have decided they're new works which are not copyrightable.
| ▲ | ranger_danger 2 days ago | parent | prev [-] | | I would say all art is derivative, basically a sum of our influences, whether human or machine. And it's complicated, but derivative works can be copyrighted, at least in part, without inherently violating any laws related to the original work, depending on how much has changed/how obvious it is, and depending on each individual judge's subjective opinion. https://www.legalzoom.com/articles/what-are-derivative-works...
| ▲ | shakna 2 days ago | parent [-] | | If all art is derivative, then the argument also applies to the LLM output: if the input has copyright, so does the output; if the input does not, then neither does the output. A prompt alone is not enough to claim artistry, because the weights have a greater influence, and you cannot separate out the parts from the sum.
| ▲ | robocat 3 days ago | parent | prev | next [-] | | What about patents? If you didn't use a cleanroom process, you have no defence. Patent trolls will extort you: the trolls will be using AI models to find "infringing" software, and then they'll strike. There's no way AI can be cleanroom!