bobjordan a day ago

I've been working on a large codebase that was already significant before LLM-assisted programming, leveraging code I’d written over a decade ago. Since integrating Claude and Codex, the system has evolved and grown massively. Realistically, there’s a lot in there now that I simply couldn't have built in a standard human lifetime without them.

That said, the core value of the software wouldn't exist without a human at the helm. It requires someone to expend the energy to guide it, explore the problem space, and weave hundreds of micro-plans into a coherent, usable system. It's a symbiotic relationship, but the ownership is clear. It’s like building a house: I could build one with a butter knife given enough time, but I'd rather use power tools. The tools don't own the house.

At this point, LLMs aren't going to autonomously architect a 400+ table schema, network 100+ services together, and build the UI/UX/CLI to interface with it all. Maybe we'll get there one day, but right now, building software at this scale still requires us to drive. I believe the author owns the language.

wcarss 21 hours ago | parent | next [-]

This is the take, very well said. I've been trying to use analogies with cars and cabinet making, but building a house is just right for the scale and complexity of the efforts enabled, and the ownership idea threads into it well.

Going into the vault!

heavyset_go 21 hours ago | parent | prev | next [-]

> I believe the author owns the language.

Not according to the US Copyright Office. It's 100% LLM output, so it isn't copyrightable; it's free for anyone to do anything with, and no claimed ownership or license can stop them.

wild_egg 20 hours ago | parent [-]

Do you have a citation for that?

heavyset_go 20 hours ago | parent | next [-]

Yes[1]. Copyright applies to human creations, not machine-generated output.

It's possible to use AI output within human-created content and have the result be copyrightable, and substantive, transformative human-creative alteration of AI output is also copyrightable.

100% machine-generated code is not copyrightable.

[1] https://newsroom.loc.gov/news/copyright-office-releases-part...

wild_egg 18 hours ago | parent [-]

> The content you are looking for is currently unavailable.

heavyset_go 17 hours ago | parent [-]

Here's the correct link, I accidentally added an 'l' to the end when pasting: https://newsroom.loc.gov/news/copyright-office-releases-part...

kccqzy 20 hours ago | parent | prev [-]

There are so many cases of the copyright office rejecting the request to register copyright for AI-generated works. Here’s just one example: https://www.copyright.gov/rulings-filings/review-board/docs/... (skip to section III).

wild_egg 18 hours ago | parent [-]

> This analysis will be “necessarily case-by-case” because it will “depend on the circumstances, particularly how the AI tool operates and how it was used to create the final work.”

This seems the opposite of the cut-and-dried "cannot be copyrighted" stance I was replying to.

kccqzy 16 hours ago | parent [-]

Yes, it does depend on the circumstances. You're free to spend your own time trying this at the copyright office, but in my opinion, this project's 100% LLM output, where the human element is just writing prompts and steering the LLM, is the same circumstance as my linked case, where the human prompted Midjourney 624 times before producing an image they deemed acceptable. The copyright office has this to say:

> As the Office described in its March guidance, “when an AI technology receives solely a prompt from a human and produces complex written, visual, or musical works in response, the ‘traditional elements of authorship’ are determined and executed by the technology—not the human user.”

anonnon 19 hours ago | parent | prev [-]

> Realistically, there’s a lot in there now that I simply couldn't have built in a standard human lifetime without them.

I have yet to see a study showing something like a 2x or better boost in programmer productivity from LLMs. Usually it's more like 10-30%, depending on the metrics used (which I don't doubt). Maybe it's 50% with frontier models, but seeing comments on HN where people act as if they're 10x more productive with these tools is strange.

thunky 17 hours ago | parent [-]

Odd choice of a comment to post this reply to.

I guess you're just not going to believe what anyone says.

anonnon 17 hours ago | parent [-]

> Odd choice of a comment to post this reply to.

How? They claimed LLMs enabled them to write more code in the span of 3.5 years (assuming they started with ChatGPT's introduction) than they could have written in decades. No study has shown anything like that. But at least one study did show that devs using LLMs overestimate how productive these tools make them.

thunky 16 hours ago | parent [-]

> How?

You're calling this person a liar because they don't have a study to back up their personal anecdote, which is a strange position to take, imo.

anonnon 13 hours ago | parent [-]

It's strange that I don't accept unverified anecdotes at face value, especially when they contradict the best evidence available? Also

> calling this person a liar

"Liar" implies a deliberate attempt to deceive, but I specifically mentioned the possibility that these tools just make you feel much more productive than you actually are, as at least one study found. That said, I'm sure plenty of these anecdotes are, in fact, lies from liars (bots/shills). The fact that Anthropic has to resort to stuff like this: https://news.ycombinator.com/item?id=47282777 should make everyone suspicious of the extravagant claims being made about Claude.

thunky 6 hours ago | parent [-]

You're the only one in this thread who mentioned 2x and 10x productivity boosts and studies.

Obviously everyone has their own experiences with LLMs. But I think it's an interesting position to tell random people that their reported experience is wrong, or to be so certain that LLMs can't possibly be that useful.