porridgeraisin 5 hours ago

Oh come on, you don't have to be condescending about function calls.

https://news.ycombinator.com/item?id=47260385

Sharlin 5 hours ago | parent [-]

I was talking about libraries, higher-level units of reuse than individual functions. And your "syntactic" vs "semantic" reuse makes zero sense. Functions are literally written and invoked for their semantics – what they make happen. "Syntactic reuse" would be macros if anything, and indeed macros are very good at reducing boilerplate.

You might have a more compelling argument if instead of syntax and semantics you contrasted semantics and pragmatics.

porridgeraisin 5 hours ago | parent [-]

A library is a collection of data structures and functions. My argument still holds.

> Syntactic reuse would be macros

Well sure. My point is that what can be reused is decided ahead of time and encoded in the syntax. Whereas with LLMs it is not, and is encoded in the semantics.

> Pragmatics

Didn't know what that was. Consider my post updated with the better terms.

runarberg 3 hours ago | parent [-]

I’m not sure your logic is sound. It sounds like you are insisting on some nuance which simply isn’t there. LLM generates unmaintainable slop, which is extremely difficult to reason about, uses wrong abstractions, violates DRY, violates cohesion, etc.

The industry has known how to reuse code for two decades now (npm was released 16 years ago; pip 18 years ago). Using LLMs for code reuse is a step in the wrong direction, at least if you care about maintaining your code.

porridgeraisin an hour ago | parent | next [-]

Oh sure, the quality is extremely unreliable, and I am not a fan of its style of coding either. It requires quite a bit of hand-holding and sometimes it truly enrages me. I am just saying that LLM technology opens up another, broader dimension of code reuse. There's still a ways to go: not in the foundation models, which have plateaued, but in refining them for coding.

naasking 2 hours ago | parent | prev [-]

> LLM generates unmaintainable slop

LLMs generate what you tell them to, which means the output will be slop if you're careless and good if you're careful, just like programming in general.