killerstorm 17 hours ago

An LLM can give you thousands of lines of perfectly working code for less than a dollar. How is that trivial or expensive?

sgt101 16 hours ago | parent | next [-]

Looking up a project on GitHub, downloading it, and using it can give you 10,000 lines of perfectly working code for free.

Also, when I use Cursor I have to watch it like a hawk, or it deletes random bits of code that are needed, or adds extra code to repair imaginary issues. A good example: I used it to write a function that inverted the axis on some data I wanted to present differently, then added that call into one of the functions generating the data I needed.

Of course, somewhere in the pipeline it added the call into every data generating function. Cue a very confused 20 minutes a week later when I was re-running some experiments.

brulard 15 hours ago | parent [-]

Are you seriously comparing downloading static code from github with bespoke code generated for your specific problem? LLMs don't keep you from coding, they assist it. Sometimes the output works, sometimes it doesn't (on first or multiple tries). Dismissing the entire approach because it's not perfect yet is shortsighted.

ozgrakkurt 13 hours ago | parent [-]

They didn’t dismiss it, they just said it is not really that useful, which is correct?

brulard 3 hours ago | parent | next [-]

Obviously YMMV, but it is extremely useful for me and for many people out there.

Matticus_Rex 9 hours ago | parent | prev [-]

Many obviously disagree that it's correct.

fendy3002 17 hours ago | parent | prev | next [-]

Well, I presented the statement wrongly. What I meant is that the use cases for LLMs are trivial things, so it shouldn't be expensive to operate.

And the one-dollar cost in your case is heavily subsidized; that price won't hold up for long, assuming the computing power stays the same.

killerstorm 11 hours ago | parent [-]

Cheaper models might be around $0.01 per request, and that's not subsidized: many different providers offer open-source models with quality similar to proprietary ones. On-device generation is also an option now.
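As a rough sanity check on those per-request figures, here's a back-of-the-envelope cost calculation. The token counts and per-million-token prices below are illustrative assumptions for this sketch, not quoted vendor rates:

```python
# Back-of-the-envelope cost of one LLM API request.
# All prices and token counts are illustrative assumptions, not real pricing.

def request_cost(input_tokens, output_tokens,
                 price_in_per_mtok, price_out_per_mtok):
    """Dollar cost of one request, given prices per million tokens."""
    return (input_tokens * price_in_per_mtok +
            output_tokens * price_out_per_mtok) / 1_000_000

# Hypothetical cheap open-weights model: $0.10 in / $0.40 out per Mtok.
cheap = request_cost(2_000, 1_000, 0.10, 0.40)

# Hypothetical premium frontier model: $15 in / $75 out per Mtok.
premium = request_cost(2_000, 1_000, 15.0, 75.0)

print(f"cheap model:   ${cheap:.4f} per request")    # $0.0006
print(f"premium model: ${premium:.4f} per request")  # $0.1050
```

Under these assumed prices, a modest request on a cheap model lands well below a cent, while the same request on a premium model costs about ten cents, so "around $0.01 per request" and "$1 for a large job" are both plausible orders of magnitude.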

For $1 I'm talking about Claude Opus 4. I doubt it's subsidized - it's already much more expensive than the open models.

zwnow 17 hours ago | parent | prev [-]

Thousands of lines of perfectly working code? Did you verify that yourself? Last time I tried, it produced slop, and I was extremely detailed in my prompt.

killerstorm 3 hours ago | parent | next [-]

Yes. I verified it myself. Best results from Opus 4 so far, Gemini might be OK too.

DSingularity 7 hours ago | parent | prev [-]

Try again.

mrbungie 4 hours ago | parent [-]

Any retries before nailing the prompt are still billed, so this supports the GP's position that LLMs are expensive for trivial things.