streetcat1 4 hours ago

For some reason this misses two important points:

1) The AI code maintenance question: who will maintain the AI-generated code?

2) The true cost of AI: once the VC/PE money runs out and companies charge the full cost, what happens to vibe coding at that point?

NitpickLawyer 3 hours ago

I think this post is a great example of a different point made in this thread. People confuse vibe-coding with LLM-assisted coding all the time (no shade, OP). There's an implied bias that all LLM code is bad, unmaintainable, and incomprehensible. That's not necessarily the case.

1) Either you, the person owning the code; you plus LLMs; or, further out, just the LLMs. All of them can work, and they can work better with a bit of prep work.

The latest models are very good at following instructions. So instead of "write a service that does X", you can use the tools to ask for specifics (e.g. write a modular service that uses concept A and concept B to do Y; it should use x y z tech stack, follow this ruleset and these conventions, run these linters and formatters before testing, and fix every env error before testing).
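For illustration, a hypothetical spec-style prompt along those lines (the service, stack, and tooling named here are all made up):

    Write a modular payments-webhook service in Python (FastAPI + Postgres).
    Follow the repo conventions: type hints everywhere, small pure functions,
    no new dependencies without flagging them. Before running tests, run
    `ruff check .` and `black --check .` and fix every error they report.
    Tests live in tests/ and must pass with `pytest -q`.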

That's the main difference between vibe-coding and LLM-assisted coding: you get to decide what you ask for, and you get to set the acceptance criteria. The key point that non-practitioners always miss is that once a capability becomes available in these models, you can layer it on top of previous capabilities and get a better end result. Higher instruction adherence -> better specs -> longer context -> better results -> better testing -> better overall loop.
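That loop is mechanical enough to sketch in code. This is a minimal illustration, not a real harness: call_llm and apply_patch are placeholders for your provider SDK and your edit-application step, and the ruff/pytest commands are assumptions about the project's tooling.

    import subprocess

    SPEC = "Write the service per the spec and conventions above."

    def call_llm(prompt: str) -> str:
        # Placeholder: wire this to whatever provider/SDK you actually use.
        raise NotImplementedError

    def apply_patch(patch: str) -> None:
        # Placeholder: write the model's proposed changes into the repo.
        ...

    def gate(cmd: list[str]) -> tuple[bool, str]:
        # Run one acceptance check; return (passed, combined output).
        r = subprocess.run(cmd, capture_output=True, text=True)
        return r.returncode == 0, r.stdout + r.stderr

    def assisted_loop(max_rounds: int = 5) -> bool:
        prompt = SPEC
        for _ in range(max_rounds):
            apply_patch(call_llm(prompt))
            for cmd in (["ruff", "check", "."], ["pytest", "-q"]):
                ok, output = gate(cmd)
                if not ok:
                    # Feed failures back in: the acceptance criteria,
                    # not vibes, define "done".
                    prompt = f"{SPEC}\n\nFix these failures:\n{output}"
                    break
            else:
                return True  # every gate passed
        return False

The point is that the gates are yours; the model just iterates against them.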

2) You're confusing the fact that some labs subsidise inference (in exchange for data access, usage metrics, etc.) with the true cost of inference for a given model size. You can already get a good indication of that cost today: third-party inference shops exist, and they have no reason to subsidise anything. You can also do the math yourself and work out an average cost per token for a given capability. And the open models are already out; they're not going to change, so you can get the same capability tomorrow or in 10 years (and likely at lower cost, since hardware improves, the inference stack improves, etc.).
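Back-of-the-envelope, the math looks like this (the per-token prices below are hypothetical placeholders, not any provider's actual rates):

    # Rough cost-per-task math for an open model on a third-party host.
    # Both prices are made-up placeholders for illustration.
    PRICE_IN = 0.50 / 1_000_000    # $ per input token (hypothetical)
    PRICE_OUT = 2.00 / 1_000_000   # $ per output token (hypothetical)

    def task_cost(in_tokens: int, out_tokens: int) -> float:
        return in_tokens * PRICE_IN + out_tokens * PRICE_OUT

    # A coding task with a 30k-token context and a 5k-token answer:
    print(f"${task_cost(30_000, 5_000):.4f}")  # -> $0.0250
    # Even 100 such tasks a day is ~$2.50/day at these rates.

Plug in the real numbers for whatever model size you care about and you get a realistic anchor for what "full cost" looks like for that capability.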