kburman 8 hours ago

Amazing work! I'd love to hear more details about your workflow with Claude Code.

As a side note (and this isn't a knock on your project specifically), I think the community needs to normalize disclaimers for "vibe-coded" packages. Consumers really need to understand the potential risks of relying on agent-generated code upfront.

nine_k 5 hours ago | parent | next [-]

Even more interesting is how much the effort cost.

Unlike the development work of old (pre-2025), work with high-end models incurs a very direct monetary cost: you burn tokens, which cost money, and you can't run anything as powerful locally (even if you happened to have a Mac Pro Ultra with RAM maxed out).

Some of my friends burned through hundreds of dollars a day while doing large amounts of (allegedly efficient) work with Claude Code.

jawiggins 8 hours ago | parent | prev | next [-]

Yeah, it's a fair point. I wondered if it might be irresponsible to publish the package because it was made this way, but I suspect I'm not the first person to try to develop a package with Claude Code, so I think the best I can do is be honest about it.

As for the workflow, I think the best advice I can give is to set up as many guardrails and tools as possible, so Claude can do as many iterations as possible before needing any intervention. So in this case I set up pre-commit hooks for linting and formatting, gave it access to the full testing suite, and let it rip. The majority of the work was done in a single thinking loop that lasted ~3 hours, where Claude was able to run the tests, see what failed, and iterate until they all passed. From there, there were still lots of iterations to add features, clean up, test, and improve performance - but allowing Claude to iterate quickly on its own without my involvement was crucial.
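For a Rust crate, the guardrail setup described above could look something like the following git pre-commit hook. This is an illustrative sketch, not the author's actual configuration; it assumes a standard Cargo project with rustfmt and clippy installed:

```shell
#!/bin/sh
# .git/hooks/pre-commit
# Reject any commit that fails formatting, lints, or tests, so the agent
# gets immediate, mechanical feedback and can iterate without human review.
set -e

cargo fmt --all -- --check                 # formatting must be clean
cargo clippy --all-targets -- -D warnings  # treat all lints as errors
cargo test --all                           # full test suite must pass
```

The point of a hook like this is that every failure mode becomes a concrete error message the agent can read and react to, which is what enables the long unattended iteration loops described above.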

kelnos 2 hours ago | parent | next [-]

I don't think it was irresponsible to publish it, but I do think it was irresponsible to publish it without clearly disclosing at the top of the crates.io README that it was built entirely by AI, and that you haven't reviewed the code (assuming you haven't).

If I were looking for an XML parser/generator library, I might stumble across this and think it might be production-quality, and assume it was built by humans, or at least that humans had fully vetted and understood the code.

tonyedgecombe 2 hours ago | parent | prev [-]

Yes, if you tripped across this package on crates.io, the readme gives the impression of a serious piece of software, but your comments here imply it is a one-off experiment rather than something you plan to maintain for the next decade.

socalgal2 2 hours ago | parent | prev [-]

Do they? Tons of extremely popular human-generated libraries are absolute trash. Just as an example, nearly all of the JS zip file libraries are dumpster fires. Same with QR code libraries and command line parsing libraries.

If you want to know whether the code is good or bad, read the code and check the tests. Assuming human = good, LLM = bad does not make much sense given the amount of bad human code I've seen.

Sure, if the code is from a reputable company or creator, then I'd take that as a strong signal of quality over an LLM, but I wouldn't take a random human programmer as a strong signal over generated code.