parasti 15 hours ago

I really tried to get into the vibe coding thing - just describe the thing I need in human language and let the agent figure it out. It was incredible at first. Then I realized that I am spending a lot of time writing clarifications because the agent either forgot or misinterpreted something. Then I realized that I am waiting an awful long time for each agent step to complete just to write another correction or clarification. Then I realized that this constant start-stop process is literally melting my brain and making me unable to do any real work myself. It's basically having the same effect as scrolling any other algorithmic feed. Now I am back to programming myself and only bouncing the boring bits off of ChatGPT.

Cthulhu_ 28 minutes ago | parent | next [-]

One thing that helps is to write an AGENTS.md file that encodes your knowledge of the codebase and its tricks, like how to run a single test (for faster feedback cycles), common coding patterns, examples, etc.

I went full meta and sketched out a file, then had an expensive LLM go through the codebase and write such a file. I don't know if it's any good, though; I only really use coding assistants to write unit tests.
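For illustration, a minimal sketch of what such a file might contain (the commands, paths, and tooling are hypothetical, assuming a Node project with Jest):

    # AGENTS.md

    ## Tests
    - Run a single test file for fast feedback: npx jest src/parser.test.ts
    - Run the full suite before committing: npm test

    ## Conventions
    - New modules go in src/, with a matching *.test.ts next to them.
    - Prefer small pure functions; side effects live in src/services/.

    ## Gotchas
    - Restart the dev server after editing .env.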

dlisboa 11 hours ago | parent | prev | next [-]

> Then I realized that this constant start-stop process is literally melting my brain and making me unable to do any real work myself. It's basically having the same effect as scrolling any other algorithmic feed

Yes, it’s extremely soul-sucking, with the added disadvantage of not teaching me anything.

jazzyjackson 10 hours ago | parent [-]

I felt this way too until I decided that my goal isn't to know things, it's to ship things.

adverbly 8 hours ago | parent | next [-]

I felt this way too until I decided that my goal isn't to ship things, it's to ship, support and improve existing things.

a5c11 21 minutes ago | parent | prev [-]

Sounds very depressing.

russfink 13 hours ago | parent | prev | next [-]

One trick I have tried is asking the LLM to output a specification of the thing we are in the middle of building. A commenter above said humans struggle with writing good requirements; LLMs struggle to follow good requirements (all of them), often forgetting important things while scrambling to address your latest concern.

Getting it to output a spec lets me correct the spec, reload the browser tab to speed things up, or move to a different AI.

TylerLives 15 hours ago | parent | prev [-]

I don't have much experience with it either, but what has worked so far is breaking down the problem into very small steps I can verify easily.

a5c11 18 minutes ago | parent [-]

At this point it's easier to just write the code. If my prompt ends up longer than the code it produces, why bother explaining it to the LLM?