nunez | a day ago
I agree with you overall, yet there's one flow that works for me. Instead of speccing out a feature, I let PMs vibe code it. I then have the exact reference I need to build. Like BDD, but with something more accessible than Cucumber. I'm totally here for that.

It would be nice if people also committed their initial prompt and chat session with the LLM into their codebase. From a corporate standpoint, that would be excellent business logic as code, if the code is coming from a PM or a stakeholder on the business side of the house. From an engineering standpoint, it would be an excellent addendum to the codebase's documentation.
tharkun__ | 9 hours ago
FWIW, BDD and frameworks like Cucumber don't work at all in my experience. The people who'd need to fill these out don't do it properly (they can't), and then we devs are stuck with brittle, un-debuggable stuff that's worse than if we'd just used regular code to encode what we understood from them.

It's the same reason (most) PMs armed with an LLM still won't get anything usable done: they can't do it properly, and they still need devs. But the gap is shrinking. A few PMs could get things done with Cucumber and could wireframe UX with earlier tools, and they can now do both far more easily, and better, with an LLM.
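The brittleness complaint above comes from how Gherkin-style frameworks bind plain-English steps to code: each step definition registers a pattern, and the runner matches the feature file's text against it, so any wording drift breaks the binding. A minimal sketch of that mechanism (the names `STEPS`, `step`, and `run_step` are illustrative, not Cucumber's actual API):

```python
import re

# Registry mapping compiled regexes to step-definition functions,
# mimicking how Cucumber-style runners bind Gherkin text to code.
STEPS = {}

def step(pattern):
    """Register a step definition under a regex (illustrative decorator)."""
    def register(fn):
        STEPS[re.compile(pattern)] = fn
        return fn
    return register

@step(r"the user deposits (\d+) dollars")
def deposit(amount):
    return {"balance": int(amount)}

def run_step(text):
    """Match one Gherkin step line against the registered patterns."""
    for pattern, fn in STEPS.items():
        m = pattern.fullmatch(text)
        if m:
            return fn(*m.groups())
    # The brittleness: if a non-dev rewords the .feature file even
    # slightly, no regex matches and the scenario fails as "undefined".
    raise LookupError(f"undefined step: {text!r}")

print(run_step("the user deposits 50 dollars"))  # matches the regex
# run_step("the user deposits $50")  # raises LookupError: undefined step
```

The same scenario reworded by a stakeholder ("the user deposits $50") silently stops matching any definition, which is exactly the failure mode devs end up debugging.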
I doubt you'd want this. It's a chat session for a reason: it's going to be a huge wall of text, especially if you mean to include all the internal prompting the LLM did while it was working. You'd also get all of my "no dude, stop bullshitting me! I told you to ignore X, use Y, always double-check Z, and provide proof."

It would only "work" if every single feature were 100% written by the LLM from a single, largish, well-defined prompt: the LLM works for a few hours and out comes the feature. And even then you'd have no reproducibility, even if you turned around and gave the prompt to the exact same model (no retraining, no newer model, no changed system prompt, etc.).
rufasterisco | 7 hours ago
I am actually working on that. Want to beta test? :) I can invite you to the (for now private) GitHub repo. Any feedback would be helpful!