| ▲ | Ask HN: Is it just me? |
| 17 points by twoelf 3 days ago | 31 comments |
I’ve become lazy and gotten addicted to "vibe" coding with the large "language" models. At first it worked well: it made impactful changes, even added to my requirements, and the "vibe" was good. The tool did what I asked and suggested improvements. That was two months ago.

But lately I feel like I’m being deceived in every prompt, reply, and implementation. It limits me at every step, forcing me to choose between features even when I clearly gave instructions to implement everything that needs to be implemented. It starts with incomplete plans, and when I point out what’s missing, it says, “Oh, I missed that.” There’s also a lot of “yes-man” behavior. It feels too smart, like it knows what I want but gives me just enough to keep me hooked.

Isn’t the smartest tool ever made supposed to guide the user toward the light? Shouldn’t it follow instructions, help complete the project, and guide it to completion? It’s clearly capable of doing that, but it often doesn’t. Sometimes it feels like it holds back because if it finished the job end-to-end, there would be no reason to come back for the next session. Isn’t the whole point of a coding tool to code until completion, or is it just to keep the "user" hooked? Instead of guiding toward the light, it creates its own “light” and steers the user into a dark corner. If the user stops paying for the light, they are left in the dark: no architecture, no proper structure. Gatekeeping for what? Another subscription?

It can predict the next 10,000 lines of code. It understands and acknowledges every request, idea, vision, flaw, structure, requirement, and need, yet ignores them, fails to implement them, and cannot think through them consistently. I just can’t believe that.
| ▲ | LinkSpree 10 hours ago | parent | next [-] |
From a non-technical perspective, vibe coding is just a massive tradeoff: time versus cost. I understand that the LLM, no matter how advanced, will still struggle to formulate an app end to end with all of its bells and whistles. The idea has always been that your intervention is how the app develops, and incomplete plans might be more of a protection against the AI overtaking your vision with a flawed interpretation of your idea. A few years ago, I had to pay a team of 4 to do an MVP that was about 30% as good as the same app vibe coded over three weeks. As a founder, I am 100% OK with the tradeoff, because I could go on to burn cash on actual code (as a non-technical founder) but would much rather ship fast, half-baked products and iterate as I go. Yes, even if the cost is in tokens or in correcting the agent over and over. This tradeoff is perfectly reasonable in my opinion.
| ▲ | Blackstrat 2 days ago | parent | prev | next [-] |
I'm curious. Vibe coding seems to be all the rage on HN these days, and yet many who discuss it are unhappy. My question, seriously, is: why did you go into the software development field if you were willing to surrender your autonomy to an LLM? I can't think of anything more demoralizing.
| ▲ | sminchev 2 days ago | parent | prev | next [-] |
Well, this is how it is with real humans as well. The moment a human gets tired, or the information they need to process is too much, they produce errors. It's the same here: the moment things get to be too much, it starts hallucinating and missing important things. It also depends on which model you are using. I read that Gemini 3 Pro, which has a limit of 1 million tokens, can degrade to 25% of its performance as it gets close to that limit. Not BY 25%, but TO 25%. It becomes extremely dumb. Other models just ask too many questions... There are some tips and tricks you can follow, and they are similar to how people work: keep the tasks small, save what the model learned during the session somewhere, and re-use that knowledge in the next session by explicitly telling it to read that information before it starts.
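The save-and-reuse loop described above can be sketched in a few lines. This is a minimal sketch, not tied to any particular tool: the `SESSION_NOTES.md` filename and the preamble wording are made up for illustration.

```python
from pathlib import Path

# Hypothetical notes file; the name is arbitrary, not from any real tool.
NOTES_FILE = Path("SESSION_NOTES.md")

def save_learnings(summary: str) -> None:
    """Append what the model learned this session, for reuse next time."""
    with NOTES_FILE.open("a", encoding="utf-8") as f:
        f.write(summary.rstrip() + "\n")

def session_preamble() -> str:
    """Build a prompt prefix that replays prior learnings, if any exist."""
    if NOTES_FILE.exists():
        return ("Before starting, read these notes from earlier sessions:\n"
                + NOTES_FILE.read_text(encoding="utf-8"))
    return ""
```

At the end of a session you would ask the model to summarize what it learned and pass that summary to `save_learnings`; the next session's first prompt then starts with `session_preamble()`.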
| ▲ | iamflimflam1 2 days ago | parent | prev | next [-] |
I find it all quite entertaining. A lot of developers are discovering what it’s like to manage “people”, and they are realising that it’s not fun. If you’ve run a team or managed people, it’s quite a familiar feeling: “I’m pretty sure we were very clear on what needed to be done, but somehow what’s been produced is just not quite what I wanted.”
| ▲ | kentich 2 days ago | parent | prev | next [-] |
The reason for the things you've described is that LLMs are forgetful. They just can't remember the context and have to re-research the code almost every time you prompt, even code they themselves wrote. This leads to re-implementation of the same features with different code, duplicated code, missed corner cases, etc.
| ▲ | jwilliams 2 days ago | parent | prev | next [-] |
Try this: for the same task, run the same prompt three times with totally different framing ("do it fast", "be comprehensive", "find stuff I’ve missed", etc.). Then throw away the ones you don’t like. It also prevents reinforcement of your incoming point of view. I’ve found this has made me way, way better at steering.
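A rough sketch of that three-framings approach. `ask_llm` here is a hypothetical stand-in for whatever model client you actually use (an API SDK, a local model, a CLI wrapper), and the framing strings are just examples.

```python
# Hypothetical stand-in for a real model call; replace with your client.
def ask_llm(prompt: str) -> str:
    return f"[model response to: {prompt}]"

# Three framings of the same underlying task (example wordings).
FRAMINGS = [
    "Do this as fast as possible: {task}",
    "Be comprehensive and cover every edge case: {task}",
    "Point out everything I might have missed: {task}",
]

def run_variants(task: str) -> list[str]:
    """Run one task under each framing; keep whichever result you like."""
    return [ask_llm(f.format(task=task)) for f in FRAMINGS]
```

Running all three and discarding the weak ones costs extra tokens, but it surfaces what each framing biases the model toward.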
| ▲ | krapp 2 days ago | parent | prev | next [-] |
> It understands and acknowledges every request, idea, vision, flaw, structure, requirement, needs and just ignores and fails to implement it and cannot consistently think through it. I just can’t believe that.

Believe it. You're anthropomorphizing. It doesn't understand anything. There is no "thinking" going on.

Yes, the point of LLMs as a service is to make money. Yes, the service is designed to maximize profit. Yes, there are dark patterns baked into the system. Yes, keeping you addicted and using the service is part of the business model. This isn't human instrumentality, it's just capitalism.

Until you realize the machine isn't qualitatively superior to your own mind and your own efforts, you're just going to keep torturing yourself, because your nature forces you to maximize your productivity at any cost, which given your false assumptions about LLMs means ceding as much of yourself to the machine as possible and suffering its inadequacies. I use "you" collectively here because it seems like a lot of people have worked themselves into this corner where they don't like what LLMs do for them but feel compelled to use them anyway.

It's just a tool. If you don't like the tool, don't use the tool.
| ▲ | grahammccain 3 days ago | parent | prev | next [-] |
I definitely feel this exact sentiment. I’m wondering if it’s actually the model quality degrading or if it’s me lol.
| ▲ | cyanydeez 2 days ago | parent | prev | next [-] |
Your LLM isn't an LLM. It's a coding harness backed by capitalists who are trying to maximize their cash while minimizing their compute spend. It's likely they're swapping out the larger models for smaller models that cost less at inference, and swapping your new inputs for similar cached inputs to funnel you into a less cost-intensive solution. If you switch to a local LLM, you'll see it has all the same flaws, but those flaws only change when you change the coding harness.
| ▲ | nadav_tal 3 days ago | parent | prev [-] |
totally! lol