Against vibes: When is a generative model useful (williamjbowman.com)
57 points by takira a day ago | 8 comments
andai 15 minutes ago
> What is the cost of verifying the generated artifact meets requirements vs. a directly produced artifact? This is mostly a function of the task and the user, but also the generative model.

So this is the fun one for programming. I let AI agents do some programming on my codebases, but then I had to spend more time catching up with their changes. First I was bored waiting for them to finish, and then I was confused and frustrated making sense of the result.

Whereas when I ask the AI small things like "edit this function so it does this instead", and accept changes manually, my mental model stays synced the whole time, and I can stay active and in flow. (Also, for such fine-grained tasks, small fast cheap models are actually superior because they allow realtime usage. Even small latency makes a big difference.)
smilindave26 2 hours ago
> For almost all software I write, I do care about the process. I'm typically designing software as part of research, and me doing the design and implementation work creates knowledge that I will then share.

Similar here. For a lot of software I write, I don't really know what the essential abstraction I need is until I'm actively writing it. The answers, when I get them right, look obvious in retrospect.

Sometimes, starting with Claude Code, I can get there, but my mindset is that I'm using the tool to generate software that helps me immerse myself in the problem space. It changes the pace of the process: sometimes it speeds me up, sometimes I end up taking bad concepts a lot further than I normally would before getting to the better path.
qsera 3 hours ago
> The scientific version of these claims is "the total encoding cost (for some class of tasks) is lower than previous models"

I wonder why? Can the new models read minds?

> For example, I was recently trying to install a package whose name I forgot. I prompted the model to "install that x11 fake gui thing", a trivial prompt.

Yes, they are a better search.

I would also add that there is a subjective factor. If I enjoy writing code a lot more than reviewing it, I am going to prefer NOT using the model for writing and might just use it to review. So "hardness" is also related to how much you like or dislike doing a task.
adampunk 13 hours ago
This is basically the right approach, framed as a critique. Success with these models means engaging in detail with their work, persistently and at all scales. You need attention to detail, the ability to evaluate the output independently of the model, and mechanisms for enforcing all of that. In a word: engineering. But because people get all bent out of shape, I prefer to call it vibe coding anyway.
7777777phil 21 hours ago
I particularly like this framework: how hard is it to describe the task vs. how hard is it to check the output?
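
A toy sketch of that framing (my own illustration, not from the article): delegate a task to a model only when describing the task plus verifying the output is expected to cost less than doing the work directly. The function name and cost numbers are hypothetical.

```python
def worth_delegating(describe_cost: float, verify_cost: float, direct_cost: float) -> bool:
    """All costs in the same unit, e.g. minutes of your own time."""
    return describe_cost + verify_cost < direct_cost

# "install that x11 fake gui thing": trivial to describe and to verify.
print(worth_delegating(describe_cost=1, verify_cost=1, direct_cost=10))   # True
# A subtle refactor: easy to describe, but expensive to re-verify the diff.
print(worth_delegating(describe_cost=2, verify_cost=30, direct_cost=20))  # False
```

The interesting cases are the ones where verification cost dominates, which is the upthread point about losing one's mental model of agent-written changes.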