| ▲ | kace91 2 hours ago |
| I’m not particularly pro-AI, but I struggle with the mentality some engineers apply when trying these tools. If someone said “I don’t know what the big deal is with vim, I ran it and pressed some keys and it didn’t write text at all”, they’d be mocked for it. But with these tools there seems to be an attitude of “if I don’t get results straight away, it’s bad”. Why the difference? |
|
| ▲ | alkonaut an hour ago | parent | next [-] |
| I don't understand how to get even bad results. Or any results at all. I'm at the level of "this can't just be me not having read the manual": I get the same change applied multiple times, the agent uses some absurd method of applying changes that conflicts with what I tell it, like some git merge from hell, and so on. I can't get it to understand even the simplest of contexts. It's not really that the code it writes might not work; I just can't get past the actual tool use. In fact, I don't think I'm even at the stage where the AI output is the problem yet. |
| |
| ▲ | TeMPOraL 24 minutes ago | parent [-] |
| > I'm at a level where I'm going "This can't just be me not having read the manual".
| Sure it can, because nobody reads manuals anymore :). It's an interesting exercise to try: take your favorite tool you use often (that isn't some recent webshit, devoid of any documentation), find a manual (not a man page), and read it cover to cover. Say, GDB, or Emacs, or even coreutils. It's surprising how many powerful features good software tools have, and how much you'll learn in a short time, that most software people don't know is possible (or worse, decry as "too much complexity") just because they couldn't be arsed to read some documentation.
| > I just can't get past the actual tool use. In fact, I don't think I'm even at the stage where the AI output is even the problem yet.
| The tools are a problem because they're new and a moving target. They're both dead simple and somehow complex around the edges. AI, too, is tricky to work with, particularly when people aren't used to communicating clearly. A lot of surprising problems (such as "absurd method of applying changes") come from the fact that AI is solving a very broad class of problems, everywhere at the same time, by virtue of being a general tool. It still needs a bit of hand-holding if your project/conventions stray from what's obvious or popular in a particular domain, but it's getting easier and easier as the months go by.
| FWIW, I too haven't developed a proper agentic workflow with CLI tools for myself just yet; depending on the project, I either get stellar results or garbage. But I recognize this is only a matter of time investment: I haven't had much time to set aside to do it properly. |
|
|
| ▲ | Macha 2 hours ago | parent | prev | next [-] |
| There isn't a bunch of managers metaphorically asking people if they're using vim enough, nor as many blog posts proclaiming vim as the only future for building software. |
| |
| ▲ | kace91 2 hours ago | parent | next [-] | | I’d argue that, if we accept that AI is relevant enough to at least be worth checking out, then dismissing it with minimal effort is just as bad as mindlessly hyping the tech. | |
| ▲ | dist-epoch an hour ago | parent | prev [-] | | You must be new here. "I use vim, btw" and "you don't use vim, you use Visual Studio, your opinion doesn't count" are a thing in programming circles. |
|
|
| ▲ | neumann 2 hours ago | parent | prev | next [-] |
| I agree to a degree, but I am in that camp. I subscribe to alphasignal, and every morning there are three new agent tools, two new features, and a new agentic approach, and I'm left wondering: where is the production stuff? |
| |
|
| ▲ | galaxyLogic 2 hours ago | parent | prev [-] |
| Well, one could say that since it's AI, the AI should be able to tell us what we're doing wrong, no? AI is supposed to make our work easier. |
| |
| ▲ | kace91 2 hours ago | parent [-] | | Doing wrong with respect to what? If you ask for A, how would any system know that you actually wanted to ask for B? | | |
| ▲ | walt_grata an hour ago | parent [-] | | Honestly, IMO it's more that I ask for A but don't strongly enough discourage B, and then I get A, B, and maybe C, generally implemented poorly. The base systems need more focus and doubt built in before they'll be truly useful for things aside from greenfield apps, or for generating maintainable code. |
|
|