theshrike79 2 days ago | parent | next

It's like working with humans:

1) define the problem
2) split the problem into small, independently verifiable tasks
3) implement the tasks one by one, verifying with tools

With humans, 1) is the spec and 2) is the Jira board or whatever tracks the tasks. With an LLM, 1) is usually just a markdown file, and 2) is a markdown checklist or GitHub issues (which Claude can use with the `gh` cli). Every loop of 3) gets a fresh context, plus maybe the spec from step 1 and the relevant task information from 2.

I haven't run into context issues in a LONG time, and when I have, it's usually been either intentional (a problem where compacting won't hurt) or an error on my part.
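Roughly, each loop of 3) can be scripted; here is a minimal sketch, assuming the spec lives in a `SPEC.md` file and each task is a GitHub issue pulled with the real `gh issue view` command (the file name, helper names, and issue number are hypothetical):

```python
import json
import subprocess
from pathlib import Path

def fetch_issue(number: int) -> dict:
    """Pull one task's title and body from GitHub via the `gh` CLI."""
    # Assumes `gh` is authenticated and this runs inside the target repo.
    result = subprocess.run(
        ["gh", "issue", "view", str(number), "--json", "title,body"],
        check=True, capture_output=True, text=True,
    )
    return json.loads(result.stdout)

def build_task_prompt(issue_number: int, spec_path: str = "SPEC.md") -> str:
    """Assemble a fresh, self-contained prompt: the spec (step 1) plus one task (step 2)."""
    spec = Path(spec_path).read_text()  # SPEC.md is a stand-in for the step-1 markdown file
    issue = fetch_issue(issue_number)
    return (
        f"Spec:\n{spec}\n\n"
        f"Current task (issue #{issue_number}): {issue['title']}\n{issue['body']}\n\n"
        "Implement only this task, then verify it with the test suite."
    )

# Every loop of step 3 starts from a clean context: nothing carried over from the last task.
print(build_task_prompt(42))
```
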
troupo 2 days ago | parent

> every loop of 3) gets a fresh context, plus maybe the spec from step 1 and the relevant task information from 2

> I haven't run into context issues in a LONG time

Because you've become the reverse centaur :) "a person who is serving as a squishy meat appendage for an uncaring machine." [1]

You are very aware of the exact issues I'm talking about, and have trained yourself to do all the mechanical dance moves to avoid them. I do the same dances; that's why I'm pointing out that they are still necessary despite the claims of how model X/Y/Z is "next tier".

[1] https://doctorow.medium.com/https-pluralistic-net-2025-12-05...
theshrike79 2 days ago | parent

Yes and no. I've worked quite a bit with juniors, offshore consultants, and companies where processes are a bit shit. The exact same method that worked for those happened to also work for LLMs; I didn't have to learn anything new or change much in my workflow.

"Fix bug in FoobarComponent" is enough of a bug ticket for the 100x developer on your team with experience in that specific product, but bad for AI, juniors, and offshored teams. So giving enough context in each ticket to tell whoever is working on it where to look, what might be the root cause, and how to fix it is kind of second nature to me.

Also, my own brain is mostly neurospicy mush, so _I_ need to write the context into the tickets even if I'm the one picking them up a few weeks from now. Now-me remembers things; two-weeks-from-now me most likely doesn't.
troupo 2 days ago | parent

The problem with LLMs (similar to people :)) is that you never really know what works. I've had Claude one-shot "implement <some complex requirement>" with little additional input, and then completely botch even the smallest bug fix with explicit instructions and context. And vice versa :)
CuriouslyC 2 days ago | parent | prev

I realize your experience has been frustrating. I hope you see that every generation of model and harness converts more hold-outs. We're still a few years from hard diminishing returns, assuming capital keeps flowing (and that's without any major new architectures, which are likely), so you should be able to see how this is going to play out. It's in your interest to deal with your frustration and figure out how you can leverage the new tools to stay relevant (to the degree that you want to).

Regarding the context window: Claude needs thinking turned up for long-context accuracy; it's quite forgetful without thinking.
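For what it's worth, "thinking turned up" means extended thinking; exactly how you enable it depends on the harness, but as a minimal sketch using the Anthropic Python SDK directly (the model ID, token budgets, and prompt here are placeholders, not a recommendation):

```python
import anthropic

client = anthropic.Anthropic()  # picks up ANTHROPIC_API_KEY from the environment

# Extended thinking gives the model a budget of reasoning tokens before it answers,
# which is what seems to help it stay accurate over long contexts.
response = client.messages.create(
    model="claude-opus-4-5",  # placeholder model ID; substitute whatever you actually run
    max_tokens=16000,
    thinking={"type": "enabled", "budget_tokens": 8000},
    messages=[{"role": "user", "content": "Work through the next task in the checklist."}],
)

# With thinking enabled, the reply contains thinking blocks plus the final text block.
print(next(block.text for block in response.content if block.type == "text"))
```
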
th0ma5 2 days ago | parent | next

I think it's important for people who want to write a comment like this to understand how much this sounds like you're in a cult.
CuriouslyC 2 days ago | parent | next

Personally I'm sympathetic to people who don't want to have to use AI, but I dislike it when they attack my use of AI as a skill issue. I'm quite certain the workplace is going to punish people who don't leverage AI, though, and I'm trying to be helpful.
troupo 2 days ago | parent

> but I dislike it when they attack my use of AI as a skill issue.

No one attacked your use of AI. I explained my own experience with the "Claude Opus 4.5 is next tier" claim. You barged in, ignored everything I said, and attacked my skills.

> the workplace is going to punish people who don't leverage AI though, and I'm trying to be helpful.

So what exactly is helpful in your comments?
CuriouslyC 2 days ago | parent

The only thing I disagreed with in your post is your objectively incorrect statement regarding Claude's context behavior. Other than that, I'm just trying to encourage you to prepare for something I don't think you're taking seriously enough yet. No need to get all worked up; it'll only reflect on you.
pigeons a day ago | parent | prev

It certainly sounds unkind, if not cultish.
troupo 2 days ago | parent | prev

Note how nothing in your comment addresses anything I said, except the last sentence, which basically confirms what I said. This perfectly illustrates the discourse around AI.

As for the snide and patronizing "it's in your interest to stay relevant":

1. I use these tools daily. That's why I don't subscribe to willful, wide-eyed gullibility. I know exactly what these tools can and cannot do. The vast majority of "AI skeptics" are the same.

2. In a few years, when the world is awash in barely working, incomprehensible AI slop, my skills will be in great demand. Not because I'm an amazing developer (I'm not), but because I have experience separating the wheat from the chaff.
CuriouslyC 2 days ago | parent

The snide and patronizing tone is your projection. It kinda makes me sad that the discourse is so poisoned that I can't even encourage someone to protect their own future from something that's obviously coming (technical merits aside, purely based on social dynamics).

It seems the subject of AI is emotionally charged for you, so I expect friendly/rational discourse is going to be a challenge. I'd say something nice, but since you're primed to see me as patronizing... Fuck you? Is that what you were expecting?
troupo 2 days ago | parent

> The snide and patronizing tone is your projection.

It's not me who decided to barge in, assume their opponent doesn't use something or doesn't want to use something, and offer unsolicited advice.

> It kinda makes me sad that the discourse is so poisoned that I can't even encourage someone to protect their own future from something that's obviously coming

See. Again. You're so in love with your "wisdom" that you can't even see what you sound like: snide, patronising, condescending. And completely missing the whole point of what was written. You are literally the person who poisons the discourse.

Me: "here are the issues I still experience with what people claim is a 'next tier' frontier model"

You: "it's in your interest to figure out how to leverage new tools to stay relevant in the future"

Me: ... what the hell are you talking about? I'm using these tools daily. Do you have anything constructive to add to the discourse?

> so I expect friendly/rational discourse is going to be a challenge.

It's only a challenge for you because you keep being in love with your voice and your voice only. Do you have anything to contribute to the actual rational discourse, or are you going to attack my character?

> I'd say something nice, but since you're primed to see me as patronizing... Fuck you?

Ah. The famous friendly/rational discourse of "they attack my use of AI" (no one attacked you), "why don't you invest in learning tools to stay relevant in the future" (I literally use these tools daily; do you have anything useful to say?), and "fuck you" (well, same to you).

> Is that what you were expecting?

What I was expecting was responses to what I wrote, not you riding in on a high horse.
CuriouslyC 2 days ago | parent | next

You were the one complaining about how the tools aren't giving you the results you expected. If you're using these tools daily and having a hard time, either you're working on something very different from the bulk of people using the tools and your problems are legitimate, or you aren't and it's a skill issue.

If you want to take politeness as being patronizing, I'm happy to stop bothering. My guess is you're not a special snowflake, and you need to "get good" or you're going to end up on unemployment complaining about how unfair life is. I'd have sympathy, but you don't seem like a pleasant human being to interact with, so have fun!
troupo 2 days ago | parent

> You were the one complaining about how the tools aren't giving you the results you expected.

They are not giving me the results people claim they give. That is distinctly different from not giving me the results I want.

> If you're using these tools daily and having a hard time, either you're working on something very different from the bulk of people using the tools and your problems are legitimate, or you aren't and it's a skill issue.

Indeed. And the rational/friendly discourse you claim you're having would start with trying to figure that out. Did you? No, you didn't. You immediately assumed your opponent is a clueless idiot who is somehow against AI and is incapable of learning or something.

> If you want to take politeness as being patronizing, I'm happy to stop bothering.

No. It's not politeness. It's smugness. You literally started your interaction in this thread with a "git gud or else" and even managed to complain later that you "dislike it when they attack [your] use of AI as a skill issue", all while continuously attacking others.

> you don't seem like a pleasant human being to interact with

Says the person who has contributed nothing to the conversation except arrogance, smugness, and a holier-than-thou attitude, who has engaged in nothing but personal attacks, complained about non-existent grievances, and, when called out on this behavior, completed his "friendly and rational discourse" with a "fuck you".

Well, fuck you, too. Adieu.