▲ | notarobot123 9 hours ago |
I am not the biggest fan of LLMs, but I have to admit that, as long as you understand what the technology is and how it works, it is a very powerful tool. I think the mixed reports on utility have a lot to do with the very different ways the tool is used, and with how much 'magic' the end-user expects versus how much they expect to guide the tool to do the work. To get the best out of it, you do have to provide a significant amount of scaffolding (though it can help with that too). If you're just pointing it at a codebase and expecting it to figure things out, you're going to get mixed results at best. If you guide it well, it can save a significant amount of manual effort and time.
▲ | kaydub 2 hours ago | parent | next |
> (though it can help with that too)

Yeah, this is a big thing I notice a lot of people miss. I have tons of people ask me "how do I get Claude to do <whatever>?" and "ask Claude" is the only response I can give. You can get the LLM to help you figure out how to get to your goal and write the right prompt before you ever ask it to actually do the work.
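A rough sketch of that two-step pattern, using the Anthropic Python SDK (the model name and the example task are placeholders I made up, not anything from this thread):

    # Ask the model to write the prompt first, then use that prompt for the real task.
    # Model name and task are placeholders.
    import anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

    task = "migrate this Flask app's config from INI files to environment variables"

    # Step 1: ask the model how it should be prompted for the real task.
    meta = client.messages.create(
        model="claude-sonnet-4-20250514",  # placeholder; use whatever model you have
        max_tokens=1024,
        messages=[{
            "role": "user",
            "content": f"I want an LLM to do the following: {task}\n"
                       "Write the prompt I should give it, including what context "
                       "files and constraints it should ask me for.",
        }],
    )
    generated_prompt = meta.content[0].text

    # Step 2: edit the generated prompt as needed, then use it for the actual work.
    answer = client.messages.create(
        model="claude-sonnet-4-20250514",
        max_tokens=4096,
        messages=[{"role": "user", "content": generated_prompt}],
    )
    print(answer.content[0].text)

The specific API doesn't matter; the same loop works in any chat interface: ask for the prompt first, edit it, then paste it back in.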
▲ | bentt 3 hours ago | parent | prev |
Yeah, every few months I try to have it “just do magic” again and I re-learn the lesson. Like, I’ll just say “optimize this shader!” and plug the result in blind. It doesn’t work. The only way it could work is if the LLM had a testing loop of its own. I guess in the web world it could, but in my world of game dev, not so much. So I stick with the method I outlined in the OP, and it is sometimes useful.
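By “testing loop” I mean something roughly like this sketch: a candidate only survives if something can compile and benchmark it automatically. compile_shader, benchmark_fps, and CompileError are hypothetical stand-ins for whatever your engine exposes, and the model name is a placeholder:

    # Sketch of an LLM-with-a-testing-loop for shader optimization.
    # compile_shader(), benchmark_fps(), and CompileError are hypothetical
    # stand-ins for your engine's tooling; the model name is a placeholder.
    import anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
    MODEL = "claude-sonnet-4-20250514"  # placeholder

    def ask(prompt: str) -> str:
        resp = client.messages.create(
            model=MODEL,
            max_tokens=4096,
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.content[0].text

    def optimize_shader(source: str, rounds: int = 3) -> str:
        best_src = source
        best_fps = benchmark_fps(compile_shader(best_src))
        for _ in range(rounds):
            candidate = ask(
                "Optimize this shader for speed without changing its visual output. "
                f"It currently averages {best_fps:.1f} FPS. Return only the shader code.\n\n"
                + best_src
            )
            try:
                fps = benchmark_fps(compile_shader(candidate))
            except CompileError:
                continue  # reject candidates that don't even compile
            if fps > best_fps:  # keep a candidate only if it measurably improves
                best_src, best_fps = candidate, fps
        return best_src

Without that compile-and-benchmark step in the middle, the loop degenerates into exactly the “plug it in blind” case above.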