Implicated | 3 days ago
If this is truly your perspective, you've already lost the plot. It's almost always the user's fault when it comes to tools. If you're using a tool and it's not doing its 'job' well, it's more likely that you're using it wrong than that it's a bad tool. Almost universally. Right tool for the job, etc. It's also important that you're using it right, for the right job.

Claude Code isn't meant to refactor entire projects. If you're trying to load up 100k-token "whole projects" into it, you're using it wrong. Just a fact. That's not what this tool is designed to do. Sure, maybe it "works," or gets close enough to make people think that's what it's designed for, but it isn't. At detailed, specific work it excels so wildly that it's astonishing to me these takes exist.

All that said, there _are_ times I dump huge amounts of context into it (Claude Projects, not Claude Code, because that's not what Claude Code is designed for), and I don't have "conversations" with it in that manner. I load it up with a bunch of context, ask my question or give it a task, and that first response is all I need. If it doesn't solve your concern, it should shine enough light that you now know how you want to address the problem in a more granular fashion.
troupo | 3 days ago | parent
The unpredictable, non-deterministic black box with an unknown training set, weights, and biases is behaving contrary to how it's advertised? The fault lies with the user, surely.