▲ noupdates 5 hours ago
Quite frankly, most seasoned developers should be able to write their own Claude Code. You know your own algorithm for how you deal with lines of code, so it's just a matter of converting that logic into a tool. Becoming dependent on Claude Code is a mistake (edit: I might be too heavy-handed with this statement). If your coding agent isn't doing what you want, you need to be able to redesign it.
▲ nicetryguy 5 hours ago | parent | next
It's not that simple. Claude Code allows you to use the Anthropic monthly subscription instead of API tokens, which for power users is massively less expensive.
▲ bradfa 5 hours ago | parent | prev | next
Yes and no. There are many non-trivial things you have to solve when using an LLM to help write (or fully write) code. For example, applying diffs to files: since the LLM's text input/output is tokenized, the diffs it produces to modify a file aren't always quite right. It may slightly mangle the text before/after the change, or introduce a small typo in the text being removed, so the edit may or may not apply cleanly. There are various ways to deal with this, and most of the agentic coding tools have it mostly solved now (I guess you could just copy their implementation?). Also, the models will sometimes send back JSON or XML from tool calls that isn't valid, so your tool needs to handle that too. These implementation details don't come up that often in a coding session, but they happen often enough that a tool which didn't handle them seamlessly would drive you mad if you're doing real work.
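To make this concrete, here's a rough sketch (Python, all names made up, not how Claude Code or any other tool actually does it) of the two fallbacks I mean: applying an edit when the "before" text doesn't match exactly, and parsing tool-call JSON that comes back wrapped in a markdown fence:

    import json
    import re

    def apply_edit(source: str, old: str, new: str) -> str:
        # Models often reproduce the "before" text with slightly different
        # whitespace, so an exact match can fail even when the intent is clear.
        if old in source:
            return source.replace(old, new, 1)
        # Fallback: treat every whitespace run in `old` as "any whitespace".
        pattern = r"\s+".join(re.escape(part) for part in old.split())
        match = re.search(pattern, source)
        if match is None:
            raise ValueError("edit did not apply: 'before' text not found")
        return source[:match.start()] + new + source[match.end():]

    def parse_tool_json(raw: str):
        # Sometimes the JSON arrives wrapped in markdown code fences;
        # strip those and retry before giving up.
        try:
            return json.loads(raw)
        except json.JSONDecodeError:
            stripped = raw.strip().strip("`")
            if stripped.startswith("json"):
                stripped = stripped[len("json"):]
            return json.loads(stripped)

Real harnesses go further (similarity scoring, re-asking the model for a cleaner edit, and so on), but this is the flavor of glue code you end up writing.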
▲ vjerancrnjak 5 hours ago | parent | prev | next
It's quite tricky, as they optimize the agent loop, similar to Codex. It's probably not enough to have prompt -> answer -> tool call -> critique the result -> apply or refine; there might be something specific they do when fine-tuning the loop to the model, or they might even train the model to improve the existing loop. You would have to first look at their agent loop and then code it up from scratch.
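The bare skeleton itself is simple enough, something like this sketch (Python; call_model and the tool registry are placeholders, not Anthropic's actual API):

    def run_agent(task: str, call_model, tools: dict, max_turns: int = 20):
        # Minimal loop: prompt -> model reply -> run requested tools ->
        # feed results back -> repeat until the model stops asking for tools.
        messages = [{"role": "user", "content": task}]
        for _ in range(max_turns):
            reply = call_model(messages)
            messages.append({"role": "assistant", "content": reply["content"]})
            if not reply.get("tool_calls"):
                return reply["content"]  # no more tool calls: final answer
            for call in reply["tool_calls"]:
                result = tools[call["name"]](**call["arguments"])
                messages.append({"role": "tool", "name": call["name"],
                                 "content": str(result)})
        raise RuntimeError("agent did not finish within max_turns")

The value isn't in that skeleton; it's in everything they've tuned around it (and possibly trained the model against), which is exactly what you can't see from the outside.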
▲ mikert89 5 hours ago | parent | prev | next
The model is being trained to use Claude Code, i.e. the agentic patterns are reinforced with reinforcement learning. That's why it works so well. You cannot build this on your own; it will perform far worse.
▲ sergiotapia 4 hours ago | parent | prev | next
Claude Code has thousands of human man-hours behind it, fine-tuning a comprehensive harness to maximize the effectiveness of the model. You think a single person can do better? I don't think that's possible. Opencode is better than Claude Code, and they have perhaps even more man-hours behind them. It's a collaborative thing, ever improving.
▲ dingnuts 5 hours ago | parent | prev
[dead]