rednb 2 hours ago
As someone who uses quite a few different AI providers (Codex, GLM, DeepSeek, Claude premium, among others), I've noticed that Claude tends to move too fast and execute commands without asking for permission. For example, if I ask a question about an implementation decision while it is implementing a plan, it answers (or doesn't) and immediately proceeds to make the changes it assumes I want. Other models switch to chat mode or ask for the best course of action. That said, I am not blaming Anthropic for this one, because IMHO the OP took a lot of risks and failed to design a proper backup and recovery strategy. I wish them a full recovery, though; this must be a very stressful situation for them.
fireflash38 2 hours ago | parent
All the models I have used will frequently jump ahead many steps without verifying any of their assumptions: generating a ton of code output I didn't ask for, and making a ton of assumptions about what I'm working on without appropriate context.