bubblyworld 2 days ago

I know you've got a subthread about this exact idea, but I do think there is some value in manually performing the debugging process if (and perhaps only if) your goal is to improve your overall programming ability.

I guess the chess analogy would be that it makes a lot of sense to analyse positions yourself, even though Leela and Stockfish can do a far more thorough job in much less time. Of course, if you just need to know the best move right now, you would use the AI, and professionals do that all the time.

But as a decently strong chess player I cannot imagine improving without doing this kind of manual practice (at least beyond a basic level of skill like knowing how pieces move). Grandmasters routinely drill tactics exercises, for instance, even though they are "mundane" at that level of ability.

I guess the crux of it is: do you think AI+person learns faster than person alone for this kind of thing? And why? It's not obvious to me either way (and another question is whether the skill is even relevant any more... I think so, but I know people who don't).

kasey_junk 2 days ago

But you can do that _after_ the incident. When things are not on fire.

You don’t run analysis of your chess game when the clock is ticking.

bubblyworld 2 days ago

Sure, if something is super critical then you should solve the problem as fast as possible. I'm not debating that. But there's probably a middle ground there somewhere for less critical issues. I suspect the process of generating and falsifying hypotheses quickly is the skill, and I don't know if you can effectively train that skill after an incident, when you've already seen the resolution.

Chess is maybe not a great analogy there, because chess positions rarely have objectively correct answers, only hard trade-offs. For that reason there's still a lot of value in reviewing a finished game, even once you know how it ended.