gregw2 a day ago

This was a very thoughtful essay; I've had similar lines of thinking myself. How will we debug auto-generated AI code?

That said, the essay overlooks one key line of reasoning about debugging, derived from Kernighan's Law. (Kernighan as in the K in AWK and the K in K&R C...)

The law states: "Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it."

If Kernighan's Law is "true" in some rough sense (as I have long believed), then we have a potential solution to the "AI debugging" problem... ask the LLM to make the code four times simpler than it needs to be, or have a 4x dumber model write it. (If debugging is 2x harder than writing, 2x simpler only breaks even; 4x leaves headroom.) Then a smarter model (or we ourselves) can debug it. Right?
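
Half joking, but here's one way the pipeline could look as a sketch, using the Anthropic Python SDK; the model IDs are placeholders for whatever cheap/strong pair you have access to:

    # "write it 4x dumber, debug it smarter" as a two-model pipeline (sketch).
    import anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

    WRITER_MODEL = "claude-3-haiku-20240307"   # placeholder: the "4x dumber" writer
    DEBUGGER_MODEL = "claude-3-opus-20240229"  # placeholder: the smarter debugger

    def write_simple(task: str) -> str:
        """Ask the cheap model for deliberately unclever code."""
        msg = client.messages.create(
            model=WRITER_MODEL,
            max_tokens=2048,
            messages=[{
                "role": "user",
                "content": "Write the simplest, most boring code that solves "
                           "this. No cleverness, no golfing:\n\n" + task,
            }],
        )
        return msg.content[0].text

    def debug_with_headroom(code: str, failure: str) -> str:
        """Hand the deliberately simple code to the stronger model to debug."""
        msg = client.messages.create(
            model=DEBUGGER_MODEL,
            max_tokens=2048,
            messages=[{
                "role": "user",
                "content": "This code fails with:\n" + failure +
                           "\n\nCode:\n" + code + "\n\nFind and fix the bug.",
            }],
        )
        return msg.content[0].text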

rolha-capoeira 19 hours ago | parent | next [-]

why do *you* need to debug it? or, why do you need to *debug* it?

my crazy thoughts, which I am open to being wrong about:

this is still the old way of thinking. generative AI is a probability function: the results exist in a probability space. we expect to find a result at a specific point in that space, but we don't know the inputs that reach it until it's solved. catch-22.

instead, we must embrace the chaos by building attractors into the system and pruning results that stray too far from the boundaries we set. we should focus only on macro-level results, not the code. multiple solutions can coexist at any given time. if anything does the "debugging", it's a generative AI agent.
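
a rough sketch of the loop I'm picturing, in python (generate_candidate and passes_boundaries are hypothetical hooks, standing in for an LLM sampler and a macro-level test harness):

    from typing import Callable

    def evolve(
        task: str,
        generate_candidate: Callable[[str], str],  # hypothetical: one sampled solution
        passes_boundaries: Callable[[str], bool],  # hypothetical: macro-level checks only
        population: int = 8,
        rounds: int = 10,
    ) -> list[str]:
        survivors: list[str] = []
        for _ in range(rounds):
            # sample fresh candidates alongside prior survivors; the boundary
            # check acts as the attractor pulling the population toward it
            pool = survivors + [generate_candidate(task) for _ in range(population)]
            # prune anything that strays too far from the boundaries we set
            survivors = [c for c in pool if passes_boundaries(c)]
        return survivors  # multiple solutions coexisting at any given time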

tucson-josh 21 hours ago | parent | prev | next [-]

Haha, love it.

Step 1: determine how to quantify cleverness

lostmsu 20 hours ago | parent | prev [-]

Hm, have they tried asking Claude to debug generated code? It does it pretty well.