cluckindan 5 days ago
So the "fix" includes a completely new function? In a cryptography implementation? I feel like the article is giving out very bad advice that is going to end up shooting someone in the foot.
thadt 5 days ago | parent | next
Can you expand on what you find to be 'bad advice'? The author uses an LLM to find bugs, then throws away its fix and instead writes the code he would have written anyway. This seems like a rather conservative application of LLMs. To use the 'shooting someone in the foot' analogy: this article is an illustration of professional and responsible firearm handling.
OneDeuxTriSeiGo 5 days ago | parent | prev
The article even states that the solution Claude proposed wasn't the point. The point was finding the bug. AIs are very capable heuristic tools; being able to "sniff test" things blind is their specialty. That is, treat them like an extremely capable gas detector that can tell you there is a leak and where in the plumbing it is, not like a plumber who can fix the leak for you.