amluto 5 days ago
Plain int3 is a footgun: the CPU does not keep track of the address of the int3 (at least not until FRED), and it reports the address after the int3. It's impossible to reliably undo that in software; most debuggers don't even try, and the result is a failure to identify the location of the breakpoint. It's problematic if the int3 is the last instruction in a basic block, and even worse if the optimizer thinks that whatever is after the int3 is unreachable. If Rust's standard library does this, please consider using int3; nop instead.
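For illustration, a minimal sketch of that pattern using Rust inline assembly (x86-64 only; the function name is made up, and this is not what the standard library actually emits):

    #[inline(always)]
    fn breakpoint_with_padding() {
        // int3 traps, and the CPU reports the address of the *next* instruction.
        // The trailing nop guarantees that address still lands on an instruction
        // belonging to this breakpoint site, even when int3 would otherwise be
        // the last instruction in its basic block.
        unsafe { core::arch::asm!("int3", "nop") };
    }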
JoshTriplett 5 days ago
Good to know! I've seen the pattern of "int3; nop" before, but I've never seen the explanation for why. I'd always assumed it involved the desire to be able to live-patch a different instruction over it. In Rust, we're using the `llvm.debugtrap` intrinsic. Does that DTRT?
rep_lodsb 5 days ago
The "canonical" INT 3 is a single byte opcode (CCh), so the debugger can just subtract 1 from the address pushed on the stack to get the breakpoint location. There is another encoding (CD 03), but no assembler should emit it. It used to be possible for adversarial code to confuse debug interrupt handlers with this, but this should be fixed now. | ||||||||||||||||||||||||||