summa_tech 4 days ago

Long ago, before access to the Internet was cheap and plentiful, and way before search engines made finding this kind of information easy, this was a priceless find for an aspiring low-level programmer. All the (semi-)common PC hardware and software documented in one place.

Endless hours spent exploring VGA hardware registers and trying to apply them for cool visual effects. Learning how the then-new 32-bit Windows interacted with DOS extenders, and trying to make a homemade - very basic - operating system that could do it, too. The thrill of writing a Terminate and Stay Resident alarm clock, and having it finally not explode...

I have very fond memories of Ralf Brown's Interrupt List.

jesuslop 4 days ago | parent | next [-]

Absolutely. The title says 2018, but it really comes from the dawn of the PC. DOS was at INT 21h, and now Linux system calls on x86 are INT 80h.

EarlKing 4 days ago | parent | next [-]

Linux system calls WERE INT 80h. If your code is still using an interrupt to access kernel functions, then you've got problems. The syscall instruction exists for the simple reason that interrupts are expensive.

zeusk 4 days ago | parent | next [-]

What do you mean by that? Most syscalls are still interrupt based.

maggit 4 days ago | parent | next [-]

x86-64 introduced a `syscall` instruction to allow syscalls with a lower overhead than going through interrupts. I don't know any reason to prefer `int 80h` over `syscall` when the latter is available. For documentation, see for example https://www.felixcloutier.com/x86/syscall
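A minimal sketch of both entry paths, assuming GCC/Clang extended inline asm on Linux and the usual syscall numbers (write is 1 on x86-64, 4 on i386):

    /* Sketch: raw write(2) via the native Linux entry path; not production code. */
    static long raw_write(int fd, const void *buf, unsigned long len) {
        long ret;
    #if defined(__x86_64__)
        /* 64-bit path: the syscall instruction; clobbers rcx and r11 */
        __asm__ volatile ("syscall"
                          : "=a"(ret)
                          : "a"(1L), "D"((long)fd), "S"(buf), "d"(len)
                          : "rcx", "r11", "memory");
    #elif defined(__i386__)
        /* 32-bit path: the legacy INT 80h software interrupt */
        __asm__ volatile ("int $0x80"
                          : "=a"(ret)
                          : "a"(4L), "b"(fd), "c"(buf), "d"(len)
                          : "memory");
    #else
        ret = -1; /* elsewhere, just use the libc wrapper */
        (void)fd; (void)buf; (void)len;
    #endif
        return ret;
    }

    int main(void) {
        raw_write(1, "hello via raw syscall\n", 22);
        return 0;
    }

On x86-64, INT 80h still exists as a 32-bit compatibility path (it uses the 32-bit syscall numbers and 32-bit pointer arguments), which is why it keeps working even though it isn't what you want from 64-bit code.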

adrian_b 16 hours ago | parent | next [-]

While AMD's syscall and Intel's sysenter can provide much higher performance than the old "int" instruction, both have been designed very badly, as Linus himself has explained in many places. It is extremely easy to use them in ways that do not work correctly because of subtle bugs.

It is actually quite puzzling why both the Intel and the AMD designers were so incompetent in specifying a "syscall" instruction, when well-designed equivalents had existed in many other CPU ISAs for decades.

When not using an established operating system, where the "syscall" implementation has been tested for many years and hopefully all the bugs have been shaken out, there may be a reason to use the "int" instruction to transition into privileged mode, because it is relatively foolproof and requires a minimum of handling code.

Now Intel has specified FRED, a new mechanism for handling interrupts, exceptions and system calls, which does not have any of the defects of "int", "syscall" and "sysenter".

The first CPU implementing FRED should be Intel Panther Lake, to be launched by the end of this year, but surprisingly, when Intel recently gave a presentation about Panther Lake, not a word was said about FRED, even though it is expected to be Panther Lake's greatest innovation.

I hope that the Panther Lake implementation of FRED is not buggy, which could make Intel disable it and postpone its introduction to a future CPU, as they have done many times in the past. For instance, the "sysenter" instruction was intended to be introduced in the Intel Pentium Pro by the end of 1995, but because of bugs it was disabled and not documented until the Pentium II in mid-1997, where it finally worked.

messe 3 days ago | parent | prev [-]

32-bit x86 also has sysenter/sysexit.

adrian_b 16 hours ago | parent [-]

Only Intel. AMD has had its own "syscall" instead of Intel's "sysenter" since the K6 CPU, so x86-64 inherited that.

AMD's "syscall" corrects some defects of Intel's "sysenter", but unfortunately it introduces some new defects.

Details can be found in the Linux documentation, in comments by Linus Torvalds about the use of these instructions in the kernel.

kragen 4 days ago | parent | prev [-]

Int 80h still works as well as ever on i386.

kaladin-jasnah 4 days ago | parent | prev | next [-]

I recently found out about swi 0x123456 on ARM...
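(For the curious: 0x123456 is the ARM-state semihosting trap; a debugger or emulator intercepts the svc/swi and services the request described by r0/r1. A minimal sketch, assuming GCC on 32-bit ARM and a semihosting-aware host such as QEMU:)

    /* Sketch: ARM semihosting SYS_WRITE0 (operation 0x04) via svc 0x123456.
       "svc" is the newer mnemonic for "swi"; this only works when the host
       (debugger/emulator) implements semihosting. */
    static void semihost_write0(const char *s) {
        register long r0 __asm__("r0") = 0x04;       /* SYS_WRITE0 */
        register const char *r1 __asm__("r1") = s;   /* NUL-terminated string */
        __asm__ volatile ("svc 0x123456"
                          : "+r"(r0)                 /* result comes back in r0 */
                          : "r"(r1)
                          : "memory");
    }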

globalnode 4 days ago | parent | prev [-]

Many of the INT 21h entries were virus installation checks. Telling for the future direction of Microsoft?

anotherlab 3 days ago | parent | prev | next [-]

I was using this before 2018. I used to write TSR applets for data collection. Knowing which interrupts were being used was critical. It could mean the difference between your code working and it dying somewhere in expanded memory space.

burnt-resistor 3 days ago | parent | prev [-]

Running a disassembler on the system, VGA, and other add-in card BIOSes was often helpful. I recall figuring out how to cycle the palette faster than calling an interrupt, although it still required vsync to prevent snow* and tearing.

* When updating the overscan border color on some video cards' DACs via direct port I/O, there would be random speckling of previous and new colors, like analog snow, if you didn't wait for the vertical blanking interval. This is the sort of shit emulation doesn't reproduce faithfully. It sometimes took access to a lot of hardware to verify that a program doing hardware-specific VGA tweaks worked correctly.
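A minimal sketch of that kind of direct DAC access, assuming DOS-era port I/O (outportb/inportb as in Borland's <dos.h> or DJGPP's <pc.h>) and the standard VGA ports:

    /* Sketch: set a palette entry directly on the VGA DAC, waiting for
       vertical retrace first to avoid snow/tearing. DOS-era code. */
    #include <dos.h>                /* outportb/inportb (use <pc.h> on DJGPP) */

    #define INPUT_STATUS_1  0x3DA   /* bit 3 is set during vertical retrace */
    #define DAC_WRITE_INDEX 0x3C8
    #define DAC_DATA        0x3C9

    static void wait_vretrace(void) {
        while (inportb(INPUT_STATUS_1) & 0x08) ;     /* let any current retrace finish */
        while (!(inportb(INPUT_STATUS_1) & 0x08)) ;  /* then wait for the next one to begin */
    }

    /* r, g, b are 6-bit values (0..63) on the classic VGA DAC */
    static void set_dac_color(unsigned char index, unsigned char r,
                              unsigned char g, unsigned char b) {
        wait_vretrace();
        outportb(DAC_WRITE_INDEX, index);
        outportb(DAC_DATA, r);
        outportb(DAC_DATA, g);
        outportb(DAC_DATA, b);
    }

Rewriting a run of consecutive DAC entries this way during a single retrace is the kind of trick the parent describes doing faster than the BIOS interrupt.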