ebiederm a day ago
There is an aspect of the history of Forth and C I have been trying to wrap my head around. The early B compiler was reported to generate threaded code (like Forth). The threaded code was abandoned fairly early in the port from the PDP-7 to the PDP-11, as it was deemed too slow to write an operating system in. At that point Unix and C lost a very interesting size optimization, with the net result that Forth was portable between machines circa 1970 while Unix had to wait until circa 1975 with Unix V6.

I have been trying to go back through the history to see if I could understand why threaded code was deemed too slow. Today the largest part of an executable is code that is not run often and would probably benefit from a smaller representation (less memory and cache space, making the system faster overall). So this is a practical question even today.

I found a copy of Unix for the PDP-7, reproduced from old printouts that were typed back in. If I have read the assembly correctly, the B compiler was not using an efficient form of threaded code at all. The PDP-7 is an interesting machine: its cells were 18 bits wide and the address bus was 12 bits wide, which meant there was room for an opcode and a full address in every cell. As I read the B compiler, it was using a form of token threading with everything packed into a single 18-bit cell. The basic operations of B were tokens, and an if token was encoded with a full address in the cell. Every token had to be decoded via a jump table; the address of the target code was then plugged into a jump instruction which was immediately run.

Given the width of the cells, I wonder what the conclusions about the performance of B would have been had it used subroutine threading, or a similar technique using jmp instructions. Does anyone know if Forth suffers measurably in inner loops from having to call words that perform basic operations?
Is this where a Forth programmer would be accustomed to writing the inner loop in assembly to avoid the performance penalty?
jacquesm a day ago | parent
I can probably shed some light on that. I've used Forth on 8-bit platforms (6502, 6809), 16-bit platforms (80286) and 32-bit platforms (68K), as well as assembly, and on the 16- and 32-bit platforms C. Putting these side by side, and assuming roughly equivalent programmer competence levels, at the time assembler would win out, C would get you to maybe half the speed of assembly on a good day, and Forth was about 10x slower than that.

Which was still incredibly fast for the day, given that Forth was compiled to an intermediary format with the Forth interpreter acting as a very primitive virtual machine. This interpretation step had considerable overhead; especially in inner loops with few instructions the overhead would be massive. For every instruction doing actual work you'd have a whole slew of them assigned to bookkeeping and stack management. What in C would compile to a few machine instructions (which a competent assembly programmer of the time would be able to significantly improve upon) would in Forth result in endless calls to lower and lower levels. There were later Forth implementations that improved on this by compiling to native code, but I never had access to those when I was still doing this.

For a lark I wrote a Forth in C rather than bootstrapping it through assembly, and it performed quite well. Forth is ridiculously easy to bring up; it is essentially a few afternoons' work to go from zero to highway speeds on a brand new board that you have a compiler for. Which is one of the reasons it is still a favorite for initial board bring-up.

One area where Forth usually beat C by a comfortable margin was code size: Forth code tends to be extremely compact (and devoid of any luxury). On even the smallest microcontrollers (the 8051, for instance, and later Microchip parts and such) you could get real work done in Forth.
rerdavies a day ago | parent
It is not so much the parts of the code that run infrequently that contribute to poor performance, but the very tiny <1% of all code that does run frequently, and should be running completely in cache. So code size doesn't have an enormous impact on speed of execution.

The overhead of threading seems pretty obvious: call and return instructions are expensive compared to the cost of the one equivalent instruction that would have been executed in a compiled implementation. And placing arguments on a stack means that all operands have to be read from and written to memory, incurring additional ferocious overhead, whereas a compiler would enregister values, particularly in performance-critical code. It is not unreasonable to expect that Forth code is going to run at least an order of magnitude slower than compiled code.
kragen 16 hours ago | parent
> Does anyone know if Forth suffers measurably in inner loops from having to call words that perform basic operations?

Yes, the slowdown is of the order of 8× for DTC, ITC, and bytecode ("token threading"). Eliminating the jump table reduces the overhead a bit, but it's still order 8×. The B compiler bundled a copy of the bytecode interpreter into each executable; that might have made it less appealing as a size optimization. For a big enough program it would still have won.

Subroutine threading is really just compact native code, but it still suffers from typically about 4× overhead for basic operations like dup, @, +, or exit (the traditional name for the runtime effect of ;). The primitive operations these execute are typically one or two cycles on a RISC such as a Cortex-M4, while a subroutine call and return are two more cycles, often plus two to four cycles of pipeline bubble (if the processor doesn't have good enough branch prediction). Presumably on the PDP-7 a subroutine call would have needed an additional memory cycle to store the return address into memory and another one to fetch it, plus two more memory cycles to fetch the call and return instructions. (I'm not familiar with the -7's instruction set, so correct me if I'm wrong.)

With respect to dup, though: commonly dup, drop, swap, and over represent operations that don't appear in optimized native code at all. They just tell the following operations which data to operate on, a purpose which is normally achieved by operand fields in native code. So the runtime overhead of stack-bytecode interpretation is worse than it appears at first: each bytecode instruction takes on the order of 4× or 8× the time of a native instruction doing the same thing, but you have to run about twice as many bytecode instructions, because about half of them are stack manipulation. So your total slowdown is maybe 8× or 16×.
You may also be interested in looking at the program dc, which IIRC was one of the programs Unix was originally written to run. It's a stack bytecode designed to be written by hand, like the HP desk calculators of the time but with arbitrary precision.