bonzini | 2 days ago
Just one nit: contrary to what the article suggests, as far as I remember the compact model was not so common, because using far pointers for all data is slow and wastes memory. Also, the globals and the stack had to fit in 64k anyway, so compact only bought you a larger heap. However, there were variants of malloc and free that returned or accepted far pointers, or alternatively you could ask DOS for memory in 16-byte units and slice it yourself (e.g. for loading game assets). Therefore many programs used the small and medium models instead of compact and large respectively, and annotated pointers to large data (which is almost always runtime-loaded and dynamically allocated anyway) by hand with the __far modifier. This was the most efficient setup, with the only problem that, due to the 64k limit, you could hardly use the near heap or recursion.
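To make the setup bonzini describes concrete, here is a minimal sketch, assuming Borland Turbo C compiled in the small model: farmalloc/farfree live in Turbo C's <alloc.h> (Microsoft C spells them _fmalloc/_ffree), and depending on the compiler the keyword is written far, _far, or __far.

    /* Minimal sketch: small model, with one hand-annotated far allocation.
       Assumes Turbo C's <alloc.h>; Microsoft C uses _fmalloc/_ffree. */
    #include <alloc.h>   /* farmalloc, farfree */
    #include <stdio.h>

    struct asset { unsigned char bytes[40000u]; };  /* too big for the near heap */

    int main(void)
    {
        /* Only this runtime-loaded asset pays the far-pointer tax; globals,
           the stack, and ordinary pointers stay 16-bit near pointers. */
        struct asset __far *a = (struct asset __far *) farmalloc(sizeof(struct asset));
        if (a == 0) {
            puts("far heap exhausted");
            return 1;
        }
        a->bytes[0] = 0xAA;  /* each access loads a full segment:offset pair */
        farfree(a);
        return 0;
    }

The trade-off is visible here: everything the compiler generates stays on cheap 16-bit pointers, and only the explicitly annotated accesses pay for segment register loads.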
tiahura | 2 days ago
1. Compact Model Limits: The stack and globals don't strictly need to fit in 64 KB; far pointers allow larger heaps, but inefficiency made this model unpopular.
2. Malloc Variants: While farmalloc and farfree existed, developers often used direct DOS memory allocation for better control (see the sketch after this list).
3. Stack Constraints: Stack and recursion limits were due to 64 KB segments, not specific to the compact or small models.
4. Far Pointers: Using __far for dynamic data was common across models; compact/large automated this but were inefficient.
5. Heap/Recursion Use: The heap and recursion were constrained, not "hardly usable," due to far-pointer overhead and stack size.
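For point 2, a hedged sketch of the direct-DOS route, assuming Turbo C's <dos.h> (union REGS, int86, MK_FP); the INT 21h interface itself is standard DOS: function 48h allocates in 16-byte paragraphs (BX = count in, AX = segment out, carry set on failure), and function 49h frees.

    /* Sketch: grab a raw block of paragraphs straight from DOS and
       slice it yourself. Assumes Turbo C's int86/MK_FP from <dos.h>. */
    #include <dos.h>
    #include <stdio.h>

    /* Ask DOS (INT 21h, AH=48h) for `paras` 16-byte paragraphs. */
    void __far *dos_alloc(unsigned paras)
    {
        union REGS r;
        r.h.ah = 0x48;            /* DOS: allocate memory block */
        r.x.bx = paras;           /* size in paragraphs */
        int86(0x21, &r, &r);
        if (r.x.cflag)            /* carry set: allocation failed */
            return 0;
        return MK_FP(r.x.ax, 0);  /* AX holds the new block's segment */
    }

    int main(void)
    {
        /* 4096 paragraphs = 64 KB: one slab to suballocate game assets from. */
        unsigned char __far *slab = (unsigned char __far *) dos_alloc(4096);
        if (slab == 0) { puts("DOS is out of memory"); return 1; }
        slab[0] = 1;
        return 0;
    }

One DOS call, then your own suballocator over seg:off addresses; that is the "better control" compared to going through the C runtime's far heap.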