| ▲ | asveikau 3 days ago |
| > You have two programs, compiled in different languages with different memory management strategies, potentially years apart |
| Sometimes they are even the same language. Windows has a few problems that I haven't seen in the Unix world, such as each DLL potentially having its own incompatible implementation of malloc, where allocating with malloc(3) in one DLL and then freeing with free(3) in another causes a crash. |
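A minimal sketch of that failure mode, assuming the DLL and the executable end up linked against different C runtimes; the file and function names here are made up:

    /* mylib.c -- built into mylib.dll, linked against its own CRT (e.g. a debug or static CRT) */
    #include <stdlib.h>
    #include <string.h>

    __declspec(dllexport) char *make_greeting(void)
    {
        char *s = malloc(32);   /* allocated on the heap owned by this DLL's CRT */
        if (s)
            strcpy(s, "hello");
        return s;
    }

    /* app.c -- built into app.exe, linked against a different CRT */
    #include <stdio.h>
    #include <stdlib.h>

    __declspec(dllimport) char *make_greeting(void);

    int main(void)
    {
        char *s = make_greeting();
        if (s) {
            printf("%s\n", s);
            free(s);            /* resolves to app.exe's CRT, which does not own the
                                   allocation: heap corruption or a crash when the CRTs differ */
        }
        return 0;
    }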
|
| ▲ | snuxoll 11 hours ago | parent | next [-] |
| > where allocating with malloc(3) in one DLL and then freeing with free(3) in another causes a crash. |
| This can still happen all the time on UNIX systems. glibc's malloc implementation is a fine general-purpose allocator, but there are plenty of times when you want to bring in tcmalloc, jemalloc, etc. Of course, you hope that the various libraries will resolve to your implementation of choice when the linker wires everything up, but they can opt not to just as easily. |
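For example, the usual LD_PRELOAD-style interposer on a glibc/ELF system; this is only a sketch (a real one also has to cover calloc/realloc and be careful about re-entrancy during startup), and the file name is made up:

    /* myshim.c -- build: cc -shared -fPIC -o libmyshim.so myshim.c -ldl
       run:   LD_PRELOAD=./libmyshim.so ./some_program
       Every library whose malloc/free references bind to the global symbols
       goes through this shim; anything with a private allocator does not. */
    #define _GNU_SOURCE
    #include <dlfcn.h>
    #include <stddef.h>

    static void *(*real_malloc)(size_t);
    static void (*real_free)(void *);

    void *malloc(size_t n)
    {
        if (!real_malloc)
            real_malloc = (void *(*)(size_t))dlsym(RTLD_NEXT, "malloc");
        return real_malloc(n);
    }

    void free(void *p)
    {
        if (!real_free)
            real_free = (void (*)(void *))dlsym(RTLD_NEXT, "free");
        if (p)
            real_free(p);
    }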
| |
| ▲ | asveikau 9 hours ago | parent [-] |
| No, actually, this doesn't happen the same way on modern Unix. The way symbol resolution works is just not the same. A library asking for an extern called "malloc" will get the same malloc. To use those other allocators, you would typically give them a different symbol name, or make the whole process use the new one. |
| A DLL import on Windows explicitly calls for the DLL by name. You could have some DLLs explicitly ask for a different version of the Visual Studio runtime, or for one with different threading settings, release vs. debug, etc., and a C extern asking simply for the name "malloc", with no other details, will resolve to that. It may be incompatible with another DLL in the same process, even though from the compiler's perspective it is just extern void *malloc(size_t), with no other decoration or renaming of the symbol. There is a rarely used symbol versioning mechanism to accomplish something similar on a modern gcc/clang/ELF setup, but that's not the way anybody does this. |
| I would argue that the modern Unix way, with these limitations, is better, by the way. Maybe some older Unix in the early days of shared libraries, early '90s or so, tried what Windows does, I don't know. But it's not common today. |
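A sketch of the ELF behavior described here, assuming a typical gcc/glibc toolchain and made-up library names: the library allocates, the main program frees, and both bind to the one process-wide malloc/free, so crossing the module boundary is fine:

    /* liba.c -- cc -shared -fPIC -o liba.so liba.c */
    #include <stdlib.h>
    #include <string.h>

    char *liba_make(void)
    {
        char *s = malloc(16);   /* binds to the process-wide "malloc" */
        if (s)
            strcpy(s, "from liba");
        return s;
    }

    /* main.c -- cc -o demo main.c ./liba.so */
    #include <stdio.h>
    #include <stdlib.h>

    char *liba_make(void);

    int main(void)
    {
        char *s = liba_make();
        puts(s ? s : "(null)");
        free(s);                /* same definition of free(), so this is fine */
        return 0;
    }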
| ▲ | snuxoll 7 hours ago | parent [-] |
| > No, actually, this doesn't happen the same way on modern Unix. The way symbol resolution works is just not the same. A library asking for an extern called "malloc" will get the same malloc. To use those other allocators, you would typically give them a different symbol name, or make the whole process use the new one. |
| This is, yes, the behavior of both the ELF specification and the GNU linker. I'm not here to argue semantics of symbol namespaces and resolution, though: I can just as easily link a static jemalloc into an arbitrary ELF shared object, use it inside for every allocation, and not give a damn about what the global malloc() symbol points to. There are a half dozen other ways I can end up with a local malloc() symbol instead of having the linker bind the global one. Which is the entire point I'm trying to make. Is this a bigger problem on Windows than on UNIX-like platforms, due to the way runtime linker support is handled? Yes. Is it entirely possible to have the same issue anyway? Yes, absolutely. |
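A sketch of that scenario with made-up names, using a trivial internal pool as a stand-in for a statically linked jemalloc; built with -fvisibility=hidden so nothing inside binds to the global allocator, it reproduces the Windows-style mismatch as soon as a caller hands the returned pointer to the global free():

    /* libpriv.c -- cc -shared -fPIC -fvisibility=hidden -o libpriv.so libpriv.c */
    #include <stddef.h>
    #include <string.h>

    /* Stand-in for a private allocator linked into this .so
       (e.g. a bundled jemalloc built with hidden or prefixed symbols). */
    static char pool[4096];
    static size_t used;

    static void *priv_alloc(size_t n)
    {
        if (used + n > sizeof pool)
            return NULL;
        void *p = pool + used;
        used += n;
        return p;
    }

    __attribute__((visibility("default")))
    char *libpriv_make(void)
    {
        char *s = priv_alloc(16);   /* NOT the process-wide malloc */
        if (s)
            strcpy(s, "from libpriv");
        return s;
    }

    /* A caller that does free(libpriv_make()) hands a non-heap pointer to the
       global free(): undefined behavior, typically an abort from glibc. */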
|
|
|
| ▲ | pjmlp 2 days ago | parent | prev [-] |
| Because the C standard library isn't part of the OS. Outside UNIX, the C standard library is the responsibility of the C compiler vendor, not the OS. Nowadays Windows might seem the odd one out, but 30 years ago the operating system landscape was more diverse. You will also find similar issues with dynamic libraries on mainframes/micros from IBM and Unisys that are still being sold. |
| |
| ▲ | asveikau 2 days ago | parent [-] |
| Yeah, I know the reasons for this; I'm just saying it's unusual coming from the currently dominant Unix-like systems. |
|