|
| ▲ | jjmarr 2 hours ago | parent | next [-] |
"Stable ABI" is a joke in C++ because you can't keep the ABI and change the implementation of a templated function, which blocks improvements to the standard library.

In C, ABI = API: the declaration of a function contains its name and arguments, which is all the information needed to call it. You can swap out the definition without affecting callers. That's why Rust can offer a stable C-style ABI; the definition of a function declared in C doesn't have to be in C!

But with a C++-style templated function, the caller needs access to the definition to do template substitution. If you change the definition, you need to recompile the calling code, i.e. an ABI break. If you don't recompile the calling code and you link against other libraries that use the new definition, you violate the one-definition rule (ODR). That's bad because linkers prune duplicate template instantiations for size reasons, so which definition you actually end up with is a mystery. Your code will break in mysterious ways.

This means the C++ committee can essentially never change the implementation of a standardized templated class or function. The one time they did, a minor change to std::string in 2011, was such a catastrophe that they never did it again. That is also why Rust won't support a stable ABI for any feature relying on generic types: it is impossible to keep the ABI stable and still optimize the implementation.
|
| ▲ | zrm 2 hours ago | parent | prev | next [-] |
> C and C++ are usually stuck in that antiquated thinking that you should build a module, package it into some libraries, install/export the library binaries and associated assets, then import those in other projects. That makes everything slow, inefficient, and widely dangerous.

It seems to me the "convenient" options are the dangerous ones.

The traditional method is for third-party code to have a stable API. Newer versions add functions or fix bugs, but existing functions continue to work as before. API mistakes get deprecated and alternatives offered, but newly-deprecated functions remain available for 10+ years. The result is that you can link every application against any sufficiently recent version of the library, e.g. the latest stable release, which can then be installed via the system package manager, with a manageable maintenance burden because only one version needs to be maintained.

Language package managers have a tendency to facilitate breaking changes. You "don't have to worry" about removing functions without deprecating them because anyone can just pull in the older version of the code. Except the older version is no longer maintained. Then you're running a version of the code from a few years ago, because you didn't need any of the newer features and it hadn't had any problems, until it picks up a CVE. Suddenly you have vulnerable code in production, and fixing it isn't just a matter of "apt upgrade": no one else is going to patch the version only you were using, and the current version has several breaking changes, so you can't switch to it until you integrate them into your code.
|
| ▲ | tialaramex 3 hours ago | parent | prev | next [-] |
It's not true that Rust rejects "the notion of a stable ABI". Rust rejects the C++ solution of "freeze everything and hope", because that solution is a disaster: it's less stable than some customers hoped, yet it's frozen in practice, so it disappoints everyone else too.

Rust's position is that an ABI should be a promise a developer explicitly makes or doesn't make, the way its existing C ABI works today. Rust is interested in a properly thought-out ABI that's nicer than the C ABI it already supports; it would be nice to have, say, an ABI for slices. But "freeze everything and hope" isn't that: it means every user of your language, into the unforeseeable future, pays for every mistake the language designers made. That's already a sizeable price for C++ to pay — "ABI: Now or never" spells some of that out — and we don't want to join them.
|
| ▲ | NetMageSCW 3 hours ago | parent | prev | next [-] |
| I would suggest importing binaries and metadata is going to be faster than compiling all the source for that. |
|
| ▲ | uecker 2 hours ago | parent | prev | next [-] |
"That makes everything slow, inefficient, and widely dangerous."

There is nothing faster or more efficient than building C programs. I am also not sure what is dangerous about having libraries. C++ is quite different, though.
| |
| ▲ | hansvm an hour ago | parent [-] |

Of course there is. Raw machine code is the gold standard; everything else is an attempt to achieve _something_ at some cost in performance, C included — and that's even when considering whole-program optimization and ignoring the overhead introduced by libraries. Other languages with better semantics frequently outperform C (slightly) because the compiler can assume more about the data and instructions being manipulated and generate tighter code.
|
|
| ▲ | stackghost 3 hours ago | parent | prev [-] |
>There are of course good ways of building C++, but those are the exception rather than the standard.

What are the good ways?
| |
| ▲ | mgaunard 14 minutes ago | parent | next [-] |

Build everything from source within a single unified workspace, and cache whatever artifacts were already built in content-addressable storage so that you don't need to build them again.

You should also avoid libraries, as they reduce granularity and needlessly complicate the logic. I'd also argue you shouldn't have any explicit declaration of dependencies at all: deduce them transparently from what the code includes, with some logic to map header files to implementation files.
| ▲ | lstodd an hour ago | parent | prev [-] |

"Do not do it" looks like the winning one nowadays.
|