| ▲ | AKSF_Ackermann 5 hours ago |
| > When programming, it is still important to write code that runs correctly on systems with either byte order
What you should do instead is write all your code so it is little-endian only, as the only relevant big-endian architecture is s390x, and if someone wants to run your code on s390x, they can afford a support contract. |
|
| ▲ | socalgal2 7 minutes ago | parent | next [-] |
| I'm with you on this. I lived through the big-endian/little-endian hell of the 80s and 90s. Little endian won. Anyone making a big-endian architecture at this point would be shooting themselves in the foot because of all the incompatibilities. Don't make things more complicated. In fact, if you made a big-endian arch and ran a browser on it, I'd be surprised if some large number of websites didn't fail, because they use typed arrays and aren't endian-aware. The solution is not to ask every programmer in the universe to write endian-aware code. The solution is to standardize on little endian. |
|
| ▲ | jcalvinowens 3 hours ago | parent | prev | next [-] |
| Don't ignore endianness. But making little endian the default is the right thing to do, it is so much more ubiquitous in the modern world. The vast majority of modern network protocols use little endian byte ordering. Most Linux filesystems use little endian for their on-disk binary representations. There is absolutely no good reason for networking protocols to be defined to use big endian. It's an antiquated arbitrary idea: just do what makes sense. Use these functions to avoid ifdef noise: https://man7.org/linux/man-pages/man3/endian.3.html |
|
| ▲ | cbmuser an hour ago | parent | prev | next [-] |
| > What you should do instead is write all your code so it is little-endian only, as the only relevant big-endian architecture is s390x, and if someone wants to run your code on s390x, they can afford a support contract. Or you can just be a nice person and make your code endian-agnostic. ;-) |
|
| ▲ | GandalfHN 3 hours ago | parent | prev | next [-] |
| Outsourcing endianness pain to your customers is an easy way to teach them about segfaults and silent data corruption. s390x is niche, endian bugs are not. Network protocols and file formats still need a defined byte order, and the first time your code talks to hardware or reads old data, little-endian assumptions leak all over the place. Ignoring portability buys you a pile of vendor-specific hacks later, because your team will meet those 'irrelevant' platforms in appliances, embedded boxes, or somebody else's DB import path long before a sales rep waves a support contract at you. |
| |
| ▲ | AKSF_Ackermann 3 hours ago | parent | next [-] | | Not sure why you consider that to be an issue. If you need to interact with a format that specifies values to be BE, just always byte-swap. And every appliance/embedded box I had to interact with ran either x86 or some flavour of 32-bit ARM (in LE mode, of course). | |
| ▲ | adrian_b an hour ago | parent | prev | next [-] | | Endianness problems should have been solved by compilers, not by programmers. Most existing CPUs have instructions to load and store memory data of various sizes into registers while reversing the byte order. So programs that work with big-endian data typically differ from those working with little-endian data just by the choice of load and store instructions. Therefore you should have types like int16, int32, int64 for little-endian integers and int16_be, int32_be, int64_be for big-endian integers, and the compiler should generate the appropriate code. At least in languages with user-defined data types and overloadable operators and functions, like C++, you can define these yourself when the language does not provide them, instead of using ugly workarounds like htonl and the like, which can be very inefficient if the compiler is not clever enough to optimize them away. | |
| ▲ | 3 hours ago | parent | prev | next [-] | | [deleted] | |
| ▲ | 7jjjjjjj 3 hours ago | parent | prev [-] | | Assuming an 8-bit byte used to be a "vendor specific hack." Assuming two's complement integers used to be a "vendor specific hack." When all the 36-bit machines died, and all the one's complement machines died, we got over it. That's where big endian is now. All the BE architectures are dying or dead. No big endian system will ever be popular again. It's time for big endian to be consigned to the dustbin of history. | | |
| ▲ | namibj 36 minutes ago | parent | next [-] | | JS numbers behave much more like C's definition of signed overflow being UB: they are effectively 53-bit integers with a SEPARATE sign bit, and behave non-associatively when overflow happens. | |
| ▲ | zephen an hour ago | parent | prev | next [-] | | > It's time for big endian to be consigned to the dustbin of history. And especially what most people call big-endian, which is a bastardized mixed-endian mess where the most significant byte is byte zero while the least significant bit is bit zero. | |
| ▲ | cmrdporcupine 2 hours ago | parent | prev [-] | | > No big endian system will ever be popular again Cries in 68k nostalgia |
|
|
|
| ▲ | j16sdiz 4 hours ago | parent | prev | next [-] |
| If you come down to low-level network protocols (e.g. writing a TCP stack), the "network byte order" is always big-endian. |
| |
| ▲ | edflsafoiewq 3 hours ago | parent | next [-] | | That's a serialization format. | |
| ▲ | 7jjjjjjj 2 hours ago | parent | prev | next [-] | | It goes without saying that all binary network protocols should document their byte order, and that if you're implementing a protocol documented as big endian you should use ntohl and friends to ensure correctness. However, if designing a new network protocol, choosing big endian is insanity. Use little endian, skip the macros, and just add #if __BYTE_ORDER__ != __ORDER_LITTLE_ENDIAN__ / #error or the like to a header somewhere. | | |
| ▲ | AnthonyMouse an hour ago | parent [-] | | What does it actually cost you to define a macro which is a no-op on little endian architectures and then use it at the point of serialization/deserialization? |
| |
| ▲ | whizzter 2 hours ago | parent | prev | next [-] | | And honestly, at this point it's mostly a historical artifact. If we write that kind of stuff then sure, we need to care, but caring about it when producing modern stuff is honestly a massive waste of time at this point. FWIW, I do hobby stuff for Amigas (68k, big-endian), but that's just that, hobby stuff. | |
| ▲ | skrtskrt 4 hours ago | parent | prev [-] | | Prometheus index format is also a big-endian binary file - haven’t found any reference to why it was chosen. |
|
|
| ▲ | addaon 4 hours ago | parent | prev | next [-] |
| There's still at least one relevant big-endian-only ARM chip out there, the TI Hercules. While in the past five or ten years we've gone from having very few options for lockstep microcontrollers (with the Hercules being a very compelling option) to being spoiled for choice, the Hercules is still a good fit for some applications, and is a pretty solid chip. |
|
| ▲ | sllabres 2 hours ago | parent | prev | next [-] |
Not only the System/390. It's also IBM i, AIX, and for many protocols the network byte order. AFAIK the binary data in JPG (1) and Java class [2] files are big endian. And if you write down a hexadecimal number as 0x12345678 you are writing big-endian. (1) For JPG, this applies to the embedded TIFF metadata, which can have either byte order. [2] https://docs.oracle.com/javase/specs/jvms/se7/html/jvms-4.ht... |
| |
| ▲ | hmry 2 hours ago | parent [-] | | The endianness of file formats and handwriting is irrelevant when it comes to deciding whether your code should support running on big-endian CPUs. The only question that matters: Do your customers / users want to run it on big-endian hardware? And for 99% of programmers, the answer is no, because their customers have never knowingly been in the same room as a big-endian CPU. |
|
|
| ▲ | nyrikki 4 hours ago | parent | prev | next [-] |
| The blog post linked in the OP explains this better IMHO [0]: if the data stream encodes values with byte order B, then the algorithm to decode the value on a computer with byte order C should be about B, not about the relationship between B and C. One cannot just ignore the big/little data interchange problem: MacOS [1], Java, TCP/IP, JPEG, etc. The point (for me) is not that your code runs on a s390, it is that you abstract your personal local implementation details from the data interchange formats. And unfortunately, almost all of the processors are little, and many of the popular and unavoidable externalization formats are big... [0] https://commandcenter.blogspot.com/2012/04/byte-order-fallac... [1] https://github.com/apple/darwin-xnu/blob/main/EXTERNAL_HEADE... |
| |
| ▲ | whizzter 2 hours ago | parent | next [-] | | MacOS "was" big-endian due to 68k and later PPC CPUs (the PPC Macs could have been little, but Apple picked big for convenience and porting). The x86 changeover moved the CPUs to little-endian, and AArch64 continues that tradition. Same with Java: there's probably a strong influence from SPARC, and with PPC, 68k and SPARC all being relevant back in the 90s it wasn't a bold choice. But all of this is more or less legacy at this point. I have little reason to believe that the types of code I write will ever end up on a s390 or any other big-endian platform unless something truly revolutionizes the computing landscape, since x86, AArch64, RISC-V and so on all run little now. | |
| ▲ | adrian_b an hour ago | parent | prev [-] | | To cope with data interchange formats, you need a set of big endian data types, e.g. for each kind of signed or unsigned integer with a size of 16 bits or bigger you must have a big endian variant, e.g. identified with a "_be" suffix. Most CPUs (including x86-64) have variants of the load and store instructions that reverse the byte order (e.g. MOVBE in x86-64). The remaining CPUs have byte reversal instructions for registers, so a reversed byte order load or store can be simulated by a sequence of 2 instructions. So the little-endian types and the big-endian data types must be handled identically by a compiler, except that the load and store instructions use different encodings. The structures used in a data-exchange format must be declared with the correct types and that should take care of everything. Any decent programming language must provide means for the user to define such data types, when they are not provided by the base language. The traditional UNIX conversion functions are the wrong way to handle endianness differences. An optimizing compiler must be able to recognize them as special cases in order to be able to optimize them away from the machine code. A program that is written using only data types with known endianness can be compiled for either little-endian targets or big-endian targets and it will work identically. All the problems that have ever existed in handling endianness have been caused by programming languages where the endianness of the base data types was left undefined, for fear that recompiling a program for a target of different endianness could result in a slower program. This fear is obsolete today. |
|
|
| ▲ | bear8642 4 hours ago | parent | prev | next [-] |
| > the only relevant big-endian architecture is s390x The adjacent POWER architecture is also still relevant - but as you say, they too can afford a support contract. |
| |
|
| ▲ | EPWN3D 4 hours ago | parent | prev [-] |
| I mostly agree, but network byte ordering is still a thing. |