kragen | 5 days ago:

"Concurrent" doesn't usually mean "bounded in worst-case execution time", especially on a uniprocessor. Does it in this case?

InvisiCaps sound unbelievably amazing. Even CHERI hasn't managed to preserve pointer size.
pizlonator | 5 days ago:

> "Concurrent" doesn't usually mean "bounded in worst-case execution time", especially on a uniprocessor. Does it in this case?

Meh. I was in the real time GC game for a while, when I was younger. Nobody agrees on what it really means to bound the worst case. If you're a flight software engineer, it means one thing. If you're a game developer, it means something else entirely. And if you're working on the audio stack specifically, it means yet another thing (somewhere in between game and flight).

So let me put it this way, using the game-audio-flight framework:

- Games: I bound worst case execution time, just assuming a fair enough OS scheduler, even on a uniprocessor.

- Audio: I bound worst case execution time if you have multiple cores.

- Flight: I don't bound worst case execution time. Your plane crashes and everyone is dead.
gf000 | 5 days ago:

> "Concurrent" doesn't usually mean "bounded in worst-case execution time"

Sure, though this is also true for ordinary serial code, with all the intricate interactions between the OS scheduler, different caches, filesystem, networking, etc.
kragen | 5 days ago:

Usually when people care about worst-case execution time, they are running their code on a computer without caches, and either no OS or an OS with a very simple, predictable scheduler. And they never access the filesystem (if there is one) or wait on the network (if there is one) in their WCET-constrained code.

Those are the environments that John upthread was talking about when he said:

> There's tons of embedded use cases where a GC is not going to fly just from a code size perspective, let alone latency.

That's mostly where I've often seen C (not C++) for new programs. But I've seen C++ there too. If you're worried about the code size of a GC, you probably don't have a filesystem.
pizlonator | 5 days ago:

Yeah totally, if you're in those kinds of environments, then I agree that a GC is a bad choice of tech. I say that even though, as you noticed in another reply, I worked on research to try to make GC suitable for exactly those environments. I had some cool demos, and a lot of ideas in FUGC come from that. But I would not recommend you use GC in those environments!

There is a way to engineer Fil-C to not rely on GC. InvisiCaps would work with isoheaps (what those embedded dudes would just call "object pools"). So, if we wanted to make a Fil-C-for-flight-software, then that's what it would look like, and honestly it might even be super cool.
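For readers who haven't met the term: an isoheap, or object pool, is a fixed-capacity allocator where every slot holds the same type, so freed memory can only ever be reused as another object of that type, and allocation is O(1) with no system calls. A minimal sketch in C (names are illustrative, not Fil-C's actual isoheap machinery):

```c
#include <stddef.h>

/* Fixed-capacity, single-type object pool ("isoheap"): all storage is
   reserved up front, so alloc/release never touch the OS and freed
   memory is only ever recycled as the same type. */
typedef struct node { struct node *next; double payload; } node;

#define POOL_CAP 64
static node pool_slots[POOL_CAP];
static node *pool_free;              /* intrusive free list */

static void pool_init(void) {
    pool_free = NULL;
    for (int i = POOL_CAP - 1; i >= 0; i--) {
        pool_slots[i].next = pool_free;
        pool_free = &pool_slots[i];
    }
}

static node *pool_alloc(void) {      /* O(1), no syscalls */
    node *n = pool_free;
    if (n) pool_free = n->next;
    return n;                        /* NULL when exhausted */
}

static void pool_release(node *n) {  /* O(1) */
    n->next = pool_free;
    pool_free = n;
}
```

Exhaustion returns NULL rather than growing, which is exactly the property flight-software rules (no dynamic allocation after init) want.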
gf000 | 5 days ago:

Well, there is a whole JVM implementation for hard real-time with a GC that's used in avionics/military -- hard real time is a completely different story; slowness is not an issue there, you exchange fast execution for a promise of keeping a response time. But I don't really think it's meaningful to bring that up, as it is a niche of a niche.

Soft real-time (which most people may end up touching, e.g. video games) is much more forgiving; see all the games running on Unity with a GC. An occasional frame drop won't cause an explosion here, and managed languages are more than fine.
kragen | 5 days ago:

Are you talking about Ovm (https://dl.acm.org/doi/10.1145/1324969.1324974, https://apps.dtic.mil/sti/citations/ADA456895)? pizlonator (the Fil-C author) was one of Ovm's authors 17 years ago. I don't think it's in current use, but hopefully he'll correct me if I'm wrong.

The RTSJ didn't require a real-time GC (and IIRC at the time it wasn't known how to write a hard-real-time GC without truly enormous overheads), and it didn't have a real-time GC at the time. Perhaps one has been added since then.

I don't agree that "it is a niche of a niche". There are probably 32× as many computers in your house running hard-real-time software as computers that aren't. Even Linux used to disable interrupts during IDE disk accesses!
johncolanduoni | 5 days ago:

For embedded use cases, it can definitely kill you. Small microcontrollers frequently have constant IPC for a given instruction stream, and you regularly see simple for loops get used for timing.
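On such a core, a plain counted loop really is a delay primitive, because each iteration costs a fixed number of cycles. A sketch of the idiom, with an assumed clock rate and cycles-per-iteration (on a real part you'd take both from the datasheet or measure them):

```c
#include <stdint.h>

/* Busy-wait delay for a small MCU with constant cycles per instruction.
   F_CPU_HZ and CYCLES_PER_ITER are illustrative assumptions, not values
   for any particular chip. */
#define F_CPU_HZ        16000000u   /* assumed 16 MHz core clock */
#define CYCLES_PER_ITER 4u          /* assumed cost of one loop pass */

static uint32_t delay_iters(uint32_t us) {
    /* iterations = us * (cycles per us) / (cycles per iteration) */
    return (uint32_t)((uint64_t)us * F_CPU_HZ
                      / (1000000u * CYCLES_PER_ITER));
}

static void delay_us(uint32_t us) {
    /* volatile keeps the compiler from deleting the "useless" loop */
    volatile uint32_t n = delay_iters(us);
    while (n--) { /* each pass costs a fixed cycle count */ }
}
```

This is also exactly why a GC (or any unpredictable runtime interruption) breaks such code: the loop's timing contract assumes nothing else runs.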
yvdriess | 5 days ago:

There are tricks to improve the performance of pointer chasing on modern uarchs (cf. Go's Green Tea GC). You want to batch the address calculation/loading, the deref/load, and subsequent dependent ops like marking. Reorder buffers and load-store buffers are pretty big these days, so anything that breaks the addr->load->do dependency chain is a huge win, especially in that traversal loop.
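The batching idea can be sketched as a two-phase mark loop: gather a batch of addresses and issue prefetches first, then do the dependent work once the loads are already in flight. This is only an illustration of the technique, not Green Tea's actual structure; the caller is assumed to size the worklist for the worst case.

```c
#include <stddef.h>

/* Batched GC marking sketch: instead of chasing one pointer at a time
   (addr -> load -> mark, a serial dependency chain), gather a batch of
   addresses, prefetch them, then do the dependent marking. */
typedef struct obj {
    int marked;
    struct obj *child[2];
} obj;

#define BATCH 32

static void mark_batched(obj **worklist, size_t n) {
    while (n > 0) {
        obj *batch[BATCH];
        size_t b = 0;
        /* Phase 1: pop a batch and prefetch, so the loads overlap
           instead of serializing behind each other. */
        while (b < BATCH && n > 0) {
            obj *o = worklist[--n];
            if (o && !o->marked) {
                batch[b++] = o;
#ifdef __GNUC__
                __builtin_prefetch(o);
#endif
            }
        }
        /* Phase 2: the loads are (hopefully) in flight; now do the
           dependent ops and push children for the next round. */
        for (size_t i = 0; i < b; i++) {
            obj *o = batch[i];
            if (o->marked) continue;   /* duplicate within this batch */
            o->marked = 1;
            worklist[n++] = o->child[0];
            worklist[n++] = o->child[1];
        }
    }
}
```

The win is that up to BATCH independent cache misses can be outstanding at once, rather than one miss fully resolving before the next address is even computed.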
kragen | 5 days ago:

I agree, and I've written an allocator in C that works that way. The fast path is about 5 clock cycles on common superscalar processors, which is about 7–10× faster than malloc: http://canonical.org/~kragen/sw/dev3/kmregion.h

This is bottlenecked on memory access that is challenging to avoid in C. You could speed it up by at least 2× with some compiler support, and maybe even without it, but I haven't figured out how. Do you have any ideas?

Typically, though, when you are trying to do WCET analysis, as you know, you try to avoid any dynamic allocation in the time-sensitive part of the program. After all, if completing a computation after a deadline would cause a motor to catch fire or something, you definitely don't want to abort the computation entirely with an out-of-memory exception! Some garbage collectors can satisfy this requirement just by not interfering with code that doesn't allocate, but typically not concurrent ones.
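To see why such a fast path can be a handful of cycles, here is a generic bump-pointer region allocator in the same spirit (a sketch, not the code of kragen's kmregion.h): the hot path is one add, one compare, and one store.

```c
#include <stddef.h>

/* Bump-pointer region allocator sketch. The fast path is a pointer
   add and a bounds check, which is why region allocation can be
   several times faster than a general-purpose malloc. Assumes the
   chunk base is suitably aligned; the refill slow path is elided. */
typedef struct region {
    char *next;    /* bump pointer */
    char *limit;   /* end of current chunk */
} region;

static void *region_alloc(region *r, size_t size) {
    size = (size + 15) & ~(size_t)15;      /* round up to 16 bytes */
    char *p = r->next;
    if ((size_t)(r->limit - p) < size)
        return NULL;                       /* slow path elided here */
    r->next = p + size;
    return p;
}
```

Freeing is per-region (reset or discard the whole chunk), not per-object, which is also what makes the per-allocation cost so small.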