| ▲ | The C++ standard for the F-35 Fighter Jet [video](youtube.com) |
| 173 points by AareyBaba 7 hours ago | 175 comments |
| PDF: https://www.stroustrup.com/JSF-AV-rules.pdf |
|
| ▲ | bri3d 3 hours ago | parent | next [-] |
https://web.archive.org/web/20111219004314/http://journal.th... (referenced, at least tangentially, in the video) is a piece from the engineering lead which does a great job discussing Why C++. The short summary is "they couldn't find enough people to write Ada, and even if they could, they also couldn't find enough Ada middleware and toolchain." I actually think Ada would be an easier sell today than it was back then. It seems to me that the software field overall has become more open to a wider variety of languages and concepts, and knowing Ada wouldn't be as widely perceived as career pigeonholing today. Plus, Ada is having a bit of a resurgence with stuff like NVIDIA picking SPARK. |
|
| ▲ | anonymousiam 5 hours ago | parent | prev | next [-] |
| The same is true for the software that runs many satellites. Use of the STL is prohibited. The main issue is mission assurance. Using the stack or the heap means your variables aren't always at the same memory address. This can be bad if a particular memory cell has failed. If every variable has a fixed address, and one of those addresses goes bad, a patch can be loaded to move that address and the mission can continue. |
| |
| ▲ | Thaxll 3 hours ago | parent [-] | | Can't this be done at runtime? Like, could the underlying calls blacklist hardware addresses on read/write faults? | | |
| ▲ | amluto 3 hours ago | parent [-] | | If you have memory to spare and are using hardware with an MMU, you can remap your logical address to a different page. Linux can do this, but only for user memory. | | |
| ▲ | anonymousiam 3 hours ago | parent [-] | | This assumes that the operating system can run. If the memory corruption impacts the OS, then it may be impossible to recover. As the systems (and software) have become more complex, keeping these Mission Assurance best practices becomes more important, but the modern generation of developers sometimes loses sight of this. A good example of what I'm talking about is a program that I was peripherally involved with about 15 years ago. The lead wanted to abstract the mundane details from the users (on the ground), so they would just "register intent" with the spacecraft, and it would figure out how to do what was wanted. The lead also wanted to eliminate features such as "memory dump", which is critical to the anomaly resolution process. If I had been on that team, I would have raised hell, but I wasn't, and at the time, I needed that team lead as an ally. | | |
| ▲ | d-lisp 2 hours ago | parent [-] | | Wow, but how did they deal with anomalies? I mean, even when I have the codebase readily accessible and testable in front of my eyes, I never trust the tests to be enough. I often spot forgotten edge cases and bugs of various sorts in C/embedded projects BECAUSE I run the program, can debug and spot memory issues and a whole lot of other things, for which you NEED to gather as much information as you can in order to find solutions. |
|
|
|
|
|
| ▲ | don-code 2 hours ago | parent | prev | next [-] |
| > All if, else if constructs will contain either a final else clause or a comment indicating why a final else clause is not necessary. I actually do this as well, but in addition I log out a message like, "value was neither found nor not found. This should never happen." This is incredibly useful for debugging. When code is running at scale, nonzero probability events happen all the time, and being able to immediately understand what happened - even if I don't understand why - has been very valuable to me. |
| |
| ▲ | kace91 a minute ago | parent [-] | | I like rust matching for this reason: You need to cover all branches. In fact, not using a default (the else clause equivalent) is ideal if you can explicitly cover all cases, because then if the possibilities expand (say a new value in an enum) you’ll be annoyed by the compiler to cover the new case, which might otherwise slip by. |
|
|
| ▲ | djfobbz an hour ago | parent | prev | next [-] |
| I wonder if Lockheed Martin has an Electron based future fighter in the works? |
|
| ▲ | jandrewrogers 6 hours ago | parent | prev | next [-] |
| For those interested, the F-35 (née Joint Strike Fighter) C++ coding standards can be found here, all 142 pages of it: https://www.stroustrup.com/JSF-AV-rules.pdf |
| |
| ▲ | Animats an hour ago | parent | next [-] | | As is common in hard real time code, there is no dynamic allocation during operation: allocation/deallocation from/to the free store (heap)
shall not occur after initialization.
This works fine when the problem is roughly constant, as it was in, say, 2005. But what do things look like in modern AI-guided drones? | | |
| ▲ | jandrewrogers an hour ago | parent | next [-] | | Why would the modern environment materially change this? The initialized resource allocation reflects the limitations of the hardware. That budget is what it is. I can't think of anything about "modern AI-guided drones" that would change the fundamental mechanics. Some systems support very elastic and dynamic workloads under fixed allocation constraints. | |
| ▲ | csmantle an hour ago | parent | prev [-] | | "AI" comes in various flavors. It could be an expert system, a decision forest, a CNN, a Transformer, etc. In most inference scenarios the model is fixed, the input/output shapes are pre-defined, and actions are prescribed. So it's not that dynamic after all. |
| |
| ▲ | tgv 6 hours ago | parent | prev | next [-] | | From quickly glancing over a couple of pages, that looks sensible. Which makes me curious to see some exceptions to the "shall" rules. With a project of this size, that should give some idea about the usefulness of such standards. | |
| ▲ | genewitch 4 hours ago | parent | prev | next [-] | | In general, are these good recommendations for building software for embedded or lower-spec devices? I don't know how to do preprocessor macros anyhow, for instance - so as I am reading this I am like "yeah, I agree..." until the no stdio.h! | | | |
| ▲ | shepherdjerred 5 hours ago | parent | prev | next [-] | | I wonder if they use static analysis to enforce these rules, or if developers are expected to just know all of this | | | |
| ▲ | extraduder_ire 5 hours ago | parent | prev | next [-] | | The first time I came across this document, someone was using it as an example how the c++ you write for an Arduino Uno is still c++ despite missing so many features. | |
| ▲ | raffael_de 5 hours ago | parent | prev | next [-] | | Interesting font choice for the code snippets. I wonder if that was chosen on a whim or if there is a reason for not going with monospace. | | |
| ▲ | throwaway2037 2 hours ago | parent [-] | | The font used for code samples looks nearly the same as "The C++ Programming Language" (3rd edition / "Wave") by Bjarne Stroustrup. Looking back, yeah, I guess it was weird that he used italic variable-width text for code samples but used tab stops to align the comments! |
| |
| ▲ | msla 5 hours ago | parent | prev [-] | | Interesting they're using C++ as opposed to Ada. | | |
| ▲ | WD-42 5 hours ago | parent [-] | | The video goes into the history of why the military eventually accepted c++ instead of enforcing Ada. |
|
|
|
| ▲ | time4tea 5 hours ago | parent | prev | next [-] |
a = a; // misra Actual code I have seen with my own eyes. (Not in F-35 code.) It's a way to avoid removing an unused parameter from a method. Unused parameters are disallowed, but this is fine? I am sceptical that these coding standards make for good code! |
| |
| ▲ | tialaramex 3 hours ago | parent | next [-] | | Studies have looked at MISRA; I'm not aware of any for the JSF guidelines. For MISRA there's a mix: some of the rules seem to be effective (fewer defects in compliant software), some are the opposite (code which obeys these rules is more likely to have defects), and some were irrelevant. Notably this document is from 2005. So that's after C++ was standardized but before their second bite of that particular cherry, and twenty years before its author, Bjarne Stroustrup, suddenly decided, after years of insisting that C++ dialects are a terrible idea and would never be endorsed by the language committee, that in fact dialects (now named "profiles") are the magic ingredient to fix the festering problems with the language. While Laurie's video is fun, I too am sceptical about the value of style guides, which is what these are. "TABS shall be avoided" or "Letters in function names shall be lowercase" isn't because somebody's aeroplane fell out of the sky - it's due to using a style Bjarne doesn't like. | | |
| ▲ | platinumrad an hour ago | parent | next [-] | | The "good" rules are like "don't write off the end of an array", and the bad ones are like "no early returns" or "variable names must not be longer than 6 characters". 95% of the "good" rules are basically just longer ways of saying "don't invoke undefined behavior". | |
| ▲ | writtiewrat 2 hours ago | parent | prev [-] | | If one limited C++ profiles to have no semantic effect and to not change code generation, only subsetting, they would be way simpler than dialects, right? More limited, yes, but also way simpler. It would enable preventing a lot of footguns, at the very least, like vector<bool>. Wait, you're one of those Rust evangelists, right? Have you been paid like fasterthanlime? | | |
| ▲ | tialaramex 37 minutes ago | parent [-] | | "No semantic effect" is one of those recurring C++ tropes, like the "subset of a superset" or "trading performance for safety", that I think even its defenders ought to call bullshit on. The insistence on "no semantic effect" for attributes has poisoned them badly, and the choice to just ignore the semantic implications for Bjarne's C++ 20 Concepts makes this a poor substitute for the concepts feature as once imagined at the start of the century. I doubt I can satisfy you as to whether I'm somehow a paid evangelist. I remember I got a free meal once for contributing to the OSM project, and I bet if I dig further I can find some other occasion that, if you spin it hard enough, can be justified as "payment" for my opinion that Rust is a good language. There was a nice lady giving out free cookies at the anti-racist counter-protests the other week; maybe she once met a guy who worked for an outfit which was contracted to print a Rust book? I sense you may own a corkboard and a lot of red string. |
|
| |
| ▲ | unwind 5 hours ago | parent | prev | next [-] | | For C, the proper/expected/standard way to reference a variable without accessing it is a cast to void: (void) a;
I'm sure there are commonly-implemented compiler extensions, but this is the normal/native way and should always work. | | |
| ▲ | amluto 3 hours ago | parent [-] | | Not if you use GCC. https://godbolt.org/z/zYdc9ej88 clang gets this right. | | |
| ▲ | comex 3 hours ago | parent | next [-] | | It does work in GCC to suppress unused variable warnings. Just not for function calls I guess. | |
| ▲ | Am4TIfIsER0ppos 3 hours ago | parent | prev [-] | | You've defined that function with an attribute saying not to ignore the returned value. Is it right to explicitly silence an explicit warning? | | |
| ▲ | MathMonkeyMan 2 hours ago | parent [-] | | Sometimes. For example, you might be setting a non-crucial option on a socket, and if it fails you don't even care to log the fact (maybe the logging would be too expensive), so you just ignore the return value of whatever library is wrapping setsockopt. |
|
|
| |
| ▲ | platinumrad an hour ago | parent | prev | next [-] | | I've (unfortunately) written plenty of "safety critical" code professionally, and coding standards definitely have a negative effect overall. The thing keeping planes from falling out of the sky is careful design, which in practice means fail-safes, watchdogs, redundancy, and most importantly, requirements that aren't overly ambitious. While maybe 10% of rules are sensible, these sensible rules also tend to be blindingly obvious, or at least table stakes on embedded systems (e.g. don't try to allocate on a system which probably doesn't have a full libc in the first place). | |
| ▲ | y1n0 2 hours ago | parent | prev | next [-] | | The standards don't remove the need for code review. In fact they provide a standard to be used in code review. Anything you can automate is nice, but an exception like "if there's no reasonable way to do X, then Y is acceptable" isn't really something you can codify into static analysis. | |
| ▲ | ivanjermakov 5 hours ago | parent | prev | next [-] | | Zig makes it explicit with _ = a;
And you would encounter it quite often because unused variable is a compilation error: https://github.com/ziglang/zig/issues/335 | | |
| ▲ | bluecalm 2 hours ago | parent | next [-] | | Doesn't it make it more likely that unused variables stay in the codebase? You want to experiment, the code doesn't compile, you add this (probably with an automatic tool), the code now compiles. You're happy with your experiment. As the compiler doesn't complain, you commit, and the junk stays in the code. Isn't it just bad design that both makes experimenting harder and lets unused variables stay in the code in the final version? | | |
| ▲ | ivanjermakov 2 hours ago | parent [-] | | It is indeed quite a controversial aspect of Zig's design. I would prefer it to be a warning. The argument "warnings are always ignored" just doesn't hold, because anything can be ignored if there is a way to suppress it. | | |
| ▲ | dnautics 40 minutes ago | parent [-] | | There was a recent interview where Andrew suggested, if I understood correctly, that the future path of Zig is to make all compilations (successful or not) produce an executable. If there's something egregious like a syntax or type error, the produced artifact just prints the error and returns nonzero. For an unused parameter, the compiler produces the artifact you expect, but returns nonzero (so it gets caught by CI, for example). | | |
| ▲ | sumalamana 34 minutes ago | parent [-] | | Why would the compiler do that, instead of just printing the error at compile-time and exiting with a non-zero value? What is the benefit? |
|
|
| |
| ▲ | ErroneousBosh 3 hours ago | parent | prev [-] | | Golang is exactly the same. It's extremely annoying until it's suddenly very useful and has prevented you doing something unintended. | | |
| ▲ | SoKamil 43 minutes ago | parent | next [-] | | And what is the unintended thing that happens when you have an unused variable? | |
| ▲ | bluecalm 2 hours ago | parent | prev [-] | | I fail to see how a warning doesn't achieve the same thing while allowing you to iterate faster. Unless you're working with barbarians who commit code that compiles with warnings to your repo, and there is zero discipline to stop them. |
|
| |
| ▲ | jjmarr 2 hours ago | parent | prev | next [-] | | An unused parameter should be commented out. | | |
| ▲ | MobiusHorizons an hour ago | parent [-] | | Unless it’s there to conform to an interface | | |
| ▲ | jjmarr an hour ago | parent [-] | | Especially if it's there to conform to an interface. You can comment out the variable name and leave the type. |
|
| |
| ▲ | binary132 4 hours ago | parent | prev | next [-] | | It's very weird how none of the sibling comments understood what the parent was saying is wrong with this. | | | |
| ▲ | msla 5 hours ago | parent | prev | next [-] | | Especially since there is a widely recognized way to ignore a parameter: (void) a;
Every C programmer beyond weaning knows that. | | |
| ▲ | time4tea 5 hours ago | parent | next [-] | | The point really was that the unused method parameter should in almost all cases be removed, not that some trick should be used to make it seem used, and this is the wrong trick! | | |
| ▲ | addaon 4 hours ago | parent [-] | | Sometimes. But sometimes you have a set of functions that are called through function pointers that need the same signature, and one or more of them ignore some of the arguments. These days I’d spell that __attribute__((unused)); but it’s a perfectly reasonable case. |
| |
| ▲ | stefan_ 5 hours ago | parent | prev [-] | | I'm sure that's disallowed for the C-style cast. | | |
| ▲ | cpgxiii 5 hours ago | parent | next [-] | | FWIW, unused-cast-to-void is a case that GCC and Clang ignore when using -Wold-style-cast, which is what most projects prohibiting C-style casts are going to be using (or whatever equivalent their compiler provides). | |
| ▲ | daringrain32781 5 hours ago | parent | prev [-] | | C++17 has the [[maybe_unused]] attribute. |
|
| |
| ▲ | jojobas an hour ago | parent | prev [-] | | Isn't it inevitable in some cases of inheritance? A superclass does something basic and doesn't need all the parameters; child classes require additional ones. |
|
|
| ▲ | geophph 5 hours ago | parent | prev | next [-] |
| LaurieWired is an awesome follow on YouTube! |
| |
|
| ▲ | barfoure 7 hours ago | parent | prev | next [-] |
| Do avionics in general subscribe to MISRA C/C++ or do they go even further with an additional (or different) approach? |
| |
| ▲ | fallingmeat 6 hours ago | parent | next [-] | | The coding standard is part of the story. Mainly it comes down to the level of rigor and documenting process and outcomes for auditability. See DO-178C. | |
| ▲ | stackghost 6 hours ago | parent | prev | next [-] | | Depends on the company in my experience. I've seen some suppliers that basically just wire up the diagram in Matlab/simulink and hit Autocode. No humans actually touch the C that comes out. Honestly I think that's probably the correct way to write high reliability code. | | |
| ▲ | garyfirestorm 6 hours ago | parent [-] | | You're joking, right? That autogenerated code is generally garbage and spaghetti code. It was probably the reason for Toyota's unintended acceleration glitch. | | |
| ▲ | cpgxiii 5 hours ago | parent | next [-] | | In the case of the Toyota/Denso mess, the code in question had both auto-generated and hand-written elements, including places where the autogenerated code had been modified by hand later. That is the worst place to be, where you no longer have whatever structure and/or guarantees the code gen might provide, but you also don't have the structure and choices that a good SWE team would have to develop that level of complexity by hand. | |
| ▲ | creato an hour ago | parent | prev | next [-] | | This isn't necessarily a problem if you don't consider the output to be "source" code. Assembly is also garbage spaghetti code but that doesn't stop you from using a compiler does it? | |
| ▲ | vodou 6 hours ago | parent | prev | next [-] | | Modern autogenerated C code from Simulink is rather effective. It is neither garbage nor spaghetti, it is just... peculiar. | | |
| ▲ | addaon 4 hours ago | parent [-] | | It's also much, much more resource-intensive (both compute and memory) than what a human would write for the same requirements. | | |
| ▲ | stackghost 3 hours ago | parent [-] | | For control systems like avionics it either passes the suite of tests for certification, or it doesn't. Whether a human could write code that uses less memory is simply not important. In the event the autocode isn't performant enough to run on the box you just spec a faster chip or more memory. | | |
| ▲ | addaon 2 hours ago | parent [-] | | I’m sorry, but I disagree. Building these real-time safety-critical systems is what I do for a living. Once the system is designed and hardware is selected, I agree that if the required tasks fit in the hardware, it’s good to go — there’s no bonus points for leaving memory empty. But the sizing of the system, and even the decomposition of the system to multiple ECUs and the level of integration, depends on how efficient the code is. And there are step functions here — even a decade ago it wasn’t possible to get safety processors with sufficient performance for eVTOL control loops (there’s no “just spec a faster chip”), so the system design needed to deal with lower-ASIL capable hardware and achieve reliability, at the cost of system complexity, at a higher level. Today doing that in a safety processors is possible for hand-written code, but still marginal for autogen code, meaning that if you want to allow for the bloat of code gen you’ll pay for it at the system level. | | |
| ▲ | stackghost an hour ago | parent [-] | | >And there are step functions here — even a decade ago it wasn’t possible to get safety processors with sufficient performance for eVTOL control loops (there’s no “just spec a faster chip”) The idea that processors from the last decade were slower than those available today isn't a novel or interesting revelation. All that means is that 10 years ago you had to rely on humans to write the code that today can be done more safely with auto generation. 50+ years of off by ones and use after frees should have disabused us of the hubristic notion that humans can write safe code. We demonstrably can't. In any other problem domain, if our bodies can't do something we use a tool. This is why we invented axes, screwdrivers, and forklifts. But for some reason in software there are people who, despite all evidence to the contrary, cling to the absurd notion that people can write safe code. | | |
| ▲ | addaon 30 minutes ago | parent [-] | | > All that means is that 10 years ago you had to rely on humans to write the code that today can be done more safely with auto generation. No. It means more than that. There's a cross-product here. On one axis, you have "resources needed", higher for code gen. On another axis, you have "available hardware safety features." If the higher resources needed for code gen pushes you to fewer hardware safety features available at that performance bucket, then you're stuck with a more complex safety concept, pushing the overall system complexity up. The choice isn't "code gen, with corresponding hopefully better tool safety, and more hardware cost" vs. "hand written code, with human-written bugs that need to be mitigated by test processes, and less hardware cost." It's "code gen, better tool safety, more system complexity, much much larger test matrix for fault injection" vs "human-written code, human-written bugs, but an overall much simpler system." And while it is possible to discuss systems that are so simple that safety processors can be used either way, or systems so complex that non-safety processors must be used either way... in my experience, there are real, interesting, and relevant systems over the past decade that are right on the edge. It's also worth saying that for high-criticality avionics built to DAL B or DAL A via DO-178, the incidence of bugs found in the wild is very, very low. That's accomplished by spending outrageous time (money) on testing, but it's achievable -- defects in real-world avionics systems overwhelming are defects in the requirement specifications, not in the implementation, hand-written or not. |
|
|
|
|
| |
| ▲ | fluorinerocket 3 hours ago | parent | prev | next [-] | | Rockets have flown to orbit on auto-coded Simulink; I've seen it myself. | |
| ▲ | AnimalMuppet 5 hours ago | parent | prev | next [-] | | > It was probably the reason for Toyotas unintended acceleration glitch. Do you have any evidence for "probably"? | | |
| ▲ | garyfirestorm 4 hours ago | parent [-] | | I know for a fact Simulink generates spaghetti, and spaghetti code was partially blamed for Toyota's problems. Hence the inference. See https://www.safetyresearch.net/toyota-unintended-acceleratio... | | |
| ▲ | CamouflagedKiwi 3 hours ago | parent | next [-] | | That's a nonsensical connection. "Spaghetti code" is a very general term, that's nowhere near specific enough for the two to be related. "I know for a fact that Italian cooks generate spaghetti, and the deceased's last meal contained spaghetti, therefore an Italian chef must have poisoned him" | |
| ▲ | stackghost 3 hours ago | parent | prev [-] | | SRS is a for-profit corporation whose income comes from lawsuits, so their reports/investigations are tainted by their financial incentive to overstate the significance of their findings. |
|
| |
| ▲ | stackghost 5 hours ago | parent | prev [-] | | No, I'm not joking at all. The Autocode feature generates code that has high fidelity to the model in Simulink, and is immensely more reliable than a human. It is impossible for a Simulink model to accidentally type `i > 0` when it meant `i >= 0`, for example. Any human who tells you they have not made this mistake is a liar. Unless there was a second uncommanded acceleration problem with Toyotas, my understanding is that it was caused by poor mechanical design of the accelerator pedal, which let it get stuck on floor mats. In any case, when we're talking about safety-critical control systems like avionics, it's better to abstract away the actual act of typing code into an editor, because it eliminates a potential source of errors. You verify the model at a higher level, and the code is produced in a deterministic manner. | | |
| ▲ | mmooss 26 minutes ago | parent | next [-] | | > It is impossible for a simulink model to accidentally type `i > 0` when they meant `i >= 0` That's a classic bias: Comparing A and B, show that B doesn't have some A flaws. If they are different systems, of course that's true. But it's also true that A doesn't have some B flaws. That is, what flaws does Autocode have that humans don't? The fantasy that machines are infallible - another (implicit) argument in this thread - is just ignorance for any professional in technology. | |
| ▲ | fl7305 2 hours ago | parent | prev [-] | | > It is impossible for a simulink model to accidentally type `i > 0` when they meant `i >= 0` The Simulink Coder tool is a piece of software. It is designed and implemented by humans. It will have bugs. Autogenerated code is different from human-written code. It hits soft spots in the C/C++ compilers. For example, autogenerated code can have really huge switch statements. You know, larger than the 15-bit branch offset the compiler implementer thought was big enough to handle any switch statement any sane human would ever write? So now the switch jumps backwards instead when trying to get to the correct case statement. I'm not saying that Simulink Coder + a C/C++ compiler is bad. It might be better than the "manual coding" options available. But it's not 100% bug-free either. | | |
| ▲ | stackghost an hour ago | parent [-] | | >But it's not 100% bug free either. Nobody said it was bug free, and this is a straw man argument of your own construction. Using Autocode completely eliminates certain types of errors that human C programmers have continued to make for more than half a century. |
|
|
|
| |
| ▲ | 4gotunameagain 6 hours ago | parent | prev [-] | | Depends on the region. MISRA is widely adopted, and then there are the US MIL standards, ECSS for European aerospace work, and DO-178C for aviation. |
|
|
| ▲ | kaluga 6 hours ago | parent | prev | next [-] |
| The “90% ban” isn’t about hating C++ — it’s about guaranteeing determinism. In avionics, anything that can hide allocations, add unpredictable control flow, or complicate WCET analysis gets removed. Once you operate under those constraints, every language shrinks to a tiny, fully-auditable subset anyway. |
|
| ▲ | factorialboy 3 hours ago | parent | prev | next [-] |
| Isn't the F35 program considered a failure? Or am I confusing it with some other program? |
| |
| ▲ | ironhaven 2 hours ago | parent | next [-] | | There have been countless articles claiming the demise and failure of the F35, but that is just one side of the story. There has been an argument, started 50 years ago in the 1970's, about how to build the best next-generation fighter jets. One of these camps was called the "Fighter Mafia"[0], figureheaded by John Boyd. The main argument they bring is that the only thing that matters for a jet fighter is how well it performs in one-on-one short-ranged dogfighting. They claim that stealth, beyond-visual-range missiles, electronic warfare, and sensor/datalink systems are useless junk that only hinders the dogfighting capability and bloats the cost of new jets. The evidence for this claim was found in testing for the F35, where it was dogfighting an older F16. The results of the test were that the F35 won almost every scenario except one, where a lightweight-fitted F16 was teleported directly behind an F35 weighed down by heavy missiles and won the fight. This one loss has spawned hundreds of articles about how the F35 is junk that can't dogfight. In the end, the F35 has a lot of fancy features that are not optional for modern operations. The jet has now found enough buyers across the West for economies of scale to kick in, and the cost is about ~80 million each, which is cheaper than retrofitting stealth and sensors onto other airframes, like what you get with the F15-EX. [0] https://en.wikipedia.org/wiki/Fighter_Mafia | |
| ▲ | joha4270 2 hours ago | parent | prev | next [-] | | A lot of people have made careers out of telling you that it's a failure, but while not everything about the F-35 is an unquestionable success, it has produced a "cheap" fighter jet that is more capable than all but a handful of other planes. Definitely not a failure. | |
| ▲ | jasonwatkinspdx an hour ago | parent | prev | next [-] | | There's a ton of absolutely garbage reporting on it. Seriously, a ton of the articles are just mainstream media uncritically reposting claims by a couple of cranks in Australia who LARP as a think tank. Anyhow, a fair assessment is that the program has gone massively over timeline and budget, so in that sense it is a failure; however, the resulting aircraft is very clearly the best in its class, both in absolute capability and in value. Going forward, there's broad awareness in the government that the program-management mistakes of the F-35 program cannot be repeated. There's a general consensus that three-decade-long development projects just won't be relevant in a world where drone concepts and similar are evolving rapidly on a year-by-year basis. There's also awareness that the government needs to act more as the integrator that owns the project, to avoid lock-in issues. | |
| ▲ | tsunagatta 2 hours ago | parent | prev | next [-] | | The F-35 was in development hell for a while for sure, but it’s far from a failure. See the recent deals where it’s been used as a political bargaining chip; it still ended up being a very desirable and capable platform from my understanding. | | |
| ▲ | fl7305 2 hours ago | parent [-] | | > See the recent deals where it’s been used as a political bargaining chip; it still ended up being a very desirable and capable platform from my understanding. From a european perspective, I can tell you that the mood has shifted 180 degrees from "buy American fighters to solidify our ties with the US" to "can't rely on the US for anything which we'll need when the war comes". | | |
| ▲ | hu3 an hour ago | parent [-] | | That has nothing to do with the F-35. Europe is wise and capable enough to develop its own platform. |
|
| |
| ▲ | throwaway2037 2 hours ago | parent | prev | next [-] | | My slightly trollish reply: If you have infinite money, it is hard to fail. Ok, joking aside: If it is considered a failure, what 100B+ military programme has not been considered a failure? In my totally unqualified opinion, the best cost performance fighter jet in the world is the Saab JAS 39 Gripen. It is very cheap to buy and operate, and has pretty good capabilities. It's a good option for militaries that don't have the infinite money glitch. | |
| ▲ | wat10000 an hour ago | parent | prev | next [-] | | You're confusing clickbait articles with reality. There have been over 1,200 F-35s built so far, with new ones being built at a rate of about 150 per year. For comparison, that's nearly as many F-35s built per year as F-22s were built ever, and 1,200 is a large amount for a modern jet fighter. The extremely successful F-15 has seen about that many built since it first entered production over 50 years ago. That doesn't mean it must be good, but it's a strong indicator. Especially since the US isn't the only customer. Many other countries want it too. Some are shying away from it now, but only for political reasons because the US is no longer seen as a reliable supplier. In terms of actual capabilities, it's the best fighter jet out there save for the F-22, which was far more expensive and is no longer being made. It's relatively cheap, comparable in cost to alternatives like the Gripen or Rafale while being much more capable. There have been a lot of articles out there about how terrible it is. These fall into a few different categories: * Reasonable critiques of its high development costs, overruns, and delays, baselessly extrapolated to "it's bad." * Teething problems extrapolated to "it's terrible" as if these things never get fixed. * Analyses of outcomes from exercises that misunderstand the purpose and design of exercises. You might see that, say, an F-35 lost against an F-16 in some mock fights. But they're not going to set up a lot of exercises where the F-35 and F-16 have a realistic engagement. The result of such an exercise would be that the F-16 gets shot out of the sky without ever knowing the F-35 was there. This is uninformative and a waste of time and money. So such a matchup will be done with restrictions that actually make it useful. This might end up in a dogfight, where the F-16 is legitimately superior. This then gets reported as "F-35 worse than F-16," ignoring the fact that a real situation would have the F-35 victorious long before a dogfight could occur. * Completely legitimate arguments that fighter jets are last century's weapons, that drones and missiles are the future, and the F-35 is like the most advanced battleship in 1941: useful, powerful, but rapidly becoming obsolete. This may be true, but if it is, it only means the F-35 wasn't the right thing to focus on, not that it's a failure. The aircraft carrier was the decisive weapon of the Pacific war but that didn't make the Iowa class battleships a failure. | |
| ▲ | jandrewrogers 31 minutes ago | parent [-] | | In many regards, the F-35 was the first aircraft explicitly engineered for the requirements of drone-centric warfare. Its limitations are that this capability was grafted onto an older (by US standards) 5th generation tech stack that wasn't designed for this role from first principles. I think this is what ultimately limited production of the F-22, which is not upgradeable even to the standard of the F-35 for drone-centric environments. The new 6th generation platforms being rolled out (B-21, F-47, et al) are all pure first-principles drone-warfare native platforms. |
| |
| ▲ | TiredOfLife an hour ago | parent | prev | next [-] | | It freely flies in territory protected by state-of-the-art air defense made by the country that spreads those claims. | |
| ▲ | TimorousBestie 3 hours ago | parent | prev [-] | | The research and development effort went way over budget, the first couple rounds of production were fraught with difficulty, and the platform itself has design issues from being a “one-size-fits-all” compromise (despite also having variants for each service). I haven’t heard anything particularly bad about the software effort, other than the difficulties they had making the VR/AR helmet work (the component never made it to production afaik). | | |
|
|
| ▲ | greenavocado 34 minutes ago | parent | prev | next [-] |
| The C++ standard for the F-35 fighter jet prohibits ninety percent of C++ features because what they are actually after is C with destructors. I was just thinking about how to write C in a modern way today and discovered GLib has an enormous amount of useful C++-style conveniences in plain C. Reading through the JSF++ coding standards I see they ban exceptions, ban the standard template library, ban multiple inheritance, ban dynamic casts, and essentially strip C++ down to bare metal with one crucial feature remaining: automatic destructors through RAII. When a variable goes out of scope, cleanup happens. That is the entire value proposition they are extracting from C++, and it made me wonder if C could achieve the same thing without dragging along the C++ compiler and all its complexity. GLib is a utility library that extends C with better string handling, data structures, and portable system abstractions, but buried within it is a remarkably elegant solution to automatic resource management that leverages a GCC and Clang extension called the cleanup attribute. This attribute allows you to tag a variable with a function that gets called automatically when that variable goes out of scope, which is essentially what C++ destructors do but without the overhead of classes and virtual tables. The heart of GLib's memory management system starts with two simple macros: g_autofree and g_autoptr. The g_autofree macro is deceptively simple. You declare a pointer with this attribute and when the pointer goes out of scope, g_free is automatically called on it. No manual memory management, no remembering to free at every return path, no cleanup sections with goto statements. The pointer is freed whether you return normally, return early due to an error, or even if somehow the code takes an unexpected path. This alone eliminates the majority of memory leaks in typical C programs because most memory management is just malloc and free, or in GLib's case, g_malloc and g_free. 
The g_autoptr macro is more sophisticated. While g_autofree works for simple pointers to memory, g_autoptr handles complex types that need custom cleanup functions. A file handle needs fclose, a database connection needs a close function, a custom structure might need multiple cleanup steps. The g_autoptr macro takes a type name and automatically calls the appropriate cleanup function registered for that type. This is where GLib shows its maturity because the library has already registered cleanup functions for all its own types. GError structures are freed correctly, GFile objects are unreferenced, GInputStream objects are closed and released. Everything just works. Behind these macros is something called G_DEFINE_AUTOPTR_CLEANUP_FUNC, which is how you teach GLib about your own types. You write a cleanup function that knows how to properly destroy your structure, then you invoke this macro with your type name and cleanup function, and from that moment forward you can use g_autoptr with your type. The macro generates the necessary glue code that connects the cleanup attribute to your function, handling all the pointer indirection correctly. This is critical because the cleanup attribute passes a pointer to your variable, not the variable itself, which means for a pointer variable it passes a double pointer, and getting this wrong leads to crashes or memory corruption. The third member of this trio is g_auto, which handles stack-allocated types. Some GLib types like GString are meant to live on the stack but still need cleanup. A GString internally allocates memory for its buffer even though the GString structure itself is on the stack. The g_auto macro ensures that when the structure goes out of scope, its cleanup function runs to free the internal allocations. Heap pointers, complex objects, and stack structures all get automatic cleanup. What's interesting about this system is how it composes. 
You can have a function that opens a file, allocates several buffers, creates error objects, and builds complex data structures, and you can simply declare each resource with the appropriate auto macro. If any operation fails and you return early, every resource declared up to that point is automatically cleaned up in reverse order of declaration. This is identical to C++ destructors running in reverse order of construction, but you are writing pure C code that works with any GCC or Clang compiler from the past fifteen years. The foundation beneath all this is GLib's memory allocation functions. The library provides g_malloc, g_new, g_realloc and friends which are drop-in replacements for the standard C allocation functions. These functions have better error handling because g_malloc never returns NULL. If allocation fails, the program aborts with a clear error message. This might sound extreme but for most applications it is actually the right behavior. When malloc returns NULL in traditional C code, most programmers either do not check it, check it incorrectly, or check it but then do not have a reasonable recovery path anyway. GLib acknowledges this reality and makes the contract explicit: if you cannot allocate memory, the program terminates cleanly rather than stumbling forward into undefined behavior. |
| |
| ▲ | greenavocado 33 minutes ago | parent [-] | | For situations where you do want to handle allocation failure, GLib provides g_try_malloc and related functions that can return NULL. The key insight is making the common case automatic and the exceptional case explicit. The g_new macro is particularly nice because it is type-aware. Instead of writing g_malloc(sizeof(Type) * count) and then casting, you write g_new(Type, count), and it handles the sizing and casting automatically while checking for overflow in the multiplication. Reference counting is another critical component of GLib's memory management, particularly for objects. The GObject system, which is GLib's object system for C, uses reference counting to manage object lifetimes. Every object has a reference count starting at one when created. When you want to keep a reference to an object, you call g_object_ref. When you are done with it, you call g_object_unref. When the reference count reaches zero, the object is automatically destroyed. This is the same model used by shared_ptr in C++ or reference counting in Python, but implemented in pure C. This also integrates with the autoptr system. Many GLib types are reference counted, and their cleanup functions simply decrement the reference count. This means you can declare a local variable with g_autoptr, the reference count stays positive while you use it, and when the variable goes out of scope the reference is automatically released. If you were the last holder of that reference, the object is freed. If other parts of the code still hold references, the object stays alive. This solves the resource sharing problem that makes manual memory management so difficult in C. GLib also provides memory pools through GMemChunk and the newer slice allocator, though the slice allocator is being phased out in favor of standard malloc since modern allocators have improved significantly. 
The concept was to reduce allocation overhead and fragmentation for programs that allocate many small objects of the same size. You create a pool for objects of a specific size and then allocate from that pool quickly without going through the general purpose allocator. When you are done with all objects from that pool, you can destroy the entire pool at once. This pattern shows up in many high-performance C programs but GLib provided it as a reusable component. The error handling story in GLib deserves special attention because it demonstrates how automatic cleanup enables better error handling patterns. The GError type is a structure that carries error information including a domain, a code, and a message. Functions that can fail take a GError double pointer as their last parameter. If the function succeeds, it returns true or a valid value and leaves the error NULL. If it fails, it returns false or NULL and allocates a GError with details about what went wrong. The calling code checks the return value and if there was an error, examines the GError for details. The critical part is that GError is automatically freed when declared with g_autoptr. You can write a function that calls ten different operations, each of which might set an error, and you can check each one and return early if something fails, and the error is automatically freed on all code paths. You never leak the error message string, never double-free it, never forget to free it. This is a massive improvement over traditional C error handling where you either ignore errors or write incredibly tedious cleanup code with goto statements jumping to labels at the end of the function. The GNOME developers could have switched to C++ or Rust or any modern language, but instead they invested in making C excellent at what C is good at. They added just enough infrastructure to eliminate the common pitfalls without fundamentally changing the language. 
A C programmer can read GLib code and understand it immediately because it is still just C. The auto macros are syntactic sugar over a compiler attribute, not a new language feature requiring a custom compiler. This philosophy aligns pretty well with what the F-35 programmers want: the performance and predictability of C with the safety of automatic resource management. No hidden allocations, no virtual dispatch overhead, no exception unwinding cost, no template instantiation bloat. Just deterministic cleanup that happens exactly when you expect it to happen because it is tied to lexical scope, which is something you can see by reading the code. I found it sort of surprising that the solution to modern C was not a new language or a massive departure from traditional practices. The cleanup attribute has been in GCC since 2003. Reference counting has been around forever. The innovation was putting these pieces together in a coherent system that feels natural to use and composes well. Sometimes the right tool is not the newest or most fashionable one, but the one that solves your actual problem with the least additional complexity. GLib proves you can have that feature in C, today, with compilers that have been stable for decades, without giving up the simplicity and predictability that makes C valuable in the first place. |
|
|
| ▲ | chairmansteve 2 hours ago | parent | prev | next [-] |
| Ahhh. They use C++..... That explains all the delays on the F-35...., |
| |
| ▲ | riku_iki 2 hours ago | parent | next [-] | | what would be so obviously better choice of language in your opinion? | | |
| ▲ | throwaway2037 2 hours ago | parent [-] | | You raise a good point. No trolling: I wonder what languages they seriously considered? Example: I am sure the analysis included C in the mix. Also, I wonder if they considered compiler extensions. Example: Since C doesn't have destructors, maybe you could add a compiler extension to add the defer keyword to allow people to schedule object destruction. Even when they decided upon C++, I am sure there was a small holy war to decide what features were allowed. When they started the JSF program in the 1990s, C++ compilers were pretty terrible! | |
| ▲ | jasonwatkinspdx an hour ago | parent | next [-] | | Ada and C++ were the only realistic options at the time, and Ada developers are difficult to hire. But honestly, with this sort of programming the language distinctions matter less. As the guide shows, you restrict yourself to a subset of the language where distinctions between languages aren't as meaningful. Basically everything runs out of statically allocated global variables and arrays. You don't have to worry about fragmentation and garbage collection if there's no allocation at all. Basically, you remove every source of variability in execution that you can. So really you could do this in any C-style language that gives you control over the memory layout. | |
| ▲ | riku_iki 2 hours ago | parent | prev [-] | | My recollection is that traditionally they used Ada for avionics, but per some internet claims they had difficulties to hire enough Ada programmers for such large projects, so switched to C++. |
|
| |
| ▲ | smlacy 2 hours ago | parent | prev [-] | | You think a fighter jet should run Ruby on rails instead? |
|
|
| ▲ | thenobsta 6 hours ago | parent | prev | next [-] |
| I wonder how these compare to high frequency trading standards. It seems like they'd have similar speed/reliability/predictability requirements in the critical paths. |
| |
| ▲ | perbu 5 hours ago | parent | next [-] | | JSF++ bans exceptions because you would lose control over the execution of the program. The HFT crowd didn't like them because you'd add 10ns to a function call. At least before we had zero-cost exceptions. These days, I suspect the HFT crowd is back to counting microseconds or milliseconds as trades are being done smarter, not faster. | |
| ▲ | clanky 4 hours ago | parent | prev [-] | | There are at least some HFT players who actually use exceptions to avoid branches on the infrequent-but-speed-critical execution path: https://youtu.be/KHlI5NBbIPY?si=VjFs7xVN0GsectHr |
|
|
| ▲ | dzonga 5 hours ago | parent | prev | next [-] |
| I guess a bigger conversation could be had about what leads to better code in terms of understandability and preventing errors: exceptions (what almost every language does) or error codes (like Golang). Are there folks here who choose to use error codes and forgo exceptions completely? |
| |
| ▲ | jandrewrogers 4 hours ago | parent [-] | | There isn't much of a conversation to be had here. For low-level systems code, exceptions introduce a bunch of issues and ugly edge cases. Error codes are cleaner, faster, and easier to reason about in this context. Pretty much all systems languages use error codes. In C++, which supports both, exceptions are commonly disabled at compile-time for systems code. This is pretty idiomatic; I've never worked on a C++ code base that used exceptions. On the other hand, high-level non-systems C++ code may use exceptions. | | |
|
|
| ▲ | manoDev 5 hours ago | parent | prev | next [-] |
| You mean fighters ARE coded in C++? My god |
| |
| ▲ | fweimer 3 hours ago | parent | next [-] | | I think the late Robert Dewar once quipped that modern jet fighters aren't safety-critical applications because the aircraft disintegrates immediately if the computer system fails. | |
| ▲ | GoblinSlayer 5 hours ago | parent | prev [-] | | "Launching nuclear rockets" just became literal. |
|
|
| ▲ | FpUser 2 hours ago | parent | prev | next [-] |
| Her point about exceptions vs error codes was that someone failed to catch an exception of a particular type and things went south, whereas if we instead "catch" error codes all will be nice and dandy. Well, one might fail to handle error codes just as well. That is of course not to say that exceptions and error codes are the same. |
|
| ▲ | semiinfinitely 6 hours ago | parent | prev | next [-] |
| even with 90% of c++ features banned, the language remains ~5x larger than every other programming language |
| |
|
| ▲ | nikanj 6 hours ago | parent | prev | next [-] |
| In 1994 C++ compilers were buggy, and a modernization of the C++ allowed features list is still stuck in a committee somewhere? |
|
| ▲ | ltbarcly3 6 hours ago | parent | prev | next [-] |
| Paging our Ada fanboys! You're missing it! |
|
| ▲ | mwkaufma 7 hours ago | parent | prev [-] |
| TL;DR - no exceptions - no recursion - no malloc()/free() in the inner-loop |
| |
| ▲ | thefourthchime 6 hours ago | parent | next [-] | | I've worked on a playout system for broadcast television. The software has to run for years at a time and not have any leaks. We need to send out one frame of television exactly on time, every time. It is "C++", but we also follow the same standards. Static memory allocation, no exceptions, no recursion. We don't use templates. We barely use inheritance. It's more like C with classes. | |
| ▲ | EliRivers 5 hours ago | parent [-] | | I worked on the same for many years; same deal - playout system for broadcast, years of uptime, never miss a frame. The C++ was atrocious. Home-made reference counting that was thread-dangerous, but depending on what kind of object the multi-multi-multi diamond inheritance would use, sometimes it would increment, sometimes it wouldn't. Entire objects made out of weird inheritance chains. Even the naming system was crazy; "pencilFactory" wasn't a factory for making pencils, it was anything that was made by the factory for pencils. Inheritance rather than composition was very clearly the model; if some other object had a function you needed, you would inherit from that too. Which led to some objects inheriting from the same class a half-dozen times in all. The multi-inheritance system was given weird control by objects that, on creation, defined what kinds of objects (from the set of all kinds that they actually were) they could be cast to via a special function, but any time someone wanted one that wasn't on that list they'd just cast to it using C++ anyway. You had to cast, because the functions were all deliberately private - to force you to cast. But not how C++ would expect you to cast, oh no! Crazy, home-made containers that were like Win32 opaque objects; you'd just get a void pointer to the object you wanted, and to get the next one pass that void pointer back in. Obviously trying to copy MS COM with IUnknown and other such home-made QueryInterface nonsense, in effect creating their own inheritance system on top of C++. What I really learned is that it's possible to create systems that maintain years of uptime and keep their frame accuracy even with the most atrocious, utterly insane architecture decisions that make it so clear the original architect was thinking in C the whole time and using C++ to build his own terrible implementation of C++, and THAT'S what he wrote it all in. Gosh, this was a fun walk down memory lane. | |
| ▲ | uecker 2 hours ago | parent | next [-] | | A multi-inhertiance system is certainly not something somebody who "was thinking in C" would ever come up with. This sounds more like a true C++ mess. | | |
| ▲ | throwaway2037 an hour ago | parent [-] | | I worked on a pure C system early in my career. They implemented multiple inheritance (a bit like Perl/Python MRO style) in pure C. It was nuts, but they didn't abuse it, so it worked OK. Also, serious question: Are they any GUI toolkits that do not use multiple inheritance? Even Java Swing uses multiple inheritance through interfaces. (I guess DotNet does something similar.) Qt has it all over the place. | | |
| ▲ | nottorp an hour ago | parent [-] | | One could say toolkits done in C++ use multiple inheritance because C++ doesn't have interfaces though. |
|
| |
| ▲ | webdevver 4 hours ago | parent | prev [-] | | it is also interesting that places where you would expect to have quite 'switched-on' software development practices tend to be the opposite - and the much-maligned 'codemonkeys' at 'big tech' in fact tend to be pretty damn good. it was painful for me to accept that the most elite programmers i have ever encountered were the ones working in high frequency trading, finance, and mass-producers of 'slop' (adtech, etc.) i still ache to work in embedded fields, in an 8kB-constrained environment, to write perfectly correct code without a cycle wasted, but i know from others' experience that embedded software tends to have the worst software developers and software development practices of them all. |
|
| |
| ▲ | Taniwha 6 hours ago | parent | prev | next [-] | | Yup, same for any real-time code: new/malloc/free/delete use hidden mutexes and can cause priority inversion as a result - heisenbugs, that audio/video dropout that happens rarely and you can't quite catch - best to code to avoid them | |
| ▲ | AnimalMuppet 5 hours ago | parent [-] | | They also can simply fail, if you are out of memory or your heap is hopelessly fragmented. And they take an unpredictable amount of time. That's very bad if you're trying to prove that you satisfy the worst-case timing requirement. |
| |
| ▲ | pton_xd 5 hours ago | parent | prev | next [-] | | That's standard in the games industry as well. Plus many others like no rtti, no huge dependencies like boost, no smart pointers, generally avoid ctors / dtors, etc. | |
| ▲ | krashidov 5 hours ago | parent | prev | next [-] | | Has anyone else here banned exceptions (for the most part) in less critical settings (like a web app)? I feel like that's the way to go since you don't obscure control flow. I have also been considering adding assertions like TigerBeetle does https://github.com/tigerbeetle/tigerbeetle/blob/main/docs/TI... | |
| ▲ | fweimer 2 hours ago | parent | next [-] | | Most large open-source projects ban exceptions, often because the project was originally converted from C and is just not compatible with non-local control flow. Or the project originated within an organization which has tons of C++ code that is not exception-safe and is expected to integrate with that. Some large commercial software systems use C++ exceptions, though. Until recently, pretty much all implementations seemed to have a global mutex on the throw path. With higher and higher core counts, the affordable throw rate in a process was getting surprisingly slow. But the lock is gone in GCC/libstdc++ with glibc. Hopefully the other implementations follow, so that we don't end up with yet another error handling scheme for C++. | |
| ▲ | tonfa 3 hours ago | parent | prev | next [-] | | Google style bans them: https://google.github.io/styleguide/cppguide.html#Exceptions | |
| ▲ | mwkaufma 5 hours ago | parent | prev [-] | | Lots of games, and notably the Unreal Engine, compile without exceptions. EASTL back in the day was in part written to avoid the poor no-exception support in Dinkumware STL and STLport. | | |
| ▲ | jesse__ an hour ago | parent [-] | | Basically all high profile engine teams I know of ban exceptions. They're worse than useless |
|
| |
| ▲ | wiseowise 6 hours ago | parent | prev | next [-] | | That’s hardly 90% of C++. | | |
| ▲ | elteto 6 hours ago | parent | next [-] | | If you compile with -fno-exceptions you just lost almost all of the STL. You can compile with exceptions enabled, use the STL, but strictly enforce no allocations after initialization. It depends on how strict is the spec you are trying to hit. | | |
| ▲ | vodou 6 hours ago | parent | next [-] | | Not my experience. I work with a -fno-exceptions codebase. Still quite a lot of std left. (Exceptions come with a surprisingly hefty binary size cost.) | | |
| ▲ | elteto 5 hours ago | parent | next [-] | | Not exactly sure what your experience is, but if you work in an -fno-exceptions codebase then you know that STL containers are not usable in that regime (with the exception of std::tuple it seems, see freestanding comment below). I would argue that the majority of use cases of the STL is for its containers. So, what exact parts of the STL do you use in your code base? Must be mostly compile-time stuff (types, type traits, etc.). | |
| ▲ | alchemio 4 hours ago | parent [-] | | You can use std containers in a no-exceptions environment. Just know that if an error occurs the program will terminate. | | |
| |
| ▲ | theICEBeardk 6 hours ago | parent | prev [-] | | Apparently, according to some ACCU and CppCon talks by Khalil Estell, this can be largely mitigated even in embedded, lowering the size cost by orders of magnitude. | |
| |
| ▲ | theICEBeardk 6 hours ago | parent | prev [-] | | Are you aware of the Freestanding definition of STL? See here: https://en.cppreference.com/w/cpp/freestanding.html Large and useful parts of it are available if you run with a newer c++ standard. | | |
| ▲ | elteto 5 hours ago | parent [-] | | Well, it's mostly type definitions and compiler stuff, like type_traits. Although I'm pleasantly surprised that std::tuple is fully supported. It looks like C++26 will bring in a lot more support for freestanding stuff. No algorithms or containers, which to me is probably 90% of what is most heavily used of the STL. |
|
| |
| ▲ | bluGill 6 hours ago | parent | prev [-] | | Large parts of the standard library use malloc/free. | |
| |
| ▲ | jandrewrogers 7 hours ago | parent | prev | next [-] | | i.e. standard practice for every C++ code base I've ever worked on | | |
| ▲ | DashAnimal 6 hours ago | parent [-] | | What industry do you work in? Modern RAII practices are pretty prevalent | | |
| ▲ | Cyan488 6 hours ago | parent | next [-] | | This is common in embedded systems, where there is limited memory and no OS to run garbage collection. | |
| ▲ | jandrewrogers 6 hours ago | parent | prev [-] | | What does RAII have to do with any of the above? | | |
| ▲ | WD-42 6 hours ago | parent | next [-] | | 0 allocations after the program initializes. | | |
| ▲ | tialaramex 6 hours ago | parent | next [-] | | RAII doesn't imply allocating. My guess is that you're assuming all user defined types, and maybe even all non-trivial built-in types too, are boxed, meaning they're allocated on the heap when we create them. That's not the case in C++ (the language in question here) and it's rarely the case in other modern languages because it has terrible performance qualities. | |
| ▲ | Gupie 6 hours ago | parent | prev | next [-] | | Open a file in the constructor, close it in the destructor. RAII with 0 allocations. | |
| ▲ | jjmarr 6 hours ago | parent | prev | next [-] | | Stack "allocations" are basically free. | |
| ▲ | nicoburns 6 hours ago | parent | prev [-] | | RAII doesn't necessarily require allocation? |
| |
| ▲ | DashAnimal 6 hours ago | parent | prev | next [-] | | Well if you're using the standard library then you're not really paying attention to allocations and deallocations for one. For instance, the use of std::string. So I guess I'm wondering if you work in an industry that avoids std? | | |
| ▲ | jandrewrogers 5 hours ago | parent [-] | | I work in high-scale data infrastructure. It is common practice to do no memory allocation after bootstrap. Much of the standard library is still available despite this, though there are other reasons to not use the standard containers. For example, it is common to need containers that can be paged to storage across process boundaries. C++ is designed to make this pretty easy. |
| |
| ▲ | nmhancoc 6 hours ago | parent | prev | next [-] | | Not an expert but I’m pretty sure no exceptions means you can’t use significant parts of std algorithm or the std containers. And if you’re using pooling I think RAII gets significantly trickier to do. | | | |
| ▲ | astrobe_ 6 hours ago | parent | prev [-] | | And what does "modern" have to do with it anyway. |
|
|
| |
| ▲ | tialaramex 6 hours ago | parent | prev | next [-] | | Forbidding recursion is pretty annoying. One of the nice things that's on the distant horizon for Rust is an explicit tail recursion operator perhaps named `become`. Unlike naive recursion, which as this video (I haven't followed the link but I'm assuming it is Laurie's recent video) explains risks stack overflow, optimized tail recursion doesn't grow the stack. The idea of `become` is to signal "I believe this can be tail recursive" and then the compiler is either going to agree and deliver the optimized machine code, or disagree and your program won't compile, so in neither case have you introduced a stack overflow. Rust's Drop mechanism throws a small spanner into this, in principle if every function foo makes a Goose, and then in most cases calls foo again, we shouldn't Drop each Goose until the functions return, which is too late, that's now our tail instead of the call. So the `become` feature AIUI will spot this, and Drop that Goose early (or refuse to compile) to support the optimization. | | |
| ▲ | tgv 6 hours ago | parent | next [-] | | In C, tail recursion is a fairly simple rewrite. I can't think of any complications. But ... that rewrite can increase the cyclomatic complexity of the code on which they have some hard limits, so perhaps that's why it isn't allowed? And the stack overflow, of course. | | |
| ▲ | AnimalMuppet 5 hours ago | parent [-] | | I don't know that it's just cyclomatic complexity. I think it at least part of it is proving that you meet hard real-time constraints. Recursion is harder to analyze that way than "for (i = 0; i < 16; i++) ... " is. |
| |
| ▲ | zozbot234 4 hours ago | parent | prev [-] | | The tail recursion operator is a nice idea, but the extra `become` keyword is annoying. I think the syntax should be `return as`: it uses existing keywords, is unambiguous and starts with `return` which tail recursion is a special case of. | | |
| ▲ | tialaramex 3 hours ago | parent [-] | | Traditionally the time for bike shedding the exact syntax is much closer to stabilization. Because Rust is allowed (at this sort of distance in time) to reserve new keywords via editions, it's not a problem to invent more, so I generally do prefer new keywords over re-using existing words but I'm sure I'd be interested in reading the pros and cons. | | |
| ▲ | zozbot234 2 hours ago | parent [-] | | The usual argument against a decorated `return` keyword is that a proper tail call is not a true "return" since it has to first drop any locals that aren't passed thru to the tail call. I don't think it's a very good argument because if the distinction of where exactly those implicit drops occur was that important, we'd probably choose to require explicit drops anyway. |
|
|
| |
| ▲ | petermcneeley 3 hours ago | parent | prev | next [-] | | This is basically video games prior to 2010 | | |
| ▲ | mwkaufma 3 hours ago | parent [-] | | Relax the dynamic-memory restriction to "limit per-event memory allocation to the bump allocator" and it's still mostly true for many AAA/AAAA games I work on today. | | |
| ▲ | petermcneeley 2 hours ago | parent [-] | | Developers have gotten lazy. I'm glad to hear that where you are they are at least still trying. | |
|
| |
| ▲ | msla 5 hours ago | parent | prev [-] | | At that point, why not write in C? Do they think it's C/C++ and not understand the difference? > no recursion Does this actually mean no recursion or does it just mean to limit stack use? Because processing a tree, for example, is recursive even if you use an array instead of the stack to keep track of your progress. The real trick is limiting memory consumption, which requires limiting input size. | |
| ▲ | drnick1 3 hours ago | parent | next [-] | | You may still want to use classes (where they make sense), references (cleaner syntax than pointers), operator overloading, etc. For example, a linear algebra library is far nicer to write and use in C++. | | | |
| ▲ | mwkaufma 5 hours ago | parent | prev | next [-] | | For a long time, at least in MS and Intel, the C++ compilers were better than the C compilers. | |
| ▲ | mwkaufma 4 hours ago | parent | prev [-] | | Re: recursion. She explains in her video. Per requirements, the stack capacity has to be statically verifiable, and not dependent on runtime input. |
|
|