jillesvangurp 5 days ago

C++ and C rely heavily on skill and discipline, rather than automated checks, to stay safe. Over time, and in larger groups of people, that always fails. People just aren't that disciplined, and they get overconfident in their own skills (or level of discipline). Decades of endless memory leaks, buffer overflows, etc., and the related security issues, crash bugs, and data corruption show that no code base is really immune to this.

The best attitude in programmers (regardless of the language) is the awareness that "my code probably contains embarrassing bugs, I just haven't found them yet". Act accordingly.

There are, of course, lots of valid reasons to continue using C/C++ on projects where it is already in use, and there are a lot of such projects. Rewrites are disruptive, time-consuming, expensive, and risky.

It is true that there are ways in C++ to mitigate some of these issues. Mostly this boils down to using tools and libraries, and avoiding some of the darker corners of the language and standard library. And if you have a large legacy code base, adopting some of these practices is prudent.

However, a lot of this stuff boils down to discipline and skill. You need to know what to use and do, and why. And then you need to be disciplined enough to stick with that. And hope that everybody around you is equally skilled and disciplined.

However, for new projects, there usually are valid alternatives. Even performance and memory are not the arguments they used to be. Rust seems to be building a decent reputation for combining compile-time safety with performance and robustness, often beating C/C++ implementations of things where Rust is used to provide a drop-in replacement. Given that, I can see why major companies are reluctant to take on new C/C++ projects. I don't think there are many (or any) upsides to offset the well-documented downsides.

pornel 4 days ago | parent | next [-]

People innately admire difficult skills, regardless of their usefulness. Acrobatic skateboarding is impressive, even when it would be faster and safer to go in a straight line or use a different mode of transport.

To me, skill and effort are misplaced and wasted when they're spent on manually checking invariants that a compiler could check better automatically, or on implementing clever workarounds for language warts that no longer provide any value.

Removal of busywork and pointless obstacles won't make smart programmers dumb and lazy. It allows smart programmers to use their brainpower on bigger, more ambitious problems.

bayindirh 4 days ago | parent | next [-]

These kinds of comments always remind me that we forget, every time, where we come from in terms of computation.

It's important to remember Rust's borrow checker was computationally infeasible 15 years ago. C & C++ are much older than that, and they come from an era where variable name length affected compilation time.

It's easy to publicly shame people who have been doing hard things for a long time in the light of newer tools. However, many of the people who like these languages have been using them since before the languages we champion today were mere ideas.

I personally like Go these days for its stupid simplicity, but when I'm going to do something serious, I'll always use C++. You can fight me, but you'll never pry C++ from my cold, dead hands.

For the record, I don't like C & C++ because they are hard. I like them because they provide a more transparent window into the processor, which is a glorified, hardware implemented PDP-11 emulator.

Last, we shall not forget that all processors are C VMs, anyway.

steveklabnik 4 days ago | parent | next [-]

> It's important to remember Rust's borrow checker was computationally infeasible 15 years ago.

The core of the borrow checker was being formulated in 2012[1], which is 13 years ago. No infeasibility then. And it's based on ideas that are much older, going back to the 90s.

Plus, you are vastly overestimating the expense of borrow checking: it is very fast, and not the reason for Rust's compile times being slow. You absolutely could have done borrow checking much earlier, even with less computing power available.

1: https://smallcultfollowing.com/babysteps/blog/2012/11/18/ima...

aw1621107 4 days ago | parent | prev | next [-]

> It's important to remember Rust's borrow checker was computationally infeasible 15 years ago.

IIRC borrow checking usually doesn't consume that much compilation time for most crates - maybe a few percent or thereabouts. Monomorphization can be significantly more expensive and that's been much more widely used for much longer.
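
To make that concrete, here's a toy Rust sketch (mine, not from any profiling data, and the function name is made up): every concrete type a generic function is used with gets its own compiled copy, so codegen and optimization work grow with usage, whereas borrow checking is a comparatively cheap per-function analysis.

    // Toy example: this one generic function is monomorphized once per
    // concrete type it is called with, producing separate machine code.
    fn largest<T: PartialOrd + Copy>(items: &[T]) -> T {
        let mut best = items[0];
        for &item in &items[1..] {
            if item > best {
                best = item;
            }
        }
        best
    }

    fn main() {
        // Two instantiations: largest::<i32> and largest::<f64> are compiled
        // and optimized independently.
        println!("{}", largest(&[3, 7, 2]));
        println!("{}", largest(&[1.5, 0.5, 2.5]));
    }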

nxobject 4 days ago | parent | prev | next [-]

> It's important to remember Rust's borrow checker was computationally infeasible 15 years ago. C & C++ are much older than that, and they come from an era where variable name length affected compilation time.

I think you're setting the bar a little too high. Rust's borrow-checking semantics draw on much earlier research (for example, Cyclone had a form of region-checking in 2006); and Turbo Pascal was churning through 127-character identifiers on 8088s in 1983, one year before C++ stream I/O was designed.

EDIT: changed Cyclone's "2002" to "2006".

pjmlp 4 days ago | parent | prev | next [-]

I remember; I was there coding in the 1980s, which is how I know C and C++ were not the only alternatives, just the ones that eventually won in the end.

unscaled 4 days ago | parent | prev | next [-]

> the processor, which is a glorified, hardware implemented PDP-11 emulator.

This specific claim seems like just gratuitously rewriting history.

I can get how you'd feel C (and certain dialects of C++) are "closer to the metal" in a certain sense: C supports very few abstractions, and with fewer abstractions there are fewer "things" between you and "the metal". But this is as far as it goes. C does not represent, by any stretch of the imagination, an accurate computational or memory model of a modern CPU. It does stay close to the PDP-11, but calling modern CPUs "glorified hardware emulators of the PDP-11" is just preposterous.

The PDP-11 was an in-order CISC processor with no virtual memory, cache hierarchy, branch prediction, symmetric multiprocessing, or SIMD instructions. Some modern CPUs (namely the x86/x64 family) do emulate a CISC ISA on top of an internal design that is arguably more RISC-like, but that's as far as we can say they are trying to behave like a PDP-11 (and even then, the intention was to behave like a first-gen Intel Pentium).

raverbashing 4 days ago | parent | prev [-]

> we shall not forget that all processors are C VMs

This idea is some 10yrs behind. And no, thinking that C is "closer to the processor" today is incorrect

It makes you think it is close, which in some sense is even worse.

lelanthran 4 days ago | parent [-]

> This idea is some 10yrs behind.

Akshually[1] ...

> And no, thinking that C is "closer to the processor" today is incorrect

THIS thinking is about 5 years out of date.

Sure, this thinking you exhibit gained prominence and got endlessly repeated by every critic of C who once spent a summer doing a C project in undergrad, but it's been more than 5 years since this opinion was essentially nullified by

    Okay, if C is "not close to the processor", what's closer?
Assembler? After all if everything else is "Just as close as C, but not closer", then just what kind of spectrum are you measuring on, that has a lower bound which none of the data gets close to?

You're repeating something that was fashionable years ago.

===========

[1] There's always one. Today, I am that one :-)

steveklabnik 4 days ago | parent | next [-]

Standard C doesn't have inline assembly, even though many compilers provide it as an extension. Other languages do.
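
For instance, Rust has had stable inline assembly (the asm! macro) since 1.59. A minimal, x86_64-only sketch along the lines of the documented example:

    use std::arch::asm;

    fn main() {
        let mut x: u64 = 3;
        unsafe {
            // Add 5 to x with a raw x86_64 instruction; `inout(reg)` lets the
            // compiler pick the register and ties the input and output to it.
            asm!("add {0}, 5", inout(reg) x);
        }
        assert_eq!(x, 8);
    }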

> After all if everything else is "Just as close as C, but not closer", then just what kind of spectrum are you measuring on

The claim about C being "close to the machine" means different things to different people. Some people literally believe that C maps directly to the machine, when it does not. This is just a factual inaccuracy. For the people that believe that there's a spectrum, it's often implied that C is uniquely close to the machine in ways that other languages are not. The pushback here is that C is not uniquely so. "just as close, but not closer" is about that uniqueness statement, and it doesn't mean that the spectrum isn't there.

lelanthran 4 days ago | parent [-]

> Some people literally believe that C maps directly to the machine, when it does not.

Maybe they did, 5 years (or more) ago when that essay came out. It was wrong even then, but repeating it is even more wrong.

> This is just a factual inaccuracy.

No. It's what we call A Strawman Argument, because no one in this thread claimed that C was uniquely close to the hardware.

Jumping in to destroy the argument when no one is making it is almost a textbook example of strawmanning.

steveklabnik 4 days ago | parent [-]

Claiming that a processor is a "C VM" implies that it's specifically about C.

adgjlsfhk1 4 days ago | parent | prev | next [-]

Lots of languages at a higher level than C are closer to the processor in that they have interfaces for more instructions that C hasn't standardized yet.
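
A rough Rust illustration of what that looks like (exact codegen depends on the target and enabled features): bit counting and wrapping arithmetic are plain standard-library methods, whereas portable C has historically had to reach for compiler builtins like __builtin_popcount.

    fn main() {
        let x: u64 = 0b1011_0110;

        // These standard-library methods typically lower to single instructions
        // (POPCNT / LZCNT / TZCNT on x86_64 when those target features are on).
        println!("ones: {}", x.count_ones());
        println!("leading zeros: {}", x.leading_zeros());
        println!("trailing zeros: {}", x.trailing_zeros());

        // Explicit wrapping arithmetic: overflow is well-defined, no UB.
        let y: u8 = 250u8.wrapping_add(10);
        println!("wrapped: {y}"); // prints 4
    }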

lelanthran 4 days ago | parent [-]

> Lots of languages at a higher level than C are closer to the processor in that they have interfaces for more instructions that C hasn't standardized yet.

Well, you're talking about languages that don't have standards; they have a reference implementation.

IOW, no language has standards for processor intrinsics; they all have implementations that support intrinsics.

raverbashing 4 days ago | parent | prev [-]

> Okay, if C is "not close to the processor", what's closer?

LLVM IR is closer. Still higher level than Assembly

The problem is thus:

    char a, b, c;
    c = a + b;

The generated code could not be more different between x86 and ARM.

lelanthran 4 days ago | parent [-]

> LLVM IR is closer. Still higher level than Assembly

So your reasoning for repeating the once-fashionable statement is that "an intermediate representation that no human codes in is closer than the source code"?

throwawaymaths 4 days ago | parent | prev [-]

To me a compiler's effort is misplaced and wasted when it's spent on checking invariants that could be checked by a linter or a sidecar analysis module.

pornel 4 days ago | parent | next [-]

Checking of whole-program invariants can be accurate and done basically for free if the language has suitable semantics.

For example, if a language has non-nullable types, then you get this information locally, for free, everywhere, even from 3rd-party code. When the language doesn't track it, you need a linter that can do symbolic execution, construct call graphs and data flows, and find every possible assignment, and it will still end up with a lot of unknowns and waste your time on false positives and false negatives.
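
A tiny Rust sketch of what "locally for free" means here (the names are made up for illustration): the signature alone tells the caller, the callee, and any tooling whether absence is even possible, with no global analysis required.

    struct User {
        name: String,
    }

    // The signature encodes the invariant: `user` can never be absent here,
    // so neither the function body nor any checker has to re-derive that.
    fn greet(user: &User) -> String {
        format!("hello, {}", user.name)
    }

    // Absence is opt-in and visible in the type; the compiler forces the
    // caller to handle `None` before it can touch the value.
    fn greet_maybe(user: Option<&User>) -> String {
        match user {
            Some(u) => greet(u),
            None => "hello, stranger".to_string(),
        }
    }

    fn main() {
        let u = User { name: "ada".to_string() };
        println!("{}", greet(&u));
        println!("{}", greet_maybe(None));
    }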

Linters can't fix language semantics that create dead-ends for static analysis. It's not a matter of trying harder to make a better linter. If a language doesn't have clear-enough aliasing, immutability, ownership, thread-safety, etc. then a lot of analysis falls apart. Recovering required information from arbitrary code may be literally impossible (Rice's theorem), and getting even approximate results quickly ends up requiring whole-program analysis and prohibitively expensive algorithms.

And it's not even an either-or choice. You can have robust checks for fundamental invariants built into the language/compiler, and still use additional linters for detecting less clear-cut issues.

throwawaymaths 2 days ago | parent [-]

> Linters can't fix language semantics that create dead-ends for static analysis. It's not a matter of trying harder to make a better linter. If a language doesn't have clear-enough aliasing, immutability, ownership, thread-safety, etc. then a lot of analysis falls apart

this assertion has been disproven. seL4 is a fully memory-safe (and with even more safety baked in) major systems-programming behemoth that is written in C plus annotations, where the analysis is conducted in a sidecar.

to obtain extra safety (but still not as safe as seL4) in rust, you must add a sidecar in the form of MIRI. nobody proposes adding MIRI into rust.

now, it is true that seL4 is a pain in the ass to write, compile, and check, but there is a lot of design space in the unexplored spectrum between rust, rust+miri, and seL4.

fluoridation 4 days ago | parent | prev [-]

If the compiler is not checking them then it can't assume them, and that reduces the opportunities for optimizations. If the checks don't run in the compiler then they're not running every time; and if you do want them to run every time, then they may as well live in the compiler instead.
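
A concrete Rust example of a checked invariant the compiler gets to exploit (just a small sketch): because NonZeroU32's "never zero" invariant is enforced at construction, the compiler can reuse the zero bit pattern to represent None, so the Option takes no extra space.

    use std::mem::size_of;
    use std::num::NonZeroU32;

    fn main() {
        // The "can't be zero" invariant is checked when the value is created...
        let n = NonZeroU32::new(42).expect("zero is rejected");
        println!("{n}");

        // ...so the layout of Option<NonZeroU32> needs no separate tag:
        // the all-zeros bit pattern is the None case.
        assert_eq!(size_of::<Option<NonZeroU32>>(), size_of::<u32>());
    }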

lelanthran 4 days ago | parent | prev | next [-]

> C++ and C rely, heavily, on skill and discipline instead of automated checks to stay safe.

You can't sensibly talk about C and C++ as a single language. One is the simplest language there is; most of its rules can be held in the head of a single person while reading code.

The other is one of the most complex programming languages ever to have existed, one in which even world-renowned experts lose their facility for the language after a short break from it.

saghm 4 days ago | parent | next [-]

And yet, they both still suffer from the flaw that the parent comment cites. Describing a shared property doesn't imply a claim that they're the same language.

lelanthran 4 days ago | parent [-]

> And yet, they both still suffer from the flaw that the parent comment cites.

I dunno; the flaw is not really comparable, is it? The skill and discipline required to write C bug-free is orders of magnitude less than the skill and discipline required to write bug-free C++.

Unless you read GGP's post to mean a flaw different from "skill and discipline required".

saghm 4 days ago | parent [-]

I'd argue that their point was that the amount of skill and discipline required for either is higher than it's worth at this point for new projects. The difference doesn't matter if even the lower of the two is too high.

estimator7292 4 days ago | parent | prev [-]

Have you written significant amounts of C or C++?

Most people don't write C, nor use the C compiler, even when writing C. You use C++ and the C++ compiler. For (nearly) all intents and purposes, C++ has subsumed and replaced C. Most of the time when someone says something is "written in C" it actually means it's C++ without the ++ features. It's still C++ on the C++ compiler.

Actual uses of actual C are pretty esoteric and rare in the modern era. Everything else is varying degrees of C++.

QuiEgo 4 days ago | parent | next [-]

Sending out a strong disagree from the embedded systems world. C is king here.

(Broad, general, YMMV statement): The general C++ arc for an embedded developer looks like this:

1.) Discover that exceptions are way too expensive in embedded. So is RTTI.

2.) So you turn them off and get a gimped set of C++ with no STL.

3.) Then you just go back to C.

magnushiie 4 days ago | parent [-]

Skype was written without exception handling and RTTI, although it used a lot of C++ features. You can write good C++ code without these dependencies. You don't use the STL, but with cautious use of hand-built classes you can go far.

Today I wouldn't recommend building Skype in any language except Rust. But the Skype founders Ahti Heinla, Jaan Tallinn, and Priit Kasesalu found exactly the right balance of C and C++ for the time.

I also wrote a few lines of code in that dialect of C++ (no exceptions). And it didn't feel much different from modern C++ (exceptions are really fatal errors).

And regarding embedded, the same codebase was embedded in literally all the ubiquitous TVs of the time, even DECT phones. I bet there are only a few (if any) application codebases of significant size to have been deployed at that scale.

QuiEgo 4 days ago | parent | next [-]

Sure, you absolutely can use a limited set of C++, and find value, and there are many big projects that have gone that route.

See Embedded C++ - https://en.wikipedia.org/wiki/Embedded_C%2B%2B

Apple's IO Kit (all kernel drivers on macOS/iOS/iPadOS/watchOS) is a great example of what you're talking about. Billions of devices deployed with code built on this pattern.

That said, in the embedded world, when you get down to little 32-bit or 16-bit microcontrollers, not amd64 or aarch64 systems with lots of RAM, pure C is very prevalent. Many people don't find much value in classes when they are writing bare-metal code that primarily twiddles bits in registers, and they also can't or don't want to pay the overhead of things like vtables when they are very RAM-constrained (e.g., 64 KB of RAM is not that uncommon in embedded).

So, I disagree with the idea that "actual uses of C are esoteric" from the post; it's still very prevalent in the embedded space. Just want people to think about it from another use case :).

The classic example of a big pure-C project at scale is the Linux kernel.

Ask Linus what he thinks of C++. His opinions are his own (EDIT: I actually like C++ a lot, please don't come at me with pitchforks! :)); I merely repost it for entertainment value (from a while back):

https://lwn.net/Articles/249460/

Maybe a simpler example: go find a BSP (board support package) for the micro of your choice. It's almost certain that all of the example code will be in C, not C++. They may or may not support building with g++, but C is the lingua franca of embedded devs.

rramadass 4 days ago | parent | prev | next [-]

Right on the money!

Other than hardcore embedded guys and/or folks dealing with legacy C code, I and most folks I know almost always use C++ in various forms, i.e. "C++ as a better C", "Object-Oriented C++ with no template shenanigans", "Generic programming in C++ with templates and no OO", "Template metaprogramming magic", "use any subset of C++ from C++98 to C++23", etc. And of course you can mix-and-match all of the above as needed.

C++'s multi-paradigm support is so versatile that I don't know why folks on HN keep moaning about its complexity; it is the price you pay for the power you get. It is the only language I can use to program everything from itty-bitty MCUs all the way to large, complicated distributed systems on multiple servers, and it spans applications, systems, and bare-metal programming.

unscaled 4 days ago | parent [-]

In practice, C++ is a language family more than a single programming language. Every C++ project I've worked on essentially had its own idiolect of C++.

rramadass 4 days ago | parent [-]

This is just an oft-repeated cliché and nothing more. Because C++ is a multi-paradigm language (with admittedly some less-than-ideal syntax/semantic choices), people overstate its complexity without much study/experience. Herd mentality then takes over and people start parroting and spreading the canard.

For the power and flexibility that C++ gives you, it is worth one's time to get familiar with and learn to use its complexity.

pod_krad 11 hours ago | parent [-]

>people overstate its complexity without much study/experience

There is no need for any experience to be able to estimate C++'s complexity. The C++ specification is about 1,500 pages.

lelanthran 4 days ago | parent | prev [-]

> Have you written significant amounts of C or C++?

Yes.

> Most of the time when someone says something is "written in C" it actually means it's C++ without the ++ features.

Those "someones" have not written a significant amount of C. Maybe they wrote a significant amount of C++.

The cognitive load when dealing with C++ code is in no way comparable to the cognitive load required when dealing with C code, outside of code-golfing exercises, which are as unidiomatic as can be in both languages.

squirrellous 4 days ago | parent | prev | next [-]

Even new projects have good reasons to use C++. Maybe the ecosystem is built around it. Maybe competent C++ programmers are easier to find than Rust ones. Maybe you need lots of dynamic loading. Maybe you want drop-in interop with C. Maybe you're just more comfortable with C++.

I agree with the discipline aspect. C++ has a lot going against it. But despite everything, it will continue to be mainstream for a long time, and by the looks of it, not in the way COBOL is but more in the way C is.

gajjanag 4 days ago | parent | prev [-]

> I don't think there are many (or any) upsides to offset the well-documented downsides.

C++ template metaprogramming remains extremely powerful. Projects like CUTLASS, etc., could not be written to give the best performance in as ergonomic a way in Rust.

There is a reason why the ML infra community mostly goes with Python-like DSLs or template metaprogramming frameworks.

Last I checked, there are no alternatives at scale for this.