gf000 5 days ago

This discussion is absolutely meaningless without specifying what kind of software we are talking about.

A 4x slowdown may be completely irrelevant for software that spends most of its time waiting on IO, which I would wager a good chunk of user-facing software does. If it has an event loop and does a 0.5 ms calculation once every second, doing the same calculation in 2 ms is simply not noticeable.
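The arithmetic behind that claim is easy to sanity-check. A quick sketch using the hypothetical figures above (one event per second, 0.5 ms vs 2 ms of computation):

```python
# Hypothetical event loop from the example above: one calculation per second.
fast_ms = 0.5       # calculation time in the faster language
slow_ms = 2.0       # same calculation at a 4x slowdown
period_ms = 1000.0  # one event per second

fast_util = fast_ms / period_ms  # fraction of each period spent on CPU
slow_util = slow_ms / period_ms

print(f"fast: {fast_util:.2%} of CPU, slow: {slow_util:.2%} of CPU")
# Both utilizations are well under 1%, so the 4x slowdown is invisible.
```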

For compilers, it may not make as much sense (not even for performance reasons, but simply because a memory issue taking down the program would still be well-contained, and memory leaks wouldn't matter much since it's a relatively short-lived program to begin with).

And then there are the truly CPU-bound programs, but seriously, how often do you [1] see your CPU maxed out for long durations on your desktop PC?

[1] not you, pizlonator, just joining the discussion replying to you

monkeyelite 5 days ago | parent [-]

This IO-bound myth is commonly repeated, yet most software spends many multiples of its IO time executing on the CPU. Execution time is summed, and using a language like C lets you better control your data and optimize your use of IO resources.

gf000 5 days ago | parent [-]

Well, software is not like a traditional Turing machine that takes an input, buzzes for a bit, and returns a response.

Most software runs continuously, reacting to different events.

You can't do IO work that depends on CPU work ahead of time, and neither can you do CPU work that depends on IO. You have a bunch of complicated interdependencies between the two, and the total execution time is heavily constrained by this directed graph. No matter how efficient your data manipulation algorithm is, it doesn't help if you still have to wait for the data to load from the web or from a file.

Just draw a Gantt chart and, sure, sum the execution time. My point is that due to the interdependencies you will have a longest lane (the critical path), and no matter what you do to the small CPU parts, you can only marginally affect the whole.
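The "longest lane" argument can be made concrete with a toy dependency graph. The task names and durations below are invented for illustration, and unlimited parallelism is assumed:

```python
# (duration_ms, is_cpu, dependencies) -- all values hypothetical.
tasks = {
    "fetch_a": (120.0, False, []),                    # IO: load from network
    "fetch_b": (80.0,  False, []),                    # IO: load from disk
    "parse_a": (2.0,   True,  ["fetch_a"]),           # CPU
    "parse_b": (1.0,   True,  ["fetch_b"]),           # CPU
    "merge":   (3.0,   True,  ["parse_a", "parse_b"]),# CPU
}

def critical_path(tasks, cpu_speedup=1.0):
    """Length of the longest dependency chain, optionally with faster CPU steps."""
    memo = {}
    def finish(name):
        if name not in memo:
            dur, is_cpu, deps = tasks[name]
            if is_cpu:
                dur /= cpu_speedup
            memo[name] = dur + max((finish(d) for d in deps), default=0.0)
        return memo[name]
    return max(finish(t) for t in tasks)

print(critical_path(tasks))                 # 125.0 ms end to end
print(critical_path(tasks, cpu_speedup=4))  # 121.25 ms: 4x faster CPU, ~3% faster overall
```

Making every CPU step 4x faster barely moves the total, because the lane is dominated by the IO waits.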

It gets even funnier with parallelism (so far this was just concurrency), where a similar concept is known as Amdahl's law.

And I would even go as far as to claim that what you gain by using C, you often lose several-fold by going with a simpler parallelism model for fear of segfaults, a model you could fearlessly use in a higher-level language.

monkeyelite 4 days ago | parent [-]

> what you gain by using C, you often lose several-fold by going with a simpler parallelism model for fear of segfaults, a model you could fearlessly use in a higher-level language.

Wait - what was that part about Amdahl's law?

Also, segfaults are unrelated to parallelism.

gf000 4 days ago | parent [-]

Amdahl's law says that the potential speedup from going parallel is limited by the parts that must remain serial. Nothing controversial here: many tasks can be parallelized just fine.
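For reference, Amdahl's law states that if a fraction p of the work can be parallelized over n workers, the overall speedup is 1 / ((1 - p) + p/n). A quick sketch:

```python
def amdahl_speedup(parallel_fraction, n_workers):
    """Overall speedup when a fraction p of the work parallelizes over n workers."""
    p = parallel_fraction
    return 1.0 / ((1.0 - p) + p / n_workers)

# A 10% serial portion caps the speedup at 1/(1-p) = 10x, no matter how many workers:
print(amdahl_speedup(0.9, 4))     # ~3.08x on 4 workers
print(amdahl_speedup(0.9, 1000))  # ~9.91x, approaching the 10x ceiling
```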

My point is that in C you often see a simpler algorithm or data structure chosen for fear of a memory issue or of not getting some edge case right.

What part are you disagreeing with? That parallel code has more gotchas, which make a footgun-y language even more prone to failures?