▲ | pizlonator 5 days ago |
> While there are certainly other reasons C/C++ get used in new projects, I think 99% not being performance or footprint sensitive is way overstating it.

Here's my source: I'm porting Linux From Scratch to Fil-C. There is load-bearing stuff in there, stuff I'd never think of off the top of my head, that I can assure you works just as well even with the Fil-C tax. I can't tell the difference, and I don't care that it's technically using more CPU and memory. So then you've got to wonder: why aren't those things written in JavaScript, or Python, or Java, or Haskell? And if you look inside, you just see really complex syscall usage. Not for perf, but for correctness. It's code that would be zero fun to try to write in anything other than C or C++.
▲ | reorder9695 4 days ago | parent | next |
I have no credentials here, but I'd be interested in knowing what environmental impact relatively high-overhead things like this (Fil-C, VMs, containers) have, as opposed to running optimized, well-designed code. I don't mean in regular projects, but specifically in things like the Linux kernel, which runs on potentially millions? billions? of computers.
▲ | kragen 4 days ago | parent | prev | next |
I wonder if something like LuaJIT would be an option. Certainly Objective-C would work.
▲ | johncolanduoni 4 days ago | parent | prev |
My source is that Google spent a bunch of engineer time writing, testing, and tweaking complicated outlining passes for LLVM to get broad 1% performance gains in C++ software, and everybody hailed it as a masterstroke when it shipped. Was that performance art? 1% of C++ developers drowning out the apparent 99% who didn't (or shouldn't) care?

I never said there was no place for taking a 2x performance hit in C or C++ code. I think Fil-C is a really interesting direction and definitely well executed. I just don't see how you can claim that C++ code that can't take a 2x performance hit is some bizarre, 1% edge case for C++.