thomasmg 3 days ago

Yes, safety got more important, and it's great to support old C code in a safe way. The performance drop and especially the GC of Fil-C do limit its usage, however. I read there are some ideas for Fil-C without GC; I would love to hear more about that!

But all existing programming languages seem to have some disadvantage: C is fast but unsafe. Fil-C is C-compatible but requires GC, more memory, and is slower. Rust is fast and uses little memory, but is verbose and hard to use (borrow checker). Python, Java, C#, etc. are easy to use and concise but, like Fil-C, require a tracing GC and thus more memory, and are slow.

I think the 'perfect' language would be as concise as Python, statically typed, and would not require a tracing GC, like Swift (use reference counting), while supporting some kind of borrow checker like Rust's (for the most performance-critical sections). And leverage the C ecosystem, by transpiling to C. It would then run on almost all existing hardware and could even be used in the kernel.

procaryote 3 days ago | parent | next [-]

> Python, Java, C# [...] are slow

These might all be slower than well-written C or Rust, but they're not nearly the same magnitude of slow. Java is often within an order of magnitude of C/C++ in practice, and threading is less of a pain. Python can easily be 100x slower, and until very recently threading wasn't even an option for using more CPU due to the GIL, so you needed extra complexity to deal with that.

There's also Golang, which is in the same ballpark as Java and C.

thomasmg 2 days ago | parent | next [-]

You are right, languages with a tracing GC are fast. Often, they are faster than C or Rust, if you measure peak performance of a micro-benchmark that does a lot of memory management. But that is only true if you just measure the speed of the main thread :-) Tracing garbage collection does most of its work in separate threads, and so is often not visible in benchmarks. Memory usage is also not easily visible, but languages with a tracing GC need about twice as much memory as e.g. C or Rust. (When using an arena allocator in C, you can get even faster, at the cost of memory usage.)
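
To make the arena trade-off concrete, here is a minimal bump-arena sketch in Rust (the `Arena` type and its methods are made up for illustration): allocation is just a pointer bump, there is no per-object free, and the whole buffer is released at once, which is why it trades memory for speed.

```rust
// A toy bump arena: fast allocation, no individual deallocation.
struct Arena {
    buf: Vec<u8>,
    used: usize,
}

impl Arena {
    fn new(cap: usize) -> Self {
        Arena { buf: vec![0; cap], used: 0 }
    }

    // Allocation is a bounds check plus a pointer bump; freeing happens
    // only when the whole arena is dropped.
    fn alloc(&mut self, n: usize) -> &mut [u8] {
        assert!(self.used + n <= self.buf.len(), "arena exhausted");
        let start = self.used;
        self.used += n;
        &mut self.buf[start..self.used]
    }
}

fn main() {
    let mut a = Arena::new(1024);
    let x = a.alloc(16);
    x[0] = 42;
    assert_eq!(a.used, 16);
}
```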

Yes, Python is especially slow, but I think that's probably more because it's dynamically typed and not compiled. I found PyPy is quite fast.

procaryote 2 days ago | parent | next [-]

I've built high load services in Java. GC can be an issue if it gets bad enough to have to pause, but it's in no way a big performance drain regularly.

PyPy is fast compared to plain Python, but it's not remotely in the same ballpark as C, Java, or Golang.

thomasmg 2 days ago | parent [-]

Sure, it's not a big performance drain. For the vast majority of software, it is fine. Usually, the ability to write programs more quickly in e.g. Java (not having to care about memory management) outweighs the possible gain of Rust, which can reduce memory usage and total energy usage (because no background threads are needed for GC). I also write most software in Java. Right now, the friction of languages that don't require a tracing GC is just too high. But I don't think this is a law of nature; it's just that there are no better languages yet that don't require a tracing GC. The closest is probably Swift, from a memory / energy usage perspective, but it has other issues.

gf000 2 days ago | parent [-]

> and total energy usage

Surprisingly, Java is right behind manually memory-managed languages in terms of energy use, due to its GC being so efficient. It turns out that if your GC can "sprint very fast", you can postpone running it until the last second, and memory drains the same amount no matter what kind of garbage it holds. Also, just "booking" that a region is now garbage without doing any work is cheaper than calling a potentially long chain of destructors or incrementing/decrementing counters.

igouy 2 days ago | parent | prev [-]

> not visible in benchmarks

fwiw benchmarksgame uses benchexec

https://github.com/sosy-lab/benchexec

mrsmrtss 3 days ago | parent | prev [-]

Of these languages, C# may actually be the fastest.

https://benchmarksgame-team.pages.debian.net/benchmarksgame/...

tialaramex 3 days ago | parent [-]

In most cases the later entries for a language in the benchmarks game are increasingly hyper-optimized and non-idiomatic for that language, which is exactly where C# will say "here are some dangerous features, be careful" and the other languages are likely to suggest you use a bare-metal language instead.

Presumably the benchmark game doesn't allow "I wrote this code in C" as a Python submission, but it would allow unsafe C# tricks ?

igouy 2 days ago | parent | next [-]

Note: 'possible hand-written vector instructions or "unsafe" or naked ffi' are flagged by *

https://benchmarksgame-team.pages.debian.net/benchmarksgame/...

Note: Here are naive un-optimised single-thread programs transliterated line-by-line literal style into different programming languages from the same original.

https://benchmarksgame-team.pages.debian.net/benchmarksgame/...

mrsmrtss 2 days ago | parent | prev [-]

Unsafe C# is still C# though. Also C# has a lot more control over memory than Java for example, so you don't actually need to use unsafe to be fast. Or are you trying to say that C# is only fast when using unsafe?

jamincan 2 days ago | parent [-]

Likely just that the fastest implementations in the benchmarks game are using those features and so aren't really a good reflection of the language as it is normally used. This is a problem for any language on the list, really; the fastest implementations are probably not going to reflect idiomatic coding practices.

igouy 2 days ago | parent | next [-]

Here are naive un-optimised single-thread programs transliterated line-by-line literal style into different programming languages from the same original.

https://benchmarksgame-team.pages.debian.net/benchmarksgame/...

neonsunset 2 days ago | parent | prev [-]

[dead]

pizlonator 2 days ago | parent | prev | next [-]

> The performance drop and especially the GC of Fil-C do limit the usage however. I read there are some ideas for Fil-C without GC; I would love to hear more about that!

I love how people assume that the GC is the reason for Fil-C being slower than C and that somehow, if it didn't have a GC, it wouldn't be slower.

Fil-C is slower than C because of InvisiCaps. https://fil-c.org/invisicaps

The GC is crazy fast and fully concurrent/parallel. https://fil-c.org/fugc

Removing the GC is likely to make Fil-C slower, not faster.

thomasmg 2 days ago | parent [-]

Well, I didn't mean that the GC is the reason for Fil-C being slower. I meant that the performance drop of Fil-C (as described in the article) limits its usage, and the GC (independently) limits its usage.

I understand that the raw speed (of the main thread) of Fil-C can be higher with a tracing GC than without. But I think there's a limit on how fast and memory-efficient Fil-C can get, given that it necessarily has to do a lot of things at runtime rather than at compile time. The energy usage and memory usage of a programming language that uses a tracing GC are higher than those of one without, at least if the memory management logic can be done at compile time.

For Fil-C, a lot of the memory management logic, and the checks, necessarily need to happen at runtime. Unless the code is annotated somehow, but then it wouldn't be pure C any longer.

dfawcus 2 days ago | parent [-]

I wonder if some of the Apple-provided Clang annotations for bounds checking could be combined with Fil-C?

That might then allow some of the checks to be statically optimized away, i.e. by annotating pointers upon which arithmetic is not allowed.

The Fil-C capability mechanisms for trapping double-free and use-after-free would probably have to be retained, but maybe it could optimize some uses?

spijdar 3 days ago | parent | prev | next [-]

Nim fits most of those descriptors, and it’s become my favorite language to use. Like any language, it’s still a compromise, but it sits in a really nice spot in terms of compromises, at least IMO. Its biggest downsides are all related to its relative “obscurity” (compared to the other mentioned languages) and resulting small ecosystem.

tptacek 3 days ago | parent | next [-]

The advantage of Fil-C is that it's C, not some other language. For the problem domain it's most suited to, you'd use C/C++, some other ultra-modern memory-safe C/C++ system, or Rust.

thomasmg 3 days ago | parent | prev [-]

I agree. Nim is memory safe, concise, and fast. In my view, Nim lacks a very clear memory management strategy: it supports ARC, ORC, manual (unsafe) allocation, and move semantics. Maybe supporting fewer options would be better? Usually, adding things that are lacking is easier than removing features, especially if the community is small and you don't want to alienate too many people.

pjmlp 3 days ago | parent | prev | next [-]

Slow to whom, though?

Yes, they might lose the meaningless benchmarks game that gets thrown around; what matters is whether they are fast enough for the problem that is being solved.

If everyone actually cared about performance above anything else, we wouldn't have an Electron crap crisis.

wrathofmonads 2 days ago | parent | next [-]

Seems like Windows is trying to address the Electron problem by adopting React Native for their WinAppSDK. RN is not just a cross-platform solution, but a framework that allows Windows to finally tap into the pool of devs used to that declarative UI paradigm. They appear to be standardizing on TypeScript, with C++ for the performance-critical native parts. They leverage the scene graph directly from WinAppSDK. By prioritizing C++ over C# for extensions and TS for the render code, they might actually hit the sweet spot.

https://microsoft.github.io/react-native-windows/docs/new-ar...

pjmlp 2 days ago | parent [-]

Anything related to WinUI is a bad joke.

Have fun following the discussions and the number of bugs:

https://github.com/microsoft/microsoft-ui-xaml

The C++ support that the WinUI team's marketing keeps talking about relies on a framework that is no longer being developed.

> The reason the issues page only lets you create a bug report is because cppwinrt is in maintenance mode and no longer receiving new feature work. cppwinrt serves an important and specific role, but further feature development risks destabilizing the project. Additional helpers are regularly contributed to complimentary projects such as https://github.com/microsoft/wil/.

From https://github.com/microsoft/cppwinrt/issues/1289#issuecomme...

IshKebab 3 days ago | parent | prev | next [-]

I don't know, I think what matters is that performance is close to the best you can reasonably get in any other language.

People don't like leaving performance on the table. It feels stupid and it lets competitors have an easy advantage.

The Electron situation is not because people don't care about performance; it's because they care more about some other things (e.g. not having to do 4x the work to get native apps).

pjmlp 3 days ago | parent [-]

Your second paragraph kind of contradicts the last one.

And yes, caring more about other things is why performance isn't the number one item, and most applications stopped being written in pure C or C++ in the early 2000s.

We go even further up several abstraction layers nowadays, with the ongoing uptake of LLMs and agentic workflows in iPaaS low-code tools.

Personally, at work I haven't written a pure 100% C or C++ application since 1999; it's always a mix of Tcl, Perl, Python, or C# alongside C or C++. Private projects are another matter.

zozbot234 3 days ago | parent [-]

Most applications stopped being written in C/C++ when Java first came out - the first memory safe language with mass enterprise adoption. Java was the Rust of the mid-1990s, even though it used a GC which made it a lot slower and clunkier than actual Rust.

pjmlp 3 days ago | parent [-]

I would say that the "first" belongs to Smalltalk, Visual Basic and Delphi.

What Java had going for it was the massive scale of Sun's marketing, and the JDK being available as free beer; however, until Eclipse came along, all IDEs were commercial, and everyone was coding in Emacs, vi (no Vim yet), nano, and so on.

However, it only became viable after Java 1.3, when HotSpot became part of Java's runtime.

I agree with the spirit of your comment, though, and I also think that the blow Java dealt to C and C++ wasn't bigger only because AOT tools were only available at high commercial prices.

Many folks use C and C++ not for their systems programming features, but because they are the only AOT-compiled languages that they know.

igouy 2 days ago | parent | prev [-]

When performance doesn't matter, it doesn't matter.

vlovich123 3 days ago | parent | prev | next [-]

> And leverage the C ecosystem, by transpiling to C

I heavily doubt that this would work reliably on arbitrary C compilers, as the interpretation of the standard gets really wonky and certain constructs that should work might not even compile. Typically such things target GCC because it has such a large backend of supported architectures. But LLVM supports a large overlapping set too - that's why it's supported to build the Linux kernel under Clang and why Rust can support so many microcontrollers. For Rust, that's why there's the rustc_codegen_gcc effort, which uses GCC as the backend instead of LLVM to flesh out the supported architectures further. But generally transpilation is used as a stopgap in this space, not an ultimate target, for lots of reasons, not least of which is that there are optimizations that are legal in another language but not in C, which transpilation would inhibit.

> Rust is fast, uses little memory, but is verbose and hard to use (borrow checker).

It’s weird to me, but my experience is that picking up the borrow checker was about as hard as the first time I came upon list comprehensions. In essence it’s something new I’d never seen before, but once I got it, it faded into the background noise and is trivial to deal with most of the time, especially since the compiler infers most lifetimes anyway. Resistance to learning is different from being difficult to learn.
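
To illustrate how inferred lifetimes fade into the background, a small sketch (the `first_word` helper is hypothetical): the function borrows from its argument, and the elision rules mean no lifetime annotations are written at all.

```rust
// Elided form; the compiler infers: fn first_word<'a>(s: &'a str) -> &'a str
fn first_word(s: &str) -> &str {
    s.split_whitespace().next().unwrap_or("")
}

fn main() {
    let text = String::from("hello world");
    let w = first_word(&text); // borrow checked, but no annotations needed
    assert_eq!(w, "hello");
}
```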

thomasmg 3 days ago | parent [-]

Well, "transpiling to C" does include GCC and Clang, right? Sure, trying to support _all_ C compilers is nearly impossible, and not what I mean. Quite a few languages support transpiling to C (even Go and Lua), but in my view that alone is not sufficient for a C replacement in places like the Linux kernel: for this to work, a tracing GC cannot be used. And this is what prevents Fil-C and many other languages from being used in that area.

Rust borrow checker: the problem I see is not so much that it's hard to learn, but that it requires constant effort. In Rust, you are basically forced to use it, even if the code is not performance critical. Sure, Rust also supports reference counting GC, but that is more _verbose_ to use... It should be _simpler_ to use in my view, similar to Python. The main disadvantage of Rust, in my view, is that it's verbose. (Also, there is a tendency to add too many features, similar to C++, but that's a secondary concern.)
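
As a sketch of the verbosity in question (a made-up example): shared mutable data in Rust needs explicit Rc/RefCell wrapping, explicit clones, and explicit borrow windows, where a GC language does all of this implicitly.

```rust
use std::cell::RefCell;
use std::rc::Rc;

fn main() {
    // In Python this would just be `alias = shared`; in Rust the sharing
    // (Rc), the count bump (Rc::clone), and the mutation window
    // (borrow_mut) are all spelled out.
    let shared = Rc::new(RefCell::new(vec![1, 2]));
    let alias = Rc::clone(&shared);
    alias.borrow_mut().push(3);
    assert_eq!(*shared.borrow(), vec![1, 2, 3]);
}
```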

estebank 2 days ago | parent | next [-]

> Rust also supports reference counting GC, but that is more _verbose_ to use... It should be _simpler_ to use in my view, similar to Python. The main disadvantage of Rust, in my view, is that it's verbose.

I think there's space for Rust to become more ergonomic, but its goals limit just how far it can go. At the same time I think there's space to take Rust and make a Rust# that goes further toward the Swift/Scala end of the spectrum, where things like auto-cloning of references are implemented first, and that can consume Rust libraries. From the organizational point of view, you can see it as a mix between nightly and editions. From a user's point of view, you can look at it as a mode that makes refactoring faster and onboarding easier, and as a test bed for language evolution. Not being Rust itself, it would also allow for different stability guarantees (you can have breaking changes every year), which also means you can be bolder in trying things out, knowing you're not permanently stuck with them. People who care about performance, correctness, and reuse can still use Rust. People who would be well served by Swift/Scala have access to Rust's libraries and toolchain.

> (Also, there is a tendency to add too many features, similar to C++, but that's a secondary concern).

These two quoted sentiments seem contradictory: making Rust less verbose to interact with reference counted values would indeed be adding a feature.

zozbot234 3 days ago | parent | prev | next [-]

> Sure, Rust also supports reference counting GC, but that is more _verbose_ to use... It should be _simpler_ to use in my view, similar to Python.

If that's what you're looking for, you can use Swift. The latest release has memory safety by default, just like Rust.

galangalalgol 2 days ago | parent | next [-]

Someone, maybe Tolnay?, recently posted a short Go snippet that segfaults because the virtual function table pointer and the data pointer aren't copied atomically or protected by a mutex. The same thing happens in Swift, because neither is thread safe. Swift is also slower than Go unless you pass -Ounchecked, making it even less safe than Go. C#/F# are safer from that particular problem and more performant than either Go or Swift, but have suffered from the same deserialization attacks that Java does. Right now, if you want true memory and thread safety, you need to limit a GC language to zero concurrency, use a borrow checker (i.e. Rust), or be purely functional, which in production would mean Haskell. None of those are effortless, and which is easiest depends on you and your problem. Rust is easiest for me, but I keep thinking that if I just write enough Haskell it will all click. I'm worried about the impact on things other than writing Haskell if my brain starts working that way.
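
For contrast, a rough Rust sketch of why that tear can't happen there (names are made up; this is not the snippet referenced above): a trait object is a (data pointer, vtable pointer) pair, and the Send/Sync rules force shared mutation of it behind a lock, so the two words are always replaced under mutual exclusion.

```rust
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    // Box<dyn Fn() -> i32> is a fat pointer; sharing it mutably across
    // threads without a Mutex simply does not compile in Rust.
    let shared: Arc<Mutex<Box<dyn Fn() -> i32 + Send>>> =
        Arc::new(Mutex::new(Box::new(|| 1)));
    let writer = {
        let s = Arc::clone(&shared);
        thread::spawn(move || {
            // Both words of the fat pointer are replaced while the
            // lock is held, so readers never see a torn pair.
            *s.lock().unwrap() = Box::new(|| 2);
        })
    };
    writer.join().unwrap();
    assert_eq!((*shared.lock().unwrap())(), 2);
}
```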

galangalalgol 2 days ago | parent | next [-]

Replying to myself because a vouch wasn't enough to bring the post back from the dead. They were partially right and educated me; the downvotes were unnecessary. MS did start advising against dangerous deserializers 8 years ago. They were only deprecated three years ago, though, and only removed last year. Some of the remaining ones are only mostly safe, and then only if you follow best practices. So it isn't a problem entirely of the past, but it has gotten a lot better.

Unless you are writing formal proofs, nothing is completely safe. GC languages had found a sweet spot until increased concurrency started uncovering thread safety problems. Rust seems to have found a sweet spot that is usable despite the grumbling. It could probably be made a bit easier: the compiler already knows when something needs to be Send or Sync, and it could just do that invisibly, but that would lead people to code in a way that has lots of locking, which is slow and generates deadlocks too often. This way, the wordiness of shared mutable state steers you towards avoiding it except when a functional design pattern wouldn't be performant. If you have to use a Mutex a lot in Rust, stop fighting the borrow checker and listen to what it is saying.

neonsunset 2 days ago | parent | prev [-]

> C#/f# are safer from that particular problem and more performant than either go or swift, but have suffered from the same deserialization attacks that java does.

They have not in the past 10 years.

thomasmg 2 days ago | parent | prev [-]

Yes. I do like Swift as a language. The main disadvantages of Swift, in my view, are: (A) the lack of an (optional) "ownership" model for memory management, so you _have_ to use reference counting everywhere, which limits performance. This is measurable: I converted some micro-benchmarks to various languages, and Swift does suffer in the memory-management-intensive tasks [1]. (B) Swift is too Apple-centric currently. Sure, this might become a non-issue over time.

[1] https://github.com/thomasmueller/bau-lang/blob/main/doc/perf...

reppap 2 days ago | parent | prev | next [-]

Re: borrow checker

Isn't it just enforcing something you should be doing in every language anyway, i.e. thinking about ownership of data?

zozbot234 2 days ago | parent | next [-]

The borrow checker involves documenting the ownership of data throughout the program. That's what people are calling "overly verbose" and saying it "makes comprehensive large-scale refactoring impractical" as an argument against Rust. (And no it doesn't, it's just keeping you honest about what the refactor truly involves.)

estebank 2 days ago | parent [-]

The annoying experience with the borrow checker is when you follow the compiler errors after making a change until you hit a fundamental ownership problem a few levels away from the original change that precludes it (like ending up with a self-referential borrow). This can bite even experienced developers, depending on how many layers of indirection there are (and sometimes the change that would be adding a single Rc or Cell to a field isn't applicable because it happens in a library you don't control). I do still prefer hitting that wall to having it compile and ending up with rare incorrect runtime behaviour (with any luck, a segfault), but it is more annoying than "it just works because the GC dealt with it for me".

Measter 2 days ago | parent [-]

There are also limits to what the borrow checker is capable of verifying. There will always be programs which are valid under the rules the borrow checker is enforcing, but the borrow checker rejects.

It's kinda annoying when you run into those. I think I've also run into a situation where the borrow checker itself wasn't the issue, but rather the way references were created in a pattern match caused the borrow checker to reject the program. That was also annoying.
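
A classic example of a sound program the current checker rejects, with a common workaround (the types here are made up for illustration): iterating over one field while calling a &mut self method borrows all of *self, even though the method never touches the iterated field.

```rust
struct Acc {
    data: Vec<u32>,
    sum: u32,
}

impl Acc {
    fn bump(&mut self, x: u32) {
        self.sum += x;
    }

    // Rejected today, although `bump` never touches `data`:
    //   for x in &self.data { self.bump(*x); }
    // because iterating `&self.data` borrows all of `*self`.
    fn total(&mut self) {
        // Workaround: temporarily move the field out of `self`.
        let data = std::mem::take(&mut self.data);
        for x in &data {
            self.bump(*x);
        }
        self.data = data;
    }
}

fn main() {
    let mut a = Acc { data: vec![1, 2, 3], sum: 0 };
    a.total();
    assert_eq!(a.sum, 6);
}
```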

vlovich123 2 days ago | parent [-]

Polonius hopefully arrives next year and reduces the burden here further. Partial field borrows would be huge so that something like obj.set_bar(obj.foo()) would work.

vacuity 2 days ago | parent [-]

Given the troubles with shipping Polonius, I imagine that there isn't much more room for improvements in "pure borrow checking" after Polonius, though more precise ways to borrow should improve ergonomics a lot more. You mentioned borrowing just the field; I think self-referential borrows are another.

vacuity 2 days ago | parent | prev [-]

The borrow checker is an approximation of an ideal model of managing things. In the general case, the guidelines that the borrow checker establishes are a useful way to structure code (though not necessarily the only way), but sometimes the borrow checker simply doesn't accept code that is logically sound. Rust is statically analyzed with an emphasis on safety, so that is the tradeoff made for Rust.

vlovich123 2 days ago | parent | prev [-]

> Quite many languages support transpiling to C (even Go and Lua)

Source? I’m not familiar with official efforts here. I see one in the community for Lua but nothing for Go. It’s rare for languages to use this as anything other than a stopgap or a neat community PoC. But my point was precisely this: if you’re only targeting GCC/LLVM, you can just use their backends directly rather than transpiling to C, which only buys you some development velocity at the beginning (as in, it's easier to generate C from your frontend than the intermediate representation) at the cost of worse binary output (since you have to encode the language's semantics on top of the C abstract machine, which isn’t necessarily free). Specifically, this is why transpiling to C makes no sense for Rust: it’s already got all the infrastructure to call the compiler internals directly without having to go through a C frontend.

> Rust borrow checker: the problem I see is not so much that it's hard to learn, but requires constant effort. In Rust, you are basically forced to use it, even if the code is not performance critical

You're only forced to use it when you’re storing references within a struct. In like 99% of all other cases the compiler will correctly infer the lifetimes for you. Not sure when the last time was that you tried to write Rust code.

> Sure, Rust also supports reference counting GC, but that is more _verbose_ to use... It should be _simpler_ to use in my view, similar to Python.

Any language targeting the performance envelope Rust does needs GC to be opt-in. And I’m not sure how much extra verbosity there is in wrapping the type with Rc/Arc, unless you’re referring to the need to throw in a RefCell/Mutex to support in-place mutation as well, but that goes back to there not being an alternative easy way to simultaneously have safety and no runtime overhead.

> The main disadvantage of Rust, in my view, is that it's verbose.

Sure, but compared to what? It’s actually a lot more concise than C/C++ if you consider how much boilerplate dancing there is with header files and compilation units. And if you factor in that few people actually seem to know what the rule of 0 is and how to write exception-safe code, there’s drastically less verbosity, and the verbosity is impossible to use incorrectly. Compared to Python, sure, but then go use something like otterlang [1] which gives you close to Rust performance with a syntax closer to Python. But again, it’s a different point on the Pareto frontier - there’s no one language that could rule them all, because they’re orthogonal design criteria that conflict with each other. And no one has figured out how to have a cohesive GC that transparently and progressively lets you move between no GC, refcounting GC, and tracing GC, despite foundational research a few years back showing that refcounting GC and tracing GC are part of the same spectrum and that high-performing implementations of both tend to converge on the same set of techniques.

[1] https://github.com/jonathanmagambo/otterlang

thomasmg 2 days ago | parent [-]

I agree transpiling to C will not result in the fastest code (and certainly not the fastest toolchain), but having the ability to convert to C does help in some cases. Besides the ability to support some more obscure targets, I found it's useful for building a language, for unit tests [1]. One of the targets, in my case, is the XCC C compiler, which can run in WASM and compile to WASM, and so I built the playground for my language using that.

> transpiling to C (even Go and Lua)

Go: I'm sorry, I thought TinyGo internally converts to C, but it turns out that's not true (any more?). That leaves https://github.com/opd-ai/go2c which uses TinyGo and then converts the LLVM IR to C. So, I'm mistaken, sorry.

Lua: One is https://github.com/davidm/lua2c but I thought eLua also converts to C.

> Your only forced to use it when you’re storing references within a struct.

Well, that's quite often, in my view.

> Not sure when the last time was you tried to write rust code.

I'm not a regular user, that's true [2]. But I do have some knowledge in quite many languages now [3] and so I think I have a reasonable understanding of the advantages and disadvantages of Rust as well.

> Any language targeting the performance envelope rust does needs GC to be opt in.

Yes, I fully agree. I just think that Rust has the wrong default: it uses single ownership / borrowing by _default_, and Rc/Arc is more like an exception. I think most programs could use Rc/Arc by default, and only use ownership / borrowing where performance is critical.

> The main disadvantage of Rust, in my view, is that it's verbose. >> Sure, but compared to what?

Compared to most languages, actually [4]. Rust is similar to Java and Zig in this regard. Sure, we can argue the use case of Rust is different than eg. Python.

[1] https://github.com/thomasmueller/bau-lang [2] https://github.com/thomasmueller/lz4_simple [3] https://github.com/thomasmueller/bau-lang/tree/main/src/test... [4] https://github.com/thomasmueller/bau-lang/blob/main/doc/conc...

vlovich123 2 days ago | parent [-]

> I'm not a regular user, that's true [2]. But I do have some knowledge in quite many languages now [3] and so I think I have a reasonable understanding of the advantages and disadvantages of Rust as well.

That is skewing your perception. The problem is that how you write code just changes after a while, and both things happen: you learn how to write things to leverage the compiler-inferred lifetimes better, and the lifetimes fade into the noise. It only seems really annoying, difficult, and verbose at first, which is what can skew your perception if you don’t actually commit to writing a lot of code and reading others’ code so that you become familiar with it.

> Compared to most languages, actually [4]. Rust is similar to Java and Zig in this regard. Sure, we can argue the use case of Rust is different than eg. Python.

That these are the languages you’re comparing it to is a point in Rust’s favor - it’s targeting a significantly lower level and higher performance than those languages. So Java is not comparable at all. Zig, however nice, is fundamentally not a safe language (more like C with fewer razor blades) and is inappropriate from that perspective. Like I said - it fits a completely different Pareto frontier - it’s strictly better than C/C++ on every front (even with the borrow checker it’s faster and less painful to develop in), and yet people consider it in the same breath as Go (also unsafe and not as fast), Java (safe but not as fast), and Python (very concise but super slow, and the code is often low quality historically).

sakompella 3 days ago | parent | prev [-]

I think transpiling to C is probably the least interesting way to tap into C. FFI is a lot more valuable (and doable).

thomasmg 3 days ago | parent [-]

There are surprisingly many languages that support transpiling to C: Python (via Cython), Go (via TinyGo), Lua (via eLua), Nim, Zig, Vlang. The main advantage (in my view) is to support embedded systems, which might not match your use case.

pjmlp 2 days ago | parent [-]

Eiffel: that is how it has always worked - a VM-based workflow for development (the Melt VM), and compilation via C or C++ for release builds.