Unity's Mono problem: Why your C# code runs slower than it should(marekfiser.com)
260 points by iliketrains a day ago | 154 comments
luaKmua 19 hours ago | parent | next [-]

In general when game development comes up here I tend not to engage as professional gamedev is so different than what other people tend to deal with that it's hard to even get on the same page, but seeing as how this one is very directly dealing with my expertise I'll chime in.

There are a few things off with this post that essentially sound like they come from someone greener when it comes to Unity development (no problem, we all start somewhere).

1. The stated approach of separating the simulation and presentation layers isn't all that uncommon, in fact it was the primary way of achieving performance in the past (though, you usually used C++, not C#).

2. Most games don't ship on the mono backend, but instead on il2cpp (it's hard to gauge how feasible that'd be from this post as it lacks details).

3. In modern Unity, if you want to achieve performance, you'd be better off taking the approach of utilizing the burst compiler and HPC#, especially with what appears to be happening in the in sample here as the job system will help tremendously.

4. Profiling the editor is always a fool's errand; it's so much slower than even a debug build, for obvious reasons.

Long story short, Unity devs are excited for the mentioned update, but it's for accessing modern language features, not particularly for any performance gains. Also, I've seen a lot of mention around GC through this comment section, and professional Unity projects tend to go out of their way to minimize these at runtime, or even sidestep entirely with unmanaged memory and DOTS.

torginus 12 hours ago | parent | next [-]

I think you've unfortunately got suckered in by Unity marketing wholesale, and things would stand to be cleared up a bit.

Unity's whole shtick is that they make something horrible, then improve upon it marginally. The ground reality is that these performance enhancement schemes still fall very much short of just doing the basic sensible thing - using CoreCLR for most code, and writing C++ for the truly perf critical parts.

IL2CPP is a horror-kludge of generated code: it produces low-quality C++ from .NET IL, relying on the optimizing C++ compiler to extract decent performance out of it.

You can check it out: https://unity.com/blog/engine-platform/il2cpp-internals-a-to...

The resulting code gives up every practical convenience of C# (compile speed, iteration time, debuggability), while falling well short of even modern .NET on performance.

The Burst compiler/HPC# plays on every meme perpetuated by modern gamedev culture (structure-of-arrays, ECS), but performance-wise it generally still falls short of competently but naively written C++, or sometimes even .NET C#. (Though tbf, most naive CoreCLR C# code is like 70-80% the speed of hyper-optimized Burst.)

These technologies, needless to say, are entirely proprietary. They require you to architect your code entirely around their paradigms and to use proprietary, non-free libraries that make it unusable outside Unity, among other nasty side effects.

This whole snake-oil salesmanship is enabled by those cooked Unity benchmarks that always compare performance against the (very slow) Mono baseline, not modern C# or C++ compilers.

These are well-established facts, benchmarked time and time again, but Unity marketing somehow still manages to spread the narrative of their special sauce compilers somehow being technically superior.

But it seems the truth has been catching up to them, and even they realized they have to embrace CoreCLR - which is coming soonTM in Unity. I think it's going to be a fun conversation when people realize that their regular Unity code using CoreCLR runs just as fast or faster than the kludgey stuff they spent 3 times as much time writing, that Unity has been pushing for more than a decade as the future of the engine.

pjmlp 10 hours ago | parent | next [-]

The biggest issue is that Unity is, at the same time, the guiding beacon for doing game development in C#, which Microsoft refuses to support; compare how much effort Apple puts into game kits for Swift versus what the DirectX team does.

Efforts like Managed DirectX and XNA were driven by highly motivated individuals, and were quickly killed as soon as those individuals changed role.

One could blame them for leaving the project, or see that without them, management did not care enough to keep those efforts going.

While at the same time, since Unity relies on such alternative approaches, it also creates a false perception on how good .NET and C# are in reality, for those devs that never learned C# outside Unity.

In a similar way, it is like those devs that learnt Java on Android and got sold on the Kotlin-vs-Java marketing from Google, taking Android Java as their perception of what the language is all about.

Going back to game development and .NET, at least Capcom has the resources to have their own fork of modern .NET, e.g. Devil May Cry for the Playstation was done with it.

"RE:2023 C# 8.0 / .NET Support for Game Code, and the Future"

https://www.youtube.com/watch?v=tDUY90yIC7U

torginus 9 hours ago | parent [-]

Very interesting talk, will definitely watch when I have the time!

XNA was very influential for me as well - when I was in high school, I tried to get into 3D game dev, and I started with Frank D. Luna's otherwise excellent book on DirectX gamedev - man, that thing was a tome. However, having to learn DirectX, C++, linear algebra, shaders, the WIN32 API, COM, etc. all at the same time (which, to be fair, the book explained very thoroughly) was just too much for me back then, not to mention the absolute pain of trying to get models and assets into the game.

Later on I discovered XNA, and it was a breath of fresh air for me - a much easier language, good IDE support, a decent way of importing assets, and a much nicer API made it so much easier to get started.

And the truly great thing about it was that it didn't dumb things down or hide stuff from the developer - it merely provided sane defaults, and utility functions so that you didn't have to engage with all that complexity at once.

I think Unity was also great, at least in the beginning (the first decade of its existence), but its chief issue is that Unity's 'dialect' of C# was very different from how you programmed in regular C# (or most any other engine) - my feeling is that Unity should've spun up their own language/runtime rather than trying to make C# into what it wasn't designed to be.

debugnik 5 hours ago | parent | next [-]

> Unity should've spun their own language/runtime

They did, and that's why their C# API is such an oddball. Unity used to support 3 .NET languages: UnityScript, Boo, and C#. UnityScript started as the recommended one, but I believe it was just a JS-like syntax for Boo's semantics. Eventually C# users dominated, and UnityScript and Boo got deprecated and removed, but Unity's .NET API was left with all the quirks from their UnityScript era.

pjmlp 9 hours ago | parent | prev [-]

They did, hence Boo:

https://en.wikipedia.org/wiki/Boo_(programming_language)

I would argue that C# has always been a good alternative for games, starting with Arena Wars, the problem was Microsoft not being serious about AOT or low level programming, because that was left for C++/CLI.

https://en.wikipedia.org/wiki/Arena_Wars

Here is the person responsible for pushing XNA, even though management wasn't into it.

"The billion dollar decision that launched XNA"

https://youtu.be/wJY8RhPHmUQ?si=_3pic4pEiOlqQzvm

When she left Microsoft, XNA was promptly replaced by DirectXTK, because C++ is the only true way for the DirectX team:

https://walbourn.github.io/directxtk/

animal531 5 hours ago | parent | prev | next [-]

This part of your comment is wrong on many levels: "The Burst compiler/HPC# plays on every meme perpetuated by modern gamedev culture (structure-of-arrays, ECS), but performance wise, generally still falls short of competently, but naively written C++ or even sometimes .NET C#. (Though tbf, most naive CoreCLR C# code is like 70-80% the speed of hyper-optimized Burst)".

C++ code can be much faster than C#, but modern C# has become a lot better with all the time that's been invested into it. You can't just take a random bit of C++ code and assume it's going to beat an optimized bit of C#; those days are long past.

Secondly, the whole point of Burst is that it enables vectorization, which means that if you've converted code to it and it's used properly, it will emit instructions up to 256 bits wide (from what I remember it doesn't use AVX-512). That means it's going to be significantly faster than standard C# (and C).

If the author is generating, for example, maps, and it takes 80 seconds with Mono, then getting to between 10-30 seconds with Burst is easy to achieve just due to its thread usage. Once you then add in focused optimizations that make use of vectorization, you can get that down to probably 4-odd seconds (the actual numbers really depend on what you're doing: if it's a numerical calculation you can easily get an 80x improvement, but if there's a lot of logic being applied then you'll be stuck at e.g. 8x).

For the last point, modern C# can't just magically apply vectorization everywhere, because developers intersperse far too much logic. It has a lot of libraries etc. that have become a lot more performant, but again, you can't compare that directly to Burst. To compare with Burst you have to do a comparison using System.Numerics, etc.
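To make the structure-of-arrays point concrete, here is a minimal sketch (not from the article) of the kind of loop auto-vectorization and `System.Numerics.Vector<T>` can handle: component arrays laid out contiguously, with a scalar tail for the remainder. Lane width depends on the CPU (8 floats with AVX2).

```csharp
using System;
using System.Numerics;

// AoS layout for contrast: interleaved fields defeat straight SIMD loads.
struct ParticleAoS { public float X, Y, Z, Mass; }

static class Simd
{
    // SoA: each component in its own contiguous array -> SIMD-friendly.
    public static void Integrate(float[] x, float[] vx, float dt)
    {
        int i = 0;
        int width = Vector<float>.Count;              // hardware lane count
        var vdt = new Vector<float>(dt);
        for (; i <= x.Length - width; i += width)
        {
            var pos = new Vector<float>(x, i);        // load `width` floats
            var vel = new Vector<float>(vx, i);
            (pos + vel * vdt).CopyTo(x, i);           // fused update, one store
        }
        for (; i < x.Length; i++) x[i] += vx[i] * dt; // scalar tail
    }
}
```

The same logic written against the AoS struct would force the compiler to gather strided fields, which is exactly the situation where both Burst and the CoreCLR JIT lose their vectorization opportunities.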

doctorpangloss 7 hours ago | parent | prev [-]

While I get that you’re making a stylized comment, it’s a big drag. It’s one of those, “everyone is an idiot except me” styles. By all means, make a game engine that people will adopt based on CoreCLR (or whatever).

It’s not saying much that everything has tradeoffs. During the “decade” you are talking about, CoreCLR didn’t have a solution for writing anything for iOS, and today it isn’t a solution for writing games for iOS. What you are calling kludges was ultimately a very creative solution. Usually the “right” solution, the nonexistent one you are advocating for, ends with Apple saying no.

That is why Unity is a valuable piece of software and a big company: not because of C# runtimes, but because they get Apple and Nintendo to say yes in a world where they usually say no.

torginus 5 hours ago | parent [-]

I am sorry that I came across as abrasive; however, the points I raised are, as far as I know, factual (and echoed by others' comments). I don't think ignoring them would be constructive.

During the 'decade' where CoreCLR was not a solution, Mono (Xamarin) still was - in fact their entire commercial appeal (before they were bought out by Microsoft) was that they provided an AOT compiled .NET for mobile devices.

Unity got stuck on an ancient version compared to the more modern Mono implementations (I think this is the case to this day), and Unity's version was much, much slower.

Afair, most of the time the MS version had them (Xamarin) beat, but the difference between the two wasn't huge, especially compared to Unity's Mono. It was an AOT runtime; not sure about Nintendo, but their entire business model hinged on being able to ship to Apple's App Store.

I hate to dig up the past, but Unity's long-standing issue was their ancient GC (which was not incremental then), combined with a simple compiler bug that made every foreach loop allocate an iterator on the heap. The combination of the two meant that basically every non-trivial Unity game that used foreach extensively stuttered. This simple compiler fix took them years to upstream, with people hacking around the issue by replacing the shipped compiler with a patched one.
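For readers who haven't hit this: the allocation appears whenever a struct enumerator gets boxed to an interface. A small sketch of the difference (modern compilers only box in the interface-typed case; the old Unity compiler effectively boxed even the concrete case):

```csharp
using System.Collections.Generic;

static class ForeachAlloc
{
    static int SumConcrete(List<int> xs)
    {
        int sum = 0;
        // List<int>.GetEnumerator() returns a struct enumerator:
        // no heap allocation on a correct compiler.
        foreach (var x in xs) sum += x;
        return sum;
    }

    static int SumInterface(IEnumerable<int> xs)
    {
        int sum = 0;
        // Enumerating through IEnumerable<T> boxes the struct enumerator
        // onto the heap -- the per-loop garbage described above.
        foreach (var x in xs) sum += x;
        return sum;
    }
}
```

With a non-incremental GC, that one boxed enumerator per loop per frame is precisely the kind of garbage that accumulates into visible stutter.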

And I still stand by my point - if Unity went with an upstream Mono, and made it convenient and easy to link with C++ code, it'd have had the same or better performance out of the box as they had with their exotic stuff.

And I also stand by the claim that Unity's marketing was deceptive - HPC#/Burst/DOTS/ECS was marketed as some gateway to previously unheard-of performance, but when people went around benchmarking it, basic sensible C++ had it beat (I can't find the benchmarks right now, but multithreaded DOTS was slower than single-threaded C++ in one simulation).

What I said about Burst holds up as well.

These are not tradeoffs but bad technical decisions, whose legitimacy can only be defended by ignoring the most sensible alternatives.

doctorpangloss 26 minutes ago | parent [-]

i suppose you could speculate, why do i feel Unity's marketing isn't deceptive; why do I think it's a pretty well written game engine; and why am i ignoring these points about C# performance? because i'm stupid? i can't really say, because i'll be downvoted haha. a lot of smart, wise successful game developers choose unity...

iliketrains 18 hours ago | parent | prev [-]

Author here, thanks for your perspective. Here are some thoughts:

> approach of separating the simulation and presentation layers isn't all that uncommon

I agree that some level of separation is not that uncommon, but games usually depend on things from their respective engine, especially datatypes (e.g. Vector3) or math libraries. The reason I mention that our game is unique in this way is that its non-rendering code does not depend on any Unity types or DLLs. And I think that is quite uncommon, especially for a game made in Unity.

> Most games don't ship on the mono backend, but instead on il2cpp

I think this really depends. If we take absolute numbers, roughly 20% of Unity games on Steam use IL2CPP [1]. Of course many simple games won't be using it, so the sample is skewed if we want to measure "how many players play games with IL2CPP tech". But there are still many, and higher performance of managed code would certainly have an impact.

We don't use IL2CPP because we use many features that are not compatible with it. For example DLC and mods loading at runtime via DLLs, reflection for custom serialization, things like [FieldOffset] for efficient struct packing and for GPU communication, etc.
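As an illustration of the [FieldOffset] use mentioned above, here is a hedged sketch (field names are invented, not from the game's actual code) of an explicit-layout struct whose byte positions are pinned, e.g. to match a GPU buffer layout or to reinterpret bytes union-style:

```csharp
using System.Runtime.InteropServices;

// Explicit layout: every field sits at a fixed byte offset, and two
// fields may alias the same bytes (a C-style union).
[StructLayout(LayoutKind.Explicit, Size = 16)]
struct PackedInstance
{
    [FieldOffset(0)]  public float X;
    [FieldOffset(4)]  public float Y;
    [FieldOffset(8)]  public float Z;
    [FieldOffset(12)] public uint  Flags;        // packed into the 4th lane

    [FieldOffset(12)] public float FlagsAsFloat; // same 4 bytes, reinterpreted
}
```

Because the layout is fixed at 16 bytes, an array of these can be uploaded to the GPU as-is, with no per-element marshalling.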

Also, having managed code makes the game "hackable". Some modders use IL injection to hook into places where our APIs don't allow. This is good and bad, but so far it has allowed modders to progress faster than we expected, so it's a net positive.

> In modern Unity, if you want to achieve performance, you'd be better off taking the approach of utilizing the burst compiler and HPC#

Yeah, and I really wish we would not need to do that. Burst and HPC# are messy and add a lot of unnecessary complexity and artificial limitations.

The thing is, if Mono and .NET were both equally "slow", then sure, let's do some HPC# tricks to get high performance, but it is not! Modern .NET is fast, but Unity devs cannot take advantage of it, which is frustrating.

By the way, the final trace with parallel workers was just C#'s worker threads and thread pool.

> Profiling the editor is always a fools errand

Maybe, but we (devs) spend 99% of our time in the editor. And perf gains from the editor usually translate to the release build with very similar percentage gains (I know this is generally not true, but in my experience it is). We have done many significant optimizations before, and measurements from the editor were always a useful indicator.

What is not very useful is Unity's profiler, especially with "deep profile" enabled. It adds a constant cost per method, highly exaggerating the cost of small methods. So we have our own tracing system that does not do this.

> I've seen a lot of mention around GC through this comment section, and professional Unity projects tend to go out of their way to minimize these at runtime

Yes, minimizing allocations is key, but there are many cases where they are hard to avoid. Things like string processing for UI generate a lot of garbage every frame. And there are APIs that simply don't have allocation-free options. CoreCLR would allow us to further cut down on allocations and would make better APIs available.
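One concrete modern-.NET technique for the UI-string case is formatting into a reused buffer with `TryFormat` instead of building strings. A minimal sketch, with the caveat that the span-based APIs assumed here are part of modern .NET and may not be available on older Unity/Mono profiles:

```csharp
using System;

static class UiText
{
    // Reused scratch buffer: formatting writes characters here instead
    // of allocating a new string every frame.
    static readonly char[] Buffer = new char[32];

    public static ReadOnlySpan<char> FormatScore(int score)
    {
        // int.TryFormat writes the digits directly into the span.
        score.TryFormat(Buffer, out int written);
        return Buffer.AsSpan(0, written);   // zero heap allocations
    }
}
```

A span-aware text API (or TextMeshPro's SetCharArray-style overloads) can then consume the span without ever materializing a string.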

Just the fact that the current GC is non-moving means that the memory consumption goes up over time due to fragmentation. We have had numerous reports of "memory" leaks where players report that after periodic load/quit-to-menu loops, memory consumption goes up over time.

Even if we got fast CoreCLR C# code execution, these issues would prevail, so an improved GC would be next on the list.

[1] https://steamdb.info/stats/releases/?tech=SDK.UnityIL2CPP

animal531 5 hours ago | parent | next [-]

What I agree on is that if we had modern .NET available we'd get a free 2-3x improvement, it would definitely be great. BUT having said that, if you're into performance but unwilling to use the tools available then that's on you.

From the article it seems that you're using some form of threading to create things, but you don't really specify which and/or how.

The default C# implementations are usually quite poor performance-wise. If you used, for example, the default thread pool, I can definitively say that I've achieved a 3x speedup over it with my own thread pool implementation, which would yield about the same 30s -> 12s reduction.
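The shape of such a custom pool is usually a handful of long-lived threads pulling from a shared queue, which sidesteps the default ThreadPool's thread-injection heuristics. A minimal sketch (a production version would add work stealing, batching, and proper shutdown):

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading;

// Dedicated worker pool: N pinned threads consuming a blocking queue.
sealed class FixedPool : IDisposable
{
    readonly BlockingCollection<Action> _work = new();
    readonly Thread[] _threads;

    public FixedPool(int count)
    {
        _threads = new Thread[count];
        for (int i = 0; i < count; i++)
        {
            _threads[i] = new Thread(() =>
            {
                // Blocks until work arrives; exits when CompleteAdding fires.
                foreach (var job in _work.GetConsumingEnumerable())
                    job();
            }) { IsBackground = true };
            _threads[i].Start();
        }
    }

    public void Enqueue(Action job) => _work.Add(job);

    public void Dispose() => _work.CompleteAdding();
}
```

Whether this beats the built-in pool depends heavily on job granularity; the win the comment describes typically comes from avoiding oversubscription and keeping workers hot on game-sized workloads.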

Burst threading/scheduling is also in general a lot better than the standard one. If I feed it a logic-heavy method (so no vectorization), I can beat it by a bit, but nowhere near the 3x I get over the normal thread pool.

But then, if your generation is number-heavy (vs. logic-heavy), having used Burst you could probably drop that calculation time down to 2-3 seconds (the same as if you used 256-bit Vector numerics).

Finally, you touch on GC; that's definitely a problem. The Mono variant has been upgraded by them over time, but C# remains C#, which was never designed for gaming. Even if we had access to the modern one, there would still be issues with it. As with all the other C# libraries etc., they never considered gaming a target, where what we want is extremely fast access/latency with no hiccups. C# in the business world doesn't really care if it loses 16ms (or 160ms) here and there due to garbage; it's usually not a problem there.

Coding in Unity means going over every instance of allocation outside of startup and eliminating them. You mention APIs that still need to allocate, which I've never run into myself. Again, modern .NET isn't going to simply make those go away.

mrsmrtss 3 hours ago | parent [-]

Regarding GC pauses, there is an interesting alternative GC with ultra-low pauses for .NET called Satori. It's primarily discussed here https://github.com/dotnet/runtime/discussions/115627, and the GC itself can be found here https://github.com/VSadov/Satori

timmytokyo 18 hours ago | parent | prev | next [-]

>We don't use IL2CPP because we use many features that are not compatible with it. For example DLC and mods loading at runtime via DLLs, reflection for custom serialization, things like [FieldOffset] for efficient struct packing and for GPU communication, etc.

FieldOffset is supported by IL2CPP at compile time [0]. You can also install new DLLs and force the player to restart if you want downloadable mod support.

It's true that you can't do reflection for serialization, but there are better, more performant alternatives for that use case, in my experience.

[0] https://docs.unity3d.com/Manual/scripting-restrictions.html

iliketrains 17 hours ago | parent | next [-]

> You can also install new DLLs and force the player to restart if you want downloadable mod support.

I am not aware of an easy way to load (managed) mods as DLLs to IL2CPP-compiled game. I am thinking about `Assembly.LoadFrom("Mod.dll")`.

Can you elaborate how this is done?
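(For context, the loading pattern I mean is roughly the following sketch. `IMod` and its entry point are hypothetical names for illustration; the key part is that this only works on a managed backend, since IL2CPP has no IL interpreter to load the assembly into.)

```csharp
using System;
using System.Reflection;

// Contract the game exposes to mods (hypothetical).
public interface IMod { void OnLoad(); }

static class ModLoader
{
    public static void Load(string path)
    {
        // Load the mod assembly at runtime -- Mono/CoreCLR only.
        Assembly asm = Assembly.LoadFrom(path);

        // Instantiate every concrete type implementing the mod contract.
        foreach (Type t in asm.GetTypes())
        {
            if (typeof(IMod).IsAssignableFrom(t) && !t.IsAbstract)
                ((IMod)Activator.CreateInstance(t)!).OnLoad();
        }
    }
}
```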

> there are better, more performant alternatives for that use case, in my experience.

We actually use reflection to emit optimal code for generic serializers that avoid boxing and increase performance.
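The general shape of that technique, as a sketch (a hypothetical helper, not our actual serializer, which emits far more than field getters): pay the reflection cost once up front to compile a strongly typed delegate, then call it per object with no reflection or boxing.

```csharp
using System;
using System.Linq.Expressions;
using System.Reflection;

static class Accessor
{
    // Compiles a typed getter for a public field once; subsequent calls
    // are plain delegate invocations with no reflection on the hot path.
    public static Func<T, TField> CompileGetter<T, TField>(string fieldName)
    {
        FieldInfo field = typeof(T).GetField(fieldName)
            ?? throw new ArgumentException($"No public field '{fieldName}'");

        ParameterExpression obj = Expression.Parameter(typeof(T), "obj");
        Expression body = Expression.Field(obj, field);
        return Expression.Lambda<Func<T, TField>>(body, obj).Compile();
    }
}
```

A serializer built this way caches one compiled delegate per field per type, so the per-object write loop never touches `FieldInfo.GetValue` or boxes value types.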

There may be alternatives, we explored things like FlatBuffers and their variants, but nothing came close to our system in terms of ease of use, versioning support, and performance.

If you have some suggestions, I'd be interested to see what options are out there for C#.

> FieldOffset is supported by IL2CPP at compile time

You are right, I misremembered this one; you cannot get it via reflection, but it works.

timmytokyo 17 hours ago | parent [-]

>I am not aware of an easy way to load (managed) mods as DLLs to IL2CPP-compiled game. I am thinking about `Assembly.LoadFrom("Mod.dll")`.

Ah, I was thinking native DLLs (which is what we're using on a project I'm working on). I think you're right that it's impossible for an IL2CPP-built player to interoperate with a managed (Mono) DLL.

>If you have some suggestions [re: serialization], I'd be interested to see what options are out there for C#.

We wrote a custom, garbage-free JSON serializer/deserializer that uses a fluent API style. We also explored a custom codegen solution (similar to FlatBuffers or protobuf) but abandoned it because the expected perf (and ergonomic) benefits would have been minor. The trickiest part with Unity codegen is generating code that creates little to no garbage.
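The fluent, allocation-light style being described looks roughly like this sketch (illustrative only; a truly garbage-free version would write into a pooled char/byte buffer rather than a StringBuilder):

```csharp
using System.Text;

// Fluent JSON writer: appends into one reused builder, so field writes
// never create intermediate strings.
sealed class JsonWriter
{
    readonly StringBuilder _sb = new(4096);
    bool _first = true;

    public JsonWriter BeginObject() { _sb.Append('{'); _first = true; return this; }
    public JsonWriter EndObject()   { _sb.Append('}'); return this; }

    public JsonWriter Field(string name, int value)
    {
        if (!_first) _sb.Append(',');
        _first = false;
        // StringBuilder.Append(int) formats digits without an intermediate string.
        _sb.Append('"').Append(name).Append("\":").Append(value);
        return this;
    }

    public override string ToString() => _sb.ToString();
}

// Usage: new JsonWriter().BeginObject().Field("hp", 100).EndObject()
```

The one unavoidable allocation here is the final `ToString()`; callers that can consume the builder (or a span over it) directly avoid even that.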

mastax 17 hours ago | parent | prev [-]

Does unity have source generators support? Could make for a good alternative to reflection.

CreepGin 17 hours ago | parent [-]

Yes and it works well IME. https://docs.unity3d.com/6000.3/Documentation/Manual/roslyn-...

Now I think about it, writing SourceGenerators is actually a great fit for AI agents.
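For anyone who hasn't written one, the minimal shape of an incremental generator is small. This sketch (illustrative; real generators inspect syntax and semantic models instead of emitting a fixed file) is the kind of DLL Unity's Roslyn support picks up:

```csharp
using Microsoft.CodeAnalysis;

// Minimal incremental source generator: contributes one generated file
// to every compilation that references the generator assembly.
[Generator]
public sealed class HelloGenerator : IIncrementalGenerator
{
    public void Initialize(IncrementalGeneratorInitializationContext context)
    {
        context.RegisterPostInitializationOutput(ctx =>
            ctx.AddSource("Hello.g.cs",
                "static class Hello { public const string Msg = \"generated\"; }"));
    }
}
```

Because the code is emitted at compile time, it works under IL2CPP/AOT where runtime reflection-emit does not, which is exactly why it's a good alternative to reflection-based serialization.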

luaKmua 17 hours ago | parent | prev [-]

Hey there, always appreciate a dialog

Per the separation, I think this was far more common both in older Unity games and in professional settings.

For games shipping on Mono on Steam, that statistic isn't surprising to me, given the amount of indie games on there and Unity's prevalence in that environment. My post in general should be read in a professional setting (i.e., career game devs). The IL injection is a totally reasonable consideration, but it does (currently) lock you out of platforms where AOT is a requirement. You can also support mods/DLC via Addressables, and there has been improvement in modding tools for IL2CPP; however, you're correct that it's not nearly as easy.

Going to completely disagree that Burst and HPC# are unnecessary and messy, for a few reasons. The restrictions that HPC# enforces are essentially the same ones you already accept if you want to write performant C#: you use Unity's allocators for your memory up front and then operate on those. Depending on how you do this, you can either eliminate your per-frame allocations or likely eliminate some of the fragmentation you were referring to. Modern .NET is fast, of course, but it's not Burst-compiled HPC# fast; there are so many things the compiler and LLVM can do based on those assumptions. Agreed that C# strings are always a pain if you actually need to interpolate things at runtime. We always try to avoid these as much as we can, and intern common ones.
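For readers unfamiliar with the pattern being debated, this is roughly what it looks like: unmanaged data in a NativeArray, processed by a Burst-compiled parallel job. (Sketch only; it requires Unity's Burst, Jobs, and Collections packages and won't compile outside a Unity project.)

```csharp
using Unity.Burst;
using Unity.Collections;
using Unity.Jobs;

// Burst-compiled job: no managed objects, only unmanaged/blittable data.
[BurstCompile]
struct ScaleJob : IJobParallelFor
{
    public NativeArray<float> Values;
    public float Factor;

    public void Execute(int i) => Values[i] *= Factor;
}

// Scheduling site (e.g. inside a MonoBehaviour):
//   var data = new NativeArray<float>(n, Allocator.TempJob);
//   new ScaleJob { Values = data, Factor = 2f }.Schedule(n, 64).Complete();
//   data.Dispose();   // caller owns the native allocation's lifetime
```

Note that the memory lifetime is manual (`Allocator.*` plus `Dispose`), which is both the restriction and the point: nothing in the job can touch the GC heap.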

The fragmentation you mention after large operations is (in my experience) indicative of save/load systems, or possibly level-init code, doing tons of allocations that cause the heap to froth up. That, or tons of reflection stuff, which is also usually a no-no for runtime perf code. The memory profiler used to have a helpful fragmentation view for that, but Unity removed it, unfortunately.

Rohansi 16 hours ago | parent | next [-]

> Modern .Net is fast, of course, but it's not burst compiled HPC# fast.

Sure, but the fact that it is competitive with Burst makes it disappointing. If I'm going to go through the trouble of writing code in a different (and not portable!) way then it better be significantly faster. Especially when most code cannot be written as Burst jobs unless you use their (new) ECS.

https://github.com/tbg10101/dotnet-burst-comparison

gr4vityWall 3 hours ago | parent [-]

I wonder what those benchmarks would look like with .NET 10 and an AVX512-capable CPU.

iliketrains 14 hours ago | parent | prev | next [-]

> Going to completely disagree that Burst and HPC# are unnecessary and messy.

Making managed code Burst-compatible comes with real constraints that go beyond "write performant C#". In Burstable code, you generally can't interact with managed objects or GC-dependent APIs, so the design is pushed towards unmanaged structs in native collections. And this design spreads: the more logic is to be covered by Burst, the more things have to be broken down into native containers of unmanaged structs.

I agree that designing things in a data-oriented way is good, but why force this additional boundary and these special types on devs instead of just letting them write it in C#? Writing Burstable code can increase complexity: one has to manage memory/lifetimes, data layout, job-friendly boundaries, copying data between native and managed collections, etc., not just "write fast C#".

In a complex simulation game, my experience is that there are definitely things that fit the "raw data, batch processing" model, but not all gameplay/simulation logic does. Things like inheritance, events, graphs, AI (the dumb "game" version, no NN), UI, exceptions, etc. And on top of it all, debugging complications.

Wouldn't you be relieved by the announcement: "C# is now as fast as Burst, have fun!"? You'd be able to do the same data-oriented design where necessary, but keep all the other things standing by when needed. It's so close, yet so far!

> The fragmentation you mention

What you say makes sense. I've actually spent a lot of time debugging this and I did find some "leaks" where references to "dead objects" were keeping them from being GC'd. But after sorting all these out, Unity's memory profiler was showing that "Empty Heap Space" was the culprit, that one kept increasing after every iteration. My running theory is that the heap is just more and more fragmented, and some static objects randomly scattered around it are keeping it from being shrunk. ¯\_(ツ)_/¯

CreepGin 17 hours ago | parent | prev [-]

Yeah to me, Burst+Jobs and Compute shaders are so easy to work with in Unity, I haven't felt the need to squeeze more perf out of C# in a long time.

For modding and OTA stuff I just use a scripting language with good interop (I made OneJS partially for this purpose). No more AOT issue and no more waiting for domain reload, etc.

reactordev 20 hours ago | parent | prev | next [-]

Unity has a unity problem.

While it’s easy to get in and make something (it’s got all the bells and whistles) it also suffers from the monolith problem (too many features, old code, tech debt).

The asset store is gold, but their tech feels less refined. It's leaps and bounds beyond where it was when it started, but it still has this empty feel to it without heavy script modifications.

There is the problem. The scripting layer designed around Mono doesn't translate as well to CoreCLR, and using the same Behaviour interface gets a little more complicated.

There are times (even with my own engine) when one must let go of the old and begin anew. DX7->DX9, DX9->OpenGL, OpenGL->Vulkan, Vulkan->WebGPU.

EDIT: I was just thinking, returning to this a couple of minutes later, that if Unity wanted to prove they really care about their core, they would introduce a complete revamp of the editor like Blender did for 3.x. Give more thought to the level editors and prefab makers. Give different workflow views: Editing / Animation / Scripting / Rendering / Post.

As it stands now, it’s all just a menu item that spawns a thing in a single view with a 1999 style property window on the side like Visual Studio was ever cool.

lossyalgo 6 hours ago | parent | next [-]

The biggest problem IMO is that they never finish new features. They start the work on implementing new tech/assets/plugins, then abandon them halfway through, just as they are becoming useful for prod. There are tons of would-be-amazing tools, but they are all stuck at "0.x-preview" versions, and eventually, after 5-10 years, they either stop working completely or are overshadowed by newer, shinier assets, which often reinvent the wheel and/or do things worse than the previous attempt. I stopped trying out new tech until it's 1.0 (or preferably later; 2.0+ is safer) because I'm afraid of being bitten (again) by becoming dependent on abandoned plugins and having to update to something else at some point. It's a lose-lose-lose proposition: Unity throws away time and money reinventing plugins, we throw away time and money porting to functioning tools, and customers lose time discovering bugs caused by outdated plugins/assets that produce weird errors that are hard to track down.

reactordev 5 hours ago | parent [-]

I think what you're describing is a symptom of the issue. The issue is talent churn. Bright folks who start a feature get poached and leave, and the feature dies on the vine. Or the feature, which started off great in its own little corner of the engine, was a mess to integrate with the rest of Unity due to its architecture and the fear of breaking backward compatibility.

The problem now comes not from tech previews but from the quarterly forced releases, because it's now a subscription. The entire business model is flawed and outdated. Same with Unreal. The difference is that Unreal's tech is exponentially better than Unity's architecturally, and they know their audience very well, so they were able to get in with virtual stage production, games, movies, you name it. They were successful in expanding beyond their core. Unity can't. They don't know how.

It's a tough spot to be in. I knew my place when I shut mine down and open sourced it. I couldn't compete. For Unity, they have a loyal fanbase that wants them to succeed but I'm afraid it's going to take breaking everything they know in order to do it.

jayd16 20 hours ago | parent | prev | next [-]

I think the major problem with Unity is that they're just rudderless. They continue to buy plugins and slap in random features, but it's really just in service of more stickers on the box, not a holistic plan.

They've tried and failed to make their own games and they just can't do it. That means they don't have the internal drive to push a new design forward. They don't know what it takes to make a game. They just listen to what people ask for in a vacuum and ship that.

A lot of talented people at Unity but I don't expect a big change any time soon.

whstl 9 hours ago | parent | next [-]

I've seen it happening time and time again in similar companies, and this is a symptom of a problem at the upper levels, which means it won't change.

C-level set goals are abstract and generic, or sometimes plain naive, and this is often coming from generic requests from the board or VCs.

"Hire as many developers as you can, even if there's no work right now", a Softbank request.

"Don't build, just acquire similar products", from a Brazilian capital management that ended up killing that company.

"Kill this team, their product doesn't sell. I don't care if all our other product depends on theirs", from Francisco Partners.

Employees who stay can't really rock the boat, so it self-selects for non-boat-rocking people. Rockstars who stay must adapt or suffer. Eventually you get so many bad people that you do layoffs.

reactordev 7 hours ago | parent [-]

The common thread in all of them is that the CEO listened to other people’s advice instead of leading themselves. When a ship loses its captain…

whstl 4 hours ago | parent [-]

That's a good point.

If the CEO is just a parrot repeating what the board says, you get a company full of parrots too. No pirate to guide the ship.

reactordev 4 hours ago | parent [-]

The best CEOs I’ve seen balance board requests with what they themselves want to do and where they see their market going. Standing on the shoreline when the armada of prospects come sailing in for provisions.

When there’s a gold rush, sell pickaxes and shovels.

reactordev 19 hours ago | parent | prev [-]

The talent left ship years ago. The core engine’s graphics team is all that’s really left.

They also hired Jim Whitehurst as CEO after the previous CEO crapped the bed. Then Jim left as he just didn’t understand the business (he’s probably the one responsible for the “just grab it from the store” attitude). Now they have this stinking pile of legacy they can’t get rid of.

JBits 13 hours ago | parent [-]

Has the talent moved to anywhere in particular?

reactordev 5 hours ago | parent [-]

Nicholas Francis manages a fund for AgTech after a decade making games with Unity (the engine he made). He left in 2013 so I don't associate him with Unity today but it was his product.

2018 We get the new HDRP and Shader Graph.

2019 there were sexual harassment lawsuits.

The other co-founders left after they announced runtime fees in 2023 and the community fled.

2024 the URP team basically imploded. Leaving everything basically flat.

DiabloD3 20 hours ago | parent | prev | next [-]

That last step is nonsensical: WebGPU is a shim, a Vulkan-like layer (in the sense that WebGL is GLES-like), that lets you use the native GPGPU-era APIs of your OS.

On a "proper OS", your WebGPU is 1:1 translating all calls to Vulkan, and doing so pretty cheaply. On Windows, your browser will be doing this depending on GPU vendor: Nvidia continues to have not amazing Vulkan performance, even in cases where the performance should be identical to DX12; AMD does not suffer from this bug.

If you care about performance, you will call Vulkan directly and not pay for the overhead. If you care about portability and/or are compiling to a WASM target, you're pretty much restricted to WebGPU and you have to pay that penalty.

Side note: Nothing stops Windows drivers or Mesa on Linux from providing a WebGPU impl, thus browsers would not need their own shim impl on such drivers and there would be no inherent translation overhead. They just don't.

MindSpunk 16 hours ago | parent | next [-]

WebGPU is far from cheap and has to do a substantial amount of extra work to translate to the underlying API in a safe manner. It's not 1:1 with Vulkan and diverges in a few places. WebGPU uses automatic synchronization and must spend a decent amount of CPU time resolving barriers.

You can't just ship a WebGPU implementation in the driver because the last-mile of getting the <canvas> on screen is handled by the browser in entirely browser specific ways. You'd require very tight coordination between the driver and browsers, and you still wouldn't be saving much because the overhead you get from WebGPU isn't because of API translation, rather it's the cost to make the API safe to expose in a browser.

reactordev 7 hours ago | parent [-]

We already do this by exposing the canvas surface with a semaphore lock. The browser can flip the surface to the canvas (or your app can flip it onto a window surface).

It’s just a HINSTANCE pointer.

You’re right about the waiting, but that’s entirely app driven. Browsers don’t want to render at 144fps but rather wait until drawing has occurred in order to update the view.

wgpu, dawn, already support drawing to arbitrary surfaces (not just a canvas but any window surface).

fulafel 12 hours ago | parent | prev | next [-]

WebGL and WebGPU must robustly defend against malicious web content making the API calls, just like other browser JavaScript APIs, which makes for some overhead and resulted in leaving out some features of the underlying APIs.

Vulkan has also evolved a lot and WebGPU doesn't want to require new Vulkan features, lacking for example bindless textures, ray tracing etc.

StilesCrisis 17 hours ago | parent | prev | next [-]

I wouldn't call it nonsensical to target WebGPU. If you aren't on the bleeding edge for features, its overhead is pretty low and there's value in having one perfectly-consistent API that works pretty well everywhere. (Similar to OpenGL)

reactordev 19 hours ago | parent | prev [-]

I’m foreshadowing a future where they do. Please don’t kill the dream.

DiabloD3 19 hours ago | parent [-]

I'm not killing it, but there is no C API written verbatim. WebGL was fucky because it was a specific version of GLES that never changed and you couldn't actually do GL extensions; it was a hybrid of 2.0 and 3.0 and some extra non-core/ARB extensions.

WebGPU is trying to not repeat this mistake, but it isn't a 100% 1:1 translation for Vulkan, so everyone is going to need to agree to how the C API looks, and you know damned well Google is going to fuck this up for everyone and any attempt is going to die.

Chrome is the cancer killing desktop computing.

pjmlp 10 hours ago | parent | next [-]

And Web, because nowadays when people complain about standards, they mean something that only Chrome or Electron crap does.

reactordev 17 hours ago | parent | prev [-]

So use dawn.

The problem is the same as it was 20 years ago. There are 2 proprietary APIs and then there's the “open” one.

I’m sick of having to write code that needs to know the difference. There’s only a need for a Render Pass, a Texture handle, a Shader program, and Buffer memory. The rest is implementation spaghetti.

I know the point you’re making but you’re talking to the wrong person about it. I know all the history. I wish for a simpler world where a WebGPU like API exists for all platforms. I’m working on making that happen. Don’t distract.

LelouBil 20 hours ago | parent | prev | next [-]

Yeah, I started a project in Unity a while ago, and tried out Godot in the meantime.

Unity really feels like there should be a single correct way to do any specific thing you want, but actually it misses <thing> for your use case so you have to work around it (and repeat this for basically every Unity feature).

Godot on the other hand, really feels like you are being handed meaningful simple building blocks to make whatever you want.

reactordev 19 hours ago | parent [-]

Bingo. They don’t actually understand their users. Instead they’re the Roblox of game making, just provide the ability and let devs figure it out (and then sell it as a script).

torginus 12 hours ago | parent | prev | next [-]

Unity somehow manages to break the API of their own features so badly every year or so that their own tutorials don't work. You have a solid baseline API that has existed forever (with known limitations), like the legacy render pipeline. Every attempt to reform it has only introduced confusion and complexity, and sits somewhere between experimental and no longer supported.

I don't agree with you on the Asset Store, for the same reasons: the rate of breakage means that things that are not constantly updated no longer work, and multiple versions need to be maintained for parallel engine versions. Combined with the dubious economics of the Asset Store (I don't think it makes financial sense to even make these things, let alone maintain them), they mostly end up as abandonware.

And on the Asset Store, if you make something indispensable (which is more often than not something the engine should have OOTB, like competent text rendering), one of the following things will happen:

- Unity will buy you out and plop your asset into the engine without doing any integration work, where it will stick out like a sore thumb (TextMeshPro). Good for you, bad for consumers, and it sucks if you were making a competitor

- They build an in-house solution, that you obviously can't compete with, and will have a huge leg up on you because they have engine access (sucks to be you)

- The engine will never have that feature, because 'you can just buy it', meaning you have to either spend hundreds of dollars/euros on dubious-quality assets or hunt for open-source versions of even more variable usability. UE4/5 has a lot of these built in, at AAA quality.

Rohansi 11 hours ago | parent | prev [-]

> with a 1999 style property window on the side like Visual Studio was ever cool.

I don't think this is fair. I'd say Unity's inspector window is one of the good parts of Unity because it is not just a property window. It's an immediate-mode UI that things can hook into to do a lot more than just property editing.

reactordev 7 hours ago | parent [-]

Yes, the last 8 years has seen that little side drawer used for every. single. unity. feature.

lossyalgo 6 hours ago | parent [-]

Thankfully there are tons of assets you can buy or download from GitHub that extend the functionality of the inspector windows, which IMO need a LOT of love. The last update I saw lets you do math inside properties, e.g. the transform: if scale is 1, you can type in 1+2 and the game/scene views immediately show your changes, and pressing ENTER commits them. It's not really well-known (I discovered it by accident reading some changelogs a couple of years ago).

reactordev 4 hours ago | parent [-]

Not thankfully. The whole point of this thread is that Unity is barebones without community support, and what you're describing is that community support. Glad you like it; I find this kind of lack of attention to your product a huge turn-off. The Unity community is naive in that it lets this company walk all over it because it lacks the willpower to push back. There are plenty of Unity community members capable of making a better Unity. Unity itself relies on its community; otherwise who would pay for an engine? So saying “just use this plugin” basically reinforces my perspective.

lossyalgo 2 hours ago | parent [-]

I agree, hence "need a lot of love".

On the other hand, the lack of "love" from Unity's side (at least regarding Inspector) allows a thriving ecosystem for devs to build their own version of what an ideal Inspector drawer should look like, as well as potentially make a living from it. And to boot, who is to say what the ideal Inspector should look like? Do you trust Unity to make an Inspector that fits everyone's wants/needs? I definitely don't - so I'm glad they have a "bare-bones" version that allows us to customize it to our heart's content. Do you want them to be like Apple and "steamroll" everyone and make bad decisions for arbitrary reasons? I definitely don't and I personally HATE a ton of Apple's constant changes and lack of ability to change simple things, such as the inability to disable a lot of animations, which murders my VNC sessions, but I digress.

Regarding non-Inspector things: you already replied to my other rant about unfinished features, so yeah, also in agreement.

Rohansi a day ago | parent | prev | next [-]

The article doesn't cover it but the GC being used by Unity also performs very poorly vs. .NET, and even vs. standalone Mono, because it is using the Boehm GC. Last I heard Unity has no plans to switch IL2CPP to a better GC [1].

It'll be interesting to see how the CoreCLR editor performs. With that big of a speed difference, it might be possible for games to run better in the editor than in a standalone Mono/IL2CPP build.

[1] https://discussions.unity.com/t/coreclr-and-net-modernizatio...

jayd16 13 hours ago | parent | next [-]

On the one hand, better GC is better but on the other, it doesn't matter all that much.

You tend to want zero per frame allocation as it is and that would probably not change.

As long as your less frequent garbage doesn't overtake the incremental GC, that's not really an issue either. If it's working incrementally as intended stutter shouldn't be an issue.

In a game there's no endless benefit from raw GC throughput like you might see on a server instance that could always push more requests per second.
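As a hedged illustration of the zero-per-frame-allocation approach described above (the `Projectile` type and pool are invented for this sketch, not from any real project):

```csharp
using System.Collections.Generic;

// Illustrative per-frame object: reused from a pool instead of
// allocated fresh, so steady-state gameplay produces no garbage.
public sealed class Projectile
{
    public float X, Y, Speed;
}

// Minimal pool: Rent() reuses a returned instance when one is
// available and only allocates during warm-up.
public sealed class Pool<T> where T : class, new()
{
    private readonly Stack<T> _free = new Stack<T>();

    public int FreeCount => _free.Count;
    public T Rent() => _free.Count > 0 ? _free.Pop() : new T();
    public void Return(T item) => _free.Push(item);
}
```

With every per-frame object rented and returned like this, neither Boehm nor the incremental GC has anything to collect during gameplay.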

bob1029 12 hours ago | parent | next [-]

The entire point of the incremental GC is to preserve frame latency budget at the expense of raw throughput. If you can guarantee <16ms frames, I'll work with whatever you can give me.

If your game is allocating so quickly that the incremental GC can't keep up, I would argue that solving this with a "faster" GC is just taking you further into hell.

Rohansi 3 hours ago | parent | prev [-]

> On the one hand, better GC is better but on the other, it doesn't matter all that much.

It shouldn't but it does. Boehm is a conservative GC so when it triggers it needs to scan a lot more memory for pointers than .NET's GC because it has to assume anything in memory could be a pointer.

llmslave2 a day ago | parent | prev | next [-]

Re. the editor speedup, it should outright eliminate the "domain reload" thingy that happens because all of the C# needs to be unloaded and reloaded in response to a change.

Rohansi 21 hours ago | parent [-]

Pretty sure that will still be there? It'll be different because CoreCLR doesn't really have AppDomains but it will still need to unload old assemblies and reload them all again. That's the only reliable way to reset everything into a clean state.

benaadams 20 hours ago | parent | next [-]

AssemblyLoadContext.Unload https://learn.microsoft.com/en-us/dotnet/api/system.runtime....
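For readers unfamiliar with it: collectible `AssemblyLoadContext`s are CoreCLR's replacement for AppDomain unloading. A rough sketch of the reload cycle an editor could build on (the class and path handling are illustrative, not Unity's actual plan):

```csharp
using System.Reflection;
using System.Runtime.Loader;

// A collectible context: everything loaded through it can be
// unloaded together, the CoreCLR analogue of a domain reload.
public sealed class ScriptContext : AssemblyLoadContext
{
    public ScriptContext() : base(isCollectible: true) { }
}

public static class ScriptHost
{
    // Unload the previous context (resetting all statics it owned)
    // and load the freshly compiled scripts into a new one.
    public static Assembly Reload(ref ScriptContext old, string dllPath)
    {
        old?.Unload(); // cooperative: freed only once nothing references it
        old = new ScriptContext();
        return old.LoadFromAssemblyPath(dllPath);
    }
}
```

Note that unloading is cooperative: a single leaked delegate or static reference into the old context keeps the old scripts alive, which is one reason a full reload is the only reliable reset.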

llmslave2 21 hours ago | parent | prev [-]

But only the assemblies that changed right? Or would it still be all of them?

Rohansi 17 hours ago | parent [-]

No, for a full domain reload it reloads everything. This forces all static variables to reset, makes static constructors run again, etc.

Rochus a day ago | parent | prev [-]

> because it is using the Boehm GC

For what reason? Mono has had a pretty good precise GC for many years now.

Rohansi 21 hours ago | parent [-]

Yes, SGen should be a lot better, but Unity cannot use it because they hold and pass raw pointers around everywhere. That's fine for Boehm but not possible with SGen. They're working on fixing this already but not sure why they aren't planning a move to a better GC.
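To illustrate why raw pointers and a moving collector don't mix: a precise, compacting GC may relocate objects, so any address handed across the native boundary has to be pinned first. A sketch (requires compiling with `unsafe`; the summing loop is just a stand-in for engine-side native code):

```csharp
using System.Runtime.InteropServices;

public static class PinningDemo
{
    // Under Boehm the array never moves, so its address can be cached
    // freely. Under a moving GC it must be pinned for as long as
    // native code holds the pointer, or the pointer goes stale.
    public static unsafe long SumPinned(int[] data)
    {
        GCHandle handle = GCHandle.Alloc(data, GCHandleType.Pinned);
        try
        {
            int* p = (int*)handle.AddrOfPinnedObject();
            long sum = 0;
            for (int i = 0; i < data.Length; i++) sum += p[i];
            return sum;
        }
        finally
        {
            handle.Free(); // unpin so the heap can compact again
        }
    }
}
```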

LeFantome 18 hours ago | parent [-]

Well, if they port to .NET (CoreCLR), that will move them to the MS GC.

Rohansi 17 hours ago | parent [-]

Yes, but it also puts them in an awkward situation! They recommend (or even require, for some platforms) using IL2CPP for release builds, which will still use the Boehm GC and not run as quickly as CoreCLR.

DoctorOW 6 hours ago | parent [-]

Do they still need IL2CPP if they have AOT? The goal was always to be able to have cross-platform native binaries right?

WorldMaker 4 hours ago | parent | next [-]

In theory yes, IL2CPP doesn't need to exist with modern .NET AOT support. In practice, per quotes in the article Unity may have a bit of a sunk cost issue and has no plans to support .NET AOT, only IL2CPP.

Some of that sunk cost may be the above mentioned pointer issue and not enough current plans for a smarter FFI interface between C++ and C#.

Rohansi 4 hours ago | parent | prev [-]

Unfortunately they do still need IL2CPP because Unity took a different direction than .NET: most reflection still works with IL2CPP but does not with .NET AOT. Switching would be a huge breaking change for everyone, including Unity.

Platform support is also still better with IL2CPP but .NET is catching up.

1a527dd5 20 hours ago | parent | prev | next [-]

> In 2018, Unity engineers discussed that they are working on porting the engine to .NET CoreCLR

Hard task, no doubt. Unity needs to throw everything at this problem. C# in general has gotten insanely fast by default. It's very much worth taking the time to upgrade/update.

Whilst we don't compare in size and API surface, it took us a few months to get off .NET Framework 4.7.2 and onto .NET 6. But once we were on .NET 6, moving to the LTS after that was relatively painless; usually a few hours of work.

makotech221 a day ago | parent | prev | next [-]

Yeah, I think Unity just doesn't have the technical skillset anymore to make the migration to CoreCLR. It keeps getting delayed and their tech leads keep dropping out.

Might I suggest https://github.com/stride3d/stride, which is already on .net 10 and doesn't have any cross-boundary overhead like Unity.

WillPostForFood a day ago | parent | next [-]

Progress has been painfully slow, but Unity does seem to be moving forward.

Unity updates on their plans and progress:

2022 - officially announced plans to switch to CoreCLR - https://unity.com/blog/engine-platform/unity-and-net-whats-n...

2023 - Tech update - https://unity.com/blog/engine-platform/porting-unity-to-core...

Unite 2025 - CoreCLR based player scheduled for Unity 6.7 in 2026 - https://digitalproduction.com/2025/11/26/unitys-2026-roadmap...

teraflop 21 hours ago | parent | next [-]

Maybe they are making progress. But given that they first started talking about this in 2018, and then in 2022 they announced that they were planning to release a version with CoreCLR in 2023, and then in 2024 they said it would be in beta in 2025, and now in 2025 they're planning to release it as a technical preview in 2026, but they're still talking about an "internal proof-of-concept" as though it's something coming in the future...

As an outsider, it certainly seems like there's reason for skepticism.

whstl 20 hours ago | parent | next [-]

I've seen similar things from the inside at other companies. Even existential threats (like lack of Apple Silicon support for performance-critical software) get heavily delayed because the feature treadmill won't stop and the actually important thing is some engineer's pet project. It is basically death by a thousand papercuts, where nobody can say what the focus is.

People complain about web development but working with native apps can be depressing sometimes.

cheschire 21 hours ago | parent | prev [-]

Well they made some business decisions in the middle of that timeline that cut their funds quite a bit, not to mention probably scared off some good talent.

ACS_Solver 20 hours ago | parent | next [-]

Not just probably scared off some good talent, they had xoofx leave over disagreements with higher management. xoofx was one of their most senior devs, the guy who started the CoreCLR migration and was leading it.

They'll get there eventually, but the current roadmap says experimental CoreCLR in late 2026, which then in the best case means production ready in 2027. Unity isn't going anywhere, but at least as a dev who doesn't care about mobile (which is Unity's real market), competing engines have gotten much more attractive in the last couple years and that seems set to continue.

whstl 9 hours ago | parent [-]

The funny thing about his resignation is that xoofx already had a CoreCLR prototype working around 2016-ish, but the company had "other priorities" and didn't take it seriously until recently.

The guy should just have been left alone and shielded from company bullshit to do the migration, or empowered to fight.

I know this is one sided but: Whoever from high-management lost this guy is an absolute loser waste of space who didn't do his job and will blame xoofx for “not fighting harder” or some other bullshit. Fuck companies, and fuck loser managers.

ACS_Solver 8 hours ago | parent [-]

I think 2016 is a bit too early but yeah, xoofx first wrote about CoreCLR in 2018 and said he'd made considerable progress with something like himself and two other engineers doing it as a side project. That is four years before Unity as a company announced the migration as a priority, which in turn is another four years before the current estimate for when they may ship it.

From my perspective, Unity seems very poorly managed in recent years. The editor experience isn't improving while they continue the usual pattern of shipping features in a poor state where they need another couple versions to become properly usable, and of course they make terrible decisions like the runtime fee, a total insanity that caused a huge loss of trust and boosted Godot development enormously.

Of course my perspective is biased by me not being Unity's main target market. I work on PC strategy games, which are on Steam. At our studio, we don't do mobile, advanced graphics features aren't very relevant, and we may have the most complex UI that ever shipped in a Unity game.

chris_wot 20 hours ago | parent | prev [-]

Scared off a lot of customers, too.

bentt 21 hours ago | parent | prev [-]

Nice link, thanks.

999900000999 a day ago | parent | prev | next [-]

Stride has a fraction of the features as unity.

Godot is the only real open source competitor, their C# support is spotty. If I can't build to Web it's useless for game jams as no one should be downloading and running random binaries.

A real sandbox solution with actual GPU support is needed.

dustbunny 20 hours ago | parent | next [-]

Writing C# in Godot is a bad choice. Use GDScript and write C++ directly as a module; skip GDExtension completely. Godot's build system is easy enough to use: just add new classes to the engine using C++ if you don't want to use GDScript. The GDScript workflow is honestly great. Using C# is like the worst of all worlds.

enbugger 10 hours ago | parent | next [-]

GDScript is not very maintainable as the code base grows. It lacks proper refactoring tools (e.g. the ones from Jetbrains Rider), static type checking, flexible object system and many 3rd party libraries which might be needed

dustbunny 4 hours ago | parent [-]

My main point is: if GDScript isn't good enough, go straight to c++ directly in the Engine.

I won't even get into how big of projects I've written in GDScript successfully.

pjmlp 10 hours ago | parent | prev | next [-]

Problem is, GDScript still doesn't have at least a JIT, and the whole GDextensions boilerplate seems more complicated than it needs to be.

dustbunny 3 hours ago | parent [-]

"not having a JIT" is not a problem, it's you speculating that a JIT will improve performance, the real problem is "GDScript has poor performance", which in this context (me saying C# in godot sucks) is you speculating that C#'s performance is better than GDScripts. Do you have any data to back that claim up? Like real world data from a real product? Or are you just speculating with vibes?

If performance is a concern, skip C# and go straight to C++. Now you're ripping at max speed with the smallest binary! That's my whole point: GDScript + C++. Ditch C#; it's not worth the squeeze.

pjmlp an hour ago | parent [-]

Some people like nice tooling with performance.

Interpreted code is never faster than a dynamic compiler; otherwise, what about doing games in Python?

As mentioned in my comment, the GDExtension experience kind of sucks.

joha4270 9 hours ago | parent | prev | next [-]

Are there technical reasons to prefer GDScript over C#?

GDScript is undoubtedly better integrated into the engine, but I would have expected C# to compare more favorably in larger projects than in the game-jam-sized projects I have made.

et1337 4 hours ago | parent [-]

Performance is one issue with C#: https://sampruden.github.io/posts/godot-is-not-the-new-unity...

joha4270 3 hours ago | parent [-]

I don't see how this article could possibly support the argument that C# is slower than GDScript

It compares several C# implementations of raycasts, never directly compares with GDScript, blames the C# performance on GDScript compatibility, and has a struck-out section advocating dropping GDScript to improve C# performance!

Meanwhile, Godot's official documentation[1] actually does explicitly compare C# and GDScript, unlike the article, which just blames GDScript for C#'s numbers, claiming that C# wins in raw compute while having higher overhead calling into the engine.

[1]: https://docs.godotengine.org/en/stable/about/faq.html#doc-fa...

et1337 3 hours ago | parent [-]

My post could have been a bit longer. It seems to have been misunderstood.

I use GDScript because it’s currently the best supported language in Godot. Most of the ecosystem is GDScript. C# feels a bit bolted-on. (See: binding overhead) If the situation were reversed, I’d be using C#. That’s one technical reason to prefer GDScript. But you’re free to choose C# for any number of reasons, I’m just trying to answer the question.

joha4270 2 hours ago | parent [-]

At least in my case, I got curious about the strength of /u/dustbunny's denouncement of Godot+C#.

I would have put it as a matter of preference/right tool, with GDScript's tighter engine integration contrasted with C#'s stronger tooling and available ecosystem.

But with how it was phrased, it didn't sound like expressing a preference for GDScript+C++ over C# or C#++, it sounded like C# had some fatal flaw. And that of course makes me curious. Was it a slightly awkward phrasing, or does C# Godot have some serious footgun I'm unaware of?

et1337 2 hours ago | parent [-]

Makes sense! I think dustbunny said it best: C# is “not worth the squeeze” specifically in Godot, and specifically if you’re going for performance. But maybe that’ll change soon, who knows. The engine is still improving at a good clip.

999900000999 20 hours ago | parent | prev [-]

I don't like C++.

It's very difficult for me; I generally stick to high-level stuff: C#, JavaScript, Python, Dart, etc.

chris_wot 20 hours ago | parent [-]

If you can code in C#, how is C++ difficult? Are pointers and the stl that difficult?

Not denigrating, genuine question.

999900000999 19 hours ago | parent | next [-]

One word.

Headers.

That's just the start. The C++ build systems and package managers are the stuff of nightmares. Modern languages are significantly easier to use.

Don't get me wrong, if you offer a job with a 200k base salary and give me 6 months to learn C++ I'll do it. But I won't enjoy it, and I definitely won't do it as a hobby.

runevault 19 hours ago | parent [-]

If you use an existing template (and are willing to use scons) GDExtension doesn't really have the standard build problems of rigging everything up with CMake/etc in my experience. The template is set up to handle the "set up the build" problem for you. Still have the header problem though cannot deny that one.

Kwpolska 12 hours ago | parent | prev | next [-]

You're asking if going from a high level language to a low level language on steroids is difficult?

Pointers, manual memory management, and the design by committee standard library are not fun.

atraac 12 hours ago | parent | prev | next [-]

I write mostly backend stuff for a living, big chunk of it in Node/TS but also C# with modern .NET. I also have to dabble with Unity and Unreal both for work and a hobby project. I technically learned C++ in uni but really, I hate every single second I have to spend doing Unreal Engine work. I genuinely despise how obsolete and hard to write C++ is compared to modern languages. It didn't bother me in university because that was all I knew at the time but it's hard to justify existence of header files today. Add macros everywhere, really bad compilation errors, really hard to read syntax with a lot of unnecessary bloat and you get something that is just not likable. I'm sure you can get used to it given enough time spent in that ecosystem, but I can tell you as someone writing 4 different languages on day to day basis, C++ is difficult, and it's not because of pointers.

dustbunny 4 hours ago | parent [-]

Yeah that's why GDScript is great.

You only dabble in the c++ for the sliver of the project that needs it. 90% of game development is animating stuff and user interface development. GDScript is great for that.

DonHopkins 15 hours ago | parent | prev [-]

Java was designed to make fun of C++, then C# was designed to make fun of Java, so you're missing two layers of fun.

999900000999 14 hours ago | parent [-]

I can tolerate Java. I've worked a Java dev role recently.

I think it's overly verbose and probably has a lot of unneeded legacy stuff, but in terms of making money so I can afford donuts, it's not bad.

My personal favorite language is probably Dart, but it's basically useless outside of Flutter and I don't have faith in Google to keep Flutter going.

I don't like low-level programming. My dream language is literally an LLM that processes a markdown document where I sort of just describe what I want to happen, with options to call defined functions in a normal programming language.

If I had money ( VC money) I'd be working on this.

CyanLite2 15 hours ago | parent | prev | next [-]

I think WebAssembly could become that sandboxed solution. .NET Blazor WASM is moving away from mono to CoreCLR (just like Unity, with an early preview in late 2026). WASM now has really good SIMD support, native GC, large memory support, and with WebGPU we could finally see some native WASM games running at native speeds.

bloomca 6 hours ago | parent [-]

Wait, what, WASM has native GC now?

WorldMaker 4 hours ago | parent [-]

It's still not JS-level/JS-compatible GC (yet?) and it is still quite low level (more about corralling buffers of bytes than objects, a bit closer to OS-level page management than JS or C# level GC), as it is intended to be lower level than most languages need so that different languages can build different things with it. It is also a tiny stepping stone to better memory sharing with JS APIs (and the eventual goal of WASM "direct DOM"), but still not quite finished on that front as more steps remain.

eole666 21 hours ago | parent | prev [-]

Godot 4 C# web export is coming soon : https://github.com/godotengine/godot/pull/106125

999900000999 21 hours ago | parent [-]

We'll see when it actually ships.

I've seen this issue before; they're making progress but there's no firm release date.

Plus you then have to do extensive testing to see what works in Web builds and what doesn't. I REALLY enjoy vibe coding in Godot, but it's still behind Unity in a lot of ways.

ramon156 20 hours ago | parent [-]

Naive question but why not use GDscript? I haven't had any issues with it

999900000999 20 hours ago | parent | next [-]

I'll vibe code stuff in GDscript, and I guess I more or less understand some of it.

But C# is what I've used for over a decade. C# has vastly better IDE support. It's a more complete language.

Plus a lot of C# assets/libraries don't really have GDScript counterparts.

eole666 11 hours ago | parent | prev [-]

I'll add that C# has better performance than GDScript. It doesn't make a difference for most of the things you code in a game, but it comes in handy when needed.

linguistics__ 8 hours ago | parent | prev | next [-]

> Yeah I think Unity just doesn't have the technical skillset anymore

It's not a technical skillset issue, it's a case of lacking resources, leadership changing priorities and changing requirements mid-development, causing a lot of work to be redone.

We still have highly skilled developers working in this area, if we can just actually be allowed to do (and keep) the work we have.

darubedarob 12 hours ago | parent | prev [-]

Imagine having to communicate that rewrite, and the dropped support for all the bought "addons", to the people who went on that shopping spree. They would then have to explain the drop in "we can do that" value, plus the cost of recovering the same capabilities, to shareholders and customers. The magpie's nest of bought-in company tech is a tar baby for any tech lead, and the rewrite is a career-ender for the CEO's office.

tyleo 8 hours ago | parent | prev | next [-]

I wrote the Circuits programming system for Rec Room, and we took a similar approach: keeping the core logic free of Unity dependencies so it could run and be tested on CoreCLR.

The results were similar as well with significantly better performance outside of Unity. There are also secondary benefits. Unity’s API surface is huge, and it’s easy for code to accidentally entangle itself with engine details in ways that confuse or constrain other developers.

By isolating the core in a separate DLL, we only expose Unity concepts where they’re actually needed, and we enforce that boundary with tests. That logic constraint ended up being just as valuable as the performance gain.
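A minimal sketch of that kind of boundary (the names are invented here, not Rec Room's actual code): the core assembly depends only on small interfaces, and the Unity project supplies thin adapters.

```csharp
// Lives in the engine-free "core" DLL: no UnityEngine reference,
// so it compiles and unit-tests on plain CoreCLR.
public interface IClock
{
    float DeltaSeconds { get; }
}

public sealed class Circuit
{
    public float Charge { get; private set; }

    // Pure simulation step: deterministic and testable headlessly.
    public void Tick(IClock clock, float inputCurrent)
    {
        Charge += inputCurrent * clock.DeltaSeconds;
    }
}

// In the Unity project a thin adapter would implement IClock with
// Time.deltaTime; tests substitute a fixed clock instead.
public sealed class FixedClock : IClock
{
    public float DeltaSeconds { get; set; } = 1f / 60f;
}
```

Because `Circuit` never touches engine types, the boundary is enforced at compile time: adding a UnityEngine call to the core simply fails to build.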

BatteryMountain 14 hours ago | parent | prev | next [-]

My dream would be that I can adb into my phone, install the .NET SDK or .NET Runtime (v8 or 10), and have my applications run natively on Android. Simple console apps first, then the rest. Google should open their platform up a little bit more. Allow us to enable root access via adb. Let us unleash our pocket computers' full potential. Would love to have portable low-powered servers running stacks of my choice. They are super efficient already, have large storage, and I can already plug a USB-C hub into my phone to get more storage, a LAN connection, keyboard/mouse, and an external mic if needed. Running my phone with WireGuard/Tailscale + some lightweight containers = awesome!

Unity really does need to catch up though. .NET 8/10 is really amazing and FAST. With the garbage collector changes in .NET 10, game stutter would be nearly gone.
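Even before those GC changes, one common trick for keeping the collector away from latency-critical sections is the long-standing `GC.TryStartNoGCRegion` API. A sketch (the budget size and `RunFrame` are placeholders, not from any real game):

```csharp
using System;

class FrameLoop
{
    static void Main()
    {
        // Hypothetical latency-critical section, e.g. one simulation frame.
        // The 16 MB budget is arbitrary; tune it to the section's real
        // allocation rate. If the budget can't be reserved, this returns false.
        if (GC.TryStartNoGCRegion(16 * 1024 * 1024))
        {
            try
            {
                RunFrame();
            }
            finally
            {
                // Note: this throws if the region already ended because the
                // allocation budget was exceeded; real code should guard that.
                GC.EndNoGCRegion();
            }
        }
    }

    static void RunFrame() { /* placeholder for per-frame work */ }
}
```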

Unrelated point: I basically stream most of my games with Sunshine + Moonlight from my main rig to my phone with a controller attached, so I can play Diablo II: Resurrected, Hades, Grim Dawn, and basically any game with Xbox controller support directly on my phone. Works amazingly well. The phone has a high-DPI 120 Hz OLED panel, and since all rendering happens on my headless gaming server at the highest settings, the phone's battery doesn't really drain at all; I can play for hours. Playing a Mono-based game from the Play Store can drain the battery within two hours, and the games are usually so badly built for Android, even paid games that have equivalents on Steam.

bsimpson 2 hours ago | parent | next [-]

I believe the root constraints are a security thing. They don't want you to charge your phone in a dodgy USB outlet in a public space (e.g. an airport) and be compromised. Making you flash your phone to get root is meant to prevent people accidentally exposing themselves.

littlecranky67 12 hours ago | parent | prev [-]

> My dream would be that I can adb into my phone, install the .Net SDK or .Net Runtime (v 8 or 10) and have my applications run natively on Android

My understanding is that you can (almost) do that using MAUI (formerly known as Xamarin). You will not get a .Net SDK or Runtime, but the mono runtime. Since it is bundled in your app, you won't actually notice.

BatteryMountain 11 hours ago | parent [-]

No, we need one level lower and zero Mono. I want the SDK or runtime to be installed (and executable) the same way on Android as on other Linux distros. .NET has ARM builds, so it will work fine. This is without any specific framework dependencies (like ASP.NET, MAUI, Avalonia, etc.), just plain .NET 8/10. It would actually be nice if Android just bundled the runtime to begin with, so we don't have to. They can pin it to the latest LTS version and let it auto-update now and then, the same way as WebView, Google Play services, etc.

littlecranky67 10 hours ago | parent [-]

So for whatever reason you want the .NET runtime over Mono; not sure why that is a hard requirement, but AFAIK that is also the goal for Microsoft: merging those runtimes into a single codebase that works for all platforms (including Android). There are technical obstacles and historic codebases, but I would expect this unified runtime soon.

skrebbel 11 hours ago | parent | prev | next [-]

Does anyone on here know why it is that Unity doesn't just run on .NET? The cross-platform benefits of Mono have disappeared a decade ago or something like that.

Like, why would Unity invest in hacks like il2cpp rather than just port over to modern .NET? Is there some licensing thing going on?

WorldMaker 4 hours ago | parent | next [-]

Sunk cost fallacy will be a big factor. They already invested a lot of money/time into customizing Mono and hacks like Burst and IL2CPP, so there's momentum to "stay the course" and continue investing in those. Even if some evidence suggests that it is the wrong course.

rincebrain 9 hours ago | parent | prev [-]

The article touches on it. Assuming it's correct, it's because they did a bunch of proprietary modification on Mono, so it's not "just" Mono any more; it's Mono plus a decade or more of the most exciting layer-violating crimes you can imagine.

I would assume, if someone Meta scale had an app that had to be built on Unity for whatever reason, we'd see similar blogposts to the ones they did about finding bespoke optimizations for their Android app to be performant with how enormous it became, probably involving either some bespoke alternative to IL2CPP, or writing a bunch of bespoke optimizations onto a bastard descendant of whatever version of Mono Unity originally derives from and so supports...or just paying Unity so much money they get to do whatever they want to the underlying source.

tyleo 8 hours ago | parent [-]

I’ve heard (but don’t have an inside source) that Meta’s Horizon was built on Unity and then ported to a custom engine implementation.

viktorcode 2 hours ago | parent | prev | next [-]

I wonder why the author doesn't use IL2CPP and sticks to Mono. IL2CPP does produce much faster code, making Mono builds obsolete. This should be the very first step you do if you care at all about performance in Unity.

LeFantome 18 hours ago | parent | prev | next [-]

It blows my mind that Unity has not been able to migrate to .NET (CoreCLR) after starting back in 2018.

I would have estimated a year, or two tops, for that project.

sieep 21 hours ago | parent | prev | next [-]

I recently started learning Godot and learning that they use .NET for the C# runtime is a nice touch. I write a lot of code that targets .NET in my day job, so having to learn the unity way of doing things would be frustrating.

KronisLV 6 hours ago | parent | prev | next [-]

It's cool to see detailed traces and flame graphs be used more often! A lot of different problems could be detected if they were available for pretty much any language, with enough details and tooling to be useful. Heh, I remember also using VisualVM for finding issues with a web app HTTP thread pool and later with the SQL queries being executed (and also the DB pooling solution).

MindSpunk 16 hours ago | parent | prev | next [-]

CoreCLR doesn't help on console platforms because you can't ship the JIT runtime. To my knowledge CoreCLR's AOT solution won't work because of SDK and build requirements for shipping builds on consoles. I believe some consoles require that all shipped native code must have been compiled with the SDK compiler. Even if you fork the CoreCLR AOT system so you can build for the consoles (the code can't be open because of NDAs) you wouldn't be allowed to ship the binary. IL2CPP is the only path forward there. CoreCLR is only viable on PC.

CyanLite2 16 hours ago | parent | next [-]

Simply not true, this info is outdated by a decade.

CoreCLR NativeAOT is already shipping real games on Nintendo, PS5, and Xbox.

JIT isn't allowed on iPhones either, and this is what NativeAOT solves. Also, .NET is moving WASM support to CoreCLR (rather than mono) in an upcoming version as well.
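For desktop targets, producing a NativeAOT binary is just a publish flag these days (a sketch; the runtime identifier and project layout are assumed, and console targets instead go through the vendors' NDA'd SDKs):

```shell
# Ahead-of-time compile to a self-contained native binary (no JIT at runtime).
dotnet publish -c Release -r win-x64 -p:PublishAot=true
```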

MindSpunk 15 hours ago | parent [-]

Do you have examples? As far as I'm aware based on current info there's at least one current console vendor that requires all native code to be generated by their SDK.

neonsunset 14 hours ago | parent [-]

Just don't ship to PlayStation, and discourage others, until Sony changes (or is forced to change) the policy.

pjmlp 10 hours ago | parent [-]

Capcom does, and they are quite happy with it.

pjmlp 10 hours ago | parent | prev [-]

Yes, it does, Capcom is using it for their Playstation 5 games, like Devil May Cry.

"RE:2023 C# 8.0 / .NET Support for Game Code, and the Future"

https://www.youtube.com/watch?v=tDUY90yIC7U

As always, it is a matter of having the skill to deliver, instead of GC phobia.

MindSpunk 6 hours ago | parent [-]

If I'm interpreting that correctly, they're using an IL2CPP-style compilation system that hooks into Roslyn, not .NET Core's AOT technology. It's possible to ship C# on consoles, obviously, because Unity already does it with their own IL2CPP backend that's stuck on old .NET versions. My point is that CoreCLR can't be used because of console certification requirements. I certainly wasn't commenting on C# as a language for games; I think C# is, as of late, becoming a very powerful language for games, with Span and similar tools to minimize GC pressure.

boguscoder 21 hours ago | parent | prev | next [-]

Good article, but it seems strange that the author benchmarked debug builds first; that's a huge "no-no" in any perf tweaking, and it's clear that the author knows this well.

iliketrains 20 hours ago | parent | next [-]

From my experience, performance gains seen in Debug builds in Unity/C#/Mono nearly always translate into gains in Release mode. I know that this is not always true, but in this context that's my experience.

Setting up release benchmarks is much more complex and we develop the game in Debug mode, so it is very natural to get the first results there, and if promising, validate them in Release.

Also, since our team works in Debug mode, even gains that only speed things up in Debug mode are valuable for us, but I haven't encountered a case where I would see 20%+ perf gain in Debug mode that would not translate to Release mode.

mort96 9 hours ago | parent | prev [-]

I agreed with you initially, but is it really as big of a deal in C#? I'm used to compiled languages where "debug build" means "no compiler optimisations", aka every operation done with a variable is a memory load + store, trivial functions aren't inlined so even trivial accessors called in a loop carry function call overhead, etc etc. But this is C#, so the JIT presumably optimises just as much in a debug build as in a release build?

So in C++ terms, it's really just benchmarking "-O2" instead of "-O2 -DNDEBUG". This seems fine.
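One wrinkle: in C#, a Debug configuration does change what the JIT does, because the compiler emits a `DebuggableAttribute` that asks the runtime to disable optimisations. A quick way to check a given build (a plain .NET console project is assumed):

```csharp
using System;
using System.Diagnostics;
using System.Reflection;

// Prints True for a Debug build (JIT optimisations disabled by the
// assembly-level DebuggableAttribute) and False for a Release build.
var dbg = Assembly.GetEntryAssembly()?
                  .GetCustomAttribute<DebuggableAttribute>();
Console.WriteLine(dbg?.IsJITOptimizerDisabled ?? false);
```

So Debug-build C# benchmarks are closer to `-O0` than to `-O2` in C++ terms.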

Rochus a day ago | parent | prev | next [-]

That's interesting. I made measurements with Mono and CoreCLR some years ago, but only with a single thread, and I came to the conclusion that their performance was essentially the same (see https://rochus.hashnode.dev/is-the-mono-clr-really-slower-th...). Can someone explain what benchmarks were actually used? Was it just the "Simple benchmark code" in listing 1?

to11mtm 21 hours ago | parent | next [-]

I think a lot of the devil is in the details, especially when we look at NET8/NET10 and the various other 'boosts' they have added to code.

But also, this article is noting a more specific use case that is fairly 'real world': reading a file (I/O), doing some form of deserialization (likely with a library, unless the format is proprietary), and whatever 'generating a map' means.

Again, this all feels pretty realistic for a use case so it's good food for thought.

> Can someone explain what benchmarks were actually used?

This honestly would be useful in the article itself, as well as, per the above, some 'deep dives' into where the performance issues were. Was it in file I/O (possibly interop related)? Was it due to some pattern in the serialization library? Was it the object allocation pattern (when I think of C# code friendly for Mono, I think of Cysharp libraries, which sometimes do curious things)? Without diving deeper into the profiling, nobody knows where the focus needs to be (unless it's a more general thing, in which case I'd hope for a better deep dive on that aspect).

Edited to add:

Reading your article again, I wonder whether your compiler is just not doing the right things to take advantage of the performance boosts available via CoreCLR?

E.g., can you do things like stackalloc temp buffers to avoid allocation, and does the stdlib do those things where it is advantageous?

Also, I know I vaguely hit on this above, but I'm also wondering whether the IL being generated is just 'not hitting the pattern': a lot of CoreCLR's best magic kicks in when things are arranged a specific way in the IL, based on how Roslyn outputs it, and even for the 'expected' C# case, deviations can break the optimization.
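For readers unfamiliar with the pattern mentioned above, a stackalloc temp buffer looks roughly like this (a sketch, not code from either article):

```csharp
using System;

static class Hex
{
    // Format bytes as hex into a stack buffer for small inputs instead of
    // allocating intermediate strings or arrays on the GC heap.
    public static string Dump(ReadOnlySpan<byte> data)
    {
        Span<char> buf = data.Length <= 128
            ? stackalloc char[data.Length * 2] // small input: stack, no GC pressure
            : new char[data.Length * 2];       // large input: fall back to the heap

        for (int i = 0; i < data.Length; i++)
        {
            // byte.TryFormat writes directly into the span (no string temp).
            data[i].TryFormat(buf.Slice(i * 2, 2), out _, "x2");
        }
        return new string(buf); // single final allocation
    }
}
```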

WorldMaker 4 hours ago | parent | next [-]

> Reading your article again, I wonder whether your compiler is just not doing the right things to take advantage of the performance boosts available via CoreCLR?

> E.g., can you do things like stackalloc temp buffers to avoid allocation, and does the stdlib do those things where it is advantageous?

The C# standard lib (often called the base class library or BCL) has seen a ton of Span<T>/Memory<T>/stackalloc internal usage adoption in .NET 6+, with each release adding more of them. Things like File IO and serialization/deserialization particularly see a lot of notable performance improvements just from upgrading each .NET version. .NET10 is faster than .NET9 with a lot of the same code, and so forth.

Mono still benefits from some of these BCL improvements (as more of the BCL is shared than not these days, and Blazor WASM for the moment is still more Mono than CoreCLR so some investment has continued), but not all of them and not always in the same ways.
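As a small illustration of the span-based style the BCL has moved toward (illustrative code, not BCL internals):

```csharp
using System;

// Parse "x,y,z" without allocating substrings; int.Parse has accepted
// ReadOnlySpan<char> since .NET Core 2.1, so each field is parsed in place.
static (int X, int Y, int Z) ParseTriple(ReadOnlySpan<char> line)
{
    int a = line.IndexOf(',');
    int b = line.Slice(a + 1).IndexOf(',') + a + 1;
    return (int.Parse(line.Slice(0, a)),
            int.Parse(line.Slice(a + 1, b - a - 1)),
            int.Parse(line.Slice(b + 1)));
}

Console.WriteLine(ParseTriple("10,20,30")); // (10, 20, 30)
```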

Rochus 20 hours ago | parent | prev [-]

The goal of my compiler is not to get maximum performance out of either CoreCLR or Mono. Just look at it as a random compiler which is not C#, and especially not MS's C#, which is highly in sync with and optimized for specific features of the CoreCLR engine (features which might appear in a future ECMA-335 standard). So the test was essentially to see what both CoreCLR and Mono do with non-optimized CIL generated by a compiler that is not their own. This is a legitimate test case, because ECMA-335 and its compatible CLRs were built exactly for this use case. Yes, the CIL output of my compiler could be improved a lot, and I could even get more performance out of e.g. CoreCLR by using specific knowledge of the engine (which you cannot find in the standard) that the MS C# compiler also uses. But that was not my goal. Both engines got the same CIL code, and I just measured how fast it ran on each engine on the same machine.

LeFantome 4 hours ago | parent | prev | next [-]

I think the “some years ago” is pretty relevant.

.NET has heavily invested in performance. If I understand your article correctly, you tested .NET 5 which will be much slower at this point than .NET 10 is.

I also think it matters what you mean by “Mono”. Mono, the original stand-alone project has not seen meaningful updates in many years. Mono is also one of the two runtimes in the currently shipping .NET though and I suspect this runtime has received a lot of love that may not have flowed back to the original Mono project.

eterm 21 hours ago | parent | prev [-]

What's going on with the Mandelbrot result in that post?

I don't believe such a large regression from .NET Framework to CoreCLR.

Rochus an hour ago | parent | next [-]

The Mono and .NET 4 times were too short; the true time is unknown. I only left the Mandelbrot result in because I got a decent-looking figure for CoreCLR, but the actual factor versus Mono is unreliable. Even if the Mono result were 1, the factor would still be seven. I have no idea why it is that much faster.

to11mtm 21 hours ago | parent | prev [-]

NGL would be nice if there was a clear link to the cases used both for OP as well as who you are replying to... Kinda get it in OP's case tho.

Rochus 7 hours ago | parent [-]

I measured the raw horsepower of the JIT engine itself, not the speed of the standard library (BCL). My results show that the Mono engine is surprisingly capable when executing pure IL code, and that much of the 'slowness' people attribute to Mono actually comes from the libraries, not the runtime itself.

In contrast, the posted article uses a very specific, non-standard, apples-to-oranges benchmark. It is essentially comparing a complete game engine initialization against a minimal console app (as far as I understand), which explains the massive 3x-15x differences reported. The author is actually measuring "Unity engine overhead + Mono vs. raw .NET", not "Mono vs. .NET" as advertised. The "15x" figure very likely comes from the specific microbenchmark (a struct-heavy loop) where Mono's optimizer fails, extrapolated to imply the whole runtime is that much slower.
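The pattern in question looks roughly like this (a sketch of a struct-heavy loop, not the article's actual benchmark); CoreCLR is much better at keeping small structs in registers, while older Mono tends to spill every copy to memory:

```csharp
using System;
using System.Diagnostics;

struct Vec3
{
    public float X, Y, Z;
    public Vec3(float x, float y, float z) { X = x; Y = y; Z = z; }
    public static Vec3 operator +(Vec3 a, Vec3 b)
        => new Vec3(a.X + b.X, a.Y + b.Y, a.Z + b.Z);
}

class Program
{
    static void Main()
    {
        var acc = new Vec3(0, 0, 0);
        var step = new Vec3(1, 2, 3);
        var sw = Stopwatch.StartNew();
        for (int i = 0; i < 100_000_000; i++)
            acc += step; // struct copies: enregistered on CoreCLR, spilled on old Mono
        sw.Stop();
        Console.WriteLine($"{acc.X} in {sw.ElapsedMilliseconds} ms");
    }
}
```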

eterm 3 hours ago | parent [-]

Can we reproduce your results for Mandelbrot?

Rochus 2 hours ago | parent [-]

You can find all the necessary information/data in the article (see references). Finding the same hardware that I used might be an issue, though. Concerning Mandelbrot, I wouldn't spend too much time on it, because the runtime was so short for some targets that it likely has a big error margin compared to the other results. For my purpose this is not critical because of the geometric mean over all factors.

LarsDu88 13 hours ago | parent | prev | next [-]

Ah I wonder if this could've saved me countless hours of optimizing my VR game Rogue Stargun for the Quest 2, particularly the final battle in the game

calebh a day ago | parent | prev | next [-]

Will the move to CoreCLR give any speedups in practice if the release build is compiled with IL2CPP anyway? On all the games that I've worked on, IL2CPP is one of the first things that we've enabled, and the performance difference between the editor and release version is very noticeable.

Rohansi 21 hours ago | parent [-]

Editor is slower than Mono release builds. You'll need to compare Mono release vs. IL2CPP release to see the actual difference.

calebh 19 hours ago | parent [-]

I guess it would be good to also see a comparison between IL2CPP and Core CLR by the post author!

enbugger 9 hours ago | parent | prev | next [-]

Not to mention the Hot Reload which comes out of the box.

pwdisswordfishy 20 hours ago | parent | prev | next [-]

The author (probably unknowingly) glosses over a lot in these sentences of the "How did we get here" section:

> Unity uses the Mono framework to run C# programs and back in 2006 it was one of the only viable multi-platform implementations of .NET. Mono is also open-source, allowing Unity to do some tweaks to better suit game development. [...] An interesting twist happened nearly 10 years later.

Not mentioned is that Mono itself of course improved a lot over the years, and even prior to MS's about-face on open source, it was well known that Unity was hindered by sticking with an old and out-of-date Mono, and they were very successful at deflecting the blame for this by throwing the Mono folks under the bus. Any time complaints about game developers' inability to use newer C# features came up, Mono/Xamarin would invariably receive the ire* because Unity couldn't come to an agreement with them about their license and consulting fees. (Mono was open source under LGPL instead of MIT licensed at the time, and Unity had initially bought a commercial license that allowed them exemptions from even the soft copyleft provisions in the LGPL, but the exemption was limited and not, for example, for any and all future versions, too, indefinitely.) Reportedly, they were trying to charge too much (whatever that means) for Unity's attempts to upgrade to the modern versions.

It's now 10+ years later, and now not only Mono (after being relicensed under MIT) but .NET CoreCLR are both available for Unity at no cost, but despite this it still took Unity years before they'd upgraded their C# language support and to a slightly more modern runtime.

Something else to note: Although, LGPL isn't inherently incompatible with commercial use or even use in closed source software, one sticking point was that some of the platforms Unity wished to be able to deploy have developer/publisher restrictions that are incompatible with the soft copyleft terms in the LGPL that require that users (or in this case game developers) be allowed to relink against other versions of the covered software (including, for example, newer releases). Perversely, it's because Unity sought and obtained exemptions to the LGPL that both end users and game developers were hamstrung and kept from being able to upgrade Mono themselves! (It wouldn't have helped on, say, locked down platforms like Nintendo's for example, but certainly would have been viable on platforms without the first-party restrictions, like PC gaming or Android.)

By now, Unity has gone on to pull a lot of other shenanigans with their own pricing that seems to have sufficiently pissed off the game development community, but it should still not be forgotten when they were willing to pass the blame to an open source project over the development/support that the company felt it was entitled to for a price lower than they were told it would cost, and that they themselves were slow to make progress on even when the price of the exemption literally became $0.

* you can find threads with these sorts of comments from during this period right here on HN, too, but it was everywhere

littlecranky67 13 hours ago | parent [-]

> it was well known that Unity was hindered by sticking with an old and out-of-date Mono, and they were very successful at deflecting the blame

So much this. According to a 2023 blog article from Unity [0], Unity uses the Boehm GC. But Mono itself introduced another, generational GC called SGen [1] more than 10 years ago, and it became the default at some point. Unity is just stuck on an old Mono version, essentially missing out on all the changes and improvements that went into Mono after their fork.

[0]: https://unity.com/blog/engine-platform/porting-unity-to-core... [1]: https://www.mono-project.com/docs/advanced/garbage-collector...

rincebrain 8 hours ago | parent [-]

A sibling comment [1] remarks that they play games with raw pointers that are incompatible with the newer GC, so it's not "just" an older runtime that's biting them in the ass.

[1] - https://news.ycombinator.com/item?id=46415568

NooneAtAll3 11 hours ago | parent | prev [-]

my main problem with Unity games is never the speed - it's the outrageous RAM usage