Archit3ch 6 days ago

I write native audio apps, where every cycle matters. I also need the full compute API instead of graphics shaders.

Is the "Rust -> WebGPU -> SPIR-V -> MSL -> Metal" pipeline robust when it come to performance? To me, it seems brittle and hard to reason about all these translation stages. Ditto for "... -> Vulkan -> MoltenVk -> ...".

Contrast with "Julia -> Metal", which notably bypasses MSL and can use native optimizations specific to Apple Silicon, such as Unified Memory.

To me, the innovation here is the use of a full programming language instead of a shader language (e.g. Slang). Rust supports newtypes, traits, macros, and so on.
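
To make that concrete, here is a minimal sketch in ordinary Rust (the names Db, ToLinear and apply_gain are invented for illustration, not from rust-gpu): a zero-cost decibel newtype plus a trait, the kind of abstraction a shader language has no real equivalent for. In an actual GPU build the powf call would need a no_std substitute.

    #[derive(Clone, Copy)]
    pub struct Db(pub f32); // gain in decibels -- a distinct type, not a bare f32

    pub trait ToLinear {
        fn to_linear(self) -> f32;
    }

    impl ToLinear for Db {
        #[inline]
        fn to_linear(self) -> f32 {
            10f32.powf(self.0 / 20.0) // 10^(dB/20)
        }
    }

    // Generic over anything convertible to a linear gain; monomorphized away,
    // so there is no runtime cost versus passing a raw f32.
    #[inline]
    pub fn apply_gain<G: ToLinear>(sample: f32, gain: G) -> f32 {
        sample * gain.to_linear()
    }

    fn main() {
        assert!((apply_gain(1.0, Db(-6.0)) - 0.501).abs() < 1e-3);
    }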

bigyabai 6 days ago | parent | next [-]

> Is the "Rust -> WebGPU -> SPIR-V -> MSL -> Metal" pipeline robust when it come to performance?

It's basically the same concept as Apple's Clang optimizations, but for the GPU. SPIR-V is an IR, just like the one in LLVM, that can be used for system-specific optimization. In theory, you can keep one codebase and target any number of supported raster GPUs.
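
As a rough illustration of the "one codebase, any backend" point, here is a sketch using the wgpu crate (the exact API varies by version, so treat this as approximate): the same binary picks up Metal on macOS and Vulkan/DX12/GL elsewhere.

    fn main() {
        let instance = wgpu::Instance::default();
        // List whatever adapters this machine exposes, whichever backend they sit behind.
        for adapter in instance.enumerate_adapters(wgpu::Backends::all()) {
            let info = adapter.get_info();
            println!("{} via {:?}", info.name, info.backend);
        }
    }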

The Julia -> Metal stack is comparatively not very portable, which probably doesn't matter if you write Audio Unit plugins. But I could definitely see how the bigger cross-platform devs like u-he or Spectrasonics would value a more complex SPIR-V based pipeline.

Archit3ch 5 days ago | parent [-]

> The Julia -> Metal stack is comparatively not very portable

You can do "Julia -> KernelAbstractions.jl -> Metal", "Julia -> KernelAbstractions.jl -> CUDA", etc. if you need portability. This is already used by some of the numerical libraries in the ecosystem.

bigyabai 5 days ago | parent [-]

Sure, you could do that for any language/SDK if you're patient enough. You could scrap the abstraction layer altogether and litter the whole thing with ifdefs if you're really lazy.

We're definitely going to see people using higher-level libraries to abstract all this away in the future though. We knew this was going to happen a decade ago, so it's been frustrating watching GPU standards fragment while featureset demands consolidate. Nowadays there is basically no upside to writing a raster program with native GPU libraries when you could target a higher-level standard with oftentimes better performance.

tucnak 6 days ago | parent | prev | next [-]

I must agree that for numerical computation (and downstream optimisation thereof) Julia is much better suited than an ostensibly "systems" language such as Rust. Moreover, the compatibility matrix[1] for Rust-CUDA tells a story: there's seemingly very little demand for CUDA programming in Rust, and most parts that people love about CUDA are notably missing. If there were demand, surely it would get more traction; alas, it would appear that actual CUDA programmers have very little appetite for it...

[1]: https://github.com/Rust-GPU/Rust-CUDA/blob/main/guide/src/fe...

Ygg2 6 days ago | parent [-]

It's not just that. See the CUDA EULA at https://docs.nvidia.com/cuda/eula/index.html

Section 1.2 Limitations:

     You may not reverse engineer, decompile or disassemble any portion of the output generated using SDK elements for the purpose of translating such output artifacts to **target a non-NVIDIA platform**.

Emphasis mine.

tucnak 5 days ago | parent [-]

Well, believe it or not, CUDA fans only ever target NVIDIA systems... that's the whole point. However, the EULA itself is completely beside the point.

dvtkrlbs 6 days ago | parent | prev [-]

The thing is, you don't need the WebGPU layer here with rust-gpu, since it is a codegen backend for the compiler. You just compile the Rust MIR to SPIR-V.
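
For anyone who hasn't seen it, a compute entry point in rust-gpu looks roughly like this (a sketch following the project's documented attribute syntax; treat the exact signatures as an assumption). rustc's codegen backend lowers the MIR straight to a SPIR-V module, with no WGSL or WebGPU layer in between.

    #![no_std]
    use spirv_std::spirv;
    use spirv_std::glam::UVec3;

    #[spirv(compute(threads(64)))]
    pub fn main_cs(
        #[spirv(global_invocation_id)] id: UVec3,
        #[spirv(storage_buffer, descriptor_set = 0, binding = 0)] samples: &mut [f32],
    ) {
        let i = id.x as usize;
        if i < samples.len() {
            samples[i] *= 0.5; // e.g. apply a fixed gain
        }
    }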