Archit3ch | 2 days ago
My use case is realtime audio processing (VST plugins). Metal.jl can be used to write GPU kernels in Julia that target an Apple Silicon GPU. Or you can use KernelAbstractions.jl to write a kernel once in a high-level, CUDA-like language and target NVIDIA/AMD/Apple/Intel GPUs. For best performance, you'll want to take advantage of vendor-specific hardware, like Tensor Cores on NVIDIA GPUs or unified memory on Apple Silicon. You also get an ever-expanding set of Julia GPU libraries; in my experience, these are more focused on the numerical side than on ML. If you want to compile an executable for an end user, that functionality was added in Julia 1.12, which hasn't been released yet. Early tests with the release candidate suggest that it works, but I would advise waiting for a better developer experience.
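
(Not the parent's plugin code, just a minimal sketch of the write-once style being described; the gain operation, buffer size, and names are made up for illustration. The same kernel definition runs on the CPU backend or, by swapping the backend and array type, on Metal/CUDA/etc.)

    # Per-sample gain kernel written once with KernelAbstractions.jl
    using KernelAbstractions

    @kernel function gain!(out, @Const(inp), g)
        i = @index(Global)               # one work-item per sample
        @inbounds out[i] = g * inp[i]
    end

    backend = CPU()                      # CPU backend; no GPU required
    inp = rand(Float32, 1024)            # a dummy audio buffer
    out = similar(inp)
    gain!(backend)(out, inp, 0.5f0; ndrange = length(inp))
    KernelAbstractions.synchronize(backend)

    # On an Apple Silicon GPU the same kernel runs unchanged:
    #   using Metal
    #   backend = MetalBackend()
    #   inp = MtlArray(rand(Float32, 1024)); out = similar(inp)
    #   gain!(backend)(out, inp, 0.5f0; ndrange = length(inp))
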
larme | 2 days ago | parent
I'm very interested in this field (realtime audio + GPU programming). How do you deal with the latency? Do you send single or multiple vectors/buffers to the GPU? Also, since samples within one channel need to be processed sequentially, does that mean mono audio processing won't benefit much from GPU programming? Or maybe you are doing spectral signal processing?