dismalaf 14 days ago

> But to have a direct pipeline to the GPU via Python

Have you ever used a GPU API (CUDA, OpenCL, OpenGL, Vulkan, etc...) with a scripting language?

It's cool that Nvidia has built a bit of an ecosystem around it, but it won't replace C++ or Fortran, and you can't simply drop in "normal" Python code and have it run on the GPU. CUDA is still fundamentally its own thing.

There have also been CUDA bindings for scripting languages for at least 15 years... Most people will probably still use Torch or higher-level things built on top of it.
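
PyCUDA is the classic example of such bindings; a minimal, untested sketch of the long-standing pattern (the kernel is still CUDA C++, passed to the compiler as a Python string):

    import numpy as np
    import pycuda.autoinit  # creates a CUDA context on the default device
    import pycuda.driver as drv
    from pycuda.compiler import SourceModule

    # The kernel itself is CUDA C++ embedded in a Python string
    mod = SourceModule("""
    __global__ void add(float *out, const float *x, const float *y)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        out[i] = x[i] + y[i];
    }
    """)
    add = mod.get_function("add")

    x = np.random.randn(1024).astype(np.float32)
    y = np.random.randn(1024).astype(np.float32)
    out = np.empty_like(x)
    # 4 blocks x 256 threads covers all 1024 elements exactly
    add(drv.Out(out), drv.In(x), drv.In(y), block=(256, 1, 1), grid=(4, 1))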

Also, here's Nvidia's own advertisement and some instructions for Python on their GPUs:

- https://developer.nvidia.com/cuda-python

- https://developer.nvidia.com/how-to-cuda-python

Reality is kind of boring, and the article posted here is just clickbait.

dragonwriter 14 days ago | parent | next

> It's cool that Nvidia made a bit of an ecosystem around it but it won't replace C++ or Fortran and you can't simply drop in "normal" Python code and have it run on the GPU.

While it's not exactly normal Python code, there are Python libraries that let you write GPU kernels in internal DSLs that are normal-ish Python (e.g., Numba for CUDA specifically, via the @cuda.jit decorator, or Taichi, which runs the same application code on multiple backends: Vulkan, Metal, CUDA, OpenGL, OpenGL ES, and CPU).
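
For a flavour of what that looks like, here is a minimal, untested sketch using Numba's documented @cuda.jit API, where the kernel body is (restricted) Python rather than C++:

    import numpy as np
    from numba import cuda

    @cuda.jit
    def add(out, x, y):
        i = cuda.grid(1)  # absolute index of this thread
        if i < out.size:
            out[i] = x[i] + y[i]

    x = np.random.randn(1024).astype(np.float32)
    y = np.random.randn(1024).astype(np.float32)
    out = np.empty_like(x)

    threads = 256
    blocks = (out.size + threads - 1) // threads
    add[blocks, threads](out, x, y)  # Numba handles the host/device copies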

Apparently, Nvidia is now doing this first party in CUDA Python, including adding a new paradigm for CUDA code (CuTile) that is going to land in Python before C++; possibly it's trying to get ahead of things like Taichi (which, because it is cross-platform, commoditizes the underlying GPU).

> Also, here's Nvidia's own advertisement for Python on their GPUs

That (and the documentation linked there) does not cover the new native functionality announced at GTC; existing CUDA Python has kernels written in C++ in inline strings.
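
For reference, that existing string-based flow, roughly following the saxpy sample in NVIDIA's cuda-python documentation (the target architecture flag is illustrative; launching the resulting PTX goes through the driver API and is omitted here):

    from cuda import nvrtc

    # The kernel is CUDA C++ in an inline Python string
    src = """
    extern "C" __global__ void add(float *out, const float *x, const float *y, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) out[i] = x[i] + y[i];
    }
    """

    # Runtime-compile the string to PTX with NVRTC
    err, prog = nvrtc.nvrtcCreateProgram(src.encode(), b"add.cu", 0, [], [])
    opts = [b"--gpu-architecture=compute_75"]  # illustrative target
    err, = nvrtc.nvrtcCompileProgram(prog, 1, opts)
    err, ptx_size = nvrtc.nvrtcGetPTXSize(prog)
    ptx = b" " * ptx_size
    err, = nvrtc.nvrtcGetPTX(prog, ptx)
    # ptx now holds the compiled code; loading and launching it uses the
    # driver API (cuModuleLoadData / cuLaunchKernel), omitted here.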

freeone3000 14 days ago | parent | prev | next

OpenCL and OpenGL are basically already scripting languages that you happen to type into a C compiler. The CUDA advantage was actually having meaningful types and compilation errors, without the intense boilerplate of Vulkan. But this is 100% a Python-for-CUDA-C replacement on the GPU, for people who prefer a slightly different bracketing syntax.
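
That "C source handed to a compiler at runtime" pattern is easy to see from Python as well; a minimal, untested PyOpenCL sketch:

    import numpy as np
    import pyopencl as cl

    ctx = cl.create_some_context()
    queue = cl.CommandQueue(ctx)

    # OpenCL C source, compiled by the driver at runtime
    prog = cl.Program(ctx, """
    __kernel void add(__global float *out,
                      __global const float *x,
                      __global const float *y)
    {
        int i = get_global_id(0);
        out[i] = x[i] + y[i];
    }
    """).build()

    x = np.random.randn(1024).astype(np.float32)
    y = np.random.randn(1024).astype(np.float32)
    out = np.empty_like(x)

    mf = cl.mem_flags
    x_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=x)
    y_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=y)
    out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, out.nbytes)

    prog.add(queue, x.shape, None, out_buf, x_buf, y_buf)
    cl.enqueue_copy(queue, out, out_buf)  # read the result back to the host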

dismalaf 14 days ago | parent

> But this is 100% a Python-for-CUDA-C replacement on the GPU

Ish. It's a Python maths library, an eDSL, and a collection of curated libraries. It's not significantly different from things like NumPy, Triton, etc., apart from being made by Nvidia and bundled with their tools.

gymbeaux 12 days ago | parent

I’m mainly interested in the performance implications. The less shit between me and the hardware, theoretically the better the performance. In a world where these companies want to build nuclear power plants just to power NVIDIA GPU data centers, I feel like we need to be optimizing the code where possible.

pjmlp 14 days ago | parent | prev

Yes: shading languages, which are more productive and avoid the gotchas of those languages, since they were designed from the ground up for compute devices.

The polyglot nature of CUDA is one of its plus points versus OpenCL's original "we only do a C99 dialect around here" stance, which OpenCL held onto until it was too late.