dragonwriter 14 days ago:
> i'm just able to actually read and comprehend what i'm reading rather than perform hype

The evidence of that is lacking.

> so the article is about cuda-core, not whatever you think it's about

cuda.core (a relatively new, rapidly developing library whose entire API is experimental) is one of several things (NVMath is another) mentioned in the article, but the newer, as yet unreleased piece mentioned in both the article and the GTC announcement, and a key part of the "Native Python" in the headline, is the CuTile model [0]: "The new programming model, called CuTile interface, is being developed first for Pythonic CUDA with an extension for C++ CUDA coming later."

> this is bullshit/hype about Python's new JIT

No, as is fairly explicit in the next line after the one you quote, it is about the Nvidia CUDA Python toolchain using in-process compilation rather than relying on shelling out to out-of-process command-line compilers for CUDA code.

[0] The article only gives a fairly vague qualitative description of what CuTile is, but (without having to watch the whole talk from GTC) one can look at this tweet for a preview of what Python code using the model is expected to look like when it is released: https://x.com/blelbach/status/1902113767066103949?t=uihk0M8V...
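To make the in-process point concrete, here is a rough sketch (mine, not anything from the article or the GTC talk) of compiling CUDA C++ to PTX with the NVRTC bindings that ship in the cuda-python package, instead of spawning nvcc as a separate process. The kernel, option strings, and target architecture are placeholders, and real code would check every returned error code:

    # Sketch: in-process CUDA C++ -> PTX compilation via cuda-python's NVRTC bindings.
    from cuda import nvrtc

    kernel_src = b"""
    extern "C" __global__ void scale(float *x, float a, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) x[i] *= a;
    }
    """

    # The out-of-process alternative would be something like
    # subprocess.run(["nvcc", "--ptx", "scale.cu", ...]).
    err, prog = nvrtc.nvrtcCreateProgram(kernel_src, b"scale.cu", 0, [], [])
    opts = [b"--gpu-architecture=compute_80"]  # illustrative target
    err, = nvrtc.nvrtcCompileProgram(prog, len(opts), opts)

    err, ptx_size = nvrtc.nvrtcGetPTXSize(prog)
    ptx = b" " * ptx_size
    err, = nvrtc.nvrtcGetPTX(prog, ptx)
    # ptx now holds compiled PTX ready to load with the driver API,
    # without ever launching an external compiler process.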
almostgotcaught 14 days ago (parent):
> No, as is fairly explicit in the next line after the one you quote, it is about the Nvidia CUDA Python toolchain using in-process compilation rather than relying on shelling out to out-of-process command-line compilers for CUDA code.

my guy, what i am able to read, which you are not, is the source and release notes. i do not need to read tweets and press releases because i know what these things actually are. here are the release notes:

> Support Python 3.13
> Add bindings for nvJitLink (requires nvJitLink from CUDA 12.3 or above)
> Add optional dependencies on CUDA NVRTC and nvJitLink wheels

https://nvidia.github.io/cuda-python/latest/release/12.8.0-n...

do you understand what "bindings" and "optional dependencies on..." mean? it means there's nothing happening in this library and these are... just bindings to existing libraries. specifically, that means you cannot jit python using this thing (except via the python 3.13 jit interpreter) and can only do what you've always already been able to do with e.g. cupy (compile and run C/C++ CUDA code).

EDIT: y'all realize that (1) calling a compiler on your entire source file and (2) loading and running that compiled code is not at all a JIT? y'all understand that, right?
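for reference, here's roughly what "compile and run C/C++ CUDA code from Python" has looked like with cupy for years: hand RawKernel a CUDA C++ string, it runtime-compiles it (via NVRTC) on first launch. the kernel and sizes below are just made up for the example:

    import cupy as cp

    # Plain CUDA C++ source, compiled at runtime on first launch.
    add_kernel = cp.RawKernel(r'''
    extern "C" __global__
    void vec_add(const float* x, const float* y, float* out, int n) {
        int i = blockDim.x * blockIdx.x + threadIdx.x;
        if (i < n) out[i] = x[i] + y[i];
    }
    ''', 'vec_add')

    n = 1 << 20
    x = cp.arange(n, dtype=cp.float32)
    y = cp.arange(n, dtype=cp.float32)
    out = cp.empty_like(x)

    threads = 256
    blocks = (n + threads - 1) // threads
    # Launch signature: kernel(grid, block, args)
    add_kernel((blocks,), (threads,), (x, y, out, cp.int32(n)))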