h4kunamata 5 hours ago

>Requirements

>A will to live (optional but recommended)

>LLVM is NOT required. BarraCUDA does its own instruction encoding like an adult.

>Open an issue if theres anything you want to discuss. Or don't. I'm not your mum.

>Based in New Zealand

Oceania sense of humor is like no other haha

The project owner strongly emphasize the no LLM dependency, in a world of AI slope this is so refreshing.

The sheer amount of knowledge required to even start such a project is really something else, and proving the manual wrong at the machine-language level is something else entirely.

When it comes to AMD, "no CUDA support" is the biggest "excuse" to join NVIDIA's walled garden.

Godspeed to this project; the more competition there is, the less NVIDIA can keep destroying PC part pricing.

querez 5 hours ago | parent | next [-]

> The project owner strongly emphasize the no LLM dependency, in a world of AI slope this is so refreshing.

The project owner is talking about LLVM, a compiler toolkit, not an LLM.

kmaitreys 4 hours ago | parent [-]

It's actually quite easy to spot whether LLMs were used.

A very small total number of commits, AI-like documentation and code comments.

But even if LLMs were used, the overall project does feel steered by a human, given some decisions like not using bloated build systems. If this actually works then that's great.

butvacuum 4 hours ago | parent | next [-]

Since when is squashing noisome commits an AI activity instead of good manners?

natvert 3 hours ago | parent | prev [-]

Says the clawdbot

wild_egg 5 hours ago | parent | prev | next [-]

This project most definitely has significant AI contributions.

Don't care though. AI can work wonders in skilled hands, and I'm looking forward to using this project.

ZaneHam 4 hours ago | parent [-]

Hello! I didn't realise my project was posted here but I can actually answer this.

I do use LLMs (specifically Ollama), particularly for test summarisation and writing up some boilerplate, and I've also used Claude/ChatGPT on the web when my free tier allows. It's good for when I hit problems such as AMD SOP prefixes being different than I expected.

magicalhippo 4 hours ago | parent | prev | next [-]

> Oceania sense of humor is like no other haha

Reminded me of the beached whale animated shorts[1].

[1]: https://www.youtube.com/watch?v=ezJG0QrkCTA&list=PLeKsajfbDp...

ekianjo 3 hours ago | parent | prev | next [-]

LLVM, nothing to do with LLMs

dboreham 3 hours ago | parent | prev | next [-]

Unrelated: just returned from a month in NZ. Amazing people.

colordrops 3 hours ago | parent | prev | next [-]

I'm still blown away that AMD hasn't made it their top priority. I've said this for years. If I was AMD I would spend billions upon billions if necessary to make a CUDA compatibility layer for AMD. It would certainly still pay off, and it almost certainly wouldn't cost that much.

woctordho an hour ago | parent | next [-]

They've been doing it all along, and it's called HIP. Nowadays it works pretty well on a few supported GPUs (CDNA 3 and RDNA 4).
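For context, a large part of HIP porting is mechanical renaming: the ROCm hipify tools (hipify-perl, hipify-clang) rewrite CUDA API calls to their HIP equivalents. A toy sketch of the idea in Python, with a handful of real renames (the actual tools handle hundreds of cases plus header and type changes):

```python
import re

# A few real CUDA -> HIP renames; hipify-perl/hipify-clang cover far more.
CUDA_TO_HIP = {
    "cudaMalloc": "hipMalloc",
    "cudaMemcpy": "hipMemcpy",
    "cudaFree": "hipFree",
    "cudaMemcpyHostToDevice": "hipMemcpyHostToDevice",
    "cuda_runtime.h": "hip/hip_runtime.h",
}

def hipify(source: str) -> str:
    # Match longest names first so cudaMemcpyHostToDevice isn't
    # clobbered by the shorter cudaMemcpy.
    names = sorted(map(re.escape, CUDA_TO_HIP), key=len, reverse=True)
    pattern = re.compile("|".join(names))
    return pattern.sub(lambda m: CUDA_TO_HIP[m.group(0)], source)

print(hipify("cudaMemcpy(dst, src, n, cudaMemcpyHostToDevice);"))
# -> hipMemcpy(dst, src, n, hipMemcpyHostToDevice);
```

In practice you'd run `hipify-perl file.cu` from the ROCm toolchain; code written against the CUDA runtime API often ports this way with little hand editing, while hand-tuned PTX or library-dependent code is where it gets hard.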

ddtaylor 2 hours ago | parent | prev | next [-]

AMD did hire someone to do this and IIRC he did, but they were afraid of Nvidia lawyers and he released it outside of the company?

andy_ppp 3 hours ago | parent | prev [-]

It's a moving target. Honestly, just get PyTorch working fully (loads of stuff just doesn't work on AMD hardware) and make it work on all graphics cards from a certain generation onwards. The support matrix of GFX cards, architectures and software versions is quite astounding, but yes, they should at least have that working, plus equivalent custom kernels.

samrus 4 hours ago | parent | prev | next [-]

> >LLVM is NOT required. BarraCUDA does its own instruction encoding like an adult.

> The project owner strongly emphasize the no LLM dependency, in a world of AI slope this is so refreshing.

"Has tech literacy deserted the tech insider websites of Silicon Valley? I will not believe it is so. ARE THERE NO TRUE ENGINEERS AMONG YOU?!"

bigyabai 4 hours ago | parent | prev [-]

> and prove the manual wrong on the machine language level

I'll be the party pooper here, I guess. The manual is still right, and no amount of reverse-engineering will fix the architecture AMD chose for their silicon. It's absolutely possible to implement a subset of CUDA features on a raster GPU, but we've been doing that since OpenCL and CUDA is still king.

The best thing the industry can do is converge on a GPGPU compute standard that doesn't suck. But Intel, AMD and Apple are all at-odds with one another so CUDA's hedged bet on industry hostility will keep paying dividends.