thrance 6 days ago

I had an awful, terrible experience with GPT-5 a few days ago that made me remember why I don't use LLMs, and renewed my promise not to use them for at least a year more.

I am a relative newbie to GPU development and was writing a simple 2D renderer with WebGPU and its Rust implementation, wgpu. The goal was to draw a few textures to an offscreen buffer, then draw that buffer to the screen with a CRT effect applied.
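
Roughly, each frame was two passes, something like this (a sketch against a wgpu 0.19-style API, so struct fields may differ on other versions; all the handles like offscreen_view, surface_view, sprite_pipeline, crt_pipeline and the bind groups are made-up placeholders):

    let mut encoder = device.create_command_encoder(&wgpu::CommandEncoderDescriptor { label: None });

    // Pass 1: draw the sprite textures into an intermediate offscreen texture.
    {
        let mut pass = encoder.begin_render_pass(&wgpu::RenderPassDescriptor {
            label: Some("scene"),
            color_attachments: &[Some(wgpu::RenderPassColorAttachment {
                view: &offscreen_view, // view of the intermediate texture (placeholder)
                resolve_target: None,
                ops: wgpu::Operations {
                    load: wgpu::LoadOp::Clear(wgpu::Color::BLACK),
                    store: wgpu::StoreOp::Store,
                },
            })],
            depth_stencil_attachment: None,
            timestamp_writes: None,
            occlusion_query_set: None,
        });
        pass.set_pipeline(&sprite_pipeline);
        pass.set_bind_group(0, &sprite_bind_group, &[]);
        pass.draw(0..6, 0..1); // one quad per sprite, details elided
    }

    // Pass 2: sample the offscreen texture through the CRT shader onto the window surface.
    {
        let mut pass = encoder.begin_render_pass(&wgpu::RenderPassDescriptor {
            label: Some("crt"),
            color_attachments: &[Some(wgpu::RenderPassColorAttachment {
                view: &surface_view, // current swapchain frame (placeholder)
                resolve_target: None,
                ops: wgpu::Operations {
                    load: wgpu::LoadOp::Clear(wgpu::Color::BLACK),
                    store: wgpu::StoreOp::Store,
                },
            })],
            depth_stencil_attachment: None,
            timestamp_writes: None,
            occlusion_query_set: None,
        });
        pass.set_pipeline(&crt_pipeline);
        pass.set_bind_group(0, &offscreen_bind_group, &[]); // binds the offscreen texture + sampler
        pass.draw(0..6, 0..1); // fullscreen quad
    }

    queue.submit(Some(encoder.finish()));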

I got 99% of the way there on my own, reading the guide, but then got stumped on a runtime error message. Something like "Texture was destroyed while its semaphore wasn't released". Looking around my code, I see no textures ever being released. I decide to give the LLM a go and ask it to help me, and it very enthusiastically gives me a few things to try.

I try them, nothing works. It corrects itself with more things to try, more modifications to my code, each time giving a plausible explanation as to what went wrong, each time extra confident that it has the issue pinned down this time. After maybe two very frustrating hours, I tell it to go fuck itself, close the tab, and switch my brain on again.

10 minutes later, I notice my buffer's format doesn't match the one used in the render pass that draws to it. Correct that, compile, and it works.
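
For anyone who hits the same wall: the format you create the intermediate texture with has to match the format of the color target declared on the pipeline that renders into it. A sketch of the fix, again assuming a wgpu 0.19-style API with illustrative names:

    // One shared constant so the texture and the pipeline can't drift apart.
    const OFFSCREEN_FORMAT: wgpu::TextureFormat = wgpu::TextureFormat::Rgba8UnormSrgb;

    let offscreen = device.create_texture(&wgpu::TextureDescriptor {
        label: Some("offscreen"),
        size: wgpu::Extent3d { width: 640, height: 480, depth_or_array_layers: 1 },
        mip_level_count: 1,
        sample_count: 1,
        dimension: wgpu::TextureDimension::D2,
        format: OFFSCREEN_FORMAT, // must match the pipeline's color target below
        usage: wgpu::TextureUsages::RENDER_ATTACHMENT | wgpu::TextureUsages::TEXTURE_BINDING,
        view_formats: &[],
    });

    let sprite_pipeline = device.create_render_pipeline(&wgpu::RenderPipelineDescriptor {
        label: Some("sprites"),
        layout: None, // let wgpu infer the layout from the shader
        vertex: wgpu::VertexState {
            module: &shader,
            entry_point: "vs_main",
            buffers: &[],
        },
        fragment: Some(wgpu::FragmentState {
            module: &shader,
            entry_point: "fs_main",
            targets: &[Some(wgpu::ColorTargetState {
                // The mismatch lived here: this has to be the texture's format.
                format: OFFSCREEN_FORMAT,
                blend: None,
                write_mask: wgpu::ColorWrites::ALL,
            })],
        }),
        primitive: wgpu::PrimitiveState::default(),
        depth_stencil: None,
        multisample: wgpu::MultisampleState::default(),
        multiview: None,
    });

Pinning both to a single constant makes that particular mismatch hard to reintroduce.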

I genuinely don't understand what those pro-LLM-coding guys are doing that they find AIs helpful. I can manage the easy parts of my job on my own, and it fails miserably on the hard parts. Are those people only writing boilerplate all day long?

spopejoy 3 days ago | parent

It's more that if there aren't 10 Stack Overflow threads on your exact problem, then you're out of luck.

LLMs are 100% useless for:

- non-mainstream languages

- langs without massive corpuses of online tutorials and SOs

I was going to add "langs without large online open-source codebases", but if they don't have extensive, reliable SOs (Haskell, Solidity, even Rust) then LLMs struggle to the point of uselessness, because LLMs don't actually trawl through random codebases and magically turn them into cookbooks and tutorials.

Indeed, an emergent problem with LLMs is that they are going to recreate the 90s/2000s PL dark ages, when "never write a new language" was the mantra. Now it will be because a new lang can never hope to get sufficient LLM training data.

EDIT: to your problem, it isn't the lang but the domain. If experts aren't dumping huge volumes of explanatory text online, an LLM can't help you even in a mainstream lang.