bossyTeacher 3 days ago

"Coding performed by AI is at a world-class level". Once I hit that line, I stopped reading. This tells me this person didn't do proper research on this matter.

calebm 3 days ago | parent | next [-]

I recently had ChatGPT take the mathematical graph-rendering logic I had written in vanilla JS and rewrite it in GLSL. The whole thing took about an hour (it required a few prompts). That is world-class level in my opinion.
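
(Not the actual code, but a minimal sketch of what that kind of port looks like: the per-pixel math moves from a JS loop over an image buffer into a fragment shader that runs once per pixel on the GPU. Names like renderJS and uResolution, and the sin(x) curve, are invented for illustration.)

    // Before (sketch): per-pixel plotting math in a vanilla JS loop.
    function renderJS(ctx, width, height, f) {
      const img = ctx.createImageData(width, height);
      for (let px = 0; px < width; px++) {
        const x = (px / width) * 4 - 2;                 // pixel -> math coords [-2, 2]
        const py = Math.round((1 - (f(x) + 2) / 4) * height);
        if (py >= 0 && py < height) {
          const i = (py * width + px) * 4;
          img.data[i] = 255;                            // red channel
          img.data[i + 3] = 255;                        // alpha
        }
      }
      ctx.putImageData(img, 0, 0);
    }

    // After (sketch): the same curve evaluated per pixel in a GLSL fragment shader.
    const fragSrc = `
      precision mediump float;
      uniform vec2 uResolution;
      void main() {
        vec2 p = gl_FragCoord.xy / uResolution * 4.0 - 2.0; // pixel -> math coords
        float d = abs(p.y - sin(p.x));                      // distance to the curve
        float line = 1.0 - smoothstep(0.0, 0.05, d);        // anti-aliased stroke
        gl_FragColor = vec4(line, 0.0, 0.0, 1.0);
      }`;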

bossyTeacher 3 days ago | parent | next [-]

If I told people that I could write code at a world-class level, yet in some of my reviews I made junior mistakes, made up functions or dependencies that don't exist, or was unable to learn from my mistakes, I would be put on a PIP immediately. And after a while, fired. That is the standard LLMs should be held to when you use the words "world class".

mcv 3 days ago | parent | prev [-]

I'm currently trying to get Claude Sonnet 4.5 to produce a graph rendering algorithm, and while it's producing results, they're not the right results. I should probably do this myself and let the AI handle just the boilerplate code.
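
(For what it's worth, "boilerplate" in this domain mostly means the mechanical WebGL setup: well-trodden code that is easy to verify and therefore safe to delegate. A minimal sketch using the standard WebGL API:)

    // Compile and link a WebGL shader program: pure boilerplate.
    function createProgram(gl, vertSrc, fragSrc) {
      function compile(type, src) {
        const shader = gl.createShader(type);
        gl.shaderSource(shader, src);
        gl.compileShader(shader);
        if (!gl.getShaderParameter(shader, gl.COMPILE_STATUS)) {
          throw new Error(gl.getShaderInfoLog(shader));
        }
        return shader;
      }
      const program = gl.createProgram();
      gl.attachShader(program, compile(gl.VERTEX_SHADER, vertSrc));
      gl.attachShader(program, compile(gl.FRAGMENT_SHADER, fragSrc));
      gl.linkProgram(program);
      if (!gl.getProgramParameter(program, gl.LINK_STATUS)) {
        throw new Error(gl.getProgramInfoLog(program));
      }
      return program;
    }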

calebm 2 days ago | parent [-]

I have consistently had good results when I understand the problem and outsource the details to AI, but bad results when I try to have it work without me understanding the problem.

mcv 2 days ago | parent [-]

Yeah, but then what do you need the AI for? It's programming it myself that helps me understand all the intricacies of the problem, and that's exactly the part that gets cut out by outsourcing to AI. The AI is not a "world-class programmer" if it still needs me to tell it the solution to the problem.

cognivore 3 days ago | parent | prev [-]

That's because AI allows poor programmers to appear to be good programmers. In theory that's a good thing, since otherwise they'd be writing crap you'd have to code-review, but their understanding of what good code is stays poor, so you're back to having to vet it all anyway. At least you can use AI for that. Except you can't, not without vetting it too.

I literally just today watched my entire team descend into "Release Hell" when an obscure bug in business logic already delivered to thousands of customers surfaced right as we were about to ship a release. An obscure bug, but huge impact on the customers: it actually ended up charging people more than it should have. The team members (and yes, not the leads) had used AI to write that bug, and then tried to prompt their way out of it. It turned into a giant game of whack-a-mole, as errors were introduced into other business logic that thankfully got caught by tests. Then it was discovered that they had never understood the code; they could only maintain it with prompts.

Let that sink in. They don't understand what they're doing; they just massage the spec into prompts, and when the result appears to work and passes the tests, they call it good.

We looked at the prompts. They were insane. They had just kept adding more specification to the end, and if you read through it all, it contained contradictory logic, which I would have hoped the AI would point out, but nope. It was actually easier for me and another senior to rewrite the logic as pseudo-code, cutting the size down by literally three quarters, and eventually we got it all working as expected.
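
(An invented example, not from the actual incident, of how prose specs hide the contradictions that writing the rules as code exposes:)

    // Prompt-style spec (invented for illustration):
    //   "Premium customers get a 10% discount. Orders over $100 are
    //    always charged the standard rate. Premium customers with
    //    orders over $100 get the discount."
    // Sentences two and three contradict each other; prose hides it.
    // Writing the rule as code forces a single, reviewable answer:
    function price(total, isPremium) {
      if (isPremium) return total * 0.90; // the premium discount wins, even over $100
      return total;                       // everyone else pays the standard rate
    }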

So that's the future, girls and boys: people putting together code they don't understand with AI, code they can only maintain with AI, and then code they can't fix with AI, because they can't prompt accurately enough, because English sucks at being precise.