PostOnce 7 hours ago:
It sure seems like it. That, or delirious, misguided fans who want AI to succeed even though there would be no benefit in it for them if it did; they'd just be serfs again.

I watched a man struggle for 3 hours last night to "prove me wrong" that Gemini 3 Pro can convert a 3000-line C program to Python. It can't do it. It can't even do half of it; it doesn't understand why it failed, it's wrong about what failed, and it can't fix the output even when you tell it exactly what it did wrong. In the end he had an 80-line Python file that didn't work, and even if it had worked, it's 80 lines: of course it can't do what the 3000-line C program does. So even if it had produced a working program, which it didn't, it would have produced a program that did not solve the problems it was asked to solve. The AI can't even guess that its 80-line output is probably wrong based on the size alone, the way a human instantly would.

AI doesn't work. It hasn't worked, and it very likely is not going to work. That's based on the empirical evidence and repeatable tests we are surrounded by. The guy last night said that sentiment was copium, before he went on a 3-hour odyssey that ended in failure. It'll be sad to watch that play out in the economy at large.
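The size sanity check described above is easy to make mechanical. A minimal sketch in Python, assuming the original and the port are files on disk; the filenames and the 20% threshold are illustrative, not from the thread:

    from pathlib import Path

    def count_code_lines(path: str) -> int:
        # Count non-blank lines; good enough for an order-of-magnitude check.
        return sum(1 for line in Path(path).read_text().splitlines() if line.strip())

    def port_looks_truncated(original: str, port: str, min_ratio: float = 0.2) -> bool:
        # Python is terser than C, but an 80-line port of a 3000-line
        # program fails any plausible ratio (80/3000 is roughly 0.03).
        return count_code_lines(port) < min_ratio * count_code_lines(original)

    # Hypothetical filenames for illustration.
    if port_looks_truncated("program.c", "program.py"):
        print("Port is implausibly small -- almost certainly incomplete.")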
sedawkgrep an hour ago:
This is an interesting thread to me; as a total hack programmer, my experience with AI has been both awesome and pathetic.

Recently I tried working with both ChatGPT and Gemini (AI Studio) to take a really badly written PHP website and refactor it into MVC components. I worked strictly through the web UI, as this only involved a few files.

While both provided great guidance on how to approach disentangling this monolithic code, GPT failed miserably, generating code that was syntactically incorrect and then doubling down on it, insisting it was correct and that the errors lay elsewhere. It literally generated code that lacked closing parentheses and brackets.

In contrast, Gemini generated a perfectly working MVC version from the start. In both instances I intentionally kept the model on track: separate the code into MVC only, and do NOT optimize anything. Gemini got it working on the first try, and I've since taken it through subsequent refactorings, where it's done superbly.

So I can't speak to how well this works for large code bases, much less agentically. (My initial, very focused MVC refactor was about 1300 lines.) But when given a very specific task with strict guidance and rules, my results with Gemini were fantastic.
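For readers unfamiliar with the refactor being described, the target shape is just a model layer (data access), a view layer (rendering), and a controller gluing the two together. A minimal sketch of that shape, in Python rather than the project's PHP, with all names hypothetical:

    class ArticleModel:
        # Data access only -- no HTML, no request handling.
        def __init__(self, rows: list[dict]):
            self._rows = rows

        def find(self, article_id: int) -> dict | None:
            return next((r for r in self._rows if r["id"] == article_id), None)

    class ArticleView:
        # Rendering only -- takes plain data, returns markup.
        @staticmethod
        def render(article: dict) -> str:
            return f"<h1>{article['title']}</h1><p>{article['body']}</p>"

    class ArticleController:
        # Glue: maps a request to a model lookup and a view render.
        def __init__(self, model: ArticleModel):
            self._model = model

        def show(self, article_id: int) -> str:
            article = self._model.find(article_id)
            return ArticleView.render(article) if article else "<p>Not found</p>"

    # Usage, with in-memory rows standing in for a database.
    app = ArticleController(ArticleModel([{"id": 1, "title": "Hello", "body": "..."}]))
    print(app.show(1))

The point of the "MVC only, no optimization" constraint is that after the split, each class changes for exactly one reason, which is what makes the subsequent refactorings tractable.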
user34283 6 hours ago:
AI works. Evidence: my side project, on which I've spent some 50 hours.

I'm not sure what your "empirical evidence and repeatable tests" are supposed to be. The AI failing to convert a 3000-line C program to Python, in a test you probably designed to fail, doesn't strike me as particularly relevant.

Also, I suspect the AI could guess that 80 lines of Python aren't correctly replicating 3000 lines of C, if you prompted it correctly.