▲ | eatsyourtacos 4 days ago
Sounds like you are using it entirely wrong then... Just yesterday I uploaded a few files of my code (each about 3000+ lines) into a GPT-5 project and asked for assistance in changing a lot of database calls into a caching system, and it proceeded to create a full 500-line file with all the caching objects and functions I needed. Then we went section by section through the main 3000+ line file to change parts of the database queries into the cached version. [I didn't even really need to do this; it basically detected everything I would need to change at once and gave me most of it, but I wanted to do it in smaller chunks so I was sure what was going on.]

Could I have done this without AI? Sure, but this was basically like having a second pair of eyes validating what I'm doing, and it saved me a bunch of time since I'm not writing everything from scratch. I have the base template of what I need and can improve it from there. All the code it wrote was perfectly clean, and this is not a one-off; I've been using it daily for the last year for everything. It almost completely replaces my need to have a junior developer helping me.
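[For readers wondering what such a layer looks like, here is a minimal sketch of the general read-through caching pattern described above, written in Java. The names (QueryCache, the loader function, invalidate) are invented for illustration; this is not the commenter's actual generated code, whose language and structure aren't specified.]

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.function.Function;

    // Read-through cache: call sites ask the cache instead of the database,
    // and the cache falls back to the original query on a miss.
    public class QueryCache<K, V> {
        private final Map<K, V> cache = new ConcurrentHashMap<>();
        private final Function<K, V> loader; // the original database call

        public QueryCache(Function<K, V> loader) {
            this.loader = loader;
        }

        // Read path: return the cached value, or run the query and remember the result.
        public V get(K key) {
            return cache.computeIfAbsent(key, loader);
        }

        // Write path: evict the stale entry so the next read hits the database again.
        public void invalidate(K key) {
            cache.remove(key);
        }
    }

[Converting a call site then amounts to replacing something like db.loadUserById(id) with userCache.get(id), where userCache was built as new QueryCache<>(db::loadUserById) — which matches the section-by-section replacement workflow described above.]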
▲ | jayd16 4 days ago | parent | next [-]
You mean it turned on Hibernate, or it wrote some custom-rolled in-app cache layer? I usually find these kinds of caching solutions to be extremely complicated (well, the cache invalidation part), and I'm a bit curious what approach it took. You mention it only updated a single file, so I guess it isn't touching the session handling; either sticky sessions are not assumed or something else is going on. So then how do you invalidate the app-level cache for a user across all machine instances? I have a lot of trauma from the old web days of people figuring this out, so I'm really curious to hear how this AI one-shotted it in a single file.
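[To make the concern concrete: with an in-process cache per server, a write handled by one instance leaves stale entries on every other instance unless something broadcasts the eviction. Below is a hedged sketch of one common shape of a fix; MessageBus is a hypothetical stand-in for whatever pub/sub mechanism (Redis, a message broker, etc.) a real deployment would use, and nothing here is claimed about the commenter's actual solution.]

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.function.Consumer;

    // Hypothetical pub/sub abstraction; a real system would back this with
    // Redis pub/sub, a message broker, or similar.
    interface MessageBus {
        void publish(String topic, String message);
        void subscribe(String topic, Consumer<String> handler);
    }

    // Each server instance keeps its own local cache but listens for
    // invalidation messages published by its peers.
    class DistributedUserCache {
        private final Map<String, String> local = new ConcurrentHashMap<>();
        private final MessageBus bus;

        DistributedUserCache(MessageBus bus) {
            this.bus = bus;
            // Evict the local entry whenever any instance announces a change.
            bus.subscribe("user-cache-invalidation", local::remove);
        }

        String get(String userId) {
            return local.get(userId);
        }

        void put(String userId, String value) {
            local.put(userId, value);
        }

        // On a write, evict locally and tell every other instance to do the same.
        void invalidate(String userId) {
            local.remove(userId);
            bus.publish("user-cache-invalidation", userId);
        }
    }

[The hard part alluded to in the comment is everything around this sketch: delivery guarantees, races between the eviction message and in-flight reads, and whether a per-user eviction is even the right granularity.]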
▲ | lubesGordi 4 days ago | parent | prev | next [-]
I know, I don't understand what problems people are having with getting usable code. Maybe the models don't work well with certain languages? They work great with C++. I've gotten thousands of lines of obviously correct code that compiles cleanly on the first try from ChatGPT, Gemini, and Claude. I've been assuming the people who are having issues are junior devs who don't know the vocabulary well enough yet to steer these things in the right direction. I wouldn't say I'm a prompt wizard, but I do understand context and the surface area of the things I'm asking the LLM to do.
▲ | rootnod3 4 days ago | parent | prev [-]
How large is that codebase overall? Would you be able to let the LLM look at the entirety of it without it crapping out? It definitely sounds nice to go in and change a few queries, but did it also consider the potential impacts on other parts of the source or on adjacent running systems? The query itself here might not be the best example, but you get what I mean.