nromiun 3 days ago:
Another big problem I see with LLMs is that they can't make precise adjustments to an answer. If you make a request, they will give you some good-enough code, but if you spot a bug and want to fix only that section, they will regenerate most of the code instead (along with a copious amount of apologies). And the new code will have new problems of its own, so you are back to square one. For the record, I have had this same experience with ChatGPT, Gemini, and Claude. Most of the time I had to give up and write the code from scratch.
zozbot234 3 days ago (reply):
You're absolutely right! It's just a large language model; there's no guarantee whatsoever that it will understand the fine detail of what you're asking, so requests like "please stay within this narrow portion of the code and don't touch the rest" are a bit of a non-starter.