filoeleven 2 days ago:
> [The cause] is almost always a person trying to solve a problem and just not knowing what they don't know because they are learning as they go.

Isn't that what "using an LLM" is supposed to solve in the first place?
kaydub a day ago:
With the right prompt, the LLM will solve it in the first place. But this is an issue of not knowing what you don't know, which makes it difficult to write the right prompt. One way around this is to spawn more agents with specific tasks, or to have an agent that is ONLY focused on finding patterns or code where you're reinventing the wheel. I often have one agent/prompt where I build things, and then another agent/prompt whose only job is to find code smells, bad patterns, and outdated libraries, and to file issues or fix those problems. A rough sketch of that split is below.
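A minimal sketch of that builder/reviewer split, assuming a generic `call_llm` helper rather than any particular vendor API; the only point illustrated is that the two roles get separate, narrowly scoped prompts:

```python
# Sketch of the two-agent workflow described above. `call_llm` is a
# placeholder for whatever model or agent framework you actually use.

def call_llm(system_prompt: str, user_prompt: str) -> str:
    """Stand-in for a real LLM call (hosted API, local model, etc.)."""
    raise NotImplementedError

BUILDER_PROMPT = (
    "You are implementing features. Write the code requested, "
    "reusing existing modules where possible."
)

REVIEWER_PROMPT = (
    "You are a code reviewer. Your ONLY job is to find code smells, "
    "bad patterns, reinvented wheels, and outdated libraries in the code "
    "you are given. List each finding as an issue with a suggested fix. "
    "Do not add new features."
)

def build_then_review(task: str) -> tuple[str, str]:
    """Run the builder on the task, then the reviewer on the builder's output."""
    implementation = call_llm(BUILDER_PROMPT, task)
    review = call_llm(REVIEWER_PROMPT, implementation)
    return implementation, review
```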
Eisenstein 2 days ago:
1. LLMs can't watch over someone and warn them when they are about to make a mistake.
2. LLMs are obsequious.
3. Even if LLMs have access to a lot of knowledge, they are very bad at contextualizing it and applying it practically.

I'm sure you can think of many other reasons as well. People who are driven to learn new things and to do things are going to use whatever is available to them in order to do it. They are going to get into trouble doing that more often than not, but they aren't going to stop. No one is helping the situation by sneering at them -- they are used to it, anyway.