| ▲ | aspenmartin 8 hours ago |
> Because software developers typically understand how to implement a solution to a problem better than the client. If they don't have enough details to implement a solution, they will ask the client for details. If the developer decides to use an LLM to implement a solution, they have the ability to assess the end product.

Why do you think agents can't do that? They can't do this really well today, but if we cover the same distance we did in 2025, it'll be like a year before this starts getting decent and another year after that before it's excellent.

> Sure, you will see a few people using LLMs to develop personalized software for themselves. Yet these will be people who understand how to specify the problem they are trying to solve clearly, will have the patience to handle the quirks and bugs in the software they create

Only humans can do this?
| ▲ | dimitri-vs 8 hours ago |
Hallucinations are not solved, memory is not solved, prompt injection is not solved, context limits are waaay too low, and at the same time tokens are way too expensive to take advantage of the context limits we do have, etc. These problems have existed since the very early days of GPT-4, and there is no clear path to them being solved any time soon. You basically need AGI, and we are nowhere close to AGI.