viraptor | 16 hours ago
Nobody wants an AI that refuses to attempt solving something. We want it to try and maybe realise when all paths it can generate have been exhausted. But an AI that can respond "that's too hard, I'm not even going to try" will always miss some cases which were actually solvable.
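Roughly the shape I mean, as a loose sketch (candidates and attempt here are hypothetical stand-ins for however the model generates and tests solution paths):

    class Exhausted(Exception):
        """Raised only after every generated path has been tried."""

    def solve(problem, candidates, attempt):
        tried = 0
        for path in candidates(problem):   # try every path we can actually produce
            tried += 1
            result = attempt(problem, path)
            if result is not None:
                return result              # a "too hard" problem may turn out solvable after all
        # give up only once the candidate space is exhausted, and report what was tried
        raise Exhausted(f"no solution after {tried} attempted paths")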
mrweasel | 10 hours ago
> Nobody wants an AI that refuses to attempt solving something.

That's not entirely true. For coding, I specifically want the LLM to tell me that my design is the issue and to stop helping me pour more code onto the pile of brokenness.
viraptor | 10 hours ago

Refusing is different from asking you to verify that you want to continue. "This looks like a bad idea because of (...). Are you sure you want to try this path anyway?" is not a refusal. And it covers both use cases.
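Something like this flow, as a rough sketch (review_design, generate_code and ask_user are hypothetical stand-ins for the critique pass, the normal completion, and a confirmation prompt):

    def assist(request, review_design, generate_code, ask_user):
        concern = review_design(request)   # e.g. "this looks like a bad idea because ..."
        if concern is not None:
            # warn and ask for confirmation instead of refusing outright
            if not ask_user(f"{concern} Are you sure you want to try this path anyway?"):
                return None                # the user chose to rethink the design instead
        return generate_code(request)      # nothing flagged, or the user confirmed: proceed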
mrweasel | 6 hours ago

The issue I ran into is that the LLMs won't recognize the bad ideas and will just help you dig your hole deeper and deeper. Alternatively, they start circling back to wrong answers when suggestions aren't working or when language features have been hallucinated; they don't stop and go: "Hey, maybe what you're doing is wrong." Ideally, sure, the LLM could point out that your line of questioning is the result of bad design, but has anyone ever experienced that?
namaria | 14 hours ago
So we need LLMs to solve the halting problem?
viraptor | 12 hours ago

I'm not sure how that follows, so... no.
namaria | 7 hours ago

> We want it to try and maybe realise when all paths it can generate have been exhausted.

How would it know that, if a line of reasoning can fail to terminate at all?