colonCapitalDee 4 days ago
The problem isn't that LLMs can't be critical, it's that LLMs don't have taste. It's easy to get an LLM to give praise, and it's easy to get an LLM to give criticism, but getting an LLM to praise good things and criticize bad things is currently impossible for non-trivial inputs. That's not to say that prompting your LLM to generate criticism is useless, it's just that any LLM prompted to generate criticism is going to criticize things that are actually fine, just like how an LLM prompted to generate praise (which is effectively the default behavior) is going to praise things that are deeply not fine.
bubblyworld 4 days ago | parent | next
Absolutely matches my experience - it can still be super helpful, but AIs have an extreme version of an anchoring bias.
jauhar_ 3 days ago | parent | prev
Another issue is that the behaviour of LLMs is not very consistent.