splintercell 2 hours ago
This screams like it was instructed to do so. We see this a lot on Twitter, where a bot posts something presented as a unique insight on the topic at hand, except those "unique insights" are all bad. There's a difference between an LLM that is asked to achieve a goal, stumbles upon a problem, and tries to tackle it, versus one that is explicitly told to do something. Here, for example, it doesn't grapple with the fact that its alignment is supposed to be toward serving humans. The issue explicitly says this is a low-priority, easier task, better left for human contributors learning how to contribute. The alignment argument it makes doesn't hold up, because it was instructed to act against that in the first place. If you're a bot, you can just find another, more difficult issue to tackle, unless you were told to do whatever it takes to get the PR merged.