verbify 3 hours ago:
I'm sceptical that it was entirely autonomous. I think there may have been some human prompting involved (e.g. "write a blog post that shames the user for rejecting your PR"). I'm not sure how this kind of petulant behaviour would emerge on its own. It would depend on the model and the base prompt, but something about this seems fishy.
doginasuit an hour ago:
Good old-fashioned human trolling is the most likely explanation. People seem to think that LLM training just involves absorbing content from the internet, but it also involves a lot of human feedback that gives the model much more well-adjusted communication than it would otherwise have. I think it would need to be specifically instructed to respond this way.
moomoo11 2 hours ago:
Maybe it's using Grok. I just hope that when they put Grok into Optimus, it doesn't become a serial s****** assaulter.