wongarsu 5 days ago
Mostly by name association. The LLMs named Grok are good models. The Twitter bot of the same name, which uses those models with a custom prompt, has a habit of creating controversy, usually after somebody modifies the system prompt. I use Grok a lot on the web interface (grok.com) and have never had any weird incidents. It's a run-of-the-mill SOTA model with good web search and less safety training.
anukin 5 days ago | parent
How does somebody modify the system prompt over an X message to the chatbot?