Barbing 2 hours ago
Grok 4.3 was completed ahead of its CEO's lesson on this common safety resource:
https://www.axios.com/2026/04/30/musk-openai-safety-grok

Low relevance in spite of cluster size and musical-chair gas generators, for the time being:
https://techcrunch.com/2026/04/30/elon-musk-testifies-that-x...

(Affiliated with no AI company, just surprised to read this yesterday - how could Elon miss model cards… concerning… & the fact that money can't buy success every time.)
tecoholic 2 hours ago | parent
Seriously though, why is it a model "card", a safety "card"? I had to look it up to learn that it comes from HuggingFace's vague definition of the "README" in a model's repo. This is such a specific term that I don't think anyone outside a very small population would know it - not the users, not the C-suites. I don't like Musk or Grok, but not knowing what a safety card is isn't a signal of anything, IMO.
kardianos 44 minutes ago | parent
Elon has publicly stated that he cares a great deal about safety. He has said that the only safe models are those that align most closely with truth - with what is in reality. On that front, xAI has lived up to this, as its models have proved to hallucinate the least (or close to the least) in benchmarks. If you read that quote again, he is asking, "how can you quantify safety in a card?"