> What happens is that weak models hallucinate (sometimes casually hitting a real problem)
So do the bigger models hallucinate better, casually hitting more real problems?