cherryteastain 2 hours ago

I took the "for inference" bit of the sentence you quoted as a qualifier applied to the chips, i.e. the chips were originally developed for inference but are now being used for training too.

Note that Z.ai also publicly announced that they trained another model, GLM-Image, entirely on Huawei Ascend silicon a month ago [1].

[1] https://www.scmp.com/tech/tech-war/article/3339869/zhipu-ai-...

erwald 2 hours ago

Thanks. I'm like 95% sure that you're wrong, and that GLM-5 was trained on NVIDIA GPUs, or at least not on Huawei Ascends.

As I wrote in another comment, I think so for a few reasons:

1. The z.ai blog post says GLM-5 is compatible with Ascends for inference, without mentioning training -- it says they support "deploying GLM-5 on non-NVIDIA chips, including Huawei Ascend, Moore Threads, Cambricon, Kunlun Chip, MetaX, Enflame, and Hygon" -- many different domestic chips. Note "deploying". https://z.ai/blog/glm-5

2. The SCMP piece you linked just says: "Huawei’s Ascend chips have proven effective at training smaller models like Zhipu’s GLM-Image, but their efficacy for training the company’s flagship series of large language models, such as the next-generation GLM-5, was still to be determined, according to a person familiar with the matter."

3. You're right that z.ai trained a small image model on Ascends. They made a big fuss about it, too. If they had trained GLM-5 on Ascends, they likely would've shouted it from the rooftops. https://www.theregister.com/2026/01/15/zhipu_glm_image_huawe...

4. Ascends just aren't that good