energy123 2 hours ago
I don't know when, but here's what I'm going off of:

- "OpenAI is partnering with Cerebras to add 750MW of ultra low-latency AI compute"
- Sam Altman saying in his interview that users want faster inference more than lower cost.
- My understanding that many tasks are serial in nature.