refulgentis 6 hours ago
It’s always been possible, but it’s not possible because there’s no backend, and no one wants it to be possible because everyone needs it at 10x the speed of running on a Mac? I’m missing something, I think.
shivampkumar 4 hours ago | parent
I thought it was cool, and then I found the open issue mentioned above; that convinced me it's definitely something more people want. It IS significantly slower: about 3.5 minutes on my MacBook vs. seconds on an H100. That's partly the pure-PyTorch backend overhead and partly just the hardware difference. For my use case the tradeoff works -- I can iterate locally without paying for cloud GPUs or waiting in queues.
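For anyone curious what "same code, different hardware" looks like here, below is a minimal sketch of a device-portable PyTorch timing loop. The matmul workload is a made-up stand-in, not the actual model; it's just to show that the identical code path runs on CUDA (H100), Apple's MPS (MacBook), or CPU, and where the timing gap shows up:

    import time
    import torch

    # Pick the best available backend: CUDA on an H100 box, Apple's MPS
    # on a MacBook, CPU as the last resort. Everything after this is identical.
    if torch.cuda.is_available():
        device = torch.device("cuda")
    elif torch.backends.mps.is_available():
        device = torch.device("mps")
    else:
        device = torch.device("cpu")

    # Toy stand-in workload (NOT the real model): repeated matmul + relu.
    x = torch.randn(4096, 4096, device=device)
    w = torch.randn(4096, 4096, device=device)

    start = time.perf_counter()
    for _ in range(10):
        x = torch.relu(x @ w)
    # GPU kernels launch asynchronously; synchronize before reading the clock.
    if device.type == "cuda":
        torch.cuda.synchronize()
    elif device.type == "mps":
        torch.mps.synchronize()
    print(f"{device}: {time.perf_counter() - start:.3f}s for 10 steps")

The raw hardware gap you'd measure with a loop like this is only part of the 3.5-minutes-vs-seconds difference; the pure-PyTorch backend overhead stacks on top of it.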