shivampkumar 4 hours ago
I thought it was cool, and then I found the open issue mentioned above, which convinced me it's definitely something more people want. It IS significantly slower: about 3.5 minutes on my MacBook vs. seconds on an H100. That's partly the pure-PyTorch backend overhead and partly just the hardware difference. For my use case the tradeoff works -- iterate locally without paying for cloud GPUs or waiting in queues.
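For anyone wanting to reproduce that kind of local-vs-GPU comparison, here's a minimal, stdlib-only sketch of how I time runs. The `benchmark` helper is hypothetical (not from any library), and the lambda is just a CPU-bound stand-in for an actual model step:

```python
import time

def benchmark(fn, *args, warmup=1, repeats=3):
    """Time a callable; return the best wall-clock seconds over `repeats` runs."""
    for _ in range(warmup):
        fn(*args)  # warm caches before timing
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        fn(*args)
        best = min(best, time.perf_counter() - start)
    return best

# Stand-in workload; swap in your model's forward pass on CPU vs. GPU
elapsed = benchmark(lambda: sum(i * i for i in range(1_000_000)))
print(f"{elapsed:.3f}s per run")
```

Taking the best of several repeats (rather than the mean) filters out one-off scheduler noise, which matters a lot on a laptop.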