mdaniel 3 days ago
> Cross-Platform
:-/ It reminds me of Microsoft calling their thing "cross platform" because it works on several copies of Windows.

In all seriousness, I get the impression that PyTorch is such a monster PITA to manage because it cares so much about the target hardware. It'd be like a blog post saying "I solved the assembly language nightmare"
gobdovan 3 days ago
Torch simply has to work this way because it cares about performance across a combination of multiple systems and dozens of GPUs. The complexity leaks into packaging.

If you do not care about performance and would rather have portability, use an alternative like tinygrad that does not optimize for every accelerator under the sun.

This need for hardware-specific optimization is also why the assembly language analogy is a little imprecise. Nobody expects one binary to run on every CPU or GPU with peak efficiency, unless you are talking about something like Redbean, which gets surprisingly far (the creator actually worked on the TensorFlow team and addressed similar cross-platform problems). So maybe the blog post you're looking for is https://justine.lol/redbean2/.
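To make the packaging pain concrete: PyTorch publishes a separate wheel index per accelerator, so even a bare install is already hardware-specific. A minimal sketch (cu121 is one of several published CUDA variants; there are ROCm indexes too):

    # CPU-only wheel: comparatively small, runs on any x86-64 Linux
    pip install torch --index-url https://download.pytorch.org/whl/cpu

    # CUDA 12.1 wheel: multiple GB, needs a matching NVIDIA driver
    pip install torch --index-url https://download.pytorch.org/whl/cu121

Same package name, different binaries underneath, which is exactly the kind of variance a single lockfile struggles to express.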
cstrahan 3 days ago
I think a more charitable interpretation of TFA would be: "I Have Come Up With A Recipe for Solving PyTorch's Cross-Platform Nightmare"

That is: there's nothing stopping the author from building on the approach he shares to also include Windows/FreeBSD/NetBSD/whatever. It's his project (FileChat), and I would guess he uses Linux. It's natural that he'd solve this problem for the platforms he uses, and for which wheels are readily available.
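For what it's worth, the standard building block for that kind of per-platform extension is PEP 508 environment markers, which let one requirements file pin a different torch build per platform. A rough sketch, with illustrative version pins:

    # requirements.txt -- platform-conditional pins (versions hypothetical)
    --extra-index-url https://download.pytorch.org/whl/cpu

    torch==2.3.1; sys_platform == "darwin"       # macOS: plain PyPI wheel
    torch==2.3.1+cpu; sys_platform == "linux"    # Linux: CPU build from the extra index
    torch==2.3.1; sys_platform == "win32"        # Windows: PyPI CPU wheel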
esafak 3 days ago
https://github.com/pypa/manylinux is for building wheels that are portable across Linux distributions.
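The usual flow, roughly: build inside one of the project's container images, then let auditwheel vendor external shared libraries and apply the manylinux tag. A sketch (image tag and Python version are just examples):

    # build a wheel inside the manylinux container
    docker run --rm -v "$PWD":/io quay.io/pypa/manylinux2014_x86_64 \
        /opt/python/cp311-cp311/bin/pip wheel /io --no-deps -w /io/dist

    # bundle external .so files and retag to the manylinux policy
    docker run --rm -v "$PWD":/io quay.io/pypa/manylinux2014_x86_64 \
        sh -c 'auditwheel repair /io/dist/*.whl -w /io/wheelhouse'

Note this solves distro-to-distro portability on Linux only; it does not address the per-accelerator variants discussed above.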