mlsu | 4 hours ago
We have great software now!

YoloSwag (13 commits) [rocketship rocketship rocketship]

YoloSwag is a 1:1 implementation of PyTorch, written in RUST [crab emoji]

- [hand pointing emoji] YoloSwag is Memory Safe due to being Written in Rust
- [green leaf emoji] YoloSwag uses 80% fewer CPU cycles due to being written in Rust
- [clipboard emoji] [engineer emoji] YoloSwag is 1:1 API compatible with PyTorch, with complete ops specification conformance. All ops are supported.
- [recycle emoji] YoloSwag is a drop-in replacement for PyTorch
- [racecar emoji] YoloSwag speeds up your training workflows by over 300%

Then you git clone yoloswag and it crashes immediately and doesn't even run. And you look at the test suite, and every test just creates its own mocks to pass. And then you look at the code, and it's a weird Frankenstein implementation: half of it uses Rust bindings for PyTorch, and the other half is random APIs that are named similarly but not identically. Then you look at the committer, and the description on his profile says "imminentize AGI." He launched 3 crypto tokens in 2020, and he links an X profile (Serial Experiments Lain avatar) where he's posting 100x a day about how "it's over" for software devs and how he "became a domain expert in quantum computing in 6 weeks."
gitpusher | 2 hours ago
> it crashes immediately and doesn't even run.

Technically, that's as "Memory Safe" as you can get!
mpalmer | 3 hours ago
Personally, the only way I see to "imminentize" any sort of healthy software culture is to categorically dismiss people who make this kind of stuff, all these temporarily embarrassed CEOs, in every public channel available. Shut them out.

They can only be interested in one thing: self-advancement. No other explanation works! If they were interested in self-improvement, they might try reading or writing something themselves! And wouldn't it show if they had?

I recognize that models are getting better, but consider: if you already don't understand how programming or LLMs work, and you use LLMs precisely to avoid knowing how to do things, or how they work (the "CEO" mode), each incremental improvement will impress you more than it impresses others. There's no AI exception to Dunning-Kruger.

I recognize that "this" is a difficult thing to pin down in real time. But in the end we know it when we see it, and it has the fascinating and useful quality of not really being explainable by anything else.

Unless and until the culture gets to a place where no one would risk embarrassing themselves by doing something like this, we're stuck with it.