hardwaresofton 3 days ago
If I'm understanding the suggestion, the proposed Python virtual threads are ~= fibers ~= stackful coroutines. I have this paper saved in my bookmarks as "fibers bad": https://www.open-std.org/JTC1/SC22/WG21/docs/papers/2018/p13...

AFAIK async/await and stackless coroutines are the most efficient way to do async operations, even if they are unwieldy and complicated. Is there something to be gained here other than usability? Python is certainly in the business of trading efficiency and optimal solutions for readability and some notion of simplicity, and IMO that has held it back (along with all the programmers who over-index on the "pythonic" way; it's incredibly sad that all of modern ML is essentially built on Python), but the language is certainly easy to write.

[EDIT] Wanted to add: people spend a lot of characters complaining about tokio in Rust land, but I honestly think it's just fine. It was REALLY rough early on, but at this point the ergonomics are easy to use and understand, and it's quite performant out of the box. It's not perfect, but it's genuinely pleasing to work with (i.e. running into a bug or surprising behavior almost always ends in understanding more about the fundamental tradeoffs and system design of async systems).

Swift doing something similar seems like an endorsement of the approach. In fact, IIRC this might be where I first saw that paper? Maybe it was an HN comment that pointed to it: https://forums.swift.org/t/why-stackless-async-await-for-swi...

Rust and Swift are the most impressive modern languages IMO; the sheer number of lessons they've taken from previous generations of PL work is encouraging.
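To make the programming-model difference concrete, here's a minimal sketch of the two styles in Python. The virtual-thread API from the proposal doesn't exist yet, so plain threading.Thread stands in for the blocking, fiber-like style; the URLs are placeholders and asyncio.to_thread is only there to keep the async version dependency-free:

    import asyncio
    import threading
    import urllib.request

    # Placeholder workload; the URLs are arbitrary.
    URLS = [f"https://example.com/?q={i}" for i in range(3)]

    # Stackless / async-await style: every suspension point is an explicit
    # `await`, and the whole call chain has to be marked `async`.
    async def fetch_async(url: str) -> int:
        # asyncio.to_thread keeps the example dependency-free; a real
        # program would use an async HTTP client here instead.
        body = await asyncio.to_thread(lambda: urllib.request.urlopen(url).read())
        return len(body)

    async def main_async() -> None:
        sizes = await asyncio.gather(*(fetch_async(u) for u in URLS))
        print("async/await:", sizes)

    # Fiber-like style: ordinary blocking code with no awaits; the runtime
    # (here plain OS threads, standing in for the proposed virtual threads)
    # decides where execution gets suspended.
    def fetch_blocking(url: str) -> int:
        return len(urllib.request.urlopen(url).read())

    def main_threads() -> None:
        sizes = [0] * len(URLS)

        def worker(i: int, url: str) -> None:
            sizes[i] = fetch_blocking(url)

        threads = [threading.Thread(target=worker, args=(i, u))
                   for i, u in enumerate(URLS)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        print("thread-per-task:", sizes)

    if __name__ == "__main__":
        asyncio.run(main_async())
        main_threads()

Same work either way; the difference is whether the suspension points are spelled out in the source or hidden in the runtime.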
gpderetta 2 days ago
That paper is specifically about C++, and even there not everybody agrees (there is still a proposal to add stackful coroutines). One claim in the paper is that stackful coroutines have a higher stack-switching cost. I disagree, but in any case this is completely irrelevant in Python, where spending a couple of extra nanoseconds is completely hidden by the overhead of the interpreter.

It is true that stackless coroutines are more memory efficient when you are running millions of lightweight tasks. But you won't be running millions of tasks (or even hundreds of thousands) in Python, so it is a non-issue. There is really no reason for Python to have chosen the async/await model.
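For a sense of scale, a rough sketch of the overhead comparison (an illustration of the argument, not a benchmark; the task count and the stack-size remarks are ballpark assumptions that vary by platform and Python version):

    import asyncio
    import sys
    import threading

    async def idle() -> None:
        await asyncio.sleep(60)

    async def main(n: int = 100_000) -> None:
        # A single coroutine object is on the order of a few hundred bytes.
        c = idle()
        print("size of one coroutine object:", sys.getsizeof(c), "bytes")
        c.close()  # avoid the "never awaited" warning

        # Spawning 100k stackless tasks is cheap enough to be routine...
        tasks = [asyncio.create_task(idle()) for _ in range(n)]
        print(f"created {len(tasks)} tasks")

        # ...whereas 100k OS threads would each reserve a full stack
        # (threading.stack_size() == 0 means "platform default", often
        # megabytes of reserved address space per thread on Linux).
        print("thread stack size setting:", threading.stack_size())

        for t in tasks:
            t.cancel()
        await asyncio.gather(*tasks, return_exceptions=True)

    if __name__ == "__main__":
        asyncio.run(main())

The gap is real, but it only starts to matter at task counts Python programs rarely reach.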
janalsncm 2 days ago
> Python is certainly in the business of trading efficiency and optimal solutions for readability and some notion of simplicity, and that has held it back

Sometimes a simple, suboptimal solution is faster in wall-clock time than an optimal one: you have to write the code before you can execute it, after all.

As for why ML is dominated by Python, I feel like it just gets to the point quicker than other languages. In Java or TypeScript or even Rust there are just too many other things going on. I feel like writing even a basic training loop in Java would be a nightmare.
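As an illustration of "gets to the point quicker", a complete toy training loop in Python fits in a screenful. This sketch assumes PyTorch and uses arbitrary toy data; it's about the ergonomics, not any particular workload:

    import torch
    from torch import nn, optim

    # Toy regression data: y = 3x + 1 plus a little noise.
    x = torch.randn(256, 1)
    y = 3 * x + 1 + 0.1 * torch.randn(256, 1)

    model = nn.Linear(1, 1)
    opt = optim.SGD(model.parameters(), lr=0.1)
    loss_fn = nn.MSELoss()

    for epoch in range(200):
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()

    # The learned weight and bias should approach 3 and 1.
    print(model.weight.item(), model.bias.item())

The equivalent in Java means picking a framework, setting up a build, and writing a lot of type ceremony before you ever get to the actual math.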
rfoo 3 days ago
Is there really anything to lose? How often do we see "stackless coroutines" listed as an advantage in Rust-vs-Go flamewars about network programming?