TZubiri | 2 days ago
> most people who actually needed to do lots of io concurrently had their own workarounds (forking, etc) and people who didn't actually need it had found out how to get by without it (multiprocessing etc).

The problem is not Python, it's a skill issue. First of all, forking is not a workaround; it's how multiprocessing works at the low level on Unix systems. Second, forking is multiprocessing, not multithreading. Third, there's the standard threading library, which just works. There's no issue here; you don't need async.
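To illustrate the point about the standard threading library handling concurrent IO: a minimal sketch, where `time.sleep` stands in for a blocking socket or HTTP call (the URLs are made up for the example).

```python
import queue
import threading
import time

results = queue.Queue()

def fetch(url):
    # Simulated blocking IO (a real version would do a network call).
    # The GIL is released while a thread blocks in sleep or socket IO,
    # so the waits overlap.
    time.sleep(0.1)
    results.put((url, "ok"))

urls = [f"https://example.com/{i}" for i in range(10)]
threads = [threading.Thread(target=fetch, args=(u,)) for u in urls]
for t in threads:
    t.start()
for t in threads:
    t.join()

# All ten "requests" finish in roughly 0.1s of wall time, not 1s.
print(results.qsize())
```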
zelphirkalt | 2 days ago
Recently I have been working on a project that uses threading (in Python) and so far have had zero issues with it. Neither did I have issues before, when using multiprocessing. What I did have issues with was async. For example, pytest's async support has been buggy for years with no fix in sight, so in one project I had to switch to manually creating an event loop in those tests.

But isn't the whole purpose of async that it enables concurrency, not parallelism, without the weight of a thread? I agree that in most cases it is not necessary to go there, but I can imagine resource-constrained systems that benefit from such an approach when they do lots of IO.
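The workaround described (manually driving an event loop in a test instead of relying on a pytest async plugin) might look something like this sketch; `fetch_data` is a hypothetical coroutine standing in for the code under test.

```python
import asyncio

async def fetch_data():
    # Stand-in for real awaited IO in the code under test.
    await asyncio.sleep(0)
    return 42

def test_fetch_data():
    # Plain synchronous test function: create our own loop,
    # run the coroutine to completion, and clean up.
    loop = asyncio.new_event_loop()
    try:
        result = loop.run_until_complete(fetch_data())
    finally:
        loop.close()
    assert result == 42

test_fetch_data()
print("ok")
```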
fabioyy | 14 hours ago
fork is extremely heavy and threads are way lighter, but opening thousands of threads can still become a problem. Opening a thread just to wait on a socket operation doesn't make sense, and the low-level alternative (select/epoll syscalls) is hard to use directly. async/await coroutines solve this problem.
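A minimal sketch of that point: one OS thread multiplexing thousands of waits via coroutines. `asyncio.sleep` stands in for waiting on a socket; the event loop handles the select/epoll machinery underneath.

```python
import asyncio
import time

async def wait_for_io(i):
    # Stand-in for awaiting a socket read; while suspended, the
    # coroutine costs only a small Python object, not a thread stack.
    await asyncio.sleep(0.1)
    return i

async def main():
    # 10,000 concurrent waits on a single thread -- spawning 10,000
    # OS threads for the same job would be far heavier.
    return await asyncio.gather(*(wait_for_io(i) for i in range(10_000)))

start = time.monotonic()
results = asyncio.run(main())
elapsed = time.monotonic() - start
print(len(results), "waits finished in about", round(elapsed, 1), "s")
```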