| ▲ | Python has had async for 10 years – why isn't it more popular?(tonybaloney.github.io) |
| 298 points by willm a day ago | 284 comments |
| |
|
| ▲ | atomicnumber3 a day ago | parent | next [-] |
The author gets close to what I think the root problem is, but doesn't call it out. The truth is that in Python, async was too little, too late. By the time it was introduced, most people who actually needed to do lots of IO concurrently had their own workarounds (forking, etc.) and people who didn't actually need it had found out how to get by without it (multiprocessing etc.). Meanwhile, Go showed us what good green threads can look like. Then Java did it too. Meanwhile, JS had better async support the whole time. But all it did was show us that async code just plain sucks compared to green-thread code that can just block, instead of having to do the async dances. So why engage with it when you already had good solutions? |
| |
| ▲ | throw-qqqqq a day ago | parent | next [-] | | > But all it did was show us that async code just plain sucks compared to green thread code that can just block, instead of having to do the async dances. I take so much flak for this opinion at work, but I agree with you 100%. Code that looks synchronous, but is really async, has funny failure modes and idiosyncrasies, and I generally see more bugs in the async parts of our code at work. Maybe I’m just old, but I don’t think it’s worth it. Syntactic sugar over continuations/closures, basically. | |
| ▲ | lacker a day ago | parent | next [-] | | I'm confused, I feel like the two of you are expressing opposite opinions. The comment you are responding to prefers green threads to be managed like goroutines, where the code looks synchronous, but really it's cooperative multitasking managed by the runtime, to explicit async/await. But then you criticize "code that looks synchronous but is really async". So you prefer the explicit "async" keywords? What exactly is your preferred model here? | | |
| ▲ | throw-qqqqq a day ago | parent | next [-] | | First, I don’t mean to criticize anything or anyone. People value such things subjectively, but for me the async/sync split does no good. Goroutines feel like old-school, threaded code to me. I spawn a goroutine and interact with other “threads” through well defined IPC. I can’t tell if I’m spawning a green thread or a “real” system thread. C#’s async/await is different IMO and I prefer the other model. I think the async-concept gets overused (at my workplace at least). If you know Haskell, I would compare it to overuse of laziness, when strictness would likely use fewer resources and be much easier to reason about. I see many of the same problems/bugs with async/await.. | | |
| ▲ | thomasahle 18 hours ago | parent | next [-] | | Comparing to Haskell, I think of "async" as the IO monad. It's nice to have all code that does IO flagged explicitly as such. | |
| ▲ | raxxorraxor 10 hours ago | parent | prev | next [-] | | > I think the async-concept gets overused (at my workplace at least). Problem is that it self-reinforces, and before you know it every little function is suddenly async. The irony is that it is used where you want to write in a synchronous style... | |
| ▲ | carlmr 8 hours ago | parent [-] | | Yep, this is my biggest gripe with explicit async, all of a sudden a library that needn't be async forces me to use async (and in Rust forces me to use their async implementation), just because the author felt like async is a nice thing to try out. |
| |
| ▲ | eddd-ddde 19 hours ago | parent | prev | next [-] | | Wouldn't the old school style be more like rust async? Simple structs that you poll whenever you need to explicitly. No magic code that looks synchronous but isn't. | | |
| ▲ | tptacek 17 hours ago | parent [-] | | No, Rust async is new-school colored-functions concurrency. | | |
| ▲ | pkolaczk 12 hours ago | parent [-] | | The parent comment is right. Rust async is simple state-automata structs you can poll explicitly with no magic. Async/await is just some syntactic sugar on top of that, but you don’t have to use it. An obvious advantage of doing it that way is you don’t need any runtime/OS-level support. E.g. your runtime doesn’t need to even have a concept of threads. It works on bare-metal embedded. Another advantage is that it’s a fully cooperative model. No magic preemption. You control the points where the switch can happen; there is no magic stuff suddenly running in the background and messing up the state. | |
| ▲ | tptacek 4 hours ago | parent | next [-] | | I didn't say it was good or bad. I said it's new-school colored functions. | |
| ▲ | jpc0 9 hours ago | parent | prev [-] | | Have you actually tried to implement async in Rust from the ground up? It is nothing like what you just described. |
|
|
| |
| ▲ | sfn42 8 hours ago | parent | prev [-] | | I always find it strange how people complain about features when the real problem is that they simply don't like how people use the feature. Async in C# is awesome, and there's nothing stopping you from writing sync code where appropriate or using threads if you want proper multithreading. Async is primarily used to avoid blocking for non-CPU-bound work, like waiting for API/db/filesystem etc. If you use it everywhere then it's used everywhere, if you don't then it isn't. For a lot of apps it makes sense to use it a lot, like in web APIs that do lots of db calls and such. This incurs some overhead but it has the benefit of avoiding blocked threads so that no threads sit idle waiting for I/O. You can imagine in a web API receiving a large number of requests per second there's a lot of this waiting going on, and if threads were idle waiting for responses you wouldn't be able to handle nearly as much throughput. |
| |
| ▲ | throwaway81523 20 hours ago | parent | prev [-] | | No, goroutines are preemptive. They avoid async hazards though of course introduce some different ones. | | |
| ▲ | Yoric 19 hours ago | parent [-] | | To be fair, it depends on the version of Go. Used to be well-hidden cooperative, these days it's preemptive. | | |
|
| |
| ▲ | kibwen a day ago | parent | prev | next [-] | | > Code that looks synchronous, but is really async, has funny failure modes and idiosyncracies But this appears to be describing languages with green threads, rather than languages that make async explicit. | | |
| ▲ | pclmulqdq a day ago | parent [-] | | Without the "async" keyword, you can still write async code. It looks totally different because you have to control the state machine of task scheduling. Green threads are a step further than the async keyword because they have none of the function coloring stuff. You may think of use of an async keyword as explicit async code but that is very much not the case. If you want to see async code without the keyword, most of the code of Linux is asynchronous. | | |
| ▲ | Dylan16807 a day ago | parent | next [-] | | Having to put "await" everywhere is very explicit. I'd even say it's equally explicit to a bunch of awkward closures. Why do you say it's less? | | |
| ▲ | pclmulqdq a day ago | parent | next [-] | | It's explicit that the code is async, but how the asynchrony happens is completely implicit with async/await, and is managed by a runtime of some kind. Kernel-style async code, where everything is explicit: * You write a poller that opens up queues and reads structs representing work * Your functions are not tagged as "async" but they do not block * When those functions finish, you explicitly put that struct in another queue based on the result Async-await code, where the runtime is implicit: * All async functions are marked and you await them if they might block * A runtime of some sort handles queueing and runnability Green threads, where all asynchrony is implicit: * Functions are functions and can block * A runtime wraps everything that can block to switch to other local work before yielding back to the kernel | | |
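To make the middle ("async/await, runtime implicit") style concrete, a minimal Python sketch -- the names and the fake I/O are made up, only the shape matters. The async markers are explicit, while the thing that actually interleaves the work is asyncio's event loop, hidden behind asyncio.run:

    import asyncio

    async def fetch(name: str) -> str:
        await asyncio.sleep(0.1)   # stand-in for a non-blocking I/O call
        return f"result for {name}"

    async def main() -> None:
        # the implicit runtime (the event loop) interleaves these
        # while each one is suspended at its await point
        results = await asyncio.gather(fetch("a"), fetch("b"), fetch("c"))
        print(results)

    asyncio.run(main())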
| ▲ | lstodd 20 hours ago | parent [-] | | > Green threads, where all asynchrony is implicit: Which are no different, from the app's POV, from kernel threads, or any threads for that matter. The whole async stuff came up because a context switch per event is way more expensive than just shoveling down a page of file descriptor state. Thus poll, kqueue, epoll, io_uring, whatever. Think of it as batch processing. |
| |
| ▲ | throw-qqqqq a day ago | parent | prev | next [-] | | > Why do you say it's less Let me try to clarify my point of view: I don’t mean that async/await is more or less explicit than goroutines. I mean regular threaded code is more explicit than async/await code, and I prefer that. I see colleagues struggle to correctly analyze resource usage, for instance. Someone tries to parallelize some code (perhaps naively) by converting it to async/await and then runs out of memory. Again, I don’t mean to judge anyone. I just observe that the async/await-flavored code has more bugs in the code bases I work on. | |
| ▲ | curt15 20 hours ago | parent [-] | | >I don’t mean that async/await is more or less explicit than goroutines. I mean regular threaded code is more explicit than async/await code, and I prefer that. More explicit in what sense? I've written both regular threaded Python and async/await Python. Only the latter shows me precisely where the context switches occur. |
| |
| ▲ | throwawayffffas 20 hours ago | parent | prev [-] | | Because it hides away the underlying machinery. Everything is in a run loop that does not exist in my codebase. The context switching points are obvious but the execution environment is opaque. At least that's how it looks to me. | | |
| ▲ | toast0 19 hours ago | parent | next [-] | | The problem isn't that it hides away the machinery. The problem is that it hides some things, but not everything. Certainly a lot of stuff hides behind await/async. But as a naive developer who is used to real threads and green threads, I expected there would be some way to await on a real thread and all the async stuff would just happen... but instead, if you await, actually you've got to be async too. If you had to write your async code where you gave an event loop an FD and a callback to run when it was ready, that would be more explicit, IMHO... but it would be so wordy that it would only get used under extreme duress... I've worked on those code bases and they can do amazing things, but if there's any complexity it quickly becomes not worth it. Green threads are better (IMHO), because they actually do hide all the machinery. As a developer in a language with mature green threads (Erlang), I don't have to know about the machinery[1], I just write code that blocks from my perspective and BEAM makes magic happen. As I understand it, that's the model for Java's Project Loom aka Java Green Threads 2: 2 Green 2 Threads. The first release had some issues with the machinery, but I think I read the second release was much better, and I haven't seen much since... I'm not a Cafe Babe, so I don't follow Java that closely. [1] It's always nice to know about the machinery, but I don't have to know about it, and I was able to get started pretty quick and figure out the machinery later. | |
| ▲ | worthless-trash 11 hours ago | parent [-] | | I don't know who you are, but thanks.. My beam code goes brrrr.. so fast, much async, so reliable, no worries. |
| |
| ▲ | ForHackernews 19 hours ago | parent | prev [-] | | I don't understand this criticism. The JVM is opaque, App Engine is opaque, Docker is opaque. All execution environments are opaque unless you've attached a debugger and are manually poking at the thing while it runs. | | |
|
| |
| ▲ | vova_hn 15 hours ago | parent | prev | next [-] | | > Green threads are a step further than the async keyword because they have none of the function coloring stuff. I would say that green threads still have "function coloring stuff", we just decided that every function will be async-colored. Now, what happens if you try to cross an FFI-border and try to call a function that knows nothing about your green-thread runtime is an entirely different story... | |
| ▲ | throw-qqqqq a day ago | parent | prev [-] | | This is exactly what I mean. Thank you for explaining much more clearly than I could. > none of the function coloring stuff And it’s this part that I don’t like (and see colleagues struggling to implement correctly at work). |
|
| |
| ▲ | larusso a day ago | parent | prev | next [-] | | Async is like a virus. I think the implementation in JS and .NET is somewhat OK-ish because your code is inside an async context most of the time. I really hate the red/blue method issue where library functions get harder to compose. Oh, I have a normal method because there was no need for async. Now I change the implementation and need to call an async method. There are ways around this, but more often than not you will change most methods to be async. To be fair, that also happens with other solutions. | |
| ▲ | DanielHB 21 hours ago | parent [-] | | It is not nearly as much of a problem in JS because JS only has an event loop, there is no way to mix in threads with async code because there are no threads. Makes everything a lot simpler and a lot of the data structures a lot faster (because no locks required). But actual parallelization (instead of just concurrency) is impossible[1]. A lot of the async problems in other languages exist because they haven't bought into the concept fully, with some 3rd-party code using it and some not. JS went all-in with async. [1]: Yes, I know about service workers, but they are not threads in the sense that there is no shared memory[2]. It is good for some types of parallelization problems, but not others, because of all the memory copying required. [2]: Yes, I know about SharedArrayBuffer and there are a bunch of proposals to add support for locks and all that fun stuff to them, which also brings all the complexity back. | |
| ▲ | _moof 14 hours ago | parent [-] | | In my less charitable moments, I've wondered if the real reason Python has async/await is because people coming to it from JavaScript couldn't be arsed to learn a more appropriate paradigm. |
|
| |
| ▲ | Uptrenda 19 hours ago | parent | prev | next [-] | | I'm a person who wrote an entire networking library in Python and I agree with you. The most obvious issue with Python's single-threaded async code is that any slow part of the program delays the entire thing. And yeah -- that's actually insanely frigging difficult to avoid. You write standard networking code and then find out that parts you expected to be async in Python actually ended up being sync / blocking. DESPITE THAT: even if you're doing everything "right" (TM) -- using a single thread and doing all your networking I/O sequentially is simply slow as hell. A very very good example of this is bottle.py. Let's say you host a static web server with bottle.py. Every single web request for files leads to sequential loading, which makes page load times absolutely laughable. This isn't the case for every Python web framework, but it seems to be a common theme to me. (Cause: single thread, event loop.) With asyncio, the most consistent behavior I've had with it seems to be to avoid having multiple processes and then running event loops inside them. Even though this approach seems like it's necessary (or at least threading) to avoid the massive downsides of the event loop. But yeah, you have to keep everything simple. In my own library I use a single event loop and don't do anything fancy. I've learned the hard way how asyncio punishes trying to improve it. It's a damn cool piece of software, just has some huge limitations for performance. | |
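A toy illustration of that failure mode, assuming a plain asyncio loop (the handlers are made up): one handler that blocks synchronously stalls every other coroutine sharing the loop until it returns.

    import asyncio
    import time

    async def serve_request(n: int) -> None:
        await asyncio.sleep(0.1)   # well-behaved, non-blocking wait
        print(f"request {n} done")

    async def accidentally_blocking() -> None:
        time.sleep(2)              # sync call: freezes the whole event loop
        print("slow handler done")

    async def main() -> None:
        # the three cheap requests can't finish until the blocking one returns
        await asyncio.gather(accidentally_blocking(),
                             *(serve_request(i) for i in range(3)))

    asyncio.run(main())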
| ▲ | fulafel 12 hours ago | parent [-] | | bottle.py is a WSGI backed framework, right? So it's agnostic about whether you are running with threads, fork, blocking single thread IO, gevent, or what. | | |
| ▲ | Uptrenda 12 hours ago | parent [-] | | Umm, acktually... (the default server is non-threaded and sequential. It was an example.) | | |
|
| |
| ▲ | markandrewj 20 hours ago | parent | prev [-] | | I can tell you guys work with languages like Go, so this isn't true for yourselves, but I usually find it is developers that only ever work with synchronous code who find async complicated. Which isn't surprising: if you don't understand something, it can seem complicated. My view is almost that people should learn how to write async code by default now, regardless of the language. Writing modern applications basically requires it, although not all the time obviously. | |
| ▲ | Yoric 19 hours ago | parent | next [-] | | Hey, I'm one of the (many, many) people who made async in JavaScript happen and I find async complicated. | | |
| ▲ | markandrewj 14 hours ago | parent [-] | | Hey Yoric, I do not want to underplay what it is like to work with async, but I think there have been a lot of improvements to make it easier, especially in JavaScript/ECMAScript. It is nice not to have to work directly with promises in the same way that was required previously. The language has matured a lot since I started using it in Netscape Navigator (I see you formerly worked at Mozilla). I think coding can be complicated in general, although it shouldn't have to be. I think having a mental model for async from the start can be helpful, and understanding the difference between blocking and non-blocking code. A lot of people learned writing synchronous code first, so I think it can be hard to develop the mental model and intuit it. |
| |
| ▲ | ErikBjare 11 hours ago | parent | prev [-] | | I have no problem with async in JS or Rust, but async in Python is a very different beast, and like many people in this thread I do my best to avoid the fully loaded footgun altogether. Writing maintainable Python basically requires avoiding it, so I strongly disagree with "regardless of language". | | |
| ▲ | markandrewj 4 hours ago | parent [-] | | Maybe, but I wouldn't go back to Python 2 without async. It has also improved over time in Python. I have also had success using async in Python. I do understand what the article talks about, however. Understanding the difference between blocking and non-blocking code is also a concept relevant to Python. In Node it's one of the concepts you are first introduced to, because Node is single-threaded by default. I also understand in Go and other languages there are different options. https://nodejs.org/en/learn/asynchronous-work/overview-of-bl... I will agree with what was said above: BEAM is pretty great. I have been using it recently through Elixir. |
|
|
| |
| ▲ | gen220 a day ago | parent | prev | next [-] | | As somebody who's written and maintained a good bit of Python in prod and recently a good amount of server-side typescript... this would be my answer. I'd add one other aspect that we sort of take for granted these days, but affordable multi-threaded CPUs have really taken off in the last 10 years. Not only does the stack based on green-threads "just work" without coloring your codebase with async/no-async, it allows you to scale a single compute instance gracefully to 1 instance with N vCPUs vs N pods of 2-vCPU instances. | |
| ▲ | pnathan a day ago | parent | prev | next [-] | | Async taints code, and async/await fall prey to classic cooperative multitasking issues. "What do you mean that this blocked that?" The memory and execution model for higher level work needs to not have async. Go is the canonical example of it done well from the user standpoint IMO. | | |
| ▲ | hinkley 20 hours ago | parent | next [-] | | The function color thing is a real concern. Am I wrong or did a python user originally coin that idea? | | |
| ▲ | throwawayffffas 19 hours ago | parent [-] | | No it was a js dev complaining about callbacks in node. Mainly because a lot of standard library code back then only came in callback flavour. i.e. no sync file writes, etc. | | |
| ▲ | munificent 18 hours ago | parent | next [-] | | I wrote it. :) Actually, I was and am primarily a Dart developer, not a JS developer. But function color is a problem in any language that uses that style of asynchrony: JS, Dart, etc. | |
| ▲ | LtWorf 19 hours ago | parent | prev [-] | | Which is really funny because the linux kernel doesn't do async for file writes :D | | |
|
| |
| ▲ | meowface 18 hours ago | parent | prev [-] | | gevent has been in Python for ages and still works great. It basically adds goroutine-like green thread support to the language. I still generally start new projects with gevent instead of asyncio, and I think I always will. | | |
| ▲ | pdonis 17 hours ago | parent [-] | | I've used gevent and I agree it works well. It has prevented me from even trying to experiment with the async/await syntax in Python for anything significant. However, gevent has to do its magic by monkeypatching. Wanting to avoid that, IIRC, was a significant reason why the async/await syntax and the underlying runtime implementation was developed for Python. Another significant reason, of course, was wanting to make async functions look more like sync functions, instead of having to be written very differently from the ground up. Unfortunately, requiring the "async" keyword for any async function seriously detracted from that goal. To me, async functions should have worked like generator functions: when generators were introduced into Python, you didn't have to write "gen def" or something like it instead of just "def" to declare one. If the function had the "yield" keyword in it, it was a generator. Similarly, if a function has the "await" keyword in it, it should just automatically be an async function, without having to use "async def" to declare it. | | |
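For anyone who hasn't seen it, the gevent style being discussed looks roughly like this sketch (the URL is just a placeholder). The monkeypatching is the one-time patch_all() call, after which ordinary blocking-looking code yields between greenlets:

    from gevent import monkey
    monkey.patch_all()   # the monkeypatching: swaps in cooperative versions
                         # of sockets, sleep, etc.

    import gevent
    import urllib.request

    def fetch(url: str) -> int:
        # looks like ordinary blocking code; only this greenlet waits
        with urllib.request.urlopen(url) as resp:
            return resp.status

    jobs = [gevent.spawn(fetch, "https://example.com") for _ in range(3)]
    gevent.joinall(jobs)
    print([job.value for job in jobs])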
| ▲ | krmboya 14 hours ago | parent [-] | | Would this result in surprises, like if a function is turned async by adding an await keyword, all of a sudden all functions that have it in their call stack become async? | |
| ▲ | pdonis 10 minutes ago | parent [-] | | It would work the same as it works now for generators. A function that calls a generator function isn't a generator just because of that; it only is if it also has the yield keyword in it (or yield from, which is a way of chaining generators). Similarly, a function that calls an async function wouldn't itself be async unless it also had the await keyword. But of course the usual way of calling an async function would be to await it. And calling it without awaiting it wouldn't return a value, just as with a generator; calling a generator function without yielding from it returns a generator object, and calling an async function without awaiting it would return a future object. You could then await the future later, or pass it to some other function that awaited it. |
|
|
|
| |
| ▲ | jacquesm 17 hours ago | parent | prev | next [-] | | There are much better solutions for the same problems, but not in Python. If you really need such high throughput you'd move to Go, the JVM or Erlang/Elixir depending on the kind of workload you have, rather than muck around with Python on something that it clearly was never intended to do in the first place. It is amazing they got it to work as well as it does, but the impedance mismatch is pretty clear and it will never feel natural. | |
| ▲ | ch4s3 15 hours ago | parent [-] | | Elixir is a really nice replacement for a lot of places where you could use Python but don't absolutely have to, particularly anything web related. You get a lot more out of the same machine with code that's similarly readable for building HTTP APIs. |
| |
| ▲ | hinkley 20 hours ago | parent | prev | next [-] | | Async is pretty good “green threads” on its own. Coroutines can be better, but they’re really solving an overlapping set of problems. Some the same, some different. In JavaScript async doesn’t have a good way to nice your tasks, which is an important feature of green threads. Sindre Sorhus has a bunch of libraries that get close, but there’s still a hole. What coroutines can do is optimize the instruction cache. But I’m not sure goroutines entirely accomplish that. There’s nothing preventing them from doing so but implementation details. | |
| ▲ | cookiengineer 10 hours ago | parent | prev | next [-] | | I agree with you, I think. It's hard to figure out your own position when it comes to multithreading and multitasking APIs. To me, Go is really well designed when it comes to multithreading because it is built upon a mutual contract where it will break easily and at compile time when you mess up the contract between the scheduling thread and the sub threads. But, for the love of Go, I have no idea who the person was that decided that the map data type has to be not threadsafe. Once you start scaling / rewriting your code to use multiple goroutines, it's like you're being thrown in the cold water without having learnt to swim before. Mutexes are a real pain to use in Go, and they could have been avoided if the language just decided to make read/write access threadsafe for at least maps that are known to be accessed from different threads. I get the performance aspect of that decision, but man, this is so painful because you always have to rewrite large parts of your data structures everywhere, and abstract the former maps away into a struct type that manages the mutexes, which in return feels so dirty and unclean as a provided solution. For production systems I just use haxmap from the start, because I know its limitations (of hashes of keys due to atomics), because that is way easier to handle than forgetting about mutexes somewhere down the codebase when you are still before the optimization phase of development. | |
| ▲ | gshulegaard a day ago | parent | prev | next [-] | | I also think asyncio missed the mark when it comes to its API design. There are a lot of quirks and rough edges to it that, as someone who was using `gevent` heavily before, strike me as curious and even anti-productive. |
| ▲ | pkulak 19 hours ago | parent | prev | next [-] | | Green threads can be nicer to program in, but it’s not like there’s no cost. You still need a stack for every green thread, just like you need one for every normal thread. I think it’s worth it to figure out a good system for stackless async. Something like Kotlin is about as good as it gets. Rust is getting there, despite all the ownership issues, which would exist in green threads too. | |
| ▲ | pjmlp 10 hours ago | parent | prev | next [-] | | Java has had green threads since day one, most vendors ended up going red threads full way, and now we're back into green and red world. The main difference being that now both models are simultaneously supported instead of being an implementation detail of each JVM. | |
| ▲ | jayd16 a day ago | parent | prev | next [-] | | > But all it did was show us that async code just plain sucks compared to green thread code that can just block, instead of having to do the async dances. I'll be sold on this when a green thread native UI paradigm becomes popular but it seems like all the languages with good native UI stories have async support. | |
| ▲ | parhamn a day ago | parent | prev | next [-] | | pair this with needing async in depth and that's exactly it. The whole network stack needs to be async-first and all the popular networking libraries need to have been built on that. Many of those libraries are already C-extension based and don't jibe well with the newer python parts in any way. | | | |
| ▲ | a-dub 14 hours ago | parent | prev | next [-] | | also most python usecases that are in the realm of things like high performance concurrent request servicing push it down into libraries that i think are often tied to a native network request processing core. (gunicorn, grpc, etc) python is kind of a slow choice for that sort of thing regardless and i don't think the complexity of async is all that justified for most usecases. i still maintain my position that a good computer system should let you write logic synchronously and the system will figure out how to do things concurrently with high performance. (although getting this right would be very hard!) | |
| ▲ | wodenokoto 14 hours ago | parent | prev | next [-] | | It might have been too little but it wasn’t too late. Generations of programmers have given up on downloading data async in their Python scripts and just gone to bash and added a & at the end of a curl call inside a loop. | |
| ▲ | b33j0r a day ago | parent | prev | next [-] | | For me, once I wanted to scale asyncio within one process (scaling horizontally on top of that), only two things made sense: Rust with Tokio or Node.js. Doing async in Python has the same fundamental design. You have an executor, a scheduler, and event-driven wakers on futures or promises. But you’re doing it in a fundamentally hand-cuffed environment. You don’t get benefits like static compilation, real work-stealing, a large library ecosystem, or crazy performance boosts. Except in certain places in the stack. Using FastAPI with async is a game-changer. Writing a CLI to download a bunch of stuff in parallel is great. But if you want to use async to parse faster or make a parallel-friendly GUI, you are more than likely wasting your time using Python. The benefits will be bottlenecked by other language design features. Still the GIL mostly. I guess there is no reason you can’t make Tokio in Python with multiprocessing or subinterpreters, but to my knowledge that hasn’t been done. Learning Tokio was way more fun, too. | |
| ▲ | ciupicri a day ago | parent | next [-] | | GIL is not part of the language design, it's just a detail of the most common implementation - CPython. | | |
| ▲ | b33j0r a day ago | parent [-] | | Fair and accurate. But that’s pretty much what people use, right? I am happy to hear stories of using pypy or something to radically improve an architecture. I don’t have any from personal experience. I guess twisted and stackless, a long time ago. | | |
| ▲ | miohtama 12 hours ago | parent [-] | | The GIL is optional in new Python versions. Downsides are legacy library compatibility and degraded single thread performance. |
|
| |
| ▲ | smw a day ago | parent | prev | next [-] | | Or just golang? | | | |
| ▲ | hinkley 20 hours ago | parent | prev [-] | | I don’t know where Java is now but their early promise and task queue implementations left me feeling flat. And people who should know better made some dumb mistakes around thread to CPU decisions that just screamed “toy solution”. They didn’t compose. |
| |
| ▲ | JackSlateur a day ago | parent | prev | next [-] | | green thread have pitfalls too, like this: https://news.ycombinator.com/item?id=39008026 | | |
| ▲ | kasperni a day ago | parent | next [-] | | This was a known issue and was fixed in Java 24 [1]. [1] https://openjdk.org/jeps/491 | |
| ▲ | hueho 21 hours ago | parent | prev | next [-] | | FWIW this was largely fixed in 24 (I think there are still some edge cases relating to FFI functionality), and the 25 LTS should be coming this month. | |
| ▲ | ronsor a day ago | parent | prev [-] | | This doesn't look like a problem with green threads so much as it is a problem with Java's implementation of them. Of course, Java is known for having problems with its implementations of many different things, such as sandboxing; this isn't special. |
| |
| ▲ | 6r17 19 hours ago | parent | prev | next [-] | | I feel like async is just an easier way to reason about something, but it leaves a lot of cheating open; though sometimes it's just more comfortable to write - but that cheating comes with a lot of hidden responsibilities that are just not surfaced in Python (things like ownership) - even though it presents tools to properly solve these issues - anyone who really wants to dive into the technical side wouldn't choose Python anyway |
| ▲ | TZubiri 21 hours ago | parent | prev | next [-] | | >most people who actually needed to do lots of io concurrently had their own workarounds (forking, etc) and people who didn't actually need it had found out how to get by without it (multiprocessing etc). The problem is not python, it's a skill issue. First of all forking is not a workaround, it's the way multiprocessing works at the low level in Unix systems. Second of all, forking is multiprocessing, not multithreading. Third of all, there's the standard threading library which just works well. There's no issue here, you don't need async. | | |
| ▲ | zelphirkalt 8 hours ago | parent [-] | | Recently, I have been working on a project that uses threading (in Python) and so far have had zero issues with that. Neither did I have any issues before, when using multiprocessing. What I did have issues with, though, was async. For example, pytest's async thingy has been buggy for years with no fix in sight, so in one project I had to switch to manually making an event loop in those tests. But isn't the whole purpose of async that it enables concurrency, not parallelism, without the weight of a thread? I agree that in most cases it is not necessary to go there, but I can imagine systems with not so many resources that benefit from such an approach when they do lots of IO. |
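For what it's worth, the "manually making an event loop" workaround mentioned above can be as small as this sketch (the coroutine under test is hypothetical):

    import asyncio

    async def fetch_value() -> int:      # hypothetical coroutine under test
        await asyncio.sleep(0)
        return 42

    def test_fetch_value() -> None:      # plain sync test, no async plugin
        loop = asyncio.new_event_loop()
        try:
            assert loop.run_until_complete(fetch_value()) == 42
        finally:
            loop.close()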
| |
| ▲ | ddorian43 a day ago | parent | prev | next [-] | | Because it sucks compared to gevent (green threads). But for some reason, people always disregard this option. They don't even read it. Like any comment with gevent is shadowbanned and it doesn't register in their mind. | | |
| ▲ | nromiun a day ago | parent | next [-] | | Gevent is too underrated. Even if people don't like the monkey patching you can simply use the gevent namespace API as well. No idea why people prefer the absolute mess that is Python async ecosystem. | |
| ▲ | jononor a day ago | parent | prev | next [-] | | Happily using gevent for our backend (IoT+ML) since 2019. Was very glad when I saw it is still being well supported by recent SQLAlchemy and pscycopg releases. | |
| ▲ | int_19h 20 hours ago | parent | prev | next [-] | | The fundamental problem with any kind of green threads is that they require runtime support which doesn't play well with any active stack frames that aren't aware that they are on a green thread (which can be switched). | |
| ▲ | meowface 18 hours ago | parent | prev [-] | | I've used gevent for years and will probably never stop. I greatly prefer it (or Go) over asyncio. People act like it's dead but it still works perfectly well and, at least for me, makes async networking so much simpler. |
| |
| ▲ | neuroelectron a day ago | parent | prev | next [-] | | Even in Java, async is rarely the right solution. I'm sure in situations where it's needed, Python's async would be used. For instance, it would be good for reducing resource usage in any kind of small service that dynamically scales. The workarounds are much more expensive but that doesn't matter unless you're already resource constrained. Even then, nginx might be a better solution. |
| ▲ | leecarraher 20 hours ago | parent | prev | next [-] | | I agree. Add to that that many Python modules are FOSS projects that are maintained on a limited basis or budget. Refactoring code that may have some unsafe async routines would be costly for an org, and dreadful for recreation.
So you can either have a rich library of modules, or go async and risk something you need not working and then having to find a workaround.
Personally, if parallelism is important enough, I use ctypes and OpenMP. If I need something more portable, I have a few multiprocessing wrappers that implement prange and a few other widgets for shared memory. | |
| ▲ | jongjong 19 hours ago | parent | prev | next [-] | | Yes and JS had a smooth on-ramp to async/await thanks to Promises. Promises/thenables gave people the time to get used to the idea of deferred evaluation via a familiar callback approach... Then when async/await came along, people didn't see it as a radically new feature but more as syntactic sugar to do what they were already doing in a more succinct way without callbacks. People in the Node.js community were very aware of async concepts since the beginning and put a lot of effort in not blocking the event loop. So Promises and then async/await were seen as solutions to existing pain points which everyone was already familiar with. A lot of people refactored their existing code to async/await. | | |
| ▲ | laurencerowe 18 hours ago | parent [-] | | JavaScript’s Promises were of course heavily influenced by Twisted’s Deferreds in Python, from the days before async/await. |
| |
| ▲ | LtWorf 19 hours ago | parent | prev | next [-] | | forking and async are totally different things. | | | |
| ▲ | pulse7 a day ago | parent | prev | next [-] | | "Then java did it too." Java had green threads in 1.0. They were removed. Then Java added virtual threads. | |
| ▲ | pbalau a day ago | parent | prev [-] | | You make a very good case for why Python's async isn't more prevalent, but I think this is not painting the full picture. Taking a general case, let's say a forum: in order to render a thread one needs to search for all posts from that thread, then get all the extra data needed for rendering and finally send the rendered output to the client. In the "regular" way of doing this, one will compose a query that will filter things out, join all the required data bla bla, send it to the database, wait for the answer from the database and all the data to be transferred over, loop over the results, do some rendering and send the thing over to the client. It doesn't matter how async your app code is; in this way of doing things, the bottleneck is the database, as there is a fixed limit on how many things a db server can do at once, and if doing one of these things takes a long time, you still end up waiting too much. In order for async to work, one needs to split the workload into very small chunks that can be done in parallel and very fast; therefore, sending a big query and waiting for all the result data is out of the window. An async approach would split the db query into a search query that returns a list of object ids, say posts, then create N async tasks that, given a post id, will return a rendered result. These tasks will do their own query to retrieve the post data, then assemble another list of async tasks to get all the other data required and render each chunk, and so on. Throw in a bunch of db replicas and you get the benefits of async. This approach is not generally used because, let's face it, we like making the systems we use do complicated things, e.g. complicated SQL requests. | |
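Roughly the shape being described, as a Python sketch with stand-in functions in place of real database calls:

    import asyncio

    # hypothetical stand-ins for small, independent DB queries
    async def search_post_ids(thread_id: int) -> list[int]:
        await asyncio.sleep(0.01)
        return [1, 2, 3]

    async def load_post(post_id: int) -> dict:
        await asyncio.sleep(0.01)
        return {"id": post_id, "body": f"post {post_id}"}

    def render_post(post: dict) -> str:
        return f"<li>{post['body']}</li>"

    async def render_thread(thread_id: int) -> str:
        post_ids = await search_post_ids(thread_id)
        # fan out: one small task per post instead of one big joined query
        posts = await asyncio.gather(*(load_post(pid) for pid in post_ids))
        return "<ul>" + "".join(render_post(p) for p in posts) + "</ul>"

    print(asyncio.run(render_thread(42)))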
| ▲ | zelphirkalt 8 hours ago | parent | next [-] | | When I read your comment I was thinking: "But then you would need to structure your db in such a way that ... ahh yes, they are getting to that ... but then what about actually rendering the results? Ah, they are describing that here ..." so well done, I think. However, async tasks on a single core mean potentially a lot of switching between those tasks. So async alone does not save the day here. It will have to be combined with true parallelism to result in the speedup we want. Otherwise a single task rendering all the parts in sequence would be faster. Also note that it depends on where your db is. The process you describe implies at least 2 rounds of db communication: the first one for the initial get-forum-thread query, then a second one for all the async get-forum-replies requests. So if communication with the db takes a long time, you might as well lose what you gained, because you did 2 rounds of that communication. So I guess it's not a trivial matter. | |
| ▲ | LtWorf 15 hours ago | parent | prev [-] | | Why do you think that all of that extra compute work would be better? |
|
|
|
| ▲ | xg15 a day ago | parent | prev | next [-] |
| I learned about the concept of async/await from JS and back then was really amazed by the elegance of it. By now, the downsides are well-known, but I think Python's implementation did a few things that made it particularly unpleasant to use. There is the usual "colored functions" problem. Python has that too, but on steroids: There are sync and async functions, but then some of the sync functions can only be called from an async function, because they expect an event loop to be present, while others must not be called from an async function because they block the thread or take a lot of CPU to run or just refuse to run if an event loop is detected. That makes at least four colors. The API has the same complexity: In JS, there are 3 primitives that you interact with in code: Sync functions, async functions and promises. (Understanding the event loop is needed to reason about the program, but it's never visible in the code). Whereas Python has: Generators, Coroutines, Awaitables, Futures, Tasks, Event Loops, AsyncIterators and probably a few more. All that for not much benefit in everyday situations. One of the biggest advantages of async/await was "fearless concurrency": The guarantee that your variables can only change at well-defined await points, and can only change "atomically". However, python can't actually give the first guarantee, because threaded code may run in parallel to your async code. The second guarantee already comes for free in all Python code, thanks to the GIL - you don't need async for that. |
| |
| ▲ | mcdeltat 20 hours ago | parent | next [-] | | I think Python async is pretty cool - much nicer than threading or multiprocessing - yet has a few annoying rough edges like you say. Some specific issues I run into every time: Function colours can get pretty verbose when you want to write functional wrappers. You can end up writing nearly the exact same code twice because one needs to be async to handle an async function argument, even if the real functionality of the wrapper isn't async. Coroutines vs futures vs tasks are odd. More than is pleasant, you have one but need the other for an API for no intuitive reason. Some waiting functions work on some types and not on others. But you can usually easily convert between them - so why make a distinction in the first place? I think if you create a task but don't await it (which is plausible in a server type scenario), it's not guaranteed to run because of garbage collection or something. That's weird. Such behaviour should be obviously defined in the API. | | |
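On that last point: the usual workaround is to hold a strong reference to every fire-and-forget task until it completes, along the lines of this sketch:

    import asyncio

    background_tasks: set[asyncio.Task] = set()

    async def handle_event(n: int) -> None:
        await asyncio.sleep(0.1)
        print(f"handled {n}")

    def fire_and_forget(coro) -> None:
        # keep a reference so the task can't be garbage-collected
        # mid-flight; drop it once the task finishes
        task = asyncio.create_task(coro)
        background_tasks.add(task)
        task.add_done_callback(background_tasks.discard)

    async def main() -> None:
        for i in range(3):
            fire_and_forget(handle_event(i))
        await asyncio.sleep(0.5)   # give the background tasks time to finish

    asyncio.run(main())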
| ▲ | everforward 2 hours ago | parent | next [-] | | > I think if you create a task but don't await it (which is plausible in a server type scenario), it's not guaranteed to run because of garbage collection or something. I think that use case doesn't work well in async, because async effectively creates a tree of Promises that resolve in order. A task that doesn't get await-ed is effectively outside its own tree of Promises because it may outlive the Promise it is a child of. I think the solution would be something like Linux's zombie process reaping, and I can see how the devs prefer just not running those tasks to dealing with that mess. |
| ▲ | tylerhou 18 hours ago | parent | prev | next [-] | | > You can end up writing nearly the exact same code twice because one needs to be async to handle an async function argument, even if the real functionality of the wrapper isn't async. Sorry for the possibly naive question. If I need to call a synchronous function from an async function, why can't I just call await on the async argument?

    from typing import Awaitable

    def foo(bar: str, baz: int):
        # some synchronous work
        pass

    async def other(bar: Awaitable[str]):
        foo(await bar, 0)
| |
| ▲ | xg15 20 hours ago | parent | prev [-] | | I think the general idea of function colors has some merit - when done right, it's a crude way to communicate information about a function's expected runtime in a way that can be enforced by the environment: A sync function is expected to run short enough that it's not user-perceptible, whereas an async function can run for an arbitrary amount of time. In "exchange", you get tools to manage the async function while it runs. If a sync function runs too long (on the event loop) this can be detected and flagged as an error. Maybe a useful approach for a language would be to make "colors" a first-class part of the type system and support them in generics, etc. Or go a step further and add full-fledged time complexity tracking to the type system. | | |
| ▲ | munificent 18 hours ago | parent | next [-] | | > Maybe a useful approach for a language would be to make "colors" a first-class part of the type system and support them in generics, etc. Rust has been trying to do that with "keyword generics": https://blog.rust-lang.org/inside-rust/2023/02/23/keyword-ge... | |
| ▲ | lmm 15 hours ago | parent | prev [-] | | > Maybe a useful approach for a language would be to make "colors" a first-class part of the type system and support them in generics, etc. This is what languages with higher-kinded types do and it's glorious. In Scala you write your code in terms of a generic monad and then you can reuse it for sync or async. |
|
| |
| ▲ | gloomyday a day ago | parent | prev | next [-] | | I remember trying to use async in Python for the first time in 2017, and I actually found it easier to learn the basics of Go to create a coroutine, export it as a shared library, and create the bindings. I'm not exaggerating. If I remember correctly, the Python async API was still in experimental phase at that time. | |
| ▲ | nateglims 19 hours ago | parent | prev | next [-] | | The API complexity really threw me when I last tried async python. It's very different from other async systems and is incredibly different from gevent or twisted which were popular when I was last writing server python. | |
| ▲ | codethief 13 hours ago | parent | prev | next [-] | | > but then some of the sync functions can only be called from an async function, because they expect an event loop to be present I agree that that's annoying but tbh it sounds like any other piece of code to me that relies on global state. (Man, I can't wait for algebraic effects to become mainstream…) | |
| ▲ | Retr0id a day ago | parent | prev | next [-] | | > some of the sync functions can only be called from an async function, because they expect an event loop to be present I recognise that this situation is possible, but I don't think I've ever seen it happen. Can you give an example? | | |
| ▲ | xg15 a day ago | parent [-] | | Everything that directly interacts with an event loop object and calls methods such as loop.call_soon() [1]. This is used by most of asyncio's synchronization primitives, e.g. asyncio.Queue. A consequence is that you cannot use asyncio Queues to pass messages or work items between async functions and worker threads. (And of course you can't use regular blocking queues either, because they would block). The only solution is to build your own ad-hoc system using loop.call_soon_threadsafe() or use third-party libs like Janus[2]. [1] https://github.com/python/cpython/blob/e4e2390a64593b33d6556... [2] https://github.com/aio-libs/janus |
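A minimal sketch of the ad-hoc bridging being described: a worker thread can't touch an asyncio.Queue directly, so it hands each put to the loop via call_soon_threadsafe (the payloads here are made up):

    import asyncio
    import threading

    def worker(loop: asyncio.AbstractEventLoop, queue: asyncio.Queue) -> None:
        # runs in a plain thread; Queue.put_nowait is not thread-safe,
        # so schedule it on the event loop instead
        for i in range(3):
            loop.call_soon_threadsafe(queue.put_nowait, f"item {i}")
        loop.call_soon_threadsafe(queue.put_nowait, None)   # sentinel: done

    async def main() -> None:
        queue: asyncio.Queue = asyncio.Queue()
        loop = asyncio.get_running_loop()
        threading.Thread(target=worker, args=(loop, queue), daemon=True).start()
        while (item := await queue.get()) is not None:
            print("got", item)

    asyncio.run(main())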
| |
| ▲ | int_19h 20 hours ago | parent | prev [-] | | Generators are orthogonal to all this. They are the equivalent of `function*` in JS. And yes, they are also coroutines, but experience has shown that keeping generators separate from generic async functions is more ergonomic (hence why C# and JS both do the same thing). | | |
| ▲ | xg15 20 hours ago | parent [-] | | True. I think the connection is more a historical one because the first async implementation was done using generators and lots of "yield from" statements AFAIK. But I think generators are still sometimes mentioned in tutorials for this reason. | |
| ▲ | int_19h 19 hours ago | parent [-] | | Implementing what was essentially an equivalent of `await` on top of `yield` (before we got `yield from` even) was a favorite pastime at some point. I worked on a project that did exactly that for WinRT projection to Python. And before that there was Twisted. It's very tempting because it gets you like 90% there. But then eventually you want something like `async for` etc... |
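For the curious, the trick amounts to a small trampoline that drives generators which "await" by yielding other generators -- a toy version, not any particular library's implementation:

    # toy trampoline: pre-async/await coroutines built on plain yield
    def run(task):
        stack, result = [task], None
        while stack:
            try:
                child = stack[-1].send(result)
                result = None
            except StopIteration as stop:
                stack.pop()
                result = stop.value
                continue
            if hasattr(child, "send"):   # "awaited" another coroutine
                stack.append(child)
            else:                        # plain value: hand it straight back
                result = child
        return result

    def add_one(x):
        yield            # pretend this is where we'd wait on I/O
        return x + 1

    def main():
        y = yield add_one(41)   # "await add_one(41)" in the old style
        return y

    print(run(main()))   # -> 42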
|
|
|
|
| ▲ | svieira a day ago | parent | prev | next [-] |
| I used to keep plugging Unyielding [1] vs. What Color Is Your Function [2] as the right matrix to view these issues within. But then Notes on structured concurrency [3]
was written and I just point to that these days. But, to sum it all up for those who want to talk here, there are several ways to look at concurrency but only one that matters. Is my program correct? How long will it take to make my program correct? Structured concurrency makes that clear(er) in the syntax of the language. Unstructured concurrency requires that you hold all the code in your head. [1]: https://glyph.twistedmatrix.com/2014/02/unyielding.html [2]: https://journal.stuffwithstuff.com/2015/02/01/what-color-is-... [3]: https://vorpus.org/blog/notes-on-structured-concurrency-or-g... |
| |
| ▲ | heisenzombie 19 hours ago | parent | next [-] | | I'll second the plug for structured concurrency (and specifically the Trio [1] library that the author wrote). [1] https://github.com/python-trio/trio | | |
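A minimal Trio sketch of what the structured style buys you: children started in a nursery cannot outlive the nursery block, so there is nothing unaccounted-for running in the background afterwards (the fetch coroutine is a made-up stand-in).

    import trio

    async def fetch(name: str) -> None:
        await trio.sleep(0.1)      # stand-in for real I/O
        print(f"{name} done")

    async def main() -> None:
        async with trio.open_nursery() as nursery:
            nursery.start_soon(fetch, "a")
            nursery.start_soon(fetch, "b")
        # both children are guaranteed finished (or cancelled) here
        print("nursery closed")

    trio.run(main)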
| ▲ | VonTum 10 hours ago | parent | prev | next [-] | | In [3], isn't there a pretty trivial exploit to get a "background task reads from closed file" again?

    async with mk_nursery() as nursery:
        with os.fopen(...) as file:
            nursery.start_soon(lambda: file.read())

The with block may have ended before the task starts... | |
| ▲ | stephenlf 12 hours ago | parent | prev [-] | | Man, that Trio [3] read was great. When we demand that all concurrent tasks must join, then we can better reason about our programs. I already kinda had this idea while working with Rust. In Rust, Futures won’t execute unless `await`ed. In practice, that meant that all my futures were joined. It was just the only way I could wrap my head around doing anything useful with async. |
|
|
| ▲ | rybosome a day ago | parent | prev | next [-] |
| I suppose my negative experiences with async fall under #3, that it is hard to maintain two APIs. One of the most memorable "real software engineering" bugs of my career involved async Python. I was maintaining a FastAPI server which was consistently leaking file descriptors when making any outgoing HTTP requests due to failing to close the socket. This manifested in a few ways: once the server ran out of available file descriptors, it degraded to a bizarre world where it would accept new HTTP requests but then refuse to transmit any information, which was also exciting due to increasing the difficulty of remotely debugging this. Occasionally the server would run out of memory before running out of file descriptors on the OS, which was a fun red herring that resulted in at least one premature "I fixed the problem!" RAM bump. The exact culprit was never found - I spent a full week debugging it, and concluded that the problem had to do with someone on the library/framework/system stack of FastAPI/aiohttp/asyncio having expectations about someone else in the stack closing the socket after picking up the async context, but that never actually occurring. It was impenetrable to me due to the constant context switching between the libraries and frameworks, such that I could not keep the thread of who (above my application layer) should have been closing it. My solution was to monkey patch the native python socket class and add a FastAPI middleware layer so that anytime an outgoing socket opened, I'd add it to a map of sockets by incoming request ID. Then when the incoming request concluded I'd lookup sockets in the map and close them manually. It worked, the servers were stable, and the only follow-up request was to please delete the annoying "Socket with file descriptor <x> manually closed" message from the logs, because they were cluttering things up. And thus, another brick in the wall of my opinion that I do not prefer Python for reliable, high-performance HTTP servers. |
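A rough sketch of that kind of stopgap -- not the original code, and the names are invented -- tracking sockets opened during a request via a patched socket class and force-closing the stragglers in a FastAPI middleware:

    import contextvars
    import socket

    from fastapi import FastAPI, Request

    current_sockets: contextvars.ContextVar = contextvars.ContextVar(
        "current_sockets", default=None)

    _RealSocket = socket.socket

    class TrackingSocket(_RealSocket):
        def __init__(self, *args, **kwargs):
            super().__init__(*args, **kwargs)
            bucket = current_sockets.get()
            if bucket is not None:
                bucket.append(self)

    socket.socket = TrackingSocket   # the monkey patch

    app = FastAPI()

    @app.middleware("http")
    async def close_leaked_sockets(request: Request, call_next):
        bucket: list = []
        token = current_sockets.set(bucket)
        try:
            return await call_next(request)
        finally:
            current_sockets.reset(token)
            for sock in bucket:
                if sock.fileno() != -1:   # still open: close it manually
                    sock.close()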
| |
| ▲ | Scramblejams a day ago | parent | next [-] | | > it is hard to maintain two APIs. This point doesn't get enough coverage. When I saw async coming into Python and C# (the two ecosystems I was watching most closely at the time) I found it depressing just how much work was going into it that could have been productively expended elsewhere if they'd have gone with blocking calls to green threads instead. To add insult to injury, when implementing async it seems inevitable that what's created is a bizarro-world API that mostly-mirrors-but-often-not-quite the synchronous API. The differences usually don't matter, until they do. So not only does the project pay the cost of maintaining two APIs, the users keep paying the cost of dealing with subtle differences between them that'll probably never go away. > I do not prefer Python for reliable, high-performance HTTP servers I don't use it much anymore, but Twisted Matrix was (is?) great at this. Felt like a superpower to, in the oughties, easily saturate a network interface with useful work in Python. | | |
| ▲ | lormayna 21 hours ago | parent [-] | | > I don't use it much anymore, but Twisted Matrix was (is?) great at this. You must be an experienced developer to write maintainable code with Twisted; otherwise, when the codebase increases a little, it will quickly become a bunch of spaghetti code. |
| |
| ▲ | stackskipton 20 hours ago | parent | prev | next [-] | | Glad I'm not the only one in this boat. We have a Python HTTP server doing something similar. No one can figure it out, containerd occasionally OOM-kills it, everyone just shrugs and moves on. | |
| ▲ | mdaniel 15 hours ago | parent [-] | | that tracks so much with my experience in the whole of the python community |
| |
| ▲ | a day ago | parent | prev | next [-] | | [deleted] | |
| ▲ | LtWorf 15 hours ago | parent | prev [-] | | I'm not entirely sure how "3rd party library bug" is python's fault. | | |
| ▲ | 7bit 11 hours ago | parent [-] | | So you are at least a little sure. A little too much for my taste ;) |
|
|
|
| ▲ | PaulHoule a day ago | parent | prev | next [-] |
I went through a phase of writing asyncio servers for my side projects. Probably the most fun I had was writing things that were responsive in complex ways, such as a websockets server that was also listening on message queues or on a TCP connection to a Denon HEOS music player. Eventually I wrote an "image sorter" that I found was hanging up when the browser was trying to download images in parallel. The image serving should not have been CPU bound, I was even using sendfile(), but I think other requests would hold up the CPU and would block the tiny amount of CPU needed to set up that sendfile. So I switched from aiohttp to the Flask API and serve with either Flask or Gunicorn; I even front it with Microsoft IIS or nginx to handle the images so Python doesn't have to. It is a minor hassle because I develop on Windows so I have to run Gunicorn inside WSL2, but it works great and I don't have to think about server performance anymore. |
| |
| ▲ | tdumitrescu a day ago | parent | next [-] | | That's the main problem with evented servers in general isn't it? If any one of your workloads is cpu-intensive, it has the potential to block the serving of everything else on the same thread, so requests that should always be snappy can end up taking randomly long times in practice. Basically if you have any cpu-heavy work, it shouldn't go in that same server. | | |
| ▲ | acdha a day ago | parent | next [-] | | Indeed. async is one of those things which makes a big difference in a handful of scenarios but which got promoted as a best-practice for everything. Python developers have simply joined Node and Go developers in learning that it’s not magic “go faster” spray and reasoning about things like peak memory load or shared resource management can be harder. | |
| ▲ | PaulHoule 21 hours ago | parent | prev | next [-] | | My system is written in Python because it is supported by a number of batch jobs that use code from SBERT, scikit-learn, numpy and such. Currently the server doesn't do any complex calculations but under asyncio it was a strict no-no. Mostly it does database queries and formats HTML responses but it seems like that is still too much CPU. My take on gunicorn is that it doesn't need any tuning or care to handle anything up to the large workgroup size other than maybe "buy some more RAM" -- and now if I want to do some inference in the server or use pandas to generate a report I can do it. If I had to go bigger I probably wouldn't be using Python in the server and would have to face up to either dual language or doing the ML work in a different way. I'm a little intimidated about being on the public web in 2025 though with all the bad webcrawlers. Young 'uns just never learned everything that webcrawler authors knew in 1999. In 2010 there were just two bad Chinese webcrawlers that never sent a lick of traffic to anglophone sites, but now there are new bad webcrawlers every day it seems. | |
| ▲ | nly a day ago | parent | prev | next [-] | | OS threads are for CPU bound work. Async is for juggling lots of little initialisations, completions, and coordinating work. Many apps are best single threaded with a thread pool to run (single threaded) long running tasks. | |
| ▲ | materielle 20 hours ago | parent | prev | next [-] | | Traditionally, there are two strategies: 1) Use the network thread pool to also run application code. Then your entire program has to be super careful to not block or do CPU intensive work. This is efficient but leads to difficult to maintain programs. 2) The network thread pool passes work back and forth between an application executor. That way, the network thread pool is never starved by the application, since it is essentially two different work queues. This works great, but now every request performs multiple thread hops, which increases latency. There has been a lot of interest lately to combine scheduling and work stealing algorithms to create a best of both worlds executor. You could imagine, theoretically, an executor that auto-scales, and maintains different work queues and tries to avoid thread hops when possible. But ensures there are always threads available for the network. | |
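A rough sketch of the second strategy with stock asyncio (illustrative only; the pool size and workload are made up):

  import asyncio
  from concurrent.futures import ProcessPoolExecutor

  def crunch(n):
      # CPU-heavy application work, kept off the I/O loop
      return sum(i * i for i in range(n))

  async def handle_request(pool, n):
      loop = asyncio.get_running_loop()
      # hop to the application executor and back; the extra latency is these hand-offs
      return await loop.run_in_executor(pool, crunch, n)

  async def main():
      with ProcessPoolExecutor(max_workers=4) as pool:
          results = await asyncio.gather(*(handle_request(pool, 100_000) for _ in range(8)))
          print(len(results))

  if __name__ == "__main__":
      asyncio.run(main())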
| ▲ | guappa 9 hours ago | parent | prev [-] | | Backend developers finding out why user interfaces have a thread for the GUI and a thread for doing work :D |
| |
| ▲ | Townley 20 hours ago | parent | prev [-] | | It’s heartening that there are people who find the problem you described “fun” Writing a FastAPI websocket that reads from a redis pubsub is a documentation-less flailfest |
|
|
| ▲ | stillsut an hour ago | parent | prev | next [-] |
Not an expert, but my chats with ChatGPT led me to believe async + FastAPI can give you 40x throughput for request handling over non-async code. The essential idea was that I could be processing ~100 requests per vCPU in the async event loop, while threading would max out at 2-4 threads per CPU. Of course, assume that in either model we're waiting on a 50-2000 ms DB query or service call to finish before sending the response. Is this not true? And if it is true, why isn't the juice worth the squeeze: more than an order of magnitude more saturation/throughput for the same hardware and same language, just with a new engine at its heart? |
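The claim is easy to sanity-check with a toy sketch; fake_db_call just sleeps, so real numbers depend on the driver being genuinely async:

  import asyncio, time

  async def fake_db_call(i):
      await asyncio.sleep(0.5)   # stand-in for a 500 ms DB query
      return i

  async def main():
      t0 = time.perf_counter()
      await asyncio.gather(*(fake_db_call(i) for i in range(100)))
      print(f"100 calls in {time.perf_counter() - t0:.2f}s")  # ~0.5 s, not ~50 s

  asyncio.run(main())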
|
| ▲ | mjd a day ago | parent | prev | next [-] |
I haven't read the article yet, but I do have something to contribute: several years ago I was at PyCon and saw a talk in which someone mentioned async. I was interested and wanted to learn to use it. But I found there was no documentation at all! The syntax was briefly described, but not the semantics. I realized, years later, that the (non-)documentation was directed at people who were already familiar with the feature from Javascript. But I hadn't been familiar with it from Javascript and I didn't even know that Javascript had had such a feature. So that's my tiny contribution to this discussion, one data point: Python's async might have been one unit more popular if it had had any documentation, or even a cross-reference to the Javascript documentation. |
| |
| ▲ | notatoad a day ago | parent | next [-] | | this was my initial experience with python async as well (which i now use heavily) the documentation is directed at people who want coroutines and futures, and know what that means. if you don't know what coroutines and futures are, the python docs aren't going to help you. the documentation isn't going to guide anybody into using the async features who aren't already seeking them out. and maybe that's intentional, but it's not going to grow adoption of the async features. | |
| ▲ | shim__ 3 hours ago | parent | prev | next [-] | | Bad documentation is customary when writing Python | |
| ▲ | int_19h 19 hours ago | parent | prev [-] | | FWIW Python got async/await before JavaScript did. I believe at the time the main inspiration was C#. | | |
| ▲ | lyu07282 19 hours ago | parent [-] | | JavaScript was always single-threaded asynchronous, the added async/await keywords were just syntactic sugar. Node.js became popular before it as well, though I found at the time it was difficult to avoid callback hell similar to using libuv directly in C. | | |
| ▲ | int_19h 18 hours ago | parent | next [-] | | async/await was syntactic sugar in C# as well. Callbacks are a natural way to do async so it's no surprise. And while Python implements async directly in the VM, its semantics is such that it can be treated as syntactic sugar for callbacks there also. | |
| ▲ | guappa 9 hours ago | parent | prev [-] | | async await is syntactic sugar hiding calls to poll() and callbacks in every programming language. |
|
|
|
|
| ▲ | rich_sasha 14 hours ago | parent | prev | next [-] |
| What the article and the comments don't seem to mention is also that the documentation is an outlier on the poor side. Most Python documentation is at least decent. asyncio hides a lot of the complexity behind a tutorial style "just do this" prose, only obliquely mentions the foot guns and gives little guidance on how to actually structure async code. IME writing an asyncio Python application is a bit like fixing a broken Linux boot. You frantically Google things, the documentation doesn't mention it, and eventually you find a rant on a forgotten Finnish embedded electronics forum where someone has the same problem as you, and is kindly sharing a solution. After 30 mins of C&P of random commands from a stranger on the web, it works, for no reason you can decipher. Thank goodness for the Finns and Google Translate. |
| |
| ▲ | Philpax 6 hours ago | parent | next [-] | | I would disagree on the first paragraph, if only to say that the majority of Python stdlib documentation is written in that tutorial style, and I loathe it. It is always a chore to look something in the stdlib up, especially if you're used to the reference documentation for Rust/Go/Ruby/JavaScript. | | |
| ▲ | rich_sasha 4 hours ago | parent [-] | | I think a lot of standard library have both. For example multiprocessing or logging. It's true, the tutorial is annoying, except perhaps on first reading, but at least the proper documentation is there. For asyncio the actual hard documentation bit is missing, incomplete or misleading, depending on where exactly you're looking. |
| |
| ▲ | bertil 14 hours ago | parent | prev [-] | | This rings incredibly true, with one major exception: Google Translate can’t handle Finnish to a point that’s both confusing and hilarious. If the output explains how asyncio works, I’m guessing the original discussion was about opening portal for demons, or waiting in line to board the ferry to Estonia. |
|
|
| ▲ | rsyring a day ago | parent | prev | next [-] |
Not too long ago, I read a comment on HN that suggested, due to Python's support for free-threading, async in Python will no longer be needed and will lose out to free-threading due to its use of "colored" functions. Which seems to align with where this author ends up: > Because parallelism in Python using threads has always been so limited, the APIs in the standard library are quite rudimentary. I think there is an opportunity to have a task-parallelism API in the standard library once free-threading is stabilized. > I think in 3.14 the sub-interpreter executor and free-threading features make more parallel and concurrency use cases practical and useful. For those, we don’t need async APIs and it alleviates much of the issues I highlighted in this post. Armin recently put up a post that goes into those issues in more depth: https://lucumr.pocoo.org/2025/7/26/virtual-threads/ Which led me to a pre-PEP discussion regarding the possibility of Virtual Threads in Python, which was probably way more than I needed to know but found interesting: https://discuss.python.org/t/add-virtual-threads-to-python/9... |
| |
| ▲ | ashf023 a day ago | parent | next [-] | | Interesting that very few people in that thread seem to understand Go's model, especially the author of this proposal. If you don't allow preemption, you still have a sort of coloring because most non async functions aren't safe to call in a virtual thread - they may block the executor. If you call C code, you need to swap out stacks and deal with blocking by potentially spawning more OS threads - that's what CGo does. Maybe preemption is harder in Python, but that's not clearly expressed - it's just rejected as obviously unwanted. Ultimately Python already has function coloring, and libraries are forced into that. This proposal seems poorly thought out, and also too little too late. | | |
| ▲ | Dagonfly 2 hours ago | parent | next [-] | | I'm also surprised how often the preemptive vs. cooperative angle gets ignored in favor of the stackful vs stackless debate. If you choose a non-preemptive system, you naturally need yield points for cooperation. Those can either be explicit (await) or implicit (e.g. every function call). But you can get away with a minimal runtime and a stackless design. Meanwhile, in a preemptive system you need a runtime that can interrupt other units of work. And it pushes you towards a stackful design. All those decisions are downstream of the preemptive vs. cooperative. In either case, you always need to be able to interface with CPU-heavy work. Either through preemption, or by isolating the CPU-heavy work. | |
| ▲ | rsyring a day ago | parent | prev [-] | | I can't speak to the more technical aspects you bring up b/c I'm not that well versed in the underlying implementations and tradeoffs. > and also too little too late. I think it very likely that Python will still be around and popular 10 years from now. Probably 20 years from now. And maybe 30 years from now. I think that's plenty of time for a new and good idea that addresses significant pain points to take root and become a predominant paradigm in the ecosystem. So I don't agree that it's too little too late. But whether or not a Virtual Threads implementation can/will be developed and be good enough to gain wide adoption, I just can't speak to. If it's possible to create a better devx than async and get multi-core performance and usage, I'm all for the effort. |
| |
| ▲ | int_19h 20 hours ago | parent | prev | next [-] | | C# has had free threading all along, yet still saw the need for async as a separate facility. The same goes for C++, which now has co_await. | | |
| ▲ | nine_k 14 hours ago | parent [-] | | Threads are more expensive and slow to create. Submitting a task to a thread pool and waiting for a result, or a bunch of results, to show up, is much more ergonomic. So `async` automatically submits a task, and `await` awaits until it completes. Ideally `await` just discovers that a task (promise) has completed at that point, while the main thread was doing other things. Once you have this in place, you can notice that you can "submit the task to the same thread", and just switch between tasks at every `await` point; you get coroutines. This is how generators work: `yield` is the `await` point. If all the task is doing is waiting for I/O, and your runtime is smart enough to yield to another coroutine while the I/O is underway, you can do something useful, or at least issue another I/O task, not waiting for the first one to complete. This allows typical server code that does a lot of different I/O requests to run faster. Older things like `gevent` just automatically added yield / await points at certain I/O calls, with an event loop running implicitly. |
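A side-by-side sketch of the two shapes described above, with toy stand-ins for the I/O:

  import asyncio
  import time
  from concurrent.futures import ThreadPoolExecutor

  def fetch(url):               # stand-in for blocking I/O
      time.sleep(0.1)
      return f"body of {url}"

  async def async_fetch(url):   # stand-in for non-blocking I/O
      await asyncio.sleep(0.1)
      return f"body of {url}"

  # thread-pool flavour: submit now, do other work, collect the result later
  with ThreadPoolExecutor() as pool:
      fut = pool.submit(fetch, "https://example.com")
      # ... do something else ...
      print(fut.result())

  # coroutine flavour: same shape, but switches only happen at each await
  async def main():
      task = asyncio.create_task(async_fetch("https://example.com"))
      # ... do something else ...
      print(await task)

  asyncio.run(main())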
| |
| ▲ | seunosewa 21 hours ago | parent | prev [-] | | async was the wrong solution to the right problem - improving general performance.
Free threading is the prize in an increasingly multi-core CPU world. | | |
| ▲ | guappa 9 hours ago | parent [-] | | Threads use a lot more memory than a single async thread, and if the load is IO, 1 thread is enough. Speed might be similar but resource usage is not the same at all. |
|
|
|
| ▲ | languagehacker a day ago | parent | prev | next [-] |
Wow, I didn't even see much here about how miserable the sync_to_async and async_to_sync transformers are to use. In general, the architectures developed because of the GIL, like Celery and gunicorn and stuff like that, handle most of the problems we run into that async/await solves, with slightly better horizontal scaling IMO. The problem with a lot of async code is that it tends not to think beyond the single machine that's running it, and by the time you do, you need to rearchitect things to scale better horizontally anyway. For most Python applications, especially in web development, just start with something like Celery and you're probably fine. |
| |
| ▲ | operator-name 21 hours ago | parent [-] | | Not to mention sync_to_async and async_to_sync come from a separate library, asgiref, which the Django developers made to wrap a thread pool runtime! |
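For anyone who hasn't seen them, a tiny sketch of the two wrappers (assumes asgiref is installed; the two functions being wrapped are made up):

  import asyncio
  import time
  from asgiref.sync import async_to_sync, sync_to_async

  def legacy_blocking(x):        # e.g. an old ORM call
      time.sleep(0.1)
      return x * 2

  async def new_async(x):
      await asyncio.sleep(0.1)
      return x * 2

  async def async_view():
      # runs the blocking function on asgiref's thread pool
      return await sync_to_async(legacy_blocking)(21)

  def sync_view():
      # drives the coroutine to completion from plain sync code
      return async_to_sync(new_async)(21)

  print(asyncio.run(async_view()), sync_view())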
|
|
| ▲ | KaiserPro a day ago | parent | prev | next [-] |
The two issues I have with async are: 1) it's infectious. You need to wrap everything in async, or nothing. 2) it has non-obvious program flow. Even though it is faster in a lot of cases (I had a benchmark-off for a web/socket server, multi-threaded vs async, with a colleague, and the async version was faster), for me it is a shit to force into a class. The thing I like about threads is that the flow of data is there and laid out neatly _per thread_, whereas to me, async feels like surprise goto. async feels like it accepts a request, and then will at some point in the future either trigger more async, or crap out, mixing loads of state from different requests all over the place. To me it feels like a knotted wool bundle, whereas threaded/multi-process feels like a freshly wound bobbin. Now, this is all viiiiiibes man, so it's subjective. |
|
| ▲ | MichaelRazum a day ago | parent | prev | next [-] |
| You can’t just plug and play it. As soon as you introduce async you need to have the runtime loop and so on. Basically the whole architecture needs to be redesigned |
| |
| ▲ | whilenot-dev a day ago | parent | next [-] | | asyncio has been designed to be as "plug and play" as it gets. I'd discourage it, but one could create event loops wherever one needs them, one separate thread per loop, and adapt the code base in a more granular fashion. Blocking through the GIL will persist, though. For any new app that is mostly IO-constrained I'd still encourage the use of asyncio from the beginning. | |
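A minimal sketch of that one-loop-in-its-own-thread pattern (again: it works, but I'd discourage it):

  import asyncio
  import threading

  loop = asyncio.new_event_loop()
  threading.Thread(target=loop.run_forever, daemon=True).start()

  async def do_io(n):
      await asyncio.sleep(0.1)
      return n * 2

  # from ordinary sync code: schedule work on the background loop, block for the result
  future = asyncio.run_coroutine_threadsafe(do_io(21), loop)
  print(future.result())   # 42

  loop.call_soon_threadsafe(loop.stop)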
| ▲ | odyssey7 19 hours ago | parent [-] | | I remember back when the “Pythonic” philosophy was to make the language accessible. It’s clear that Dr. Frankenstein has been at large and managed to get his hands on Python’s corpse. | | |
| ▲ | kstrauser 17 hours ago | parent [-] | | I don’t think that’s fair. Yeah, there is a lot to learn and keep track of. At the same time, it’s an inherently complex problem. From one POV, an async Python program looks a lot like a cooperative multitasking operating system, but with functions instead of processes. It was a lot harder to write well-behaved programs on classic Mac OS than it was on a Commodore 64, but that Mac app was doing an awful lot more than the C64 program was. You couldn’t write them the same way and expect good results, but instead had to go about it a totally different way. It didn’t mean the Mac way was bad, just that it had a lot more inherent complexity. |
|
| |
| ▲ | DrillShopper a day ago | parent | prev [-] | | It's this - asyncio is a nightmare to add to and get working in a code base not specifically designed for it, and most people aren't going to bother with that. asyncio is not good enough at anything it does to make it worth it to me to design my entire program around it. |
|
|
| ▲ | foresto 21 hours ago | parent | prev | next [-] |
A little history... During development, asyncio was called tulip. A quick search turns up this talk by Guido: https://www.youtube.com/watch?v=aurOB4qYuFM I seem to recall that Guido was in touch with the author of Twisted at the time, so design ideas from that project may have helped shape asyncio. https://twisted.org/ Before asyncio, Python had asyncore, a minimal event loop/callback module. I think it was introduced in Python 1.5.2, and remained part of the standard library until 3.12. https://docs.python.org/3.11/library/asyncore.html https://docs.python.org/3.11/library/asynchat.html |
| |
| ▲ | blibble 20 hours ago | parent [-] | | asyncore actually worked well, unlike asyncio so of course in their infinite wisdom, they removed it |
|
|
| ▲ | rdtsc a day ago | parent | prev | next [-] |
I never liked async in Python. I feel like it's a bad design pattern, a lot of it borrowed from Twisted at the time. I always liked the gevent/eventlet-based approach and will likely always stick to using that. At the time Go and Elixir/Erlang had green threads (lightweight procs / goroutines), and in general I think that makes for a cleaner code base. |
|
| ▲ | zbentley 14 hours ago | parent | prev | next [-] |
| I don't love Python's async API either, but I think a lot of its complained-about complexity arises from two things: making "when does the coroutine start running" a very explicit point in code (hence the Task/awaitable-function dichotomy), and how it chooses to handle async cancellation: via exceptions. And Python's async cancellation model is pretty nice! You can reason about interruptions, timeouts, and the like pretty well. It's not all roses: things can ignore/defer cancellations, and the various wrappers people layer on make it hard to tell where, exactly, Tasks get cancelled--awaitable functions are simple here, at least. But even given that, Python's approach is a decent happy medium between Node's dangling coroutines and Rust's no-cleanup-ever disappearing ones (glib descriptor: "it's pre-emptive parallelism, but without the parallelism"). More than a little, I think, of the "nobody does it this way" weirdness and frustration in Python asyncio arises from that. That doesn't excuse the annoyances imposed by the resulting APIs, but it is good to know. |
| |
| ▲ | rich_sasha 14 hours ago | parent [-] | | Cancellations probably caused more bugs in my async code than anything else. If any code in your coroutine, including library code, has a broad try/except, there's a good chance that eventually the cancellation exception will be swallowed up and ignored. A catch-all try/except of course isn't the pinnacle of good software engineering, but it happens a lot, in particular in server-type applications. You may have some kind of handler loop that handles events periodically, and if one such handling fails, with an unknowable exception, you want to log it and continue. So then you have to remember to explicitly re-raise cancellation errors. Maybe it's the least bad Pythonic option, but it's quite clunky for sure. |
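The defensive pattern ends up looking something like this (a sketch; note that on Python 3.8+ asyncio.CancelledError derives from BaseException, so a plain `except Exception` no longer swallows it, but bare excepts and `except BaseException` still do):

  import asyncio

  async def handle_one_event():          # stand-in for the real per-event work
      await asyncio.sleep(0.1)

  async def handler_loop():
      while True:
          try:
              await handle_one_event()
          except asyncio.CancelledError:
              raise                      # the easy-to-forget line
          except:                        # the catch-all that causes the bug
              pass                       # log and continue

  async def main():
      task = asyncio.create_task(handler_loop())
      await asyncio.sleep(0.3)
      task.cancel()
      try:
          await task
      except asyncio.CancelledError:
          print("handler cancelled cleanly")

  asyncio.run(main())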
|
|
| ▲ | kurtis_reed 20 hours ago | parent | prev | next [-] |
| The premise of the article is wrong. Async in Python is popular. I'd expect most new web backends to use it. The article says SQLalchemy added async support in 2023 but actually it was 2020. |
|
| ▲ | TheCondor 18 hours ago | parent | prev | next [-] |
I generally like Python. I'm not a hater, but I don't treat it like a religion either. Async Python is practically a new language. I think for most devs, it's a larger change than 2 to 3 was. One of the things that made Python uptake easy was the vast number of libraries and bindings to C libraries. With async you need new versions of that stuff; you can definitely use synchronous libraries, but then you get to debug why your stuff blocks. Async Python is a different debugging experience for most Python engineers. I support a small handful of async Python services and think it would be an accelerator for our team to rewrite them in Go. When you hire Python engineers, most don't know async that well, if at all. If you have a mix of synchronous and asynchronous code in your org, you can't easily intermix it. Well, you can, but it won't behave as you usually desire it to; it's probably more desirable to treat them as different code bases. Not to be too controversial, but depending on your vintage and the way you've learned to write software, I think you can come to Python and think async is divine manna. I think there are many more devs that come to Python from data science or scripting, or maybe as a first language, and I think they have a harder time accepting the value and need of async. Like I said above, it's almost an entirely different language. |
|
| ▲ | physicsguy 11 hours ago | parent | prev | next [-] |
One of the big reasons, I'd say, is that 90% of Python work doesn't actually benefit from async. Basically nothing in the data science / simulation / etc. world benefits at all. Web development does benefit, and Django has sprinkled it in (but not DRF), Flask has a fork that's async, and FastAPI is async. At work people have suggested we should switch our Flask code to async to 'make it faster', but for us, we're largely using Flask to serve ML models, so there's no benefit to being async at all since we're largely compute bound within the request cycle. |
| |
| ▲ | guappa 9 hours ago | parent [-] | | You'd save a bit of memory by letting 1 thread handle multiple connections, but that's about it. |
|
|
| ▲ | dapperdrake a day ago | parent | prev | next [-] |
Python's async is very difficult to use and debug. It seems to get stuck randomly, which reads like race conditions. And Python cannot work around this nicely with its lambdas only permitting a single expression in their body. Not worth the trouble. Shell pipelines are way easier to use. Or simply waiting (no pun intended) for the synchronous version to finish. |
| |
| ▲ | mixmastamyk a day ago | parent [-] | | > lambdas only permitting a single expression Use a tuple, maybe walrus, and return the last item[-1]. | | |
| ▲ | dapperdrake 20 hours ago | parent [-] | | That idea sounds good. How do I get variables for not redoing long-running computations that depend on one-another? So, what if the third tuple value depends on the second and the second in turn depends on the first? | | |
| ▲ | mixmastamyk 17 hours ago | parent | next [-] | | That’s what walrus is for: future = lambda age: (
print('Your age is:', age),
older := age + 5,
print('Your age in the future:', older),
older,
)[-1]
print(future(20))
# out
Your age is: 20
Your age in the future: 25
25
| |
| ▲ | int_19h 20 hours ago | parent | prev [-] | | You can abuse list and sequence comprehensions for this. `for..in` is effectively a variable binding since you can target a freshly created list or a tuple if you need to bind a single value. So: [x
for x in [some_complicated_expression]
if x > 0
for y in [x + 1]
...
][0]
That said, I wouldn't recommend this because of poor readability. |
|
|
|
|
| ▲ | harpiaharpyja 13 hours ago | parent | prev | next [-] |
I've been working quite heavily with async Python for five and a half years now. I've been the principal developer of a control system framework for laboratory automation, written pretty much entirely in async Python. I say framework because it's a reusable engine that has gone on to become the foundation for three projects so far. Our organization is primarily involved in materials research. At its heart it's kind of like an asynchronous task execution engine that sits on top of an I/O layer which allows the high-level code to coordinate the activities of various equipment. Stuff like robot arms, furnace PID controllers, gantry systems, an automatic hydraulic press/spot welder (in one case), various kinds of pneumatic or stepper-actuated mechanisms, and of course, measurement instruments. Often there might be a microcontroller intermediary, but the vast majority of the work is handled by Python. My experience with async Python has been pretty positive, and I'm very happy with our choice to lean heavily into async. Contrary to some of the comments here I don't find the language's async facilities to be rough at all. Having cancellation work smoothly is also pretty important to us and I can't say I've experienced any pain points with exception-based cancellation. Maybe we've been lucky, but injecting an exception into a task to cancel it actually does work pretty reliably. Integrating dependencies that expose blocking APIs has never been a big deal either. Usually you want to have an interface layer for every third party dependency anyways, and it's no big deal to just write an async wrapper that uses a thread or a thread pool to keep the blocking stuff off of the main thread. I personally think that a lot of people's negative experiences here might have more to do with asyncio than the language's async features. Prior to stepping into my current role, I also had some rough experiences with asyncio, which is why we chose to build all of our async code on top of curio. There was some uncertainty at first about how well supported it would be compared to a package in the standard library, but honestly curio is a really well-put-together package that just works really smoothly. |
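For the curious, the day-to-day code under curio looks roughly like this (a toy sketch, not our framework; poll_instrument is made up):

  import curio

  async def poll_instrument():
      while True:
          await curio.sleep(0.5)     # stand-in for reading a device

  async def main():
      task = await curio.spawn(poll_instrument)
      await curio.sleep(2)
      await task.cancel()            # injects a cancellation exception into the task

  curio.run(main)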
| |
| ▲ | guappa 9 hours ago | parent | next [-] | | I think most of the problems are due to people not understanding how async works (non blocking file descriptors and a call to poll). | |
| ▲ | _dain_ 10 hours ago | parent | prev [-] | | Oh hey, I'm the mirror universe version of you. I used to work in a semiconductor plant, writing Python code that controlled robot arms and electronic measurement instruments and so on. In my universe we used threads over blocking calls instead of async and it was exactly as bad as you might imagine. >Having cancellation work smoothly is also pretty important to us +10000. Threads don't have good cancellation semantics, so we never had a robust solution to the "emergency shutdown" problem where you need to tell all the running equipment to stop whatever they're doing and return to safe positions. Every day I worked on that codebase I wished it had been async from the beginning, but I couldn't see a way to migrate gradually because function coloring makes it an all-or-nothing affair. |
|
|
| ▲ | kamikaz1k a day ago | parent | prev | next [-] |
| async python is awful. to me it is a by default avoid. and when you can't avoid, use only where it provides outsized benefit. |
| |
| ▲ | TZubiri 21 hours ago | parent [-] | | Sometimes less is more. When a program becomes so big, one of the hardest challenges is not to keep on adding stuff. |
|
|
| ▲ | notepad0x90 9 hours ago | parent | prev | next [-] |
| For me, when I use python, it's because I want faster dev time to prove a concept or that I expect others with little to no programming experience to maintain the code in the future. So, I rarely ever use async because I never seem to be in a position where the debugging complexity and how hard it is for others/newbies to read and be familiar with the code is worth the performance improvements. Like others are saying, if I want it fast and efficient (processing), I'll just use Go. Python isn't like JS in browsers, you don't have to use it, you have to want to use it. and the same goes with its features. Maybe if python tutorials/books and "How do i ____ in python?" search results used async, map, filter, collections,etc.. these awesome python features would be more prevalent. But, I can see how mature projects should probably mandate their usage where it makes sense. |
|
| ▲ | 0xbadcafebee 14 hours ago | parent | prev | next [-] |
| Because it's a niche? You don't need async for most stuff Python is used for, it's a "nice-to-have", and it's annoying to add. If you have to have concurrency/threading/etc, there are other languages with better paradigms. The same thing happened with Perl and its weird threading (for different reasons, but still)... I guess Python didn't learn that lesson. Perl also gained async and coroutine support, but I think they were added a while after I left the community. I doubt many people use them today. Anyone used them and can comment on ease vs Python? |
| |
| ▲ | rich_sasha 14 hours ago | parent [-] | | Async is perfectly fine for medium-load, single-threaded RPC/REST server wrangling. I ran it in production with 100s, sometimes 1000s of calls/sec with no issues. And thread-safety is much easier with async, where you know where the context switches are occurring. |
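A toy sketch of what I mean by knowing where the switches are: there is no await between the read and the write, so no lock is needed:

  import asyncio

  counter = 0

  async def bump(n):
      global counter
      for _ in range(n):
          counter += 1            # no await here, so no other task can interleave
          await asyncio.sleep(0)  # context switches only happen at awaits

  async def main():
      await asyncio.gather(*(bump(1000) for _ in range(10)))
      print(counter)              # always 10000

  asyncio.run(main())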
|
|
| ▲ | hk1337 4 hours ago | parent | prev | next [-] |
| It's not as easy to implement as it is in javascript though. If someone has it already implemented, like say in FastAPI, then it's pretty trivial to use but to just use async is kind of a pain. |
|
| ▲ | bjt 20 hours ago | parent | prev | next [-] |
| I was using gevent to get async benefits in Python 10+ years ago. It's a much nicer programming paradigm than async/await, in my opinion. Now working in Go where the same pattern is built into the language, I'm even more convinced. |
|
| ▲ | alanfranz a day ago | parent | prev | next [-] |
| Much more than 10y. Twisted existed since the 90s. I didn’t read the whole article but I find is strange that it’s not mentioned, ever. |
|
| ▲ | tschellenbach a day ago | parent | prev | next [-] |
Yes, this! It's a mess: some typing, some async. No standardization / one way to do things. It literally goes against the original Zen of Python: "There should be one-- and preferably only one --obvious way to do it": aim for a single, clear solution to a problem. |
| |
| ▲ | aquariusDue a day ago | parent [-] | | Cue the zen of python apologists explaining how we just don't get it and that with enough reframing it'll click. Snide remark aside, I actually like the Zen of Python as programming language folklore but in 2025 AD it's kinda crazy to pretend that Python actually adheres to those tenets or whatever you wish to call them, and I'd go as far as to claim that it does a disservice to a language flexible enough for a lot of use cases. There's even someone on YouTube developing a VR game with Python. | | |
| ▲ | int_19h 20 hours ago | parent | next [-] | | It has never been literally true anyway, not even back when it was originally written. | |
| ▲ | a day ago | parent | prev [-] | | [deleted] |
|
|
|
| ▲ | KingOfCoders 13 hours ago | parent | prev | next [-] |
I did a lot of Scala Futures and liked the concept more than 'async' everywhere, because it was easier to reason about what happens and functions were just functions. For some years now I have used Go, where this is even easier. But it took me some time to realize I can use the same idioms in Go as in Scala:
  // Scala
  val f = Future(x)
  // Do something else until you need f
  ...
  for (r <- f) { ... }
can be written as
  // Go
  c := make(chan int)
  // Do something else until you need the result
  ...
  r := <-c
My mental model was channel-as-a-queue, but it can easily be used as channel-as-a-future for a single value. And `select` for more complicated versions. I miss the easy composition and delaying of futures though (f.map etc.) |
|
| ▲ | BrenBarn 12 hours ago | parent | prev | next [-] |
| The main reason I've encountered is that async is generally an all-or-nothing thing. It's generally not possible to take an existing code base and just "add a little async here and there". The entire thing has to be restructured from the ground up for async. The gain has to be really major for this to be worthwhile, more so the better your existing code already works. This is sort of like the article's Problem 3, but it's not just maintaining two APIs, it's even creating the second API in the first place. |
| |
| ▲ | guappa 9 hours ago | parent [-] | | Before async they had asyncore, which they now entirely removed. So the real early adopters of async in python are punished by having to rewrite their software to run it with a current version of python. |
|
|
| ▲ | taeric 19 hours ago | parent | prev | next [-] |
| Wouldn't this be like asking why bit packing/flipping isn't more popular in python? In general, it just isn't necessary for the vast majority of programs people are likely to write using python. Which isn't to argue that they did a good or a bad job adding the ability to the language. It just isn't the long pole in performance concerns for most programs. |
|
| ▲ | dagenix 15 hours ago | parent | prev | next [-] |
The problem, IMO, with asyncio is that it's way, way too complicated. In my experience, anyio (https://github.com/agronholm/anyio) provides a much better interface on top of asyncio. And since it can use asyncio as a backend, it maintains compatibility with the asyncio ecosystem. FastAPI, for example, uses anyio. One thing that I don't see being mentioned in any of the threads here talking about green threads is cancellation. A huge benefit, IMO, of anyio is that it makes cancellation really easy to handle. With asyncio, cancellation is pretty hard. And with green threads, cancellation is often impossible. |
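A small sketch of why the anyio/trio style feels nicer: the cancel scope cancels everything inside the task group when the timeout hits (assumes anyio is installed; the workers are made up):

  import anyio

  async def worker(name):
      await anyio.sleep(10)          # pretend I/O that never finishes in time
      print(name, "done")

  async def main():
      with anyio.move_on_after(1):   # cancel scope: everything inside gets 1 second
          async with anyio.create_task_group() as tg:
              tg.start_soon(worker, "a")
              tg.start_soon(worker, "b")
      print("timed out, both workers cancelled")

  anyio.run(main)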
|
| ▲ | wink 10 hours ago | parent | prev | next [-] |
| I think the 2/3 split had more impact than the author thinks. Many people in my bubble (around 2013-2017) just never went with python 3, but chose other languages. The company I was working for started important applications in python 2 as late as 2014 because the libraries we needed weren't ported yet. We never went python 3 later, but went to go instead, so we completely missed any python async thing. |
|
| ▲ | odyssey7 20 hours ago | parent | prev | next [-] |
| Python is ergonomically challenged, so we shouldn’t be surprised when features built on that foundation go unused. |
|
| ▲ | giancarlostoro a day ago | parent | prev | next [-] |
I think part of it is historical. WSGI had been around a while before async became relevant. The industry has now had ASGI for a while, but if your WSGI-deployed web application doesn't need to squeeze out all the juice it can with async, you might not be fazed by not using it, or bothered at all. Reminds me of how long it took some to go from Python 2 to Python 3. |
|
| ▲ | matthew16550 17 hours ago | parent | prev | next [-] |
Python's builtin async always confuses me. The Trio library felt easy to learn and just worked without much fuss. https://trio.readthedocs.io/ |
|
| ▲ | breatheoften 14 hours ago | parent | prev | next [-] |
The problem with Python's async is asyncio... Structured concurrency libraries like anyio or trio are actually pretty nice -- "stacks" and stack traces are good things. Python's multi-exception concept is weird -- but also, I think, probably good-ish. It is still a pita to orchestrate around the GIL and how terrible Python multiprocessing's side effects are wherever CPU-bound workloads actually exist... |
|
| ▲ | ketchupdebugger a day ago | parent | prev | next [-] |
The two-API issue basically means async is not backwards compatible. You can't just squeeze some async into an existing code base; you'd need new functions, libraries, etc. You'd basically need to rewrite the entire codebase in async to see an ounce of perf improvement. |
|
| ▲ | lormayna 21 hours ago | parent | prev | next [-] |
I have never found the ergonomics of async/await very good, and it's hard to debug. I really do like the Go approach: using goroutines, channels and waitgroups is powerful and easy. |
|
| ▲ | the__alchemist a day ago | parent | prev | next [-] |
The barrier it places in Rust's library ecosystem is unpleasant; I'm glad it hasn't taken off in Python. I have written too many Rust libs of my own because the existing ones forced your code to be async. |
| |
| ▲ | bigstrat2003 15 hours ago | parent [-] | | The bifurcation of the Rust ecosystem due to async makes me absolutely loathe the feature, and wish it had never been added to the language. It's so awful. |
|
|
| ▲ | tripletpeaks a day ago | parent | prev | next [-] |
| It’s probably related to the fact that when they added “await” to JavaScript, it seemed to become the most popular keyword in the language overnight, just comical amounts of it in the average new JavaScript file in the wild. |
| |
| ▲ | lelanthran 20 hours ago | parent [-] | | Quite a lot of my js functions don't await; instead I simply return the promise and let the caller `await` or more often attach a `then` as they see fit. The default linter in Vs Code keeps marking those functions with warnings though. Says I should mark them as async |
|
|
| ▲ | tonymet 18 hours ago | parent | prev | next [-] |
async only helps with IO-wait concurrency, not CPU-bound concurrency. async is popular in JS because the browser is often waiting on many requests. Command-line tools are commonly computing something; even grep has to process the pattern matching, so concurrent IO doesn't help a single-threaded pattern match. Sure, there are applications where async would help a CLI app, but there are fewer than in JS. Plus JS devs love rewriting code every 3 months. |
|
| ▲ | omnicognate a day ago | parent | prev | next [-] |
| For a counter-opinion that isn't getting stated much here, I think: * Asyncio is pretty good, and is usually the best choice for non-blocking I/O in python these days. * Asyncio doesn't add multi-core scaling to python. It's not a replacement for threads and doesn't lift the GIL-imposed scaling limitations. If these things are what you're after from asyncio you'll be disappointed, but they're not what it's trying to add and not adding them doesn't make it a failure. * "Coloured functions" is a nonsense argument and that article made the whole world slightly more dumb. * The GIL is part of the reason for python's success. I hope nogil either somehow manages to succeed without compromising the benefits the GIL has brought (I'll be amazed if that happens) or fails entirely. Languages are tools and every tool in your toolbox doesn't have to eventually turn into a drill. If your use case requires in-process parallelisation of interpreted CPU-bound workloads across multiple cores, python is just the wrong thing to use. * It is indeed extremely annoying that we don't have async file access yet. I hope we get it soon. |
|
| ▲ | nromiun a day ago | parent | prev | next [-] |
It was supposed to bring massive concurrency to Python. But as with any async implementation in any language, it is too easy to deadlock the entire system. Did you forget to sprinkle enough `await`? Your code is blocked somewhere; good luck hunting for it. In contrast, preemptive green threads are easy: be it IO or CPU load, all threads will get their slice of CPU time. Nothing is blocked, so you can debug your logic errors instead of deadlocks everywhere. Async works so well in JS because the entire language is designed for it, instead of async being bolted on. You can't even run a plain `sleep` to block; you need setTimeout. |
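The closest thing to help with the hunting that I know of is asyncio's debug mode, which logs callbacks that hog the loop (a sketch):

  import asyncio
  import time

  async def handler():
      time.sleep(0.5)   # a forgotten blocking call instead of await asyncio.sleep(0.5)

  async def main():
      await asyncio.gather(*(handler() for _ in range(3)))

  # debug mode warns about coroutines/callbacks that block the loop longer than ~100 ms
  asyncio.run(main(), debug=True)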
| |
| ▲ | DanielHB 21 hours ago | parent | next [-] | | It is even funnier because JS only got proper async after, what? 25 years or so of existence. The main reason JS went all in with async is because it only ever had a single event loop and that naturally fits with the async model. I still remember the days when all the libs started adopting async and how so many of them (to this day) support both passing callbacks or returning promises. Async just so naturally fixed the callback hell of 2010s JS that it just became standard even though it is not even heavily used in the browser APIs. | |
| ▲ | KingOfCoders 13 hours ago | parent | prev [-] | | What I find funny is Java started with green threads, then moved away to system threads, then back again. | | |
| ▲ | nromiun 12 hours ago | parent [-] | | Maybe massive concurrency was not that big of a feature back then. But these days everyone wants to support a million connections at a time. Green threads and async tasks can do that without breaking a sweat, unlike OS threads. Also, Java virtual threads are still cooperative. Maybe they will move to preemption in time like Go did. Some time ago I tried to run just 10k OS threads on a small PC and it just crashed. So clearly OS threads have not improved much. |
|
|
|
| ▲ | wodenokoto 14 hours ago | parent | prev | next [-] |
> If you call function get_thing_sync() versus await get_thing_async(), they take the same amount of time. No. If you call both functions, one will try to fetch a non-responding URL and the other will immediately raise an exception. |
|
| ▲ | Animats 13 hours ago | parent | prev | next [-] |
| Python has had threads for 20 years. Why weren't they more popular? |
|
| ▲ | fulafel a day ago | parent | prev | next [-] |
| I think this is for the best. We don't want to end up like Rust. The complexity tradeoff suits Python's sweet spot even less (much less). |
|
| ▲ | est 14 hours ago | parent | prev | next [-] |
| > Problem 3: Maintaining two APIs is hard Well I had a fix https://news.ycombinator.com/item?id=43982570 |
|
| ▲ | Meneth a day ago | parent | prev | next [-] |
| KeyboardInterrupt (Ctrl+C) has been a problem wherever I've used python async. It should just work out of the box. |
|
| ▲ | rcarmo a day ago | parent | prev | next [-] |
| I've had no real issues with async, although I primarily use libraries like aiohttp and aiosqlite and even write my own helpers (https://github.com/rcarmo/aioazstorage is a good example). The vast majority of the Python code I wrote in the last 5-6 years uses asyncio, and most of the complaints I see about it (hard to debug, getting stuck, etc.) were -- at least in my case -- because there were some other libraries doing unexpected things (like threading or hard sleep()). Coming from a networking background, the way I can deal with I/O has been massively simplified, and coroutines are quite useful. But as always in HN, I'm prepared for that to be an unpopular opinion. |
| |
| ▲ | JackSlateur a day ago | parent [-] | | I share your experience. asyncio is easier than threads or multiprocessing: fewer locking issues, and it's easier to run small chunks of code in parallel (easier to await something than to create a thread that runs some method) | |
|
|
| ▲ | throwawayffffas 20 hours ago | parent | prev | next [-] |
| The function coloring is a non starter for me. I would just rather write JS where everything is async by default. |
| |
| ▲ | IshKebab 20 hours ago | parent [-] | | Everything is not async by default in JS. | | |
| ▲ | fzzzy 18 hours ago | parent | next [-] | | I think what they mean is that there are no blocking functions in the standard library except alert, prompt, and confirm. ( are there any others?) | | |
| ▲ | steve_adams_86 11 hours ago | parent [-] | | Yeah, you can call async functions without specifying it as such and the script will just carry on regardless of how you're handling it. Totally weird, but also pretty cool. When I first started some 20 years ago that was a major foot gun for me, coming from PHP where functions always returned before the next one was called. |
| |
| ▲ | throwawayffffas 19 hours ago | parent | prev [-] | | I mean everything is running on the runloop, async/await, promises, and callbacks are different flavors of syntactic sugar for the same underlying thing. In JS you can do: async function foo(){...}
function bar(){foo().then(...);}
In Python, though, async and sync code run in fundamentally different ways, as far as I understand it. | |
| ▲ | IshKebab 19 hours ago | parent [-] | | I'm not too familiar with Python async. The only time I used it was to get stderr and stdout out of a subprocess.run() separately. I think anyone using it for performance reasons is insane and should just switch to a more performant language. Anyway I think the main difference is that in Python you control the event loop whereas in JS there's one fixed event loop and you have no choice about it. |
|
|
|
|
| ▲ | ayaros 20 hours ago | parent | prev | next [-] |
| I love JS's async. I don't know how anyone ever did anything useful in the language before it was introduced. I think something between a third and half of the functions and members in LisaGUI are probably async functions at this point. |
| |
| ▲ | steve_adams_86 11 hours ago | parent [-] | | We used callbacks and generators. It was a bit messy at times, but really, it wasn't all that different. I still use generators quite often. |
|
|
| ▲ | Areading314 a day ago | parent | prev | next [-] |
| I think explicit management of the event loop and the associated potential for grievous bugs has a lot to do with it. |
|
| ▲ | liquidpele 17 hours ago | parent | prev | next [-] |
| Because it’s terrible. People don’t avoid non-terrible things. |
|
| ▲ | mting 12 hours ago | parent | prev | next [-] |
| I would prefer to implement it using a synchronous approach and then switch to asynchronous at deployment. |
|
| ▲ | 14 hours ago | parent | prev | next [-] |
| [deleted] |
|
| ▲ | bmandale a day ago | parent | prev | next [-] |
| I don't like the idea of launching a whole thread for every concurrent task I want to do. But holy hell is it so much easier than figuring out how tf async works. If I wanted something super performant I wouldn't be using python in the first place. |
|
| ▲ | dekhn a day ago | parent | prev | next [-] |
It added intrusive, codebase-wide functionality that more or less could have been done with other (thread-based) approaches. The AWS CLI was broken for over a year; we had to do a ton of work to deal with the various packaging issues. Don't break userspace. |
| |
| ▲ | AtlasBarfed 20 hours ago | parent [-] | | Why was awscli written in python? Bad decision to begin with. Hey! We have a product that we clearly want to release worldwide. Let's build it on something that doesn't have Unicode. Or any real threading. And is slow as hell. You picked a platform that was going to have to break user space. At least it wasn't JavaScript | | |
|
|
| ▲ | game_the0ry 19 hours ago | parent | prev | next [-] |
I will take a couple of stabs: - Async is legitimately hard to get if you are just starting to learn it, which is probably why it isn't more popular in the Python community. - If you need async, that implies you need high I/O performance. At that point, you probably should have picked a more performant language + runtime (Java, Node), bc use case should dictate tooling. - It's not enough for the language + web framework to be async -- the DB drivers need to be async too (the author mentions SQLAlchemy got async support in 2023 and the Django ORM is a WIP). I like Python, but not bc of its async or multi-threading. I like it bc when I use it, I know I do not have to worry about those things and the new set of problems I'd have to handle if I did. For I/O and multi-threaded perf, give me Java and Node (maybe Erlang/Elixir if I am feeling extra spicy). For fast and easy scripting, with a massive open source community and high-quality libraries (including the vast majority of web app slop), give me Python. |
| |
| ▲ | hoppp 19 hours ago | parent [-] | | To be frank, someone who picks Python is not doing it because they want the best performance. It's either because it's the only language they know, or they just don't really care about performance and want to finish the project fast. | |
| ▲ | game_the0ry 19 hours ago | parent [-] | | Agreed, bc I have been that "someone." And there is nothing wrong with that. In fact, this should be the norm. |
|
|
|
| ▲ | dec0dedab0de a day ago | parent | prev | next [-] |
| It's had async way longer than 10 years. multi-threading/processing, celery, twisted, others I can't remember. Asyncio means learning different syntax that buys me nothing over the existing tools. Why would I bother? |
|
| ▲ | baq a day ago | parent | prev | next [-] |
| async, parallelism, concurrency, why not all three? JS, the canonical async (at least today) language, has had neither parallelism nor concurrency primitives for a good decade or so after its inception. I personally blame low async adoption in Python on 1) general reduction in its popularity vs Typescript+node, which is driven by the desire to have a single stack on the frontend and backend, not by bad or good async implementations in Python (see also: Rails, once the poster child of the Web, now nearly forgotten) 2) lack of good async stdlib. parallelism and concurrency are distant thirds. |
| |
| ▲ | dragonwriter a day ago | parent [-] | | > async, parallelism, concurrency, why not all three? async is a concurrency mechanism. | | |
| ▲ | JackSlateur a day ago | parent [-] | | async enables a concurrency potential, nothing more. That is, if you use external stuff and can delegate work to it, then async is concurrent (async IO, for instance). But if you do not, then async is regular code with extra steps. | |
| ▲ | DanielHB 21 hours ago | parent [-] | | I do not understand what you mean; parallelism is running multiple concurrent execution blocks on multiple physical CPUs at the same time. My understanding is that JS can't do that (besides service workers, which don't share memory), but it still has multiple concurrent code blocks being executed at the same time, just interleaved in linear fashion. It will just never use multiple CPU cores at the same time (unless calling some non-JS, non-shared-memory code). |
|
|
|
|
| ▲ | didip 19 hours ago | parent | prev | next [-] |
| It would have been a lot more popular if it has a shim that lets it pretend to be a regular thread. Folks don't like to perform a lot of rewrites. |
|
| ▲ | adfm a day ago | parent | prev | next [-] |
| Twisted? |
| |
| ▲ | lstodd a day ago | parent [-] | | And Tornado. Please don't remind me of those horrors. |
|
|
| ▲ | OhMeadhbh 21 hours ago | parent | prev | next [-] |
| I'm not the biggest Python fan, but when I was forced to use it using async disabled a bunch of things... like the debugger. Not a fan. |
|
| ▲ | taude a day ago | parent | prev | next [-] |
| Use an appropriate language and runtime for the right tool/workload. There's better languages and runtimes to use that have better native concurrency built in. |
|
| ▲ | mting 12 hours ago | parent | prev | next [-] |
I would prefer to implement it using a synchronous approach and then switch to asynchronous at deployment, via Kafka etc. This way I could simply focus on business logic. |
|
| ▲ | fullstop a day ago | parent | prev | next [-] |
| Exceptions are difficult to deal with. Also, while they've had async for 10 years it has changed quite a bit from the initial incarnation. |
| |
| ▲ | blibble 20 hours ago | parent [-] | | there are some Exception situations that are almost completely impossible to deal with like guaranteeing a close() inside a finally asyncio is a terrible, terrible library |
|
|
| ▲ | jpgvm 20 hours ago | parent | prev | next [-] |
Because they chose the wrong API (well, more correctly, created a new, worse one). In order to appease the various flavours they mixed and matched stuff from Tornado, gevent, etc. They should have stuck with the most seamless of those (gevent) and, instead of having it monkey-patch the runtime, gone the Java VirtualThread route and natively yielded in all the I/O APIs. This would have given a Go-esque ease of use and likely would have been immensely more popular. |
|
| ▲ | JodieBenitez a day ago | parent | prev | next [-] |
| Why isn't it more popular ? Well, replacing libs and functions with async versions is not fun. |
|
| ▲ | silverwind a day ago | parent | prev | next [-] |
| I think Python needs a good `fetch`-like async http client in the stdlib. |
| |
| ▲ | rcarmo a day ago | parent [-] | | I personally recommend aiohttp. Setting up a ClientSession and letting it do its thing is quite nice. | | |
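Something like this is usually all it takes (a sketch, assuming aiohttp is installed):

  import asyncio
  import aiohttp

  async def main():
      async with aiohttp.ClientSession() as session:
          async with session.get("https://example.com") as resp:
              print(resp.status)
              print((await resp.text())[:80])

  asyncio.run(main())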
| ▲ | operator-name 21 hours ago | parent [-] | | httpx also supports sync and async, but I remember seeing an issue in their repo about worse performance than aiohttp. |
|
|
|
| ▲ | whalesalad a day ago | parent | prev | next [-] |
| I adopted gevent/greenlets pretty early on and it has always felt better than asyncio, monkey-patching aside. |
|
| ▲ | northisup a day ago | parent | prev | next [-] |
| because two colors of functions suuuuuuuuuuks to deal with |
| |
| ▲ | Analemma_ a day ago | parent [-] | | Doesn't really explain why async/await are hugely popular in C#, JavaScript, etc. but didn't take off in Python. | | |
| ▲ | topspin 20 hours ago | parent [-] | | I'd explain your cases this way: C# has a dictator with a budget: Microsoft integrated async into C# in a formal way, with 5.0, including standard libs, debugging, docs, samples, clear guidance going forward, etc. What holes there were were dealt with in an orderly and timely manner. JavaScript actually had a pretty messy start with async, with divergent conventions and techniques. Ultimately this got smoothed out with language additions, but it wasn't all that wonderful in the early days. Also, JavaScript started from a simpler place (single-threaded event loop) that never had "fork" and threads and all that comes with those, so there was less legacy to accommodate and fewer problems to overcome. Python had a vast base of existing non-async software chock full of blocking code, plus an incomplete and haphazard concurrency evolution. There are several legacy concurrency solutions in Python, most still in use today. Python async is still competing and conflicting with it all. Not unlike the Python 2->3 transition. |
|
|
|
| ▲ | bilsbie 17 hours ago | parent | prev | next [-] |
| How does this jive with the GIL? |
|
| ▲ | th0ma5 a day ago | parent | prev | next [-] |
| I adopted just the Clojure style of thinking in terms of immutable copies and it seems easier to move between synch and asynch conceptually as needed, although Clojure has some asynch parallelism automatically due to this paradigm as well. |
|
| ▲ | nurettin 12 hours ago | parent | prev | next [-] |
Python async feels great ergonomically at first. Async tasks, timers, things happening concurrently. But then you lose stack traces. Code that starts sync, then appends jobs to an async event queue can't be traced, because the exception happened in the loop and doesn't bubble up to sync (or it didn't back in 3.8). Between that and the synchronization problems (it is an event loop, why do we even need to synchronize? async mutex??), I really tried to make it work, but eventually gave up and went back to deque. Now life is great. |
|
| ▲ | lysace a day ago | parent | prev | next [-] |
Because it takes a lot of reading/studying/experimentation to get things right. I'm personally halfway through that journey (having spent like 4h reading docs/learning, on top of the development). I suspect it could have been designed in such a way that it's less trivially easy to mess up. |
|
| ▲ | 6510 20 hours ago | parent | prev | next [-] |
| I've always felt like there is some hidden clue in music trackers. They program the music as spaghetti code with unlimited channel running neatly along the same line numbers. |
|
| ▲ | andrewstuart a day ago | parent | prev | next [-] |
| Not popular? I use async all the time. The evidence this post provides is that flask and Django aren’t all in on async. That’s meaningless. |
| |
| ▲ | kamikaz1k 21 hours ago | parent [-] | | if you're going to lobby that criticism, you should atleast offer an alternative definition of popular...unless you're saying your usage of a tool defines its popularity |
|
|
| ▲ | lstodd a day ago | parent | prev | next [-] |
| idk stackless dates back to 2005 at least, most likely earlier. greenlet which is sort of minimal stackless .. before 2008 pycoev which is on one hand greenlets without memmove()s, on the other hand sort of io-scheduled m:n threading I wrote myself in 2009. so, at least idk, 20 years? It was first needed. Then 10 years passed, people got around to pushing it through the process aaand by the time it was done it was already not needed. so it all stalled. Same with Rust. Nowadays server-side async is handled very differently. And client-side is dominated by that abomination called JS. |
| |
| ▲ | fulafel a day ago | parent | next [-] | | Also back then multicore wasn't as prevalent, it made sense to multiplex a zillion things onto one CPU process. Whereas now servers have hundreds of cores / SMT vCPUs [1] and running a lot of processes makes much more sense. [1] https://www.tomshardware.com/pc-components/cpus/amd-announce... | |
| ▲ | toolslive a day ago | parent | prev [-] | | I never understood why stackless wasn't more popular. It was rather nice, clean and performant (well, it's still python but it provide(d) proper concurrency) | | |
| ▲ | lstodd a day ago | parent [-] | | It was memmove() on each task switch. So you could forget about d-cache.
And that killed performance on anything but benchmarks. | | |
| ▲ | ack_complete 21 hours ago | parent [-] | | Also caused subtle bugs. I once had to debug a crash in C++ code that turned out to be due to Stackless Python corrupting stack state on Windows. OutputDebugString() would intermittently crash because Stackless had temporarily copied out part of the stack and corrupted the thread's structured exception handling chain. This wasn't obvious because this occurred in a very deep call stack with Stackless much higher up, and it only made sense if you knew that OutputDebugString() is implemented internally by throwing a continuable exception. The more significant problem was that Stackless was a separate distribution. Every time CPython updated, there would be a delay until Stackless updated, and tooling like Python IDEs varied in whether they supported Stackless. | | |
| ▲ | lstodd 21 hours ago | parent [-] | | We never ran stackless/greenlet under windows. And pycoev was setcontext(3)-based, so no windows either. But I can imagine what that code did there... |
|
|
|
|
|
| ▲ | OutOfHere a day ago | parent | prev | next [-] |
| With the newest Python I can use the no-GIL (free-threaded) build, so my threads automatically use multiple cores. With asyncio, even under no-GIL, I am stuck on a single core by default unless I add a base layer that provides parallelism, which doesn't make sense in a multicore world. Rust's async, in contrast, has no such limitation. The traditional counterargument is that asyncio is meant for I/O-bound work, not CPU-bound work, but that constraint isn't realistic, because CPU usage always creeps in. In summary, I can use thread/process/interpreter pools and concurrent futures, which I need anyway, without introducing yet another concurrency paradigm (asyncio). |
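A rough sketch of the contrast (assuming a free-threaded CPython 3.13+ build; `crunch` and the sizes are made up for illustration):

    from concurrent.futures import ThreadPoolExecutor

    def crunch(n):
        # CPU-bound work: on a free-threaded (no-GIL) build these calls
        # can run on separate cores in parallel.
        return sum(i * i for i in range(n))

    with ThreadPoolExecutor(max_workers=4) as pool:
        results = list(pool.map(crunch, [2_000_000] * 4))

    # An asyncio event loop, by contrast, drives all coroutines on the single
    # thread that runs the loop, so the same work stays on one core unless it
    # is handed off explicitly, e.g. via loop.run_in_executor(None, crunch, n).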
| |
|
| ▲ | a day ago | parent | prev | next [-] |
| [deleted] |
|
| ▲ | cyberax a day ago | parent | prev | next [-] |
| To add to this, async in Python is also plain buggy. For example, uvicorn is the most popular FastAPI server, and it leaks contexts across requests in the default installation. The bug has been open for 2 years, with zero fucks given. The workaround is "just use libuv": https://github.com/encode/uvicorn/issues/2167 I've seen other such cases, and I just gave up on trying to use async. |
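If "leaking contexts" refers to contextvars state set during one request being visible in another, here is a hypothetical sketch of the isolation async handlers normally rely on (not a reproduction of the linked bug):

    import contextvars

    request_id = contextvars.ContextVar("request_id", default=None)

    async def handler(new_id):
        # Each request is supposed to start from a clean context...
        assert request_id.get() is None, "state leaked from a previous request"
        request_id.set(new_id)
        # ...and values set here should not be observed by other requests.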
|
| ▲ | cyberax 19 hours ago | parent | prev | next [-] |
| Another thing: the lack of a JIT. NodeJS is also single-threaded, but it's _fast_ because it has one of the best JITs in the industry, so you can feasibly serve hundreds if not thousands of requests per second on one core. If you try to do that with Python, the performance is not acceptable. So why even bother? |
|
| ▲ | TZubiri 21 hours ago | parent | prev | next [-] |
| Because I don't need it. When I need to do concurrent stuff I either fork into multiple processes or use the threading library: nothing to pip install, a couple of lines of code, no need to write specialized code with await keywords and the like. This line made me question myself though: "Then Flask is and probably always will be synchronous (Quart is an async alternative with similar APIs)." I use Flask, and I literally spent the last hour questioning whether I was an idiot and needed to DM my previous clients asking to fix my code. I was wondering how my apps passed stress tests of thousands of concurrent users; maybe I did the tests wrong? So I asked ChatGPT: "Is Flask asynchronous?"
ChatGPT said: "Flask itself is not asynchronous. It is a WSGI-based framework, which means it is synchronous by design — it handles one request at a time per worker. Each request is processed sequentially, and concurrency is typically achieved by running multiple worker processes." Oh shit, I didn't use gunicorn, I just ran the Python script raw. I'm an idiot. Let's write a test server that sleeps for one second before responding to each request:

    import flask
    import requests
    import time

    app = flask.Flask("test")

    @app.route("/")
    def hi():
        time.sleep(1)
        # requests.get("https://google.com")
        return "Hello, World!"

    app.run("0.0.0.0", 8088)
" This should block for like 25ms, if 50 concurrent users ask for this resource, there will be an average 500ms of extra latency! And a Test client that does 50 calls at once, will it take 50 seconds?: "import threading
import requests URL = "http://127.0.0.1:8088/" def make_request(i):
try:
print("req")
response = requests.get(URL)
print("res")
except:
print("fail") threads = [] for i in range(5):
t = threading.Thread(target=make_request, args=(i,))
threads.append(t)
t.start() for t in threads:
t.join() print("All requests completed") " Then we run with time binary in linux: >time python3 client.py All requests completed real 0m1.216s
user 0m0.203s
    sys  0m0.039s

Ok, turn off the alarms, Flask is fine: the five one-second requests finish in just over a second, because Flask's development server handles requests in threads by default (threaded=True since Flask 1.0). I'm not sure what's going on with async, but my only experience with it was a junior dev who came from writing horrible Node apps with React and Nest (his frontend connected to a Supabase DB directly with credentials exposed, even though there was a Node backend). He wanted to pivot to Python because that's what I used and I had good results, so he installed Quart instead of Flask and wrote Node-like code in Python, and it was of course a mess. I'm not saying it's always going to be a mess, but you are better off learning the native way of a language instead of shoehorning in other abstractions and claiming that the way it's done in Python is inefficient. It's one of the most popular languages in the world and these are massively used libraries, so it's unlikely that "something is terribly wrong"; "Python is slow" is more of a meme. Async is an alternative, supposedly cleaner abstraction for concurrency. What ends up happening is that people use it without understanding multithreading or operating systems in general; they just think they need it to get parallelism. There are maybe 15 ways to do parallelism: one native, vanilla solution (the threading library), a few more in the standard library (multiprocessing, concurrent.futures), and a pile you need to pip install. Newbies ask ChatGPT or find a Stack Overflow thread (or come from Node), and they have roughly a 1-in-15 chance of landing on the plain solution a newbie should be using, because they can't distinguish the wheat from the chaff. OP might have suffered from this, even believed that the async way to do concurrency was the only way, and is judging Python's concurrency by this one feature. Does OP believe Python is only now getting multithreading support? That we are all cavemen running toy applications that serve 2 or 3 users? Word to the wise: focus on the features that have existed since early versions like Python 2 BEFORE the features being introduced in later versions like 3.14. More generally, first learn how a UNIX machine from the 90s did its thing before you learn the Kubernetes/Spark thingamajig |
|
| ▲ | a day ago | parent | prev | next [-] |
| [deleted] |
|
| ▲ | ilaksh a day ago | parent | prev [-] |
| There is a fundamental misunderstanding about popularity. People think popularity is directly related to merit or rationality, but technical things are largely popular for the same reason non-technical things are: trends. In other words, they are popular because other people perceive them to be popular. Humans are herd animals. Async is harder and associated with Node.js/JavaScript, which probably makes it uncool for a certain influential Python subculture. But FastAPI has basically taken over, and I think people should recognize that this means async IS popular in Python at this point. |