rsyring 2 days ago

Not too long ago, I read a comment on HN suggesting that, with Python's support for free-threading, async in Python will no longer be needed and will lose out to free-threading because of its reliance on "colored" functions. Which seems to align with where this author ends up:

> Because parallelism in Python using threads has always been so limited, the APIs in the standard library are quite rudimentary. I think there is an opportunity to have a task-parallelism API in the standard library once free-threading is stabilized.

> I think in 3.14 the sub-interpreter executor and free-threading features make more parallel and concurrency use cases practical and useful. For those, we don’t need async APIs and it alleviates much of the issues I highlighted in this post.
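
For concreteness, here's the sort of stdlib-only task parallelism I read that as meaning. This is just a sketch with a made-up `crunch` function; the parallel speedup only materializes on a free-threaded (no-GIL) build, since on a normal build the threads serialize on the GIL:

    from concurrent.futures import ThreadPoolExecutor

    def crunch(n: int) -> int:
        # Stand-in for CPU-heavy work.
        return sum(i * i for i in range(n))

    # On a free-threaded build these four workers can actually use four cores;
    # no async, no event loop, no function coloring.
    with ThreadPoolExecutor(max_workers=4) as pool:
        print(list(pool.map(crunch, [1_000_000] * 4)))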

Armin recently put up a post that goes into those issues in more depth: https://lucumr.pocoo.org/2025/7/26/virtual-threads/

Which led me to a pre-PEP discussion regarding the possibility of Virtual Threads in Python, which was probably way more than I needed to know but which I found interesting: https://discuss.python.org/t/add-virtual-threads-to-python/9...

ashf023 2 days ago | parent | next [-]

Interesting that very few people in that thread seem to understand Go's model, especially the author of this proposal. If you don't allow preemption, you still have a sort of coloring, because most non-async functions aren't safe to call in a virtual thread - they may block the executor. If you call C code, you need to swap out stacks and deal with blocking by potentially spawning more OS threads - that's what CGo does. Maybe preemption is harder in Python, but that's not clearly expressed - it's just rejected as obviously unwanted.
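
To make the "block the executor" point concrete, here's a small asyncio sketch (asyncio, not the proposal's API): one ordinary blocking call freezes every other task on the loop, which is why plain functions aren't automatically safe to call from cooperative "virtual thread"-style code.

    import asyncio
    import time

    async def ticker():
        for i in range(3):
            print("tick", i)
            await asyncio.sleep(0.5)

    async def blocker():
        # An ordinary, non-async call: it never yields to the loop,
        # so ticker() is frozen for the whole two seconds.
        time.sleep(2)

    async def main():
        await asyncio.gather(ticker(), blocker())

    asyncio.run(main())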

Ultimately Python already has function coloring, and libraries are forced into that. This proposal seems poorly thought out, and also too little too late.

rsyring 2 days ago | parent | next [-]

I can't speak to the more technical aspects you bring up b/c I'm not that well versed in the underlying implementations and tradeoffs.

> and also too little too late.

I think it very likely that Python will still be around and popular 10 years from now. Probably 20 years from now. And maybe 30 years from now. I think that's plenty of time for a new and good idea that addresses significant pain points to take root and become a predominant paradigm in the ecosystem.

So I don't agree that it's too little too late. But whether or not a Virtual Threads implementation can/will be developed and be good enough to gain wide adoption, I just can't speak to. If it's possible to create a better devx than async and get multi-core performance and usage, I'm all for the effort.

ashf023 10 hours ago | parent [-]

Fair enough, I was a little too negative. It is good they're thinking about improvements.

Dagonfly 2 days ago | parent | prev [-]

I'm also surprised how often the preemptive vs. cooperative angle gets ignored in favor of the stackful vs stackless debate.

If you choose a non-preemptive system, you naturally need yield points for cooperation. Those can either be explicit (await) or implicit (e.g. every function call). But you can get away with a minimal runtime and a stackless design.

Meanwhile, in a preemptive system you need a runtime that can interrupt other units of work. And it pushes you towards a stackful design.

All those decisions are downstream of the preemptive vs. cooperative choice.

In either case, you always need a way to deal with CPU-heavy work: either through preemption, or by isolating the CPU-heavy work.
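
As a sketch of the "isolate the CPU-heavy work" option (using asyncio as the cooperative runtime, nothing proposal-specific): the heavy function has no yield points of its own, so you push it onto an executor and the loop keeps scheduling everything else.

    import asyncio
    from concurrent.futures import ProcessPoolExecutor

    def heavy(n: int) -> int:
        # CPU-bound work with no cooperative yield points.
        return sum(i * i for i in range(n))

    async def main():
        loop = asyncio.get_running_loop()
        with ProcessPoolExecutor() as pool:
            # The event loop stays responsive while the work runs elsewhere.
            print(await loop.run_in_executor(pool, heavy, 1_000_000))

    if __name__ == "__main__":  # needed for process pools on spawn-based platforms
        asyncio.run(main())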

int_19h 2 days ago | parent | prev | next [-]

C# has had free threading all along, yet still saw the need for async as a separate facility.

The same goes for C++, which now has co_await.

nine_k 2 days ago | parent [-]

Threads are more expensive and slower to create. Submitting a task to a thread pool and waiting for a result, or a bunch of results, to show up, is much more ergonomic. So `async` automatically submits a task, and `await` waits until it completes. Ideally `await` just discovers that a task (promise) has already completed at that point, while the main thread was doing other things.
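
In Python terms, a rough sketch of that submit-then-wait shape with a plain thread pool (`slow_io` is just a stand-in for blocking work):

    import time
    from concurrent.futures import ThreadPoolExecutor

    def slow_io(x: int) -> int:
        time.sleep(0.1)                     # stand-in for blocking I/O
        return x * 2

    with ThreadPoolExecutor() as pool:
        future = pool.submit(slow_io, 21)   # roughly what `async` does: start the task
        # ... the main thread is free to do other work here ...
        print(future.result())              # roughly what `await` does: collect the result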

Once you have this in place, you can notice that you can "submit the task to the same thread", and just switch between tasks at every `await` point; you get coroutines. This is how generators work: `yield` is the `await` point.
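
A toy version of that observation (just plain generators, no real event loop): each `yield` is the switch point where a tiny round-robin scheduler moves on to the next task.

    def task(name, steps):
        for i in range(steps):
            print(name, "step", i)
            yield               # the "await point": hand control back to the scheduler

    def run(tasks):
        # Minimal round-robin scheduler over plain generators.
        while tasks:
            gen = tasks.pop(0)
            try:
                next(gen)
                tasks.append(gen)
            except StopIteration:
                pass

    run([task("a", 2), task("b", 3)])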

If all the task is doing is waiting for I/O, and your runtime is smart enough to yield to another coroutine while the I/O is underway, you can do something useful in the meantime, or at least issue another I/O request without waiting for the first one to complete. This allows typical server code that does a lot of different I/O requests to run faster.
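
For example, with asyncio.sleep standing in for the network round trips, the three "requests" below overlap instead of running back to back, so the whole thing takes about one second rather than three:

    import asyncio

    async def request(i):
        await asyncio.sleep(1)   # stand-in for waiting on a network response
        return i

    async def main():
        # All three waits are in flight at once.
        print(await asyncio.gather(*(request(i) for i in range(3))))

    asyncio.run(main())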

Older things like `gevent` just automatically added yield / await points at certain I/O calls, with an event loop running implicitly.
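
Roughly how that looks with gevent, if memory serves: monkey-patching turns ordinary blocking calls (time.sleep, socket reads) into implicit yield points, so normal-looking code runs concurrently.

    from gevent import monkey
    monkey.patch_all()      # blocking stdlib calls now yield to gevent's event loop

    import time
    import gevent

    def worker(i):
        time.sleep(1)       # an implicit yield point after patching, not a real block
        return i

    # Both greenlets "sleep" at the same time: total is about 1s, not 2s.
    jobs = [gevent.spawn(worker, i) for i in range(2)]
    gevent.joinall(jobs)
    print([job.value for job in jobs])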

seunosewa 2 days ago | parent | prev [-]

async was the wrong solution to the right problem - improving general performance. Free threading is the prize in an increasingly multi-core CPU world.

guappa 2 days ago | parent [-]

Threads use a lot more memory than a single thread running async tasks, and if the load is I/O-bound, one thread is enough.

Speed might be similar but resource usage is not the same at all.
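
As a rough illustration rather than a benchmark: ten thousand concurrent waiters on one thread is routine for asyncio, while a thread-per-waiter version needs ten thousand OS threads, each with its own stack, which is where the memory difference comes from.

    import asyncio

    async def idle_client():
        await asyncio.sleep(5)   # stand-in for waiting on a socket

    async def main():
        # Tens of thousands of concurrent waiters, all on a single OS thread.
        await asyncio.gather(*(idle_client() for _ in range(10_000)))

    asyncio.run(main())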