| ▲ | Const-me 7 days ago |
| The article seems specific to JavaScript; C# is different. > you cannot await in a sync function In C# it’s easy to block the current thread waiting for an async task to complete, see the Task.Wait method. > since it will never resolve, you can also never await it In C#, awaiting for things which never complete is not that bad, the standard library has Task.WhenAny() method for that. > let's talk about C#. Here the origin story is once again entirely different Originally, NT kernel was designed for SMP from the ground up, supports asynchronous operations on handles like files and sockets, and since NT 3.5 the kernel includes support for thread pool to dispatch IO completions: https://en.wikipedia.org/wiki/Input/output_completion_port Overlapped I/O and especially IOCP are hard to use directly. When Microsoft designed the initial version of .NET, they implemented a thread pool and IOCP inside the runtime, and exposed higher-level APIs to use them. Stuff like Stream.BeginRead / Stream.EndRead has been available since .NET 1.1 in 2003; the design pattern is called the Asynchronous Programming Model (APM). The async/await language feature introduced in .NET 4.5 in 2012 is a thin layer of sugar on top of these begin/end asynchronous APIs, which were always there. BTW, if you have a pair of begin/end methods, converting them into async/await takes one line of code; see TaskFactory.FromAsync. |
|
| ▲ | User23 3 days ago | parent | next [-] |
| > Originally, NT kernel was designed for SMP from the ground up, supports asynchronous operations on handles like files and sockets, and since NT 3.5 the kernel includes support for thread pool to dispatch IO completions: https://en.wikipedia.org/wiki/Input/output_completion_port Say what you will about Microsoft in that era (and there's a lot to be said), the NT kernel team absolutely crushed it for their customers' use cases. IOCP were years ahead of anything else. I pretty much hated all of the userspace Win32 work I did (MIDL, COM, DCOM, UGGGGGGGGH), but the Kernel interfaces were wonderful to code against. To this day I have fond memories of Jeffrey Richter's book. |
| |
| ▲ | wbl 3 days ago | parent [-] | | It's not enough to have a nicish abstraction; how did it work in practice to eke out performance? I've heard Bryan Cantrill say there wasn't much there, and I would be curious to really know what the truth is and hear more explanation from both sides. |
|
|
| ▲ | the_mitsuhiko 6 days ago | parent | prev | next [-] |
| You're probably right that this is leaning in on JavaScript and Python more, but I did try to make a point that the origin story for this feature is quite a bit different between languages. C# is the originator of that feature, but the implications of that feature in C# are quite different than in, for instance, JavaScript or Python. But when people have a discussion about async/await, these nuances often get lost very quickly. > The async/await language feature introduced in .NET 4.5 in 2012 is a thin layer of sugar on top of these begin/end asynchronous APIs, which were always there. You are absolutely right. That said, it was a conscious decision to keep the callback model and provide "syntactic sugar" on top of it to make it work. That is not the only model that could have been chosen. |
| |
| ▲ | cwills 2 days ago | parent [-] | | Seems like this article conflates threads in C# with asynchronous operations a little. The way I see it, threads are for parallel & concurrent execution of CPU-bound workloads, across multiple CPU cores, and typically use the Task Parallel Library. Async/await won’t help here. Whereas async/await is for IO-bound workloads, freeing up the current thread until the IO operation finishes. As mentioned, syntactic sugar on top of older callback-based asynchronous APIs. | |
| ▲ | the_mitsuhiko 2 days ago | parent | next [-] | | I would make the argument that it does not matter what the intention is; in practice people await CPU-bound tasks all the time. In fact, here is what the official docs[1] say: > You could also have CPU-bound code, such as performing an expensive calculation, which is also a good scenario for writing async code. [1]: https://learn.microsoft.com/en-us/dotnet/csharp/asynchronous... | |
| ▲ | coldtea 2 days ago | parent [-] | | > You could also have CPU-bound code, such as performing an expensive calculation, which is also a good scenario for writing async code. That's a scenario for a different reason though (to allow sharing the CPU between chunks of the calculation, e.g. to not freeze the UI in JS). In that case you might want to async on CPU-bound code. But regarding maximizing utilization, you want async to take more advantage of a core's CPU when you have tasks waiting for IO, and threads to leverage more CPU cores when doing CPU-bound tasks. |
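To make that first point concrete, here is a minimal JS sketch (the function name and chunk size are made up for illustration) of awaiting between chunks of a CPU-bound calculation so other events can run in between, even though the total CPU work stays the same:
// Hypothetical CPU-bound sum, split into chunks. Awaiting between chunks
// yields to the event loop so timers, IO callbacks, or UI events can run.
async function chunkedSum(numbers, chunkSize = 10000) {
  let total = 0
  for (let i = 0; i < numbers.length; i += chunkSize) {
    const end = Math.min(i + chunkSize, numbers.length)
    for (let j = i; j < end; j++) total += numbers[j]
    // Yield between chunks instead of blocking until the whole sum is done.
    await new Promise(resolve => setTimeout(resolve, 0))
  }
  return total
}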
| |
| ▲ | zarzavat a day ago | parent | prev [-] | | The difference in philosophy is: who is responsible for scheduling work? Is it the language, or is it the developer? In JS it's the language, for example node sits on top of libuv which is responsible for managing the thread pool and doing async IO. The advantages of this system are that it's very convenient, and the developer gets a safer single-threaded view over the multiple threads in use. The disadvantage is that the developer lacks control and if you actually want to write multithreaded APIs and not just use them then you have to drop down into a lower level language so you can talk to libuv or the OS. In C# there is no lower level language to drop down to. |
|
|
|
| ▲ | zamadatix 7 days ago | parent | prev | next [-] |
| Task.Wait() is just using the normal "thread" (in the way the author defines it later) blocking logic to do that in said case, but I think the author is trying to talk about pure async/await approaches there, as an example of why you still want exactly that kind of non-async "thread" blocking to fall back on for differently colored functions. Task.WhenAny() is similar to Promise.any()/Promise.race(). I'm not sure that's where the author is focusing attention, though. Regardless, if your execution is able to move on and out of that scope, those other promises may still never finish or get cleaned up. |
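A tiny JS sketch of that last point (the delay values are arbitrary): the race settles as soon as one promise does, but the losing promise keeps running and is never cleaned up:
// The race resolves with 'fast', but the slow promise still runs to
// completion in the background and logs five seconds later.
const slow = new Promise(resolve =>
  setTimeout(() => { console.log('slow still ran'); resolve('slow') }, 5000)
)
const fast = Promise.resolve('fast')
console.log(await Promise.race([slow, fast])) // 'fast'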
|
| ▲ | throwitaway1123 6 days ago | parent | prev [-] |
| > In C#, awaiting for things which never complete is not that bad, the standard library has Task.WhenAny() method for that. It's not that bad in JS either. JS has both Promise.any and Promise.race that can trivially set a timeout to prevent a function from waiting infinitely for a non-resolving promise. And as someone pointed out in the Lobsters thread, runtimes that rely on multi-threading for concurrency are also often prone to deadlocks and infinite loops [1]. import { setTimeout } from 'node:timers/promises'
const neverResolves = new Promise(() => {})
await Promise.any([neverResolves, setTimeout(0)])
await Promise.race([neverResolves, setTimeout(0)])
console.trace()
[1] https://lobste.rs/s/hlz4kt/threads_beat_async_await#c_cf4wa1 |
| |
| ▲ | cyberax 3 days ago | parent [-] | | > Promise.race Ding! You now have a memory leak! Collect your $200 and advance two steps. Promise.race will waste memory until _all_ of its promises are resolved. So if a promise never gets resolved, it will stick around forever. It's braindead, but it's the spec: https://github.com/nodejs/node/issues/17469 | | |
| ▲ | throwitaway1123 2 days ago | parent [-] | | This doesn't even really appear to be a flaw in the Promise.race implementation [1], but rather a natural result of the fact that native promises don't have any notion of manual unsubscription. Every time you call the then method on a promise and pass in a callback, the JS engine appends the callback to the list of "reactions" [2]. This isn't too dissimilar to registering a ton of event listeners and never calling `removeEventListener`. Unfortunately, unlike events, promises don't have any manual unsubscription primitive (e.g. a hypothetical `removePromiseListener`), and instead rely on automatic unsubscription when the underlying promise resolves or rejects. You can of course polyfill this missing behavior if you're in the habit of consistently waiting on infinitely non-settling promises, but I would definitely like to see TC39 standardize this [3]. [1] https://issues.chromium.org/issues/42213031#comment5 [2] https://github.com/nodejs/node/issues/17469#issuecomment-349... [3] https://github.com/cefn/watchable/tree/main/packages/unpromi... | | |
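A minimal sketch of the idea behind such a polyfill (this is not the actual Unpromise API, and rejection handling is omitted for brevity): attach a single reaction to the long-lived promise and keep your own listener set, so each race can remove its callback when the other promise wins:
// One reaction is ever attached to the long-lived promise; racers register
// callbacks in a Set and remove them again, which the native API can't do.
const listeners = new WeakMap() // promise -> Set of callbacks
function getListeners(promise) {
  let set = listeners.get(promise)
  if (!set) {
    set = new Set()
    listeners.set(promise, set)
    promise.then(value => { for (const cb of set) cb(value); set.clear() })
  }
  return set
}
function leakFreeRace(longLived, shortLived) {
  return new Promise(resolve => {
    const set = getListeners(longLived)
    const onLong = value => resolve(value)
    set.add(onLong)
    shortLived.then(value => {
      set.delete(onLong) // the manual unsubscription native promises lack
      resolve(value)
    })
  })
}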
| ▲ | kaoD 2 days ago | parent [-] | | This isn't actually about removing the promise (completion) listener, but the fact that promises are not cancelable in JS. Promises in JS always run to completion, whether there's a listener or not registered for it. The event loop will always make any existing promise progress as long as it can. Note that "existing" here does not mean it has a listener, nor even whether you're holding a reference to it. You can create a promise, store its reference somewhere (not await/then-ing it), and it will still progress on its own. You can await/then it later and you might get its result instantly if it had already progressed on its own to completion. Or even not await/then it at all -- it will still progress to completion. You can even not store it anywhere -- it will still run to completion! Note that this means that promises will be held until completion even if userspace code does not have any reference to it. The event loop is the actual owner of the promise -- it just hands a reference to its completion handle to userspace. User code never "owns" a promise. This is in contrast to e.g. Rust promises, which do not run to completion unless someone is actively polling them. In Rust if you `select!` on a bunch of promises (similar to JS's `Promise.race`) as soon as any of them completes the rest stop being polled, are dropped (similar to a destructor) and thus cancelled. JS can't do this because (1) promises are not poll based and (2) it has no destructors so there would be no way for you to specify how cancellation-on-drop happens. Note that this is a design choice. A tradeoff. Cancellation introduces a bunch of problems with promise cancellation safety even under a GC'd language (think e.g. race conditions and inconsistent internal state/IO). You can kinda sorta simulate cancellation in JS by manually introducing some `isCancelled` variable but you still cannot act on it except if you manually check its value between yield (i.e. await) points. But this is just fake cancellation -- you're still running the promise to completion (you're just manually completing early). It's also cumbersome because it forces you to check the cancellation flag between each and every yield point, and you cannot even cancel the inner promises (so the inner promises will still run to completion until it reaches your code) unless you somehow also ensure all inner promises are cancelable and create some infra to cancel them when your outer promise is cancelled (and ensure all inner promises do this recursively until then inner-est promise). There are also cancellation tokens for some promise-enabled APIs (e.g. `AbortController` in `fetch`'s `signal`) but even those are just a special case of the above -- their promise will just reject early with an `AbortError` but will still run to (rejected) completion. This has some huge implications. E.g. if you do this in JS... Promise.race([
deletePost(),
timeout(3000),
]);
...`deletePost` can still (invisibly) succeed in 4000 msecs. You have to manually make sure to cancel `deletePost` if `timeout` completes first. This is somewhat easy to do if `deletePost` can be aborted (via e.g. `AbortController`) even if cumbersome... but more often than not you cannot really cancel inner promises unless they're explicitly abortable, so there's no way to do true userspace promise timeouts in JS. Wow, what a wall of text I just wrote. Hopefully this helps someone's mental model. | |
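For example, a minimal sketch of that manual cleanup, assuming a hypothetical `deletePost` that accepts an AbortSignal (many real APIs don't):
// If the timeout wins the race, tell deletePost to stop instead of letting
// it (invisibly) finish later. Aborting is harmless if it already completed.
const controller = new AbortController()
const timeout = ms => new Promise(resolve => setTimeout(resolve, ms))
await Promise.race([
  deletePost({ signal: controller.signal }),
  timeout(3000).then(() => controller.abort()),
])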
| ▲ | throwitaway1123 2 days ago | parent | next [-] | | > This isn't actually about removing the promise (completion) listener, but the fact that promises are not cancelable in JS. You've made an interesting point about promise cancellation but it's ultimately orthogonal to the Github issue I was responding to. The case in question was one in which a memory leak was triggered specifically by racing a long lived promise with another promise — not simply the existence of the promise — but specifically racing that promise against another promise with a shorter lifetime. You shouldn't have to cancel that long lived promise in order to resolve the memory leak. The user who created the issue was creating a promise that resolved whenever the SIGINT signal was received. Why should you have to cancel this promise early in order to tame the memory usage (and only while racing it against another promise)? As the Node contributor discovered the reason is because semantically `Promise.race` operates similarly to this [1]: function race<X, Y>(x: PromiseLike<X>, y: PromiseLike<Y>) {
return new Promise((resolve, reject) => {
x.then(resolve, reject)
y.then(resolve, reject)
})
}
Assuming `x` is our non-settling promise, he was able to resolve the memory leak by monkey patching `x` and replacing its then method with a no-op which ignores the resolve and reject listeners: `x.then = () => {};`. Now of course, ignoring the listeners is obviously not ideal, and if there was a native mechanism for removing the resolve and reject listeners `Promise.race` would've used it (perhaps using `y.finally()`) which would have solved the memory leak.[1] https://github.com/nodejs/node/issues/17469#issuecomment-349... | | |
| ▲ | kaoD 2 days ago | parent | next [-] | | > Why should you have to cancel this promise early in order to tame the memory usage (and only while racing it against another promise)? In the particular case you linked to, the issue is (partially) solved because the promise is short-lived, so the `then` makes it live longer, exacerbating the issue. By not then-ing, the GC kicks in earlier since nothing else holds a reference to its stack frame. But the underlying issue is lack of cancellation, so if you race a long-lived, resource-intensive promise against a short-lived promise, the issue would still be there regardless of listener registration (which admittedly makes the problem worse). Note that this is still relevant because the problem can kick in in the "middle" of the async function (if any of the inner promises is long-lived): the "middle of the promise" case is just a special case of the "multiple thens" problem, since each await point is isomorphic to calling `then` with the rest of the function. Without proper cancellation you only solve the particular case where your issue is the last body of the `then` chain. (Apologies for the unclear explanation, I'm on mobile and in the vet's waiting room, I'm trying my best.) | |
| ▲ | throwitaway1123 2 days ago | parent [-] | | I don't want to get mired in a theoretical discussion about what promise cancellation would hypothetically look like, and would rather instead look at some concrete code. If you reproduce the memory leak from that original Node Github issue while setting the --max-old-space-size to an extremely low number (to set a hard limit on memory usage) you can empirically observe that the Node process crashes almost instantly with a heap out of memory error: #! /usr/bin/env node --max-old-space-size=5
const interruptPromise = new Promise(resolve =>
process.once('SIGINT', () => resolve('interrupted'))
)
async function run() {
while (true) {
const taskPromise = new Promise(resolve => setImmediate(resolve))
const result = await Promise.race([taskPromise, interruptPromise])
if (result === 'interrupted') break
}
console.log(`SIGINT`)
}
run()
If you run that exact same code but replace `Promise.race` with a call to `Unpromise.race`, the program appears to run indefinitely and memory usage appears to plateau. And if you look at the definition of `Unpromise.race`, the author is saying almost exactly the same thing that I've been saying: "Equivalent to Promise.race but eliminates memory leaks from long-lived promises accumulating .then() and .catch() subscribers" [1], which is exactly the same thing that the Node contributor from the original issue was saying, which is also exactly the same thing the Chromium contributor was saying in the Chromium bug report where he writes "This will also grow the reactions list of `x` to 10e5" [2].[1] https://github.com/cefn/watchable/blob/6a2cd66537c664121671e... [2] https://issues.chromium.org/issues/42213031#comment5 | | |
| ▲ | kaoD a day ago | parent [-] | | Just to clarify because the message might have been lost: I'm not saying you're wrong! I'm saying you're right, and... Quoting a comment from the issue you linked: > This is not specific to Promise.race, but for any callback attached a promise that will never be resolved like this: x = new Promise(() => {});
for (let i = 0; i < 10e5 ; i++) {
x.then(() => {});
}
My point is if you do something like this (see below) instead, the same issue is still there and cannot be resolved just by using `Unpromise.race` because the underlying issue is promise cancellation: // Use this in the `race` instead
// Will also leak memory even with `Unpromise.race`
const interruptPromiseAndLog = () =>
interruptPromise()
.then(() => console.log('SIGINT'))
`Unpromise.race` only helps with its internal `then`, so it will only help if the promise you're using has no inner `then` or `await` after the non-progressing point. This is not a theoretical issue. This code happens all the time naturally, including in library code that you have no control over. So you have to proxy this promise too... but again this only partially solves the issue because you'd have to proxy every single promise that might ever be created, including those you have no control over (in library code) and therefore cannot proxy yourself. And the ergonomics are terrible. If you do this, you have to proxy and propagate unsubscription to both `then`s: const interruptPromiseAndLog = () =>
interruptPromise()
// How do you unsubscribe this one
.then(() => console.log('SIGINT'))
// ...even if you can easily proxy this one?
.then(() => console.log('REALLY SIGINT'))
Which can easily happen in await points too: const interruptPromiseAndLog = async () => {
console.log('Waiting for SIGINT')
// You have to proxy and somehow propagate unsubscription to this one too... how!?
await interruptPromise()
console.log('SIGINT')
}
Since this is just sugar for: const interruptPromiseAndLog = () => {
console.log('Waiting for SIGINT')
return interruptPromise()
// Needs unsubscription forwarded here
.then(() => console.log('SIGINT'))
}
Which can quickly get out of hand with multiple await points (i.e. many `then`s). Hence why I say the underlying issue is overall promise cancellation and the fact that you have no ownership of promises in JS userspace, only of their completion handles (the event loop is the actual promise owner), which do nothing when going out of scope (only the handle is GC'd, but the promise stays alive in the event loop). |
|
| |
| ▲ | GoblinSlayer 2 days ago | parent | prev [-] | | For that matter, C# has Task.WaitAsync: the waited task continues to the waiter task, and your code subscribes to the waiter task, which unregisters your listener after firing it, so the memory leak is limited to the small waiter task, which doesn't refer to anything after the timeout. |
| |
| ▲ | rerdavies 2 days ago | parent | prev [-] | | But if you really truly need cancel-able promises, it's just not that difficult to write one. This seems like A Good Thing, especially since there are several different interpretations of what "cancel-able" might mean (release the completion listeners into the gc, reject based on polling a cancellation token, or both). The javascript promise provides the minimum language implementation upon which more elaborate Promise implementations can be constructed. | | |
| ▲ | kaoD 2 days ago | parent [-] | | Why this isn't possible is implicitly (well, somewhat explicitly) addressed in my comment. const foo = async () => {
... // sync stuff A
await someLibrary.expensiveComputation()
... // sync stuff B
}
No matter what you do it's impossible to cancel this promise unless `someLibrary` exposes some way to cancel `expensiveComputation`, and you somehow expose a way to cancel it (and any other await points) and any other promises it uses internally also expose cancellation and they're all plumbed to have the cancellation propagated inward across all their await points. Unsubscribing from the completion listener is never enough. Implementing cancellation in your outer promise is never enough. > The javascript promise provides the minimum language implementation upon which more elaborate Promise implementations can be constructed. I'll reiterate: there is no way to write promise cancellation in JS userspace. It's just not possible (for all the reasons outlined in my long-ass comment above). No matter how elaborate your implementation is, you need collaboration from every single promise that might get called in the call stack. The proposed `unpromise` implementation would not help either. JS would need all promises to expose a sort of `AbortController` that is explicitly connected across all cancellable await points inwards, which would introduce cancel-safety issues. So you'd need something like this to make promises actually cancelable: const cancelableFoo = async (signal) => {
if (signal.aborted) {
throw new AbortError()
}
... // sync stuff A
if (signal.aborted) {
// possibly cleanup for sync stuff A
throw new AbortError()
}
await someLibrary.expensiveComputation(signal)
if (signal.aborted) {
// possibly cleanup for sync stuff A
throw new AbortError()
}
... // sync stuff B
if (signal.aborted) {
// possibly cleanup for sync stuff A
// possibly cleanup for sync stuff B
throw new AbortError()
}
}
const controller = new AbortController()
const signal = controller.signal
Promise.cancelableRace(
controller, // cancelableRace will call controller.abort() if any promise completes
[
cancelableFoo(signal),
deletePost(signal),
timeout(3000, signal),
]
)
And you need all promises to get their `signal` properly propagated (and properly handled) across the whole call stack. |
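As a rough illustration of that plumbing with an abortable API that does exist today (fetch + AbortSignal; the function names and URL are made up), every layer has to accept the signal and forward it inward by hand:
// Innermost abortable call: fetch rejects with an AbortError once aborted.
async function loadUser(id, signal) {
  const res = await fetch(`https://example.com/api/users/${id}`, { signal })
  return res.json()
}
async function loadDashboard(signal) {
  signal?.throwIfAborted()                // manual check between await points
  const user = await loadUser(1, signal)  // forward the signal inward
  signal?.throwIfAborted()
  return { user }
}
const controller = new AbortController()
setTimeout(() => controller.abort(), 3000) // e.g. give up after 3 seconds
loadDashboard(controller.signal).catch(err => {
  if (err.name === 'AbortError') console.log('cancelled')
  else throw err
})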
|
|
|
|
|