▲ ImprobableTruth 14 days ago

The reason is that the usage is completely different from coroutine-based async. With GPUs you want to queue _as many async operations as possible_ and only then synchronize. That is, you would have a program like this (pseudocode):

    b = foo(a)
    c = bar(b)
    d = baz(c)
    synchronize()

With coroutines/async-await, something like this:

    b = await foo(a)
    c = await bar(b)
    d = await baz(c)

would synchronize after every step, which is much less efficient.
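
A minimal sketch of the queued style in Python, assuming CuPy (cp.sqrt, cp.multiply, and cp.sum are stand-ins for the hypothetical foo/bar/baz):

    import cupy as cp

    a = cp.arange(1_000_000, dtype=cp.float32)

    stream = cp.cuda.Stream()
    with stream:
        # Each call only enqueues a kernel on the stream and returns
        # immediately; the stream preserves the b -> c -> d ordering.
        b = cp.sqrt(a)            # stand-in for foo(a)
        c = cp.multiply(b, 2.0)   # stand-in for bar(b)
        d = cp.sum(c)             # stand-in for baz(c)
    stream.synchronize()          # one blocking wait for the whole chain
    print(float(d))

In the await version, by contrast, the host yields at every step before the next kernel can even be enqueued.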
▲ hackernudes 14 days ago | parent | next

Pretty sure you want it to do it the first way in all cases (not just with GPUs)!
▲ halter73 14 days ago | parent

It really depends on whether you're dealing with an async stream or a single async result as the input to the next function. If a is an access token needed to access resource b, you cannot access a and b at the same time. You have to serialize your operations.
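
A minimal sketch of that forced sequencing in asyncio (get_token and get_resource are hypothetical stand-ins, not from the thread):

    import asyncio

    async def get_token():
        await asyncio.sleep(0.1)          # stand-in for the auth request
        return "token-123"

    async def get_resource(token):
        await asyncio.sleep(0.1)          # stand-in for the real fetch
        return f"resource fetched with {token}"

    async def main():
        token = await get_token()         # step 1 must finish first...
        data = await get_resource(token)  # ...because step 2 consumes its result
        print(data)

    asyncio.run(main())

Here gather() buys nothing: the second request cannot even be issued until the first completes.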
▲ alanfranz 14 days ago | parent | prev

Well, you can and should create multiple coroutines/tasks and then gather them. If you replace CUDA with network calls, it's exactly the same problem. Nothing to do with asyncio.
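
A minimal sketch of that pattern (fetch is a hypothetical stand-in for any network call):

    import asyncio

    async def fetch(name):
        await asyncio.sleep(0.1)   # stand-in for a network round trip
        return f"{name}: done"

    async def main():
        # The three coroutines are independent, so gather runs them
        # concurrently and waits for all of them at once.
        results = await asyncio.gather(fetch("a"), fetch("b"), fetch("c"))
        print(results)

    asyncio.run(main())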
▲ ImprobableTruth 14 days ago | parent

No, that's a different scenario. In the one I gave, there is explicitly a dependency between requests. If you use gather, the network requests are executed in parallel. If you have dependencies, the requests are sequential by nature, because later ones depend on the values of earlier ones.

The 'trick' for CUDA is that you declare all of this using buffers as inputs/outputs rather than values, and that ordering is enforced automatically through CUDA's stream mechanism. Marrying that with the coroutine mechanism just doesn't really make sense.
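
A sketch of that buffer-oriented style, again assuming CuPy (the buffers and ops are illustrative, not from the thread):

    import cupy as cp

    n = 1_000_000
    a = cp.arange(n, dtype=cp.float32)
    b = cp.empty(n, dtype=cp.float32)   # output of step 1, input of step 2
    c = cp.empty(n, dtype=cp.float32)   # output of step 2, input of step 3
    d = cp.empty(n, dtype=cp.float32)

    stream = cp.cuda.Stream()
    with stream:
        cp.sqrt(a, out=b)             # enqueued; returns immediately
        cp.multiply(b, 2.0, out=c)    # stream orders this after step 1
        cp.add(c, 1.0, out=d)         # stream orders this after step 2
    stream.synchronize()              # single host-side wait

The dependencies are expressed through which buffers each step reads and writes; the stream enforces the ordering without a host round trip between steps.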