jiggawatts 4 days ago
Good summary of the theory, but the weird thing is that every time I’ve rewritten code to use async, total throughput went down by about 10%… which is roughly what I estimate the compiler-generated async state machinery costs. I’ve yet to see a convincing set of A/B comparisons from a modern language. My experiences don’t line up with the conventional wisdom!
cryptonector 3 days ago | parent
That could be because you're still smearing state on the stack? With async functions one can do that, and then you still effectively have stacks/fibers/threads, so you've not gained much. With a CPS approach you really don't have multiple stacks.

Relatedly, the functional core, imperative shell (FCIS) concept comes in here. The imperative shell is the async I/O event loop / executor. Everything else is functional state transitions that possibly request I/O, and if you represent I/O requests as return values to be executed by the executor, then those state transitions can be purely functional. A state transition can use as much stack as it wants, but when it's done the stack is gone -- no stack is held between transitions. Naturally you don't want state transitions to have unbounded CPU time, but for some applications you may have to allow it, in which case you have problems (gaaah, thread cancellation is such a pain!).

The point of FCIS is to make testing the state transitions trivial: there is nothing to mock, just one input, one state of the world, and an output to check against what's expected. The "imperative shell" can also be tested with a very simple "application" and setup to show that it works, without having to test the whole enchilada with complex mock setups.
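To make the FCIS shape concrete, here's a minimal sketch in Rust (since the compiler-generated async state machine discussion usually comes up there). All the names -- Event, Effect, State, step, run_shell -- are hypothetical, not from any particular library. The pure step function never performs I/O; it only returns requests for the shell to execute, so nothing lives on a stack between transitions:

    use std::collections::VecDeque;
    use std::io::{self, BufRead, Write};

    // Events delivered to the functional core by the shell.
    #[derive(Debug)]
    enum Event {
        LineRead(String),
        WriteDone,
        Eof,
    }

    // I/O requests *returned* by the core; only the shell executes them.
    #[derive(Debug)]
    enum Effect {
        ReadLine,
        WriteLine(String),
        Stop,
    }

    // All long-lived state lives here, not on any stack between transitions.
    #[derive(Debug, Default)]
    struct State {
        lines_seen: usize,
    }

    // Pure state transition: one state + one event in, new state + effects out.
    // Trivially testable: no mocks, just inputs and expected outputs.
    fn step(mut state: State, event: Event) -> (State, Vec<Effect>) {
        match event {
            Event::LineRead(line) => {
                state.lines_seen += 1;
                let echo = format!("{}: {}", state.lines_seen, line.trim_end());
                (state, vec![Effect::WriteLine(echo)])
            }
            Event::WriteDone => (state, vec![Effect::ReadLine]),
            Event::Eof => (state, vec![Effect::Stop]),
        }
    }

    // Imperative shell: the only place that touches real I/O. Here it's a
    // plain synchronous loop; in a real system it would be the async event
    // loop / executor.
    fn run_shell() {
        let stdin = io::stdin();
        let mut state = State::default();
        let mut pending = VecDeque::from([Effect::ReadLine]);

        while let Some(effect) = pending.pop_front() {
            let event = match effect {
                Effect::ReadLine => {
                    let mut line = String::new();
                    if stdin.lock().read_line(&mut line).unwrap_or(0) == 0 {
                        Event::Eof
                    } else {
                        Event::LineRead(line)
                    }
                }
                Effect::WriteLine(s) => {
                    println!("{s}");
                    io::stdout().flush().ok();
                    Event::WriteDone
                }
                Effect::Stop => return,
            };
            let (next, effects) = step(state, event);
            state = next;
            pending.extend(effects);
        }
    }

    fn main() {
        run_shell();
    }

A test of the functional core is then just: call step with a State and an Event, assert on the returned State and Effects -- no event loop, no mocks, exactly the "one input, one state of the world, check the output" shape described above.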