strogonoff | 5 days ago:
For a while I have been curious about the intended uses of the xAtTime functions (like cancelAndHoldAtTime) in Web Audio. As I understand it, calls to them suffer from lag due to communication between the main JavaScript thread and the audio thread, which makes sample precision unachievable, and precision is quite important in music. Are they mostly for emulating slow-moving changes on fixed timelines, a la automation tracks in traditional DAWs like Logic and Ableton? Is the design rationale documented somewhere?
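(For context, the usual pattern is to schedule automation slightly ahead of the context's current time. A minimal sketch of that pattern is below; `FakeParam` is a hypothetical stand-in for a real AudioParam such as `gainNode.gain` so the example runs outside a browser, while `setValueAtTime` and `linearRampToValueAtTime` mirror the real AudioParam method names.)

```javascript
// Hypothetical stand-in for an AudioParam: records scheduled events
// instead of rendering audio, so the scheduling pattern is visible.
class FakeParam {
  constructor() { this.events = []; }
  setValueAtTime(value, time) { this.events.push({ type: 'set', value, time }); }
  linearRampToValueAtTime(value, time) { this.events.push({ type: 'ramp', value, time }); }
}

// Schedule a fade-out starting `lookahead` seconds after `now`
// (now would be audioContext.currentTime in a real app). Calling
// slightly in advance absorbs the main-thread -> audio-thread hop.
function scheduleFadeOut(param, now, lookahead, duration) {
  const start = now + lookahead;
  param.setValueAtTime(1.0, start);
  param.linearRampToValueAtTime(0.0, start + duration);
  return start;
}

const gain = new FakeParam();
const start = scheduleFadeOut(gain, 2.0, 0.1, 0.5);
console.log(start, gain.events);
```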
padenot | 5 days ago | parent:
Those methods are sub-sample accurate, provided you call them a bit in advance to account for the cross-thread communication, as you say. But yes, in general this was designed (prior to me becoming an editor) with scheduling in mind, not low-latency interactivity. That said, it goes quite far. Other systems go further: Web Audio Modules (which builds on top of AudioWorklet) implements sample-accurate parameter changes from within the rendering thread, using wait-free ring buffers. That requires `SharedArrayBuffer` but works great, and it is the lowest latency possible, since it uses atomic loads and stores from e.g. the main thread to the rendering thread.
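(To make the ring-buffer idea concrete, here is a minimal single-producer/single-consumer queue over a `SharedArrayBuffer`, in the spirit of what's described above. The layout, function names, and message format are illustrative assumptions, not the Web Audio Modules API; a real version would run `push` on the main thread and `pop` inside an AudioWorklet's `process` callback.)

```javascript
// Simplified SPSC queue: indices grow monotonically (no overflow handling),
// each slot holds a [paramId, value] pair of 32-bit floats.
const CAPACITY = 8; // number of slots; power of two keeps the modulo cheap

function makeQueue() {
  const sab = new SharedArrayBuffer(8 + CAPACITY * 8); // 2 int32 indices + slots
  return {
    indices: new Int32Array(sab, 0, 2),        // [writeIdx, readIdx]
    data: new Float32Array(sab, 8, CAPACITY * 2),
  };
}

// Producer side (e.g. main thread). Wait-free: never blocks, drops when full.
function push(q, paramId, value) {
  const w = Atomics.load(q.indices, 0);
  const r = Atomics.load(q.indices, 1);
  if (w - r === CAPACITY) return false;        // full
  const slot = (w % CAPACITY) * 2;
  q.data[slot] = paramId;
  q.data[slot + 1] = value;
  Atomics.store(q.indices, 0, w + 1);          // publish only after the write
  return true;
}

// Consumer side (e.g. the rendering thread). Also never blocks.
function pop(q) {
  const w = Atomics.load(q.indices, 0);
  const r = Atomics.load(q.indices, 1);
  if (r === w) return null;                    // empty
  const slot = (r % CAPACITY) * 2;
  const msg = { paramId: q.data[slot], value: q.data[slot + 1] };
  Atomics.store(q.indices, 1, r + 1);          // release the slot
  return msg;
}

const q = makeQueue();
push(q, 1, 0.5);
console.log(pop(q)); // the [paramId, value] pair that was pushed
```

The atomic store to the write index after filling the slot is what gives the consumer a consistent view without locks; since neither side ever waits, the audio callback stays glitch-free even if the main thread stalls.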