stillsut 2 days ago

Not an expert but my chats with ChatGPT led me to believe async + FastAPI can give you 40x throughput for request handling over non-async code.

The essential idea was that I could have ~100 requests in flight per vCPU on the async event loop, while a thread-per-request model would max out at 2-4 threads per CPU. In either model, assume we're waiting 50-2000ms for a DB query or service call to finish before sending the response.
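For what it's worth, the event-loop side of that claim is easy to sketch in plain asyncio. This is a toy, not FastAPI itself: `fake_db_call` is a hypothetical stand-in for the 50-2000ms query, and the point is just that 100 waits overlap on a single thread instead of running back to back.

```python
import asyncio
import time

async def fake_db_call(delay: float = 0.2) -> str:
    # Hypothetical stand-in for a slow DB query or service call.
    # asyncio.sleep yields to the event loop, like an async DB driver would.
    await asyncio.sleep(delay)
    return "row"

async def main() -> float:
    start = time.perf_counter()
    # 100 "requests" in flight concurrently on one event loop, one thread
    results = await asyncio.gather(*(fake_db_call() for _ in range(100)))
    assert len(results) == 100
    return time.perf_counter() - start

elapsed = asyncio.run(main())
# The 100 waits overlap, so wall time stays close to one call's latency
# (~0.2s) rather than 100 * 0.2s = 20s of sequential waiting.
print(f"{elapsed:.2f}s")
```

With blocking calls in a thread pool you'd need 100 threads to get the same overlap; here it's one thread and ~100 suspended coroutines, which is the saturation argument in a nutshell.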

Is this not true? And if it is true, why isn't the juice worth the squeeze: more than an order of magnitude more saturation/throughput on the same hardware and in the same language, just with a new engine at its heart?