Latency Profiling in Python: From Code Bottlenecks to Observability (quant.engineering)
24 points by rundef 7 days ago | 6 comments
|
abhashanand1501 an hour ago
> The trading system reports an average latency of 10ms

Python is a bad choice for a system with such latency requirements. Isn't C++/Rust the preferred language for algorithmic trading shops?
|
Veserv 3 hours ago
Why even bother with sampling profilers in Python? You can do full function traces for literally all of your code in production at ~1-10% overhead with efficient instrumentation.
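For context, CPython already exposes the hook this kind of instrumentation builds on. Below is a minimal sketch of a whole-program function tracer using sys.setprofile; the timestamping, in-memory storage, and output format are illustrative choices of mine, not any particular tool's implementation.

    import sys
    import time

    _events = []   # (depth, function name, duration in ns) for every completed call
    _stack = []    # currently open calls

    def _profile_hook(frame, event, arg):
        # sys.setprofile invokes this on every Python call/return (plus C call
        # events, which we ignore here), so no function in the traced code is missed.
        if event == "call":
            _stack.append((frame.f_code.co_name, time.perf_counter_ns()))
        elif event == "return" and _stack:
            name, start = _stack.pop()
            _events.append((len(_stack), name, time.perf_counter_ns() - start))

    def trace(func, *args, **kwargs):
        """Run func under the hook and dump a crude call tree with per-call times."""
        sys.setprofile(_profile_hook)
        try:
            return func(*args, **kwargs)
        finally:
            sys.setprofile(None)
            for depth, name, ns in _events:
                print(f"{'  ' * depth}{name}: {ns / 1000:.1f} us")

A production-grade tracer does essentially the same thing in native code, writing to a binary buffer instead of a Python list, which is where the low overhead comes from.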
hansvm 3 hours ago

That depends on the code you're profiling. Even good line profilers can add 2-5x overhead on programs not optimized for them, and you're in a bit of a pickle, because the programs least suited to line profiling are the ones that are already "optimized" (tuned to get fast results for a given task in Python).
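A rough way to see that ratio for yourself, sketched here against the line_profiler package with a made-up tight-loop workload (the 2-5x figure above will obviously vary with the machine and the code):

    import time
    from line_profiler import LineProfiler  # assumes `pip install line_profiler`

    def hot_loop(n):
        # Many cheap lines per unit of work: the worst case for per-line hooks.
        total = 0
        for i in range(n):
            total += i * i
        return total

    # Baseline wall time without any profiling.
    t0 = time.perf_counter()
    hot_loop(1_000_000)
    baseline = time.perf_counter() - t0

    # Same workload with line-level instrumentation attached.
    lp = LineProfiler()
    profiled_loop = lp(hot_loop)   # LineProfiler instances wrap functions like a decorator
    t0 = time.perf_counter()
    profiled_loop(1_000_000)
    instrumented = time.perf_counter() - t0

    lp.print_stats()
    print(f"slowdown: {instrumented / baseline:.1f}x")

Code that spends most of its time inside C extensions triggers far fewer per-line events, so the same profiler looks much cheaper there, which is the pickle being described.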
Veserv 2 hours ago

It does not; those are just very inefficient tracing profilers. You can literally trace C programs at 10-30% overhead. For Python you should only accept low single-digit overhead on average, with ~10% overhead only in degenerate cases with large numbers of tiny functions [1]. Anything more means your tracer is inefficient.

[1] https://functiontrace.com/
hansvm an hour ago

That's... intriguing. I just tested out functiontrace and saw 20-30% overhead. I didn't expect it to be anywhere near that low.