benschulz | 2 days ago
Most approaches, I assume, will leverage conditional compilation: when running (deterministic simulation) tests, use the deterministic async runtime; otherwise, use the default runtime. That means there's no runtime overhead, at the cost of increased complexity.

I'm using DST in a personal project. My biggest issue is that significant parts of the ecosystem either require or strongly prefer tokio as the runtime. To deal with that, I re-implemented most of tokio's API on top of my DST runtime. Running my DST tests involves patching dependencies, which can get messy.
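A rough illustration of that conditional-compilation pattern (the `dst` cfg flag and the `dst_runtime` crate are placeholders, not names from this comment):

```rust
// rt.rs -- a single facade the rest of the codebase imports from.
// Building with `RUSTFLAGS="--cfg dst"` swaps in the deterministic
// runtime; a normal build re-exports tokio directly, so production
// code pays no extra indirection.

#[cfg(dst)]
pub use dst_runtime::{sleep, spawn, JoinHandle};

#[cfg(not(dst))]
pub use tokio::{
    task::{spawn, JoinHandle},
    time::sleep,
};
```

Dependencies that hard-code `tokio::` paths are the painful part: they have to be redirected to the re-implemented API, presumably via `[patch]` entries in Cargo.toml, which is where the messiness comes from.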

imtringued | 2 days ago
It doesn't, because nothing in the article indicates performance hits. It doesn't even mention "proxying every operation through another indirection layer". The article is about organizing a distributed/multithreaded system for deterministic execution and fault injection. It's like refactoring a codebase for unit testing: it shouldn't slow anything down, and even if it does, the overhead should be laughably small.
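To make the "refactoring for testability" analogy concrete (a hypothetical sketch, not taken from the article): once a side effect sits behind a trait, generic code is monomorphized per implementation, so the production path is statically dispatched and the abstraction costs essentially nothing at runtime.

```rust
use std::time::Instant;

// Hypothetical clock abstraction: production code uses the real clock,
// while a DST harness can substitute a simulated clock it advances itself.
trait Clock {
    fn now(&self) -> Instant;
}

struct SystemClock;

impl Clock for SystemClock {
    fn now(&self) -> Instant {
        Instant::now()
    }
}

// Monomorphized per Clock impl: with SystemClock, `clock.now()` is a
// statically dispatched (and typically inlined) call to Instant::now(),
// so the indirection exists only in the source code.
fn has_expired<C: Clock>(clock: &C, deadline: Instant) -> bool {
    clock.now() >= deadline
}
```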

reitzensteinm | 2 days ago
See Tokio's Loom as an example: https://github.com/tokio-rs/loom

In development, you import Loom's mutex; in production, you import a regular mutex. This of course has zero runtime overhead, but the simulation testing itself is usually quite slow: only one thread can execute at a time, and many iterations are required to cover the possible interleavings.
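Roughly, following the pattern from Loom's documentation (the `loom` cfg flag is the conventional one, enabled with `RUSTFLAGS="--cfg loom"`):

```rust
// Swap sync primitives at compile time: Loom's instrumented versions
// under `--cfg loom`, the std ones everywhere else.
#[cfg(loom)]
pub(crate) use loom::sync::{Arc, Mutex};
#[cfg(not(loom))]
pub(crate) use std::sync::{Arc, Mutex};

// A Loom test re-runs the closure, exploring the possible thread
// interleavings, which is why only one logical thread runs at a time
// and many iterations are needed.
#[cfg(loom)]
#[test]
fn concurrent_increment() {
    loom::model(|| {
        let counter = Arc::new(Mutex::new(0));
        let c2 = Arc::clone(&counter);

        let handle = loom::thread::spawn(move || {
            *c2.lock().unwrap() += 1;
        });
        *counter.lock().unwrap() += 1;

        handle.join().unwrap();
        assert_eq!(*counter.lock().unwrap(), 2);
    });
}
```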

vlovich123 | 2 days ago
I would expect it to be possible, depending on how you do it. I would think you just instantiate a different set of interfaces for DST while keeping production code running against the real implementations. It's a little trickier if you also want DST coverage of the executor itself. With Antithesis that's all guaranteed of course, since you're running on a VM and the abstraction is at a much lower level.
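A rough sketch of what "a different set of interfaces" could look like (all names here are hypothetical): under DST, components are handed a simulated network whose failures come from a seeded RNG, so a failing run can be replayed exactly; production hands them the real implementation instead.

```rust
// Hypothetical interface handed to components at construction time.
trait Network {
    fn send(&mut self, to: u64, msg: &[u8]) -> Result<(), SendError>;
}

#[derive(Debug)]
struct SendError;

// Simulated implementation used only under DST: failures are driven by a
// seed chosen by the test harness, so the fault schedule is reproducible.
struct SimNetwork {
    rng_state: u64,                  // per-run seed, must be non-zero
    drop_one_in: u64,                // e.g. 50 => drop roughly 2% of sends
    delivered: Vec<(u64, Vec<u8>)>,  // messages the "network" accepted
}

impl Network for SimNetwork {
    fn send(&mut self, to: u64, msg: &[u8]) -> Result<(), SendError> {
        // Tiny xorshift step: fully deterministic given the initial seed.
        self.rng_state ^= self.rng_state << 13;
        self.rng_state ^= self.rng_state >> 7;
        self.rng_state ^= self.rng_state << 17;

        if self.rng_state % self.drop_one_in == 0 {
            return Err(SendError); // injected fault
        }
        self.delivered.push((to, msg.to_vec()));
        Ok(())
    }
}
```

The executor itself stays outside this kind of abstraction, which is the part that gets trickier; a VM-level tool like Antithesis sidesteps that by making the whole machine deterministic.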