mike_hearn 4 days ago
The most interesting thing about this is the apparent absence of unit tests. The test for the XLA compiler bug just prints the outputs; it's more a repro case than a unit test in the sense of something that'd be run by a test harness and have its coverage tracked. And the action items are simply to lean more aggressively into evals.

Although unit testing an entire LLM is not really feasible right now, all these bugs were in small, deterministic parts of the system. Load balancing, top-k probability calculations and so on are all engineered components, no different from other software, and should in principle all be unit testable. At most you need an injectable PRNG. Yes, non-deterministic optimization bugs are awful, but I've personally found compiler and database bugs in the past using just regular app test suites. With CI you get a lot of runs, so rare events can still surface as long as you investigate flakes. One of my current projects runs every unit test in the same process in parallel, which has proven an excellent and cheap strategy for flushing out rare thread safety issues and database deadlocks.

A few days ago I commented on a thread about the Java launch that people often feel Java is "enterprisey" compared to Python because Java code is typically written to be heavily unit testable. A lot of abstraction is driven by the desire for dependency injection, for example. I contrasted that with scripting language culture, where I've found testing is often either missing or fairly surface level (e.g. mostly just asserting on types). When I was learning PyTorch a few years ago I noticed the same thing: the tutorials took you from simple to complex stuff without saying much about how to test or how best to structure the code. That makes sense for ML research, where you don't have a clear goal and success boils down to maxing a score in some kind of eval, but it doesn't make sense for production deployment at scale.

I wonder if the AI labs could use more people with SRE and HA SWE backgrounds to focus on things like this. I'm kinda skeptical that more aggressive rolling evals-in-prod are the best way to avoid bugs like these happening again.
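To make the injectable-PRNG point concrete, here's a rough sketch of what I mean for something like top-k sampling. The function and test names are made up, not taken from any real inference stack:

    import numpy as np

    def sample_top_k(logits, k, rng):
        # Keep only the k highest logits, renormalise, and sample one index.
        top = np.argsort(logits)[-k:]                     # indices of the k largest logits
        probs = np.exp(logits[top] - logits[top].max())   # numerically stable softmax
        probs /= probs.sum()
        return top[rng.choice(len(top), p=probs)]

    def test_sampled_token_never_escapes_top_k():
        rng = np.random.default_rng(1234)                 # injected, so the test is reproducible
        logits = np.array([0.1, 5.0, 4.9, -2.0, 3.0])
        for _ in range(1000):
            assert sample_top_k(logits, k=2, rng=rng) in (1, 2)   # only the two largest logits may win

    def test_sampling_is_deterministic_for_a_fixed_seed():
        logits = np.array([0.1, 5.0, 4.9, -2.0, 3.0])
        a = [sample_top_k(logits, 3, np.random.default_rng(7)) for _ in range(50)]
        b = [sample_top_k(logits, 3, np.random.default_rng(7)) for _ in range(50)]
        assert a == b

The only design decision that matters is that the sampler takes the RNG as a parameter rather than reaching for a global, so a test can pin the seed and assert on concrete values.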
vintagedave 4 days ago
I've had to write some detailed prompts and examples to have AI generate the kind of unit tests I want in Python. I've seen the assertions on types alone too; I want assertions on values and more.

Even more than that, AI tends to mock _everything_. Mocking is useful, but the more real code a unit test invokes, the better, because the risk is not only in the code itself but in its interactions, the interface. Yet AI in Python will mock so heavily it barely tests even the code itself, with tautological assertions. I've prompted with heavy warnings against mocking and pointed it directly at thorough tests to use as examples.

FWIW, Python does have excellent tools for injection, and you can write really nicely structured code in it.
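A toy contrast of what I mean (all names invented): the first test mocks the collaborator and only checks a type, so it passes no matter what the arithmetic does; the second exercises the real code path and asserts on values.

    from unittest.mock import Mock

    def order_total(pricing, quantities):
        # Sum price * quantity across line items, using a pricing collaborator.
        return sum(pricing.price_of(item) * qty for item, qty in quantities.items())

    def test_order_total_mock_heavy():
        pricing = Mock()
        pricing.price_of.return_value = 1.0
        # Tautological: any numeric result at all would pass this.
        assert isinstance(order_total(pricing, {"apple": 3}), float)

    def test_order_total_real_values():
        class FixedPricing:                       # tiny real collaborator instead of a Mock
            prices = {"apple": 0.50, "bread": 2.25}
            def price_of(self, item):
                return self.prices[item]
        total = order_total(FixedPricing(), {"apple": 3, "bread": 2})
        assert total == 6.00                      # 3 * 0.50 + 2 * 2.25, an assertion on the value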