mgh95 5 days ago
> Because if part of my tests involve calling an OpenAI endpoint, I don't want to pay .01 cent every time I run my tests.

This is a good time to think to yourself: do I need these dependencies? Can I replace them with something that doesn't expose vendor risk? These are very real questions that large enterprises grapple with. In general (but not always), orgs that view technology as the product (or the product under test) will accept the cost of either testing against or in-housing that technology; cost centers will not.

> But in general I'm going to mock out things that I want to simulate failure states for, and since I'm paranoid, I generally want to simulate failure states for everything.

This can be achieved with an instrumented version of the service itself.
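To make "an instrumented version of the service" concrete, here is a rough sketch of one way to do it: a tiny local stand-in that speaks just enough of the OpenAI chat-completions shape for tests and exposes a knob to force failure modes, so the real client code path gets exercised without hitting the vendor. FakeOpenAI, failure_mode, and start_fake are made-up names for illustration, not anything from the openai library.

    # Sketch of an instrumented stand-in: a local HTTP server that mimics the
    # chat-completions response shape and can be forced into failure modes.
    import json
    import threading
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class FakeOpenAI(BaseHTTPRequestHandler):
        # None | "http_500" | "malformed_json" | "missing_field"
        failure_mode = None

        def do_POST(self):
            # Drain the request body so the connection stays well-behaved.
            self.rfile.read(int(self.headers.get("Content-Length", 0)))
            if self.failure_mode == "http_500":
                self.send_response(500)
                self.end_headers()
                return
            if self.failure_mode == "malformed_json":
                body = b"{not json"
            else:
                # "missing_field" returns valid JSON that omits required keys,
                # simulating the model ignoring the response schema.
                content = "{}" if self.failure_mode == "missing_field" else '{"summary": "ok"}'
                body = json.dumps({
                    "choices": [{"message": {"role": "assistant", "content": content}}]
                }).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)

        def log_message(self, *args):
            pass  # keep test output quiet

    def start_fake(port=0):
        server = HTTPServer(("127.0.0.1", port), FakeOpenAI)
        threading.Thread(target=server.serve_forever, daemon=True).start()
        return server  # port available as server.server_port

A test flips FakeOpenAI.failure_mode, points the real client's base_url at the fake server's port, and asserts that the calling code degrades the way you expect, all without paying per request.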
com2kid 5 days ago | parent
> This is a good time to think to yourself: do I need these dependencies? Can I replace them with something that doesn't expose vendor risk?

Given that my current projects all revolve solely around using LLMs to do things, yes, I need them. The entire purpose of the code is to call into LLMs and do something useful with the output.

That said, I need to gracefully handle failures, handle OpenAI giving me back trash results (forgetting fields even though they are marked required in the schema, etc.), and ride out the occasional service outage.

Also, integration tests only make sense once I have an entire system to integrate. Unit tests let me know that the file I just wrote works.
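For what that kind of unit test can look like, here is a rough sketch using nothing but unittest.mock: the OpenAI client is a Mock, so one file's parsing/validation logic can be tested against trash responses (missing required fields, malformed JSON) and a simulated outage without any network calls. extract_summary, fake_response, and the summary/sentiment fields are hypothetical stand-ins, not anyone's actual code.

    # Sketch: unit-testing response validation with a mocked OpenAI client.
    import json
    import unittest
    from unittest.mock import Mock

    REQUIRED_FIELDS = {"summary", "sentiment"}

    def extract_summary(client, text):
        """Call the model and check that every required field came back."""
        resp = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": f"Summarize: {text}"}],
            response_format={"type": "json_object"},
        )
        data = json.loads(resp.choices[0].message.content)
        missing = REQUIRED_FIELDS - data.keys()
        if missing:
            raise ValueError(f"model omitted required fields: {missing}")
        return data

    def fake_response(content):
        """Build an object shaped like a chat-completion response."""
        return Mock(choices=[Mock(message=Mock(content=content))])

    class ExtractSummaryTests(unittest.TestCase):
        def test_happy_path(self):
            client = Mock()
            client.chat.completions.create.return_value = fake_response(
                '{"summary": "short", "sentiment": "neutral"}')
            self.assertEqual(extract_summary(client, "hi")["sentiment"], "neutral")

        def test_missing_required_field_is_rejected(self):
            client = Mock()
            client.chat.completions.create.return_value = fake_response('{"summary": "short"}')
            with self.assertRaises(ValueError):
                extract_summary(client, "hi")

        def test_malformed_json_is_rejected(self):
            client = Mock()
            client.chat.completions.create.return_value = fake_response("not json at all")
            with self.assertRaises(json.JSONDecodeError):
                extract_summary(client, "hi")

        def test_outage_propagates(self):
            client = Mock()
            client.chat.completions.create.side_effect = TimeoutError("service unavailable")
            with self.assertRaises(TimeoutError):
                extract_summary(client, "hi")

    if __name__ == "__main__":
        unittest.main()

Each test runs in milliseconds and costs nothing, which is the whole point: the file's own logic gets verified long before there is a full system to integrate against.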