flutas (a day ago):
Working on reproducible test runs to catch quality issues from LLM providers. My main goal is not just a "the model made code, yay!" setup, but verifiable outputs that can show degradation as percentages. I.e., have the model build something like a Connect 4 engine, then run it through a lot of tests to see how valid its solution is, and score that solution as NN/100% accurate. Then do many runs of the same test at a fixed interval. I have ~10 tests like this so far and am working on more.
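The scoring idea described above could be sketched roughly like this. Everything here is an assumption about how such a harness might look, not the commenter's actual setup: `score_engine` takes a candidate move function (standing in for the LLM-generated engine), runs it over a suite of (position, expected column) cases, and reports accuracy as a percentage. `naive_engine` is a deliberately weak stand-in engine used only to demonstrate a non-perfect score.

```python
from typing import Callable, List, Tuple

# Hypothetical board encoding: one string per row, top to bottom,
# '.' for empty, 'X'/'O' for pieces. 7 columns, 6 rows.
Board = List[str]

def score_engine(choose_move: Callable[[Board], int],
                 cases: List[Tuple[Board, int]]) -> float:
    """Return the percentage of cases where the engine picks the expected column."""
    hits = sum(1 for board, want in cases if choose_move(board) == want)
    return 100.0 * hits / len(cases)

def naive_engine(board: Board) -> int:
    """Toy stand-in for a model-generated engine: take an immediate
    horizontal win on the bottom row if one exists, else play column 0."""
    bottom = board[-1]
    for col in range(7):
        if bottom[col] != '.':
            continue
        trial = bottom[:col] + 'X' + bottom[col + 1:]
        if 'XXXX' in trial:
            return col
    return 0

def make_case(x_cols: List[int], want: int) -> Tuple[Board, int]:
    """Build a position with 'X' pieces in the given bottom-row columns."""
    bottom = ''.join('X' if c in x_cols else '.' for c in range(7))
    return ['.' * 7] * 5 + [bottom], want

cases = [
    make_case([0, 1, 2], 3),  # XXX.... -> winning drop at column 3
    make_case([1, 2, 3], 0),  # .XXX... -> winning drop at column 0
    make_case([2, 3, 4], 1),  # ..XXX.. -> winning drop at column 1
    make_case([3, 4, 5], 2),  # ...XXX. -> winning drop at column 2
    make_case([0, 1], 2),     # no win available; expected threat-extending move
]

accuracy = score_engine(naive_engine, cases)
print(f"{accuracy:.1f}/100% accurate")
```

Repeating this run at a fixed interval and plotting the percentage over time would surface degradation, which seems to be the commenter's point.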
alexgandy (12 hours ago):
Sounds really interesting. What are you using for the tests/reports?
sebastianconcpt (a day ago):
Nice. Sounds like it will converge to QA as a Service.