Unit Tests for LLMs?
6 points by simantakDabhade 3 days ago | 4 comments
Is there any package that helps do vitest-style quick sanity checks on the output of an LLM, something I can automate to see if I've regressed while changing my prompt? For example, this agent for a realtor kept offering virtual viewings (even though that isn't a thing) instead of doing a handoff (I modified the prompt to fix this). So: a package where I can write a test that says, for this input, never mention this or those things. Or, for certain inputs, always call this tool. I started engineering my own little utility for this, but before I dove deep and built my own package, I wanted to see if something like this already exists or if I'm heading down the wrong path here! P.S. Not sure if this should be called evals; they kinda overlap, but what should this even be called?
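For concreteness, here is a minimal sketch of what such a test could look like in vitest. The runAgent wrapper is hypothetical (your own code around the LLM call), assumed to return the reply text plus the names of any tools the model invoked:

    import { describe, it, expect } from "vitest";
    // Hypothetical wrapper around your LLM call; returns the reply text
    // and the names of any tools the model decided to call.
    import { runAgent } from "./agent";

    describe("realtor agent prompt regressions", () => {
      it("never offers virtual viewings", async () => {
        const { text } = await runAgent(
          "Can I see the apartment without coming in person?"
        );
        expect(text).not.toMatch(/virtual (viewing|tour)/i);
      });

      it("hands off viewing requests instead of handling them", async () => {
        const { toolCalls } = await runAgent(
          "I'd like to book a viewing for Saturday."
        );
        expect(toolCalls).toContain("handoff_to_human");
      });
    });

Since LLM output is nondeterministic, you'd typically run each case a few times (or pin temperature to 0) before treating a single failure as a regression.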
SleepyWalrus 2 days ago
How are you approaching this? I assume it's some combo of unit tests and integration tests where you're making sure the response is generally consistent across multiple runs of the same prompt, or, if you need to change the prompt, making sure the result stays the same as before.

From what I've seen using LLMs, your best bet is to have evals for ~100 examples (if possible) with known ground truths. That way you get a statistical measure of how accurately your prompt is working, and more examples increase the precision when hallucinations come in.

Things get a little harder with qualitative responses, where you're expecting certain words or sentences in the response. Your best bet there is to also have 100 examples of what you expect the response to be, plus some form of semantic similarity comparison between the response and your ground truth.
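A rough sketch of that similarity check, assuming the OpenAI Node SDK for embeddings; the model name, threshold, and getResponse callback are placeholders, not any specific eval package's API:

    import OpenAI from "openai";

    const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

    // Cosine similarity between two equal-length embedding vectors.
    function cosine(a: number[], b: number[]): number {
      let dot = 0, na = 0, nb = 0;
      for (let i = 0; i < a.length; i++) {
        dot += a[i] * b[i];
        na += a[i] * a[i];
        nb += b[i] * b[i];
      }
      return dot / (Math.sqrt(na) * Math.sqrt(nb));
    }

    // Embed the model's response and the ground-truth answer together,
    // then score how close they are.
    async function semanticScore(response: string, groundTruth: string) {
      const res = await client.embeddings.create({
        model: "text-embedding-3-small",
        input: [response, groundTruth],
      });
      return cosine(res.data[0].embedding, res.data[1].embedding);
    }

    // Run the whole eval set and report a pass rate.
    async function runEvals(
      cases: { input: string; expected: string }[],
      getResponse: (input: string) => Promise<string>,
      threshold = 0.85 // made-up cutoff; tune it against your own data
    ) {
      let passed = 0;
      for (const c of cases) {
        const score = await semanticScore(await getResponse(c.input), c.expected);
        if (score >= threshold) passed++;
      }
      console.log(`${passed}/${cases.length} cases above threshold`);
    }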
vismit2000 2 days ago
ivape 3 days ago
Have the LLM evaluate its own response: User → LLM → LLM (validates its own response) → User.
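As a sketch, that round trip could look like this, assuming the OpenAI Node SDK; the model name, rules, and retry cap are placeholders:

    import OpenAI from "openai";

    const client = new OpenAI();

    const RULES =
      "Never offer virtual viewings. Viewing requests must be handed off to a human.";

    // First call answers the user; second call asks the model to judge that
    // answer against the rules. Retry a bounded number of times on FAIL.
    async function answerWithSelfCheck(userMessage: string, maxAttempts = 3) {
      for (let attempt = 0; attempt < maxAttempts; attempt++) {
        const draft = await client.chat.completions.create({
          model: "gpt-4o-mini",
          messages: [{ role: "user", content: userMessage }],
        });
        const answer = draft.choices[0].message.content ?? "";

        const verdict = await client.chat.completions.create({
          model: "gpt-4o-mini",
          messages: [
            {
              role: "system",
              content: `Reply PASS or FAIL: does the answer follow these rules? ${RULES}`,
            },
            { role: "user", content: `Question: ${userMessage}\nAnswer: ${answer}` },
          ],
        });

        if (verdict.choices[0].message.content?.trim().startsWith("PASS")) {
          return answer;
        }
      }
      throw new Error("validator rejected every attempt");
    }

This pattern is commonly called LLM-as-judge. The judge can share the generator's blind spots, so it complements, rather than replaces, fixed test cases like the ones above.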
gberger 3 days ago
You want to do evals, yeah. |