| ▲ | bandrami 3 hours ago |
These are cool tricks, but this seems like an impedance mismatch: why would you use an LLM (a probabilistic source of plausible text) in a situation where you need deterministic output, where plausibility isn't enough?
| ▲ | orbital-decay 3 hours ago | parent |
You... don't. That's exactly what structured outputs are for! You offload the formally defined parts of the generation to a tool better suited to them, and leave the ambiguous part of the task to the model. Code is one example of such a mixed case; getting any machine-parsable output from a model is another. Sure, you can reformat the output after generation, but you still need the generation to be parsable for that to work. In many cases, generating in the required format from the start also gives the model context that leads to better replies.
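For concreteness, here's a minimal sketch of that split, assuming the OpenAI Python SDK's structured-output helper (the Invoice schema and the prompt are made up for illustration): the schema pins down the machine-parsable part, and the model only has to resolve the ambiguous natural-language part.

    # Sketch, not from the thread: constrain the formally defined part of the
    # output with a schema, leave the interpretation of the text to the model.
    from pydantic import BaseModel
    from openai import OpenAI

    class Invoice(BaseModel):          # hypothetical schema for illustration
        vendor: str
        total: float
        currency: str

    client = OpenAI()

    completion = client.beta.chat.completions.parse(
        model="gpt-4o-2024-08-06",
        messages=[
            {"role": "system", "content": "Extract the invoice fields."},
            {"role": "user", "content": "ACME Corp billed us 1,200.50 EUR last week."},
        ],
        response_format=Invoice,       # decoding is constrained to this schema
    )

    invoice = completion.choices[0].message.parsed  # already a validated Invoice
    print(invoice.vendor, invoice.total, invoice.currency)

The point is that the consumer of the output never has to parse free-form prose: the shape is guaranteed by the constrained decoder, while the judgement calls (which token is the vendor, what the total is) stay with the model.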