mmargenot 21 hours ago
`outlines` (https://github.com/dottxt-ai/outlines) is very good and supported by vLLM as a backend structured output provider (https://docs.vllm.ai/en/v0.8.2/features/structured_outputs.h...) for both local and remote LLMs. vLLM is probably the best open source tooling for the inference side right now.
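For context, a minimal sketch of what structured output via vLLM's offline API looks like, assuming the ~v0.8.x `GuidedDecodingParams` interface described in the docs linked above; the model name and `Person` schema are placeholders, not anything from the original comment.

```python
# A minimal sketch of guided decoding with vLLM's offline API, assuming the
# ~v0.8.x interface. The model name and the Person schema are placeholders.
from pydantic import BaseModel
from vllm import LLM, SamplingParams
from vllm.sampling_params import GuidedDecodingParams


class Person(BaseModel):  # placeholder schema for illustration
    name: str
    age: int


llm = LLM(model="Qwen/Qwen2.5-1.5B-Instruct")  # any local model works here

params = SamplingParams(
    max_tokens=128,
    guided_decoding=GuidedDecodingParams(
        json=Person.model_json_schema(),  # constrain generation to this JSON schema
        backend="outlines",               # pick outlines as the structured-output provider
    ),
)

outputs = llm.generate(
    ["Extract the person: 'Ada Lovelace was born in 1815.'"],
    params,
)
print(outputs[0].outputs[0].text)  # JSON that validates against Person
```
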
colonCapitalDee 18 hours ago
I did look at Instructor. For structured output alone, pydantic-ai and Instructor are about the same, but pydantic-ai supports a ton of other stuff that isn't part of Instructor's feature set. For me the killer features were the ability to serialize/deserialize conversations as JSON, frictionless tool calling, and the ability to mock the LLM client for testing (sketched below).
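A rough sketch of those three features in pydantic-ai, assuming a recent release; the `CityInfo` schema and `lookup_population` tool are invented for illustration, and some names (`output_type` vs. `result_type`, `result.output` vs. `result.data`) have shifted between versions, so treat this as illustrative rather than exact.

```python
# Rough sketch of structured output, tool calling, JSON serialization of the
# conversation, and test-time mocking in pydantic-ai. Schema and tool are
# hypothetical; attribute names may differ across pydantic-ai versions.
from pydantic import BaseModel
from pydantic_ai import Agent
from pydantic_ai.models.test import TestModel


class CityInfo(BaseModel):  # hypothetical structured output type
    city: str
    country: str


agent = Agent("openai:gpt-4o", output_type=CityInfo)


@agent.tool_plain  # frictionless tool calling: a plain function the model may invoke
def lookup_population(city: str) -> int:
    return 8_800_000  # stubbed value for the sketch


def run_and_serialize(prompt: str) -> tuple[CityInfo, bytes]:
    result = agent.run_sync(prompt)
    # The conversation serializes to JSON, so it can be stored and replayed later.
    return result.output, result.all_messages_json()


def test_without_a_real_llm() -> None:
    # TestModel fabricates schema-valid responses, so tests need no API key.
    with agent.override(model=TestModel()):
        output, _ = run_and_serialize("What is the largest city in Japan?")
    assert isinstance(output, CityInfo)
```
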
maxdo 21 hours ago
I personally just switched to BAML (https://docs.boundaryml.com/guide/comparisons/baml-vs-pydant...). It just feels a bit more polished, especially the testing part.

Aherontas 7 hours ago
Haven't tried BAML; I'll give it a shot. I'm really curious to see all the features it supports.
|