▲ | sethaurus 5 days ago |
One really nice thing about using LLMs as components is that they just generate text. We've taught them to sometimes emit JSON messages representing a structured query or command, but it still comes out of the model as text; the model doesn't have any IO or state of its own. The actual program can then decide what, if anything, to do with that structured request. I don't like the gradual reframing of the model itself as being in charge of the tools, aided by a framework that executes whatever the model pumps out. It's not good to abstract away the connection between the text-generator and the actual volatile IO of your program.
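A minimal sketch of the pattern described here: the model's output is treated as plain text, and the surrounding program, not the model, decides whether any IO actually happens. All names (`handle_model_output`, the `action`/`args` schema, the allowed-action set) are hypothetical illustrations, not any particular framework's API.

```python
import json

def handle_model_output(text: str) -> str:
    """The model only produced text; this function decides what to do with it."""
    try:
        request = json.loads(text)
    except json.JSONDecodeError:
        return "ignored: not a structured request"
    if not isinstance(request, dict):
        return "ignored: not a structured request"
    # Whitelist of actions the *program* is willing to perform.
    # The model can ask for anything; only these ever execute.
    allowed = {"search", "calculate"}
    action = request.get("action")
    if action not in allowed:
        return f"refused: {action!r} is not an allowed action"
    return f"executing {action} with args {request.get('args')}"

print(handle_model_output('{"action": "search", "args": {"q": "weather"}}'))
print(handle_model_output('{"action": "delete_files"}'))
print(handle_model_output("Sure! Here's the answer..."))
```

The point is that the text-to-IO boundary stays visible in your own code: the model's request is just data until the program explicitly maps it to an effect.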
▲ | dmos62 4 days ago | parent |
What kind of abstractions around llm-io would you prefer?