ninjagoo 3 hours ago

The verbosity is likely a result of the system prompt telling the LLM to be explanatory in its replies. If the system prompt were set to have the model output the shortest final answer, you would likely get the result you want. But then for other questions you would lose the benefit of a deeper explanation. It's a design tradeoff, I believe.
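To make the tradeoff concrete, here is a minimal sketch of swapping system prompts, using the common OpenAI-style chat message schema; the prompt strings themselves are illustrative, not taken from any actual product:

```python
# Illustrative only: two hypothetical system prompts that steer verbosity.
CONCISE = "You are a helpful assistant. Reply with only the shortest correct final answer."
EXPLANATORY = "You are a helpful assistant. Explain your reasoning before giving the answer."

def build_messages(question: str, concise: bool) -> list[dict]:
    """Assemble a chat payload with one of the two system prompts.

    Follows the widely used {"role": ..., "content": ...} message
    format; pass the result to whatever chat-completion API you use.
    """
    system = CONCISE if concise else EXPLANATORY
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": question},
    ]
```

The same user question then yields short or explanatory replies purely from the system message, which is the design tradeoff being described.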

BoredomIsFun 2 hours ago | parent [-]

My system prompt is the default: "You are a helpful assistant". But that's beside the point. You don't want outputs that are too concise, as that would degrade the result, unless you are using a reasoning model.

I recommend rereading my top level comment.