nickreese 10 hours ago
I'm just now moving my main workflows off OpenAI to local models, and I'm finding that these smaller models' main failure mode is accepting edge cases in their eagerness to be helpful, especially in extraction tasks. This shows up as inventing data or rationalizing around clear roadblocks. My biggest hack so far is giving them an out named "edge_case" and telling them it is REALLY helpful if they identify edge cases. Simply renaming "fail_closed" or "dead_end" options to "edge_case", with helpful wording, makes Qwen models adhere to their prompting more closely. It feels like there are hundreds of these small hacks that people must have discovered... why isn't there a centralized place where people record these learnings?
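A minimal sketch of what this could look like in a structured-output setup. The schema, field names (`invoice_number`, `total`), and wording are hypothetical illustrations, not the commenter's actual prompt; the point is that the union type gives the model an explicitly "helpful"-framed escape hatch instead of forcing it to fabricate values:

```python
import json

# Hypothetical extraction schema: alongside the normal result shape, the
# model gets an "edge_case" branch. The description frames flagging an
# edge case as helpful, rather than as a failure ("fail_closed"/"dead_end").
EXTRACTION_SCHEMA = {
    "type": "object",
    "properties": {
        "result": {
            "oneOf": [
                {   # normal extraction path
                    "type": "object",
                    "properties": {
                        "invoice_number": {"type": "string"},
                        "total": {"type": "number"},
                    },
                    "required": ["invoice_number", "total"],
                },
                {   # the "out": reporting an edge case is framed as helpful
                    "type": "object",
                    "properties": {
                        "edge_case": {
                            "type": "string",
                            "description": (
                                "It is REALLY helpful if you report here any "
                                "input that doesn't cleanly fit the schema, "
                                "instead of guessing or inventing values."
                            ),
                        }
                    },
                    "required": ["edge_case"],
                },
            ]
        }
    },
}


def parse_response(raw: str):
    """Route a model's JSON reply: extracted data, or the flagged edge case."""
    result = json.loads(raw)["result"]
    if "edge_case" in result:
        return ("edge_case", result["edge_case"])
    return ("ok", result)
```

The caller then handles the `"edge_case"` branch explicitly (log it, queue for review) rather than silently accepting invented data.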
rotexo 6 hours ago
Can you describe this more? Is “edge_case” a key in the structured output schema?
alach11 9 hours ago
Just curious: are you using Open WebUI or LibreChat as a local frontend, or do all your workflows call the models directly without a UI?