Show HN: Flint – A 30B model fine-tuned for less repetition (springboards.ai)
6 points by thmsmxwll a day ago | 2 comments
Frontier LLMs show very little output diversity, even on open-ended queries. We built Flint to see if we could reverse this: it's a fine-tuned Qwen3 30B model specifically trained to produce higher-entropy responses to open-ended questions. Flint significantly increases the NoveltyBench score over the base model without meaningfully reducing scores on non-creative benchmarks like MMLU-STEM, which shows that divergence tuning doesn't have to be a tax on base capabilities. Flint scores 7.47/10 on NoveltyBench, while most frontier models score between 1.8 and 3.2.
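For intuition, the diversity being measured here is roughly "how many meaningfully distinct outputs do you get when you sample the same prompt repeatedly." Below is a minimal sketch of that idea; note that NoveltyBench itself uses a learned equivalence model to decide when two generations are "the same," so exact-match counting here is our simplification, not the benchmark's actual method:

```python
def distinct_score(outputs):
    """Count distinct outputs among repeated samples for one prompt.

    Crude proxy for diversity: normalizes whitespace and case so
    trivially different strings don't inflate the count. (Assumption:
    NoveltyBench uses a learned equivalence model instead of exact
    string matching; this is only an illustration.)
    """
    normed = {" ".join(o.split()).lower() for o in outputs}
    return len(normed)

def mean_distinct(samples_per_prompt):
    """Average distinct-output count across prompts; higher = more diverse."""
    scores = [distinct_score(s) for s in samples_per_prompt]
    return sum(scores) / len(scores)
```

A low-diversity model that collapses onto one answer per prompt would score near 1 here regardless of how many samples you draw, which is the failure mode the post describes in frontier models.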
Bolwin 14 hours ago | parent
Interesting. Would you be able to release it via an API or as open weights so we can use it outside the context of your application?