Incipient · 9 hours ago
> there's little correlation between token spend and the quality

My sentiment exactly! I use a very similar scaffold for each of my prompts, and I feel I provide similar context files, yet sometimes I get a truly inspired, complex, and functionally complete response... and sometimes I'd have been better off running lorem ipsum through a Python interpreter. I can't find any rhyme or reason behind the successes. I'm not sure whether prompting is significantly more nuanced than I realise, or whether it's the statistical magic having a laugh at me.

> open source models on inference-optimized hardware.

Is this actually a thing? Or are you talking about some hypothetical "opus 4.7 ASIC"?