batshit_beaver 11 hours ago
This is going to be interesting. At least for coding, there's little correlation between token spend and the quality (and impact) of the resulting AI suggestion. That's fine when inference prices are capped (e.g. via a monthly subscription plan or self-hosting), but otherwise it rapidly discombobulates the relationship between provider and user. It still seems like OpenAI has no moat, and neither does anyone else, as the only reasonable way to use the coding slot machines is going to be via open source models on inference-optimized hardware. Still better than the secret lobotomization they were doing on subscription plan models, though.
Incipient 9 hours ago | parent
> there's little correlation between token spend and the quality

My sentiment exactly! I have a very similar scaffold for each of my prompts, and feel I provide similar context files, yet sometimes I get a truly inspired, complex, and functionally complete response... and sometimes I'd have been better off running lorem ipsum through a python interpreter. I can't find any rhyme or reason to success. I'm not sure if prompting is significantly more nuanced than I realise, or if it's the statistical magic having a laugh at me.

> open source models on inference-optimized hardware.

Is this actually a thing? Or are you talking about some hypothetical "opus 4.7 ASIC"?
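One plausible explanation for the "same prompt, wildly different results" experience is ordinary temperature sampling: when two candidate next tokens have near-tied probabilities, a small random draw decides which path the model commits to, and the completions then diverge. A minimal toy sketch (not any provider's actual API; the logits and function names here are made up for illustration):

```python
import math
import random

def sample_token(logits, temperature, rng):
    """Pick a token index from raw logits.

    temperature ~ 0 -> greedy (deterministic argmax);
    higher temperature -> random draw from the softmax distribution.
    """
    if temperature <= 1e-6:
        return max(range(len(logits)), key=lambda i: logits[i])
    # Softmax with temperature scaling (max-subtracted for stability).
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Inverse-CDF sampling from the categorical distribution.
    r = rng.random()
    acc = 0.0
    for i, p in enumerate(probs):
        acc += p
        if r <= acc:
            return i
    return len(probs) - 1

# Two near-tied candidates: a tiny random draw flips which one is chosen.
logits = [2.0, 1.9, 0.5]
rng = random.Random(0)

greedy = {sample_token(logits, 0.0, rng) for _ in range(100)}
sampled = {sample_token(logits, 1.0, rng) for _ in range(100)}
print(len(greedy), len(sampled))
```

At temperature ~0 the same "prompt" (the same distribution) always yields the same token; at temperature 1 the 100 draws land on multiple tokens, and in a real model each different early token steers the rest of the completion somewhere else. This doesn't rule out prompt nuance mattering too, but it shows variance is built in even with an identical scaffold.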