jauntywundrkind a day ago
This is a joke of an objection. This API is a neutral party and can be iterated on. These are particular objections to an implementation detail, one that can be swapped out, iterated on, improved, and tailored to the user, by the user working with their user agent and their choice of LLM. Its failures seem to be in the realm of personal preference rather than anything fundamental, harmful, or damaging. So what if the English isn't exactly to your liking on the current model? Resting the argument against on such petty matters makes my head reel. The spec allows iterating and improving on the base model and its system prompts.

Rather than using the API to cajole this very particular behavior that Jake seems to want everywhere, the user agent could let him set system prompts or pick a more suitable UK-trained model, if that's really so important to him. The user agent is the proper channel for the agency Jake is seeking here. There's nothing preventing the user and their user agent from negotiating which model they use.

I don't think we should all be held hostage by naysaying from people who decide that "the model needs to talk like a pirate, but this model didn't do that well" is grounds for a block. That's circumstantial nonsense: blocking the user agent from working with the user to extend user agency, over such a narrow concern, one that must be free to iterate anyhow!! This API is the best basis we have to allow that negotiation to happen out of band, outside the scope of the web API offered here, by agents.

It's not up to the page to define this in the first place. It's the browser, the user agent, that (as has always been the case on the web) builds user agency at its offered level of customizability and complexity. Maybe not every browser offers a "speak like a pirate" option. That lack is not a ding on the web Prompt API! The objections as stated have no resolution.
This is a forever block, for all time, aggrieved that not every model is going to behave exactly, predictably, perfectly, with no possible way out of the conundrum. The grievance of this submission is that sites will try to work around this, but that grievance is built on the assumption that all agency has to lie with the site: that it's the site's obligation to fix US vs. British English, that it's the site's job to tailor the agent. That's not feasible, not possible, not sensible, ever. The user agent is the mediator between the site, the user, and the agent, and that is going to be a complex, evolving, dynamic relationship. The "failure" Jake cites here, that the site can't fully sculpt the experience, is unreasonable; it's an anti-goal. It's up to the user and the browser to shape the agent for them, not each site.

I find these objections deeply, deeply misguided. But worse, I find they insist on perfection. There is no direction offered, no improvement suggested. The site can't make agents perfect, therefore no one gets agents: that's all this says. It's fucking bullshit and fuck this a lot. (I love Jake and they have done so much good, so so so many times, but this is an impossible situation they are creating, leaving zero space for possibility, for maybe, and zero leadership on how else we can do what obviously must be done. Alas, I think Mozilla at large has become the anti-possible company of web standards, which is a detestable position, one I had hoped might improve eventually.)
jaffathecake a day ago
> The user agent is the proper channel for the agency Jake is seeking here. There's nothing preventing the user & their user agent from negotiating what model they use.

This isn't how it works. As the developer, you use the system prompt to set a particular personality for the chat bot. E.g., when you use an LLM in VS Code, it comes with a system prompt that makes it an effective code assistant.

Now, in VS Code you can select a different model, which is maybe where your misconception comes from. But when you select a different model, it also uses a different system prompt, designed to achieve the same personality but tailored for that particular model. Once you figure out why they do that, you'll understand why your position here doesn't make sense.
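The pattern described above, one persona but a per-model system prompt, can be sketched roughly like this (the model names and prompt wording are hypothetical, for illustration only):

```javascript
// Hypothetical sketch: the developer wants one persona ("concise code
// assistant"), but keeps a system prompt tuned per model, because the
// same wording does not steer every model equally well.
// Model names and prompt text are made up for illustration.
const SYSTEM_PROMPTS = {
  "model-a": "You are a concise coding assistant. Answer with code first, prose second.",
  "model-b": "Act as a senior engineer pairing with the user. Keep answers short and prefer code examples over explanation.",
};

function systemPromptFor(model) {
  const prompt = SYSTEM_PROMPTS[model];
  if (!prompt) {
    throw new Error(`No tuned system prompt for model: ${model}`);
  }
  return prompt;
}

// Swapping the model swaps the prompt too; the persona stays constant.
console.log(systemPromptFor("model-a"));
```

The point of the sketch is that the prompt is part of the developer's product, chosen per model, not something the user agent can silently substitute without changing the product's behavior.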