lxgr | 7 hours ago
As a user, I'd gladly opt into a slightly less deeply integrated Siri that understands what I want from it. Build a crude router in front of it if you must, or give the LLM access to "the old Siri" as a tool it can call, and let the LLM decide whether to return its own response or a Siri-generated one! I bet even smaller LLMs could figure out, given a user input and Siri response pair, whether the request was reasonably answered, whether the model itself could do better, or at least explain that the request is out of scope for now.
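The router idea above can be sketched in a few lines. Everything here is hypothetical: `legacy_siri` and `judge` are stand-ins (a dict lookup and a string heuristic) for the real intent engine and a small judge LLM; none of this reflects any actual Apple API.

```python
def legacy_siri(request: str) -> str:
    # Stand-in for the existing intent-based Siri: handles a few
    # canned intents, fails with a canned apology otherwise.
    intents = {
        "set a timer for 5 minutes": "Timer set for 5 minutes.",
        "what time is it": "It is 3:42 PM.",
    }
    return intents.get(request.lower(), "Sorry, I didn't get that.")

def judge(request: str, siri_response: str) -> bool:
    # Stand-in for a small LLM judging whether Siri's answer actually
    # addressed the request. Here: a trivial heuristic that flags
    # Siri's canned failure reply.
    return not siri_response.startswith("Sorry")

def route(request: str) -> str:
    # The crude router: try the old Siri first, let the judge decide
    # whether to pass its answer through or fall back to the LLM.
    siri_response = legacy_siri(request)
    if judge(request, siri_response):
        return siri_response
    # Fallback: the LLM answers itself, or explains its limits.
    return f"[LLM] I can't act on '{request}' on-device yet, but I can try to answer it directly."
```

The judge only ever sees a (request, response) pair, which is why even a small model plausibly suffices: it's a binary classification, not generation.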