hobofan | 11 hours ago
Then criticize the providers on their defaults instead of claiming that they can't solve the problem?

> Or, if LLMs are so smart, why doesn't it say "Hmmm, would you like to use a different model for this?"

That's literally what ChatGPT did for me[0], which is consistent with what they shared at the last keynote (a quick, low-reasoning answer by default, with reasoning/search only if explicitly prompted or as a follow-up).

It did miss one match though, as it somehow didn't parse the `<search>` element from the MDN docs.

[0]: https://chatgpt.com/share/68cffb5c-fd14-8005-b175-ab77d1bf58...