pestaa 14 hours ago
In contrast to the Economist blaming inefficient workers for sabotaging the spread of this wonderful technology, make sure to check out https://pivot-to-ai.com/, where David Gerard has been questioning whether people are prompting it wrong or whether AI is just not that smart.
WarOnPrivacy 14 hours ago
> David Gerard has been questioning whether people are prompting it wrong or AI is just not that smart.

If an AI can't understand well-enunciated context, I'm not inclined to blame the person who is enunciating the context well.
ktallett 12 hours ago
Tech only spreads when the average user can get value out of their average interactions with it. Having to tailor prompts to be so specific is a flaw, and even then it is at times absolutely useless. For AI to be genuinely useful, it needs to accept when it doesn't know something and say so, rather than producing waffle. Just as in life, "I don't know" is sometimes far more intelligent than making something up.
whatever1 14 hours ago
Not sure if it is smart, but it is definitely not reliable. Try the exact same prompt multiple times and you will get different answers. I was trying an LLM as a chatbot to flip some switches in a UI. It had less than a 30% success rate responding to “flip that specific switch to ON”. The even more annoying thing is that the response is pure gaslighting (like “the switch you specified does not exist”, or “the switch cannot be set to ON”).
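For illustration, here is a minimal sketch of the kind of setup described above, assuming the LLM is asked to answer with JSON naming a switch and a state; the switch names, the apply_model_reply function, and the example replies are all hypothetical, not the commenter's actual code. The point is that errors like "the switch you specified does not exist" should come from deterministic validation in the app, never from the model's free-text reply.

    import json

    # Hypothetical UI state: the only switches the chatbot may flip.
    SWITCHES = {"dark_mode": False, "notifications": True, "auto_save": False}

    def apply_model_reply(reply_text: str) -> str:
        """Validate the LLM's JSON reply against the real switch list
        before touching the UI, instead of trusting its prose."""
        try:
            cmd = json.loads(reply_text)
            name, state = cmd["switch"], cmd["state"]
        except (json.JSONDecodeError, KeyError, TypeError):
            return "error: reply was not JSON with 'switch' and 'state'"
        if name not in SWITCHES:
            return f"error: unknown switch '{name}'"
        if state not in ("ON", "OFF"):
            return f"error: invalid state '{state}'"
        SWITCHES[name] = (state == "ON")
        return f"ok: {name} set to {state}"

    # Example replies an LLM might produce for "flip dark mode to ON".
    print(apply_model_reply('{"switch": "dark_mode", "state": "ON"}'))  # ok
    print(apply_model_reply('{"switch": "dark_mod", "state": "ON"}'))   # caught by the app, not gaslit by the model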