| ▲ | gedy 2 hours ago |
Was anything even attempted? Looking from the outside, Siri is the same as it's always been, with no improvement in a decade.
|
| ▲ | danielheath 2 hours ago | parent | next [-] |
Ten years ago, if it didn't understand what I meant, it told me so after 1-2 seconds. Now, it'll show a loading indicator for 5-6 seconds and then do nothing at all... or do something entirely unrelated to my request (e.g. responding to "hey siri, how much is fourteen kilograms in pounds" by playing a song from my music library).
| |
| ▲ | JumpCrisscross 2 hours ago | parent | next [-] | | > or do something entirely unrelated to my request (e.g. responding to "hey siri, how much is fourteen kilograms in pounds" by playing a song from my music library) My personal favourite is Siri responding to a request to open the garage door, a request it had successfully fielded hundreds of times before, by placing a call to the Tanzanian embassy. (I've never been to Tanzania. If I have a connection to it, it's unknown to me. The best I can come up with is that Zanzibar sort of sounds like garage door.) | | |
| ▲ | npunt 2 hours ago | parent [-] | | I'm amazed more AI tools don't have reality checks as part of the command flow. If you take a UX-first perspective on AI - which Apple very much should - some x% of commands will be interpreted incorrectly, causing unintended and undesirable actions. A reasonable way to handle these failure cases is a post-interpretation reality check. This could be personalized ('does this user do this kind of thing?'), checking the user's action history for anything similar, or generic ('is this the type of thing a typical user does?'). In both cases, if the command is unfamiliar you have a few options: try to interpret it again (maybe with a better model), prompt the user ('do you want to do x?'), or, if it's highly unfamiliar, auto-cancel the command and apologize. |
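To make that flow concrete, here is a minimal sketch of what such a post-interpretation reality check could look like. Everything in it is hypothetical (the action names, the thresholds, the `action_history` store); it only illustrates the routing between execute / confirm / retry / cancel described above:

```python
# Hypothetical sketch of a post-interpretation reality check.
# All names and thresholds are invented for illustration;
# this is not how Siri (or any shipping assistant) actually works.

from dataclasses import dataclass

@dataclass
class Interpretation:
    action: str        # e.g. "open_garage_door" or "call_contact"
    confidence: float  # the parser's own confidence in [0, 1]

# Assumed store of the user's past actions and how often each occurred.
action_history = {"open_garage_door": 412, "set_timer": 98, "play_music": 37}

def reality_check(interp: Interpretation) -> str:
    """Decide what to do after a command has been interpreted but before it runs."""
    familiarity = action_history.get(interp.action, 0)

    # Personalized check: has this user routinely done this before?
    if familiarity > 10 and interp.confidence > 0.5:
        return "execute"  # routine action, just do it

    # Unfamiliar but plausible: ask before acting.
    if interp.confidence > 0.5:
        return "confirm_with_user"  # "Do you want to call the Tanzanian embassy?"

    # Highly unfamiliar and low confidence: re-interpret, or give up gracefully.
    if familiarity == 0:
        return "retry_with_better_model"
    return "cancel_and_apologize"

if __name__ == "__main__":
    print(reality_check(Interpretation("open_garage_door", 0.9)))        # execute
    print(reality_check(Interpretation("call_tanzanian_embassy", 0.6)))  # confirm_with_user
    print(reality_check(Interpretation("call_tanzanian_embassy", 0.3)))  # retry_with_better_model
```

The key point is that the check runs after interpretation but before execution, so a confident-but-absurd parse (the embassy call above) gets caught by the familiarity test instead of being carried out.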
| |
| ▲ | datadrivenangel 2 hours ago | parent | prev | next [-] | | Apple shot themselves in the foot in the late 2010s by switching to deep learning methods and making things slower and worse, with the spell checker being the worst example. | |
| ▲ | rzzzt 2 hours ago | parent | prev [-] | | Just like a person would! |
|
|
| ▲ | Onavo 2 hours ago | parent | prev [-] |
Supposedly the main blocker to launching is that Apple considers it reputational damage if the AI hallucinates. They take a very conservative approach to LLMs (on the other hand, they're happy to scan all your photos and messages under the guise of child safety and send the data to the government and ChatControl). Problem is, Siri is already damaging Apple's reputation with how useless it is.
| |
| ▲ | pharos92 2 hours ago | parent | next [-] | | I remember buying the iPhone 4S in 2011, the first iPhone to ship with Siri. It's 2025, and Siri is still fundamentally useless. | |
| ▲ | dhussoe 35 minutes ago | parent | next [-] | | No, it's very useful for setting timers, and for setting garbled reminders for a time soon enough that I'll still remember what I actually meant rather than being confused by whatever it spewed out instead! | |
| ▲ | Onavo 2 hours ago | parent | prev [-] | | Well, it's still powered by the old codebase doing slot-filling named-entity/intent detection, which will route you to Safari the moment it gets stuck ¯\_(ツ)_/¯. |
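For readers unfamiliar with the term: slot-filling systems match an utterance against a fixed set of intent templates and extract typed "slots" from it; anything that matches no template falls through to a catch-all, classically a web search. A toy sketch, with all intents and patterns invented for illustration (this is not Apple's actual pipeline):

```python
# Toy sketch of slot-filling intent detection with a web-search fallback,
# in the style described above. Intents and patterns are invented.

import re

# Each intent is a regex whose named groups are the slots to fill.
INTENTS = {
    "convert_units": re.compile(
        r"how much is (?P<amount>[\w\s-]+?) (?P<from_unit>\w+) in (?P<to_unit>\w+)"),
    "set_timer": re.compile(r"set a timer for (?P<duration>[\w\s]+)"),
}

def route(utterance: str):
    text = utterance.lower().strip()
    for intent, pattern in INTENTS.items():
        match = pattern.search(text)
        if match:
            return intent, match.groupdict()  # slots successfully filled
    # No template matched: the dead end where the assistant gives up
    # and hands you off to a web search.
    return "web_search", {"query": text}

if __name__ == "__main__":
    print(route("How much is fourteen kilograms in pounds"))
    # ('convert_units', {'amount': 'fourteen', 'from_unit': 'kilograms', 'to_unit': 'pounds'})
    print(route("Open the garage door"))
    # ('web_search', {'query': 'open the garage door'})
```

The brittleness is visible in the fallback: any request outside the hand-written template set, however routine, gets the same "search the web" treatment.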
| |
| ▲ | gedy 2 hours ago | parent | prev [-] | | Yeah, I guess I've always distinguished "hallucinating" as, e.g., I ask for a chicken soup recipe and it tells me how to make cyanide, vs. some social media person prompt-hacking it into saying fascism is good, etc. I've seen more of the latter than the former. |
|