| ▲ | zmmmmm 5 days ago |
| If you watch carefully, he preempts the AI with "What do I do first" before it even answers the first time. That strongly suggests to me that he did the same in rehearsal, and hence this was far more than just "bad luck" or bad connectivity. Perhaps the bad connectivity stopped the override from working and it just kept repeating the previous response. Either way, it suggests to me some troubling early implications for how well Meta's AI work is going, if they got stuck on something this simple in the main live demo for their flagship product. |
|
| ▲ | daemonologist 5 days ago | parent | next [-] |
| I think preempting the AI the first time was meant to be a feature (it's not trivial to implement and is something people often ask for). Failing from there definitely wasn't great, although it's kind of what I'd expect from an(y) LLM. |
| |
| ▲ | WD-42 5 days ago | parent [-] | | No, he preempted it because it was about to list all the ingredients necessary to make a steak sauce, despite having them in front of him. These are glasses; it should have skipped that part and gone straight to what to do first. |
|
|
| ▲ | exitb 5 days ago | parent | prev | next [-] |
| The way he clung to „what do I do first” makes me think that the whole conversation was scripted in the prompt and the AI was asked to reply in a specific way to specific sentences. Possibly it wasn't even actually connected to the camera? |
| |
| ▲ | whywhywhywhy 5 days ago | parent | next [-] | | Yeah, as a fully integrated system and the selling point, I'd expect you'd say something like "Look again, I think you're getting ahead of yourself". Maybe the tech wasn't quite foolproof, so they tried to fake it and then the fake version messed up. | |
| ▲ | klabb3 5 days ago | parent | prev [-] | | I distrust Meta (and hate these voice assistants) as much as the next guy, but to me it’s obvious that you would prepare the prompt and use pretty much the exact phrasing. Also, repeating yourself is normal if there’s no response at all. If it was truly all fake, why not cheat outright and just prerecord all of it? |
|
|
| ▲ | mrandish 5 days ago | parent | prev | next [-] |
| > Either way it suggests some troubling early implications about how well Meta's AI work is going
| I fully expect the AI to suck initially and then, over many months of updates, evolve into something mostly annoying and only occasionally mildly useful. However, the live stage demo failing isn't necessarily supporting evidence. Live stage demos involving wifi are just hard: in addition to the normal device functionality they're demoing, they need to simultaneously compress and transmit a screen share of the final output back over wifi so the audience can see it. And they have to do all that in a highly challenging RF environment that's basically impossible to simulate in advance. Frankly, I'd be okay with them using a special headset that has a hard-wired data link for the stage demo. |
| |
| ▲ | bauruine 5 days ago | parent | next [-] | | I assume you haven't watched the video, because it's just a live stream of a guy standing in a kitchen and talking to his glasses. He's not on the stage with hundreds of people on the wifi, and you can't see what the glasses are displaying at all. | | |
| ▲ | mrandish 4 days ago | parent [-] | | The link in this thread to the live glasses demo is of Zuckerberg at FB Connect. The "fail" is when someone repeatedly tries to call the glasses he is wearing on stage. The person calling apparently has no trouble placing the inbound calls, but the glasses Zuckerberg is wearing on stage fail to answer the call successfully. And the streamed video clearly shows the interface of Zuckerberg's glasses full-screen, as well as showing that the interface is being sent to the stage screen so the live audience can see it. So, the failure was apparently with the glasses Zuckerberg's wearing on stage not establishing a two-way video call while simultaneously streaming its own interface for the live stream and big screen. He said it worked dozens of times in rehearsal, and one notable difference was that for the real demo hundreds of other wifi devices were present in the room. I have quite a bit of experience producing live keynote demos at large tech events, so I don't think I'm confused about this. As an aside, when we're being shown "Zuckerberg's POV" through the glasses, I believe that's actually something custom put together for demos, because the normal glasses don't even have a mode which shows the wearer's POV. Creating that view requires sending both the internal output of the glasses, which is the corner inset overlay, AND the full-screen output of the glasses' live camera - which are then composited together backstage to create the combined image we see representing what Zuckerberg sees. Sending all of that while establishing a two-way video call is a lot for a resource-constrained mobile device. |
| |
| ▲ | hattmall 5 days ago | parent | prev [-] | | I run multiple live streams from speakers to conference rooms, plus other bandwidth-intensive offerings, throughout the day in an incredibly crowded RF space. WiFi is certainly up to the task. Meta is a nearly $2 trillion company; a failure of this order is ridiculous. |
|
|