Teleoflexuous | 2 days ago
There's a simple explanation that isn't 'prerecorded'. I'd be very happy to accuse Meta of faking a demo, but 1) that's just a weird way to fake a demo and 2) the effect has an easier explanation. You ask the AI how to do something. The AI generates steps for that thing. It has a concept of steps, so when you say 'back' it returns to the last step. When you ask how to do something, it finishes explaining the general idea and moves on to the first step. You interrupt it. It assumes it went through the first step and won't let you go back. The first step here was mixing some sauces. That's it. It's a dumb way to build a tool, but if I wanted one that would just work for a demo, that's how I'd do it.

Have you ever tried a voice assistant guiding you through something? Convincing Gemini that something it described didn't happen takes a direct 'X didn't happen', and even that doesn't work perfectly. The demo still failed, and it absolutely wasn't a Wi-Fi issue (lmao, the technology of the future at a $2T company), but it just doesn't seem rigged.
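A minimal sketch of the failure mode being described, assuming a naive step tracker that marks a step "done" the moment it starts reading it aloud. All class names and steps here are invented for illustration, not Meta's actual code:

    # Hypothetical sketch, not Meta's code: a guide that records a step
    # as completed as soon as it starts reading it out, so an
    # interruption mid-sentence still counts the step as done.
    class StepGuide:
        def __init__(self, steps):
            self.steps = steps
            self.completed = set()
            self.cursor = 0

        def read_current(self):
            # The bug: "done" is recorded when reading *starts*,
            # so interrupting the guide doesn't undo it.
            self.completed.add(self.cursor)
            return self.steps[self.cursor]

        def next(self):
            self.cursor = min(self.cursor + 1, len(self.steps) - 1)
            return self.read_current()

        def back(self):
            self.cursor = max(self.cursor - 1, 0)
            if self.cursor in self.completed:
                return "You've already done: " + self.steps[self.cursor]
            return self.read_current()

    guide = StepGuide(["Combine the base ingredients", "Add the steak"])
    guide.read_current()  # starts step 1; user interrupts mid-sentence
    guide.next()          # guide moves on to step 2
    print(guide.back())   # "You've already done: Combine the base ingredients"

Interrupting during read_current() leaves step 1 in the completed set, so back() refuses to re-explain it, which matches the "you've already combined" response in the demo.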
timmytokyo | 2 days ago
AI: "You've already combined the base ingredients." Except, no. He hadn't. | ||||||||||||||
| ||||||||||||||
Barbing | 2 days ago
I'd figured it performed image recognition on the scene visible to it, then told the language model it could see various ingredients, including some combined in a bowl.
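If that's the pipeline, the failure is easy to picture: the language model never sees the frame, only the recognizer's text labels. A toy sketch with a stubbed-out recognizer (every name here is invented for illustration):

    # Hypothetical pipeline: vision labels pasted into the LLM prompt.
    def mock_scene_recognizer(frame):
        # Stand-in for a real vision model; suppose it misreads a bowl
        # of separate sauces as already-mixed base ingredients.
        return ["steak", "gochujang", "bowl of combined base ingredients"]

    def build_prompt(detections, user_utterance):
        scene = ", ".join(detections)
        return ("Visible in the scene: " + scene + "\n"
                "User: " + user_utterance + "\n"
                "Assistant:")

    print(build_prompt(mock_scene_recognizer(frame=None),
                       "What do I do first?"))
    # The model only ever sees the (wrong) text label, so it will
    # confidently assert the base is already combined.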