twalichiewicz 5 days ago

Watching the announcement, every feature felt like something my phone already does—better.

With glasses, you have to aim your head at whatever you want the AI to see. With a phone, you just point the camera while your hands stay free. Even in Meta’s demo, the presenter had to look back down at the counter because the AI couldn’t see the ingredients.

It feels like the same dead end we saw with Rabbit and the Humane pin—clever hardware that solves nothing the phone doesn’t already do. Maybe there’s a niche if you already wear glasses every day, but beyond that it’s hard to see the case.

Gareth321 5 days ago | parent | next [-]

If executed well, I think this could reduce a lot of friction in the process. I can definitely unlock my phone and hold it with one hand while I prepare and cook, but that's annoying. If my glasses could monitor my progress and tell me what to do next while I'm doing it, that's far more convenient. It's clearly not there yet, but in a few years I have no doubt it will be. And this is just the start. With the screens they'll be able to offer AR. Imagine working on electronics or a car with the instructions overlaid on the screen while the AI provides verbal guidance.

01100011 5 days ago | parent | prev [-]

I'm oldish, so maybe I'm biased, but this sort of product seems like something no one will want, outside a few technophiles, but that the industry desperately needs you to want. It's like 3D TV: a solution in search of a problem, because the manufacturers need to make the next big thing with the associated high margins.

To me the phone is a pretty good form factor. It's convenient enough (especially with voice control), unobtrusive, socially acceptable, and I need to own one anyway because it's a phone. I'm a geek, so I think this tech is cool, but I see zero chance I would use one, even if it were a few steps better than it is.