▲ | magicalist 2 days ago | |
> Granted, there have been a lot of times I have trouble finding a website in my history, open tabs or even bookmarks, so I could potentially see how that might be advantageous as long as I was in a situation where I had a second browser for "non-work" related tasks, or this was strictly prohibited in in-private mode.

Yeah, this seems like it would be super helpful, and would work really well with a smaller local-only model since it doesn't need to generate nice prose about the results or whatever (rough sketch of what I mean at the end of this comment). Until they keep the data strictly local, though, yes, I'm keeping it off too.

Weirdly, from their help page[1] they mention needing to "Have a high performance computer" as a requirement, and that

> When you turn on "History search, powered by AI," in addition to the page title and URL, the page contents of the website you browse at that time are stored locally.

and that the contents are even encrypted at rest, which makes you start to think they did it the right way, but then, no:

> When you use History search, powered by AI, your searches, generated answers, best matches, and their page contents are sent to Google. This information is used in accordance with the Google Privacy Policy to improve this feature, which includes generative model research and machine learning technologies

They don't outright say it anywhere, but it seems like the implication is that this is a strictly local-only model running (Nano), but then they ruin it by sending the history search results and all the page contents of those results to Google so they can use that to improve their models? Why why why.

Looking at the preference in Canary, it's just on/off. No "on, but don't send my search history and the contents of pages I've seen to Google".

> I'm still waiting on that 'Google Duplex'

FWIW this has been shipping for a long time. Try doing a reservation through Google Maps. If there's no OpenTable support or whatever, it'll make the phone call for you.
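To illustrate the "smaller local-only model" point: retrieval over your own history doesn't need anything generative at all, just an on-device embedding model and a nearest-neighbor lookup. Purely hypothetical sketch, not what Chrome/Gemini Nano actually does; the sentence-transformers model, the toy HISTORY list and the search() helper are all stand-ins for illustration:

    # Hypothetical sketch: local-only semantic search over browsing history.
    # sentence-transformers is a stand-in for whatever on-device model Chrome
    # actually ships; nothing in this script leaves the machine.
    import numpy as np
    from sentence_transformers import SentenceTransformer

    # Stand-in for locally stored history entries: (title, URL, page text).
    HISTORY = [
        ("Rust borrow checker explained", "https://example.com/rust-borrows",
         "ownership, lifetimes, borrowing rules ..."),
        ("Sourdough starter troubleshooting", "https://example.com/sourdough",
         "hydration, feeding schedule, why it won't rise ..."),
        ("Chrome history search help page", "https://support.google.com/chrome",
         "page contents are stored locally and encrypted at rest ..."),
    ]

    model = SentenceTransformer("all-MiniLM-L6-v2")  # small model, runs on-device

    # Embed title + page text once, at indexing time.
    doc_vecs = model.encode(
        [f"{title} {body}" for title, _, body in HISTORY],
        normalize_embeddings=True,
    )

    def search(query: str, k: int = 3):
        """Return the k best-matching history entries for a fuzzy query."""
        q = model.encode([query], normalize_embeddings=True)[0]
        scores = doc_vecs @ q               # cosine similarity (vectors are normalized)
        best = np.argsort(-scores)[:k]
        return [(float(scores[i]), HISTORY[i][0], HISTORY[i][1]) for i in best]

    if __name__ == "__main__":
        for score, title, url in search("that baking page about my starter not rising"):
            print(f"{score:.2f}  {title}  {url}")

Point being, the matching itself can be 100% local; the only reason anything has to leave the machine is the "improve this feature" part.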
▲ | pogue 2 days ago | parent | next [-] | |
> Weirdly, from their help page[1] they mention needing to "Have a high performance computer" as a requirement, and that

>> When you turn on "History search, powered by AI," in addition to the page title and URL, the page contents of the website you browse at that time are stored locally.

> and that the contents are even encrypted at rest, which makes you start to think they did it the right way, but then, no

AI is like a beast that has to be continuously fed forever to keep growing, and of course Google knows this. So they're always going to take your data to feed their beast and try to stay at least abreast of, if not ahead of, the competition.

I'm sure Google is also getting the message from publishers that they're sick of having their websites scraped by GoogleBot only for those results to wind up in the AI Summary & not lead to any actual traffic. So, what could Google do? What if they made everyone who ran Chrome scrape that data for them vicariously, just through normal browsing? Not only that, what if, in addition to having them scrape that data, they also had them process it locally on their own computers to save on cloud computing costs?

Just a theory... ;)

> FWIW this has been shipping for a long time. Try doing a reservation through google maps. If there's not open table support or whatever, it'll make the phone call for you.

That's cool about placing the call. Does it actually talk to the person on the other end, set up times & all that kind of stuff like they showed in the demo?