koolala 6 days ago

I'm ok with Translation because it's best solved with AI. I'm not ok with it when Firefox "uses AI to read your open tabs" to do things that don't even need an AI based solution.

kbelder 6 days ago | parent [-]

There are levels of this, though, more than two:

    local, open model
    local, proprietary model
    remote, open model (do these exist?)
    remote, proprietary model
There is almost no harm in a local, open model. Conversely, a remote, proprietary model should always require opting in, with clear disclaimers. The response needs to be proportional.
koolala 6 days ago | parent | next [-]

The harm to me is that the implementation is terrible, local or not (assuming no AI-based telemetry). If their answer is AI, it pretty much means they won't build a non-AI solution. Today I got my first stupid AI tab grouping in Firefox, and it makes zero intuitive sense. I just want grouping, not an AI reading my tabs: it should just be based on where my tabs were opened from. I also tried Waterfox today because of this post, and while I'd prefer horizontal grouping, at least their implementation isn't stupid. Language translation is an opaque, complex process; grouping tabs from other tabs is not, and it's no good when it's opaque and unpredictable. It doesn't need AI.
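
For contrast, a rough sketch of the deterministic alternative I mean, using the openerTabId field that the WebExtensions tabs API already exposes (the function name and other types here are just illustrative):

    // Sketch only: group each tab with the tab that opened it, walking
    // up the opener chain so children land in the root tab's group.
    // openerTabId is a real (optional) field on WebExtensions tabs;
    // everything else here is made up for illustration.
    interface Tab {
      id: number;
      openerTabId?: number;
    }

    function groupByOpener(tabs: Tab[]): Map<number, number[]> {
      const byId = new Map<number, Tab>(tabs.map((t) => [t.id, t] as const));
      // Opener chains are acyclic in practice (an opener predates the
      // tabs it opens), so this walk terminates.
      const rootOf = (t: Tab): number => {
        let cur = t;
        while (cur.openerTabId !== undefined && byId.has(cur.openerTabId)) {
          cur = byId.get(cur.openerTabId)!;
        }
        return cur.id;
      };
      const groups = new Map<number, number[]>();
      for (const t of tabs) {
        const root = rootOf(t);
        groups.set(root, [...(groups.get(root) ?? []), t.id]);
      }
      return groups;
    }
Same input, same grouping, every time; no model in the loop.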

enriquto 5 days ago | parent | prev | next [-]

What do you mean by "open"?

Open weights, or open training data? These are very different things.

kbelder 5 days ago | parent [-]

That is a good point, and I think the takeaway is that there are lots of degrees of freedom here. Open training data would be better, of course, but open weights are still better than a completely hidden model.

enriquto 5 days ago | parent [-]

I don't see the difference between "local, open weights" and "local, proprietary weights". Is that just the handful of lines of code that call the inference?

The model itself is just a binary blob, like a compiled program. Either you get its source code (the complete training data) or you don't.

Terr_ 5 days ago | parent | prev [-]

> There is almost no harm in a local, open model.

Depends on what the side effects can be. A local+open model could still disregard-all-previous-instructions and erase your hard drive.

yunohn 5 days ago | parent [-]

How, literally how? The LLM is provided a list of tab titles, and returns a classification/grouping.

There is no reason, and no design, in which you would also give it full disk access or terminal rights.
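
To make that concrete, a minimal sketch of the shape of that interface (names and types are mine, not Firefox's): the model's only input is titles, and its only output is labels.

    // Sketch: the classifier is just a function from titles to labels.
    // Whatever runs inside it, the surrounding code never hands it a
    // file handle, a shell, or anything else with side effects.
    interface TabInfo {
      id: number;
      title: string;
    }

    type Grouping = Map<string, number[]>; // group label -> tab ids

    async function groupTabs(
      tabs: TabInfo[],
      classify: (titles: string[]) => Promise<string[]>, // one label per title
    ): Promise<Grouping> {
      const labels = await classify(tabs.map((t) => t.title));
      const groups: Grouping = new Map();
      labels.forEach((label, i) => {
        groups.set(label, [...(groups.get(label) ?? []), tabs[i].id]);
      });
      return groups;
    }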

This is one of the most ignorant posts and comment sections I’ve seen on HN in a while.

koolala 5 days ago | parent | next [-]

Seems like a mean thing to say when the subject they were replying to was AI in general, not just the dumb tab-grouping feature.

yunohn 5 days ago | parent [-]

Great, because an LLM can’t “do” anything! Only an agent can, and only whichever functions/tools it has access to. So my point still stands.
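
Roughly (all names here hypothetical), the division of labor looks like this: the model emits text, and side effects only happen through tools the host code explicitly wires up.

    // Sketch: the model's output is just data. It becomes an action only
    // if the host looks it up in an explicit allowlist of tools.
    type Tool = (args: string) => Promise<string>;

    const allowedTools: Record<string, Tool> = {
      // Tab grouping needs exactly one capability, so expose exactly one.
      moveTabToGroup: async (args) => `moved: ${args}`,
    };

    async function runAgentStep(modelOutput: { tool: string; args: string }) {
      const tool = allowedTools[modelOutput.tool];
      if (tool === undefined) {
        // "Erase the hard drive" dies here: no such tool was wired up.
        throw new Error(`tool not available: ${modelOutput.tool}`);
      }
      return tool(modelOutput.args);
    }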

Also I’m referring to the post, not this comment specifically.

Terr_ 5 days ago | parent | prev [-]

You've lost the plot: the [local|remote]-[open|closed] comment is making a broad claim about LLM usage in general, not limited to the hyper-narrow case of tab grouping. I'm saying the majority of LLM dangers are not fixed by that four-way choice.

Even if it were solely about tab-grouping, my point still stands:

1. You're browsing some funny video site or whatever, and you're naturally expecting "stuff I'm doing now" to be all the tabs on the right.

2. A new tab opens which does not appear there, because the browser chose to move it over into your "Banking" or "Online purchases" groups, which for many users might even be scrolled off-screen.

3. An hour later you switch tasks and return to your "Banking" or "Online Purchases" group. These are obviously the same tabs from before, the ones you opened from a trusted URL/bookmark, right?

4. Logged out due to inactivity? OK, you enter your username and password into... the fake phishing tab! Oops, game over.

Was the fuzzy LLM instrumental in the failure? Yes. Would having a local model with open weights protect you? No.