CamperBob2 16 hours ago

You don't use it to summarize pages (or at least I don't), but to help you understand content within a page while minimizing distractions.

For example: I was browsing a Reddit thread a few hours ago and came upon a comment to the effect of "Bertrand Russell argued for a preemptive nuclear strike on the Soviets at the end of WWII." That seemed to conflict with my prior understanding of Bertrand Russell, to say the least. I figured the poster had confused Russell with von Neumann or Curtis LeMay or somebody, but I didn't want to blow off the comment entirely in case I'd missed something.

So I highlighted the comment, right-clicked, and selected "Explain this." Instead of having to spend several minutes or more going down various Google/Wikipedia rabbit holes in another tab or window, the sidebar immediately popped up with a more nuanced explanation of Russell's actual position (which was very poorly represented by the Reddit comment but not 100% out of line with it), complete with citations, along with further notes on how his views evolved over the next few years.
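
If you're curious what a feature like this boils down to, here's a minimal sketch in Python. To be clear, this is not Mozilla's actual implementation; the endpoint, model name, and prompt below are all placeholder assumptions. It's just the generic "send the selection to an LLM with an explain prompt" pattern:

    # Minimal sketch of an "explain this selection" helper. NOT Mozilla's
    # implementation -- the endpoint, model, and prompt are assumptions.
    import json
    import urllib.request

    API_URL = "https://api.openai.com/v1/chat/completions"  # hypothetical endpoint
    API_KEY = "sk-..."  # your key here

    def explain(selection: str, page_title: str) -> str:
        payload = {
            "model": "gpt-4o-mini",  # placeholder model name
            "messages": [
                {"role": "system",
                 "content": "Explain the highlighted passage, cite sources, "
                            "and note anything it gets wrong or oversimplifies."},
                {"role": "user",
                 "content": f"Page: {page_title}\n\nHighlighted text:\n{selection}"},
            ],
        }
        req = urllib.request.Request(
            API_URL,
            data=json.dumps(payload).encode(),
            headers={"Authorization": f"Bearer {API_KEY}",
                     "Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)["choices"][0]["message"]["content"]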

It goes without saying how useful this feature is when looking over a math-heavy paper. I sure wish it worked in Acrobat Reader. And I hope a bunch of Luddites don't browbeat Mozilla into removing the feature or making it harder to use.

homebrewer 16 hours ago

And this explanation is very likely to be entirely hallucinated, or worse, subtly wrong in ways that aren't obvious if you're not already well versed in the subject. So if you care about the truth even a little bit, you then have to go and recheck everything it has "said".

Why waste time and energy on the lying machine in the first place? Just yesterday I asked "PhD-level intelligence" for a well-known quote from a famous person because I wasn't able to find it quickly on Wikiquote.

It fabricated three different quotes in a row, none of them right. One of them was supposedly from a book that doesn't really exist.

So I resorted to a Google search and found what I needed in less time than it took to fight that thing.

CamperBob2 16 hours ago

> And this explanation is very likely to be entirely hallucinated, or worse, subtly wrong in ways that aren't obvious if you're not already well versed in the subject. So if you care about the truth even a little bit, you then have to go and recheck everything it has "said".

It cited its sources, which is certainly more than you've done.

> Just yesterday I asked "PhD-level intelligence" for a well-known quote from a famous person because I wasn't able to find it quickly on Wikiquote.

In my experience, this means that you typed a poorly formed question into the free instant version of ChatGPT, got an answer worthy of the effort you put into it, and drew a sweeping conclusion that you will now stand by for the next 2-3 years until cognitive dissonance finally catches up with you. But now I'm the one who's making stuff up, I guess.

homebrewer 16 hours ago

Unless you've then read through those sources — and not asked the machine to summarize them again — I don't see how that changes anything.

Judging by your tone and by several assumptions based on nothing, I see that you're fully converted. No reason to keep talking past each other.

CamperBob2 15 hours ago

No, I'm not "fully converted." I reject the notion that you have to join one cult or the other when it comes to this stuff.

I think we've all seen plenty of hallucinated sources, no argument there. Source hallucination wasn't a problem 2-3 years ago, simply because LLMs couldn't cite their sources at all. It was a massive problem 1-2 years ago, when it happened all the freaking time. It's a much smaller problem today, though it still happens too often, especially with the weaker models.

I'm personally pretty annoyed that no local model (at least that I can run on my own hardware) is anywhere near as hallucination-resistant as the major non-free, non-local frontier models.

In my example, no, I didn't bother confirming the Russell sources in detail, other than to check that they (a) existed and (b) weren't completely irrelevant. I had other stuff to do and don't actually care that much. The comment just struck me as weird, and now I'm better informed thanks to Firefox's AI feature. My takeaway wasn't "Russell wanted to nuke the Russians," but rather "Russell's positions on pacifism and aggression were more nuanced than I thought. Remember to look into this further when/if it comes up again." Where's the harm in that?
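
For what it's worth, the "existed" half of that check is scriptable. A rough sketch, assuming the citations are plain URLs; relevance still takes a human:

    # Rough sketch: verify that cited URLs at least resolve.
    # Assumes citations are plain URLs; relevance still needs a human eye.
    import urllib.error
    import urllib.request

    def url_exists(url: str, timeout: float = 10.0) -> bool:
        # Some servers reject HEAD requests, so a False here means
        # "check by hand", not proof that the source was fabricated.
        req = urllib.request.Request(url, method="HEAD")
        try:
            urllib.request.urlopen(req, timeout=timeout)
            return True
        except (urllib.error.URLError, TimeoutError):
            return False

    # Example citation (a stand-in for whatever the sidebar returned):
    for url in ["https://en.wikipedia.org/wiki/Bertrand_Russell"]:
        print("OK  " if url_exists(url) else "DEAD", url)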

Can you share what you asked, and what model you were using? I like to collect benchmark questions that show where progress is and is not happening. If your question actually elicited such a crappy response from a leading-edge reasoning model, it sounds like a good one. But if you really did just issue a throwaway prompt to a free/instant model, then trust me, you got a very wrong impression of where the state of the art really is. The free ChatGPT is inexcusably bad. It was still miscounting the r's in "Strawberry" as late as 5.1.
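
For reference, the ground truth there is one line of Python, which is what makes the failure so memorable:

    >>> "strawberry".count("r")
    3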

tsimionescu 10 hours ago

> I'm personally pretty annoyed that no local model (at least that I can run on my own hardware) is anywhere near as hallucination-resistant as the major non-free, non-local frontier models.

And here you get back to my original point: to get good (or at least better) AI, you need huge, complex models that can't realistically run locally.