DivingForGold 5 days ago

[flagged]

eek2121 5 days ago | parent | next [-]

I know hearing this gets old, but please review sources outside of LLMs for accuracy. LLMs take a whole bunch of stuff from all over the internet and distill it down to something you can consume. Those sources include everything from Reddit to claims about a certain de-wormer that folks still think treats COVID (side note: there are a few long-COVID victims in a support group I'm in, and they are not happy about the disinfo that was spread). LLMs/"AI" do not and cannot innovate; they can only take the existing information they know, mash it all together, and present you with a result according to what the model was trained on.

I'm not against AI summaries being on HN, but users should verify and cite sources so others can check them.

However, I'm just a normal nerd who wants to fact-check stuff. Perhaps I'm wrong in wanting to do this. We'll see.

DivingForGold 5 days ago | parent | next [-]

I have significant experience in polymer chemistry. As an experiment, I decided to ask Gemini some very specific questions to try to back it into a corner, so to speak. It blew me away with the answer, discussing quite a bit of info I was not even aware of.

jijijijij 4 days ago | parent [-]

Cool story, bro.

brailsafe 5 days ago | parent | prev | next [-]

> I'm not against AI summaries being on HN, but users should verify and cite sources so others can check them.

I don't see how they contribute anything to a discussion. Even an organically produced speculative comment is more worthwhile than feeding a slop machine back into itself. I don't go out for coffee to discuss LLM summaries with friends, and I can't imagine why anyone would want to do that here.

Earlier today I asked Gemini Pro to find information on a person's death that was otherwise turning up nothing for me, and it just imagined finding verbatim obituary quotes in every source, cobbling together vaguely related names and plausible bits and pieces from wherever, almost like it was 2023 again.

It ain't search, and it ain't worthwhile. I'd much rather someone ask an LLM the question and then post a question here out of curiosity based on it, but without the summary itself.

xupybd 5 days ago | parent [-]

It is search if you ask it to produce a list of links.

It does well at filtering information for you.

Going to primary sources is still required to verify what it says, but it can reduce the legwork rather a lot.

brailsafe 4 days ago | parent | next [-]

> It is search if you ask it to produce a list of links.

Not in the example I mentioned. It can imagine the links, and the content of the links, and be very confident about it. It literally invented an obituary that didn't exist, gave me a link to a funeral home page that 404'd, and came up with "in memoriam" references from regional newsletters that never contained her name. It's actually really scary how specifically fake it was.

I asked it to produce verbatim quotes from any sources, along with links to them, and none of the text it produced could be found with a quoted search on any search engine.
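
A minimal Python sketch of that kind of check (the citation URLs and quotes here are made-up placeholders; it needs the requests package): fetch each cited page and test whether the "verbatim" quote actually appears in it.

    import requests

    # Hypothetical citations the model produced: URL -> "verbatim" quote
    citations = {
        "https://example.com/obituary": "was a beloved member of the community",
    }

    for url, quote in citations.items():
        try:
            page = requests.get(url, timeout=10)
            found = quote.lower() in page.text.lower()
            print(f"{url}: HTTP {page.status_code}, quote present: {found}")
        except requests.RequestException as err:
            print(f"{url}: unreachable ({err})")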

xupybd 4 days ago | parent [-]

Yeah, it does punt sometimes, but it often works well, especially on more widely written-about topics.

brailsafe 4 days ago | parent [-]

I think that's the tricky thing. I'm not saying it's not useful when it is right, but you really do need a keen and skeptical eye to know when that is. The problem reminds me of Bloom filters, which are useful when you want to know that something might exist or definitely does not exist in a set. Some permissible error rate on "might be true" is fine, as it is in any situation, but "definitely wrong" is pretty important to be able to detect.
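
For anyone unfamiliar with the analogy, here is a minimal Bloom filter sketch in Python (sizes and hash counts chosen arbitrarily): membership tests can return false positives ("might exist") but never false negatives ("definitely does not exist").

    import hashlib

    class BloomFilter:
        def __init__(self, size=1024, hashes=3):
            self.size, self.hashes = size, hashes
            self.bits = [False] * size

        def _positions(self, item):
            # Derive `hashes` independent bit positions from SHA-256.
            for i in range(self.hashes):
                digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
                yield int.from_bytes(digest[:4], "big") % self.size

        def add(self, item):
            for pos in self._positions(item):
                self.bits[pos] = True

        def might_contain(self, item):
            return all(self.bits[pos] for pos in self._positions(item))

    bf = BloomFilter()
    bf.add("verified claim")
    print(bf.might_contain("verified claim"))  # True: might be in the set
    print(bf.might_contain("made-up claim"))   # False: definitely not in the set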

duskdozer 5 days ago | parent | prev | next [-]

The issue as I see it is just straight copy/pasting its output. You want to use it as a search tool to give you pointers on things to look up and links to read? Great. Then use that as a basis to read the sources and write your own response. If you aren't familiar enough with the subject area to do that, then you also shouldn't be pasting LLM output on it.

brailsafe 4 days ago | parent [-]

It's not even copy/pasting in some cases. In my example, it confidently produced "verbatim" references that don't exist anywhere, pointing to specific pages that never mentioned this person's name or contained any of the text: sometimes completely different people, sometimes 404 pages. A huge waste of time.

duskdozer 3 days ago | parent [-]

Yeah, I agree; I've seen the hallucinated references too, sometimes used by people in internet arguments to make their bullshit seem more legitimate. What I meant by copy/pasting, though, is people taking the LLM output and feeding it directly to the people they're conversing with, instead of looking into what it says or really engaging with it in any way.

anjel 5 days ago | parent | prev | next [-]

Ask it to solve a tough Euler math puzzle with the search button on and it just copies the answer from the web. Turn search off and it actually computes the answer. Funny how the search button is taken away, though.

roarcher 5 days ago | parent | prev [-]

> It is search if you ask it to produce a list of links.

Unfortunately it can hallucinate those too. I've had ChatGPT cite countless nonexistent academic papers, complete with links that go nowhere.

tharkun__ 5 days ago | parent [-]

Which is "fine" so to speak. We do this with using AIs for coding all the time, don't we? As in, we ask it to do things or tell us things about our code base (which we might be new to as well) but essentially use it as a "search engine+" so to speak. Hopefully it's faster and can provide some sort of understanding faster than we could with searching ourselves and building a mental model while doing it.

But we still need to ask it for and then follow file and line number references (aka "links") and verify it's true and it got the references right and build enough of a mental model ourselves. With code (at least for our code base) it usually does get that right (the references) and I can verify. I might be biased because I both know our code base very well already (but not everything in detail) and I'm a very suspicious person, questioning everything. With humans it sometimes "drives them crazy" but the LLM doesn't mind when I call its BS over and over. I'm always "right" :P

The problem is when you just trust anything it says. I think we need to treat it like a super junior that's trained to very convincingly BS you if it's out of its depth. But it's still great to have said junior do your bidding while you do other things and faster than an actual junior and this junior is available 24/7 (barring any outages ;)).
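
A rough Python sketch of that verification loop (the file/line references are hypothetical examples): confirm each file the model cites exists and the cited line is in range before trusting the claim.

    from pathlib import Path

    # Hypothetical (file, line) references returned by the model
    refs = [("src/parser.py", 42), ("src/does_not_exist.py", 7)]

    for filename, lineno in refs:
        path = Path(filename)
        if not path.is_file():
            print(f"{filename}: file does not exist")
            continue
        lines = path.read_text().splitlines()
        if not 1 <= lineno <= len(lines):
            print(f"{filename}:{lineno}: line out of range ({len(lines)} lines)")
        else:
            print(f"{filename}:{lineno}: {lines[lineno - 1].strip()}")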

justinc8687 5 days ago | parent | prev [-]

I've had quite good luck asking Gemini and ChatGPT to include links to research papers for every claim they make. Not only can I review at least the abstracts, but I find that when I do this, they'll retract some of the hallucinations they made in prior messages. It almost seems that, via their web-search tools, they reread the content they cite (and maybe they do), greatly reducing errors with minimal extra effort on my part.
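
A first pass at that review can even be scripted. Here is a minimal Python sketch (the link list is illustrative; it needs the requests package) that confirms each cited paper link at least resolves before you bother reading the abstract.

    import requests

    # Illustrative list of paper links extracted from a model's answer
    paper_links = [
        "https://arxiv.org/abs/1706.03762",
        "https://example.org/nonexistent-paper",
    ]

    for url in paper_links:
        try:
            # HEAD keeps the check lightweight; some servers may require GET
            resp = requests.head(url, allow_redirects=True, timeout=10)
            print(f"{url}: HTTP {resp.status_code}")
        except requests.RequestException as err:
            print(f"{url}: unreachable ({err})")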

cyberax 5 days ago | parent | prev | next [-]

> Could it be that this organism switches to anaerobic respiration when it finds itself inside cancer tissue

Unlikely. The leading hypothesis is that mitochondria are part of the apoptosis pathway, so cells need to suppress them to become cancerous; the resulting shift toward glycolysis is known as the Warburg effect.

There are several drugs that target this mechanism by inhibiting anaerobic metabolism. They are effective initially, but cancers always find ways to work around them.

MangoToupe 5 days ago | parent | prev | next [-]

I'm happy to wait for the experiments to weigh in on this one, dawg.

mikeweiss 5 days ago | parent | prev | next [-]

"according to Gemini" ........

antdke 5 days ago | parent | prev [-]

aCcoRdinG To gEmiNi