| |
| ▲ | zehaeva 3 days ago | parent | next [-] | | What if you had told it again that you don't think that's right? Would it have stuck to it's guns and went "oh, no, I am right here" or would it have backed down and said "Oh, silly me, you're right, here's the real dosage!" and give you again something wrong? I do agree that to get the full usage out of an LLM you should have some familiarity with what you're asking about. If you didn't already have a sense of what a dosage is already, why wouldn't 100mcg be the right one? | | |
| ▲ | cj 3 days ago | parent [-] | | I replied in the same thread "Are you sure that sounds like a low dose". It stuck to the (correct) recommendation in the 2nd response, but added in a few use cases for higher doses. So seems like it stuck to its guns for the most part. For things like this, it would definitely be better for it to act more like a search engine and direct me to trustworthy sources for the information rather than try to provide the information directly. | | |
| ▲ | stevedotcarter 2 days ago | parent [-] | | I noticed this recently when I saw someone post with an AI generated map of Europe which was all wrong. I tried the same and asked ChatGPT to generate a map of Ireland and it was wrong too. So then I asked to find me some accurate maps of Ireland and instead of generating it gave me images and links to proper websites. Will definitely be remembering to put "generate" vs "find" in my prompts depending on what I'm looking for. Not quite sure how you would train the model to know which answer is more suitable. |
|
| |
| ▲ | blehn 3 days ago | parent | prev | next [-] | | Perhaps the absolute worst use-case for an LLM | | |
| ▲ | dragontamer 2 days ago | parent | next [-] | | My mom was looking up church times in the Philippines. Google AI was wrong pretty much every time. Why is an LLM unable to read a table of church times across a sampling of ~5 Filipino churches? Google LLM (Gemini??) was clearly finding the correct page. I just grabbed my mom's phone after another bad mass time and clicked on the hyperlink. The LLM was seemingly unable to parse the table at all. | | |
| ▲ | etherealG 4 minutes ago | parent [-] | | Because google search and llm teams are different, with different incentives. Search is the cash cow they keep squeezing for more cash at the expense of good quality since at least 2018, as revealed in court documents showing they did that on purpose to keep people searching more to have more ads and more revenue. Google AI embedded in search has the same goals, keep you clicking on ads… my guess would be Gemini doesn’t have any of the bad part of enshitification yet… but it will come. If you think hallucinations are bad now, just you wait until tech companies start tuning them up on purpose to get you to make more prompts so they can inject more ads! |
| |
| ▲ | redundantly 2 days ago | parent | prev [-] | | And one that likely happens often. |
| |
| ▲ | BeetleB 2 days ago | parent | prev | next [-] | | > I used ChatGPT 5 over the weekend to double check dosing guidelines for a specific medication. This use case is bad by several degrees. Consider an alternative: Using Google to search for it and relying on its AI generated answer. This usage would be bad by one degree less, but still bad. What about using Google and clicking on one of the top results? Maybe healthline.com? This usage would reduce the badness by one further degree, but still be bad. I could go on and on, but for this use case, unless it's some generic drug (ibuprofen or something), the only correct use case is going to the manufacturer's web site, ensuring you're looking at the exact same medication (not some newer version or a variant), and looking at the dosage guidelines. No, not Mayo clinic or any other site (unless it's a pretty generic medicine). This is just not a good example to highlight the problems of using an LLM. You're likely not that much worse off than using Google. | | |
| ▲ | cj 2 days ago | parent [-] | | The compound I was researching was [edit: removed]. Problem is it's not FDA approved, only prescribed by compounding pharmacies off label. Experimental compound with no official guidelines. The first result on Google for "[edit: removed] dosing guidelines" is a random word document hosted by a Telehealth clinic. Not exactly the most reliable source. Edit: Jeesh, what’s with the downvotes? | | |
| ▲ | BeetleB 2 days ago | parent | next [-] | | > Experimental compound with no official guidelines. > The first result on Google for "GHK-Cu dosing guidelines" is a random word document hosted by a Telehealth clinic. Not exactly the most reliable source. You're making my point even more. When doing off label for an unapproved drug, you probably should not trust anything on the Internet. And if there is a reliable source out there on the Internet, it's very much on you to be able to discern what is and what is not reliable. Who cares that the LLM is wrong, when likely much of the Internet is wrong? BTW, I'm not advocating that LLMs are good for stuff like this. But a better example would be asking the LLM "In my state, is X taxable?" The Google AI summary was completely wrong (and the helpful link it used as a reference was correct, and in complete disagreement with the summary). But other than the AI summary being wrong, pretty much every link in the Google search results was correct. This is a good use case for not relying on an LLM: Information that is widely and easily available is wrong in the LLM. | | |
| ▲ | cj 2 days ago | parent [-] | | > You're making my point even more What exactly is your point? Is your point that I should be smarter and shouldn’t have asked ChatGPT the question? If that’s your point, understood, but I don’t think you can assume the average ChatGPT user will have such a discerning ability to determine when and when not using a LLM is appropriate. FWIW I agree with you. But the “you shouldn’t ask ChatGPT that question” is a weak argument if you care about contextualizing and broadening your point beyond me and my specific anecdote. | | |
| ▲ | BeetleB 2 days ago | parent [-] | | My point is that if you're trying to demonstrate how unreliable LLMs are, this is a poor example, because the alternatives are almost equally poor. > If that’s your point, understood, but I don’t think you can assume the average ChatGPT user will have such a discerning ability to determine when and when not using a LLM is appropriate. I agree that the average user will not, but they also will not have the ability to determine that the answer from the top (few) Google links is invalid as well. All you've shown is the LLM is as bad as Google search results. Put another way, if you invoke this as a reason one should not rely on LLMs (in general), then it follows one should not rely on Google either (in general). |
|
| |
| ▲ | nonameiguess 2 days ago | parent | prev [-] | | I think this actually points at a different problem, a problem with LLM users, but only to the extent that it's a problem with people with respect to any questions they have to ask any source they consider an authority at all. No LLM, nor any other source on the Internet, nor any other source off the Internet, can give you reliable dosage guidelines for copper peptides because this is information that is not known to humans. There is some answer to the question of what response you might expect and how that varies by dose, but without the clinical trials ever having been conducted, it's not an answer anyone actually has. Marketing and popular misconceptions about AI lead to people expecting it to be able to conjure facts out of thin air, perhaps reasoning from first principles using its highly honed model of human physiology. It's an uncomfortable position to be in trying to biohack your way to a more youthful appearance using treatments that have never been studied in human trials, but that's the reality you're facing. Whatever guidelines you manage to find, whether from the telehealth clinic directly, or from a language model that read the Internet and ingested that along with maybe a few other sources, are generally extrapolated from early rodent studies and all that's being extrapolated is an allometric scaling from rat body to human body of the dosage the researchers actually gave to the rats. What effect that actually had, and how that may or may not translate to humans, is not usually a part of the consideration. To at least some extent, it can't be if the compound was never trialed on humans. You're basically just going with scale up a dosage to human sized that at least didn't kill the rats. Take that and it probably won't kill you. What it might actually do can't be answered, not by doctors, not by an LLM, not by Wikipedia, not by anecdotes from past biohackers who tried it on themselves. This is not a failure of information retrieval or compression. You're just asking for information that is not known to anyone, so no one can give it to you. If there's a problem here specific to LLMs, it's that they'll generally give you an answer anyway and will not in any way quantify the extent to which it is probably bullshit and why. | | |
| ▲ | cj 2 days ago | parent [-] | | > a problem with LLM users I think the flaw here is placing blame on users rather than the service provider. HN is cutting LLM companies slack because we understand the technical limitations making it hard for the LLM to just say “I don’t know”. In any other universe, we would be blaming the service rather than the user. Why don’t we fix LLMs so they don’t spit out garbage when it doesn’t know the answer. Have we given up on that thought? | | |
| ▲ | BeetleB 2 days ago | parent | next [-] | | > In any other universe, we would be blaming the service rather than the user. I think the key question is "How is this service being advertised?" Perhaps the HN crowd gives it a lot of slack because they ignore the advertising. Or if you're like me, aren't even aware of how this is being marketed. We know the limitations, and adapt appropriately. I guess where we differ is on whether the tool is broken or not (hence your use of the word "fix"). For me, it's not at all broken. What may be broken is the messaging. I don't want them to modify the tool to say "I don't know", because I'm fairly sure if they do that, it will break a number of people's use cases. If they want to put a post-processor that filters stuff before it gets to the user, and give me an option to disable the post-processor, then I'm fine with it. But don't handicap the tool in the name of accuracy! | | |
| ▲ | cj a day ago | parent [-] | | The point you were making elsewhere in the thread was that "this is a bad use case for LLMs" ... "Don't use LLMs for dosing guidelines." ... "Using dosing guidelines is a bad example for demonstrating how reliable or unreliable LLMs are", etc etc etc. You're blaming the user for having a bad experience as a result of not using the service "correctly". I think the tool is absolutely broken, considering all of the people saying dosing guidelines is an "incorrect" use of LLM models. (While I agree it's not a good use, I strongly dislike how you're blaming the user for using it incorrectly - completely out of touch with reality). We can't just cover up the shortfalls of LLMs by saying things like "Oh sorry, that's not a good use case, you're stupid if you use the tool for that purpose". I really hope the HN crowd stops making excuses for why it's okay that LLMs don't perform well on tasks it's commonly asked to do. > But don't handicap the tool in the name of accuracy! If you're taking the position that it's the user's fault for asking LLMs a question it won't be good at answering, then you can't simultaneously advocate for not censoring the model. If it's the user's responsibility to know how to use ChatGPT "correctly", the tool (at a minimum) should help guide you away from using it in ways it's not intended for. If LLMs were only used by smarter-than-average HN-crowd techies, I'd agree. But we're talking about a technology used by middle school kids. I don't think it's reasonable to expect middleschoolers to know what they should and shouldn't ask LLMs for help with. | | |
| ▲ | BeetleB 11 hours ago | parent [-] | | > You're blaming the user for having a bad experience as a result of not using the service "correctly". Definitely. Just as I used to blame people for misusing search engines in the pre-LLM era. Or for using Wikipedia to get non-factual information. Or for using a library as a place to meet with friends and have lunch (in a non-private area). If you're going to try to use a knife as a hammer, yes, I will fault you. I do expect that if someone plans to use a tool, they do own the responsibility of learning how to use it. > If you're taking the position that it's the user's fault for asking LLMs a question it won't be good at answering, then you can't simultaneously advocate for not censoring the model. If it's the user's responsibility to know how to use ChatGPT "correctly", the tool (at a minimum) should help guide you away from using it in ways it's not intended for. Documentation, manuals, training videos, etc. Yes, I am perhaps a greybeard. And while I do like that many modern parts of computing are designed to be easy to use without any training, I am against stating that this is a minimum standard that all tools have to meet. Software is the only part of engineering where "self-explanatory" seems to be common. You don't buy a board game hoping it will just be self-evident how to play. You don't buy a pressure cooker hoping it will just be safe to use without learning how to use it. So yes, I do expect users should learn how to use the tools they use. |
|
| |
| ▲ | simonw 2 days ago | parent | prev [-] | | Current frontier LLMs - Claude 4, GPT-5, Gemini 2.5 - are massively more likely to say "I don't know" than last year's models. | | |
| ▲ | cj 2 days ago | parent [-] | | I don’t think I’ve ever seen ChatGPT 5 refuse to answer any prompt I’ve ever given it. I’m doing 20+ chats a day. What’s an example prompt where it will say “idk”? Edit: Just tried a silly one, asking it to tell me about the 8th continent on earth, which doesn’t exist. How difficult is it for the model to just say “sorry, there are only 7 continents”. I think we should expect more from LLMs and stop blaming things on technical limitations. “It’s hard” is getting to be an old excuse considering the amount of money flowing into building these systems. | | |
| ▲ | simonw 2 days ago | parent [-] | | https://chatgpt.com/share/68b85035-62ec-8006-ab20-af5931808b... - "There are only seven recognized continents on Earth: Africa, Antarctica, Asia, Australia, Europe, North America, and South America." Here's a recent example of it saying "I don't know" - I asked it to figure out why there was an octopus in a mural about mushrooms: https://chatgpt.com/share/68b8507f-cc90-8006-b9d1-c06a227850... - "I wasn’t able to locate a publicly documented explanation of why Jo Brown (Bernoid) chose to include an octopus amid a mushroom-themed mural." | | |
| ▲ | cj 2 days ago | parent [-] | | Not sure what your system prompt is, but asking the exact same prompt word for word for me results in a response talking about "Zealandia, a continent that is 93% submerged underwater." The 2nd example isn't all that impressive since you're asking it to provide you something very specific. It succeeded in not hallucinating. It didn't succeed at saying "I'm not sure" in the face of ambiguity. I want the LLM to respond more like a librarian: When they know something for sure, they tell you definitively, otherwise they say "I'm not entirely sure, but I can point you to where you need to look to get the information you need." | | |
| ▲ | simonw 2 days ago | parent [-] | | I'm using regular GPT-5, no custom instructions and memory turned off. Can you link to your shared Zealandia result? I think that mural result was spectacularly impressive, given that it started with a photo I took of the mural with almost no additional context. | | |
| ▲ | cj 2 days ago | parent [-] | | I can't link since it's in an enterprise account. Interestingly I tried the same question in a separate ChatGPT account and it gave a similar response you got. Maybe it was pulling context from the (separate) chat thread where it was talking about Zealandia. Which raises another question: once it gets something wrong once, will it just keep reenforcing the inaccuracy in future chats? That could lead to some very suboptimal behavior. Getting back on topic, I strongly dislike the argument that this is all "user error". These models are on track to be worth a trillion dollars at some point in the future. Let's raise our expectations of them. Fix the models, not the users. | | |
| ▲ | simonw 2 days ago | parent [-] | | I wonder if you're stuck on an older model like GPT-4o? EDIT: I think that's likely what is happening here: I tried the prompt against GPT-4o and got this https://chatgpt.com/share/68b8683b-09b0-8006-8f66-a316bfebda... My consistent position on this stuff is that it's actually way harder to use than most people (and the companies marketing it) let on. I'm not sure if it's getting easier to use over time either. The models are getting "better" but that partly means their error cases are harder to reason about, especially as they become less common. |
|
|
|
|
|
|
|
|
|
| |
| ▲ | SV_BubbleTime 3 days ago | parent | prev | next [-] | | LANGUAGE model, not FACT model. | | |
| ▲ | cantor_S_drug 2 days ago | parent [-] | | I gave LLM a list of python packages and asked it to give me their respective licenses. Obviously it got some of them wrong. I had to manually check with the package's pypi page. | | |
| ▲ | SV_BubbleTime 2 days ago | parent [-] | | Ok? Yes. That is the problem; that sometimes it works. See the topic. Adding RAG or web search capability limits the loss and hallucinations. Yes. You always need to check the results. Your task by the way is better for an agentic AI system that can web search, get, and double check results. | | |
| ▲ | tough 2 days ago | parent | next [-] | | his task is probably best done with a script, heck you could thell chatgpt to download all packages with a script and check LICENSE files and report back with a csv/table | |
| ▲ | cantor_S_drug 2 days ago | parent | prev [-] | | [dead] |
|
|
| |
| ▲ | kenjackson 2 days ago | parent | prev | next [-] | | "The main challenge is LLMs aren't able to gauge confidence in its answers" This seems like a very tractable problem. And I think in many cases they can do that. For example, I tried your example with Losartan and it gave the right dosage. Then I said, "I think you're wrong", and it insisted it was right. Then I said, "No, it should be 50g." And it replied, "I need to stop you there". Then went on to correct me again. I've also seen cases where it has confidence where it shouldn't, but there does seem to be some notion of confidence that does exist. | | |
| ▲ | jazzyjackson 2 days ago | parent [-] | | > but there does seem to be I need to stop you right there! These machinations are very good at seeming to be! The behavior is random, sometimes it will be in a high dimensional subspace of refusing to change its mind, others it is a complete sycophant with no integrity. To test your hypothesis that it is more confident about some medicines than others (maybe there is more consistent material in the training data...) one might run the same prompt 20 times each with various drugs, and measure how strongly the llm insists it is correct when confronted. Unrelated, I recently learned the state motto of North Carolina is "To be, rather than to seem" https://en.wikipedia.org/wiki/Esse_quam_videri | | |
| ▲ | kenjackson 2 days ago | parent [-] | | I tried for a handful of drugs and unfortunately(?) it gave accurate dosages to start with and it wouldn't budge. Going too low and it told me that the impact wouldn't be sufficient. Going too high and it told me how dangerous it was and that I had maybe misunderstood the units of measure. |
|
| |
| ▲ | QuantumGood 2 days ago | parent | prev | next [-] | | With search and references, and without search and references are two different tools. They're supposed to be closer to the same thing, but are not. That isn't to say there's a guarantee of correctness with references, but in my experience, accuracy is better, and seeing unexpected references is helpful when confirming. | |
| ▲ | naet 2 days ago | parent | prev | next [-] | | That is exactly the kind of question that I would never trust to chatgpt. | |
| ▲ | tuatoru 2 days ago | parent | prev | next [-] | | Modern Russian Roulette, using LLMs for dose calculations. | |
| ▲ | Aeolun 2 days ago | parent | prev | next [-] | | I feel like asking an LLM for medicine dosage guidelines is exactly what you should never use it for… | |
| ▲ | dncornholio 3 days ago | parent | prev | next [-] | | Using a LLM for medical research is just as dangerous as Googling it. Always ask your doctors! | | |
| ▲ | el_benhameen 3 days ago | parent | next [-] | | I don’t disagree that you should use your doctor as your primary source for medical decision making, but I also think this is kind of an unrealistic take. I should also say that I’m not an AI hype bro. I think we’re a long ways off from true functional AGI and robot doctors. I have good insurance and have a primary care doctor with whom I have good rapport. But I can’t talk to her every time I have a medical question—it can take weeks to just get a phone call! If I manage to get an appointment, it’s a 15 minute slot, and I have to try to remember all of the relevant info as we speed through possible diagnoses. Using an llm not for diagnosis but to shape my knowledge means that my questions are better and more pointed, and I have a baseline understanding of the terminology. They’ll steer you wrong on the fine points, but they’ll also steer you _right_ on the general stuff in a way that Dr. Google doesn’t. One other anecdote. My daughter went to the ER earlier this year with some concerning symptoms. The first panel of doctors dismissed it as normal childhood stuff and sent her home. It took 24 hours, a second visit, and an ambulance ride to a children’s hospital to get to the real cause. Meanwhile, I gave a comprehensive description of her symptoms and history to an llm to try to get a handle on what I should be asking the doctors, and it gave me some possible diagnoses—including a very rare one that turned out to be the cause. (Kid is doing great now). I’m still gonna take my kids to the doctor when they’re sick, of course, but I’m also going to use whatever tools I can to get a better sense of how to manage our health and how to interact with the medical system. | | |
| ▲ | parpfish 2 days ago | parent | next [-] | | I always thought “ask your doctor” was included for liability reasons and not a thing that people actually could do. I also have good insurance and a PCP. The idea that I could call them up just to ask “should I start doing this new exercise” or “how much aspirin for this sprained ankle?” is completely divorced from reality. | | |
| ▲ | el_benhameen 2 days ago | parent | next [-] | | Yes, exactly this. I am an anxious, detail-focused person. I could call or message for every health-related question that comes to mind, but that would not be a good use of anyone’s time. My doctor is great, but she does not care about the minutiae of my health like I do, nor do I expect her to. | |
| ▲ | rkomorn 2 days ago | parent | prev [-] | | I think "ask your doctor" is for prescription meds since only said doctor can write prescriptions. And "your doctor" is actually "any doctor that is willing to write you a prescription for our medicine". | | |
| ▲ | parpfish 2 days ago | parent [-] | | "ask your doctor" is more widespread than tthat. if you look up any diet or exercise advice, there's always an "ask your doctor before starting any new exercise program". i'm not going to call my doctor to ask "is it okay if I try doing kettlebell squats?" | | |
| ▲ | rkomorn 2 days ago | parent [-] | | Yes, I totally got out of context and said something a bit senseless. But also, maybe calling your doctor would be wise (eg if you have back problems) before you start doing kettlebell squats. I'd say that the audience for a lot of health related content skews towards people who should probably be seeing a doctor anyway. The cynic in me also thinks some of the "ask your doctor" statements are just slapped on to artificially give credence to whatever the article is talking about (eg "this is serious exercise/diet/etc). Edit: I guess what I meant is: I don't think it's just "liability", but genuine advice/best practice/wisdom for a sizable chunk of audiences. |
|
|
| |
| ▲ | lurking_swe 2 days ago | parent | prev | next [-] | | I live in the U.S. and my doctor is very responsive on MyChart. A few times a year i’ll send a message and I almost always get a reply within a day! From my PCP directly, or from her assistant. I’d encourage you to find another doctor. | | |
| ▲ | el_benhameen 2 days ago | parent [-] | | My doctor is usually pretty good at responding to messages too, but there’s still a difference between a high-certainty/high-latency reply and a medium-certainty/low-latency reply. With the llm I can ask quick follow ups or provide clarification in a way that allows me to narrow in on a solution without feeling like I’m wasting someone else’s time. But yes, if it’s bleeding, hurting, or growing, I’m definitely going to the real person. |
| |
| ▲ | shrx 2 days ago | parent | prev [-] | | > it can take weeks to just get a phone call > If I manage to get an appointment, it’s a 15 minute slot I'm sorry that this is what "good insurance" gets you. | | |
| ▲ | lurking_swe 2 days ago | parent [-] | | no, that’s what happens when you pick a busy doctor or a practice that’s overbooked in general. All too common these days! :( This probably varies by locale. For example my doctor responds within 1 day on MyChart for quick questions. I can set up an in person or video appointment with her within a week, easily booked on MyChart as well. |
|
| |
| ▲ | yojo 3 days ago | parent | prev | next [-] | | This is the terrifying part: doctors do this too! I have an MD friend that told me she uses ChatGPT to retrieve dosing info. I asked her to please, please not do that. | | |
| ▲ | ozgrakkurt 3 days ago | parent | next [-] | | Find good doctors. A solution doesn’t have to be perfect. A doctor doing better than regular joe with a computer is much higher as you can see in research around this topic | | |
| ▲ | SequoiaHope 2 days ago | parent [-] | | I have noticed that my doctor is getting busier and busier lately. I worry that cost cutting will have doctors so frantic that they are forced to rely on things like ChatGPT, and “find good doctors” will be an option only for an elite few. |
| |
| ▲ | nsriv 3 days ago | parent | prev [-] | | I have a hunch that the whole "chat" interface is a brilliant but somewhat unintentional product design choice that has created this faux trust in LLM's to give back accurate information that others can get from drugs.com or Medline with a text search. This is a terrifying example, and please get her to test it out by second guessing the LLM and watching it flip flop. |
| |
| ▲ | wtbdbrrr 2 days ago | parent | prev | next [-] | | your doctor can have a bad day. and or be an asshole. In 40 years, only one of my doctors had the decency to correct his mistake after I pointed it out. He prescribed the wrong Antibiotics, which I only knew because I did something dumb and wondered if the prescribed antibiotics cover a specific strain, which they didn't, which I knew because I asked an LLM and then superficially double-checked via trustworthy official, government sources. He then prescribed the correct antibiotics. In all other cases where I pointed out a mistake, back in the day researched without LLMs, doctors justified their logic, sometimes siding with a colleague or "the team" before evaluating the facts themselves, instead of having an independent opinion, which, AFAIK, especially in a field like medicine, is _absolutely_ imperative. | |
| ▲ | djrj477dhsnv 3 days ago | parent | prev | next [-] | | I disagree. I'd wager that state of the art LLMs can beat out of the average doctor at diagnosis given a detailed list of symptoms, especially for conditions the doctor doesn't see on a regular basis. | | |
| ▲ | rafterydj 3 days ago | parent | next [-] | | "Given a detailed list of symptoms" is sure holding a lot of weight in that statement. There's way too much information that doctors tacitly understand from interactions with patients that you really cannot rely on those patients supplying in a "detailed list". Could it diagnose correctly, some of the time? Sure. But the false positive rate would be huge given LLMs suggestible nature. See the half dozen news stories covering AI induced psychosis for reference. Regardless, it's diagnostic capability is distinct from the dangers it presents, which is what the parent comment was mentioning. | |
| ▲ | nsriv 3 days ago | parent | prev [-] | | What you're describing, especially with the amount of water "given a detailed list of symptoms" is carrying, is essentially a compute-intensive flowchart with no concept of diagnostic parsimony. |
| |
| ▲ | yujzgzc 3 days ago | parent | prev | next [-] | | Plot twist, your doctor is looking it up on WebMD themselves | |
| ▲ | jrm4 2 days ago | parent | prev | next [-] | | Almost certainly more I would think, precisely because of magnitude errors. The ol' "What weighs more, a pound of feathers or two pounds of bricks" trick explains this perfectly to me. | |
| ▲ | gmac 3 days ago | parent | prev [-] | | Not really: it's arguably quite a lot worse. Because you can judge the trustworthiness of the source when you follow a link from Google (e.g. I will place quite a lot of faith in pages at an .nhs.uk URL), but nobody knows exactly how that specific LLM response got generated. | | |
| ▲ | naasking 2 days ago | parent [-] | | Many of the big LLMs do RAG and will provide links to sources, eg. Bing/ChatGPT, Gemini Pro 2.5, etc. |
|
| |
| ▲ | christkv 3 days ago | parent | prev | next [-] | | I find if I force thinking mode and then force it to search the web it’s much better. | | |
| ▲ | ARandumGuy 3 days ago | parent | next [-] | | But at that point wouldn't it be easier to just search the web yourself? Obviously that has its pitfalls too, but I don't see how adding an LLM middleman adds any benefit. | | |
| ▲ | ragequittah 3 days ago | parent | next [-] | | For medication guidelines I'd just do a Google search. But sometimes I want 20 sources and a quick summary of them. Agent mode or deep research is so useful. Saves me so much time every day. | | | |
| ▲ | leptons 2 days ago | parent | prev | next [-] | | If only we could get people to use the brains in their head. | |
| ▲ | s0rce 3 days ago | parent | prev [-] | | Not always, it can find stuff that is otherwise difficult for me and search engines have become much worse than 15-20 years ago. |
| |
| ▲ | cj 3 days ago | parent | prev [-] | | Agree, I usually force thinking mode too. I actually like the "Thinking mini" option that was just released recently, good middle ground between getting an instant answer and waiting 1-2 minutes. |
| |
| ▲ | ljsprague 2 days ago | parent | prev [-] | | Maybe don't use an LLM for dosing guidelines. |
|