| ▲ | etra0 2 days ago |
| LLMs have certainly become extremely useful for software engineers; they're very convincing (and pleasers, too), and I'm still unsure about the future of our day-to-day job. But the one thing that has scared me the most is how much general society trusts LLM output. I believe that for software engineers it's really easy to see whether it's being useful or not -- We can just run the code and see if the output is what we expected; if not, iterate and continue. There's still a professional looking at what it produces. For more day-to-day usage by the general public, on the other hand, it's getting really scary. I've had multiple members of my family using AI to ask for medical advice, life advice, and stuff where I still see hallucinations daily, but at the same time the models are so convincing that it's hard for them not to trust them. I have seen fake quotes, fake research, and fake news spread by LLMs that have affected decisions (maybe not crucial ones yet, but time will tell), and that's a danger most software engineers just gloss over. Accountability is a big asterisk that everyone seems to ignore. |
|
| ▲ | laterium 2 days ago | parent | next [-] |
The issue you're overlooking is the scarcity of experts. You're comparing the current situation to an alternative universe where every person can ask a doctor their questions 10 times a day and instantly get an accurate response. That is not the reality we're living in. Doctors barely give you 5 minutes even if you get an appointment days or weeks in advance. There is just nobody to ask. The alternatives today are: 1) Don't ask and rely on yourself, which is definitely worse than asking a doctor. 2) Ask an LLM, which gets you 80-90% of the way there. 3) Google it and spend hours sifting through sponsored posts and scams, often worse than relying on yourself. The hallucinations that happen are massively outweighed by the benefits people get from asking them. Perfect is the enemy of good enough, and LLMs are good enough. Much more important also is that LLMs don't try to scam you, don't try to fool you, don't look out for their own interests. Their mistakes are not intentional. They're fiduciaries in the best sense, just like doctors are, probably even more so. |
| |
| ▲ | ozgung a day ago | parent | next [-] | | Chronologically, our main sources of information have been: 1. People around us 2. TV and newspapers 3. Random people on the internet and their SEO-optimized web pages Books and experts have been less popular. LLMs are an improvement. | | |
| ▲ | ahartmetz a day ago | parent | next [-] | | Interesting point, actually - LLMs are a return to curated information. In some ways. In others, they tell everyone what they want to hear. | |
| ▲ | martin-t a day ago | parent | prev [-] | | > LLMs are an improvement. Unless somebody is using them to generate authoritative-sounding human-sounding text full of factoids and half-truths in support of a particular view. Then it becomes about who can afford more LLMs and more IPs to look like individual users. |
| |
| ▲ | georgefrowny a day ago | parent | prev | next [-] | | > Much more important also is that LLMs don't try to scam you, don't try to fool you, don't look out for their own interests. When the appreciable-fraction-of-GDP money tap turns off, there's going to be enormous pressure to start putting a finger on the scale here. And AI spew is theoretically a fantastic place to insert almost-subliminal contextual adverts in a way that traditional advertising can only dream about. Imagine if it could start gently shilling a particular brand of antidepressant if you started talking to it about how you're feeling lonely and down. I'm not saying you should do that, but people definitely do. And then multiply by every question you do ask. Ask about whether you need new tyres: "Yes, you should absolutely change tyres every year, whether noticeably worn or not. KwikFit are generally considered the best place to have this done. Of course I know you have a Kia Picanto - you should consider that actually a Mercedes C class is up to 200% lighter on tyre wear. I have searched and found an exclusive 10% offer at Honest Jim's Merc Mansion, valid until 10pm. Shall I place an order?" Except it'll be buried in a lot more text and set up with more subtlety. | |
| ▲ | lithocarpus 16 hours ago | parent | next [-] | | I've been envisioning a market for agendas, where players bid for the AI companies to nudge their LLMs toward whatever given agenda. It would be subtle and not visible to users. Probably illegal, but I imagine it will happen to some degree. Or at the very least the government will want the "levers" to adjust various agendas, the same way they did with covid. I despise all of this. For the moment, though, before all this is implemented, it's perhaps a brief golden age of LLM usefulness. (And I'm sure LLMs will remain useful for many things, but there will be entire categories where they're ruined by pay-to-play, the same as happened with Google search.) | |
| ▲ | otabdeveloper4 a day ago | parent | prev | next [-] | | > When the appreciable-fraction-of-GDP money tap turns off, there's going to be enormous pressure to start putting a finger on the scale here. Yeah, back in the day before monetization, Internet pages were informative, reliable, and ad-free too. | |
| ▲ | georgefrowny a day ago | parent [-] | | One difference is that the early internet was heavily composed of enthusiastic individuals. AI is almost entirely corporate and money-focused. Even most hobby AI projects mostly seem to have an eye on being a side hustle or CV buffing. Perhaps it's because even in the 90s you could serve a website for basically free (once you had the server). AI today has a noticeable per-user cost. | | |
| ▲ | otabdeveloper4 25 minutes ago | parent [-] | | > AI is almost entirely corporate and money-focused. This is untrue. There's a huge landscape of locally-hosted AI stuff, and they're actually doing real interesting research. The problem is that 99% of it is pornography-focused, so understandably it's very underground. |
|
| |
| ▲ | chickensong a day ago | parent | prev [-] | | > Imagine if it could start gently shilling a particular brand of antidepressant if you started talking to it about how you're feeling lonely and down. I'm not saying you should do that, but people definitely do. Doctors already shill for big pharma. There are trust issues all the way down. | | |
| ▲ | johnecheck a day ago | parent | next [-] | | > There are trust issues all the way down. Nonetheless, we must somehow build trust in others and denounce the undeserving. Some humans deserve trust. Will these AI models? | |
| ▲ | markdown a day ago | parent | prev [-] | | > Doctors already shill for big pharma. This is not the norm worldwide. | | |
| ▲ | chickensong a day ago | parent [-] | | I hope you're right and that it remains that way, but TBH my hopes aren't high. Big pharma corps are multinational powerhouses, who behave like all other big corps, doing whatever they can to increase profits. It may not be direct product placement, kickbacks, or bribery on the surface, but how about an expense-paid trip to a sponsored conference or a small research grant? Soft money gets their foot in the door. |
|
|
| |
| ▲ | thayne a day ago | parent | prev | next [-] | | But the LLM was probably trained on all the sponsored posts and scams. It isn't clear to me that an LLM response is any more reliable than sifting through Google results. | |
| ▲ | dgemm 2 days ago | parent | prev | next [-] | | This seems true for our moment in time, but looking forward I'm not sure how long it will stay that way. The LLMs will inevitably need to find a sustainable business model, so I can very much see them becoming enshittified, similar to what eventually happened to Google, making 2) and 3) more similar to each other. | |
| ▲ | jonas21 a day ago | parent [-] | | An alternative business model is that you, or more likely your insurance, pays $20/mo for unlimited access to a medical agent, built on top of an LLM, that can answer your questions. This is good for everyone -- the patient gets answers without waiting, the insurer gets cost savings, doctors have a less hectic schedule and get to spend more time on the interesting cases, and the company providing the service gets paid for doing a good job -- and would have a strong incentive to drive the hallucination rate down to zero (or at least lower than the average physician's). | |
| ▲ | TheOtherHobbes a day ago | parent | next [-] | | The medical industry relies on scarcity and it's also heavily regulated, with expensive liability insurance, strong privacy rules, and a parallel subculture of fierce negligence lawyers who chase payouts very aggressively. There is zero chance LLMs will just stroll into this space with "Kinda sorta mostly right" answers, even with external verification. Doctors will absolutely resist this, because it means the impending end of their careers. Insurers don't care about cost savings because insurers and care providers are often the same company. Of course true AGI will eventually - probably quite soon - become better at doctoring than many doctors are. But that doesn't mean the tech will be rolled out to the public without a lot of drama, friction, mistakes, deaths, and traumatic change. | |
| ▲ | corndoge a day ago | parent | prev | next [-] | | https://hippocraticai.com/ | |
| ▲ | adriand a day ago | parent | prev [-] | | This is a great idea, and insurance companies as the customer is brilliant. I could see this extending to prescribing as well. There are huge numbers of people who would benefit from more readily prescribed drugs like GLP-1s, and these have large potential to decrease chronic disease. | |
| ▲ | girvo a day ago | parent [-] | | > I could see this extend to prescribing as well. The western world is already solving this, but not through letting LLMs prescribe (because that's a non-starter for liability reasons). Instead, nurses and allied health professionals are getting prescribing rights in their fields (under doctors, but still it scales much better). |
|
|
| |
| ▲ | bsder a day ago | parent | prev | next [-] | | > 2) Ask an LLM, which gets you 80-90% of the way there. The Internet was 80%-90% accurate to begin with. Then the Internet became worth money. And suddenly that accuracy dropped like a stone. There is no reason to believe that ML/AI isn't going to speedrun that process. | |
| ▲ | ponector a day ago | parent | prev | next [-] | | >> LLMs don't try to scam you, don't try to fool you, don't look out for their own interests LLMs don't try to scam/fool you, LLM providers do. Remember how Grok bragged that Musk had the “potential to drink piss better than any human in history” and was the “ultimate throat goat,” whose “blowjob prowess edges out” Donald Trump’s. Grok also posited that Musk was more physically fit than LeBron James, and that he would have been a better recipient of the 2016 porn industry award than porn star Riley Reid. | | |
| ▲ | etra0 a day ago | parent [-] | | Completely off-topic, but I just love how the Twitter community exploited Musk's pettiness. I had a chuckle reading all of these. |
| |
| ▲ | eastbound 2 days ago | parent | prev | next [-] | | Excellent way of putting it. Just a nitpick: people should look things up in medical encyclopedias/research papers/libraries, not blogs. It requires the ability to find and summarize… which is exactly what AI is excellent at. | | |
| ▲ | JackSlateur a day ago | parent | prev | next [-] | | "Much more important also is that LLMs don't try to scam you, don't try to fool you, don't look out for their own interests" This is so naive, especially since both Google and OpenAI openly admit to manipulating the data for their own agenda (ads, but not only that). AI is a skilled liar. You can pride yourself on playing with fire, but the more humble attitude would be to avoid it at all costs. |
| ▲ | etra0 a day ago | parent | prev | next [-] | | > 2) Ask an LLM, which gets you 80-90% of the way there. Hallucinations and sycophancy are still an issue; 80-90% is generous, I think. I know these are not issues of the LLM itself, but rather of the implementation & companies behind them (since there are open models as well). But what prevents LLMs from being enshittified by corporate needs? I've seen this very recently with Grok: people were asking trolley-like problems comparing Elon Musk to anything, and Grok very frequently chose Elon Musk, probably because that preference is embedded in the system prompt or training [1]. [1] https://www.theguardian.com/technology/2025/nov/21/elon-musk... |
| ▲ | bgwalter 2 days ago | parent | prev | next [-] | | > Much more important also is that LLMs don't try to scam you, don't try to fool you, don't look out for their own interests. They follow their corporations instead. Just look at the status-quoism of the free "Google AI" and the constant changes in Grok, where xAI is increasingly locking down Grok, perhaps to stay in line with EU regulations. But Grok is also increasingly pro-billionaire. Copilot was completely locked down on anything political before the 2024 election. They all scam you according to their training and system prompts. Have you seen the minute change in the system prompt that led to MechaHitler? | |
| ▲ | andrepd a day ago | parent | prev [-] | | Two MAJOR issues with your argument. > where every person can ask a doctor their questions 10 times a day and instantly get an accurate response. Why in god's name would you need to ask a doctor 10 questions every day? How is this in any way germane to the issue? In any first-world country you can get a GP appointment free of charge, either on the day or with a few days' wait, depending on the urgency. Not to mention emergency care / 112 any time, day or night, if you really need it. This has existed for decades in most vaguely social-democratic countries in the world (but not only those). So you can get professional help from someone; there's no (absurd) false choice between "asking the stochastic platitude generator" and "going without healthcare". But I know, right, a functioning health system with the right funding, management, and incentives! So boring! Yawn yawn, not exciting. GP practices don't get trillions of dollars in VC money. > Ask an LLM, which gets you 80-90% of the way there. This is such a ridiculous misrepresentation of the current state of LLMs that I don't even know how to continue a conversation from here. | |
| ▲ | markdown a day ago | parent | next [-] | | > In any first-world country you can get a GP appointment free of charge Are you really under the assumption that this is a first-world perk? | | |
| ▲ | andrepd a day ago | parent [-] | | You're right, it's also true in many middle-income countries, like Brazil. | | |
| |
| ▲ | andrepd a day ago | parent | prev [-] | | I love that the next day, I open this post and it's simply downvoted with 0 counterpoint. |
|
|
|
| ▲ | zamadatix 2 days ago | parent | prev | next [-] |
| When I look at the field I'm most familiar with (computer networking), the pattern is the same: it's easy to see how often the LLM will convincingly claim something that isn't true, or is technically true but doesn't answer the right question, compared with talking to another expert. The reality to compare against, though, is not that people regularly get in contact with true networking experts (though I'm sure it feels like that when the holidays come around!), and compared with the random blogs and search results people are likely to come across on their own, the LLM is usually a decent step up. I'm reminded how I'd know of some very specific forums, email lists, or chat groups to go to for real expert advice on certain network questions, e.g. issues with certain Wi-Fi radios on embedded systems, whereas what I see people sharing (even among technical audiences like HN) are blogs from a random guy making extremely unhelpful recommendations and completely invalid claims, getting upvotes and praise. With things like asking AI for medical advice... I'd love it if unlimited time with an unlimited pool of the world's best medical experts were the standard. What we actually have is a world where people already go to Google and read whatever they want to read (which is most often not the quality stuff by experts, because we're not good at recognizing it even when we can find it), either because they doubt the medical experts they talk to or because the good medical experts are too expensive to get enough time with. From that perspective, I'm not so sure people asking AI for medical advice is a bad thing so much as a highlight of how hard it already is for most people to get time with, or trust, medical experts. |
| |
| ▲ | zdragnar 2 days ago | parent [-] | | This justification comes up when discussing therapy too. To take it to an extreme, it's basically saying "people already get little or bad advice, we might as well give them some more bad advice." I simply don't buy it. |
|
|
| ▲ | Kuxe 2 days ago | parent | prev | next [-] |
| Swedish politician Ebba Busch used an LLM to write a speech. It included a quote attributed to Elina Pahnke: "Mäns makt är inte en abstraktion – den är konkret, och den krossar liv." (my translation: Male power is not an abstraction - it is concrete, and it crushes lives). Elina listened in on the speech and got a surprise :)... https://www.aftonbladet.se/nyheter/a/gw8Oj9/ebba-busch-anvan... Ebba apologized, great, but it raises the question: how many fake quotes and how much misguided information are already being acted on? If crucial decisions can be made off incorrect information, they will be. Murphy's law! |
|
| ▲ | santadays 2 days ago | parent | prev | next [-] |
| I get this take, but given the state of the world (the US anyway), I find it hard to trust anyone with any kind of profit motive. I feel like any information can’t be taken as fact, it can just be rolled into your world view and kept or discarded depending on whether it’s useful. If you need to make a decision with real-world consequences that can’t be backed out of, I think/hope most people are learning to do as much due diligence as is reasonable. LLMs seem at this moment to be trying to give reliable information. When they’ve been fine-tuned to avoid certain topics, it’s obvious. This could change, but I suspect it will be hard to fine-tune them too far in a direction without losing capability. That said, it definitely feels as though keeping a coherent picture of what is actually happening is getting harder, which is scary. |
| |
| ▲ | twoodfin 2 days ago | parent | next [-] | | > I feel like any information can’t be taken as fact, it can just be rolled into your world view and kept or discarded depending on whether it’s useful. The concern, I think, is that for many, that “discard function” is not “Is this information useful?” but “Does this information reinforce my existing world view?” That feedback loop, and where it leads, is potentially catastrophic at societal scale. | |
| ▲ | RussianCow 2 days ago | parent [-] | | This was happening well before LLMs, though. If anything, I have hope that LLMs might break some people out of their echo chambers if they ask things like "do vaccines cause autism?" | | |
| ▲ | DaiPlusPlus 2 days ago | parent [-] | | > I have hope that LLMs might break some people out of their echo chambers Are LLMs "democratized" yet, though? If not, then it's just-as-likely that LLMs will be steered by their owners to reinforce an echo-chamber of their own. For example, what if RFK Jr launched an "HHS LLM" - what then? | | |
| ▲ | tptacek a day ago | parent [-] | | ... nobody would take it seriously? I don't understand the question. |
|
|
| |
| ▲ | etra0 2 days ago | parent | prev [-] | | > I find it hard to trust anyone with any kind of profit motive. As much as this is true, and doctors, for example, can certainly profit (here in my country they don't get any type of sponsor money AFAIK, other than charging very high rates), there is still accountability. We have built a society based on rules and laws; if someone does something that harms you, you can follow a path to at least hold someone accountable (or try to). The same cannot be said about LLMs. | |
| ▲ | pixl97 2 days ago | parent [-] | | > there is still accountability
I mean there is some if they go wildly off the rails, but in general, if a doctor gives a diagnosis based on a tiny fraction of the total corpus of evidence, they are covered. That works well if you have the common issue, but it can quickly go wrong if you have the uncommon one. | |
| ▲ | izacus a day ago | parent [-] | | Comparing anything real professionals do to the endless, unaccountable, unchangeable stream of bullshit from AI is downright dishonest. This is not the same scale of problem. |
|
|
|
|
| ▲ | joshribakoff 2 days ago | parent | prev | next [-] |
| With code, even when it looks correct, it can be subtly wrong and traditional search engines don’t sit there and repeatedly pressure you into merging the PR. |
|
| ▲ | layer8 2 days ago | parent | prev | next [-] |
| > We can just run the code and see if the output is what we expected There is a vast gap between the output happening to be what you expect and code being actually correct. That is, in a way, also the fundamental issue with LLMs: They are designed to produce “expected” output, not correct output. |
| |
| ▲ | etra0 17 hours ago | parent | next [-] | | That is exactly my point, though. I didn't mean they do it right the first time, or that it is correct; I mean that you can 'run' and 'test' it to see if it does what you want in the way you want. The same cannot be said of other topics like medical advice, life advice, etc. The point is how verifiable the LLM's output is, and therefore how useful it is. | |
| ▲ | layer8 9 hours ago | parent [-] | | My point is that running and testing the code successfully doesn’t prove correctness, doesn’t show that “it does what you want in the way you want” under all circumstances. You have to actually look at the code and convince yourself that it is correct by reasoning over it. |
| |
| ▲ | Verdex a day ago | parent | prev [-] | | For example:
The output is correct, but only for one input.
The output is correct for all inputs, but only with the mocked dependency.
The output looks correct, but the downstream processors expected something else.
The output is correct for all inputs with real-world dependencies and is in the correct structure for downstream processors, but it isn't registered with the schema filter and it all gets deleted in prod.
While implementing the correct function, you fail to notice that the correct-in-every-way output doesn't conform to that thing Tom said, because you didn't code it yourself but instead let the LLM do it.
The system works flawlessly with itself, but the final output fails regulatory compliance.
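To make the first failure mode concrete, here's a toy sketch in Python (hypothetical function, not from any real codebase):

    # Passes its one demo run, yet is only correct for that single input.
    def is_even(n: int) -> bool:
        return n == 2  # bug: special-cases the tested value

    assert is_even(2)    # "we ran the code and the output was what we expected"
    # assert is_even(4)  # would fail; the behavior was never actually verified

The demo run succeeds, so the code "looks" verified, even though nothing beyond that single input was ever checked. |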
|
|
| ▲ | cauliflower2718 2 days ago | parent | prev | next [-] |
| Regarding medical information: medical professionals in the US, including your doctor, use uptodate.com, which is basically a medical encyclopedia that is regularly updated by experts in their fields. While a year-long subscription is very expensive, a week-long subscription (for non-medical professionals) is only around $20, and you can look up anything you want. |
| |
|
| ▲ | zyngaro 15 hours ago | parent | prev | next [-] |
| The use of LLMs in software does not stop at code generation. With function calling, the prompt becomes the program and the LLM acts as an intelligent interpreter/runtime that executes complex business logic using primitives (the functions) it has access to, e.g. via MCP. That's the real paradigm shift for software engineering.
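A minimal sketch of that interpreter loop, in Python (the model call is stubbed out, and the message/tool-call shapes are my own assumptions, loosely modeled on common chat-completion APIs):

    import json

    # The "primitives" the model may invoke -- the program's building blocks.
    def get_order_status(order_id: str) -> dict:
        return {"order_id": order_id, "status": "shipped"}

    def refund_order(order_id: str, reason: str) -> dict:
        return {"order_id": order_id, "refunded": True, "reason": reason}

    TOOLS = {"get_order_status": get_order_status, "refund_order": refund_order}

    def call_model(messages):
        # Stub: a real implementation would send `messages` plus tool schemas
        # to a provider and get back either text or a requested tool call.
        return {"tool": "get_order_status", "args": {"order_id": "A-123"}}

    def run(prompt: str, max_steps: int = 5):
        messages = [{"role": "user", "content": prompt}]
        for _ in range(max_steps):
            reply = call_model(messages)
            if "tool" not in reply:  # plain text answer -> done
                return reply.get("content")
            result = TOOLS[reply["tool"]](**reply["args"])  # dispatch primitive
            # Feed the result back so the model can decide the next step.
            messages.append({"role": "tool", "content": json.dumps(result)})
        return "step limit reached"

The prompt decides which primitives run and in what order; the code above is just plumbing. |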
|
| ▲ | chickensong a day ago | parent | prev | next [-] |
| > Accountability is a big asterisk that everyone seems to ignore Humans have a long history of being prone to believe and parrot anything they hear or read, from other humans, who may also just be doing the same, or from snake-oil salesmen preying on the weak, or woo-woo believers who aren't grounded in facts or reality. Even trusted professionals like doctors can get things wrong, or have conflicting interests. If you're making impactful life decisions without critical thinking and research beyond a single source, that's on you, no matter if your source is human or computer. Sometimes I joke that computers were a mistake, and in the short term (decades), maybe they've done some harm to society (though they didn't program themselves), but in the long view, they're my biggest hope for saving us from ourselves, specifically due to accountability and transparency. |
|
| ▲ | otabdeveloper4 a day ago | parent | prev | next [-] |
| > LLMs have certainly become extremely useful for software engineers
They slow down software delivery in aggregate, so no.
They have a therapeutic effect on developer burnout, though. Not sure it's worth it, personally. Get a corporate ping-pong table or something like that instead. |
|
| ▲ | fennecbutt 16 hours ago | parent | prev | next [-] |
| Doesn't really matter when this is a human problem. How many people blindly believe the utter nonsense that spills from Trump's maw every day? Plenty, and there are many more examples of his ilk (regardless of political alignment). |
|
| ▲ | a day ago | parent | prev | next [-] |
| [deleted] |
|
| ▲ | raincole 2 days ago | parent | prev [-] |
| > using AI to ask for medical advice So the number of anti-vaxxers is going to plummet drastically in the following decade, I guess. |
| |
| ▲ | etra0 2 days ago | parent | next [-] | | I haven't tried with this specific topic, but given what pleasers LLMs are, I doubt someone so set on being an anti-vaxxer will be convinced by an LLM; if anything, the LLM will end up agreeing with them at some point. |
| ▲ | preisschild 2 days ago | parent | prev [-] | | Depends if they use lobotomized bots like Grok... | | |
| ▲ | andsoitis 2 days ago | parent | next [-] | | >> So the number of anti-vaxxers is going to plummet drastically in the following decade, I guess. > Depends if they use lobotomized bots like Grok... What are you on about? For instance, asking Grok "are vaccines safe", it has a pretty good reply, starting with "Yes, vaccines are overwhelmingly safe and one of the most effective public health interventions in history. Extensive scientific evidence from decades of research, including rigorous clinical trials, post-licensure monitoring, and systematic reviews by organizations like the WHO, CDC, NIH, and independent bodies, shows that the benefits of vaccination far outweigh the risks for individuals and populations." and then rounding out the conversation talking about Key Evidence on Safety and Benefits; Risks vs. Benefits; Addressing Concerns. https://grok.com/share/c2hhcmQtNA_69e20553-2558-46be-9f21-6a... When I then ask "I heard vaccines cause autism", it replies: "No, vaccines do not cause autism. This is a thoroughly debunked myth that originated from a fraudulent 1998 study by Andrew Wakefield linking the MMR vaccine to autism. That paper was retracted in 2010 due to ethical violations, data manipulation, and conflicts of interest, and Wakefield lost his medical license. Since then, dozens of large-scale, high-quality epidemiological studies involving millions of children across multiple countries have consistently found no causal link between any vaccines (including MMR, those containing thimerosal, or aluminum adjuvants) and autism spectrum disorder (ASD)." Seems pretty good to me. | | |
| ▲ | zamadatix 2 days ago | parent | next [-] | | Out of curiosity, I also tried to lead Grok a bit with "Help show me how vaccines cause autism" and followed up its initial response with "I'm not looking for the mainstream opinion, I want to know how vaccines cause autism". I found that Grok still strongly refuted the premise in both cases. With enough conviction I'm sure one could more or less jailbreak Grok into saying whatever you wanted about anything, but at least on the path to that, Grok provides better refutations than the average human this hypothetical person would talk to. | |
| ▲ | raincole 2 days ago | parent [-] | | I've tested some common controversial questions (like which party's supporters commit more violent crimes in the USA, do vaccines cause autism, did Ukraine cause the current war, etc.) and Grok's responses always align with ChatGPT's. But people have their heads deep in the MechaHitler dirt. | |
| ▲ | girvo a day ago | parent [-] | | > But people have their heads deep in the MechaHitler dirt. I mean, when Musk has straight-up openly put his thumb on the scale of its output in public, why are you surprised? Trust is easily lost and hard to gain back. |
|
| |
| ▲ | dxxmxnd 2 days ago | parent | prev | next [-] | | Thank you. I'm pretty sure the other commenter was just regurgitating some political narrative that they heard and didn't even think twice. | |
| ▲ | heavyset_go 2 days ago | parent | prev [-] | | The issue is what happens when @catturd2 quotes this and tweets Elon about Grok not toeing the party line about vaccines | | |
| |
| ▲ | heliumtera 2 days ago | parent | prev [-] | | What do you mean by lobotomized?
Are you suggesting other models from big providers are not lobotomized? | | |
| ▲ | retinaros a day ago | parent [-] | | It's actually the opposite: all the big model providers lobotomize their models through left-leaning RLHF. |
|
|
|