| ▲ | jonas21 2 days ago |
| Do you not see ChatGPT and Claude as viable alternatives to search? They've certainly replaced a fair chunk of my queries. |
|
| ▲ | ElijahLynn 2 days ago | parent | next [-] |
| Same, my Google use has dropped noticeably, probably by 90%. The feeling I had when I first started using ChatGPT in late 2022 was the same feeling I had when Google search came out in the early 2000s: "oh, ChatGPT is the new Google". |
| |
| ▲ | hangonhn 2 days ago | parent [-] | | Same feeling for me as well. It was like the old Google in that it led you to the right answer. ChatGPT is similar but in some ways smoother because it's conversational. I think most days I don't even use Google at all. That said, their "Dive into AI" feature has caused me to use it more lately. |
|
|
| ▲ | adam_arthur 2 days ago | parent | prev | next [-] |
| Google Search has AI responses above the fold. Eventually those answers will be sufficient for most people and give them no reason to move to alternatives. Allowing Google to pay to be the default seems to mostly guarantee this outcome. |
| |
| ▲ | rockskon a day ago | parent [-] | | Those answers, generated by the awful model they're using, are frequently wrong, and it's aggravating to have them displace search results. |
|
|
| ▲ | wiredpancake 2 days ago | parent | prev | next [-] |
| You are losing brain cells relying almost entirely on ChatGPT. |
| |
| ▲ | yamazakiwi 2 days ago | parent | next [-] | | I'm losing brain cells relying on Google Search shoving ad-riddled trash in my face and even worse AI results. Gemini frequently just straight up lies to me, saying the opposite of the truth so often that I have experienced negative real-life consequences from believing it. The only people who are being homogenized or "downgraded" by ChatGPT are people who wouldn't have sought out other sophisticated strategies in the first place; those who understand that ChatGPT is a tool, how it works, and its context can use it efficiently to great positive effect. Obviously ChatGPT is not perfect, but it doesn't need to be perfect to be useful. For a search user, Google Search has not been effective for so long that it's unbelievable people still use it. That is, if you believe search should be a helpful tool with utility and not a product made to generate maximum revenue at the cost of the search experience. Would you say that people were losing brain cells using Google in 2010 to look up an animal fact instead of going to a library and opening an encyclopedia? | | |
| ▲ | dns_snek 2 days ago | parent [-] | | > Gemini frequently just straight up lies to me I'm pretty sure they meant LLMs in general, not just ChatGPT. They all straight up lie to very similar degrees; no contest there. > The only people who are being homogenized or "downgraded" by ChatGPT are people who wouldn't have sought out other sophisticated strategies in the first place; those who understand that ChatGPT is a tool, how it works, and its context can use it efficiently to great positive effect. I know for a fact that this isn't true. I have a friend who was really smart, probably used to have an IQ of 120, and he would agree with all of this. But a few of us are noticing that he's essentially being lobotomized by LLMs. We've been trying to warn him, but he just doesn't see it; he's under the impression that "he's using LLMs efficiently with great positive effect". In reality his intellectual capabilities (which I used to really respect) have withered, and he gets strangely argumentative about really basic concepts that he's absolutely wrong about. It seems like he won't accept something as true until an LLM says so. We used to laugh at those people together because this could never happen to us, so don't think it can never happen to you. Word of advice for anyone reading this: if multiple people in your life suddenly start warning you that your LLM interactions seem to be becoming a problem for one reason or another, make the best possible effort to hear them out and take them seriously. I know it probably sounds absurd from your point of view, but that's simply a flaw in our perception of ourselves: we don't see ourselves objectively, and we don't realize when we've changed. | | |
| ▲ | yamazakiwi 14 hours ago | parent [-] | | If you are talking about an adult, I don't believe you lol. And if it is true... it is not a common experience, and external factors would be contributing to this behavior. Additionally, using IQ to qualify someone's intelligence is a signal, so I won't go into it deeper since we will disagree: I find your anecdote juvenile, a straight-up exaggeration, or a complete lie made to serve your opinion. Plausibly this could happen if you had the ego of a 16-year-old or were socially disabled, and it would be alleviated over time through experience. I'm not trying to be rude, but you sound like a TikTok conspiracy theorist, and I'm old enough and experienced enough to smell bullshit. |
|
| |
| ▲ | 2 days ago | parent | prev | next [-] | | [deleted] | |
| ▲ | robryan 2 days ago | parent | prev [-] | | Sure, you could end up with occasional misinformation, but the speed at which you can get information more than makes up for it. Niche topics that would otherwise take hours or days of pulling together and summarising obscure sources take minutes with LLMs. |
|
|
| ▲ | quitit 2 days ago | parent | prev | next [-] |
| I'm noticing that newer computer users seek information exclusively from ChatGPT and don't google at all. They want the answer right away and usually aren't aware of, or bothered by, the hallucination problem. While that's concerning, my own experience with this approach has been positive: the fast, fully customised answers easily outweigh the mistakes it makes. This flattens the learning curve on a new subject, and with the time saved I can confirm the important details to weed out the mistakes/hallucinations. Whereas with googling I'd be reading technical documentation, blog posts and whatever else I could find, and, crucially, I'd still need to confirm the important details because that step was never optional. Another plus is that I'm no longer subjected to low-quality AI-generated blog spam when seeking information. I foresee Google Search losing relevance rapidly; chatbots are the path of least resistance and "good enough" for most tasks. But I'm also aware that Google's surveillance-based data collection will continue to be fruitful for them regardless of whether I use Google Search or not. |
|
| ▲ | bediger4000 2 days ago | parent | prev | next [-] |
| I do not. I prefer to read the primary sources; LLM summaries are, after all, probabilistic and based on syntax. I'm often looking for semantics, and an LLM really, really is not going to give me that. |
| |
| ▲ | crazygringo 2 days ago | parent | next [-] | | Funny, I use LLMs for so much search now because they understand my query semantically, not just its syntax. Keyword matching fails completely for certain types of searching. | | |
| ▲ | balder1991 2 days ago | parent [-] | | Also weirdly LLMs like ChatGPT can give good sources that usually wouldn’t be at the top of a Google query. | | |
| ▲ | matwood 2 days ago | parent [-] | | There’s a particular Italian government website and the only way I can find it is through ChatGPT. It’s a sub site under another site and I assume it’s the context of my question that surfaces the site when Google wouldn’t. |
|
| |
| ▲ | sothatsit 2 days ago | parent | prev | next [-] | | Tools like GPT-5 Thinking are actually pretty great at linking you to primary sources. It has become my go-to search tool because even though it is slower, the results are better. Especially for things like finding documentation. I basically only use Google for "take me to this web page I already know exists" queries now, and maps. | | |
| ▲ | Rohansi 2 days ago | parent [-] | | > pretty great at linking you to primary sources Do you check all of the sources though? Those can be hallucinated and you may not notice unless you're always checking them. Or it could have misunderstood the source. It's easy to assume it's always accurate when it generally is. But it's not always. | | |
| ▲ | matwood 2 days ago | parent | next [-] | | > It's easy to assume it's always accurate when it generally is. But it's not always. So like a lot of the internet? I don’t really understand this idea that LLMs have to be right 100% of the time to be useful. Very little of the web currently meets that standard and society uses it every day. | | |
| ▲ | johannes1234321 2 days ago | parent | next [-] | | It's a question of judgement in the individual case. Documentation for a specific product I expect to be mostly right, though it may miss the required detail. A blog by some author I haven't heard of I trust less. Some third-party sites I give some trust, some less. AI is a mixed bag, while always implying authority on the subject. (And becoming submissive when corrected.) | |
| ▲ | Rohansi 2 days ago | parent | prev [-] | | It's a marketing issue. LLMs are being marketed similar to Tesla's FSD - claims of PhD-level intelligence, AGI, artificial superintelligence, etc. set the expectation that LLMs should be smarter than (most of) us. Why would we have any reason to doubt the claims of something that is smarter than us? Especially when it is very confident about the way it is saying it. | | |
| ▲ | matwood 2 days ago | parent [-] | | That's fair. The LLM hype has been next level, but it's only rivaled by the 'it never works for anything and will make you stupid' crowd. Both are wrong in my experience. |
|
| |
| ▲ | sothatsit 2 days ago | parent | prev [-] | | I have noticed it hallucinating links when it can't find any relevant documentation at all, but otherwise it is pretty good. And yes, I do check them. The type of search you are doing probably matters a lot here as well. I use it to find documentation for software I am already moderately familiar with, so noticing the hallucinations is not that difficult. Although, hallucinations are pretty rare for this type of "find documentation for XYZ thing in ABC software" query. Plus, it usually doesn't take very long to verify the information. I did get caught once by it mentioning something was possible that wasn't, but out of probably thousands of queries I've done at this point, that's not so bad. Saying that, I definitely don't trust LLMs in any cases where information is subjective. But when you're just talking about fact search, hallucination rates are pretty low, at least for GPT-5 Thinking (although still non-zero). That said, I have also run into a number of problems where the documentation is out-of-date, but there's not much an LLM could do about that. |
|
| |
| ▲ | the_duke 2 days ago | parent | prev | next [-] | | Gemini 2.5 always provides a lot of references, without being prompted to do so. ChatGPT 5 also does, especially with deep research. | |
| ▲ | pas 2 days ago | parent | prev | next [-] | | it's not syntax, it's data-driven (yes, of course syntax contributes to that) https://freedium.cfd/https://vinithavn.medium.com/from-multi... At its core, attention operates through three fundamental components (queries, keys, and values) that work together with attention scores to create a flexible, context-aware vector representation.
Query (Q): The query is a vector that represents the current token for which the model wants to compute attention.
Key (K): Keys are vectors that represent the elements in the context against which the query is compared to determine relevance.
Attention Scores: These are computed from the Query and Key vectors to determine how much attention to pay to each context token.
Value (V): Values are the vectors that represent the actual contextual information. After the attention scores are calculated from the Query and Key vectors, they are applied to the Value vectors to produce the final context vector.
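For the curious, here is a minimal sketch of the scaled dot-product attention the excerpt describes, in plain numpy (function and variable names are illustrative, not from any particular library):

    import numpy as np

    def scaled_dot_product_attention(Q, K, V):
        # Q: (seq_len, d_k) query vectors, K: (seq_len, d_k) key vectors,
        # V: (seq_len, d_v) value vectors.
        d_k = Q.shape[-1]
        # Attention scores: how relevant each key is to each query, scaled by sqrt(d_k).
        scores = Q @ K.T / np.sqrt(d_k)
        # Softmax over the keys turns each row of scores into attention weights.
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)
        # Weighted sum of the value vectors yields the final context vectors.
        return weights @ V

    # Toy example: 3 tokens, 4-dimensional embeddings, random projection matrices.
    rng = np.random.default_rng(0)
    x = rng.normal(size=(3, 4))
    Wq, Wk, Wv = (rng.normal(size=(4, 4)) for _ in range(3))
    context = scaled_dot_product_attention(x @ Wq, x @ Wk, x @ Wv)
    print(context.shape)  # (3, 4)

The whole mechanism is matrix arithmetic over learned vectors, which is the "data-driven" part being pointed at here.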
| |
| ▲ | throwaway314155 2 days ago | parent | prev | next [-] | | ChatGPT provides sources for a lot of queries, particularly if you ask. I'm not defending it, but you can get what claim to want in an easier interface than Google. | |
| ▲ | hackinthebochs 2 days ago | parent | prev | next [-] | | That Searlesque syntax/semantics dichotomy isn't as clear cut as it once was. Yes, programs operate syntactically. But when semantics is assigned to particular syntactic structures, as it is with word embeddings, the computer is then able to operate on semantics through its facility with syntax. These old standard thought patterns need to be reconsidered in the age of LLMs. | |
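To make the word-embedding point above concrete, a toy sketch (the 3-d vectors are hand-picked stand-ins for learned embeddings, which in real models are learned and have hundreds of dimensions): the operations below are purely numeric, i.e. "syntactic", yet the results track semantic relatedness.

    import numpy as np

    # Toy, hand-picked vectors standing in for learned word embeddings.
    emb = {
        "king":  np.array([0.9, 0.8, 0.1]),
        "queen": np.array([0.85, 0.82, 0.15]),
        "apple": np.array([0.1, 0.2, 0.95]),
    }

    def cosine(a, b):
        # Purely numeric operation on the vectors.
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    print(cosine(emb["king"], emb["queen"]))  # close to 1: semantically related
    print(cosine(emb["king"], emb["apple"]))  # much lower: unrelated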
| ▲ | whycome 2 days ago | parent | prev | next [-] | | Since when does Google give you primary sources for simple queries? You have to wade through all the garbage. At least an LLM will give you the general path and provide sources. | | |
| ▲ | scarface_74 2 days ago | parent | prev [-] | | ChatGPT gives you web citations from real time web searches. |
|
|
| ▲ | ajross 2 days ago | parent | prev [-] |
| > Do you not see ChatGPT and Claude as viable alternatives to search? This subthread is classic HN. Huge depth of replies all chiming in to state some form of the original prior: that "AI is a threat to search"... ... without even a nod to the fact that by far the best LLM-assisted search experience today is available for free at the Google prompt. And it's not even close, really. People are so set in their positions here that they've stopped even attempting to survey the market those opinions are about. (And yes, I'm biased I guess because they pay me. But to work on firmware and not AI.) |
| |
| ▲ | glenstein 2 days ago | parent | next [-] | | Like others have noted, I think it's far from obvious that Google's LLM prompt is the best experience in the space. I would say it's clearly not in the top tier, and relatively speaking I consider it bad compared to the best options. Assuming we're talking about the AI-generated blurbs at the top of search results, there are loads of problems. For one, they frequently don't load at all. For another, search is an awkward place for them to be. I interact with search differently than with a chat interface, where you're embedding a query in a kind of conversational context such that both your query and the answer are rich in contextual meaning. With search I'm typically fact-finding and fighting against Google's page rank optimizations to break through and get the information I need. In a search context, AI blurbs don't benefit from context-rich prompts, aren't able to give context-rich answers, and tend to give generic background that isn't necessarily what I asked for. To really benefit from them I would have to use the search bar in a prompt-like way, which would likely degrade the search results. Generally this hybrid interaction is not natural or easy to optimize, and we all know nobody is asking for it; it's just bolted on to neutralize the temptation to leave search behind in favor of an LLM chat. And though less important, Material Design as applied to Google's websites in the browser is not good design; it's ugly and the wrong way to have a prompt interaction. That's also the case for Gemini in a web browser. Meanwhile GPT and Claude are a bit more comfortable with information density and are better visual and interactive experiences because of it. | |
| ▲ | brookst 2 days ago | parent | prev | next [-] | | If Google went all-in on the AI overview and removed search results and invested more heavily in compute, it could be pretty good. But as it stands, it's a terrible user experience. It's ugly, the page remains incredibly busy and distracting, and it is wrong far more often than ChatGPT (presumably because of inference cost at that scale). It might be good enough to slow the bleeding and keep less demanding users on SERP, but it is not good enough to compete for new users. | |
| ▲ | socksy 2 days ago | parent | prev | next [-] | | What? The Google LLM-assisted search experience is... not the best option by a long shot? It's laughably incorrect in many cases, and infuriatingly incorrect in the others. It forces itself into your queries above the fold without being asked, and then bullshits to you. A recentish example: I was trying to remember which cities' buses were in Thessaloniki before they got a new batch recently. They used to rent from a company (Papadakis Bros) that would buy out-of-commission buses from other cities around the world and maintain the fleet. I could remember specifically that there were some BVG buses from Berlin and some Dutch buses, and was vaguely wondering whether there were also some from Stockholm that I couldn't remember. So I searched on my iPad, which defaulted to Google (since clearly I hadn't got around to setting up a good search engine on it yet). And I get this result: https://i.imgur.com/pm512HU.jpeg The LLM forced its way in there without me prompting (in e.g. Kagi, you opt in by ending the query with a question mark). It fundamentally misunderstands the question. It then treats me like an idiot for not understanding that Stockholm is a city in Sweden and Thessaloniki a city in Greece. It uses its backlinking functionality to help cite this great insight. And it takes up the entire page! There's not a single search result in view. This is such a painful experience that it confirms my existing bias that, since they introduced LLMs (and honestly for a couple of years before that), Google is no longer a good first place to go for information. It's more of a last resort. Both ChatGPT and Claude have a free tier, and the ability to do searches. Here's what ChatGPT gave me: https://chatgpt.com/share/68b78eb7-d7b4-8006-81e0-ab2c548931... A lot of casual users don't hit the free-tier limits (and indeed I've not hit any limits on the free ChatGPT yet), and while they have their problems, they're both far better than the Gemini-powered summaries Google have been pumping out. My suggestion is that perhaps you haven't surveyed the market before suggesting that "by far the best LLM-assisted search experience today is available for free at the Google prompt". | | |
| ▲ | codethief 2 days ago | parent | next [-] | | > The LLM forced its way in there without me prompting I agree this is annoying but other than that I really can't follow your argument: You're comparing a keyword-like "prompt" given to Google's LLM to a well-phrased question given to ChatGPT and are surprised the former doesn't produce the same results? | |
| ▲ | ajross 2 days ago | parent | prev [-] | | It's so frustrating the way AI argumentation goes. People will cherry-pick outrageously specific items and extend them into crazy generalizations. I mean... your phrasing was 100% ambiguous! There's no such thing as a "Stockholm bus" or "Stockholm rolling stock". There are buses in Stockholm, and buses in Thessaloniki, and buses manufactured in Sweden, and buses previously used in Stockholm that are now in operation in Thessaloniki. And one LLM took one path through the question, answering it correctly and completely. And the other took a different one[1]. As it happened, your (poorly phrased) intended question was answered by one and not the other. If I ask the same question with a more careful phrasing that (I think!) matches what you wanted to know: "Where did the buses used in Thessaloniki come from originally?" ...I get correct and clear answers from both. But the Google result also has the Wikipedia page for the transit operator and its own web page immediately to the right. Again, cherry-picking notwithstanding, I think in general the integrated experience of "I need an AI to help me with this problem" works much better at google.com; it just does. [1] It's worth pointing out that the result actually told you that your question didn't make sense, and why. I suspect you think this was a bug since the other LLM guessed instead, but it smells like a feature to me. |
| |
| ▲ | liveoneggs 2 days ago | parent | prev | next [-] | | Just like google cloud is the best ;) | |
| ▲ | rs186 2 days ago | parent | prev [-] | | I have seen way more hallucination from "AI overview" than from ChatGPT. You are biased, sure, but it seems that you haven't even used ChatGPT or other similar products enough to attempt a fair assessment. | |
| ▲ | zargon 2 days ago | parent [-] | | I'm not sure I have ever seen "AI overview" not hallucinate. Granted, I only end up at google on other people's computers or on some fresh install where I haven't configured search yet. | | |
| ▲ | ajross 2 days ago | parent [-] | | > Granted, I only end up at google on other people's computers or on some fresh install where I haven't configured search yet. Which is exactly my point. A bunch of people doing that to conform with the shibboleth identity of the phone in their pocket and then posting strong opinions about the product they don't (or at least claim not to) use is an echo chamber and not a discussion. You only get the upvotes in these threads if you conform. HN is supposed to be better than that. | | |
| ▲ | zargon 2 days ago | parent [-] | | > the shibboleth identity of the phone in their pocket Someone here has religion, and it's not me. I don't use Google search because it's a terrible product and we finally have other options. As for AI, there are dozens of options, and it does not take many examples to see how bad AI Overview is. Gemini 2.5 Pro, however, is in my tool belt. | |
| ▲ | ajross a day ago | parent [-] | | > I don’t use Google search [...] I know. But you're posting confidently (along with a ton of other people) in a subthread about Google search anyway, making statements about its behavior which you straight up admit to be unqualified to make. And I'm calling out the disconnect, because someone has to. No one in an echo chamber thinks they're in an echo chamber. This is 100% an echo chamber. | | |
| ▲ | Karrot_Kream a day ago | parent | next [-] | | I gave up contributing to these threads years ago. On the edutainment scale HN threads on search have long tipped over to the "entertainment" side. Most of these threads are for people to performatively sneer at ads, javascript, the web, normies etc. HN just can't have conversation about search in a realistic way anymore. | |
| ▲ | zargon a day ago | parent | prev [-] | | The absurdity of this argument speaks for itself. "You're unqualified to judge it because it's so bad you can't tolerate using it." I suffered through Google search's decline for the last 15 years along with everyone else. I land on it often enough still to see that the trend is not changing. |
|
|
|
|
|
|