lambda 4 days ago

I guess the part where I'm still skeptical is: Google is also still pretty good at search (especially if I avoid the AI summary with udm=14).
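(For anyone unfamiliar: udm=14 is the parameter behind Google's "Web" results filter, so a URL like the following skips the AI overview entirely; the query itself is just an illustration.)

    https://www.google.com/search?q=wikipedia+britannica&udm=14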

I'll take one of your examples: Britannica to seed Wikipedia. I searched for "wikipedia encyclopedia britannica". In less than 1 second, I got search results back.

I spend maybe 30 seconds scanning the page: past the Wikipedia article on Encyclopedia Britannica, past the Britannica article about Wikipedia, past a Reddit thread comparing them, past the Simple English Wikipedia article on Britannica, and past the Britannica article on "wiki". OK, there it is: the link to "Wikipedia:WikiProject Encyclopaedia Britannica", which answers your question.

Then, to answer your follow-up, I spend a couple more seconds searching Wikipedia for Wikipedia, and find in the first paragraph that it was founded in 2001.

So, let's say a grand total of 60 seconds of me searching, skimming, and reading the results. The actual searching was maybe 2 or 3 seconds of time total, once on Google, and once on Wikipedia.

Compare that to nearly 3 minutes for ChatGPT to grind through all of that, plus the time for you to read the result and hopefully verify it by checking its references, because it can still hallucinate.

And what did you pay for the privilege? How much extra energy did you burn for this less efficient response? I wish that when you link to chat transcripts like this, ChatGPT would show the token cost of that particular chat.

So yeah, it's possible to do search with ChatGPT. But it seems like it's slower and less efficient than searching and skimming yourself, at least for this query.

That's generally been my impression of LLMs; it's impressive that they can do X. But when you add up all the overhead of asking them to do X, having them reason about it, checking their results, following up, and dealing with the consequences of any mistakes, the alternative of just relying on plain old search and your own skimming seems much more efficient.

plopilop 4 days ago | parent | next [-]

Agree. I tried the first 3 examples:

* "Rubber bouncy at Heathrow removal" on Google had 3 links, including the one about SFO from which chatGPT took a tangent. While ChatGPT provided evidence for the latest removal date being of 2024, none was provided for the lower bound. I saw no date online either. Was this a hallucination?

* A reverse image lookup of the building gave me the blog entry, but also an Alamy picture of the Blade (admittedly this result may have been biased by the fact that the author had already identified the building as the Blade).

* The Starbucks cake pop Google search led me to https://starbuckmenu.uk/starbucks-cake-pop-prices/. I will add that the author bitching to ChatGPT about ChatGPT's hidden prompts in the transcript is hilarious.

I get why people prefer ChatGPT. It will do all the boring work of curating the internet for you, to provide you with a single answer. It will also hallucinate every now and then, but that seems to be a price people are willing to pay and ignore, just like the added cost compared to a single Google search. Now I am not sure how this will evolve.

Back in the day, people would tell you to be wary of the Internet and that Wikipedia thing, and that you could get all the info you needed from a much more reliable source at the library anyway, for a fraction of the cost. I guess that if LLMs continue to evolve, we will face the same paradigm shift.

animal531 4 days ago | parent | prev | next [-]

I'm going to somewhat disagree based on my recent attempts.

Firstly, if we don't remove the Google AI summary then, as you rightly say, it makes the experience 10x worse. Google still tries to give an answer quickly, but the AI summary takes up a ton of space and is mostly terrible.

Googling for a GitHub repository just now, Google linked me to 3 resources, none of which was the actual page: one clone with the same name, another garbage link, but luckily the third was a Reddit post by the same person that linked to the correct page.

GPT does take a lot longer, but for me its main advantage depends on the scope of what you're looking for. In the above example I didn't mind Google, because the 3 links opened fast and I could scan and click through to find what I was looking for, i.e. I wanted the information right now.

But then let's say I'm interested in something a bit deeper, for example how they did the unit movement in StarCraft 2. This is a well-known question, so the links/info you get from either Google or GPT are all great. If I were researching this topic via Google, I'd then have to copy or bookmark the main topics to continue my research on them. Doing it via GPT, it returns the same main items, but I can very easily tell it to explain all those topics in turn, have it take the notes, find source code, etc.

Of course, as in your example, if you're a doctor googling symptoms, or perhaps the real-world location of ABC, then the specter of hallucination is a dangerous thing you want to avoid at all costs. But for myself, I find I can filter LLM mistakes as easily as I can filter noise/errors from manual searches.

My guess for the future of the Internet is that in N years there will be no such thing as manually searching for anything; everything will be assistant-driven via LLMs.

simonw 4 days ago | parent | prev | next [-]

I suggest trying that experiment again but picking the hardest of my examples to answer with Google, not the easiest.

lambda 4 days ago | parent [-]

Not sure which is the hardest, but sure, let's try them all.

* Bouncy people mover. Some Google searching turns up the SFO article that you liked. Trying to pin down the exact dates is harder. ChatGPT maybe did narrow down the time frame quicker than I could have through a series of Google searches.

* The picture of the building. Go to Google Lens, paste in the image, and less than a second later I get results. Of course, the exact picture from this article comes up on top, but among the other results I get a mix of two different buildings, one of which is identified as the Blade and the other as Independence Temple. So a few seconds here between searching and doing my own quick visual scan of the results.

* Starbucks UK cake pops: This one is harder to find the full details on with a quick Google search. I am able to find that they were fairly recently introduced in the UK after my second search. It looks like ChatGPT gave you a bunch of extra response, some of which you didn't like, because you then spent a while trying to reverse engineer its system prompt rather than doing any actual follow-up on the question itself.

* Official name of the University of Cambridge: search gave me Wikipedia, and Wikipedia contains the official name plus a link to a reference on the University's own page. Pretty quick to solve with Google Search/Wikipedia.

* Exeter quay. I searched for "waterfront exeter cliff building" and found this result towards the top of the results: https://www.exeterquay.org/milestones/ which explains "Warehouses were added in 1834 [Cornish's] and 1835 [Hooper's], with provision for storing tobacco and wine and cellars for cider and silk were cut into the cliffs downstream." You seemed to be a lot more entertained by ChatGPT's persistence in finding more info, but for satisfying curiosity about the basic question, I got an answer pretty quickly via Google.

* Aldi vs Lidl: this is a much more subjective question, so whether the results you get via a quick Google search meet your needs better than the summary of subjective opinions you get via ChatGPT is more a question only you can answer. I do find some Reddit threads and similar with a quick Google search.

* Book scanning. You asked specifically about destructive book scanning. You can do a quick search on each of the labs plus "book scanning" and find the same lack of results that ChatGPT gives you; it maybe takes a similar amount of time to how long ChatGPT spent thinking. You pretty much only find references to Anthropic doing destructive book scanning, and Google doing mostly non-destructive scanning.

Anyhow, the results are mixed. For a bunch of these, I found an answer quicker via a Google search (or Google Lens search) plus some quick scanning/filtering myself. For a few of them, I feel like it was a wash. A couple of them actually do take more iteration/research, the bouncy travelator being the most extreme example, I think; narrowing down the timeline on my own would take a lot of detailed looking through sources.

simonw 4 days ago | parent [-]

This is a fair analysis, thanks for taking the time.

As far as I can tell, the Google + Wikipedia solution gets the name of Cambridge University wrong: Wikipedia lists it as "The Chancellor, Masters and Scholars of the University of Cambridge", whereas GPT-5 correctly verified it to be "The Chancellor, Masters, and Scholars of the University of Cambridge" (note the extra comma), as listed on https://www.cam.ac.uk/about-the-university/how-the-universit...

I tried to reverse engineer the system prompt in the cake pop conversation https://chatgpt.com/share/68bc71b4-68f4-8006-b462-cf32f61e7e... purely because I got annoyed at it for answering "haha I believe you". I particularly disliked the lowercase "haha" because I've seen it switch to lowercase (even the lowercase word "i") in the past, and I wanted to know what was causing it to start talking the same way Sam Altman tweets.

timeinput 3 days ago | parent [-]

But that's an Oxford comma! You can't use that when describing Cambridge.

IanCal 4 days ago | parent | prev | next [-]

As a counterpoint, I asked that simple question to GPT-5 in auto mode and it started replying in two seconds, wrote fast enough for me to scan the answer, and gave me two solid links to read after.

With thinking it took longer (just shy of two minutes), but it compared a variety of different sources and came back with numbers and each statement in the summary sourced.

I've used GPT a bunch for finding things like bin information on the council site that I just couldn't easily find myself. I've also sent it off to dig through PRs, specs, and more for Matrix, where it found the features and experimental flags required to solve a problem I had. Reading that many proposals and checking what's been accepted is a massive pain, and it solved this while I went to make a coffee.

dwayne_dibley 4 days ago | parent | prev | next [-]

I wonder how all this will really change the web. In your manual mode, you, a human, are viewing and visiting webpages. But if one never needs to, and always interacts with the web through an agent, what does the web need to look like, and will people even bother making websites? Interesting times ahead.

gitmagic 4 days ago | parent [-]

I’ve been thinking about this as well. Instead of making websites, maybe people will make something else, like some future version of MCP tools/servers? E.g. a restaurant could have an “MCP tool” for checking opening hours, reserving a table, etc.
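To make that concrete, here is a rough sketch of what such a tool could look like with the official TypeScript MCP SDK; the server name, tool name, and opening hours are all made up for illustration:

    import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
    import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
    import { z } from "zod";

    // Hypothetical server a restaurant might expose to visiting agents.
    const server = new McpServer({ name: "example-restaurant", version: "1.0.0" });

    // A tool an agent can call instead of scraping an opening-hours page.
    server.tool(
      "check_opening_hours",
      { date: z.string().describe("ISO date, e.g. 2026-02-14") },
      async ({ date }) => ({
        content: [{ type: "text", text: `Open 12:00-22:00 on ${date}` }],
      })
    );

    // Serve the tool over stdio so a local agent can connect to it.
    await server.connect(new StdioServerTransport());

A reservation tool would follow the same pattern, just with more parameters (party size, time) and a side effect.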

diabllicseagull 4 days ago | parent | next [-]

I hope none of this happens and the web stays readable and indexable.

timeinput 3 days ago | parent [-]

I sure hope it stays readable, but it seems like it would only become more indexable with machine-friendly formats.

rossant 4 days ago | parent | prev [-]

Same. Websites won't disappear, but they may become niche or a thing of the past. Why create a new UI for your new service when you can plug into a "universal" personal agent AI?

recursive 4 days ago | parent [-]

So I can get reliable answers.

Why create fancy new infrastructure for this universal thing, when the old thing already does it more reliably and in fewer steps?

bgwalter 4 days ago | parent | prev | next [-]

Yes, Google with udm=14 is much better than "AI". "AI" might work for the trivia-type questions from this article, which most people aren't interested in to begin with.

It fails completely for complex political or investigative questions where there is no clear answer. Reading a single Wikipedia page is usually a better use of one's time:

You don't have to pretend that you are parallelizing work (which is just for show) while waiting three minutes for the "AI" answer. You practice speed reading and memory retention. You enhance your own semantic network instead of the network owned and controlled by oligopoly members.

Faaak 4 days ago | parent | prev | next [-]

A bit unrelated, but on Firefox there's the Straight to the Web extension that automatically appends the udm=14 parameter, so AI gets disabled :-)
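You can get the same effect by hand with a userscript; a minimal sketch (not the extension's actual code):

    // ==UserScript==
    // @name   Google web-only results
    // @match  https://www.google.com/search*
    // ==/UserScript==
    // Redirect any Google search that lacks udm=14 to the
    // web-results-only view, which drops the AI overview.
    const url = new URL(location.href);
    if (url.searchParams.get("udm") !== "14") {
      url.searchParams.set("udm", "14");
      location.replace(url.toString());
    }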

wilg 4 days ago | parent | prev | next [-]

First, you don't have to spend the 60 seconds yourself, and you can parallelize it with something else, so you get the answer effectively instantly. Second, you're essentially establishing that if an LLM can get it done in less than 60 seconds it's better than your manual approach, which is a huge win, as this will only get faster!

sigmoid10 4 days ago | parent | next [-]

For real. This is what it must have been like living in the early 20th century and hearing people say they prefer a horse to get groceries because it is so much more effort to crank-start a car. I look forward to the age when we gleefully reminisce about the time we had to deal with SEO spam manually.

lomase 4 days ago | parent | next [-]

I look forward to the day the AI hype is as dead as the blockchain hype.

hu3 3 days ago | parent [-]

Most of my colleagues use AI daily.

It's not going away, ever.

lomase 2 days ago | parent [-]

Those colleagues can still use their machine-learning autocomplete; I hate the hype, not the tech.

prmoustache 4 days ago | parent | prev [-]

The thing is, whether cars were a net plus for humanity is still debatable to this day. We won a lot in some areas but lost an awful lot in others.

sigmoid10 4 days ago | parent [-]

That's a ridiculously cynical claim. And even if it were true, it misses the fact that horses, and the entire economy built around them, stood no chance in the end. Today we are again in a situation where many people who are alive and working will have to consider getting reeducated if they don't want to be left behind, whether they like it or not.

lambda 4 days ago | parent | prev [-]

There's no useful parallelization that could happen during this particular search. It took a couple of iterations of research via ChatGPT, then reading the results and looking at the referenced sources; the total interaction time with ChatGPT is a similar 60 seconds or so. The main difference is the 3 minutes of waiting for it to generate answers versus maybe a couple of seconds for the searches.

wilg 3 days ago | parent [-]

What? You can do anything else you want during the search?

utyop22 4 days ago | parent | prev [-]

Very nice post. Captures my sentiment too.