| ▲ | bigstrat2003 2 days ago |
| I don't agree that ChatGPT gives an overall better experience than Google, let alone an actually good search engine like Kagi. It's very rare that I need to ask something in plain English because I don't know the keywords, so the one edge the LLM might have is moot. Meanwhile, because it bullshits a lot (not just sometimes, a lot), I can't trust anything it tells me. At least with a search engine I can figure out whether a given site is reliable or not; with the LLM I have no idea. People say all the time that LLMs are so much better for finding information, but that's completely at odds with my own experience. |
|
| ▲ | Wurdan 2 days ago | parent | next [-] |
| Why not both? You mention Kagi, and I find its Assistant to be a very useful mix of LLM and search engine.
Something I asked it recently is whether Gothenburg has any sky-bars that overlook Hisingen to the North, and it correctly gave me one.
A search engine could have given me a list of all the sky-bars, and by looking at their photos on Google Maps I could probably have found one with the view / perspective I wanted. But Kagi Assistant using Kimi K2 did a decent job of narrowing down the options I had to research. |
|
| ▲ | barnabee 2 days ago | parent | prev | next [-] |
| I’d rather use any LLM that can search the web (including whatever local model I’m currently running on my MacBook) over Google. I also prefer the results from Kagi (which I generally use), DuckDuckGo, and Ecosia. I still don’t think a company with a touch point on such a high percentage of web usage should be allowed to control one of the two mobile OSs that dominate that market, the most popular browser, the most popular search engine, the top video site (which is also a massive social network), and a huge business placing ads on third-party sites. Any two of these should be cause for concern, and we are well past the point where Google’s continued existence as a single entity is hugely problematic. |
|
| ▲ | jve 2 days ago | parent | prev | next [-] |
| For me, ChatGPT in some instances replaces Google in a very powerful way. I've been researching waterproofing techniques in my area and asked ChatGPT about products in my region. It gladly mentioned some and provided links to shops. I found out I need to prep the foundation with product X; one shop had only Y available, which from the description felt similar. I asked about the differences between the products and got a summary table that made it crystal clear that one is more of a finishing product while the other is structural and can also be used as a finish. It provided links to datasheets that confirmed the information, I could ask about alternative products and it listed some, etc. Great when I need to research an unknown field, and it comes with links... that is the good part :) |
|
| ▲ | Andrew_nenakhov 2 days ago | parent | prev [-] |
| ChatGPT, Grok and the like give an overall better experience than Google because they give you the answer, not links to some pages where you might find the answer. So unless I'm explicitly searching for a specific thing, like some article, asking Grok is faster and gets me an acceptable answer. |
| |
| ▲ | dns_snek 2 days ago | parent [-] | | You get an acceptable answer maybe about 60% of the time, assuming most of your questions are really simple. The other 40% of the time it's complete nonsense dressed up as a reasonable answer. | | |
| ▲ | Andrew_nenakhov 2 days ago | parent | next [-] | | In my experience I get acceptable answers to more than 95% of the questions I ask. In fact, I rarely use search engines now. (BTW, I jumped off Google almost a decade ago and have been using DuckDuckGo as my main search engine.) | |
| ▲ | sfdlkj3jk342a 2 days ago | parent | prev [-] | | Have you used Grok or ChatGPT in the last year? I can't remember the last time I got a nonsense response. Do you have a recent example? | | |
| ▲ | tim1994 2 days ago | parent | next [-] | | I think the problem is that they cannot communicate that they don't know something and instead make up some BS that sounds somewhat reasonable. Probably due to how they are built. I notice this regularly when asking questions about new web platform features and there is not enough information in the training data. | |
| ▲ | dns_snek 2 days ago | parent | prev | next [-] | | Yes, I (try to) use them all the time. I regularly compare ChatGPT, Gemini, and Claude side by side, especially when I sniff something that smells like bullshit. I probably have ~10 chats from the past week with each one. I ask genuine questions expecting a genuine answer; I don't go out of my way to try to "trick" them, but often I'll get an answer that doesn't seem quite right and then I dig deeper. I'm not interested in dissecting specific examples because that's never been productive, but I will say that most people's bullshit detectors are not nearly as sensitive as they think they are, which leads them to accept sloppy, incorrect answers as high-quality factual answers. Many of them fall into the category of "conventional wisdom that's absolutely wrong". Quick but sloppy answers are okay if you're okay with them; after all, we didn't always have high-quality information at our fingertips. The only thing that worries me is how really smart people can consume this slop, somehow believe it to be high-quality information, and present it as such to other impressionable people. Your success will of course vary with the topic and difficulty of your questions, but if you "can't remember" the last time you had a BS answer then I feel extremely confident in saying that your BS detector isn't sensitive enough. | | |
| ▲ | lelanthran 2 days ago | parent [-] | | > Your success will of course vary depending on the topic and difficulty of your questions, but if you "can't remember" the last time you had a BS answer then I feel extremely confident in saying that your BS detector isn't sensitive enough. Do you have a few examples? I'm curious, because I have a very sensitive BS detector. In fact, just about anyone asking for examples, like the GP, has a sensitive BS detector. I want to compare the complexity of my questions to the complexity of yours. Here's my most recent one, where I'm fully capable of determining the level of BS in the answer: I want to parse markdown into a structure. Leaving aside the actual structure for now, give me an exhaustive list of markdown syntax that I would need to parse.
It gave me a very large list, pointing out CommonMark-specific stuff, etc. I responded with: I am seeing some problems here with the parsing: 1. Newlines are significant in some places but not others. 2. There are some ambiguities (for example, nested lists which may result in more than four spaces at the deepest level can be interpreted as either nested lists or a code block). 3. Autolinks are also ambiguous: how can we know that the tag is an autolink and not HTML which must be passed through? There are more issues. Please expand on how they must be resolved. How do current parsers resolve these issues?
Right. I've shown you mine. Now you show yours. |
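For what it's worth, the autolink ambiguity raised above is resolved in CommonMark purely by grammar, not by context: a bracketed span counts as an autolink only if it matches `<scheme:rest>` with a valid URI scheme (2–32 characters, starting with an ASCII letter); anything else falls through to the raw-HTML rules. A minimal sketch of that check in Java (a simplification: it ignores email autolinks and doesn't validate the HTML side; the class and helper names are illustrative):

```java
import java.util.regex.Pattern;

public class AutolinkCheck {
    // CommonMark URI-autolink grammar (simplified): '<', a scheme
    // (ASCII letter followed by 1-31 letters/digits/'+'/'.'/'-'),
    // ':', then any run of characters excluding whitespace, '<', '>',
    // and finally '>'.
    static final Pattern AUTOLINK =
            Pattern.compile("<[A-Za-z][A-Za-z0-9+.\\-]{1,31}:[^\\s<>]*>");

    static boolean isAutolink(String span) {
        return AUTOLINK.matcher(span).matches();
    }

    public static void main(String[] args) {
        System.out.println(isAutolink("<https://example.com>")); // true
        System.out.println(isAutolink("<mailto:a@b.se>"));       // true
        System.out.println(isAutolink("<div>"));                 // false: no scheme, so raw HTML
    }
}
```

The nested-list vs. code-block ambiguity is settled the same way, by rule rather than guesswork: indentation is measured relative to the enclosing list item's content column, and only four or more spaces beyond that column start an indented code block.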
| |
| ▲ | svieira 2 days ago | parent | prev [-] | | Today, I asked Google if there was a constant-time string comparison algorithm in the JRE. It told me "no, but you can roll your own". Then I perused the links and found that MessageDigest.isEqual exists. |
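For anyone landing here from a similar search: `MessageDigest.isEqual(byte[], byte[])` is a static method in `java.security`, and in modern JDKs its running time depends only on the array lengths, not on where the first mismatching byte occurs. A small sketch of using it to compare secrets (the `secretsMatch` helper and the token values are illustrative, not from the thread):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class TokenCheck {
    // Compare two secrets without leaking, via timing, how many leading
    // bytes matched. MessageDigest.isEqual does this in the JRE itself,
    // so there is no need to roll your own loop.
    static boolean secretsMatch(String expected, String provided) {
        return MessageDigest.isEqual(
                expected.getBytes(StandardCharsets.UTF_8),
                provided.getBytes(StandardCharsets.UTF_8));
    }

    public static void main(String[] args) {
        System.out.println(secretsMatch("s3cret", "s3cret")); // true
        System.out.println(secretsMatch("s3cret", "s3creX")); // false
    }
}
```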
|
|
|