| ▲ | kdheiwns 12 hours ago |
| Yesterday was my first time trying it. One thing that felt a bit strange to me was that I asked it something and the response was just one paragraph. Which isn't bad or anything but it felt... strange? Like I always need to preface ChatGPT/Gemini/whatever questions with "Briefly, what is..." or it gives me enough fluff to fill a 5 page high school essay. But I didn't need to do that and just got an answer that was to the point and without loads of shit that's barely related. And the weirdest thing that I noticed: instead of skimming the response trying to find what was relevant, I just straight up read it. Kind of felt like I got a slight amount of focus ability back. Accuracy is something I can't really compare yet (all chatbots feel generally the same for non-pro level queries), but so far, I'm fairly satisfied. |
|
| ▲ | HarHarVeryFunny 4 hours ago | parent | next [-] |
| I use Gemini all the time, but I have to say it's got verbal diarrhea and an EXTREMELY annoying trait of wanting to lead the conversation rather than just responding to what YOU want to do. At the end of every response Gemini will always suggest a "next step", in effect trying to second-guess where you want the conversation to go. I'd much rather have an AI that just did what it was asked, and let me decide what to ask next (often nothing - maybe it was just a standalone question!). Apparently this annoying "next step" behavior is driven by the system prompt: the other day I was running Gemini 3 Thinking, and it was displaying its thoughts, which included a reminder to itself to check that it was maintaining a consistent persona and to make sure that it had suggested a next step. I'd love to know the thought process of whoever at Google thought that this would make for a natural or useful conversation flow! Could you imagine trying to have a conversation with a human who insisted on doing this?! |
| |
| ▲ | edoceo 4 hours ago | parent | next [-] | | Yes. That is a salesperson. The next-step is to drive engagement. I know you're not interested in $THING now, but can I follow up in 3 months? As for personas, I think Claude is the engineer, Gemini is the salesperson, and GPT is the eager, loud journeyman. | |
| ▲ | vrosas 4 hours ago | parent | prev | next [-] | | I don’t know, I’ve found the follow ups nice sometimes. You can just ignore them if they’re not useful. The computer won’t get mad… | | |
| ▲ | HarHarVeryFunny 3 hours ago | parent [-] | | I find they are extremely rarely useful - they just break my own chain of thought by having to constantly read and ignore this stuff. The same goes for the excessively verbose responses too - a human has a limited "context window" and a shorter response is therefore much more useful/valuable than a long token-maxxed one. Sure the computer won't get mad, and that is all I do - just ignore what Gemini is suggesting and pretend it never said it - but it certainly makes me mad. The main reason I stick with Gemini is because of the generous free usage limits, but I know this annoying "next step" behavior (which was a relatively recent change) is going to push me back to Claude, even if I need to pay for it. |
| |
| ▲ | debo_ 3 hours ago | parent | prev [-] | | Yes. Gemini is mimicking its creators: this is exactly how I experienced speaking to most Googlers. |
|
|
| ▲ | layer8 9 hours ago | parent | prev | next [-] |
| One issue is that Claude’s web search abilities are more limited, for example it can’t search Reddit and Stack Overflow for relevant content. |
| |
| ▲ | bredren 5 hours ago | parent | next [-] | | Why not just write a skill and script that calls crawl4ai or similar and do this using Claude Code? You can store the page as markdown for future sessions, mash the data w/ other context, you name it. The web Claude is incredibly limited both in capability and workflow integration. Doesn't matter if you're dealing with bids from arbor contractors or researching solutions for a DB problem. | | |
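(A minimal sketch of what such a skill script might look like, assuming crawl4ai's `AsyncWebCrawler`/`arun` API and a hypothetical `page_cache/` layout for the stored markdown - not the commenter's actual setup:)

```python
# Sketch of a Claude Code "skill" helper: fetch a page with crawl4ai
# (assumed installed via `pip install crawl4ai`) and cache it as markdown
# so later sessions can reuse it without re-crawling.
import asyncio
import re
from pathlib import Path

CACHE_DIR = Path("page_cache")  # hypothetical cache location


def cache_name(url: str) -> str:
    """Derive a stable, filesystem-safe markdown filename from a URL."""
    stripped = re.sub(r"^https?://", "", url)
    slug = re.sub(r"[^A-Za-z0-9]+", "-", stripped).strip("-").lower()
    return f"{slug}.md"


async def fetch_as_markdown(url: str) -> Path:
    # Imported lazily so cache_name stays usable without crawl4ai installed.
    from crawl4ai import AsyncWebCrawler

    CACHE_DIR.mkdir(exist_ok=True)
    out = CACHE_DIR / cache_name(url)
    if not out.exists():
        async with AsyncWebCrawler() as crawler:
            result = await crawler.arun(url=url)
            out.write_text(result.markdown)
    return out


# Usage: asyncio.run(fetch_as_markdown("https://example.com"))
```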
| ▲ | Barbing 4 hours ago | parent | next [-] | | Want this w/o killing the free open web. Maybe I run an old PC adjacent to the scraper to manually visit the scraped pages without an adblocker, & buy something I need from an ad periodically (while a cohesive response is being generated in the meantime) Ya sounds dumb, wishing for a middle ground that lets us be effective but also good netizens. Maybe that Cloudflare plan to charge the bots… | |
| ▲ | layer8 4 hours ago | parent | prev [-] | | See https://news.ycombinator.com/item?id=47208741. |
| |
| ▲ | samhclark 8 hours ago | parent | prev | next [-] | | That's so frustrating with Claude. If I need to widely search the web or if I need it to read a specific URL I pasted, I always turn to ChatGPT. Claude seems to hit a lot more roadblocks while trying to navigate the web. | | |
| ▲ | godelski 5 hours ago | parent | next [-] | | The issue is Reddit though. They're the ones blocking. They're very aggressive. When sites are working in one chatbot and not another, there's a good chance that the latter is respecting the website rules. As an example with Reddit, you're probably blocked when using a VPN like Mullvad | |
| ▲ | ronsor 6 hours ago | parent | prev [-] | | They're playing too nice. It's time to roll out the residential proxies. |
| |
| ▲ | andai 6 hours ago | parent | prev | next [-] | | It's not that hard to roll your own web search MCP. I made one for Crush a while ago. https://anduil.neocities.org/blog/?page=mcp I'm not sure about the issues with reddit though? Do they block Claude's web fetch tool? I think Codex runs it thru some kind of cache proxy. | | |
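(For reference, a bare-bones version of such a homegrown web-search MCP server. Everything here is an assumption, not the commenter's code: it uses the official Python MCP SDK's `FastMCP` class and a hypothetical self-hosted SearxNG instance at `SEARX_URL` as the search backend.)

```python
# Sketch of a minimal web-search MCP server. Assumes the Python MCP SDK
# (`pip install mcp`) and a SearxNG instance exposing its JSON API.
import json
import urllib.parse
import urllib.request

SEARX_URL = "https://searx.example.org"  # hypothetical self-hosted instance


def format_results(results: list[dict]) -> str:
    """Render search hits as a compact markdown list for the model."""
    return "\n".join(
        f"- [{r['title']}]({r['url']}): {r.get('content', '')}" for r in results
    )


def web_search(query: str, max_results: int = 5) -> str:
    """Query SearxNG and return the top hits as markdown."""
    qs = urllib.parse.urlencode({"q": query, "format": "json"})
    with urllib.request.urlopen(f"{SEARX_URL}/search?{qs}") as resp:
        hits = json.load(resp)["results"][:max_results]
    return format_results(hits)


def main() -> None:
    # Imported lazily so the pure helpers above work without the SDK installed.
    from mcp.server.fastmcp import FastMCP

    server = FastMCP("web-search")
    server.tool()(web_search)  # expose web_search as an MCP tool
    server.run()  # serves MCP over stdio by default


# main()  # uncomment to run as an MCP server
```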
| ▲ | layer8 5 hours ago | parent [-] | | Rolling your own is not the solution for the common case where you’re asking an LLM a question that may or may not be supported or supplemented by a web search. ChatGPT decides by itself when and how to consult the web, and then links the relevant sources in its result. You don’t get that functionality from Claude chat; you’d have to completely build your own chat harness and apps. Sites like Reddit are blocking AI providers, so providers have to have some contract with them for access. OpenAI does seem to have that. |
| |
| ▲ | MrDarcy 5 hours ago | parent | prev [-] | | That’s a feature not a bug. |
|
|
| ▲ | Sharlin 9 hours ago | parent | prev | next [-] |
| Heh, a while ago I wondered why ChatGPT had started to reply tersely, almost laconically. Then I remembered that I had explicitly told it to be brief by default in the custom personality settings… I also noticed that there are now various sliders to control things like how many emojis or bulleted lists ChatGPT should use, which I thought was amusing. Anyway, these tools can be customized to adopt just about any style; there's no need to always prefix questions with "Briefly" or similar. |
| |
| ▲ | andai 5 hours ago | parent | next [-] | | Here's my prompt to make ChatGPT sound more like Claude. It works but not as well as I'd like -- the tone and word choice still ends up being really jarring to me (even after years of using ChatGPT). Maybe that's promptable too. Open to suggestions. --- Respond in a natural conversational style.
In terms of language, match my own tone and style. Keep responses to half a page or so max. (Use context and your judgment: e.g. the initial response can be a page, and then specific follow-up questions can be shorter, if the question is answered clearly.) Prefer minimal formatting. Don't use headings, lists, etc. Bold and italics are OK but keep it tasteful. If you're starting a paragraph like "Item name: description...", then it makes sense to bold the item name for readability. | |
| ▲ | AgentOrange1234 3 hours ago | parent | prev [-] | | Hah. I remember some story about a chatbot that had been trained on Slack conversations. You would ask it for an essay on whatever, and it would say "will do" or "I'll have it for you tomorrow." :) |
|
|
| ▲ | lkbm 5 hours ago | parent | prev | next [-] |
| Yeah, I've always been a little confused why people use ChatGPT so heavily. It's better than it used to be (maybe thanks to custom configuration), but it still tends to respond like it's writing a Wikipedia article. Wikipedia articles on demand are great, but not usually what I want. |
|
| ▲ | skeledrew 8 hours ago | parent | prev | next [-] |
| Yep, the experience is quite something. Another thing I've noticed, and you likely soon will also, is that Claude only attempts a follow-up if one is needed or the prompt is structured for it. Meanwhile ChatGPT always prompts you with a choice of next steps. It can be nice, as sometimes the options contain improvements you never thought of and would like, but in lengthy conversations with a detailed plan it does things really piecemeal, as though trained to maximize engagement instead of getting to a final solution. |
| |
| ▲ | zukzuk 8 hours ago | parent [-] | | I find that Claude almost always ends its response with some sort of follow up question, despite my system prompt telling it not to. I never really used ChatGPT much though so maybe Claude is just relatively less egregious? |
|
|
| ▲ | esperent 10 hours ago | parent | prev | next [-] |
| > Which isn't bad or anything but it felt... strange? On the contrary, it's great. It's fully capable of outputting a wall of text when required, so instead of feeling like I'm talking to something that has a minimum word count requirement, I get an appropriately sized response to the task at hand. |
|
| ▲ | mavamaarten 11 hours ago | parent | prev [-] |
| In my limited experience, that's mostly since the 4.6 release. I noticed that with the same prompt, it answers much more briefly. A bit jarring indeed, but I prefer it. Less BS and filler, and less electricity burned for nothing. |
| |
| ▲ | ACCount37 10 hours ago | parent | next [-] | | This behavior first appeared in 4.5, mostly for specific types of questions and in "natural conversation" workflows. 4.6 might have pushed it further. | |
| ▲ | xmonkee 10 hours ago | parent | prev [-] | | It’s probably an offshoot of making Claude more and more suitable for Claude Code / Cowork. |
|