brokencode 10 hours ago

> “most people agree that the output is trite and unpleasant to consume”

That is such a wild claim. People like the output of LLMs so much that ChatGPT is the fastest growing app ever. It and other AI apps like Perplexity are now beginning to challenge Google’s search dominance.

Sure, probably not a lot of people would go out and buy a novel or collection of poetry written by ChatGPT. But that doesn’t mean the output is unpleasant to consume. It pretty undeniably produces clear and readable summaries and explanations.

pera 9 hours ago | parent | next [-]

> People like the output of LLMs so much that ChatGPT is the fastest growing app ever

While people seem to love the output of their own queries, they seem to hate the output of other people's queries, so maybe what people actually love is interacting with chatbots.

If people loved LLM outputs in general then Google, OpenAI and Anthropic would be in the business of producing and selling content.

henryfjordan 7 hours ago | parent | next [-]

Google does put AI output at the top of every search now, and sometimes it's helpful and sometimes it's crap. They have been trying since long before LLMs to not just provide the links for a search but also the content.

Google used to be interested in making sure you clicked either the paid link or the top link in the results, but for a few years now they'd prefer that a user doesn't even click a link after a search (at least not to a non-Google site).

LtWorf 4 hours ago | parent [-]

It made me switch away from Google. The push I needed.

reddit_clone 4 hours ago | parent | prev | next [-]

Low effort Youtube shorts with AI voice annoy the crap out of me.

After all this hype, they still can't do text-to-speech properly. They pause at the wrong part of the sentence all the time.

brokencode 7 hours ago | parent | prev | next [-]

I think the thing people hate about that is the lack of effort and attention to detail. It’s an incredible enabler for laziness if misused.

If somebody writes a design or a report, you expect that they’ve put in the time and effort to make sure it is correct and well thought out.

If you then find the person actually just had ChatGPT generate it and didn’t put any effort into editing it and checking for correctness, then that is very infuriating.

They are essentially farming out the process of creating the document to AI and farming out the process of reviewing it to their colleagues. So what is their job then, exactly?

These are tools, not a replacement for human thought and work. Maybe someday we can just have ChatGPT serve as an engineer or a lawyer, but certainly not today.

cruffle_duffle 9 hours ago | parent | prev | next [-]

> While people seem to love the output of their own queries they seem to hate the output of other people's queries

Listening to or trying to read other people's chats with these things is like listening to somebody describe a dream. It’s just not that interesting most of the time. It’s remarkable for the person experiencing it, but it is deeply personal.

kenjackson 6 hours ago | parent | prev [-]

If I cared about the output from other people's queries, then wouldn't they be my queries? I don't care about ChatGPT's response to your queries because I don't care about your queries. I don't care if the answer came from ChatGPT or the world's foremost expert in whatever your query was about.

underdeserver 9 hours ago | parent | prev | next [-]

> That is such a wild claim. People like the output of LLMs so much that ChatGPT is the fastest growing app ever.

The people using ChatGPT like its output enough when they're the ones reading it.

The people reading ChatGPT output that other people asked for generally don't like it. Especially if it's not disclosed up front.

ohyes 8 hours ago | parent | next [-]

Had someone put up a project plan for something that was not disclosed as LLM assisted output.

While technically correct, it came to the wrong conclusions about the best path forward and ultimately hamstrung the project.

I only discovered this later when attempting to fix the mess and having my own chat with an LLM and getting mysteriously similar responses.

The problem was that the assumptions made when asking the LLM were incorrect.

LLMs do not think independently and do not have the ability to challenge your assumptions or think laterally. (Yet, and possibly ever; one that does may be a different thing entirely.)

Unfortunately, this still makes them as good as or better than a very large portion of the population.

I get pissed off not because of the new technology or the use of the LLM, but the lack of understanding of the technology and the laziness with which many choose to deliver the results of these services.

I am more often mad at the person for not doing their job than I am at the use of a model, the model merely makes it easier to hide the lack of competence.

justfix17 7 hours ago | parent | next [-]

> LLMs do not think

Yep.

More seriously, you described a great example of one of the challenges we haven't addressed. LLM output masquerades as thoughtful work products and wastes people's time (or worse tanks a project, hurts people, etc).

Now my job reviewing work is even harder because bad work has fewer warning signs to pick up on. Ugh.

I hope that your workplace developed a policy around LLM use that addressed the incident described. Unfortunately I think most places probably just ignore stuff like this in the faux scramble to "not be left behind".

ludicrousdispla 7 hours ago | parent [-]

It's even worse than you suggest, for the following reason. The rare employee that cares enough to read through an entire report is more likely to encounter false information which they will take as fact (not knowing that LLM produced the report, or unaware that LLMs produce garbage). The lazy employees will be unaffected.

131012 7 hours ago | parent | prev | next [-]

> LLMs do not think independently and do not have the ability to challenge your assumptions

It IS possible for an LLM to challenge your assumptions, as its training material may include critical thinking on many subjects.

The helpful assistant, being almost by definition a sycophant, cannot.

newAccount2025 3 hours ago | parent [-]

Strong agree. If you simply ask an LLM to challenge your thinking, spot weaknesses in your argument, or what else you might consider, it can do a great job.

This is literally my favorite way to use it. Here’s an idea, tell me why it’s wrong.
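A minimal sketch of that pattern in Python, if you want to bake it into a script rather than type it each time. Everything here is an assumption for illustration (the function name, the system prompt wording, and the final API call are not any vendor's official recipe); the idea is just to pin the model into a critic role before it sees the idea:

```python
# Hypothetical "tell me why it's wrong" prompt builder.
# Works with any chat-style LLM API that accepts a list of
# system/user messages; the actual client call is left out.

def build_critique_messages(idea: str) -> list[dict]:
    """Build a chat payload that asks the model to attack an idea
    rather than agree with it."""
    return [
        {
            "role": "system",
            "content": (
                "You are a skeptical reviewer. Do not praise the idea. "
                "List the strongest objections, hidden assumptions, and "
                "failure modes, each with a concrete scenario."
            ),
        },
        {
            "role": "user",
            "content": f"Here's an idea, tell me why it's wrong:\n{idea}",
        },
    ]

messages = build_critique_messages(
    "We should cache every API response for 24 hours."
)
# You would then send `messages` to your chat completion endpoint
# of choice, e.g. client.chat.completions.create(model=..., messages=messages)
```

The key design point is putting the critic instruction in the system message rather than the user turn, so the model's default agreeable framing never gets a foothold.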

thewebguyd 7 hours ago | parent | prev [-]

> do not have the ability to challenge your assumptions or think laterally.

The challenging-your-assumptions part in particular is where I think LLMs currently fail, though I won't pretend to know enough about how to resolve that. Right now, I can put whatever nonsense I want into ChatGPT and it will happily go along, telling me what a great idea it is. Even on the remote chance it does hint that I'm wrong, you can just prompt it into submission.

None of the for-profit AI companies are going to start letting their models tell users they're wrong out of fear of losing users (people generally don't like to be held accountable), but ironically I think it's critically important that LLMs start doing exactly that. But like you said, the LLM can't think, so how can it determine what's incorrect, let alone whether something is a bad idea?

Interesting problem space, for sure, but unleashing these tools to the masses with their current capabilities I think has done, and is going to continue to do more harm than good.

myrryr 5 hours ago | parent | next [-]

This is why, once you are used to using them, you start asking them where the plan goes wrong. They won't tell you off the bat, which can be frustrating, but they are really good at challenging your assumptions, if you ask them to do so.

They are good at telling you what else you should be asking, if you ask them to do so.

People don't use the tools effectively and then think that the tool can't be used effectively...

Which isn't true, you just have to know how the tool acts.

DrewADesign 5 hours ago | parent | prev [-]

I'm no expert, but the most frequent recommendations I hear to address this are:

a) tell it that it's wrong and to give you the correct information.

b) use some magical incantation system prompt that will produce a more critical interlocutor.

The first requires knowing enough about the topic to know the chatbot is full of shit, which dramatically limits the utility of an information retrieval tool. The second assumes that the magical incantation correctly and completely does what you think it does, which is not even close to guaranteed. Both assume it even has the correct information and is capable of communicating it to you. While attempting to use various models to help modify code written in a less-popular language with a poorly-documented API, I learned how much time that can waste the hard way.

If your use case is trivial, or you're using it as a sounding board on a topic you're familiar with, as you might with, say, a Dunning-Kruger-prone intern, then great. I haven't found a situation in which I find either of those use cases compelling.

LeifCarrotson 6 hours ago | parent | prev [-]

Especially if it's not disclosed up front, and especially when it supplants higher-value content. I've been shocked how little time it's taken for AI slop SEO optimized blogs to overtake the articles written by genuine human experts, especially in niche product reviews and technical discussions.

However, whether or not people like it is almost irrelevant. What matters is whether the economics like it.

At least so far, it looks like economics absolutely loves LLMs: Why hire expensive human customer support when you can just offload 90% of the work to a computer? Why pay expensive journalists when you can just have the AI summarize it? Why hire expensive technical writers to document your code when you can just give it to the AI and check the regulatory box with docs that are good enough?

davidcbc 5 hours ago | parent [-]

Eventually the economics will correct themselves once people yet again learn the old "you get what you pay for" lesson (or the more modern FAFO lesson)

hattmall 10 hours ago | parent | prev | next [-]

I'm not really countering that ChatGPT is popular, it certainly is, but it's also sort of like "fastest growing tire brand" that came along with the adoption of vehicles. The number of smartphone users is also growing at the fastest rate ever, so whatever the new most popular app is has a good chance of being the fastest growing app ever.

doctorpangloss 8 hours ago | parent [-]

No… dude… it’s a new household name. We haven’t had those in software for a long time, maybe since TikTok and Fortnite.

matthewdgreen 5 hours ago | parent [-]

Lots of things had household recognition. Do you fondly remember the Snuggie? The question is whether it'll be durable. The lack of network effects is one reason to be skeptical.

sejje 10 hours ago | parent | prev | next [-]

Maybe he's referencing how people don't like when other humans post LLM responses in the comments.

"Here's what chatGPT said about..."

I don't like that, either.

I love the LLM for answering my own questions, though.

jack_pp 9 hours ago | parent [-]

"Here's what chatGPT said about..." Is the new lmgtfy

zdragnar 8 hours ago | parent [-]

lmgtfy was (from what I saw) always used as a snarky way to tell someone to do a little work on their own before asking someone else to do it for them.

I have seen people use "here's what ChatGPT said" almost exclusively unironically, as if anyone else wants humans behaving as agents for chatbots in the middle of other people's discussion threads. That is to say, they offer no opinion or critical thought of their own; they just jump into a conversation with a wall of text.

SoftTalker 7 hours ago | parent [-]

Yeah I don't even read those. If someone can't be bothered to communicate their own thoughts in their own words, I have little belief that they are adding anything worth reading to the conversation.

Sharlin 7 hours ago | parent [-]

Why communicate your own thoughts when ChatGPT can give you the Correct Answer? Saves everybody time and effort, right? I guess that’s the mental model of many people. That, or they’re just excited to be able to participate (in their eyes) productively in a conversation.

SoftTalker 6 hours ago | parent [-]

If I want the "correct answer" I'll research it, maybe even ask ChatGPT. If I'm having a conversation, I'm interested in what the other participants think.

If I don't know something, I'll say I don't know, and maybe learn something by trying to understand it. If I just pretend I know by pasting in what ChatGPT says, I'm not only a fraud but also lazy.

ants_everywhere 9 hours ago | parent | prev | next [-]

> That is such a wild claim.

Some people who hate LLMs are absolutely convinced everyone else hates them. I've talked with a few of them.

I think it's a form of filter bubble.

johnnyanmac 8 hours ago | parent [-]

This isn't some niche outcry: https://www.forbes.com/sites/bernardmarr/2024/03/19/is-the-p...

And that was 18 months ago.

Yes, believe it or not, people eventually wake up and realize slop is slop. But like everything else with LLM development, tech is trying to brute force it on people anyway.

elictronic 8 hours ago | parent | next [-]

You posted an article about investors' trust in AI companies to deliver and society's strong distrust of large corporations.

Your article isn't making the point you seem to think it is.

johnnyanmac an hour ago | parent [-]

What point do you think it makes? Seems pretty clear to me.

1. Investors are pushing a lot of hype

2. People are not trusting the hype.

Hence people's trust in LLMs is waning.

brokencode 2 hours ago | parent | prev [-]

Yup, any day now people will suddenly realize that LLMs suck and you were right all along. Any day now...

johnnyanmac an hour ago | parent [-]

Yup, I can wait a while. Took some 7-8 years for people to turn on Facebook.

xnx 9 hours ago | parent | prev | next [-]

> AI apps like Perplexity are now beginning to challenge Google’s search dominance

Now that is a wild claim. ChatGPT might be challenging Google's dominance, but Perplexity is nothing.

brokencode 2 hours ago | parent [-]

It’s not a wild claim, though maybe your interpretation is wild.

I never said Perplexity individually is challenging Google, but rather as part of a group of apps including ChatGPT, which you conveniently left out of your quote.

tikhonj 9 hours ago | parent | prev | next [-]

At some point, Groupon was the fastest growing company ever.

JohnMakin 6 hours ago | parent | prev | next [-]

> That is a such a wild claim. People like the output of LLMs so much that ChatGPT is the fastest growing app ever.

And this kind of meaningless factoid was immediately surpassed by the Threads app release, which IMO is kind of a pointless app. Maybe let's find a more meaningful metric before calling someone else's claim wild.

og_kalu 3 hours ago | parent [-]

Asking your Instagram users to hop on to your ready-made TikTok clone is hardly in the same sphere as spinning up that many users from nothing.

And while Threads' growth and usage stalled, ChatGPT is very much still growing and has *far* more monthly visits than Threads.

There's really nothing meaningless about ChatGPT being the 5th most visited site on the planet, not even 3 years after release. Threads doesn't make the top 50.

JohnMakin 11 minutes ago | parent [-]

I think you just precisely explained why MAU / DAU growth is a meaningless metric in such discussions.

johnnyanmac 8 hours ago | parent | prev | next [-]

People "like" or people "suffice" with the output? This "rise of whatever" as one blog put it gives me feelings that people are instead lowering their standards and cutting corners. Letting them cut through to stuff they actually want to do.

satvikpendem 7 hours ago | parent | prev | next [-]

> People like the output of LLMs so much that ChatGPT is the fastest growing app ever

And how much of that is free usage, like the parent said? Even when users are paying, ChatGPT's costs are larger than their revenue.

Wowfunhappy 10 hours ago | parent | prev | next [-]

...I do wonder what percent of ChatGPT usage is just students cheating on their homework, though.

genghisjahn 9 hours ago | parent [-]

Neal Stephenson has a recent post that covers some of this. He also links to teachers talking about many students just putting all their work into ChatGPT and turning it in.

https://nealstephenson.substack.com/p/emerson-ai-and-the-for...

frozenseven 8 hours ago | parent [-]

He links to Reddit, a site where most people are aggressively against AI. So, not necessarily a representative slice of reality.

genghisjahn 8 hours ago | parent | next [-]

He links to a post about a teacher's experience with students using AI. The fact that it's on Reddit is irrelevant.

frozenseven 8 hours ago | parent [-]

If you're going to champion something that comes from a place of extreme political bias, you could at least acknowledge it.

Capricorn2481 6 hours ago | parent | next [-]

This is a baffling response. The politics are completely irrelevant to this topic. Pretty much every American is distrustful of big tech and is completely unaware of what the current administration has conceded to AI companies, with larger scandals taking the spotlight, so there hasn't been a chance for one party or the other to rally around a talking point with AI.

People don't like AI because its impact on the internet is filling it with garbage, not because of tribalism.

frozenseven 6 hours ago | parent [-]

>This is a baffling response.

Likewise.

95+% of the time I see a response like this, it's from one particular side of the political aisle. You know the one. Politics has everything to do with this.

>what the current administration has conceded to AI companies

lol, I unironically think that they're not lax enough when it comes to AI.

intended 5 hours ago | parent [-]

Based on your response and logic - no dem should read stuff written by repub voters, or if they do read it, dismiss their account because it cannot be … what?

Not sure how we get to dismissing the teacher subreddit, to be honest.

frozenseven 5 hours ago | parent [-]

[flagged]

fireflash38 5 hours ago | parent | prev [-]

Why? So you could discard it faster?

Read things from people that you disagree with.

frozenseven 3 hours ago | parent [-]

Because I'm not going to play a game where the other side gets to ignore the rules.

Sharlin 7 hours ago | parent | prev | next [-]

I’d like to see a statistically sound source for that claim. Given how many non-nerds there are on Reddit these days, it’s unlikely that there’s any particularly strong bias in any direction compared to any similar demographic.

johnnyanmac 8 hours ago | parent | prev [-]

Given recent studies, that does seem to reflect reality. Trust in AI has been waning for 2 years now.

frozenseven 8 hours ago | parent [-]

By what relevant metric?

The userbase has grown by an order of magnitude over the past few years. Models have gotten noticeably smarter and see more use across a variety of fields and contexts.

JTbane 7 hours ago | parent [-]

> Models have gotten noticeably smarter and see more use across a variety of fields and contexts.

Is that really true? The papers I've read seem to indicate the hallucination rate is getting higher.

frozenseven 6 hours ago | parent [-]

Models from a few years ago are comparatively dumb. Basically useless when it comes to performing tasks you'd give to o3 or Gemini 2.5 Pro. Even smaller reasoning models can do things that would've been impossible in 2023.

shpongled 7 hours ago | parent | prev [-]

I would pay $5000 to never have to read another LLM-authored piece of text ever again.