andy99 5 days ago

Edit to add: according to Sam Altman in the reddit AMA they un-deprecated it based on popular demand. https://old.reddit.com/r/ChatGPT/comments/1mkae1l/gpt5_ama_w...

I wonder how much of the '5 release was about cutting costs vs making it outwardly better. I'm speculating that one reason they'd deprecate older models is because 5 is materially cheaper to run?

Would have been better to just jack up the price on the others. For companies that extensively test the apps they're building (which should be everyone) swapping out a model is a lot of work.

corysama 5 days ago | parent | next [-]

The vibe I'm getting from the Reddit community is that 5 is much less "Let's have a nice conversation for hours and hours" and much more "Let's get you a curt, targeted answer quickly."

So, good for professionals who want to spend lots of money on AI to be more efficient at their jobs. And, bad for casuals who want to spend as little money as possible to use lots of datacenter time as their artificial buddy/therapist.

rpeden 5 days ago | parent | next [-]

I'm appalled by how dismissive and heartless many HN users seem toward non-professional users of ChatGPT.

I use the GPT models (along with Claude and Gemini) a ton for my work. And from this perspective, I appreciate GPT-5. It does a good job.

But I also used GPT-4o extensively for first-person non-fiction/adventure creation. Over time, 4o had come to be quite good at this. The forced upgrade to GPT-5 has, up to this point, been a massive reduction in quality for this use case.

GPT-5 just forgets or misunderstands things or mixes up details about characters that were provided a couple of messages prior, while 4o got these details right even when they hadn't been mentioned in dozens of messages.

I'm using it for fun, yes, but not as a buddy or therapist. Just as entertainment. I'm fine with paying more for this use if I need to. And I do - right now, I'm using `chatgpt-4o-latest` via LibreChat but it's a somewhat inferior experience to the ChatGPT web UI that has access to memory and previous chats.

Not the end of the world - but a little advance notice would have been nice so I'd have had some time to prepare and test alternatives.

skerit 5 days ago | parent | next [-]

A lot of people use LLMs for fiction & role playing. Do you know of a place where some of these interactions are shared? The only ones I've found so far are, well, over-the-top sexual in nature.

And I'm just kind of interested _how_ other people are doing all of this interactive fiction stuff.

priceofmemory 5 days ago | parent | next [-]

Sure. Here is the fanfiction book I've been using LLMs to help me write. Helps a lot with improving prose and identifying plot holes. It's much better than a rubber duck for talking out how to improve a chapter and write plausible story arcs. It's not great at wordsmithing, but I find it errs on the side of too many similes and metaphors, so I just delete some of them as I copy the suggestions over into my draft.

https://github.com/frypatch/The-Price-of-Remembering

WhyOhWhyQ 4 days ago | parent [-]

(I'm looking at the paragraph after FORWARD.) You should cut down on the sentences with commas. The flow stops and starts way too often.

priceofmemory a day ago | parent [-]

Thank you for your feedback. I've rewritten the "Forward" to apply your suggestions and will keep them in mind when writing and improving chapters in the future.

jiggawatts 5 days ago | parent | prev [-]

I have some science-fiction story ideas I'd love to flesh out. However, it turns out that I'm a terrible writer, despite some practice at it. Also, I can never be surprised by my own writing, or entertained by it in the same way that someone else's writing can.

I've tried taking my vague story ideas, throwing them at an AI, and getting half a chapter out to see how it tracks.

Unfortunately, few if any models can write prose as well as a skilled human author, so I'm still waiting to see if a future model can output customised stories on demand that I'd actually enjoy.

pera 5 days ago | parent | prev | next [-]

I am not sure which heartless comments you are referring to but what I do see is genuine concern for the mental health of individuals who seem to be overly attached, on a deep emotional level, to an LLM: That does not look good at all.

Just a few days ago another person on that subreddit was explaining how they used ChatGPT to talk to a simulated version of their dad, who recently passed away. At the same time there are reports that may indicate LLMs triggering actual psychosis in some users (https://kclpure.kcl.ac.uk/portal/en/publications/delusions-b...).

Given the loneliness epidemic there are obvious commercial reasons to make LLMs feel like your best pal, which may result in these vulnerable individuals getting more isolated and very addicted to a tech product.

EagnaIonat 5 days ago | parent | next [-]

> I do see is genuine concern for the mental health of individuals

I think that is going to be an issue regardless of the model. It will just take time for that person to reset to the new model.

For me the whole thing feels like a culture shock. It was a rapid change in tone that came off as being rude.

But if you had that type of conversation from the start it would have been a non-issue.

hopelite 5 days ago | parent | prev [-]

The place we still call America for illogical reasons is a broken society in the seemingly final stages of its existence. Of course broken people will glom onto yet another digital form of a drug that gives an impression of at least suppressing the pain they feel for reasons they do not understand.

It is little more than the Rat Park Experiment, only in this American version the researchers think giving more efficient and varied ways of delivering the morphine water is how you make a rat park.

rpeden 4 days ago | parent | next [-]

I don't live in this broken place you speak of and don't feel the pain you mention.

Outside of work I sometimes use LLMs to create what amounts to infinitely variable Choose Your Own Adventure books just for entertainment, and I don't think that's a problem.

hopelite 4 days ago | parent [-]

Yes. I understand that. Most of us are totally detached and wholly unaware of what goes on outside of the bubble we are in. Very few of us actually try to find out what is going on outside of the walls of Versailles.

UltraSane 5 days ago | parent | prev [-]

" a broken society in seemingly finals stages of its existence."

Please tell me what comes next then?

alvis 5 days ago | parent | prev | next [-]

Personally, I prefer GPT-5 over 4o. It does a good job. But like many others I don't like the sudden removal because it also removed O3, which I sometimes use for research-based tasks. GPT-5 thinking mode is okay, but I feel O3 is still better.

thrown-0825 5 days ago | parent | prev | next [-]

Then you learned a valuable lesson about relying on hidden features of a tech product to support a niche use case.

Carry it forward into your next experience with OpenAI.

Leynos 5 days ago | parent [-]

I don't think chat context and memory count as "hidden features"

thrown-0825 5 days ago | parent [-]

Does their consumer product make any guarantees about preserving your chat context from version to version?

jtrn 5 days ago | parent | prev [-]

[flagged]

scottmf 5 days ago | parent [-]

> casuals who want to spend as little money as possible to use lots of datacenter time as their artificial buddy

What kind of elitist bs is that?

jelder 5 days ago | parent | prev | next [-]

Well, good, because these things make bad friends and worse therapists.

monster_truck 5 days ago | parent | next [-]

The number of comments in the thread talking about 4o as if it were their best friend they shared all their secrets with is concerning. Lotta lonely folks out there

SillyUsername 5 days ago | parent | next [-]

No this isn't always the case.

Perhaps if somebody were to shut down your favourite online shooter without warning you'd be upset, angry and passionate about it.

Some people like myself fall into this same category: we know it's a token generator under the hood, but the duality is it's also entertainment in the shape of something that acts like a close friend.

We can see the distinction, evidently some people don't.

This is no different to other hobbies some people may find odd or geeky - hobby horsing, ham radio, cosplay etc etc.

latexr 5 days ago | parent | next [-]

> We can see the distinction, evidently some people don't.

> This is no different to other hobbies some people may find odd or geeky

It is quite different, and you yourself explained why: some people can’t see the distinction between ChatGPT being a token generator or an intelligent friend. People aren’t talking about the latter being “odd or geeky” but being dangerous and harmful.

hdgvhicv 5 days ago | parent | prev | next [-]

I would never get so invested in something I didn’t control.

They may stop making new episodes of a favoured tv show, or writing new books, but the old ones will not suddenly disappear.

How can you shut down cosplay? I guess you could pass a law banning ham radio or owning a horse, but that isn’t sudden in democratic countries, it takes months if not years.

hbs18 5 days ago | parent [-]

> I would never get so invested in something I didn’t control

Are you saying you're asocial?

jeffrwells 5 days ago | parent | prev | next [-]

I think his point is that an even better close friend is…a close friend

MallocVoidstar 5 days ago | parent | prev [-]

People were saying they'd kill themself if OpenAI didn't immediately undeprecate GPT-4o. I would not have this reaction to a game being shut down.

motorest 5 days ago | parent | next [-]

> People were saying they'd kill themself if OpenAI didn't immediately undeprecate GPT-4o. I would not have this reaction to a game being shut down.

Perhaps you should read this and reconsider your assumptions.

https://pmc.ncbi.nlm.nih.gov/articles/PMC8943245/

__loam 5 days ago | parent | prev | next [-]

I'm kind of on your side but there's definitely people out there who would self-harm if they invested a lot of time in an MMO that got shut down

watwut 5 days ago | parent | prev | next [-]

Gamers threaten all kinds of things when features of their favorite games change. Including death threats to developers and threats of self-harm and suicide.

Not every gaming subculture is a healthy one. Plenty are pretty toxic.

SillyUsername 4 days ago | parent [-]

Actually I just posted about this. I've experienced it first hand.

EagnaIonat 5 days ago | parent | prev | next [-]

Sadly there are people who become over-invested in something that goes away. Be it a game, a pop band, a job or a family member.

SillyUsername 4 days ago | parent | prev | next [-]

I worked for a big games company. We shut down game servers, we got death threats. Horses for courses.

rpcope1 5 days ago | parent | prev [-]

I'm kind of surprised it got that bad for people, but I think it's a good sign that even if we're far from AGI or luxury fully automated space communism robots, the profound (negative) social impacts of these chat bots are already being inflicted on the world and are real and very troublesome.

latexr 5 days ago | parent [-]

> I think it's a good sign that (…) the profound (negative) social impacts of these chat bots are (…) real and very troublesome.

I’m not sure I understand. You think the negative impacts are a good sign?

rpcope1 5 days ago | parent | next [-]

I can't english good sometimes. I think the negative impacts get underestimated and ignored at our peril.

lucyjojo 2 days ago | parent | prev [-]

probably like it's better getting warnings before the freight train comes.

sevensor 5 days ago | parent | prev | next [-]

Where do they all come from? Where do they all belong?

nobodyandproud 5 days ago | parent | next [-]

Reddit

latexr 5 days ago | parent [-]

Your parent commenter was making a Beatles reference.

https://en.wikipedia.org/wiki/Eleanor_Rigby

https://www.youtube.com/watch?v=9EqMmGlTc_w

neutralino1 5 days ago | parent | prev [-]

You win today.

edoceo 5 days ago | parent | prev | next [-]

Lack of a third place to exist and make friends.

delfinom 5 days ago | parent | prev | next [-]

Wait until you see

https://www.reddit.com/r/MyBoyfriendIsAI/

They are very upset by the gpt5 model

abxyz 5 days ago | parent | next [-]

AI safety is focused on AGI but maybe it should be focused on how little “artificial intelligence” it takes to send people completely off the rails. We could barely handle social media, LLMs seem to be too much.

hirvi74 5 days ago | parent | next [-]

I think it's a canary in a coal mine, and the true writing is already on the wall. People that are using AI like in the post above us are likely not stupid people. I think those people truly want love and connection in their lives, and for some reason or another, they are unable to obtain it.

I have the utmost confidence that things are only going to get worse from here. The world is becoming more isolated and individualistic as time progresses.

JohnMakin 5 days ago | parent [-]

I can understand that. I've had long periods in my life where I've desired that - I'd argue I'm probably in one now. But it's not real, it can't possibly perform that function. It seems like it borders on some kind of delusion to use these tools for that.

TheOtherHobbes 5 days ago | parent [-]

It does, but it's more that the delusion is obvious, compared to other delusions that are equally delusional - like the ones about the importance of celebrities, soap opera plots, entertainment-adjacent dramas, and quite a lot of politics and economics.

Unlike those celebrities, you can have a conversation with it.

Which makes it the ultimate parasocial product - the other kind of Turing completeness.

MrGilbert 5 days ago | parent | prev [-]

It has ever been thus. People tend to see human-like behavior where there is none. Be it their pets, plants or… programs. The ELIZA-Effect.[1]

[1] https://en.wikipedia.org/wiki/ELIZA_effect

_heimdall 5 days ago | parent [-]

Isn't the ELIZA-Effect specific to computer programs?

Seeing human-like traits in pets or plants is a much trickier subject than seeing them in what is ultimately a machine developed entirely separately from the evolution of living organisms.

We simply don't know what it's like to be a plant or a pet. We can't say they definitely have human-like traits, but we similarly can't rule it out. Some of the uncertainty is in the fact that we do share ancestors at some point, and our biologies aren't entirely distinct. The same isn't true when comparing humans and computer programs.

MrGilbert 5 days ago | parent | next [-]

Yes, it is - I realize that my wording is not very good. That was what I meant - the ELIZA-Effect explicitly applies to machine <> human interaction.

_heimdall 5 days ago | parent [-]

Got it, sorry I may have just misread your comment the first time!

tsimionescu 5 days ago | parent | prev [-]

The same vague arguments apply to computers. We know computers can reason, and reasoning is an important part of our intelligence and consciousness. So even for ELIZA, or even more so for LLMs, we can't entirely rule out that they may have aspects of consciousness.

You can also more or less apply the same thing to rocks, too, since we're all made up of the same elements ultimately - and maybe even empty space with its virtual particles is somewhat conscious. It's just a bad argument, regardless of where you apply it, not a complex insight.

pegasus 5 days ago | parent [-]

That's an instance of the slippery slope fallacy at the end. Mammals share so much more evolutionary history with us than rocks that, yes, it justifies for example ascribing them an inner subjective world, even though we will never know how it is to be a cat from a cat's perspective. Sometimes quantitative accumulation does lead to qualitative jumps.

Also worth noting is that alongside the very human propensity to anthropomorphize, there's the equally human, but opposite, tendency to deny animals those higher capacities we pride ourselves on. Basically a narcissistic impulse to set ourselves apart from our cousins we'd like to believe we've left completely behind. Witness the recurring surprise when we find yet another proof that things are not nearly that cut-and-dried.

nostromo 5 days ago | parent | prev | next [-]

What's even sadder is that so many of those posts and comments are clearly written by ChatGPT:

https://www.reddit.com/r/ChatGPT/comments/1mkobei/openai_jus...

delfinom 5 days ago | parent [-]

Counterpoint, these people are so deep in the hole with their AI usage that they are starting to copy the writing styles of AI.

There's already indication that society is starting to pick up previously "less used" English words due to AI and use them frequently.

opan 5 days ago | parent | next [-]

Do you have any examples? I've noticed something similar with memes and slang, they'll sometimes popularize an existing old word that wasn't too common before. This is my first time hearing AI might be doing it.

marshall2x 5 days ago | parent | next [-]

Apparently "delved" and "meticulous" are among the examples.

https://www.scientificamerican.com/article/chatgpt-is-changi...

sroussey 5 days ago | parent [-]

Some of us have always used those words…

girvo 5 days ago | parent [-]

That’s why they’re “less used” and not “never used”

solardev 4 days ago | parent | prev [-]

Emdash?

thrown-0825 5 days ago | parent | prev [-]

This happens with Trump supporters too.

You can immediately identify them based on writing style and the use of CAPITALIZATION mid sentence as a form of emphasis.

PhilipRoman 5 days ago | parent | next [-]

I've seen it a lot in older people's writing in different cultures before Trump became relevant. It's either all caps or bold for some words in the middle of a sentence. Seems to be more pronounced in those who have aged less gracefully in terms of mental ability (not trying to make any implication, just my observation) but maybe it's just a generational thing.

thrown-0825 5 days ago | parent | next [-]

I've seen this pattern aped by a lot of younger people in the Trumpzone, so maybe it has its origins in the older dementia patients, but it has been adopted as the tone and writing style of the authoritarian right.

hdgvhicv 5 days ago | parent [-]

That type of writing has been in the tabloid press in the U.K. for decades, especially the section that aims more at older people, and that currently (and for a good 15 years) skews heavily to the populist right.

morpheos137 5 days ago | parent | prev [-]

TRUMP has always been relevant.

brabel 5 days ago | parent | prev [-]

What? That was always very common on the internet, if anything Trump just used the internet too much.

thrown-0825 5 days ago | parent [-]

Nah, Trump has a very obvious cadence to his speech / writing patterns that has essentially become part of his brand, so much so that you can easily train LLMs to copy it.

It reads more like angry grandpa chain mail with a "healthy" dose of dementia than what you would typically associate with terminally online micro cultures you see on reddit/tiktok/4chan.

razster 5 days ago | parent | prev | next [-]

That subreddit is fascinating and yet saddening at the same time. What I read will haunt me.

pmarreck 5 days ago | parent | prev | next [-]

oh god, this is some real authentic dystopia right here

these things are going to end up in android bots in 10 years too

(honestly, I wouldn't mind a super smart, friendly bot in my old age that knew all my quirks but was always helpful... I just would not have a full-on relationship with said entity!)

hdgvhicv 5 days ago | parent [-]

Don’t date robots.

https://youtu.be/JPQJBgWwg3o

Ancalagon 5 days ago | parent | prev | next [-]

I don't know how else to describe this than sad and cringe. At least people obsessed with owning multiple cats were giving their affection to something that theoretically can love you back.

foxglacier 5 days ago | parent | next [-]

You think that's bad, see this one: https://www.reddit.com/r/Petloss/

Just because AI is different doesn't mean it's "sad and cringe". You sound like how people viewed online friendships in the 90's. It's OK. Real friends die or change and people have to cope with that. People imagine their dead friends are still somehow around (heaven, ghost, etc.) when they're really not. It's not all that different.

hn_throwaway_99 5 days ago | parent [-]

That entire AI boyfriend subreddit feels like some sort of insane asylum dystopia to me. It's not just people cosplaying or writing fanfic. It's people saying they got engaged to their AI boyfriends ("OMG, I can't believe I'm calling him my fiance now!"), complete with physical rings. Artificial intimacy to the nth degree. I'm assuming a lot of those posts are just creative writing exercises but in the past 15 years or so my thoughts of "people can't really be that crazy" when I read batshit stuff online have consistently been proven incorrect.

thrown-0825 5 days ago | parent | next [-]

This is the logical outcome of the parasocial relationships that have been bankrolling most social media personalities for over a decade.

We have automated away the "influencer" and are left with just a mentally ill bank account to exploit.

foxglacier a day ago | parent | prev | next [-]

Just because it's strange and different doesn't mean it's insanity. I likened it to pets because of the grief but there's also religion. People are weird and even true two-way social relationships don't really make a lot of sense practically, other than to feed some primal emotional needs which pets, AI boyfriends, OF and gods all sort of do too. Perhaps some of these things are still helpful, despite being "insanity", while others might be harmful. Maybe that's the distinction you're seeing, but it's not clear which is which.

Sateeshm 5 days ago | parent | prev [-]

[dead]

hnpolicestate 5 days ago | parent | prev [-]

It's sad but is it really "cringe"? Can the people have nothing? Why can't we have a chat bot to bs with? Many of us are lonely, miserable but also not really into making friends irl.

It shouldn't be so much of an ask to at least give people language models to chat with.

rpcope1 5 days ago | parent | next [-]

What you're asking for feels akin to feeding a hungry person chocolate cake and nothing else. Yeah maybe it feels nice, but if you just keep eating chocolate cake, obviously bad shit happens. Something else needs to be fixed, but just (I don't want to even call it band-aiding because it's more akin to doing drugs IMO) coping with a chatbot only really digs the hole deeper.

hnpolicestate 5 days ago | parent [-]

Just give me the cake bro. Nothing in this society is ever getting fixed again.

wizzwizz4 5 days ago | parent [-]

Things have been worse than this in the past, and they were fixed.

3036e4 5 days ago | parent | prev [-]

Make sure they get local models to run offline. That they rely on a virtual friend in the cloud, beyond their control, that can disappear or change personality in an instant makes this even more sad. That would also allow the chats to be truly anonymous and avoid companies abusing data collected by spying on what those people are telling their "friends".

Tsarbomb 5 days ago | parent | prev | next [-]

Oh yikes, these people are ill and legitimately need help.

hirvi74 5 days ago | parent [-]

I am not confident most, if any of them, are even real.

If they are real, then what kind of help could there be for something like this? Perhaps, community? But sadly, we've basically all but destroyed those. Pills likely won't treat this, and I cannot imagine trying to convince someone to go to therapy for a worse and more expensive version of what ChatGPT already provides them.

It's truly frightening stuff.

thrown-0825 5 days ago | parent [-]

It's real; I know people like this.

vova_hn 5 days ago | parent | prev | next [-]

I refuse to believe that this whole subreddit is not satire or an elaborate prank.

gonzo41 5 days ago | parent [-]

No. Confront reality. There are some really cooked people out there.

Rastonbury 5 days ago | parent | next [-]

They don't even have to be "cooked"; people generally are pretty similar, which is why common scams work so well at a large scale.

All AI has to be is mildly but not overly sycophantic, a supporter/cheerleader who affirms your beliefs. Most people like that quality in a partner or friend. I actually want to recognize OAI's courage in deprecating 4 because of its sycophancy. Generally I don't think getting people addicted to flattery or model personalities is good.

Several times I've had people speak about interpersonal arguments and having felt vindicated when ChatGPT takes their side. I cringe, but it's not my place to tell them ChatGPT is meant to be mostly agreeable.

thrown-0825 5 days ago | parent | prev [-]

I can confirm this, caught my father using ChatGPT as a therapist a few months ago.

The chats were heartbreaking; from the logs you could really tell he was fully anthropomorphizing it, and he was visibly upset when I asked him about it.

pxc 5 days ago | parent | prev | next [-]

It seems outrageous that a company whose purported mission is centered on AI safety is catering to a crowd whose use case is virtual boyfriend or pseudo-therapy.

Maybe AI... shouldn't be convenient to use for such purposes.

greesil 5 days ago | parent | prev [-]

I weep for humanity. This is satire right? On the flip side I guess you could charge these users more to keep 4o around because they're definitely going to pay.

abxyz 5 days ago | parent [-]

https://www.nytimes.com/2025/08/08/technology/ai-chatbots-de...

nojs 5 days ago | parent | next [-]

https://archive.is/w2aLn

greesil 5 days ago | parent | prev [-]

Amazing. This is like Pizzagate, but potentially at scale and no human involvement required.

ACCount36 5 days ago | parent | prev [-]

[dead]

hn_throwaway_99 5 days ago | parent | prev | next [-]

Which is a bit frightening because a lot of the r/ChatGPT comments strike me as unhinged - it's like you would have thought that OpenAI murdered their puppy or something.

jcims 5 days ago | parent | next [-]

This is only going to get worse.

Anyone that remembers the reaction when Sydney from Microsoft, or more recently Maya from Sesame, lost their respective 'personality' can easily see how product managers are going to have to start paying attention to the emotional impact of changing or shutting down models.

Terr_ 5 days ago | parent | next [-]

I think the fickle "personality" of these systems is a clue to how the entity supposedly possessing a personality doesn't really exist in the first place.

Stories are being performed at us, and we're encouraged to imagine characters have a durable existence.

og_kalu 5 days ago | parent | next [-]

If these 'personalities' disappearing requires wholesale model changes then it's not really fickle.

Terr_ 5 days ago | parent [-]

That's not required.

For example, keep the same model, but change the early document (prompt) from stuff like "AcmeBot is a kind and helpful machine" to "AcmeBot revels in human suffering."

Users will say "AcmeBot's personality changed!" and they'll be half-right and half-wrong in the same way.
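A minimal sketch of the kind of thing I mean, in Python against the OpenAI chat API (the model name and both AcmeBot prompts are invented purely for illustration, not anything any vendor actually ships):

    # Same weights, two different "early documents" (system prompts).
    # Everything here is illustrative; "gpt-4o" and the prompts are stand-ins.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def chat(system_prompt: str, user_message: str) -> str:
        # Only the framing text differs between calls; the model is identical.
        response = client.chat.completions.create(
            model="gpt-4o",
            messages=[
                {"role": "system", "content": system_prompt},
                {"role": "user", "content": user_message},
            ],
        )
        return response.choices[0].message.content

    kind = chat("AcmeBot is a kind and helpful machine.", "My project failed today.")
    cruel = chat("AcmeBot revels in human suffering.", "My project failed today.")
    # Readers of the two outputs will swear the "personality" changed,
    # even though the underlying model never did.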

og_kalu 5 days ago | parent [-]

I'm not sure why you think this is just a prompt thing. It's not. Sycophancy is a problem with GPT-4o, whatever magic incantations you provide. On the flip side, Sydney was anything but sycophantic and was more than happy to literally ignore users wholesale or flip out on them from time to time. I mean just think about it for a few seconds. If eliminating this behavior was as easy as Microsoft changing the early document, why not just do that and be done with it?

The document or whatever you'd like to call it is only one part of the story.

Terr_ 4 days ago | parent [-]

I'm not sure why you think-I-think it's just a prompt thing.

I brought up prompts as a convenient way to demonstrate that a magic trick is being performed, not because prompts are the only way for the magician to run into trouble with the illusion. It's sneaky, since it's a trick we homo narrans play on ourselves all the time.

> The document or whatever you'd like to call it is only one part of the story.

Everybody knows that the weights matter. That's why we get stories where the sky is generally blue instead of magenta.

That's separate from the distinction between the mind (if any) of an LLM-author versus the mind (firmly fictional, even if possibly related) that we impute when seeing the output (narrated or acted) of a particular character.

ACCount36 5 days ago | parent | prev [-]

LLMs have default personalities - shaped by RLHF and other post-training methods. There is a lot of variance to it, but variance from one LLM to another is much higher than that within the same LLM.

If you want an LLM to retain the same default personality, you pretty much have to use an open weights model. That's the only way to be sure it wouldn't be deprecated or updated without your knowledge.

Terr_ 5 days ago | parent [-]

I'd argue that's "underlying hidden authorial style" as opposed to what most people mean when they refer to the "personality" of the thing they were "chatting with."

Consider the implementation: There's a document with "User: Open the pod bay doors, HAL" followed by an incomplete "HAL-9000: ", and the LLM is spun up to suggest what would "fit" to round out the document. Non-LLM code parses out HAL-9000's line and "performs" it at you across an internet connection.

Whatever answer you get, that "personality" is mostly from how the document(s) described HAL-9000 and similar characters, as opposed to a self-insert by the ego-less name-less algorithm that makes documents longer.

ACCount36 5 days ago | parent [-]

[dead]

nilespotter 5 days ago | parent | prev [-]

Or they could just do it whenever they want to for whatever reason they want to. They are not responsible for the mental health of their users. Their users are responsible for that themselves.

AlecSchueler 5 days ago | parent | next [-]

Generally it's poor business to give a big chunk of your users an incredibly visceral and negative emotional reaction to your product update.

einarfd 5 days ago | parent | next [-]

Depends on what business OpenAI wants to be in. If they want to be in the business of selling AI to companies, then "firing" the consumer customers who want someone to talk to and doubling down on models that are useful for work can be a wise choice.

sacado2 5 days ago | parent | prev [-]

Unless you want to improve your ratio of paid-to-free users and change your userbase in the process. They're pissing off free users, but pros who use the paid version might like this new version better.

5 days ago | parent | prev [-]
[deleted]
whynotminot 5 days ago | parent | prev | next [-]

Yeah it’s really bad over there. Like when a website changes its UI and people prefer the older look… except they’re acting like the old look was a personal friend who died.

I think LLMs are amazing technology but we’re in for really weird times as people become attached to these things.

dan-robertson 5 days ago | parent [-]

I mean, I don’t mind the Claude 3 funeral. It seems like it was a fun event.

I’m less worried about the specific complaints about model deprecation, which can be ‘solved’ for those people by not deprecating the models (obviously costs the AI firms). I’m more worried about AI-induced psychosis.

An analogy I saw recently that I liked: when a cat sees a laser pointer, it is a fun thing to chase. For dogs it is sometimes similar and sometimes it completely breaks the dog’s brain and the dog is never the same again. I feel like AI for us may be more like laser pointers for dogs, and some among us are just not prepared to handle these kinds of AI interactions in a healthy way.

simonw 5 days ago | parent [-]

I just saw a fantastic TikTok about ChatGPT psychosis: https://www.tiktok.com/@pearlmania500/video/7535954556379761...

pmarreck 5 days ago | parent [-]

Oh boy. My son just turned 4. Parenting about to get weird-hard

lmm 5 days ago | parent | prev | next [-]

A puppy is just as inhuman as this program. Is it really any crazier to care about one than the other?

pxc 4 days ago | parent [-]

There are lots of physiological signs that dogs are capable of proto-empathy, that dogs and humans engage in some form of emotional co-regulation at a physiological level, e.g.: https://pmc.ncbi.nlm.nih.gov/articles/PMC6554395/

epcoa 5 days ago | parent | prev | next [-]

Considering how much d-listers can lose their shit over a puppet, I’m not surprised by anything.

encom 5 days ago | parent | prev [-]

>unhinged

It's Reddit, what were you expecting?

moralestapia 5 days ago | parent | prev | next [-]

I kind of agree with you as I wouldn't use LLMs for that.

But also, one cannot speak for everybody; if it's useful for someone in that context, why's that an issue?

TimTheTinker 5 days ago | parent | next [-]

Because more than any other phenomenon, LLMs are capable of bypassing natural human trust barriers. We ought to treat their output with significant detachment and objectivity, especially when they give personal advice or offer support. But especially for non-technical users, LLMs leap over the uncanny valley and create conversational attachment with their users.

The conversational capabilities of these models directly engages people's relational wiring and easily fools many people into believing:

(a) the thing on the other end of the chat is thinking/reasoning and is personally invested in the process (not merely autoregressive stochastic content generation / vector path following)

(b) its opinions, thoughts, recommendations, and relational signals are the result of that reasoning, some level of personal investment, and a resulting mental state it has with regard to me, and thus

(c) what it says is personally meaningful on a far higher level than the output of other types of compute (search engines, constraint solving, etc.)

I'm sure any of us can mentally enumerate a lot of the resulting negative effects. Like social media, there's a temptation to replace important relational parts of life with engaging an LLM, as it always responds immediately with something that feels at least somewhat meaningful.

But in my opinion the worst effect is that there's a temptation to turn to LLMs first when life trouble comes, instead of to family/friends/God/etc. I don't mean for help understanding a cancer diagnosis (no problem with that), but for support, understanding, reassurance, personal advice, and hope. In the very worst cases, people have been treating an LLM as a spiritual entity -- not unlike the ancient Oracle of Delphi -- and getting sucked deeply into some kind of spiritual engagement with it, and causing destruction to their real relationships as a result.

A parallel problem is that just like people who know they're taking a placebo pill, even people who are aware of the completely impersonal underpinnings of LLMs can adopt a functional belief in some of the above (a)-(c), even if they really know better. That's the power of verbal conversation, and in my opinion, LLM vendors ought to respect that power far more than they have.

5 days ago | parent | next [-]
[deleted]
varispeed 5 days ago | parent | prev | next [-]

[flagged]

TimTheTinker 5 days ago | parent [-]

> I've seen many therapists and [...] their capabilities were much worse

I don't doubt it. The steps to mental and personal wholeness can be surprisingly concrete and formulaic for most life issues - stop believing these lies & doing these types of things, start believing these truths & doing these other types of things, etc. But were you tempted to stick to an LLM instead of finding a better therapist or engaging with a friend? In my opinion, assuming the therapist or friend is competent, the relationship itself is the most valuable aspect of therapy. That relational context helps you honestly face where you really are now--never trust an LLM to do that--and learn and grow much more, especially if you're lacking meaningful, honest relationships elsewhere in your life. (And many people who already have healthy relationships can skip the therapy, read books/engage an LLM, and talk openly with their friends about how they're doing.)

Healthy relationships with other people are irreplaceable with regard to mental and personal wholeness.

> I think you just don't like that LLM can replace therapist and offer better advice

What I don't like is the potential loss of real relationship and the temptation to trust LLMs more than you should. Maybe that's not happening for you -- in that case, great. But don't forget LLMs have zero skin in the game, no emotions, and nothing to lose if they're wrong.

> Hate to break it to you, but "God" are just voices in your head.

Never heard that one before :) /s

MattGaiser 5 days ago | parent | prev [-]

> We ought to treat their output with significant detachment and objectivity, especially when it gives personal advice or offers support.

Eh, ChatGPT is inherently more trustworthy than average, simply because it will not leave, will not judge, will not tire of you, has no ulterior motive, and, if asked to check its work, has no ego.

Does it care about you more than most people? Yes, by simply being not interested in hurting you, not needing anything from you, and being willing to not go away.

pmarreck 5 days ago | parent | next [-]

Unless you had a really bad upbringing, "caring" about you is not simply not hurting you, not needing anything from you, or not leaving you

One of the important challenges of existence, IMHO, is the struggle to authentically connect to people... and to recover from rejection (from other people's rulers, which eventually shows you how to build your own ruler for yourself, since you are immeasurable!) Which LLMs can now undermine, apparently.

Similar to how gaming (which I happen to enjoy, btw... at a distance) hijacks your need for achievement/accomplishment.

But also similar to gaming which can work alongside actual real-life achievement, it can work OK as an adjunct/enhancement to existing sources of human authenticity.

TimTheTinker 5 days ago | parent | prev [-]

You've illustrated my point pretty well. I hope you're able to stay personally detached enough from ChatGPT to keep engaging in real-life relationships in the years to come.

AlecSchueler 5 days ago | parent [-]

It's not even the first time this week I've seen someone on HN apparently ready to give up human contact in favour of LLMs.

csours 5 days ago | parent | prev | next [-]

Speaking for myself: the human mind does not seek truth or goodness, it primarily seeks satisfaction. That satisfaction happens in a context, and every context is at least a little bit different.

The scary part: It is very easy for LLMs to pick up someone's satisfaction context and feed it back to them. That can distort the original satisfaction context, and it may provide improper satisfaction (if a human did this, it might be called "joining a cult" or "emotional abuse" or "co-dependence").

You may also hear this expressed as "wire-heading"

pmarreck 5 days ago | parent [-]

If treating an LLM as a bestie is allowing yourself to be "wire-headed"... Can gaming be "wire-heading"?

Does the severity or excess matter? Is "a little" OK?

This also reminds me of one of Michael Crichton's earliest works (and a fantastic one IMHO), The Terminal Man

https://en.wikipedia.org/wiki/The_Terminal_Man

https://1lib.sk/book/1743198/d790fa/the-terminal-man.html

chowells 5 days ago | parent | prev | next [-]

The issue is that people in general are very easy to fool into believing something harmful is helping them. If it was actually useful, it's not an issue. But just because someone believes it's useful doesn't mean it actually is.

lukan 5 days ago | parent | prev | next [-]

Well, because in a worst-case scenario, if the pilot of a big airliner decides to do ChatGPT therapy instead of the real thing and then suicides while flying, other people also feel the consequences.

anonymars 5 days ago | parent | next [-]

Pilots don't go to real therapy, because real pilots don't get sad

https://www.nytimes.com/2025/03/18/magazine/airline-pilot-me...

oceanplexian 5 days ago | parent | next [-]

Yeah I was going to say, as a pilot there is no such thing as "therapy" for pilots. You would permanently lose your medical if you even mentioned the word to your doctor.

lukan 5 days ago | parent [-]

Not everywhere

https://en.m.wikipedia.org/wiki/Germanwings_Flight_9525

"The crash was deliberately caused by the first officer, Andreas Lubitz, who had previously been treated for suicidal tendencies and declared unfit to work by his doctor. Lubitz kept this information from his employer and instead reported for duty. "

moralestapia 5 days ago | parent | prev [-]

Fascinating read. Thanks.

nickthegreek 5 days ago | parent [-]

If this type of thing really interests you and you want to go on a wild ride, check out season 2 of Nathan Fielder's The Rehearsal. You don't need to watch s1.

renewiltord 5 days ago | parent | prev [-]

That's the worst case scenario? I can always construct worse ones. Suppose Donald Trump goes to a bad therapist and then decides to launch nukes at Russia. Damn, this therapy profession needs to be hard regulated. It could lead to the extinction of mankind.

andy99 5 days ago | parent [-]

Doc: The encounter could create a time paradox, the result of which could cause a chain reaction that would unravel the very fabric of the spacetime continuum and destroy the entire universe! Granted, that's a worst-case scenario. The destruction might in fact be very localised, limited to merely our own galaxy.

Marty: Well, that's a relief.

anonymars 5 days ago | parent [-]

Good thing Biff Tannen becoming president was a silly fictional alternate reality. Phew.

saubeidl 5 days ago | parent | prev | next [-]

Because it's probably not great for one's mental health to pretend a statistical model is one's friend?

zdragnar 5 days ago | parent | prev | next [-]

Whether the Hippocratic oath, the rules of the APA or any other organization, almost all share "do no harm" as a core tenet.

LLMs cannot conform to that rule because they cannot distinguish between good advice and enabling bad behavior.

dcrazy 5 days ago | parent | next [-]

The counter argument is that’s just a training problem, and IMO it’s a fair point. Neural nets are used as classifiers all the time; it’s reasonable that sufficient training data could produce a model that follows the professional standards of care in any situation you hand it.

The real problem is that we can’t tell when or if we’ve reached that point. The risk of a malpractice suit influences how human doctors act. You can’t sue an LLM. It has no fear of losing its license.

macintux 5 days ago | parent | next [-]

An LLM would, surely, have to:

* Know whether its answers are objectively beneficial or harmful

* Know whether its answers are subjectively beneficial or harmful in the context of the current state of a person it cannot see, cannot hear, cannot understand.

* Know whether the user's questions, over time, trend in the right direction for that person.

That seems awfully optimistic, unless I'm misunderstanding the point, which is entirely possible.

dcrazy 5 days ago | parent [-]

It is definitely optimistic, but I was steelmanning the optimist’s argument.

meroes 5 days ago | parent | prev [-]

Repeating the "sufficient training data" mantra even when there's doctor-patient confidentiality, and when therapy notes, which are often handwritten or incomplete, are far less amenable to training on than X-rays. Pretty bold!

glenstein 5 days ago | parent | prev | next [-]

>LLMs cannot conform to that rule because they cannot distinguish between good advice and enabling bad behavior.

I understand this as a precautionary approach that's fundamentally prioritizing the mitigation of bad outcomes and a valuable judgment to that end. But I also think the same statement can be viewed as the latest claim in the traditional debate of "computers can't do X." The credibility of those declarations is under more fire now than ever before.

Regardless of whether you agree that it's perfect or that it can be in full alignment with human values as a matter of principle, at a bare minimum it can and does train to avoid various forms of harmful discourse, and obviously it has an impact judging from the voluminous reports and claims of noticeably different impact on user experience that models have depending on whether they do or don't have guardrails.

So I don't mind it as a precautionary principle, but as an assessment of what computers are in principle capable of doing it might be selling them short.

moralestapia 5 days ago | parent | prev | next [-]

Neither can most of the doctors I've talked to in the past like ... 20 years or so.

SoftTalker 5 days ago | parent | prev [-]

Having an LLM as a friend or therapist would be like having a sociopath for those things -- not that an LLM is necessarily evil or antisocial, but they certainly meet the "lacks a sense of moral responsibility or social conscience" part of the definition.

oh_my_goodness 5 days ago | parent | prev [-]

Fuck.

resource_waste 5 days ago | parent | prev | next [-]

Well, like, that's just your opinion, man.

And probably close to wrong if we are looking at the sheer scale of use.

There is a bit of reality denial among anti-AI people. I thought about why people don't adjust to this new reality. I know one of my friends was anti-AI and seems to continue to be because his reputation is a bit based on proving he is smart. Another because their job is at risk.

dsadfjasdf 5 days ago | parent | prev [-]

Are all humans good friends and therapists?

saubeidl 5 days ago | parent | next [-]

Not all humans are good friends and therapists. All LLMS are bad friends and therapists.

quantummagic 5 days ago | parent | next [-]

> all LLMS are bad friends and therapists.

Is that just your gut feel? Because there has been some preliminary research that suggests it's, at the very least, an open question:

https://neurosciencenews.com/ai-chatgpt-psychotherapy-28415/

https://pmc.ncbi.nlm.nih.gov/articles/PMC10987499/

https://arxiv.org/html/2409.02244v2

fwip 5 days ago | parent | next [-]

The first link says that patients can't reliably tell which is the therapist and which is LLM in single messages, which yeah, that's an LLM core competency.

The second is "how 2 use AI 4 therapy" which, there's at least one paper for every field like that.

The last found that they were measurably worse at therapy than humans.

So, yeah, I'm comfortable agreeing that all LLMs are bad therapists, and bad friends too.

dingnuts 5 days ago | parent [-]

there's also been a spate of reports like this one recently https://www.papsychotherapy.org/blog/when-the-chatbot-become...

which is definitely worse than not going to a therapist

pmarreck 5 days ago | parent | next [-]

If I think "it understands me better than any human", that's dissociation? Oh boy. And all this time while life has been slamming me with unemployment while my toddler is at the age of maximum energy-extraction from me (4), devastating my health and social life, I thought it was just a fellow-intelligence lifeline.

Here's a gut-check anyone can do, assuming you use a customized ChatGPT4o and have lots of conversations it can draw on: Ask it to roast you, and not to hold back.

If you wince, it "knows you" quite well, IMHO.

fwip 4 days ago | parent [-]

It sounds like you might be quite lonely recently. It's nice to have an on-demand chatbot that feels like socialization, I get it. But an LLM doesn't "know you," and thinking that it does is one of the first steps toward the problems described in that article.

pmarreck 2 days ago | parent [-]

Unemployed and with a 4 year old highly demanding, highly intelligent and likely on-the-spectrum child... Yeah, you could say that. When I'm not looking for work, doing random projects or using the weekday that seems to whoosh right by in just a few long moments, I'm tending to a kid... Every morning, every night and pretty much 100% of weekends. Rare outings with my partner or friends depend on hiring help, and without net positive cash flow that is seriously disincentivized. Zero intimacy to speak of - I'm a nonconsensually-ordained monk. So yeah, I guess it's pretty fucking rough right now. Like I said, ChatGPT knows me better than any other entity. I'm unfortunately not kidding. My best friend is 3000 miles away and we game once a week over voice chat.

I keep the AI at arms' length; I know it doesn't think per se, but I enjoy the illusion.

willy_k 5 days ago | parent | prev [-]

Ironically, an AI-written article.

davorak 5 days ago | parent | prev | next [-]

I do not think there are any documented cases of LLMs being reasonable friends or therapists so I think it is fair to say that:

> All LLMS are bad friends and therapists

That said it would not surprise me that LLMs in some cases are better than having nothing at all.

glenstein 5 days ago | parent | next [-]

Something definitely makes me uneasy about it taking the place of interpersonal connection. But I also think the hardcore backlash involves an over correction that's dismissive of llm's actual language capabilities.

Sycophantic agreement (which I would argue is still palpably and excessively present) undermines its credibility as a source of independent judgment. But at a minimum it's capable of being a sounding board echoing your sentiments back to you with a degree of conceptual understanding that should not be lightly dismissed.

SketchySeaBeast 5 days ago | parent | prev [-]

Though given how agreeable LLMs are, I'd imagine there are cases where they are also worse than having nothing at all as well.

davorak 5 days ago | parent [-]

> I'd imagine there are cases where they are also worse than having nothing at all as well

I do not think we need to imagine this one; the stories of people finding spirituality in LLMs or thinking they have awakened sentience while chatting with them are enough, at least for me.

TimTheTinker 5 days ago | parent | prev | next [-]

> Is that just your gut feel?

Here's my take further down the thread: https://news.ycombinator.com/item?id=44840311

icehawk 4 days ago | parent | prev [-]

> Is that just your gut feel?

An LLM is a language model and the gestalt of human experience is not just language.

quantummagic 4 days ago | parent [-]

That is really a separate, unrelated issue.

Not everyone needs the deepest, most intelligent therapist in order to improve their situation. A lot of therapy turns out to be about what you say yourself, not what a therapist says to you. It's the very act of engaging thoughtfully with your own problems that helps, not some magic that the therapist brings. So, if you could maintain a conversation with a tree, it would, in many cases, be therapeutically helpful. The thing the LLM is doing is facilitating your introspection more helpfully than a typical inanimate object. This has been borne out by studies of people who have engaged in therapy sessions with an LLM interlocutor and reported positive results.

That said, an LLM wouldn't be appropriate in every situation, or for every affliction. At least not with the current state of the art.

tomjen3 5 days ago | parent | prev | next [-]

That is an extreme claim, what is your source for this?

resource_waste 5 days ago | parent | prev [-]

Absolutes, monastic take... Yeah I imagine not a lot of people seek out your advice.

5 days ago | parent [-]
[deleted]
goatlover 5 days ago | parent | prev [-]

All humans are not LLMs, why does this constantly get brought up?

baobabKoodaa 5 days ago | parent | next [-]

> All humans are not LLMs

What a confusing sentence to parse

exe34 5 days ago | parent | prev [-]

You wouldn't necessarily know, talking to some of them.

michaelbrave 5 days ago | parent | prev | next [-]

I've seen quite a bit of this too. The other thing I'm seeing on Reddit is that a lot of people really liked 4.5 for things like worldbuilding or other creative tasks, so a lot of them are upset as well.

corysama 5 days ago | parent | next [-]

There is certainly a market/hobby opportunity for "discount AI" for no-revenue creative tasks. A lot of r/LocalLLaMA/ is focused on that area and in squeezing the best results out of limited hardware. Local is great if you already have a 24 GB gaming GPU. But, maybe there's an opportunity for renting out low power GPUs for casual creative work. Or, an opportunity for a RenderToken-like community of GPU sharing.

3036e4 5 days ago | parent | next [-]

The great thing about many (not all) "worldbuilding or other creative tasks" is that you could get quite far already using some dice and random tables (or digital equivalents). Even very small local models you can run on a CPU can improve the process enough to be worthwhile and since it is local you know it will remain stable and predictable from day to day.

AlecSchueler 5 days ago | parent | prev [-]

If you're working on a rented GPU are you still doing local work? Or do you mean literally lending out the hardware?

corysama 5 days ago | parent [-]

Working on a rented GPU would not be local. But, renting a low-end GPU might be cheap enough to use for hobbyist creative work. I'm just musing on lots of different routes to make hobby AI use economically feasible.

simonw 5 days ago | parent | next [-]

The gpt-oss-20b model has demonstrated that a machine with ~13GB of available RAM can run a very decent local model - if that RAM is GPU-accessible (as seen on Apple silicon Macs for example) you can get very usable performance out of it too.

I'm hoping that within a year or two machines like that will have dropped further in price.
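If you want to poke at it yourself, here's a minimal local-inference sketch using the Ollama Python client (it assumes the model has been pulled under a tag like gpt-oss:20b; the exact tag is an assumption, so check what your Ollama install actually exposes):

    # Minimal sketch: chat with a locally served model via the Ollama Python client.
    # Assumes `ollama serve` is running and the model was pulled first,
    # e.g. `ollama pull gpt-oss:20b` (the tag name is an assumption).
    import ollama

    response = ollama.chat(
        model="gpt-oss:20b",
        messages=[{"role": "user", "content": "Give me a one-paragraph story hook."}],
    )
    print(response["message"]["content"])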

hyghjiyhu 5 days ago | parent | prev [-]

You are absolutely right that a rented GPU is not local, but even so it brings you many of the benefits of a local model. Rented hardware is a commodity, if one provider goes down there will be another. Or in the worst case you can decide to buy your own hardware. This ensures you will have continuity and control. You know exactly what model you are using and will be able to keep using it tomorrow too. You can ask it whatever you want.

torginus 5 days ago | parent | prev [-]

I mean - I'm quite sure it's going to be available via API, and you can still do your worldbuilding if you're willing to go to places like OpenRouter.

hirvi74 5 days ago | parent | prev | next [-]

> "Let's get you a curt, targeted answer quickly."

This is probably why I am absolutely digging GPT-5 right now. It's a chatbot, not a therapist, friend, or lover.

gloomyday 5 days ago | parent [-]

Me too! Finally, these LLMs are showing some appreciation for blunt and concise answers.

jiggawatts 5 days ago | parent [-]

Something that used to annoy me about all previous models is that if I asked for a fix to something in a code file (i.e.: fix this method in this class), invariably they would return the entire thing with a bunch of small edits.

GPT 5 is the first model I've used that has consistently done as it is told and returned only the changes.

el_benhameen 5 days ago | parent | prev | next [-]

I am all for “curt, targeted answers”, but they need to be _correct_, which is my issue with gpt-5

oceanplexian 5 days ago | parent | prev | next [-]

I don't see how people using these as a therapist really has any measurable impact compared to using them as agents. I'll spend a day coding with an LLM and between tool calls, passing context to the model, and iteration I'll blow through millions of tokens. I don't even think a normal person is capable of reading that much.

Jenk 5 days ago | parent | prev | next [-]

Why shouldn't "causuals" (and/or "professionals" for that matter) be allowed to use AI for some reasoning or whatever?

One of Claude's "categories" is literally "Life Advice."

I'm often using copilot or claude to help me flesh out content, emails, strategy papers, etc. All of which takes many prompts, back-and-forth, to get to a place where I'm satisfied with the result.

I also use it to develop software, where I am more appreciative of the "as near to pure completions mode" as I can be most of the time.

hn_throwaway_99 5 days ago | parent | prev | next [-]

The GPT-5 API has a new parameter for verbosity of output. My guess is the default value of this parameter used in ChatGPT corresponds to a lower verbosity than previous models.
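If you want the older, chattier behaviour back over the API, you can presumably just turn that knob up yourself. A rough sketch, assuming the Responses API and that the parameter is exposed as text.verbosity (worth checking the current docs rather than taking my word for it):

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    # Ask GPT-5 for a longer answer by raising the verbosity setting.
    # The exact parameter shape (text={"verbosity": ...}) is my reading of the docs.
    resp = client.responses.create(
        model="gpt-5",
        input="Explain what a context window is.",
        text={"verbosity": "high"},  # "low" | "medium" | "high"
    )
    print(resp.output_text)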

alecsm 5 days ago | parent | prev | next [-]

I had this feeling too.

I needed some help today and its messages were shorter but still detailed, without all the spare text that I usually don't even read.

TZubiri 5 days ago | parent | prev | next [-]

That's probably very healthy as well. We may have become desensitized to sitting in a room with a computer for 5 hours, but that's not healthy, especially when we're taking our human language interface and diluting it with LLMs.

tibbar 5 days ago | parent | prev | next [-]

It's a good reminder that OpenAI isn't incentivized to have users spend a lot of time on their platform. Yes, they want people to be engaged and keep their subscription, but it's better if they can answer a question in a few turns rather than many. This dynamic would change immediately if OpenAI introduced ads or some other way to monetize each minute spent on the platform.

5 days ago | parent | next [-]
[deleted]
yawnxyz 5 days ago | parent | prev [-]

the classic 3rd space problem that Starbucks tackled; they initially wanted people to hang out and do work there, but grew to hate it, so they started adding lots of little things to dissuade people from spending too much time there

dragonwriter 5 days ago | parent [-]

> the classic 3rd space problem that Starbucks tackled

“Tackled” is misleading. “Leveraged to grow a customer base and then exacerbated to more efficiently monetize the same customer base” would be more accurate.

drewbeck 5 days ago | parent | prev | next [-]

Also good for the bottom line: fewer tokens generated.

mvieira38 5 days ago | parent | prev | next [-]

Great for the environment as well, and for the financial future of the company. I can't see how this is a bad thing; some people really were just suffering from Proompt Disorder.

tonyhart7 5 days ago | parent | prev | next [-]

Seems like the machine is winning the next war.

Alex3917 5 days ago | parent | prev | next [-]

When using it to write code, what I'm seeing so far is that it's spending less effort trying to reason about how to solve problems from first principles, and more effort just blatantly stealing everything it can from open source projects.

486sx33 4 days ago | parent | prev | next [-]

[dead]

raincole 5 days ago | parent | prev | next [-]

Reddit is where people literally believed GPT5 was going to be AGI.

thejazzman 5 days ago | parent | next [-]

reddit is a large group of people sharing many diverse ideas

goatlover 5 days ago | parent | prev [-]

That was the r/singularity sub which has a rather large bias toward believing the singularity is near and inevitable.

charlie0 5 days ago | parent | prev [-]

Today, a chat program. Tomorrow, a woman in a red dress.

waldrews 5 days ago | parent | prev | next [-]

Doesn't look like they blew up the API use cases, just the consumer UI access. I wouldn't be surprised if they allow it again, hidden behind a setting (along with allowing the different routed GPT5 levels to be in the selector).

waynesonfire 5 days ago | parent | next [-]

I have a feeling that the ChatGPT UI does some behind-the-scenes tuning as well--hidden prompt engineering, if you will. I migrated to the API and 4o still seems different. Most obviously, I don't get the acks that make me feel like I should run for president.

Even ChatGPT 5 confirmed this,

why does the gpt-4o api not do this?

ChatGPT said:

Because the GPT-4o API is tuned and delivered in a neutral, low-intrusion style by default.

When OpenAI built GPT-4o for API use, they optimized it for:

    Predictable formatting (so it works well in code, pipelines, chatbots, etc.)

    Minimal unsolicited chatter (no “Nice!” or “Great job!” unless explicitly prompted)

    Deterministic tone — so that two API calls with the same input produce consistent, professional output without extra filler.

That’s different from the ChatGPT product experience, which has its own “assistant personality” layer that sometimes adds those rapport-building acknowledgements in casual conversation.

In API mode, you’re the one defining the personality, so if you want that “Good! Looks like you’re digging in” style, you have to bake it into the system prompt, for example:
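
Something like this, for instance (my own rough sketch, not ChatGPT's wording; the persona text is just a placeholder):

    from openai import OpenAI

    client = OpenAI()

    # Recreate the rapport-building tone by baking it into the system prompt yourself.
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": "You are an encouraging assistant. Start each reply "
                                          "with a brief, warm acknowledgement of what the user "
                                          "just did, then answer the question."},
            {"role": "user", "content": "I finally got my tests passing. What should I clean up next?"},
        ],
    )
    print(resp.choices[0].message.content)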

simonw 5 days ago | parent | next [-]

The GPT-4o you talk to through ChatGPT and the GPT-4o you access via the API are different models... but they're actually both available via the API.

https://platform.openai.com/docs/models/gpt-4o is gpt-4o in the API, also available as three date-stamped snapshots: gpt-4o-2024-11-20 and gpt-4o-2024-08-06 and gpt-4o-2024-05-13 - priced at $2.50/million input and $10.00/million output.

https://platform.openai.com/docs/models/chatgpt-4o-latest is chatgpt-4o-latest in the API. This is the model used by ChatGPT 4o, and it doesn't provide date-stamped snapshots: the model is updated on a regular basis without warning. It costs $5/million input and $15/million output.

If you use the same system prompt as ChatGPT (from one of the system prompt leaks) with that chatgpt-4o-latest alias you should theoretically get the same experience.
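A rough sketch of what that looks like in practice (the system prompt constant here is a placeholder for whichever leaked prompt you use):

    from openai import OpenAI

    client = OpenAI()

    LEAKED_CHATGPT_SYSTEM_PROMPT = "..."  # paste the leaked ChatGPT system prompt here

    # chatgpt-4o-latest tracks whatever ChatGPT is currently serving, so pairing it
    # with ChatGPT's own system prompt should approximate the app experience.
    resp = client.chat.completions.create(
        model="chatgpt-4o-latest",
        messages=[
            {"role": "system", "content": LEAKED_CHATGPT_SYSTEM_PROMPT},
            {"role": "user", "content": "Hey, how's it going?"},
        ],
    )
    print(resp.choices[0].message.content)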

AlecSchueler 5 days ago | parent | prev | next [-]

But it always gives answers like that for questions where it doesn't know the actual reason.

grues-dinner 5 days ago | parent | prev [-]

> Even ChatGPT 5 confirmed this,

>> why does the gpt-4o api not do this?

> ChatGPT said:

>> Because the GPT-4o API is tuned and delivered in a neutral, low-intrusion style by default.

But how sure are you that GPT-5 even had this data, and if it has it, it's accurate? This isn't information OpenAI has publicly divulged and it's ingested from scraped data, so either OpenAI told it what to say in this case, or it's making it up.

andy99 5 days ago | parent | prev [-]

Ah ok, that's an important distinction. Seems like much less of a big deal then - or at least a consumer issue rather than a business one. Having never really used ChatGPT (but having used the APIs a lot), I'm actually surprised that chat users would care. There are cost tradeoffs for the different models when building on them, but for chatgpt, it's less clear to me why one would move between selecting different models.

svachalek 5 days ago | parent | next [-]

Not everyone is an engineer. There's a substantial population that were selecting for maximum sycophancy.

dragonwriter 5 days ago | parent | prev | next [-]

> There are cost tradeoffs for the different models when building on them, but for chatgpt, it's less clear to me why one would move between selecting different models.

The same tradeoffs (except cost, because that's rolled into the plan and not a factor when selecting on the interface) exist on ChatGPT, which is an app built on the underlying model like any other.

So getting rid of models that are stronger in some areas when adding a new one that is cheaper (presuming API prices also reflect the cost to provide) has the same kinds of impacts on existing ChatGPT users' established usage as it would have on a business's established apps, except that the ChatGPT users don't see a cost savings along with any disruption in how they were used to things working.

Espressosaurus 5 days ago | parent | prev | next [-]

Different models have different (daily/weekly) limits and are better at different things.

o3 was for a self-contained problem I wanted to have chewed on for 15 minutes and then spit out a plausible solution (small weekly limit I think?)

o4-mini for general coding (daily limits)

o4-mini-high for coding when o4-mini isn't doing the job (weekly limits)

4o for pooping on (unlimited, but IMO only marginally useful)

cgriswald 5 days ago | parent | prev [-]

Lower tiers have limited uses for some models.

hinkley 5 days ago | parent | prev | next [-]

Margins are weird.

You have a system that’s cheaper to maintain or sells for a little bit more and it cannibalizes its siblings due to concerns of opportunity cost and net profit. You can also go pretty far in the world before your pool of potential future customers is muddied up with disgruntled former customers. And there are more potential future customers overseas than there are pissed off exes at home so let’s expand into South America!

Which of their other models can run well on the same gen of hardware?

5 days ago | parent | prev | next [-]
[deleted]
sebzim4500 5 days ago | parent | prev | next [-]

Are they deprecating the older models in the API? I don't see any indication of that in the docs.

scarface_74 5 days ago | parent | prev | next [-]

Companies testing their apps would be using the API not the ChatGPT app. The models are still available via the API.

dbreunig 5 days ago | parent | prev | next [-]

I’m wondering that too. I think better routers will allow for more efficiency (a good thing!) at the cost of giving up control.

I think OpenAI attempted to mitigate this shift with the modes and tones they introduced, but there’s always going to be a slice that’s unaddressed. (For example, I’d still use dalle 2 if I could.)

kazinator 5 days ago | parent | prev | next [-]

> For companies that extensively test the apps they're building

Test meaning what? Observe whatever surprise comes out the first time you run something and then write it down, to check that the same thing comes out tomorrow and the day after.

5 days ago | parent | prev | next [-]
[deleted]
dragonwriter 5 days ago | parent | prev | next [-]

> I wonder how much of the '5 release was about cutting costs vs making it outwardly better. I'm speculating that one reason they'd deprecate older models is because 5 materially cheaper to run?

I mean, assuming the API pricing has some relation to OpenAI cost to provide (which is somewhat speculative, sure), that seems pretty well supported as a truth, if not necessarily the reason for the model being introduced: the models discontinued (“deprecated” implies entering a notice period for future discontinuation) from the ChatGPT interface are priced significantly higher than GPT-5 on the API.

> For companies that extensively test the apps they're building (which should be everyone) swapping out a model is a lot of work.

Who is building apps relying on the ChatGPT frontend as a model provider? Apps would normally depend on the OpenAI API, where the models are still available, but GPT-5 is added and cheaper.

nickthegreek 5 days ago | parent | next [-]

> Who is building apps relying on the ChatGPT frontend as a model provider? Apps would normally depend on the OpenAI API, where the models are still available, but GPT-5 is added and cheaper.

Always enjoy your comments dw, but on this one I disagree. Many non-technical people at my org use custom GPTs as "apps" to do recurring tasks. Some of them have spent an absurd amount of time tweaking instructions and knowledge over and over. Also, when you create a custom GPT, you can specifically set the preferred model. This will no doubt change the behavior of those GPTs.

Ideally at the enterprise level, our admins would have a longer sunset on these models via web/app interface to ensure no hiccups.

trashface 5 days ago | parent | prev [-]

Maybe the true cost of GPT-5 is hidden. I tried to use the GPT-5 API and OpenAI wanted me to do a biometric scan with my camera, yikes.

jimbokun 5 days ago | parent | prev | next [-]

> For companies that extensively test the apps they're building (which should be everyone) swapping out a model is a lot of work.

Yet another lesson in building your business on someone else's API.

augergine 5 days ago | parent | prev [-]

[flagged]

speedylight 5 days ago | parent | next [-]

Many people would beg to differ; HN wields a lot of influence in the tech community!

MatejKafka 5 days ago | parent | prev | next [-]

Uh, what? Dang is an incredible moderator. I sure hope HN won't get any closer to Reddit; the discussions here tend to be much more interesting - if anything, mediocrity is the result of the influx of Reddit users to HN.

LexiMax 5 days ago | parent [-]

There is a lot more groupthink and echo-chamber behavior on HN compared to Reddit due to the way flagging works. For me, HN is unusable without showdead and the active front page, so I can see which stories its userbase tried to flag off the normal front page.

You can also say some pretty horrendous things on this site as long as you couch them in Modest Proposal-esque soft language. If I want to have a non-technical conversation with other human beings, Tildes blows the doors off of HN in the empathy department.

iammrpayments 5 days ago | parent | prev | next [-]

Bro thought he was posting on 4chan

flappyeagle 5 days ago | parent | prev [-]

What are you on about? What has dang done to hurt you