porphyra 6 days ago

Meanwhile, tons of people on reddit's /r/ChatGPT were complaining that the shift from ChatGPT 4o to ChatGPT 5 resulted in terse responses instead of waxing lyrical to praise the user. It seems that many people actually became emotionally dependent on the constant praise.

astrange 5 days ago | parent | next [-]

GPT-5 isn't much more terse for me, but they gave it a new, equally annoying writing style where it writes in all-lowercase like an SF tech twitter user on ketamine.

https://chatgpt.com/share/689bb705-986c-8000-bca5-c5be27b0d0...

Eduard 5 days ago | parent [-]

> https://chatgpt.com/share/689bb705-986c-8000-bca5-c5be27b0d0...

404 not found

mhuffman 6 days ago | parent | prev | next [-]

The folks over on /r/MyBoyfriendIsAI seem to be in an absolute shambles over the change.

[0] reddit.com/r/MyBoyfriendIsAI/

PeterStuer 6 days ago | parent | prev | next [-]

[flagged]

dingnuts 6 days ago | parent | prev [-]

if those users were exposed to the full financial cost of their toy they would find other toys

zeta0134 6 days ago | parent | next [-]

And what is that cost, if you have it handy? Just as an example, my Radeon VII can perfectly well run smaller models, and it doesn't appear to use more power than about two incandescent lightbulbs (120 W or so) while the query is running. I don't personally feel that the power consumed by approximately two light bulbs is excessive, even using the admittedly outdated incandescent standard, but perhaps the commercial models are worse?

Like I know a datacenter draws a lot more power, but it also serves many, many more users concurrently, so economies of scale ought to factor in. I'd love to see some hard numbers on this.
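For the local case at least, the arithmetic is easy to sketch. Taking the ~120 W figure above, and assuming (hypothetically; these numbers are not from the thread) a 30-second query and electricity at $0.15/kWh:

```python
# Back-of-the-envelope energy and cost for one local LLM query.
# Only the 120 W draw comes from the comment above; the query
# duration and electricity price are illustrative assumptions.
power_watts = 120.0      # GPU draw while the query runs (from the comment)
query_seconds = 30.0     # assumed wall-clock time for one response
price_per_kwh = 0.15     # assumed electricity price in USD

# watts * seconds -> watt-seconds (joules) -> kWh
energy_kwh = power_watts * query_seconds / 3600.0 / 1000.0
cost_usd = energy_kwh * price_per_kwh

print(f"{energy_kwh * 1000:.2f} Wh, ${cost_usd:.6f} per query")
# -> 1.00 Wh, $0.000150 per query
```

Under those assumptions a single local query is roughly a hundredth of a cent in electricity; the open question is how datacenter inference compares once hardware amortization and multi-user batching are folded in.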

derefr 6 days ago | parent | prev [-]

IIRC you can actually get the same kind of hollow praise from much dumber, locally-runnable (~8B parameters) models.