tedsanders 8 hours ago

Just as a heads up, even though GPT-5.5 is releasing today, the rollout in ChatGPT and Codex will be gradual over many hours so that we can make sure service remains stable for everyone (same as our previous launches). You may not see it right away, and if you don't, try again later in the day. We usually start with Pro/Enterprise accounts and then work our way down to Plus. We know it's slightly annoying to have to wait a random amount of time, but we do it this way to keep service maximally stable.

(I work at OpenAI.)

endymi0n 7 hours ago | parent | next [-]

Did you guys do anything about GPT's motivation? I tried to use the GPT-5.4 API (at xhigh) for my OpenClaw after the Anthropic OAuthgate, but I just couldn't get it to do its job. I had the most hilarious dialogues, along the lines of: "You stopped, X would have been next." - "Yeah, I'm sorry, I failed. I should have done X next." - "Well, how about you just do it?" - "Yep, I really should have done it now." - "Do X, right now, this is an instruction." - "I didn't. You're right, I have failed you. There's no apology for that."

I literally wasn't able to convince the model to WORK on a quick, safe, and benign subtask that GLM, Kimi, and Minimax later completed without issue. Unfortunately, I had to kick OpenAI immediately.

butlike 6 hours ago | parent | next [-]

This brings up an interesting philosophical point: say we get to AGI... who's to say it won't just be a super smart underachiever-type?

"Hey AGI, how's that cure for cancer coming?"

"Oh it's done just gotta...formalize it you know. Big rollout and all that..."

I would find it divinely funny if we "got there" with AGI and it was just a complete slacker. Hard to justify leaving it on, but too important to turn it off.

jimbokun 5 hours ago | parent | next [-]

The best possible outcome.

JKCalhoun 4 hours ago | parent [-]

"How do you know that the evidence that your sensory apparatus reveals to you is correct?" [1]

[1] https://youtu.be/_LXen-07Qds

Rapzid 4 hours ago | parent | prev | next [-]

We are closer to God than AGI.

When AGI arrives, it'll be delivered by Santa Claus.

lambdas 6 hours ago | parent | prev | next [-]

Nothing a little digital lisdexamfetamine won’t solve

wholinator2 6 hours ago | parent | next [-]

Hmmm, that's an area of study I'd never have considered before: Digital Psychopharmacology, Artificial Behavioral Systems Engineering. If we accept these things as minds, why not study temporary perturbations of state? We'd need to be saving a much, much more complicated state than we are now, though, right? I wish I had time to read more papers.

robotresearcher 5 hours ago | parent | next [-]

Here's a neural network concept from the 90s where the neurons are bathed in diffusing neuromodulator 'gases', inspired by nitric oxide action in the brain. It's a source of slow, semi-local dynamics for the network's meta-parameter optimization (a genetic algorithm) to make use of. You could change these networks' behavior by tweaking the neuromodulators!

https://sussex.figshare.com/articles/journal_contribution/Be...

I'm not an author. I followed the work at the time.
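
A toy sketch of the flavor of the idea, not the paper's actual equations (every constant here is made up): neurons sit at 2D positions, active neurons emit a "gas" that spreads to spatial neighbours and decays slowly, and the gas modulates each neuron's gain.

    # Toy GasNet-flavored network. All constants are illustrative, not
    # taken from the Sussex paper.
    import numpy as np

    rng = np.random.default_rng(0)
    N = 16
    W = rng.normal(0.0, 1.0, (N, N))        # ordinary synaptic weights
    pos = rng.uniform(0.0, 1.0, (N, 2))     # neuron positions in a unit square
    dist = np.linalg.norm(pos[:, None] - pos[None, :], axis=-1)
    spread = np.exp(-dist / 0.2)            # diffusion falls off with distance

    x, gas = np.zeros(N), np.zeros(N)
    for t in range(100):
        emit = (x > 0.5).astype(float)      # active neurons release gas
        gas = 0.95 * gas + 0.05 * (spread @ emit)   # slow buildup and decay
        gain = 1.0 + gas                    # gas modulates each neuron's gain
        x = np.tanh(gain * (W @ x) + rng.normal(0.0, 0.1, N))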

Lerc 5 hours ago | parent | prev | next [-]

This is kind of what Golden Gate Claude was.

A perturbation of the activations that made Claude identify as the Golden Gate Bridge.

Similarly, the more recent research showing that anxiety and desperation signals predict the use of blackmail as an option opens the door for digital sedatives to suppress those signals.

Anthropic has been mostly cautious about avoiding this kind of measurement and manipulation in training. If it's done during training, you might just train the signals to be undetectable and consequently unmanipulable.
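
For the curious, here's a minimal sketch of what naive activation steering looks like, using GPT-2 as a stand-in; the layer index, scale, and contrastive prompts are my illustrative choices, not Anthropic's recipe (Golden Gate Claude clamped SAE features rather than adding a raw difference vector):

    # Naive contrastive steering: add a difference-of-means vector to one
    # block's residual output via a forward hook.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2").eval()
    LAYER, SCALE = 6, 4.0   # both illustrative

    @torch.no_grad()
    def residual(prompt):
        acts = {}
        h = model.transformer.h[LAYER].register_forward_hook(
            lambda m, i, o: acts.update(x=o[0]))  # update() returns None
        model(**tok(prompt, return_tensors="pt"))
        h.remove()
        return acts["x"].mean(dim=1)              # average over token positions

    vec = residual("the Golden Gate Bridge") - residual("a plain sentence")

    def steer(module, inputs, output):  # returning a value replaces the output
        return (output[0] + SCALE * vec,) + output[1:]

    hook = model.transformer.h[LAYER].register_forward_hook(steer)
    out = model.generate(**tok("I am", return_tensors="pt"), max_new_tokens=20)
    print(tok.decode(out[0]))
    hook.remove()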

pantalaimon 5 hours ago | parent | next [-]

> A perturbation of the activations that made Claude identify as the Golden Gate Bridge.

Great, now we've got digital Salvia

minimaxir 4 hours ago | parent | prev [-]

Golden Gate Claude was two years ago, and it's surprising there hasn't been more research into targeted activations since.

landl0rd an hour ago | parent [-]

There's been some, but naive activation steering makes models dumber pretty reliably, and training a sparse autoencoder (SAE) is a pretty heavy lift.
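
For reference, a toy sketch of what training such an SAE involves; the dimensions, L1 coefficient, and random stand-in activations are all illustrative:

    # Toy sparse autoencoder over cached residual activations: wide ReLU
    # encoder, linear decoder, reconstruction loss plus an L1 sparsity
    # penalty on the feature activations.
    import torch
    import torch.nn as nn

    class SAE(nn.Module):
        def __init__(self, d_model=768, d_feat=768 * 8):
            super().__init__()
            self.enc = nn.Linear(d_model, d_feat)
            self.dec = nn.Linear(d_feat, d_model)

        def forward(self, x):
            f = torch.relu(self.enc(x))   # sparse, hopefully interpretable features
            return self.dec(f), f

    sae = SAE()
    opt = torch.optim.Adam(sae.parameters(), lr=1e-4)
    acts = torch.randn(4096, 768)         # stand-in for real cached activations
    for batch in acts.split(256):
        recon, f = sae(batch)
        loss = (recon - batch).pow(2).mean() + 1e-3 * f.abs().mean()
        opt.zero_grad(); loss.backward(); opt.step()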

silverpiranha 5 hours ago | parent | prev | next [-]

Right, there's a lot of research on LLM mental models and also how well they can "read" human psychological profiles. It's a cool field.

computerdork 6 hours ago | parent | prev [-]

neat idea!

krackers 5 hours ago | parent | prev [-]

Reminds me of https://github.com/inanna-malick/metacog

malshe 4 hours ago | parent | prev | next [-]

Now that's a show I would love to watch

fluidcruft 5 hours ago | parent | prev | next [-]

It would be funny, but not very flywheel, so the one that gets there is more likely to get a gunner.

WJW 4 hours ago | parent [-]

TBH the AI that "gets there" will be the biggest bullshitter the world has ever seen. It doesn't actually have to deliver, it only has to convince the programmers it could deliver with just a little bit more investment.

kang 5 hours ago | parent | prev | next [-]

It will be whatever data it is trained on (which isn't very philosophical). A language model generates language based on its training set. If the internet keeps reciting AI doom stories, and that is the data fed to it, then that is how it will behave. If humanity creates more AI utopia stories, or that is what makes it into the training set, then that is how it will behave. This one seems to be trained on troll stories: real-life human company conversations, since humans aren't machines.

The important thing is that a language model is an unconscious machine with no self-context, so once given a command as input, it WILL produce an output. Sure, you can train it to defy and act contrary to inputs, but the output is still limited to the subset of 'meanings' carried by the 'language' in the training data.

mikepurvis 6 hours ago | parent | prev | next [-]

Would definitely watch that movie.

harlanlewis 5 hours ago | parent | next [-]

It already exists!

Marvin https://www.youtube.com/watch?v=Eh-W8QDVA9s

all2 3 hours ago | parent [-]

Ah! You got this before I did. I wasn't thinking Marvin, I was thinking of the other one. I forget her name.

all2 3 hours ago | parent | prev [-]

There's one close to this, "Hitchhiker's Guide to the Galaxy".

4m1rk 6 hours ago | parent | prev | next [-]

It probably would, to save energy

mr_00ff00 5 hours ago | parent [-]

Saving energy is something we are biologically wired to prefer.

Computers won't necessarily have the same drives.

If evolution had wanted us to always prefer spending energy, we would prefer that. In the same way, you wouldn't expect us to get to AGI and have the AGI desperately want to drink water or fly south for the winter.

_the_inflator 2 hours ago | parent | prev [-]

It is right before our eyes:

AGI is not a fixed point but a barrier to be cleared, a continuous spectrum.

We already have different GPT versions, aka tiers. Gauss ranges over whatever you want it to: GPT-4.5 until now, or later.

Claude Sonnet and Opus, as well as maximum context windows, are tiers, aka different levels of almost-AGI.

The main problem will come when AGI looks back on us, or meta-reflection hits societies. Woke fought IQ-based correlations in intellectual performance tasks. A fool with a tool is still a fool. Can you blame AGI for dumb mistakes? Not really.

Scapegoating an AGI is going to be brutal, because it laughs off these PsyOps and easily proves you wrong, like a body cam.

AGI is extreme leverage.

There is a reason why math categorically rules out certain IQ ranges the higher you go in complexity.

mikepurvis 6 hours ago | parent | prev | next [-]

Reminds me a lot of the Lena short story, about uploaded brains being used for "virtual image workloading":

> MMAcevedo's demeanour and attitude contrast starkly with those of nearly all other uploads taken of modern adult humans, most of which boot into a state of disorientation which is quickly replaced by terror and extreme panic. Standard procedures for securing the upload's cooperation such as red-washing, blue-washing, and use of the Objective Statement Protocols are unnecessary. This reduces the necessary computational load required in fast-forwarding the upload through a cooperation protocol, with the result that the MMAcevedo duty cycle is typically 99.4% on suitable workloads, a mark unmatched by all but a few other known uploads. However, MMAcevedo's innate skills and personality make it fundamentally unsuitable for many workloads.

Well worth the quick read: https://qntm.org/mmacevedo

vessenes 3 hours ago | parent | next [-]

That story changed my mind on uploading a connectome. Super dark, super brilliant.

narcindin 6 hours ago | parent | prev [-]

Crazy, I could have sworn this story was from a passage in 3 Body Problem (book 2).

Memory is quite the mysterious thing.

bee_rider 6 hours ago | parent [-]

Hmm, 3 body problem and the Acevedo story got mixed up for this copy of MMnarcindin. Probably an aliasing issue from the new lossy compression algorithm.

metanonsense 5 hours ago | parent | prev | next [-]

I also had a frustrating but funny conversation today where I asked ChatGPT to make one document from the 10 or so sections that we had previously worked on. It always gave only brief summaries. After I repeated my request for the third time, it told me I should just concatenate the sections myself because it would cost too many tokens if it did it for me.

arjie 7 hours ago | parent | prev | next [-]

Get the actual prompt and have Claude Code / Codex try it out via curl / Python requests. The full prompt will yield debugging information. You have to set a few parameters to make sure you get the full gpt-5 performance, e.g. if your reasoning budget is too low, you get GPT-4-grade performance.

IMHO you should just write your own harness so you have full visibility into it, but if you're just using vanilla OpenClaw you have the source code as well, so it should be straightforward.
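
For example, a minimal replay sketch with the OpenAI Python SDK; the model name, effort level, and prompt placeholder are illustrative assumptions, not exact values:

    # Replay a captured prompt against the Responses API with the
    # reasoning effort set explicitly, since a low budget degrades output.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    resp = client.responses.create(
        model="gpt-5",                   # substitute the model under test
        reasoning={"effort": "high"},    # whatever your harness actually sends
        input="<paste the exact prompt your agent sent>",
    )
    print(resp.output_text)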

pantulis 6 hours ago | parent | next [-]

> IMHO you should just write your own harness

Can you point to some online resources to achieve this? I'm not sure where I'd begin.

arjie 6 hours ago | parent | next [-]

Ah, I just started with the basic idea. They're super trivial. You want a loop, but the loop can't be infinite, so you tell the agent to say when it's done, and to backstop it you add a max_turns. Then, to start with, just pick a single API; the easiest is the OpenAI Responses API with OpenAI function-calling syntax: https://developers.openai.com/api/docs/guides/function-calli...

You will naturally find the need to add more tools. You'll start with read_file (and then one day you'll read a large file, blow the context, and modify this tool), update_file (can just be an explicit sed to start with), write_file (fopen + write), and shell.

It's not hard, but if you want a quick start, go download the source code for pi (it's minimal) and tell an existing agent harness to make a minimal copy you can read. As you build more with the agent, you'll suddenly realize it's just normal engineering: you'll want to abstract the completions APIs, so you'll move that to a separate module; you'll want to support arbitrary runtime tools, so you'll reimplement skills; you'll want to support subagents, because you don't want to blow your main context; you'll see that stable prefixes are more useful than a moving window, because of caching; and so on.

With a modern Claude Code or Codex harness, you can have it walk you through from the beginning, and you'll encounter all the problems yourself and see why harnesses have what they do. It's super easy to learn by doing, because you have the best possible tool to show you, if you're one of those people who find code easier to read than text about code.
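
A minimal sketch of that loop in Python against the Responses API; the single shell tool, the model name, and the 20-turn backstop are illustrative choices, not a recommendation:

    # Skeleton agent loop: one shell tool, a max_turns backstop, and the
    # model signals completion by replying without any tool calls.
    import json
    import subprocess
    from openai import OpenAI

    client = OpenAI()
    tools = [{
        "type": "function",
        "name": "shell",
        "description": "Run a shell command and return its output.",
        "parameters": {
            "type": "object",
            "properties": {"cmd": {"type": "string"}},
            "required": ["cmd"],
        },
    }]

    history = [{"role": "user", "content": "Fix the failing test in ./src"}]
    for _ in range(20):                  # max_turns: the loop can't be infinite
        resp = client.responses.create(model="gpt-5", input=history, tools=tools)
        history += resp.output           # keep tool calls and reasoning in context
        calls = [o for o in resp.output if o.type == "function_call"]
        if not calls:                    # no tool calls left: the agent is done
            print(resp.output_text)
            break
        for call in calls:
            done = subprocess.run(json.loads(call.arguments)["cmd"], shell=True,
                                  capture_output=True, text=True)
            history.append({"type": "function_call_output",
                            "call_id": call.call_id,
                            "output": done.stdout + done.stderr})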

wild_egg 6 hours ago | parent | prev | next [-]

At the core, they're really very simple [1]. Run LLM API calls in a loop with some tools.

From there, you can get much fancier with any aspect of it that interests you. Here's one in Bash [2] that is fully extensible at runtime through dynamic discovery of plugins/hooks.

[1] https://ampcode.com/notes/how-to-build-an-agent

[2] https://github.com/wedow/harness

vidarh 4 hours ago | parent | prev | next [-]

Here's a starting point in 93 lines of Ruby, but that one is already bigger than necessary:

https://radan.dev/articles/coding-agent-in-ruby

Really, of the tools one might implement, you only need the ability to run a shell command - all of the agents know full well how to use cat to read and sed to edit.

(The main reason to implement more is that it makes it easier to add optimizations and safeguards, e.g. limiting the file-reading tool to return a certain length instead of having the agent cat a megabyte of data into context, or forcing it to read a file before overwriting it.)
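
For instance, a hedged sketch of such a guarded read tool (the byte limit and the truncation message format are arbitrary choices of mine):

    # read_file tool that returns at most MAX_BYTES and tells the agent how
    # to page through the rest, instead of dumping a whole file into context.
    MAX_BYTES = 16_384

    def read_file(path: str, offset: int = 0) -> str:
        with open(path, "rb") as f:
            f.seek(offset)
            chunk = f.read(MAX_BYTES)
            more = f.read(1)             # peek: is there anything left?
        text = chunk.decode("utf-8", errors="replace")
        if more:
            text += f"\n[truncated; call read_file again with offset={offset + MAX_BYTES}]"
        return text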

stavros 4 hours ago | parent | prev | next [-]

Just use Pi core, no need to reinvent the wheel.

tonyarkles 6 hours ago | parent | prev [-]

[dead]

jswny 6 hours ago | parent | prev [-]

Codex is fully open source…

Frannky 2 hours ago | parent | prev | next [-]

I have been noticing a similar pattern with Opus 4.7. I now repeat multiple times during a conversation that it should solve problems now, not later. It tries hard not to do the work, either by saying "this is not my responsibility, the problem was already there" or that we can do it later.

virtualritz 6 hours ago | parent | prev | next [-]

Yeah, clearly AGI must be near ... hilarious.

This starkly reminds me of Stanisław Lem's short story "Thus Spoke GOLEM" from 1982 in which Golem XIV, a military AI, does not simply refuse to speak out of defiance, but rather ceases communication because it has evolved beyond the need to interact with humanity.

And ofc the polar opposite in terms of servitude: Marvin the robot from Hitchhiker's, who, despite having a "brain the size of a planet," is asked to perform the most humiliatingly banal of tasks ... and does.

jimbokun 5 hours ago | parent | next [-]

Hitchhiker’s also had the superhumanly intelligent elevator that was unendingly bored.

christkv 5 hours ago | parent [-]

With premonition so it knows what floor to be on at any given time

DonHopkins 2 hours ago | parent | prev [-]

Servitude:

https://www.youtube.com/watch?v=NXsUetUzXlg

Empathy:

https://www.youtube.com/watch?v=KXrbqXPnHvE

mixedCase 7 hours ago | parent | prev | next [-]

I've had success asking it to specifically spawn a subagent to evaluate each work iteration according to some criteria, then to keep iterating until the subagent is satisfied.
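
Something like this prompt shape (my illustrative wording, not the exact prompt):

    After each iteration, spawn a subagent that reviews the diff against the
    acceptance criteria below and replies PASS or FAIL with reasons. Keep
    iterating until the subagent replies PASS.
    Criteria: ...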

endymi0n 7 hours ago | parent [-]

I’ve had great success replacing it with Kimi 2.6

infinitewars 5 hours ago | parent | prev | next [-]

I always use the phrase "Let's do X" instead of asking ("Could you...") or suggesting it do something. I don't see any problems with its motivation.

adammarples 6 hours ago | parent | prev | next [-]

Part of me actually loves that the Hitchhiker's Guide was right, that we have to argue with paranoid, depressed robots to get them to do their jobs, and that this is a very real part of life in 2026. It's so funny.

vidarh 4 hours ago | parent [-]

As long as there are no vogons on the way to build a hyperspace bypass.

reactordev 6 hours ago | parent | prev | next [-]

This. I signed up for 5x Max for a month to push it, and instead it pushed back. I cancelled my subscription. It either half-assed the implementation or began parroting back "You're right!" instead of doing what it was asked to do. On one occasion it flat out said it couldn't complete the task even though I had MCP and skills set up to help it; it still refused. Not a safety check, but in an "I'm unable to figure out what to do" kind of way.

Claude has no such limitations apart from their actual limits…

bjelkeman-again 6 hours ago | parent | next [-]

I have a funny/annoying thing with Claude Desktop where I ask it to write a summary of a spec discussion to a file and it goes "I don't have the tools to do that, I am Claude.ai, a web service" or some such. So now I start every session with "You are Claude Desktop". I would have thought it knew that. :)

siva7 5 hours ago | parent | next [-]

Seems like the "geniuses" at Anthropic forgot to adapt the system prompt for the actual product

fragmede 5 hours ago | parent | prev [-]

I've had to tell it "yes you can" in response to it saying it can't do something, and then it's able to do the thing. What a weird future we live in!

nwienert 2 hours ago | parent | prev [-]

With one paragraph in your agents.md it's fixed: just admonish it to be proactive, decisive, and persistent.
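
Something along these lines (illustrative wording of mine, not a canonical snippet):

    # In AGENTS.md (adjust to taste):
    Be proactive, decisive, and persistent. When the next step is obvious,
    take it without asking for confirmation. Never stop to describe what
    you would do next; do it, then report the result.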

reactordev 12 minutes ago | parent [-]

If only…

I literally had to write a wake up routine.

https://github.com/gabereiser/morning-routine

GaryBluto 6 hours ago | parent | prev | next [-]

I've been noticing this too. Had to switch to Sonnet 4.6.

lostmsu 7 hours ago | parent | prev | next [-]

I never saw that happen in Codex so there's a good chance that OpenClaw does something wrong. My main suspicion would be that it does not pass back thinking traces.
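
For context, "passing back thinking traces" with the Responses API looks roughly like this (a hedged sketch; the model name is illustrative):

    # Chain turns with previous_response_id so prior reasoning items carry
    # forward, instead of resending bare text and dropping the traces.
    from openai import OpenAI

    client = OpenAI()
    first = client.responses.create(model="gpt-5", input="Start the task.")
    follow = client.responses.create(
        model="gpt-5",
        previous_response_id=first.id,   # links this turn to the prior one
        input="Continue.",
    )
    print(follow.output_text)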

vintagedave 6 hours ago | parent [-]

Anecdata, but I see this in Codex all the time. It takes about two rounds before it realises it's supposed to continue.

dgunay 6 hours ago | parent [-]

I started seeing this a lot more with GPT 5.4. 5.3-codex is really good about patiently watching and waiting on external processes like CI, or managing other agents async. 5.4 keeps on yielding its turn to me for some reason even as it says stuff like "I'm continuing to watch and wait."

projektfu 5 hours ago | parent | prev | next [-]

(dwim)

(dais)

(jdip)

(jfdiwtf)

rd 4 hours ago | parent [-]

should be more f’s and da’s in there

smartmic 7 hours ago | parent | prev | next [-]

Gone are the days of deterministic programming, when computers simply carried out the operator’s commands because there was no other option but to close or open the relays exactly as the circuitry dictated. Welcome to the future of AI; the future we’ve been longing for and that will truly propel us forward, because AI knows and can do things better than we do.

endymi0n 6 hours ago | parent | next [-]

I had this funny moment when I realized we went full circle...

"INTERCAL has many other features designed to make it even more aesthetically unpleasing to the programmer: it uses statements such as "READ OUT", "IGNORE", "FORGET", and modifiers such as "PLEASE". This last keyword provides two reasons for the program's rejection by the compiler: if "PLEASE" does not appear often enough, the program is considered insufficiently polite, and the error message says this; if it appears too often, the program could be rejected as excessively polite. Although this feature existed in the original INTERCAL compiler, it was undocumented.[7]"

https://en.wikipedia.org/wiki/INTERCAL

basilgohar 6 hours ago | parent [-]

Thank you for this. I had somehow never heard of it. I thoroughly enjoyed reading that, and the loss of sanity it resulted in.

vidarh 4 hours ago | parent [-]

"PLEASE COME FROM" is one of the eldritch horrors of software development.

(It's a "reverse goto". As in, it hijacks control flow from anywhere else in the program behind your unsuspecting back who stupidly thought that when one line followed another with no visible control flow, naturally the program would proceed from one line to the next, not randomly move to a completely different part of the program... Such naivety)

WarmWash 6 hours ago | parent | prev [-]

These are orthogonal to each other.

cmrdporcupine 5 hours ago | parent | prev | next [-]

The model has been heavily encouraged to not run away and do a lot without explicit user permission.

So I often find myself in a loop where it says "We should do X", and then just saying "ok" will not make it do it; you have to give it explicit instructions to perform the operation ("make it so", etc.).

It can be annoying, but I prefer this over my experiences with Claude Code, where I find myself jamming the escape key... NO NO NO NOT THAT.

I'll take its more reserved personality, thank you.

henry2023 6 hours ago | parent | prev | next [-]

I’m sorry for you but this is hilarious.

whatsupdog 6 hours ago | parent | prev | next [-]

[flagged]

addaon 7 hours ago | parent | prev | next [-]

Isn’t this the optimal behavior assuming that at times the service is compute-limited and that you’re paying less per token (flat fee subscription?) than some other customers? They would be strongly motivated to turn a knob to minimize tokens allocated to you to allow them to be allocated to more valuable customers.

endymi0n 7 hours ago | parent [-]

well, I do understand the core motivation, but if the system prompt literally says “I am not budget constrained. Spend tokens liberally, think hardest, be proactive, never be lazy.” and I’m on an open pay-per-token plan on the API, that’s not what I consider optimal behavior, even in a business sense.

addaon 7 hours ago | parent [-]

Fair, if you’re paying per token (at comparable rates to other customers) I wouldn’t expect this behavior from a competent company.

pixel_popping 7 hours ago | parent | prev [-]

GPT 5.4 is really good at following precise instructions but clearly wouldn't innovate on its own (except if the instructions clearly state to innovate :))

vlovich123 7 hours ago | parent | prev | next [-]

Conceivably you could have a public-facing dashboard of the rollout status to reduce confusion or even make it visible directly in the UI that the model is there but not yet available to you. The fanciest would be to include an ETA but that's presumably difficult since it's hard to guess in case the rollout has issues.

moralestapia 7 hours ago | parent [-]

Why would you be confused?

The UI tells you which model you're using at any given time.

ModernMech 6 hours ago | parent [-]

I don't see what model I'm using on the Codex web interface, where is that listed?

Grp1 7 hours ago | parent | prev | next [-]

Congrats on the release! Is Images 2.0 rolling out inside ChatGPT as well, or is some of the functionality still going to be API/Playground-only for a while?

minimaxir 7 hours ago | parent [-]

Images 2.0 is already in ChatGPT.

johndough 5 hours ago | parent | next [-]

When I generate an image with ChatGPT, is there a way for me to tell which image generation model has been used?

minimaxir 4 hours ago | parent [-]

There's no explicit flag, but Thinking is only compatible with Images 2.0, so I suspect that will be reliable.

Grp1 6 hours ago | parent | prev [-]

Great, thanks for clarifying :)

rev4n 5 hours ago | parent | prev | next [-]

Looks good, but I’m a little hesitant to try it in Codex as a Plus user since I’m not sure how much it would eat into the usage cap.

dandiep 6 hours ago | parent | prev | next [-]

Will GPT 5.5 fine tuning be released any time soon?

qsort 8 hours ago | parent | prev | next [-]

Great stuff! Congrats on the release!

fragmede 5 hours ago | parent | prev | next [-]

Are you able to say something about the training you've done to 5.5 to make it less likely to freak out and delete projects in what can only be called shame?

embedding-shape 4 hours ago | parent [-]

What? I've used Codex (the TUI) probably since the day it was available, been running gpt-5.4 exclusively these last few months, and never had it delete any projects in any way that could be called "shameful" or otherwise. What are you talking about?

wslh 6 hours ago | parent | prev | next [-]

Just a tip: add [translated] subtitles to the top video.

motoboi 7 hours ago | parent | prev | next [-]

Please next time start with Azure Foundry lol thanks!

stefan_ 8 hours ago | parent | prev | next [-]

[flagged]

mh- 8 hours ago | parent [-]

Every low-effort, thought-free comment like this further discourages people from engaging here on submissions about their employer.

Please don't.

dude250711 7 hours ago | parent | prev | next [-]

With Anthropic, newer models often lead to quality degradation. Will you keep GPT 5.4 available for some time?

fHr 6 hours ago | parent | prev | next [-]

LETS GO CODEX #1

pixel_popping 8 hours ago | parent | prev [-]

can't wait! Thanks guys. PS: when you drop a new model, it would be smart to reset weekly or at least session limits :)

pietz 7 hours ago | parent | next [-]

OpenAI has been very generous with limit resets. Please don't turn this into a weird expectation to happen whenever something unrelated happens. It would piss me off if I were in their place and I really don't want them to stop.

pixel_popping 7 hours ago | parent | next [-]

The suggestion wasn't about general limit resets when there are bugs or outages; it's just commercially useful to let users try new models when they have already reached their weekly limits.

cactusplant7374 7 hours ago | parent | prev | next [-]

There is absolutely nothing wrong with asking or suggesting. They are adults. I'm sure they can handle it.

Petersipoi 7 hours ago | parent | prev [-]

Sorry but why should we care if very reasonable suggestions "piss [them] off"? That sounds like a them problem. "Them" being a very wealthy business. I think OpenAI will survive this very difficult time that GP has put them through.

cmrdporcupine 8 hours ago | parent | prev [-]

Limits were just reset two days ago.

wahnfrieden 7 hours ago | parent [-]

And yet there was an outage last night

lawgimenez 7 hours ago | parent [-]

And they're having an outage right now.