OpenAI releases image generation in the API (openai.com)
382 points by themanmaran 15 hours ago | 215 comments
cuuupid 13 hours ago | parent | next [-]

When this was up yesterday I complained that the refusal rate was super high, especially on government- and military-shaped tasks, and that this would only push contractors to use CN-developed open-source models for work that could then be compromised.

Today I'm discovering there is a tier of API access with virtually no content moderation available to companies working in that space. I have no idea how to go about requesting that tier of access, but have spoken to 4 different defense contractors in the last day who seem to already be using it.

vasco 5 hours ago | parent | next [-]

Turns out AI alignment just means "align to the customer's current subscription plan", and not protecting the world. Classic.

bilbo0s 3 hours ago | parent [-]

More accurate to call it “alignment for plebes and not for the masters of the plebes”. Which I think we all kind of expect coming from the leaders of our society. That’s the way human societies have always worked.

I’m sure access to military grade tech is only one small slice in the set of advantages the masters get over the mastered in any human society.

samtp 13 hours ago | parent | prev | next [-]

What's a good use case for a defense contractor to generate AI images besides to include in presentations?

aigen000 12 hours ago | parent | next [-]

Fabricating evidence of weapons of mass destruction in some developing nation.

I kid; more real-world use cases would be concept images for a new product or marketing campaigns.

toasteros 5 hours ago | parent [-]

...you can do that with a pencil, though.

What an impossibly weird thing to "need" an LLM for.

KeplerBoy 3 hours ago | parent | next [-]

You can also create images by poking bits in a hex editor. Some tools are better suited than others.

Gud 3 hours ago | parent | prev [-]

I suppose you go everywhere on foot?

subroutine 12 hours ago | parent | prev | next [-]

Think of all the trivial ways an image generator could be used in business, and there is likely a similar use-case among the DoD and its contractors (e.g. create a cartoon image of a ship for a naval training aid; make a data dashboard wireframe concept for a decision aid).

cuuupid 4 hours ago | parent | prev | next [-]

The very simple use case is generating mock targets. In movies they make it seem like they use mannequin-style targets or traditional concentric circles, but those are infeasible and unrealistic, respectively. There's an entire modeling industry here, and being able to replace that with infinitely diverse AI-generated targets is valuable!

missedthecue 9 hours ago | parent | prev | next [-]

Generating 30,000 unique images of artillery pieces hiding in underbrush to train autonomous drone cameras.

gmerc 3 hours ago | parent | next [-]

Unreal, Houdini, and a bunch of assets do this just fine and provide actually usable depth / infrared / weather / fog / time-of-day and other relevant data for training, likely cheaper than using their API.

See bifrost.ai and their fun videos of training naval drones to avoid whales in an ethical manner.

junon 9 hours ago | parent | prev | next [-]

It's probably not that, but who knows.

The real answer is probably way, way more mundane - generating images for marketing, etc.

m4rtink 24 minutes ago | parent | next [-]

Never underestimate the military PowerPoint[1] industry!

[1] https://media.wired.com/photos/5933e578714b881cb296c6ef/mast...

TechDebtDevin 6 hours ago | parent | prev [-]

Well, considering an element of their access is the lifting of safety guardrails, I'd assume the scope includes, to some degree, the processing or generation of NSFW/questionable content.

Barrin92 6 hours ago | parent | prev | next [-]

I don't really understand the logic here. All the actual signal about what artillery in bushes looks like is already in the original training data. Synthetic data cannot conjure empirical evidence into existence; it's as likely to produce false images as real ones. Assuming the military has more privileged access to combat footage than a multi-purpose public chatbot, I'd expect synthetic data to degrade the accuracy of a drone.

johndough 5 hours ago | parent | next [-]

Generative models can combine different concepts from the training data. For example, the training data might contain a single image of a new missile launcher at a military parade. The model can then generate an image of that missile launcher hiding in a bush, because it has internalized the general concept of things hiding in bushes, so it can apply it to new objects it has never seen hiding in bushes.

rovr138 3 hours ago | parent | prev [-]

If you're building a system to detect something, usually you need enough variations. You add noise to the images, etc.

With this, you could create a dataset that will by definition have that. You should still corroborate the data, but it's a step ahead without having to take 1,000 photos and add enough noise and variations to get to 30k.
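
As a rough sketch of what that loop could look like (assuming the openai Python SDK; gpt-image-1 returns base64-encoded image data):

    # Hedged sketch: prompt-driven variations for a training set.
    import base64
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    conditions = ["dawn light", "heavy fog", "snow cover", "dense underbrush"]

    for i, cond in enumerate(conditions):
        r = client.images.generate(
            model="gpt-image-1",
            prompt=f"artillery piece concealed in {cond}, aerial view",
            size="1024x1024",
        )
        # gpt-image-1 returns base64 image data rather than URLs
        with open(f"sample_{i}.png", "wb") as f:
            f.write(base64.b64decode(r.data[0].b64_json))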

cortesoft 6 hours ago | parent | prev [-]

If the model can generate the images, can't it already recognize them?

Falimonda 6 hours ago | parent [-]

The model they're training to perform detection/identification out in the field would presumably need to be much smaller and run locally without needing to rely on network connectivity. It makes sense, so long as the openai model produces a training/validation set that's comparable to one that their development team would otherwise need to curate by hand.

ZeroTalent 12 hours ago | parent | prev | next [-]

Manufacturing consent

rnd0 10 hours ago | parent | next [-]

Literally how it will be used; you are correct.

matheusmoreira 7 hours ago | parent | prev [-]

Reality is turning into some kind of Hideo Kojima game.

https://youtu.be/-gGLvg0n-uY

Gud 3 hours ago | parent | next [-]

Wow! What an amazingly dystopian vision of the future. Probably right.

kla-s 5 hours ago | parent | prev [-]

Wow that video is awesome, thanks for sharing

potatoman22 10 hours ago | parent | prev | next [-]

Generating or augmenting data to train computer vision algorithms. I think a lot of defense problems have messy or scarce data.

tzury 10 hours ago | parent | prev | next [-]

AI image generation is a "statistical simulator". And when fed the right information, it can generate scenery pretty close to reality.

golergka 10 hours ago | parent | prev | next [-]

Input one image of a known military installation and one civilian building. Prompt to generate a similar _civilian_ building, but resembling that military installation in some way: similar structure, similar colors, similar lighting.

Then include this image in the dataset of another net, labeled "civilian". Train that new neural net so that it has a lower false-positive rate when asked "is this target military?"

aprilthird2021 9 hours ago | parent [-]

You'll never get promoted thinking like that! Mark them all "military", munitions sales will soar!

derektank 8 hours ago | parent | next [-]

You might not believe it but the US military actually places a premium on not committing war crimes. Every service member, or at least every airman in the Air Force (I can't speak for other branches) receives mandatory training on the Kunduz hospital before deployment in an effort to prevent another similar tragedy. If they didn't care, they wouldn't waste thousands of man-hours on it.

guappa 19 minutes ago | parent | next [-]

Most importantly, they finance propaganda films like "Eye in the Sky" to make it look like they give a shit about not killing civilians.

Videos on wikileaks tell a different story.

jncfhnb 6 hours ago | parent | prev | next [-]

I knew a guy whose job was to assess and approve the legality of each strike, considering second-order impacts on the community.

handfuloflight 7 hours ago | parent | prev [-]

> On 7 October 2015, President Barack Obama issued an apology and announced the United States would be making condolence payments of $6,000 to the families of those killed in the airstrike.

Definitely a premium.

golergka 8 hours ago | parent | prev [-]

Bombs and other kinds of weapon systems which are "smarter" carry higher markups. It's profitable to sell smarter weapons. Dumb weapons destroy whole cities, like Russia did in Ukraine. Smart weapons strike a tank, a car, an apartment, a bunker, knowing who's there and when, which obviously means a lower percentage of civilian casualties.

guappa 16 minutes ago | parent [-]

Remember when Obama re-defined the terms so that "all adult males are terrorists"? That's how the USA reduces civilian casualties.

aprilthird2021 9 hours ago | parent | prev | next [-]

Generating pictures of "bad-guy-looking guys" so your automated bombs shoot more, so you sell more bombs.

sandspar 10 hours ago | parent | prev [-]

Vastly oversimplified but for every civilian job there's an equivalent military job. Superficially, the military is basically a country-sized self-contained corporation. Anywhere that Wal-Mart's corporate office could use AI so could the military.

benterix an hour ago | parent | prev | next [-]

That tier is also available for text generation, not just images.

refulgentis 13 hours ago | parent | prev | next [-]

It's "tier 5". I've had an account since the 3.0 days so I can't be positive I'm not grandfathered in, but my understanding is that as long as you have a non-trivial amount of spend for a few months, you'll have that access.

(FWIW, for anyone curious how to implement it: it's the "moderation" parameter in the JSON request you send. I missed it for a few hours because it wasn't in DALL-E 3.)
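
For reference, a minimal sketch of that request, assuming the current openai Python SDK (the documented values are "auto" and "low"):

    # Hedged sketch: pass moderation="low" when generating with gpt-image-1.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    result = client.images.generate(
        model="gpt-image-1",
        prompt="a cute dog hugs a cute cat",
        size="1024x1024",
        quality="medium",
        moderation="low",  # "auto" (default) or "low"; absent in DALL-E 3
    )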

dunkmaster 13 hours ago | parent [-]

API shows either auto or low available. Is there another secret value with even lower restrictions?

refulgentis 12 hours ago | parent [-]

Not that I know of.

I just took any indication that the parent post meant absolutely zero moderation as them being a bit loose with their words and excitable with how they understand things. There were some signs:

1. It's unlikely they completed an API integration quickly enough to have an opinion on military/defense image generation moderation yesterday, so they're almost certainly speaking about ChatGPT. (This is additionally confirmed by image generation requiring tier 5 anyway, which they would have been aware of if they had integrated.)

2. The military/defense use cases for image generation are not provided (and the steelmanned version in other comments is nonsensical, i.e. we can quickly validate that you can still generate kanban boards or wireframes of ships).

3. The poster passively disclaims being in military/defense themself (grep "in that space").

4. It is hard to envision cases of #2 that do not require universal moderation for OpenAI's sake. I.e., let's say their thought process is along the lines of: defense/military ~= what I think of as CIA ~= black ops ~= image manipulation on social media; thus, the time I said "please edit this photo of the ayatollah to have him eating pig and say I hate allah" means it's overmoderated for defense use cases.

5. It's unlikely OpenAI wants to be anywhere near PR resulting from #4. Assuming there is a super secret defense tier that allows this, it's at the very least unlikely that the poster's defense contractor friends were blabbing about the exclusive completely unmoderated access they had, to the poster, within hours of release. They're pretty serious about that secrecy stuff!

6. It is unlikely the lack of ability to generate images using GPT Image 1 would drive the military to Chinese models (there aren't Chinese LLMs that do this! Even if there were, there's plenty of good ol' American diffusion models!)

Wowfunhappy 10 hours ago | parent | next [-]

I'm Tier 4 and I'm able to use this API and set moderation to "low". Tier 4 only requires a 30 day waiting period and $1,000 spent on credits. While I as an individual was a bit horrified to learn I've actually spent that much on OpenAI credits over the life of my account, it's practically nothing for most organizations. Even Tier 5 only requires $5,000.

OP was clearly implying there is some greater ability only granted to extra special organizations like the military.

With all possible respect to OP, I find this all very hard to believe without additional evidence. If nothing else, I don't really see a military application of this API (specifically, not AI in general). I'm sure it would help them create slide decks and such, but you don't need extra special zero moderation for that.

throwup238 9 hours ago | parent | next [-]

> With all possible respect to OP, I find this all very hard to believe without additional evidence. If nothing else, I don't really see a military application of this API (specifically, not AI in general). I'm sure it would help them create slide decks and such, but you don't need extra special zero moderation for that.

I can't provide additional evidence (it's defense, duh), but the #1 use I've seen is generating images for computer vision training, mostly to feed GOFAI algorithms that have already been validated for target acquisition. Image gen algorithms have a pretty good idea of what a T72 tank and different camouflage look like, and they're much better at generating unique photos combining the two. It's actually a great use of the technology, because hallucinations help improve the training data (i.e. the final targeting should be invariant to a T72 tank with a machine gun on the wrong side or with too many turrets, etc.)

That said, due to compartmentalization, I don't know the extent to which image gen is used in defense, just my little sliver of it.

cuuupid 4 hours ago | parent [-]

We can talk about it here; they put out SBIRs for satellite imagery labeling and test-set evaluation that provide a good amount of detail on how they're using it.

spauldo 4 hours ago | parent | prev | next [-]

There are plenty of fairly mundane applications for this sort of thing in the military. Every base has a photography and graphic design team that makes posters, signs, PR materials, pamphlets, illustrations for manuals, you name it. Imagine a poster in the break room of a soldier in desert gear drinking from his/her canteen with a tagline of "Stay Alive - Hydrate!" and you're on the right track.

Wowfunhappy 26 minutes ago | parent [-]

You don't need a special no moderation version to do that stuff.

bayesianbot 7 hours ago | parent | prev [-]

Tier 4 requires $250 spent. I'm tier 4 as well and I can see how they easily get mixed up, but it actually says $1,000 spent to move to the next tier.

Wowfunhappy 7 hours ago | parent [-]

Oops, thank you! So, even easier!

cuuupid 4 hours ago | parent | prev [-]

I am actually talking about the OpenAI API :)

I'm not aware of the moderation parameter here, but these contractors have special API keys that unlock unmoderated access for them; they've apparently had it for weeks.

subroutine 12 hours ago | parent | prev | next [-]

Do you work with OpenAI models via FedRAMP GCC High Azure? If so I would love to hear more about your experience.

cuuupid 4 hours ago | parent | next [-]

No, but have heard many rumors they are eyeing their own IL4 environment (apparently Azure has been a bad partner and is months behind on models)

I personally just warn customers that it cannot technically handle CUI or higher; can't say that it stops them.

subroutine 3 hours ago | parent [-]

I ask, because according to MS...

"GPT-4o is now available as part of Azure OpenAI Service for Azure Government and included as part of this latest FedRAMP High and DoD IL4/IL5 Authorization."

...we have everything set up in Azure but are wary of starting to use it with CUI. Our DoD contacts think it's good to go, but nobody wants to go on record as giving the go-ahead.

https://devblogs.microsoft.com/azuregov/azure-openai-fedramp...

https://learn.microsoft.com/en-us/azure/azure-government/com...

kryogen1c 9 hours ago | parent | prev [-]

I'd be interested to hear if that's even possible.

GCCH is typically 6-12 months behind in feature set.

subroutine 3 hours ago | parent [-]

See my comment above.

throwaway314155 13 hours ago | parent | prev | next [-]

> 4 different defense contractors in the last day

Now I'm just wondering what the hell defense contractors need image generation for that isn't obviously horrifying...

Aeolun 12 hours ago | parent | next [-]

“Generate me a crowd of civilians with one terrorist in.”

“Please move them to some desert, not the empire state building.”

“The civilians are supposed to have turbans, not ballcaps.”

ziml77 11 hours ago | parent [-]

That's very outdated, they're absolutely supposed to be at the Empire State Building with baseball caps now. See: ICE arrests and Trump's comment on needing more El Salvadoran prison space for "the homegrowns"

artemisart 10 hours ago | parent [-]

That was the joke.

Dylan16807 6 hours ago | parent [-]

The joke is AI knowing the job requirements better than the person using it? When talking about chatgpt?

I'm confused.

vFunct 12 hours ago | parent | prev | next [-]

Show me a tunnel underneath a building in the desert filled with small arms weapons with a poster on the wall with a map of the United States and a label written with sharpie saying “Bad guys here”. Also add various Arabic lettering on the weapons.

qatanah 11 hours ago | parent | prev | next [-]

All I can think of is generating images of potential targets like ships, airplanes, and airfields, feeding them to their satellites or drones for image detection, and tweaking their weapons for enhanced precision.

daemonologist 10 hours ago | parent [-]

I think the usual computer vision wisdom is that this (training object detection on generated imagery) doesn't work very well. But maybe the corps have some techniques that aren't in the public literature yet.

notarealllama 9 hours ago | parent [-]

My understanding is the opposite; see the papers on "synthetic" data training. They use a small bit of real data to generate lots of synthetic data and get usable results.

The bias leans towards overfitting the data, which is fine in some use cases, such as missile or drone design that doesn't need broad comparisons like 747s or artillery to complete its training.

Kind of like neural net backpropagation, but in terms of model/weights.

morleytj 13 hours ago | parent | prev | next [-]

It's probably horrifying!

renewiltord 12 hours ago | parent | prev [-]

They make presentations. Most of their work is presentations with diagrams. Icons.

kittikitti 12 hours ago | parent | prev [-]

This is on purpose so OpenAI can then litigate against them. This API isn't about a new feature, it's about control. OpenAI is the biggest bully in the space of generative AI and their disinformation and intimidation tactics are working.

tezza 11 hours ago | parent | prev | next [-]

For the curious, I generated the same prompt for each of the quality types: "auto", "low", "medium", "high".

Prompt: “a cute dog hugs a cute cat”

https://x.com/terrylurie/status/1915161141489136095

I also then showed a couple of DALL-E 3 images for comparison in a comment.

whywhywhywhy 29 minutes ago | parent | next [-]

Crazy that even photos have the OpenAI yellow color grade.

latexr 10 hours ago | parent | prev | next [-]

> the same prompt for each of the quality types. ‘Auto’, ‘low’, ‘medium’, ‘high’.

“Auto” is just whatever the best quality is for a model. So in this case it’s the same as “high”.

echelon 8 hours ago | parent | prev | next [-]

> a cute dog hugs a cute cat

This prompt is best served by Midjourney, Flux, Stable Diffusion. It'll be far cheaper, and chances are it'll also look a lot better.

The place where gpt-image-1 shines if if you want to do a prompt like:

"a cute dog hugs a cute cat, they're both standing on top of an algebra equation (y=\(2x^{2}-3x-2\)). Use the first reference image I uploaded as a source for the style of the dog. Same breed, same markings. The cat can contrast in fur color. Use the second reference image I uploaded as a guide for the background, but change the lighting to sunset. Also, solve the equation for x."

gpt-image-1 doesn't make the best images, and it isn't cheap, and it isn't fast, but it's incredibly -- almost insanely -- powerful. It feels like ComfyUI got packed up into an LLM and provided as a natural language service.

stavros 8 hours ago | parent [-]

I wonder if we can use gpt-image-1 outputs, with some noise, as inputs to diffusion models, so GPT takes care of adherence and the diffusion model improves the quality. Does anyone know whether that's at all possible?

AuryGlenz 4 hours ago | parent | next [-]

Sure. I suppose with API support 3 hours ago someone probably made a Comfy node all of 2 hours ago. From there you can either just do a low denoise or use one of the many IP-Adapter type things out there.
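
A minimal sketch of that low-denoise pass, assuming the diffusers library and SDXL refiner weights (the model name and strength value are illustrative):

    # Hedged sketch: refine a gpt-image-1 output with a low-strength img2img pass.
    import torch
    from diffusers import StableDiffusionXLImg2ImgPipeline
    from PIL import Image

    pipe = StableDiffusionXLImg2ImgPipeline.from_pretrained(
        "stabilityai/stable-diffusion-xl-refiner-1.0",
        torch_dtype=torch.float16,
    ).to("cuda")

    init = Image.open("gpt_image_1_output.png").convert("RGB")
    # low strength = light denoise: adds texture without undoing prompt adherence
    out = pipe(prompt="same scene, photorealistic detail",
               image=init, strength=0.25).images[0]
    out.save("refined.png")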

levzzz 6 hours ago | parent | prev [-]

Yes, it's what a lot of people have been doing with newer models that have better prompt adherence: passing their outputs through older models with better aesthetics.

MoonGhost 8 hours ago | parent | prev [-]

Not bad. Photo forums will soon be full of them, slightly edited to remove metadata and make them look human-made.

alasano 6 hours ago | parent | prev | next [-]

I built a local playground for it if anyone is interested (your OpenAI org needs to be verified, btw).

https://github.com/Alasano/gpt-image-1-playground

OpenAI's Playground doesn't expose all the API options.

Mine covers all options, and has built-in mask creation and cost tracking as well.

film42 14 hours ago | parent | prev | next [-]

I generated 5 images in the playground. One using a text-only prompt and 4 using images from my phone. I spent $0.85 which isn't bad for a fun round of Studio Ghibli portraits for the family group chat, but too expensive to be used in a customer facing product.

sumedh 11 hours ago | parent [-]

> but too expensive to be used in a customer facing product.

Enhance headshots for putting on LinkedIn.

salomonk_mur 7 hours ago | parent | next [-]

It doesn't keep facial details in the generation. The generated person resembles you but is definitely not you.

bamboozled 9 hours ago | parent | prev | next [-]

Can't wait to meet people in person who look nothing like their profile pictures on linkedin :)

martin_a 3 hours ago | parent [-]

I already did. Looked in the mirror just an hour ago. Strange guy, very tired, never seen him before.

BOOSTERHIDROGEN 10 hours ago | parent | prev [-]

is it good?

stavros 8 hours ago | parent [-]

No, it can't do detail well; AFAIK the images are produced at a lower resolution and then upscaled. This might be specific to the ChatGPT version, however, for cost cutting.

Imnimo 13 hours ago | parent | prev | next [-]

I'm curious what the applications are where people need to generate hundreds or thousands of these images. I like making Ghibli-esque versions of family photos as much as the next person, but I don't need to make them in volume. As far as I can recall, every time I've used image generation, it's been one-off things that I'm happy to do in the ChatGPT UI.

whywhywhywhy 28 minutes ago | parent | next [-]

> where people need to generate hundreds or thousands of these images

Anyone using image gen for real work not just for fun.

Although you're way better off finding your own workflows with local models at that scale.

minimaxir 13 hours ago | parent | prev | next [-]

As usual for AI startups nowadays, using this API you can create a downstream wrapper for image generation with bespoke prompts.

A pro/con of the multimodal image generation approach (with an actually good text encoder) is that it rewards intense prompt engineering more so than others, and if there is a use case that can generate more than $0.17/image in revenue, that's positive marginal profit.

theptip 9 hours ago | parent | prev | next [-]

An obvious one is for video games, interactive fiction, that sort of thing. AI dungeon with visuals could be pretty interesting.

brian-armstrong 6 hours ago | parent [-]

It's too expensive for that unless you had a pretty generous subscription fee. I think local models are probably best suited for gaming where a decent GPU is already likely present.

austhrow743 12 hours ago | parent | prev | next [-]

I use the API because I don't use ChatGPT enough to justify the cost of their UI offering.

marviel 13 hours ago | parent | prev | next [-]

AI-assisted education is promising.

samtp 13 hours ago | parent | next [-]

I'm still struggling to see how you would need thousands of AI generated images rather than just using existing real images for education.

abossy 7 hours ago | parent | next [-]

The company I work for generates thousands of these each week for children's personalized storybooks to help them learn how to read. The story text is the core part of the application, but the personalized images are what make them engaging.

marviel 12 hours ago | parent | prev [-]

- personalization (style, analogy to known concepts)

- specificity (a diagram that perfectly encapsulates the exact set of concepts you're asking about)

indeyets 12 hours ago | parent [-]

But LLMs are not reliable enough, so you cannot actually expect "specificity".

aeonik 8 hours ago | parent | next [-]

More reliable than 80% of my teachers growing up.

marviel 12 hours ago | parent | prev [-]

Not perfect now, but adequate in some domains. Will only get better.

Hackbraten 7 hours ago | parent [-]

> Will only get better.

AI companies are still in their "burning money" phase.

Enshittification is not on the horizon yet, but it's inevitable.

whatnow37373 an hour ago | parent | prev | next [-]

"Having trouble with your algebra? MathWiz is having a 20% discount this month only. Only $24.95 / month. This is an excellent deal. Don't you want to improve? Do you want to let your family down, like they thought you would? Or would like me to create an account for you?"

Etheryte 13 hours ago | parent | prev [-]

That is true in a broader sense, but education and abundant money don't generally go hand in hand.

marviel 12 hours ago | parent [-]

don't I know it

chipgap98 6 hours ago | parent | prev | next [-]

Interior design, fashion, and advertising all come to mind

jevogel 11 hours ago | parent | prev | next [-]

Imagine an AI recipe building app that helps you create a recipe with certain ingredients, then generates an image of what the final product might look like.

what 9 hours ago | parent [-]

Why do you need to know what it looks like? Or are you publishing the recipe without cooking it?

aprilthird2021 9 hours ago | parent | prev [-]

Imagine a news feed that never ends full of AI slop to sell ads on

_pdp_ 3 hours ago | parent | prev | next [-]

We have integrated it into our platform and we already have use-cases for it to help create ads and other marketing material.

However, while better than other models, it is not perfect. The image edit API will make a similar-looking picture (even with masking) rather than exactly the same picture with some modifications.

minimaxir 14 hours ago | parent | prev | next [-]

Pricing-wise, this API is going to be hard to justify the value of unless you really can get value out of providing references. A generated `medium` 1024x1024 is $0.04/image, which is in the same cost class as Imagen 3 and Flux 1.1 Pro. Testing from their new playground (https://platform.openai.com/playground/images), the medium images are indeed lower quality than either of the two competitor models, and generation still takes 15+ seconds: https://x.com/minimaxir/status/1915114021466017830

Prompting the model is also substantially different from, and more difficult than, prompting traditional models, unsurprisingly given the way the model works. The traditional image tricks don't work out of the box, and I'm struggling to get something that works without significant prompt augmentation (which is what I suspect was used for the ChatGPT image generations).
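
For a concrete (hypothetical) example of the kind of prompt augmentation I mean, assuming the openai Python SDK, here's a sketch that expands the prompt with a chat model before the image call; the model names are just placeholders:

    # Hedged sketch: LLM prompt augmentation before image generation.
    from openai import OpenAI

    client = OpenAI()
    base = "a cute dog hugs a cute cat"

    # Ask a chat model to flesh out the short prompt first.
    expanded = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any chat model works
        messages=[{"role": "user",
                   "content": f"Expand this into a detailed image prompt: {base}"}],
    ).choices[0].message.content

    image = client.images.generate(model="gpt-image-1", prompt=expanded)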

raincole 13 hours ago | parent | next [-]

ChatGPT's prompt adherence is light years ahead of all the others. I won't even call Flux/Midjourney its competitors. ChatGPT image gen is practically a one-of-its-kind product on the market: the only usable AI image editor for people without image editing experience.

I think in terms of image generation, ChatGPT is the biggest leap since Stable Diffusion's release. LoRA/ControlNet/Flux are forgettable in comparison.

thegeomaster 12 hours ago | parent | next [-]

Well, there's also gemini-2.0-flash-exp-image-generation. Also autoregressive/transfusion based.

thefourthchime 12 hours ago | parent | next [-]

Such a good name....

Yiling-J 8 hours ago | parent | prev | next [-]

gemini-2.0-flash-exp-image-generation doesn't perform as well as GPT-4o's image generation, as mentioned in section 5.1 of this paper: https://arxiv.org/pdf/2504.02782. However, based on my tests, for certain types of images, such as realistic recipe images, the results are quite good. You can see some examples here: https://github.com/Yiling-J/tablepilot/tree/main/examples/10...

raincole 5 hours ago | parent | prev | next [-]

It's quite bad now, but I have no doubt that Google will catch up.

The AI field looks awfully like {OpenAI, Google, The Irrelevant}.

yousif_123123 11 hours ago | parent | prev | next [-]

It's also good, but still clearly not close. Maybe Gemini 2.5 or 3 will have better image gen.

swyx 7 hours ago | parent | prev [-]

> transfusion based.

what is that?

echelon 8 hours ago | parent | prev | next [-]

I'd go out on a limb and say that even your praise of gpt-image-1 is underselling its true potential. This model is as remarkable as when ChatGPT first entered the market. People are sleeping on its capabilities. It's a replacement for ComfyUI and potentially most of Adobe in time.

Now for the bad part: I don't think Black Forest Labs, StabilityAI, MidJourney, or any of the others can compete with this. They probably don't have the money to train something this large and sophisticated. We might be stuck with OpenAI and Google (soon) for providing advanced multimodal image models.

Maybe we'll get lucky and one of the large Chinese tech companies will drop a model with this power. But I doubt it.

This might be the first OpenAI product with an extreme moat.

raincole 6 hours ago | parent [-]

> Now for the bad part: I don't think Black Forest Labs, StabilityAI, MidJourney, or any of the others can compete with this.

Yeah. I'm a tad sad about it. I once thought the SD ecosystem proved that open source had won when it comes to image gen (a naive idea, I know). It turns out big corps won hard in this regard.

soared 13 hours ago | parent | prev [-]

This take is so hyperbolic it doesn't seem credible.

stavros 12 hours ago | parent | next [-]

I can confirm, ChatGPT's prompt adherence is so incredibly good, it gets even really small details right, to a level that diffusion-based generators couldn't even dream of.

mediaman 13 hours ago | parent | prev | next [-]

It is correct; the shift from diffusion to transformers is a very, very big difference.

abhpro 9 hours ago | parent | prev | next [-]

Also chiming in to say you're wrong, I mean they're correct

tacoooooooo 13 hours ago | parent | prev [-]

It's 100% the correct take.

fkyoureadthedoc 13 hours ago | parent [-]

yeah this is my personal experience. The new image generation is the only reason I keep an OpenAI subscription rather than switching to Google.

adamhowell 13 hours ago | parent | prev | next [-]

So, I've long dreamed of building an AI-powered https://iconfinder.com.

I started Accomplice v1 back in 2021 with this goal in mind and raised some VC money but it was too early.

Now, with these latest imagen-3.0-generate-002 (Gemini) and gpt-image-1 (OpenAI) models – especially this API release from OpenAI – I've been able to resurrect Accomplice as a little side project.

Accomplice v2 (https://accomplice.ai) is just getting started back up again – I honestly decided to rebuild it only a couple of weeks ago, in preparation for today, once I saw ChatGPT's new image model – but so far there are 1,000s of free-to-download PNGs (and any SVGs that have already been vectorized are free too; it costs a credit to vectorize).

I generate new icons every few minutes from a huge list of "useful icons" I've built. It will be 100% pay-as-you-go. And for a credit, paid users can vectorize any PNGs they like, tweak them using AI, upload their own images to vectorize and download, or create their own icons (with my prompt injections baked in to get you good icon results).

Do multi-modal models make something like this obsolete? I honestly am not sure. In my experience with Accomplice v1, a lot of users didn't know what to do with a blank textarea, so the thinking here is there's value in doing some of the work for them upfront with a large searchable archive. Would love to hear others' thoughts.

But I'm having fun again either way.

stavros 12 hours ago | parent [-]

That looks interesting, but I don't know how useful single icons can be. For me, the really useful part would be to get a suite of icons that all have a consistent visual style. Bonus points if I can prompt the model to generate more icons with that same style.

throwup238 12 hours ago | parent [-]

Recraft has a style feature where you give some images. I wonder if that would work for icons. You can also try giving an image of a bunch of icons to ChatGPT and have it generate more, then vectorize them.

vunderba 10 hours ago | parent | next [-]

Recraft's icon generator lets you do this.

https://imgur.com/a/BTzbsfh

It definitely captures the style - but any reasonably complicated prompt was beyond it.

stavros 12 hours ago | parent | prev [-]

I think the latter approach is the best bet right now, agree.

tough 14 hours ago | parent | prev | next [-]

It seems to me like this is a new hybrid product for vibe coders, because otherwise the wrapping of prompting/improving a prompt with an LLM before hitting the text2image model can certainly, as you say, be done cheaper if you just run it yourself.

Maybe OpenAI thinks the model business is over and they need to start sherlocking all the way from the top down to final apps (thus their interest in buying out Cursor, finally ending up with Windsurf).

Idk, this feels like a new offering between a full raw API and a final product, where you abstract some of it for a few cents, and they're basically bundling their SOTA LLM models with their image models for extra margin.

vineyardmike 14 hours ago | parent [-]

> It seems to me like this is a new hybrid product for -vibe coders- beacuse otherwise the -wrapping- of prompting/improving a prompt with an LLM before hitting the text2image model can certainly be done as you say cheaper if you just run it yourself.

In case you didn’t know, it’s not just wrapping in an LLM. The image model they’re referencing is a model that’s directly integrated into the LLM for functionality. It’s not possible to extract, because the LLM outputs tokens which are part of the image itself.

That said, they’re definitely trying to focus on building products over raw models now. They want to be a consumer subscription instead of commodity model provider.

tough 14 hours ago | parent | next [-]

Right! I forgot the new model was a multi-modal one generating image outputs from both image and text inputs. I guess this is good, and the price will come down eventually.

Waiting for some FOSS multi-modal model to come out eventually too.

Great to see OpenAI expanding into making actual usable products, I guess.

spilldahill 13 hours ago | parent | prev [-]

yeah, the integration is the real shift here. by embedding image generation into the LLM’s token stream, it’s no longer a pipeline of separate systems but a single unified model interface. that unlocks new use cases where you can reason, plan, and render all in one flow. it’s not just about replacing diffusion models, it’s about making generation part of a broader agentic loop. pricing will drop over time, but the shift in how you build with this is the more interesting part.

furyofantares 13 hours ago | parent | prev | next [-]

I find prompting the model substantially easier than traditional models, is it really more difficult or are you just used to traditional models?

I suspect what I'll do with the API is iterate at medium quality and then generate a high quality image when I'm done.

vunderba 13 hours ago | parent | prev | next [-]

> Prompting the model is also substantially more different and difficult than traditional models

Can you elaborate? This was not my experience - retesting the prompts that I used for my GenAI image shootout against gpt-image-1 API proved largely similar.

https://genai-showdown.specr.net

simonw 14 hours ago | parent | prev | next [-]

It may lose against other models on prompt-to-image, but I'd be very excited to see another model that's as good at this one as image+prompt-to-image. Editing photos with ChatGPT over the past few weeks has been SO much fun.

Here's my dog in a pelican costume: https://bsky.app/profile/simonwillison.net/post/3lneuquczzs2...

steve_adams_86 14 hours ago | parent [-]

The dog ChatGPT generated doesn't actually look like your dog. The eyes are so different. Really cute image, though.

thot_experiment 13 hours ago | parent | prev | next [-]

Similarly to how 90% of my LLM needs are met by Mistral 3.1, there's no reason to use 4o for most t2i or i2i. However, there's a definite set of tasks that are not possible with diffusion models, or that require a giant ball of node spaghetti in ComfyUI to achieve. The price is high, but the likelihood of getting the right answer on the first try is absolutely worth the cost imo.

varenc 13 hours ago | parent | prev | next [-]

Pretty amazing that after ~two years, a 15-second-latency AI image generation API that costs 4 cents lags behind competitors.

echelon 8 hours ago | parent [-]

This product does not lag behind competitors. Once you take the time to understand how it works, it's clear that this is an order of magnitude more powerful than anything else on the market.

While there's a market need for fast diffusion, that's already been filled and is now a race to the bottom. There's nobody else that can do what OpenAI does with gpt-image-1. This model is a truly programmable graphics workflow engine. And this type of model has so much more value than mere "image generation".

gpt-image-1 replaces ComfyUI, inpainting/outpainting, LoRAs, and in time one could imagine it replaces Adobe Photoshop and nearly all the things people use it for. It's an image manipulation engine, not just a diffusion model. It understands what you want on the first try, and it does a remarkably good job at it.

gpt-image-1 is a graphics design department in a box.

Please don't think of this as a model where you prompt things like "a dog and a cat hugging". This is so much more than that.

Sohcahtoa82 13 hours ago | parent | prev | next [-]

> A generated `medium` 1024x1024 is $0.04/image

It's actually more than that. It's about 16.7 cents per image.

$0.04/image is the pricing for DALL-E 3.

mkl 12 hours ago | parent | next [-]

16.7 cents is the high quality cost, and medium is 4.2 cents: https://platform.openai.com/docs/pricing#:~:text=1M%20charac...

Sohcahtoa82 11 hours ago | parent [-]

Ah, they changed that page since I saw it yesterday.

They didn't show low/med/high quality, they just said an image was a certain number of tokens with a price per token that led to $0.16/image.

weird-eye-issue 13 hours ago | parent | prev [-]

No, it's not

doctorpangloss 14 hours ago | parent | prev | next [-]

It's far and away the most powerful image model right now. $0.04/image is a decent price!

arevno 13 hours ago | parent [-]

This is extremely domain-specific. Diffusion models work much better for certain things.

thot_experiment 13 hours ago | parent | next [-]

Can you cite an example? I'm really curious where that set of use cases lies.

koakuma-chan 13 hours ago | parent | next [-]

Explicit adult content.

thot_experiment 13 hours ago | parent [-]

False. That has nothing to do with the model architecture and everything to do with cloud inference providers wanting to avoid regulatory scrutiny.

echelon 13 hours ago | parent | prev [-]

I work in the space. There are a lot of use cases that get censored by OpenAI, Kling, Runway, and various other providers for a wide variety of reasons:

- OpenAI is notorious for blocking copyrighted characters. They do prompt keyword scanning, but also run a VLM on the results so you can't "trick" the model.

- Lots of providers block public figures and celebrities.

- Various providers block LGBT imagery, even safe for work prompts. Kling is notorious for this.

- I was on a sales call with someone today who runs a father's advocacy group. I don't know what system he was using, but he said he found it impossible to generate an adult male with a child. In a totally safe for work context.

- Some systems block "PG-13" images of characters that are in bathing suits or scantily clad.

None of this is porn, mind you.

thot_experiment 13 hours ago | parent | next [-]

Sure but that has nothing to do with the model architecture and everything to do with the cloud inference providers wanting to cover their asses.

throwaway314155 13 hours ago | parent | prev [-]

What does any of that have to do with the distinction between diffusion vs. autoregressive models?

echelon 13 hours ago | parent | prev [-]

I don't think so. This model kills the need for Flux, ComfyUI, LoRAs, fine tuning, and pretty much everything that's come before it.

This is the god model in images right now.

I don't think open source diffusion models can catch up with this. From what I've heard, this model took a huge amount of money to train, money that not even Black Forest Labs has access to.

thot_experiment 13 hours ago | parent | next [-]

ComfyUI supports 4o natively, so you get the best of both worlds. There is so much that you can't do with 4o, because there's a fundamental limit on the level of control you can have over image generation when your conditioning is just tokens in an autoregressive model. There's plenty of reason to use Comfy even if 4o is part of your workflow.

As for LoRAs and fine-tuning and open source in general: if you've ever been to civit.ai, it should be immediately obvious why those things aren't going away.

AuryGlenz 4 hours ago | parent | prev [-]

95% of what I do with image models is training LoRAs/finetunes of family and friends and creating images of them.

Sure, I can ghiblify specific images of them on this model, but anything approaching realistic changes their looks. I've also done specific LoRAs for things that may or may not be in their training data, such as specific movies.

Wowfunhappy 10 hours ago | parent | prev [-]

Huh? For me the quality of the API seems to be identical to what I'm getting in ChatGPT.

jumploops 11 hours ago | parent | prev | next [-]

This new model is autoregression-based (similar to LLMs, token by token) rather than diffusion based, meaning that it adheres to text prompts with much higher accuracy.

As an example, some users (myself included) of a generative image app were trying to make a picture of a person in the pouch of a kangaroo.

No matter what we prompted, we couldn’t get it to work.

GPT-4o did it in one shot!

yousif_123123 11 hours ago | parent | next [-]

It feels like a mix of both to me as I've been testing it. For example, you can't get it to make a clock showing a custom time like 3:30, or someone writing with their left hand. And it can't follow many instructions or do them very precisely. But it shows that this kind of architecture will most likely be capable of that if scaled up.

jumploops 10 hours ago | parent [-]

These are great tests, thanks for sharing!

And you seem to be right, though the only reference I can find is in one of the example images of a whiteboard posted on the announcement[0].

It shows: tokens -> [transformer] -> [diffusion] pixels

hjups22 on Reddit[1] describes it as:

> It's a hybrid model. The AR component generates control embeddings that then get decoded by a diffusion model. But the control embeddings are accurate enough to edit and reconstruct the images surprisingly well.

[0]https://openai.com/index/introducing-4o-image-generation/

[1]https://www.reddit.com/r/MachineLearning/comments/1jkt42w/co...

n2d4 10 hours ago | parent | prev [-]

Source? It's much more likely that the LLM generates the latent vector which serves as an input to the diffusion model.

jumploops 10 hours ago | parent | next [-]

From the GPT-4o System Card Addendum[0]:

> Unlike DALL·E, which operates as a diffusion model, 4o image generation is an autoregressive model natively embedded within ChatGPT.

[0]https://cdn.openai.com/11998be9-5319-4302-bfbf-1167e093f1fb/...

og_kalu 10 hours ago | parent | prev [-]

OpenAI said it's autoregressive, the presentation in the app is autoregressive, and it's priced autoregressively.

Why would that be more likely? It seems like some implementation of ByteDance's VAR.

badmonster 12 hours ago | parent | prev | next [-]

Usage of gpt-image-1 is priced per token, with separate pricing for text and image tokens:

Text input tokens (prompt text): $5 per 1M tokens
Image input tokens (input images): $10 per 1M tokens
Image output tokens (generated images): $40 per 1M tokens

In practice, this translates to roughly $0.02, $0.07, and $0.19 per generated image for low, medium, and high-quality square images, respectively.

That's a bit pricey for a startup.
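
As a back-of-envelope check of those per-image figures against the quoted token prices (the token counts below are illustrative assumptions, not official numbers):

    # Hedged sketch: per-image cost from the $40 / 1M output-token price above.
    PRICE_PER_OUTPUT_TOKEN = 40 / 1_000_000  # dollars

    def image_cost(output_tokens: int) -> float:
        return output_tokens * PRICE_PER_OUTPUT_TOKEN

    # Assumed token counts per square image (illustrative only):
    # ~500 tokens   -> $0.02 (low)
    # ~1,750 tokens -> $0.07 (medium)
    # ~4,750 tokens -> $0.19 (high)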

m4thfr34k 8 hours ago | parent [-]

Isn't there also a cost per image? The pricing page shows $0.25 for a high quality 1536x1024 image. 25 cents per image is ... steep lol

BoorishBears 6 hours ago | parent [-]

Cost per image is based on output tokens (generated images are billed as output tokens).

gervwyk 14 hours ago | parent | prev | next [-]

Great SVG generation would be far more useful! For example, being able to edit SVG images after they're generated by AI would make last-mile modifications quick. For our new website https://resonancy.io, the simple SVG workflow images were still very much created by hand, and trying various AI tools to make such images yielded shockingly bad, off-brand results even when provided multiple examples. By far the best tool for this is still Canva for us.

Anyone know of an AI model for generating SVG images? Please share.

jjcm 14 hours ago | parent | next [-]

Recraft also has an svg model: https://replicate.com/recraft-ai/recraft-v3-svg

One note with these: most of the production ones are actually diffusion models that get run through an image-to-SVG model after. The issue with this is that the layers aren't set up semantically like you'd expect if you were crafting these by hand, or if you were directly generating SVGs. The results work, but they aren't perfect.

simonw 14 hours ago | parent | prev | next [-]

I was impressed with recraft.ai for SVGs - https://simonwillison.net/2024/Nov/15/recraft-v3/ - though as far as I can tell they generate raster images and then SVG-ize them before returning the result.

tough 14 hours ago | parent | prev | next [-]

SVGFusion https://arxiv.org/abs/2412.10437 which is a new paper from SVGRender group https://huggingface.co/SVGRender

OmniSVG https://arxiv.org/abs/2504.06263v1

gervwyk 14 hours ago | parent [-]

Amazing, thanks for sharing! Will have a read. A commercial model would be something that I would pay for!

tough 14 hours ago | parent | next [-]

I don't know about commercial offerings, but you can also try something like SVGRender, which you should be able to run on your own GPU: https://ximinng.github.io/PyTorch-SVGRender-project/

The first paper linked in the prior comment is the latest one from the SVGRender group, but I'm not sure if any runnable model weights are out yet for it (SVGFusion).

corysama 11 hours ago | parent | prev [-]

Is free cheap enough ;)

https://omnisvg.github.io/

https://huggingface.co/OmniSVG

vitorcremonez 12 hours ago | parent | prev [-]

Try neoSVG or Recraft, it is awesome!

pknerd 4 hours ago | parent | prev | next [-]

I would like to know some resources about prompt engineering for the OpenAI image gen module, especially for products related to images or ads.

PS: Does anyone know a good LLM/service to turn images into videos?

sebastiennight 14 hours ago | parent | prev | next [-]

Hmm seems pricey.

What's the current state of the art for API generation of an image from a reference plus modifier prompt?

Say, in the 1c per HD (1920*1080) image range?

minimaxir 14 hours ago | parent [-]

"Image from a reference" is a bit of a rabbit hole. For traditional image generation models, in order for it to learn a reference, you have to fine-tune it (LoRA) and/or use a conditioning model to constrain the output (InstantID/ControlNet)

The interesting part of this GPT-4o API is that it doesn't need to learn them. But given the cost of `high` quality image generation, it's much cheaper to train a LoRA for Flux 1.1 Pro and generate from that.

thot_experiment 13 hours ago | parent | next [-]

Reflux is fantastic for the basic reference-image-based editing most people are using this for, but 4o is far more powerful than any existing models because of its large scale and cross-modal understanding; there are things possible with 4o that are just 100% impossible with diffusion models (full glass of wine, horse riding an astronaut, room without pink elephants, etc.).

Tiberium 13 hours ago | parent | prev [-]

Imagen supports image references in the API as well, just on Vertex, not on Gemini API yet.

BoorishBears 6 hours ago | parent [-]

Imagen references don't feel very useful at all. At most it feels like an afterthought meant to make product photoshoots easier.

ChaitanyaSai 6 hours ago | parent | prev | next [-]

Almost every image has a yellow tint. Any discussion of why and when that's being fixed?

thinkingemote 14 minutes ago | parent [-]

Maybe it's a kind of watermark?

claiir 11 hours ago | parent | prev | next [-]

> GoDaddy is actively experimenting to integrate image generation so customers can easily create logos that are editable [..]

I remember meeting someone on Discord 1-2 years ago (?) working on a GoDaddy effort to have customer-generated icons using bespoke foundation image gen models? Suppose that kind of bespoke model at that scale is ripe for replacement by gpt-image-1, given the instruction-following ability / steerability?

greatgib 11 hours ago | parent | prev | next [-]

Anyone have an idea of what an "image token" represents for the pricing? Is it a fixed-size block of the image?

verelo 11 hours ago | parent | prev | next [-]

“ Editing videos: invideo enables millions of users to transform their ideas into videos using AI. With the integration of gpt-image-1, the platform now offers improved text generation, fine-grain editing controls, and advanced style guidance.”

Does this mean this also does video in some manner?

jeevships 9 hours ago | parent | prev | next [-]

Genuinely curious: why would someone buy from your GPT image wrapper when they can just create it in ChatGPT themselves?

jonahx 9 hours ago | parent | next [-]

Not being glib, but this is like the famous comment when dropbox was first announced: "you can already build such a system yourself quite trivially by getting an FTP account, mounting it locally with curlftpfs, and then using SVN or CVS on the mounted filesystem". [1]

You might say, "but chatGPT is already as dead simple an interface as you can imagine". And the answer to that is, for specific tasks, no general interface is ever specific enough. So imagine you want to use this to create "headshots" or "linkedin bio photos" from random pictures of yourself. A bespoke interface, with options you haven't even considered already thought through for you, and some quality control/revisions baked into the process, is something someone might pay for.

[1] https://news.ycombinator.com/item?id=9224

tarikozket 9 hours ago | parent | prev [-]

Different personas require different UXs. Not everyone is going to understand and enjoy the chat interface; many will require a different UX.

scyzoryk_xyz 14 hours ago | parent | prev | next [-]

Intelligence is fast approaching utility status.

jonplackett 13 hours ago | parent | prev | next [-]

Does anyone know if you can give this endpoint an image as input along with text? Not just an image to mask, but an image as part of a text input description.

I can’t see a way to do this currently, you just get a prompt.

This, I think, is the most powerful way to use the new image model since it actually understands the input image and can make a new one based on it.

E.g. you can give it a person sitting at a desk and it can make one of them standing up. Or from another angle. Or on the moon.

loktarogar 13 hours ago | parent | next [-]

Seems like exactly one of their examples, or am I missing something? "Create a new image using image references" https://platform.openai.com/docs/guides/image-generation#cre...
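
A minimal sketch of that guide's approach, assuming the openai Python SDK's images.edit endpoint (the filename and prompt are placeholders):

    # Hedged sketch: new image from a reference photo plus a text instruction.
    from openai import OpenAI

    client = OpenAI()
    result = client.images.edit(
        model="gpt-image-1",
        image=open("person_at_desk.png", "rb"),  # the reference image
        prompt="the same person, now standing on the moon",
    )
    # result.data[0].b64_json holds the base64-encoded output image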

jonplackett 3 hours ago | parent [-]

Awesome. Thank you!

adamhowell 13 hours ago | parent | prev [-]

I think this is technically "image variations", and I think image variations are still only DALL-E 3 for now (best I could tell earlier today from the API).

MisterBiggs 14 hours ago | parent | prev | next [-]

Lots of comments on the price being too high; what are the odds this is a subsidized bare-metal cost?

kevinqi 14 hours ago | parent [-]

just based on how long it takes to produce these images, and how much text responses cost, I wouldn't be surprised at all if it was close to cost

drakenot 13 hours ago | parent | prev | next [-]

Does the AI have the same content restrictions that the chat service does?

gcrfelix 11 hours ago | parent | prev | next [-]

Lesson: never build your moat around optimizing an existing AI capability.

topaz0 6 hours ago | parent | prev | next [-]

Criminally wasteful.

smrt 14 hours ago | parent | prev | next [-]

I don't understand why this API needs organization verification. More paperwork ahead. Facepalm.

PermissionDeniedError: Error code: 403 - {'error': {'message': 'To access gpt-image-1, please complete organization verification

themanmaran 14 hours ago | parent | next [-]

Likely because they've seen a lot of the potential abuse capabilities. i.e. the "generate a drivers license with this face".

So the options are: 1) nerf the model so it can't produce images like that, or 2) use some type of KYC verification.

magackame 13 hours ago | parent | next [-]

The model is already pretty lobotomized, refusing even mundane requests randomly.

Upload a picture of a friend -> OK. Upload my own picture -> I can't generate anything involving real people.

Also, after they enabled global chat memory, I started seeing my other chats leak into the images as literal text. I've disabled it since.

vunderba 10 hours ago | parent | prev [-]

Yep - the API lets you lower the moderation, which I observed allows more violent and graphic prompts, but moderation still exists and will often reject requests that reference popular figures, etc.

bayesianbot 7 hours ago | parent | prev [-]

It says "Organization verification" but I got my personal account (with Personal as organization) verified with just a passport.

GaggiX 12 hours ago | parent | prev | next [-]

Far too expensive, I think I will wait for an equivalent Gemini model.

1oooqooq 13 hours ago | parent | prev | next [-]

Aren't you all embarrassed seeing lame press releases of the most uninteresting things at the top of the HN front page? I kinda feel bad.

bobxmax 13 hours ago | parent | next [-]

I'm embarrassed that you find revolutionary tech uninteresting.

1oooqooq 13 hours ago | parent [-]

It's literally one feature now available in a different billing format. Get a grip.

urbandw311er an hour ago | parent | next [-]

It's available by API now; previously it was not. That's pretty big news. This isn't a billing-related thing.

stavros 8 hours ago | parent | prev [-]

When I grow up, I too want to dismiss things without even knowing what I'm talking about.

sumedh 11 hours ago | parent | prev [-]

This news is relevant for developers though.

GuinansEyebrows 11 hours ago | parent [-]

How so? I'm (nominally) a developer and this has nothing to do with my job or personal pursuits.

matkoniecz 16 minutes ago | parent [-]

No one was claiming that it's relevant to every single developer.

It would be hard to find such news.

animanoir 14 hours ago | parent | prev | next [-]

Wow more AI slop

hexo 11 hours ago | parent | prev | next [-]

Thank you for a great contribution to global warming.

p1dda 6 hours ago | parent | prev | next [-]

For how long can OpenAI beat the dead horse that is the LLM?

pkulak 14 hours ago | parent | prev [-]

I don't get it. I've been using `dall-e-3` over the public API for a couple years now. Is this just a new model?

EDIT: Oh, yes, that's what it appears to be. Is it better? Why would I switch?

themanmaran 14 hours ago | parent | next [-]

This is the new model that's available in ChatGPT, which most notably can do style transfer, i.e. "take this image and restyle it to look like X", or "take this sneaker and give me a billboard ad for it".

danielbln 14 hours ago | parent | prev | next [-]

This is their presumably autoregressive image model. It has outstanding prompt adherence and great detail, in addition to strong style transfer abilities.

Sohcahtoa82 13 hours ago | parent | prev | next [-]

The new image generation model is miles ahead of DALL-E 3, especially when generating text.

bradly 13 hours ago | parent | prev [-]

Basically, they are charging for the ability to generate accurate text.