minimaxir | 5 days ago
So there may be something weird going on with images in GPT-5, which OpenAI avoided discussing in the livestream. The artist for SMBC noted that GPT-5 was better at plagiarizing his style: https://bsky.app/profile/zachweinersmith.bsky.social/post/3l... However, there have been no updates to the underlying image model (gpt-image-1). But because the image generation is autoregressive (GPT generates tokens that the image model then decodes, in contrast to diffusion models), an update to the base LLM token generator could incorporate new images as training data without the downstream image model ever being retrained on those images.
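A toy sketch of the pipeline being described, to make the argument concrete: the LLM emits discrete image tokens, and a separate, frozen decoder turns them into pixels. All names and the toy sampler below are illustrative, not OpenAI's actual implementation.

```python
# Hypothetical sketch: autoregressive image generation where an LLM emits
# discrete image tokens and a frozen downstream decoder maps them to pixels.
# Retraining the token generator changes outputs even if the decoder is untouched.

def llm_generate_image_tokens(prompt, vocab_size=8192, n_tokens=16):
    """Stand-in for the base LLM: deterministically sample image tokens.
    Uses a toy linear congruential generator seeded from the prompt."""
    seed = sum(ord(c) for c in prompt)
    tokens = []
    for _ in range(n_tokens):
        seed = (seed * 1103515245 + 12345) % (2 ** 31)  # toy LCG step
        tokens.append(seed % vocab_size)
    return tokens

def decode_tokens_to_pixels(tokens, side=4):
    """Stand-in for the downstream image decoder: maps discrete tokens
    to a side x side grid of grayscale values. This part stays fixed."""
    pixels = [t % 256 for t in tokens]
    return [pixels[r * side:(r + 1) * side] for r in range(side)]

tokens = llm_generate_image_tokens("a robot reading a comic")
image = decode_tokens_to_pixels(tokens)
```

The point of the sketch: swapping in a newly trained `llm_generate_image_tokens` changes which tokens come out, and therefore the final image, with `decode_tokens_to_pixels` left completely untouched.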
simonw | 5 days ago | parent
No, those changes are caused by the top-level models composing different prompts for the underlying image model. GPT-5 is not a multi-modal image-output model; it still uses the same image generation model that other ChatGPT models use, via tool calling. GPT-4o was meant to be a multi-modal image-output model, but OpenAI ended up shipping that capability as a separate model rather than exposing it directly.
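The tool-calling flow described here can be sketched as follows. The chat model never emits image tokens itself; it writes a text prompt and passes it to a separate image tool, so a smarter top-level model alone can change outputs. Every name below (`chat_model_compose_prompt`, `image_tool`, `"image-gen-v1"`) is a hypothetical stand-in, not OpenAI's API.

```python
# Hypothetical sketch of image generation via tool calling: the top-level
# chat model composes a richer prompt, and a fixed image model renders it.

def chat_model_compose_prompt(user_request, style_hints):
    """Stand-in for the top-level LLM rewriting the user's request into a
    detailed prompt for the image tool. A newer LLM composes different
    prompts, changing results without any update to the image model."""
    return f"{user_request}, in the style of {', '.join(style_hints)}"

def image_tool(prompt):
    """Stand-in for the fixed downstream image model behind the tool call."""
    return {"model": "image-gen-v1", "prompt": prompt, "image_bytes": b"..."}

prompt = chat_model_compose_prompt(
    "a comic panel about robots", ["thick linework", "flat colors"])
result = image_tool(prompt)
```

Under this design, the two explanations in the thread diverge: if image output went through tool calling, only the composed prompt can change between GPT-4o and GPT-5, not the token stream fed to the decoder.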